Understanding the Landscape of Viewpoints on the Risks and Benefits of AI

I’ve seen seven main viewpoints on AI and the future from those who spend a lot of time thinking about it:


(1) Superintelligence Doomers – they believe we are likely to build AI that’s superintelligent (i.e., that surpasses human intelligence in all respects) and that once we do, it will kill or enslave humanity.

See: Eliezer Yudkowsky

“The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.”


(2) AI Corrosionists – they believe AI, while it could be beneficial, has a very substantial likelihood of making things far worse. Unlike doomers, they see most of the risk coming from processes other than a sudden AI takeover or annihilation: for instance, humans might slowly lose control over the future (by ceding more and more control to AIs that are optimizing for things other than human values), or AI might destabilize aspects of society in ways that make other large risks (like nuclear war) more likely.

See: Paul Christiano

“If you imagine a society in which almost all of the work is being done by these inhuman systems who want something that’s significantly at cross purposes, it’s possible to have social arrangements in which their desires are thwarted, but you’ve kind of set up a really bad position. And I think the best guess would be that what happens will not be what the humans want to happen, but what the systems who greatly outnumber us want to happen.”


(3) Near Risk Doomers – they believe AI will have very bad effects (e.g., increasing unfairness, authoritarianism, climate change, unemployment, or concentration of power) but that we’re not going to build superintelligence anytime soon.

See: Cathy O’Neil

“The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.”


(4) Current Paradigm Doubters – they believe the current AI paradigm is overhyped and isn’t going to change things all that much (some of them see the current paradigm as net negative; others see it as a useful but overhyped technology, one among many). Some, like Marcus, hope that future paradigms might be more trustworthy and beneficial.

See: Gary Marcus

“What a strange world… All the major AI companies spending billions producing almost exactly the same results using almost exactly the same data using almost exactly the same technology, all flawed in almost exactly the same ways. Historians gonna be scratching their heads… The only way we will move significantly forward is to develop new architectures—likely neurosymbolic—that are less tied to the idiosyncrasies of specific training set.”


(5) AI Stewardists – they believe AI advancement is important to humanity’s future, but it’s not necessarily going to be good or bad: it will be what we make of it, so it should be developed thoughtfully based on what we want to achieve.

See: Kevin Kelly

“AI could just as well stand for ‘alien intelligence.’ We have no certainty we’ll contact extraterrestrial beings in the next 200 years, but we have almost 100 percent certainty that we’ll manufacture an alien intelligence by then. When we face these synthetic aliens, we’ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for?”


(6) Near Benefit Boosters – they believe AI will be very useful, impactful, and important as a technology, and they also believe superintelligence is a ridiculous thing to worry about.

See: Yann LeCun

“AI is intrinsically good, because the effect of AI is to make people smarter…. AI is an amplifier of human intelligence and when people are smarter, better things happen: people are more productive, happier and the economy thrives.”


(7) Superintelligence Boosters – they believe we are likely to build superintelligent AI and that it will usher in an incredible and positive new era.

See: Ray Kurzweil

“By the time of the Singularity, there won’t be a distinction between humans and technology. This is not because humans will have become what we think of as machines today, but rather machines will have progressed to be like humans and beyond. Technology will be the metaphorical opposable thumb that enables our next step in evolution.”


How do we best organize the positions on the risks and benefits of AI?

I think the two most informative spectrums to think about are

(1) How *substantial* the near-term impacts of AI are expected to be

and

(2) Whether the effects of AI are likely to be *good* or *bad*

The image shown is my attempt to place these different thinkers on this two-axis system. Note that it reflects only my best guesses: some of these thinkers have very nuanced views, and I’m not certain I’ve placed each of them in the right spot.
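
(For readers viewing this without the image, here is a minimal sketch of how such a two-axis chart could be drawn. It assumes matplotlib, and the coordinates are coarse, hypothetical placements for the seven camps as a whole, inferred from the descriptions above; they are illustrative assumptions, not precise placements of individual thinkers.)

# Illustrative sketch: the seven camps on the two axes described above.
# Coordinates are rough, hypothetical placements inferred from the
# category descriptions; adjust them to taste.
import matplotlib.pyplot as plt

# x: how substantial AI's near-term impact is expected to be (0 = minor, 1 = transformative)
# y: how good or bad AI's effects are expected to be (-1 = very bad, +1 = very good)
camps = {
    "Superintelligence Doomers": (1.0, -1.0),
    "AI Corrosionists": (0.9, -0.6),
    "Near Risk Doomers": (0.6, -0.5),
    "Current Paradigm Doubters": (0.2, 0.0),
    "AI Stewardists": (0.8, 0.0),
    "Near Benefit Boosters": (0.7, 0.6),
    "Superintelligence Boosters": (1.0, 1.0),
}

fig, ax = plt.subplots(figsize=(8, 6))
for name, (x, y) in camps.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 4), fontsize=8)

ax.axhline(0.0, linewidth=0.5)  # divider between net-bad and net-good expectations
ax.set_xlabel("How substantial AI's near-term impact is expected to be")
ax.set_ylabel("Whether AI's effects are expected to be bad or good")
ax.set_title("Seven viewpoints on AI, roughly placed on two axes")
plt.tight_layout()
plt.show()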


This piece was first written on July 27, 2024, and first appeared on my website on August 4, 2024.