Exclusive: 63 percent of Americans want regulation to actively prevent superintelligent AI, a new poll reveals.
Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI), and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom-remarked-upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been.
“It’s so strange to me to say, ‘We have to be really careful with AGI,’ rather than saying, ‘We don’t need AGI, this is not on the table,’” Elke Schwarz, a political theorist who studies AI ethics at Queen Mary University of London, told me earlier this year. “But we’re already at a point when power is consolidated in a way that doesn’t even give us the option to collectively suggest that AGI should not be pursued.”
Building AGI is a deeply political move. Why aren’t we treating it that way?
Technological solutionism — the ideology that says we can trust technologists to engineer our way out of humanity’s greatest problems — has played a major role in consolidating power in the hands of the tech sector. Although this may sound like a modern ideology, it actually goes all the way back to the medieval period, when religious thinkers began to teach that technology is a means of bringing about humanity’s salvation. Since then, Western society has largely bought the notion that tech progress is synonymous with moral progress.
In modern America, where the profit motives of capitalism have combined with geopolitical narratives about needing to “race” against foreign military powers, tech accelerationism has reached fever pitch. And Silicon Valley has been only too happy to run with it.
AGI enthusiasts promise that the coming superintelligence will bring radical improvements. It could develop everything from cures for diseases to better clean energy technologies. It could turbocharge productivity, leading to windfall profits that may alleviate global poverty. And getting to it first could help the US maintain an edge over China; in a logic reminiscent of a nuclear weapons race, it’s better for “us” to have it than “them,” the argument goes.
But Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the “better us than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, interpreting the responses is a bit of a challenge. But what a strong majority of the American public seems to be saying here is: Just because we’re worried about a foreign power getting ahead doesn’t mean it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
And yet, Colson pointed out, “most of the direction of society is set by the technologists and by the technologies that are being released … There’s an important way in which that’s extremely undemocratic.”
He expressed consternation that when tech billionaires recently descended on Washington to opine on AI policy at Sen. Chuck Schumer’s invitation, they did so behind closed doors. The public didn’t get to watch, never mind participate in, a discussion that will shape its future.
According to Schwarz, we shouldn’t let technologists depict the development of AGI as if it’s some natural law, as inevitable as gravity. It’s a choice — a deeply political one.
“The desire for societal change is not merely a technological aim, it is a fully political aim,” she said. “If the publicly stated aim is to ‘change everything about society,’ then this alone should be a prompt to trigger some level of democratic input and oversight.”
AI companies are radically changing our world. Should they be getting our permission first?
AI stands to be so transformative that even its developers are expressing unease about how undemocratic its development has been.
Jack Clark, a co-founder of the AI safety and research company Anthropic, recently wrote an unusually vulnerable newsletter. He confessed that there are several key things he’s “confused and uneasy” about when it comes to AI. Here is one of the questions he articulated: “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:
Technologists have always had something of a libertarian streak and this is perhaps best epitomized by the ‘social media’ and Uber et al era of the 2010s — vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general ‘move fast and break things’ philosophy of tech. Should the same be true of AI?
That more people, including tech CEOs, are starting to question the norm of “permissionless invention” is a very healthy development. It also raises some tricky questions.
When does it make sense for technologists to seek buy-in from those who’ll be affected by a given product? And when the product will affect the entirety of human civilization, how can you even go about seeking consensus?
Many of the great technological innovations in history happened because a few individuals decided by fiat that they had a great way to change things for everyone. Just think of the invention of the printing press or the telegraph. The inventors didn’t ask society for its permission to release them.
That may be partly because of technological solutionism and partly because, well, it would have been pretty hard to consult broad swaths of society in an era before mass communications — before things like a printing press or a telegraph! And while those inventions did come with perceived risks, they didn’t pose the threat of wiping out humanity altogether or making us subservient to a different species.
For the few technologies we’ve invented so far that meet that bar, seeking democratic input and establishing mechanisms for global oversight have been attempted, and rightly so. It’s the reason we have the Nuclear Nonproliferation Treaty and the Biological Weapons Convention — treaties that, though they’re struggling, matter a lot for keeping our world safe.
While those treaties came after the use of such weapons, another example — the 1967 Outer Space Treaty — shows that it’s possible to create such mechanisms in advance. Ratified by dozens of countries and adopted by the United Nations against the backdrop of the Cold War, it laid out a framework for international space law. Among other things, it stipulated that the moon and other celestial bodies may be used only for peaceful purposes, and that states can’t station nuclear weapons in space.
Nowadays, the treaty comes up in debates about whether we should send messages into space with the hope of reaching extraterrestrials. Some argue that’s very dangerous because an alien species, once aware of us, might oppress us. Others argue it’s more likely to be a boon — maybe the aliens will gift us their knowledge in the form of an Encyclopedia Galactica. Either way, it’s clear that the stakes are incredibly high and all of human civilization would be affected, prompting some to make the case for democratic deliberation before any more intentional transmissions are sent into space.
As Kathryn Denning, an anthropologist who studies the ethics of space exploration, put it in an interview with the New York Times, “Why should my opinion matter more than that of a 6-year-old girl in Namibia? We both have exactly the same amount at stake.”
Or, as the old Roman legal maxim goes: What touches all should be decided by all.
That is as true of superintelligent AI as it is of nukes, bioweapons, or interstellar broadcasts. And though some might argue that the American public only knows as much about AI as a 6-year-old, that doesn’t mean it’s legitimate to ignore or override the public’s general wishes for technology.
“Policymakers shouldn’t take the specifics of how to solve these problems from voters or the contents of polls,” Colson acknowledged. “The place where I think voters are the right people to ask, though, is: What do you want out of policy? And what direction do you want society to go in?”