President Biden’s executive order on artificial intelligence was criticized by many for overreaching, but the danger from uncontrolled AI progress is real.
Zvi Mowshowitz is the author of Don't Worry About the Vase, a wide-ranging Substack that tries to help us think about, model, and improve the world. He is a rationalist thinker with experience as a professional trader, game designer and competitor, and startup founder. His blog spans diverse topics and currently focuses on extensive weekly AI updates.
President Joe Biden's recent executive order on artificial intelligence made an unexpectedly big splash, despite the fact that the order itself does very little: It has a few good governance provisions and small subsidies, such as $2 million for a Growth Accelerator Fund Competition bonus prize, but mostly it calls for the creation of reports.
Despite this sparing use of force, the order has proven surprisingly divisive in the tech world. Some ardently praised it. Others, many of whom call themselves accelerationists or techno-optimists, implied the order was effectively a ban on math and spread American Revolution-inspired resistance memes.
Why the absurd reaction? One reporting requirement for AI work in particular. Biden's order requires that those doing sufficiently large AI training runs, much larger than any we've run in the past, report what safety precautions they are taking. Giant data centers that could enable such training runs must also report which foreign parties they sell services to.
Everyone sees that this reporting threshold could become something stronger and more restrictive over time. But while those on both sides of the rift in tech over the order see the stakes as existential, they worry about different threats.
AI will be central to the future. It will steadily become smarter and more capable over time, superior to the best of us at an increasing number of tasks and perhaps, ultimately, far smarter than us.
Those supporting the executive order see AI as a unique challenge posing potentially existential dangers — machines that may soon be smarter and more capable than we are. For them, the order isn’t merely about catching and punishing bad actors, like any ordinary government regulation, but about ensuring that humanity stays in control of its future.
Those opposed do not worry about AI taking control. They do not ask whether tools smarter and more capable than us would remain our tools for long. Some would welcome and even actively work to bring about our new AI overlords.
Instead, they worry about the dangers of not building superintelligent AI, or of the wrong humans gaining control over superintelligent AI. They fear a few powerful people will get control and that without access to top AI, the rest of us will be powerless.
Collectively, this opposition embodies a long history of deep suspicion of any limits on technology, of all governments and corporations, and of all restrictions and regulations. Opponents often have roots in libertarianism, and many are diehard believers in the open source software movement.
They believe that most regulations, however well intentioned, are inevitably captured over time by insiders, ending up distorted from their original purpose, failing to adjust to a changing world, strangling our civilization on front after front. They have watched for decades in horror as our society becomes a vetocracy. We struggle to build houses, cannot get permission to construct green energy projects, and have gutted self-driving cars.
The accelerationists are not imagining this. It is indeed happening. While we have created digital wonders, we have largely turned our backs on physical-world progress for 50 years, resulting in a great stagnation. It is vital we fight back.
They are also right that current AI, already in use, offers far more promise than danger.
Many fear our society effectively has a singular Dial of Progress, as I've written before, based on the extent to which our civilization places restrictions, demands permissions, and puts strangleholds on human activity. They worry that society is increasingly citing phantom dangers to hinder progress and limit our future. They want to keep having nice things and for the world to continue getting wealthier, so they push back and celebrate progress. They fear any further nuance will be lost, along with the golden goose.
Many previous attempts to regulate technology illustrate our government’s cluelessness. Laws that would supposedly “break the internet” get introduced every year. And accelerationists expect the same problems from any regulations on AI. Where others see an executive order calling for government reports from Big Tech, they see the groundwork for future botched regulations that they expect to be captured by Big Tech or the government. They expect these restrictions to prevent anyone but Big Tech from training advanced AI models, which will then hand control over the future to a combination of Big Tech, oppressive government at home, and our rivals abroad.
Thus they see a fight for survival, warning what will happen if the wrong people take control and shut down AI progress: a loss of competitiveness, a stifling of progress, or a totalitarian world dominated by some combination of China, future oppressive Western governments, and Big Tech.
Others, myself included, instead see a very different and more literal fight for survival.
The most important danger of AI
An open letter signed earlier this year by the heads of all top AI labs and many leading academics and scientists, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, states it plainly: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
There are many technical arguments about how difficult it will be to avoid this, what methods might work, and how likely we are to succeed. Reasonable people disagree. What is not reasonable is refusing to acknowledge that creating machines smarter and more capable than we are poses a risk to our survival.
All three leading AI labs — DeepMind, OpenAI, and Anthropic — were explicitly founded primarily because of the fear that future AI would pose an extinction risk to humanity. That our last invention, a machine smarter than us, could take control of the future and wipe us out as we wiped out the Neanderthals. Surveys of employees at major labs find they assign roughly a 10 percent chance to the possibility of AI causing human extinction or “similarly permanent and severe disempowerment.” Would you get on a plane with a 10 percent chance of crashing?
When you put it that way, it sounds wild. If you thought AI might wipe out humanity, why would you want to build it?
We do not know how to align AI systems to make them do what humans want them to do. If we created a sufficiently superintelligent and hypercapable machine or set of machines that prioritized something we did not care about, that would likely be the end for us. Thus, if you have what you believe is a uniquely safety-focused lab, you might rush to build a safe and aligned AI first, before someone else builds a relatively unsafe one and potentially gets us all killed.
We also risk malicious AI, whether or not it is guided by malicious humans, explicitly seeking control of the future. Losing control over even one such entity could lose us control of everything. And yet there are some working on AI who would welcome this scenario.
These dangers are especially acute for open source models. An open source model, once released, cannot be recalled if it proves dangerous and others find ways to give it unexpected capabilities. All known methods to restrict what an open source model will do for a user can and will be removed within days at trivial cost. The unlocked version will then be in the hands of every bad actor and rival state.
Open source approaches enhance security and provide great value in many other contexts. But sufficiently capable open source AI models are inherently unsafe, and nothing can fix this. Yet there are those who would create them, whether for commercial advantage, for prestige, or out of ideology.
Thus, Demis Hassabis founded DeepMind, which was then bought by Google. When Elon Musk asked Google co-founder Larry Page whether humans would be all right after AI was created, Page called him a "speciesist." Musk went on to help start OpenAI, which later partnered with Microsoft, to Musk's dismay. Concerned that OpenAI lacked a sufficient commitment to safety, some employees of the company left to create Anthropic, which has taken in billions of dollars in investment of its own.
Now all three companies, and others, face commercial pressures to race ever forward as investment flows in and the cost of training AI is cut in half every few months.
The recent fight at OpenAI between CEO Sam Altman and the company’s board grew out of this struggle to balance those commercial pressures against OpenAI’s founding mission to ensure that as we build artificial general intelligence, we guard against it as a potential existential threat and ensure everyone benefits. I believe Altman understands the threat and is doing what he thinks is best, but he wanted to control the board and have a free hand to decide how to handle things. So he moved against the board, causing a crisis that temporarily led to his firing before he was brought back. You can read more about that on Substack.
That fight, and the extreme pressure brought by an alliance of capitalists and major corporations led by Microsoft, illustrated how difficult it will be for individual labs to stand up to commercial pressures on their own and ensure they only develop and deploy safe systems. OpenAI's unique corporate structure and Anthropic's own corporate safeguards represent attempts to make such responsible decisions possible, but on their own, they may not be enough even at these labs. Thus the labs recognize they will also need help from government regulations.
Commercial pressures push these companies forward, and antitrust laws paradoxically make it difficult for firms to coordinate on AI safety. So when the labs call for government regulations to ensure AI is developed safely, all the while warning that their products could destroy the world, those who want no regulations at all accuse them of lying to drive hype or achieve regulatory capture. That accusation is absurd. I know many of the people who work in these labs. I have had these concerns about AI existential risk since 2009. I assure you the warnings are genuine, and justified.
Regulating AI by regulating computing power
There is a growing consensus that the only tractable way to regulate AI is to keep careful watch on the computer processors that are used to train large models. Advancing core capabilities of AI systems requires using massive amounts of compute. Biden’s executive order makes the first move in this direction. Without controlling the flow of processors, the only known alternative is an uncontrolled race to build increasingly powerful systems we will not know how to control.
Calls to regulate specific applications rather than capabilities, and fears that such regulations will lead to dystopian totalitarianism, are both misplaced. Allowing model proliferation and then monitoring the applications of AI systems would be far more intrusive and totalitarian. This is similar to how controlling the supply of enriched uranium in an effort to stem the spread of nuclear weapons makes us safer and more free, not less. We monitor large concentrations of computing power so that we need not monitor smaller ones.
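To make this concrete, here is a minimal sketch of how a compute-based reporting threshold operates, assuming the common rule of thumb that a training run costs roughly six operations per model parameter per training token. The function names and example model sizes are illustrative, not drawn from the order's text; the 10^26-operation cutoff reflects the executive order's reporting threshold for general-purpose training runs.

```python
# Minimal, illustrative sketch of a compute-based reporting threshold.
# Assumes the standard rough estimate: training compute ~= 6 * parameters * tokens.

REPORTING_THRESHOLD_OPS = 1e26  # reporting threshold for a single training run


def estimate_training_ops(parameters: float, tokens: float) -> float:
    """Rough total compute for one training run (forward and backward passes)."""
    return 6 * parameters * tokens


def must_report(parameters: float, tokens: float) -> bool:
    """Would a run of this size cross the reporting threshold?"""
    return estimate_training_ops(parameters, tokens) >= REPORTING_THRESHOLD_OPS


# A 70-billion-parameter model trained on 2 trillion tokens lands around
# 8.4e23 operations -- far below the threshold, so no report is triggered.
print(must_report(70e9, 2e12))    # False

# A hypothetical, far larger future run (2 trillion parameters, 100 trillion
# tokens) lands around 1.2e27 operations and would have to be reported.
print(must_report(2e12, 100e12))  # True
```

The point of such a rule is that it touches only the handful of frontier-scale training runs while leaving everyday AI work, and everyone's personal computing, untouched.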
One important safety precaution noted in the executive order is protection against future AI models being stolen. If a malicious actor or rival state stole the weights that help define a model’s neural network, they could copy it and unlock any sealed-off capabilities. Widespread access to such a model might force those who do not wish to be left behind to increasingly cede control to AI systems. We would have no way to prevent this. Some malicious actors might want to intentionally set the AI free and have it take control of the future, or a model might escape on its own or manipulate its users. Giving the model away via open source simply ensures these same results.
To those who say that this is a totalitarian intervention, I say it is the most freedom-preserving option we have. We can monitor AI work now at the high level of data centers, or else the government, even in the best case, will later feel forced to monitor at the level of individual computers, if only to guard against misuse.
Corporations are about to train models that will likely be capable of transforming how we live our lives, and that could cause humanity to lose control of its future. We need to lay groundwork now to ensure proper safety precautions are taken when such models are trained. The reporting threshold in the executive order is a minimal first step, allowing us to at least know the broadest outlines of what is going on.
In the future, before we allow capabilities to advance much further, we will need to figure out an adequate means to ensure our safety, and mandate it. Until we know how to do that — and we do not yet — we will need the ability to pause advanced model development entirely.
That means preparing to monitor and, if necessary, prevent sufficiently large model training runs and concentrations of compute everywhere. This includes international cooperation.
The alternative risks catastrophe — or even human extinction.