A law professor proposes an old-fashioned remedy for very new problems: legal liability.
One quote about AI I think about a lot is something that Jack Clark, a co-founder of the artificial intelligence company Anthropic, told me last year: “It’s a real weird thing that this is not a government project.”
Clark’s point was that the staff at Anthropic, and much of the staff at major competitors like OpenAI and Google DeepMind, genuinely believe that AI is not just a major innovation but a huge shift in human history, effectively the creation of a new species that will eventually surpass human intelligence and have the power to determine our fate. This isn’t an ordinary product that a company can sell to willing customers without bothering anybody else too much. It’s something very different.
Maybe you think this viewpoint is reasonable; maybe you think it’s grandiose, self-important, and delusional. I honestly think it’s too early to say. In 2050, we might look back at these dire AI warnings as technologists getting high on their own products, or we might look around at a society governed by ubiquitous AIs and think, “They had a point.” But the case for governments to take a more active role, specifically in case the latter scenario comes true, is pretty strong.
I’ve written a bit about what form that government role could take, and to date most of the proposals involve mandating that sufficiently large AIs be tested for certain dangers: bias against certain groups, security vulnerabilities, the ability to be used for dangerous purposes like building weapons, and “agentic” properties indicating that they pursue goals other than the ones humans intentionally give them. Regulating these risks would require building out major new government institutions and would ask a lot of them, not least that they not become captured by the AI companies they need to regulate. (Notably, lobbying by AI-related companies increased 185 percent in 2023 compared to the year before, according to data gathered by OpenSecrets for CNBC.)
As regulatory efforts go, this one is high difficulty. That’s why a fascinating new paper by law professor Gabriel Weil is so important: it suggests a totally different kind of path, one that doesn’t rely on building out that kind of government capacity. The key idea is simple: AI companies should be liable now for the harms that their products produce or (more crucially) could produce in the future.
Let’s talk about torts, baby
Weil’s paper is about tort law. To oversimplify wildly, torts are civil rather than criminal harms, and specifically ones not arising from breach of contract. The category encompasses all kinds of stuff: you punching me in the face is a tort (and a crime); me infringing on a patent or copyright is a tort; a company selling dangerous products is a tort.
That last category is where Weil places most of his focus. He argues that AI companies should face “strict liability” standards. Ordinary, less strict liability rules typically require some finding of intent, or at least of negligence, by the party responsible for the harm before a court will award damages. If you crash your car into somebody because you’re driving like a jerk, you’re liable; if you crash it because you had a heart attack, you’re not.
Strict liability means that if your product or possession causes any foreseeable harm at all, you are liable for those damages, whether or not you intended them, and whether or not you were negligent in your efforts to prevent those harms. Using explosives to blast through rocks is one example of a strict liability activity today. If you are blowing stuff up near enough to people that they might be hurt as a consequence, you’ve already screwed up.
Weil would not apply this standard to all AI systems; a chess-playing program, for instance, does not fit the strict liability requirement of “creating a foreseeable and highly significant risk of harm even when reasonable care is exercised.” AIs should face this standard, he writes, if their developer “knew or should have known that the resulting system would pose a highly significant risk of physical harm, even if reasonable care is exercised in the training and deployment process.” A system capable of synthesizing chemical or biological weapons, for instance, would qualify. A highly capable system that we know to be misaligned, or that has secret goals it hides from humans (which sounds like sci-fi but has already been demonstrated in lab settings), might qualify too.
Placing this kind of requirement on systems would put their developers on the hook for potentially massive damages. If someone used an AI in this category to hurt you in any way, you could sue the company and get damages. As a result, companies would have a huge incentive to invest in safety measures to prevent any such harms, or at least reduce their incidence by enough that they can cover the cost.
But Weil takes things a step further. Experts who think AI poses a catastrophic risk say that it could cause harms that cannot be redressed … because we’ll all be dead. You can’t sue anybody if the human race goes extinct. Again, this is necessarily speculative, and it’s possible this school of thought is wildly wrong and AI poses no extinction risk. But Weil suggests that if this risk is real, we might still be able to use tort law to address it.
His idea is to “pull forward” the cost of potential future harms, so that damages can be awarded before those harms ever occur. This would be done by adding punitive damages (that is, awards meant not to compensate for harm but to punish wrongdoing and deter it in the future) scaled to the existential risk posed by AI. He gives as an example a system with a 1 in 1 million chance of causing human extinction. Under Weil’s scheme, a person suffering some minor harm right now from this AI could sue, get damages for that harm, and then also get a share of punitive damages on the order of $61.3 billion, or one-millionth of a conservative estimate of the cost of human extinction. Given how many people use and are affected by AI systems, that plaintiff could be just about anyone.
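To make the arithmetic behind that figure explicit: the punitive award is simply the expected value of the catastrophic harm, the probability of extinction multiplied by its estimated cost. Here is a minimal sketch of that calculation, with the caveat that the extinction-cost number below is not quoted from Weil’s paper; it is just what the $61.3 billion award and the one-in-a-million probability jointly imply, and the variable names are illustrative.

```python
# Back-of-the-envelope version of the "pulled forward" punitive damages.
# EXTINCTION_COST is the implied conservative estimate ($61.3 billion x 1,000,000),
# not a figure stated directly in the article or the paper.

EXTINCTION_COST = 61.3e9 * 1_000_000  # ~$61.3 quadrillion (implied)
P_EXTINCTION = 1 / 1_000_000          # hypothetical risk posed by the system

def punitive_damages(cost: float, probability: float) -> float:
    """Expected catastrophic harm, charged today as a punitive award."""
    return cost * probability

print(f"${punitive_damages(EXTINCTION_COST, P_EXTINCTION):,.0f}")  # $61,300,000,000
```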
Interestingly, these are changes that courts can make on their own, by altering their approach to tort law. Additional legislation would help, Weil argues; for instance, Congress or other countries’ legislatures could require AI companies to carry liability insurance to pay for these kinds of harms, much as car owners must carry insurance in most places and some states require doctors to carry malpractice coverage.
But in common-law countries like the US, where law is based on tradition and precedent, legislative action is not strictly necessary to get courts to adopt a novel approach to product liability.
Will the lawyers save us?
The downside of this approach is the downside of any measure to regulate or slow down new technology: if the benefits of the technology greatly outweigh its harms, and the regulations slow progress meaningfully, the delay itself could be enormously costly. If advanced AI greatly accelerates drug discovery, for instance, delay would literally cost lives. The hard part of AI regulation is balancing the need to prevent truly catastrophic outcomes against the need to preserve the technology’s transformative potential for good.
That said, the US and other rich countries have gotten so good at using legal frameworks and regulations to stop extremely beneficial technologies — high-rise buildings, genetically modified foods, nuclear power — that there would be something poetic about turning those very same tools against a technology that might, for once, pose a genuine threat.
The writer Scott Alexander once put this point more eloquently than I can: “We designed our society for excellence at strangling innovation. Now we’ve encountered a problem that can only be solved by a plucky coalition of obstructionists, overactive regulators, anti-tech zealots, socialists, and people who hate everything new on general principle. It’s like one of those movies where Shaq stumbles into a situation where you can only save the world by playing basketball.”
A version of this story originally appeared in the Future Perfect newsletter.