AI nudges us to prioritize speed and scale. In Gaza, it’s turbocharging mass bombing.
A December 2023 photo shows a Palestinian girl injured as a result of the Israeli bombing of Khan Yunis in the southern Gaza Strip. Saher Alghorra/Middle East Images/AFP via Getty Images
Israel has reportedly been using AI to guide its war in Gaza — and treating its decisions almost as gospel. In fact, one of the AI systems being used is literally called “The Gospel.”
According to a major investigation published last month by the Israeli outlet +972 Magazine, Israel has been relying on AI to decide whom to target for killing, with humans playing an alarmingly small role in the decision-making, especially in the early stages of the war. The investigation, which builds on a previous exposé by the same outlet, describes three AI systems working in concert.
“Gospel” marks buildings that it says Hamas militants are using. “Lavender,” which is trained on data about known militants, then trawls through surveillance data about almost everyone in Gaza — from photos to phone contacts — to rate each person’s likelihood of being a militant. It puts those who get a higher rating on a kill list. And “Where’s Daddy?” tracks these targets and tells the army when they’re in their family homes, an Israeli intelligence officer told +972, because it’s easier to bomb them there than in a protected military building.
The result? According to the Israeli intelligence officers interviewed by +972, some 37,000 Palestinians were marked for assassination, and thousands of women and children have been killed as collateral damage because of AI-generated decisions. As +972 wrote, “Lavender has played a central role in the unprecedented bombing of Palestinians,” which began soon after Hamas’s deadly attacks on Israeli civilians on October 7.
The use of AI could partly explain the high death toll in the war — at least 34,735 killed to date — which has sparked international criticism of Israel and even charges of genocide before the International Court of Justice.
Although there is still a “human in the loop” — tech-speak for a person who affirms or contradicts the AI’s recommendation — Israeli soldiers told +972 that they essentially treated the AI’s output “as if it were a human decision,” sometimes only devoting “20 seconds” to looking over a target before bombing, and that the army leadership encouraged them to automatically approve Lavender’s kill lists a couple weeks into the war. This was “despite knowing that the system makes what are regarded as ‘errors’ in approximately 10 percent of cases,” according to +972.
The Israeli army denied that it uses AI to select human targets, saying instead that it has a “database whose purpose is to cross-reference intelligence sources.” But UN Secretary-General António Guterres said he was “deeply troubled” by the reporting, and White House national security spokesperson John Kirby said the US was looking into it.
How should the rest of us think about AI’s role in Gaza?
While AI proponents often say that technology is neutral (“it’s just a tool”) or even argue that AI will make warfare more humane (“it’ll help us be more precise”), Israel’s reported use of military AI arguably shows just the opposite.
“Very often these weapons are not used in such a precise manner,” Elke Schwarz, a political theorist at Queen Mary University of London who studies the ethics of military AI, told me. “The incentives are to use the systems at large scale and in ways that expand violence rather than contract it.”
Schwarz argues that our technology actually shapes the way we think and what we come to value. We think we’re running our tech, but to some degree, it’s running us. Last week, I spoke to her about how military AI systems can lead to moral complacency, prompt users toward action over non-action, and nudge people to prioritize speed over deliberative ethical reasoning. A transcript of our conversation, edited for length and clarity, follows.
Sigal Samuel
Were you surprised to learn that Israel has reportedly been using AI systems to help direct its war in Gaza?
Elke Schwarz
No, not at all. There have been reports for years saying that it’s very likely that Israel has AI-enabled weapons of various kinds. And they’ve made it quite clear that they’re developing these capabilities and consider themselves one of the most advanced digital military forces globally, so there’s no secret around this pursuit.
Systems like Lavender or even Gospel are not surprising because if you just look at the US’s Project Maven [the Defense Department’s flagship AI project], that started off as a video analysis algorithm and now it’s become a target recommendation system. So, we’ve always thought it was going to go in that direction and indeed it did.
Sigal Samuel
One thing that struck me was just how uninvolved the human decision-makers seem to be. An Israeli military source said he would devote only about “20 seconds” to each target before authorizing a bombing. Did that surprise you?
Elke Schwarz
No, that didn’t either. Because the conversation in militaries over the last five years has been about accelerating the “kill chain” — using AI to increase lethality. The phrase that’s always used is “to shorten the sensor-to-shooter timeline,” which basically means to make it really fast from the input to when some weapon gets fired.
The allure of these AI systems is that they operate so fast, and at such vast scale, suggesting many, many targets within a short period of time — so that the human just kind of becomes an automaton who presses the button and is like, “Okay, I guess that looks right.”
Defense publications have always said Project Convergence, another US [military] program, is really designed to shorten that sensor-to-shooter timeline from minutes to seconds. So having 20 seconds fits quite clearly into what has been reported for years.
Sigal Samuel
For me, this brings up questions about technological determinism, the idea that our technology determines how we think and what we value. As the military scholar Christopher Coker once said, “We must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world.”
You wrote something reminiscent of that in a 2021 paper: “When AI and human reasoning form an ecosystem, the possibility for human control is limited.” What did you mean by that? How does AI curtail human agency or reshape us as moral agents?
Elke Schwarz
In a number of ways. One is about the cognitive load. With all the data that is being processed, you kind of have to place your trust in the machine’s decision. First, because we don’t know what data is gathered and exactly how it then applies to the model. But also, there’s a cognitive disparity between the way the human brain processes things and the way an AI system makes a calculation. This leads to what we call “automation bias,” which is basically that as humans we tend to defer to the machines’ authority, because we assume that they’re better, faster, and cognitively more powerful than us.
Another thing is situational awareness. What is the data that is incoming? What is the algorithm? Is there a bias in it? These are all questions that an operator or any human in the loop should have knowledge about but mostly don’t have knowledge about, which then limits their own situational awareness about the context over which they should have oversight. If everything you know is presented to you on a screen of data and points and graphics, then you take that for granted, but your own sense of what the situation is on the battlefield becomes very limited.
And then there’s the element of speed. AI systems are simply so fast that we don’t have enough [mental] resources to not take what they’re suggesting as a call to action. We don’t have the wherewithal to intervene on the grounds of human reasoning. It’s like how your phone is designed in a way that makes you feel like you need to react — like, when a red dot pops up in your email, your first instinct is to click on it, not to not click on it! So there’s a tendency to prompt users toward action over non-action. And the fact is that if a binary choice is presented, kill or not kill, and you’re in a situation of urgency, you’re probably more likely to act and release the weapon.
Sigal Samuel
How does this relate to what the philosopher Shannon Vallor calls “moral de-skilling” — her term for when technology negatively affects our moral cultivation?
Elke Schwarz
There’s an inherent tension between moral deliberation, or thinking about the consequences of our actions, and the mandate of speed and scale. Ethics is about deliberation, about taking the time to say, “Are these really the parameters we want, or is what we’re doing just going to lead to more civilian casualties?”
If you’re not given the space or the time to exercise these moral ideas that every military should have and does normally have, then you’re becoming an automaton. You’re basically saying, “I’m part of the machine. Moral calculations happen somewhere prior by some other people, but it’s no longer my responsibility.”
Sigal Samuel
This ties into another thing I’ve been wondering about, which is the question of intent. In international law contexts like the genocide trial against Israel, showing intent among human decision-makers is key. But how should we think about intent when decisions are outsourced to AI? If tech reshapes our cognition, does it become harder to say who is morally responsible for a wrongful act in war that was recommended by an AI system?
Elke Schwarz
There’s one objection that says, well, humans are always somewhere in the loop, because they’re at least making the decision to use these AI systems. But that’s not the be-all, end-all of moral responsibility. In something as morally weighty as warfare, there are multiple nodes of responsibility — there are lots of morally problematic points in the decision-making.
And when you have a system that distributes the intent, then with any subsystem, you have plausible deniability. You can say, well, our intent was this, then the AI system does that, and the outcome is what you see. So it’s hard to attribute intent and that makes it very, very challenging. The machine doesn’t give interviews.
Sigal Samuel
Since AI is a general-purpose technology that can be used for a multitude of purposes, some beneficial and some harmful, how can we try to foretell where AI is going to do more harm than good and try to prevent those uses?
Elke Schwarz
Every tool can be refashioned to become a weapon. If you’re vicious enough, even a pillow can be a weapon. You can kill somebody with a pillow. We’re not going to prohibit all pillows. But if the trajectory in society is such that it seems there’s a tendency to use pillows for nefarious purposes, and access to pillows is really easy, and in fact some people are designing pillows that are made for smothering people, then yes, you should ask some questions!
That requires paying attention to society, its trends and its tendencies. You can’t bury your head in the sand. And at this point, there are enough reports out there about the ways in which AI is used for problematic purposes.
People say all the time that AI will make warfare more ethical. It was the claim with drones, too — that we have surveillance, so we can be a lot more precise, and we don’t have to throw cluster bombs or have a large air campaign. And of course there’s something to that. But very often these weapons are not used in such a precise manner.
Making the application of violence a lot easier actually lowers the threshold to the use of violence. The incentives are to use the systems at large scale and in ways that expand violence rather than contract it.
Sigal Samuel
That was what I found most striking about the +972 investigations — that instead of contracting violence, Israel’s alleged AI systems expanded it. The Lavender system marked 37,000 Palestinians as targets for assassination. Once the army has the technological capacity to do that, the soldiers come under pressure to keep up with it. One senior source told +972: “We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us. We finished [killing] our targets very quickly.”
Elke Schwarz
It’s kind of a capitalist logic, isn’t it? It’s the logic of the conveyor belt. It says we need more — more data, more action. And if that is related to killing, it’s really problematic.
Source: vox.com