The Israeli military has used an artificial intelligence-driven “kill list” to select more than 30,000 targets in the Gaza Strip with minimal human oversight, according to a bombshell new investigation from +972 Magazine, a practice that has contributed to mounting civilian casualties in the war-torn enclave.
Especially in the early stages of the war, Israel Defense Forces (IDF) personnel largely ignored the AI's 10% false positive rate and deliberately struck suspected militants in their homes, often with unguided “dumb bombs,” despite the increased potential for harm to civilians, sources told +972 Magazine.
The investigation reveals the myriad ways in which cutting-edge AI technology and lax rules of engagement from IDF commanders on the ground have combined to drive civilian casualties in Gaza to alarming levels. At least 33,000 Palestinians have been killed in the Israeli operation, which followed a Hamas attack that killed some 1,200 Israelis last October.
The AI targeting software, known as “Lavender,” reportedly draws on a sprawling surveillance network to assign each resident of Gaza a score from 1 to 100 estimating the likelihood that they are a Hamas militant. Soldiers then feed this information into software called “Where's Daddy,” which uses AI to alert them when suspected militants return to their homes.
A previous +972 Magazine report revealed the existence of a similar AI system, known as “The Gospel,” used to target homes and buildings used by armed groups. In both cases, the IDF has said that +972 Magazine exaggerates the role and impact of these high-tech tools.
“The killer algorithm doomsday scenario is already playing out in Gaza,” argues Brianna Rosen, a senior fellow at Just Security and the University of Oxford who worked on the National Security Council during the Obama administration.
RS spoke with Rosen about the latest revelations on Israel's use of AI in Gaza, how AI is changing warfare, and what U.S. policymakers should do to regulate military technology. The following conversation has been edited for length and clarity.
RS: What does this new report in +972 Magazine tell us about how Israel used AI in Gaza?
Rosen: First of all, I want to emphasize that this is not coming from +972 Magazine alone. The IDF itself has commented on these systems. While many have claimed the report exaggerates what the AI systems do, Israel's own statements corroborate some of these facts. The report confirms a trend we have seen since December in Israel's use of AI in the Gaza Strip: AI is increasing the pace of targeting in the war and expanding the scope of the war.
As the IDF itself admits, it is using AI to accelerate targeting, and the facts bear this out. In the first two months of the conflict, Israel struck around 25,000 targets, more than four times as many as in previous wars in Gaza. It is attacking far more targets than ever before. And as it accelerates the pace of targeting, AI also expands the scope of the war, meaning the pool of potential targets selected for elimination. Israel is going after more junior operatives than it ever has. In previous operations, Israel would eventually run out of known combatants and legitimate military targets; this report suggests that is no longer a barrier to killing. In Israel's own words, AI is acting as a force multiplier, removing the resource constraints that previously prevented the IDF from identifying enough targets. It can now pursue far lower-level targets, people it would not normally have gone after because their ties to Hamas are tenuous or non-existent, or because their deaths would ordinarily have had minimal impact on military objectives.
In short, AI is increasing the tempo of operations and expanding the pool of targets, making target verification and other precautionary obligations required by international law much more difficult to meet. All of this increases the risk that civilians will be misidentified and mistakenly targeted, contributing to the severe civilian casualties seen to date.
RS: How does this relate to the idea of keeping humans “in the loop” in AI-driven decision-making?
Rosen: This is what really concerns me. The debate about military AI has long focused on the wrong issues. It has centered on banning lethal autonomous weapons systems, or “killer robots,” without recognizing that AI is already a pervasive feature of warfare. Countries including Israel and the United States are already incorporating AI into military operations, and they say they are doing so responsibly, with humans fully “in the loop.” But my concern, and what I think we are seeing unfold in Gaza, is that even when humans are in the loop, their review of machine-generated decisions can be so perfunctory that it causes serious harm to civilians.
The report released today claims that human verification of the AI system's output took as little as 20 seconds per target, often just enough time to confirm that the target was male, before a bombing was authorized.
Regardless of whether that particular claim is borne out, there is a great deal of academic research on the risk of automation bias with AI, and I think it is clearly at play here. Because the machine seems so smart, with all of these data and intelligence streams feeding into it, there is a risk that humans will not question its output enough. The danger of automation bias is that even when humans approve targets, they simply rubber-stamp the decision to use force rather than carefully combing through the machine-generated data and scrutinizing each target. That kind of scrutiny may not be happening, and given problems of explainability and traceability, it may never even be possible for humans to truly understand how the AI systems are producing these outputs.
This is one of the questions I raised in my December Just Security article, and one that policymakers and the public need to put to Israel: What does the human review process for these operations actually look like? Is it just a rubber stamp on the decision to use force, or is there serious deliberation?
RS: In this case, the IDF's use of loose rules of engagement appears to have amplified the impact of the AI. Could you tell us more about the relationship between emerging technology and practical policy decisions about how to use it?
Rosen: That is another problem. First, there is the issue of Israel's interpretation of international law, which in some ways is far more permissive than how other states interpret fundamental principles such as proportionality. Layered on top of that, AI systems inevitably make errors that lead to civilian harm. This latest report claims, for example, that the Lavender system was wrong 10% of the time. In fact, the margin of error could be even larger, depending on how Israel classifies individuals as Hamas militants.
The AI system is trained on data about the specific characteristics of people Israel believes to be Hamas or Palestinian Islamic Jihad operatives, and that data is fed into the machine. But what if the traits being used are too broad? Possessing a weapon, for example, or being in a WhatsApp group with someone affiliated with Hamas, or simply changing addresses frequently, which displaced people across Gaza are doing. That is a serious concern, because if characteristics like these are fed into an AI system to identify militants, the system will pick up that data and misidentify civilians much of the time.
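To make the statistical intuition behind this concern concrete, here is a minimal, purely illustrative base-rate calculation. It is not a model of Lavender; the population size, prevalence of actual operatives, and detection rate are hypothetical assumptions, and treating the reported 10% error figure as a per-person false-positive rate is itself only one possible reading.

```python
# Illustrative sketch only: how a seemingly modest error rate interacts with a low base rate
# when the screening traits are common in the general population.
# All numbers below except the 10% figure are hypothetical assumptions.

population = 1_000_000          # hypothetical screened population
prevalence = 0.02               # assume 2% are actual operatives (hypothetical)
false_positive_rate = 0.10      # the report's 10% figure, read here as a per-person error rate
true_positive_rate = 0.90       # assume the system flags 90% of actual operatives (hypothetical)

operatives = population * prevalence
civilians = population - operatives

flagged_operatives = operatives * true_positive_rate   # correctly flagged
flagged_civilians = civilians * false_positive_rate    # civilians wrongly flagged

total_flagged = flagged_operatives + flagged_civilians
share_misidentified = flagged_civilians / total_flagged

print(f"Total flagged:            {total_flagged:,.0f}")
print(f"Civilians wrongly flagged: {flagged_civilians:,.0f}")
print(f"Share of flags that are civilians: {share_misidentified:.0%}")
# Under these assumed numbers, roughly 84% of flagged individuals would be civilians,
# even though the per-person error rate sounds like "only" 10%.
```

The point of the sketch is simply that when the traits being screened for are widespread, a low-sounding error rate can still mean that most of the people flagged are not combatants, which is the dynamic Rosen describes.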
Israel can say that it follows international law and that there is human review of all of these decisions, and all of that may be true. But it is Israel's interpretation of international law, Israel's definition of who counts as a combatant in this war, and Israel's choices about how that data is fed into the AI system. All of these things can combine to cause really serious harm.
I would also point out that all of the well-documented problems with AI in domestic contexts, from underlying algorithmic bias to hallucinations, will persist in war, and will only get worse given the pace of wartime decision-making. None of these outputs is going to be reviewed very carefully. We know, for example, that Israel has built extensive surveillance systems in the Gaza Strip, and all of that data is fed into AI systems to produce these targeting outputs. The underlying biases in those systems contribute to and compound errors in the final targeting decisions. When human review is perfunctory, the result is grave harm to civilians. That is what we have seen.
RS: The United States is interested in AI for a number of military applications, including autonomous swarms of lethal drones. What can Israel's experience tell us about how American policymakers should approach this technology?
Rosen: It shows that U.S. policymakers need to be very cautious about the use of AI in both intelligence and military operations. The White House, the Department of Defense, and other agencies have made numerous statements about responsible AI, especially in the military. But so far, all of this remains at the level of principles.
It all depends on how these broad principles for the responsible use of military AI are operationalized in practice. Of course, we have yet to see the United States rely on these tools in an open conflict. But that is certainly coming, and the United States should use this time not only to learn the lessons of what is happening in Gaza, but also to operationalize these broad principles and to actively socialize them with other nations. The United States has led the effort to get other countries to sign on to these principles for military AI; there has been some success, but progress has been very slow. That is what is desperately needed right now.