
AI in Warfare is a Slippery Slope

Atef Said / Mar 5, 2024

A MARTAC T-38 Devil Ray unmanned surface vessel, attached to U.S. 5th Fleet’s Task Force 59, sails in the Persian Gulf, Oct. 26, 2023.

Artificial intelligence is already being used in warfare. According to a recent report in Bloomberg, the US Department of Defense authorized Project Maven, an effort to deploy AI in warfare, in 2017. Now, the technology is being put to use:

Less than four years after that milestone, America’s use of AI in warfare is no longer theoretical. In the past several weeks, computer vision algorithms that form part of the US Department of Defense’s flagship AI effort, Project Maven, have located rocket launchers in Yemen and surface vessels in the Red Sea, and helped narrow targets for strikes in Iraq and Syria, according to Schuyler Moore, the chief technology officer of US Central Command.

Incorporating AI into war, the report warns, will provide the advantage “to those who no longer see the world like humans.” The US is not alone: China has also begun incorporating AI into its military, and Ukraine is employing AI software in its effort to turn back Russia’s invasion. Most recently, Israel has employed AI in the ongoing war on Gaza. According to statements by the Israel Defense Forces (IDF), AI is used extensively in its military operations there. One such system is known as Habsora, or “the Gospel,” and a substantial goal of its use is to rapidly produce targets based on the latest intelligence.

As a scholar who has studied the role of technology in movements and politics for a decade, and as a former human rights researcher, I believe it is not appropriate to use AI in war, especially in combat. The use of AI could diminish the role and quality of human decision-making in military operations. According to research in International Studies Review, new modes of war built on expanded use of AI detach warfare from human agency.

Such detachment could lead to destabilizing warfare conducted outside the bounds of human control. Undermining human decision-making in war is a serious concern, as emphasized in a recent position paper by the International Committee of the Red Cross (ICRC). The ICRC says that the expansion of AI in warfare must comply with existing rules of international humanitarian law. Without such compliance, expanded use of AI could be a significant step away from the application of that law, and “the potential implications may be far-reaching and yet to be fully understood.”

The expanded use of AI in warfare could blur the line between “decision support” (tools that assist human fighters) and outright “automated decision-making” (deploying AI to find and destroy targets, which may include people). As AI expands into intelligence, surveillance, and reconnaissance tools, and into targeting itself, the weighty decision and responsibility of killing another human could be lost in a patchwork of technology. This could diminish the sense of responsibility felt by human combatants and make war more ethically palatable to them: by delegating the selection of targets to AI, human soldiers become less directly engaged in combat operations.

This is no longer theoretical. In November 2023, Yuval Abraham, an Israeli researcher, activist, and investigative journalist, conducted an investigation for +972 Magazine and found that Israel has loosened its constraints on attacks that could kill civilians, a move that “deliberately endangers the civilian population in Gaza and tries to forcefully prevent civilians from evacuating.”

When the main priority becomes finding targets, AI in military operations becomes no less than “a mass assassination factory,” according to Abraham.

Of course, these are not just problems abroad. The US military is investing heavily in artificial intelligence: the Department of Defense allocated $1.8 billion for AI and machine learning “modernization” alone in its fiscal 2024 budget. And a 2020 Georgetown University study of US military investments in AI research found that “the ambiguity about the nature and scope of US military investments in autonomy and AI research makes it difficult to ensure oversight.”

And the US is a major source of financing for the Israeli military, providing approximately $3.3 billion a year in grants and helping make Israel’s defense budget, at $23.4 billion in 2022, one of the largest in the Middle East. AI is a major focus for the IDF. As Colonel Eli Birenbaum, chief of Israel’s military operational data and applications unit, stated in a recent interview, “Around half of Israel’s military technologists will be focused on AI by 2028.”

It is unfortunate that many nations seem to treat military spending on AI as exempt from critical public oversight. The constant expansion of military spending comes at the expense of schools, hospitals, and marginalized populations. It is urgent that policymakers prioritize using AI in healthcare, education, conservation, and other community-building efforts, instead of in wars and killing.

Authors

Atef Said
Atef Said is an Associate Professor of Sociology at the University of Illinois at Chicago, where he teaches sociological theory, political sociology, social movements, and digital politics. He is the author of Revolution Squared: Tahrir, Political Possibilities, and Counterrevolution in Egypt (Duke University Press).
