Lethal Autonomous Weapons and the 'Right to Machine Hesitation'
Virgílio Almeida, Ricardo Fabrino Mendonça, Fernando Filgueiras / Jul 28, 2025
An air defense unit of the Ukrainian Armed Forces, known as drone hunters, illuminates the sky with a powerful searchlight at night at a position in the Kyiv region of Ukraine, March 24, 2024. Shutterstock
There is little doubt today that the world is going through a period of profound instability and war-related risk. While open armed conflicts around the world clearly demonstrate the turbulence of our times, deeper structural transformations are underway—challenging the very meaning of war and reshaping its practices. Chief among them is the automation of weaponry through the integration of algorithmic systems capable of making decisions without human intervention.
Lethal autonomous weapons and AI agents do not merely expand military capacity—they represent a profound shift in how war is conceived and enacted. The ability to autonomously identify, select, and target human beings, without requiring human oversight or decision-making, enables a form of robotic warfare that is not only more scalable but qualitatively transforms the nature of conflict. These systems, relying on complex algorithms, allow for both offensive and defensive actions with remarkable speed and precision.
To understand what is at stake, one must grasp the broader context of algorithmic acceleration in society. Virtually every domain of contemporary life now depends on automated decision-making systems: navigating cities, forming romantic relationships, managing human resources, even making medical decisions. We have argued that this process cannot be framed merely in terms of costs, benefits, or decision-making efficiency. Algorithmic systems function like institutions: they establish rules that shape behavior across contexts, influencing both individuals and society as a whole. Like any institution, algorithms now play a central role in shaping the arenas in which social practices unfold. They interact with existing institutions, often destabilizing or reshaping them—legitimizing some practices while delegitimizing others.
In a world increasingly automated by algorithms, the use of autonomous weapons—machines capable of making life-or-death decisions without human input—is a growing concern. These systems violate fundamental ethical principles: they do not comprehend the value of human life and reduce their targets to mere data points. Once activated, they operate independently, identifying and striking targets based solely on sensor input, without any human review or authorization. In 2023, the late Pope Francis warned of the risks these systems pose to human dignity, asserting that they can never be morally responsible. For him, decisions over life and death must be grounded in human ethical judgment—not algorithms.
Although there are still no specific treaties governing AI use in warfare, these systems are partially subject to international humanitarian law. Movements are emerging that seek to ban lethal autonomous weapons altogether, or at least to impose additional standards ensuring human oversight, legal accountability, and bias mitigation. As University of California, Berkeley computer scientist Stuart Russell put it, there is an urgent need for a code of conduct based on principles such as: “We should not develop algorithms that decide to kill humans.” Equally important is the creation of institutional structures to govern algorithmic warfare, reinforcing legal instruments like the Geneva Conventions and bodies like the International Committee of the Red Cross.
This brings us back to the question of lethal autonomous weapons. The algorithmization of war is more than a boost to military efficiency—it alters the very context in which war takes place, with significant moral consequences. As Norbert Wiener warned back in the 1960s, transferring morally complex decisions to machines is perilous. The decision to take a life—how to do so, under what conditions, and when to stop—is profoundly difficult, involving not just mathematical calculation but moral reflection. Moral decisions are deeply tied to the human capacity to hesitate. Machines, however, cannot hesitate; algorithmic systems respond with reflexive behavior. They may execute complex algorithms, follow advanced protocols, or perform multiple verification checks—but they cannot pause for ethical reflection. To hesitate is a uniquely human capacity, rich with moral significance.
A war without hesitation is a war governed by the pure calculus of force. And this is true not only for machines, but for humans operating within a context reshaped by the algorithmic rationality of autonomous weapons. In this complex assemblage of humans, machines, and algorithms, our behaviors are profoundly altered—not only by what we can now do, but also by what we may cease to do or become unable to do.
Lethal autonomous weapons have transformed the dynamics of war. They create new institutional logics and military strategies that directly affect armed forces, international law, and the lives of those caught in the crossfire of wars without hesitation. These weapons bring an impersonal logic to warfare, one that optimizes human extermination by treating individuals as targets. The drive for optimization is justified through a narrative of technical efficiency in pursuing designated targets—targets that are increasingly difficult to define within the bounds of international law. As Wiener warned:
There is nothing more dangerous to contemplate than World War III. It is worth considering whether part of the danger may not be intrinsic in the unguarded use of learning machines.
Yet these weapons are more than tools of optimization. They represent a rupture in the logic of human rights, ushering in a disturbing era of automated conflict. This unsettling automation may lead humanity down a path where machines manage death itself, drawing us closer to a necropolitics in which technology determines who is disposable. Lethal autonomous weapons make violence impersonal, less hesitant, more distant—and in doing so, they inaugurate an age of impersonal cruelty, dismantling previously institutionalized human rights norms and giving way to the cold, raw logic of automated death management.
As algorithms reshape the dynamics of war and social practices, they also give rise to new modes of resistance—grounded in a novel human right: the right to hesitation.