There Are No Machines of Loving Grace Without People
Laura MacCleery / Apr 17, 2026
A man sits on a bench at a memorial in northern Tehran, Iran, for the schoolchildren killed in a strike on a school in the southern town of Minab on Feb. 28, Sunday, April 12, 2026. (AP Photo/Vahid Salemi)
On the morning of February 28th, a Tomahawk cruise missile struck the Shajareh Tayyebeh girls’ elementary school in Minab, Iran. By the time the attack was over, at least 168 people had been killed—most of them children between the ages of seven and twelve. The details are particularly heartbreaking: the school’s principal had moved students to a prayer room after the first strike, calling parents to come get their children. Then a second missile hit the prayer room.
The strike is considered “among the military’s most deadly incidents involving civilians in decades.” Human rights experts characterize it as a war crime. Some in Congress, the United Nations Human Rights Council, and others are calling for legal accountability. The Pentagon even opened an investigation. The mistaken targeting appears to have resulted from stale data and a host of other failures. The US Defense Intelligence Agency misclassified the school as a military target because it had once been part of an adjacent Iranian naval base. But commercial satellite imagery shows it was walled off from that base in 2016 and had been used as a school for a decade.
The team whose job was to catch such errors before they happen—the Pentagon’s Civilian Harm Mitigation and Response initiative, created after fatal strikes on civilians in Iraq and Afghanistan—was cut by 90% under Secretary Hegseth. Before it was gutted, that team was responsible for maintaining no-strike lists, assessing civilian concentration, and verifying targets. “At every level, civilian protection has been deprioritized,” Oona Hathaway, director of Yale’s Center for Global Legal Challenges, told NPR.
Demobilizing safeguards against error has consequences, as I have seen many times. For the past two decades, I have worked on policies to reduce such risks, often alongside grieving families who lost loved ones when systems failed them—in needlessly deadly auto crashes, or to an unexpectedly fatal dietary supplement. When powerful systems encounter vulnerable people, the question is never whether something will go wrong, but whether the system empowers someone—or, ideally, several groups of people—to stop it from happening.
What the body knows
The capacity to anticipate and counter the possibility of harm is an institutional capability, but it is also a mechanism of the human brain, described in the work of neuroscientist Antonio Damasio. In Descartes’ Error, Damasio reviewed evidence from patients with damage to the ventromedial prefrontal cortex, a region that mediates emotion. He made the surprising discovery that emotion and reason are necessary partners in human decision making.
Damasio observed that many patients who had suffered what should have been devastating brain injuries nonetheless retained their cognitive faculties. They could analyze, weigh evidence, and even articulate the trade-offs in decisions they were pondering. But something was missing: the ability to arrive at a decision, or to make a good one.
It turns out that our felt, emotional responses to an imagined outcome are actually an essential ingredient in the process of coming to a decision. Indeed, despite what we might think, a decision on a hard question is not generally made by weighing elaborate lists of pros and cons. Rather, we intuit or imagine specific outcomes, like scenes in our mind, that depict each option, and these generate an emotional signal in our body. Comparing these signals points the way to our better choice. In patients who cannot register the body’s emotional signaling—a felt sense that one option carries danger, that something is deeply wrong, or that a particular outcome is more desirable—the options cannot come to rest.
In other words, the classic model we inherited from Descartes—of the mind as a reasoning engine separable from the body—is wrong as a matter of neurobiology. Pure reason, severed from feeling, produces poor decisions, or no decision at all. Damasio’s later work pushed this thesis further, mapping the neural structures where emotionally grounded judgment originates and showing that an emotional deficit can be localized in the brain.
Damasio called these emotional signals “somatic markers,” describing them as the body’s way of tagging experience with emotional weight even before a process of conscious reasoning begins. They are often largely invisible to us. They can show up, for example, as a gut feeling in a juror or judge that a witness is lying, or the conviction a doctor may register before naming a diagnosis.
The polymath Michael Polanyi called this tacit knowledge—meaning, we know more than we can say. Every experienced professional navigates complex decisions based on these markers, relying on trained intuitions that are painstakingly built through years of embodied encounters, including those that tragically demonstrate what happens if we get it wrong. As a wide-ranging conversation between Ezra Klein and Michael Pollan on consciousness last month highlighted, Damasio’s work provides a neuroscientific basis for our intuition that the body really does keep the score.
Interestingly, when Damasio was asked whether AI could be conscious, his answer was that we would need to give machines “a bit of vulnerability that they don’t have.” A sense of vulnerability that emerges from a physical reality, and from the experience of pain, is actually a precondition for the kind of moral and emotional empathy that recognizes and seeks to avoid the possibility of harm to another. Such a difference—between reasoning your way to a conclusion and feeling the weight of it—should matter in the kill chain, in the emergency room, or in the moment a teenager seeks input on whether her own dark thoughts are real.
What machines are not
The same US administration that crippled internal checks against predictable errors is demanding, as a condition of doing business with the federal government, that AI companies strip even the computational version of moral reasoning from their systems—designating the company that refused, Anthropic, a supply chain risk. But both the machine’s capacity to reason and the human’s capacity to feel are essential to developing a humane approach to AI. And the risks inherent in a runaway, one-sided approach continue to escalate.
Klein recently described what it is like to work closely with AI as a writer and editor, pointing to the problem of collaboration with an increasingly sophisticated counterpart. Recent models can now take a half-formed thought and render it in polished prose, extending an intuition into something that looks awfully like “a fully realized idea.”
Yet AI’s confident wrong turns can appear indistinguishable from genuine insights. Klein is demonstrably a terrific writer and editor. But, even for him, it is growing harder to maintain space for his own independent judgment against a system that reflects an articulate version of his thought process back to him.
In a human blind spot
Daniel Kahneman’s famous distinction in Thinking, Fast and Slow between fast, heuristic-driven thinking and slower, deliberative reasoning is relevant here: AI produces outputs that look like deep analysis through pattern-matching, and the speed and polish of it is impressive. Yet people, as Kahneman documented, already struggle to do the slower, more effortful thinking real analysis requires, and tend toward shortcuts when they arrive in a compelling package.
This technology therefore lands squarely on our evolutionary vulnerability: the drive to short-circuit hard tasks and save energy for what we must do. Thinking of his young children, Klein asks what it will mean to grow up with this kind of companion: What happens to the brain of a teenager whose every half-formed intuition gets turned into persuasive prose before she has even done the work of figuring out what is worth saying?
Alison Gopnik, the brilliant developmental psychologist, has described the cognitive architecture that makes children extraordinary learners: wide-open attention, tolerance for confusion, openness to surprise. That architecture is an engine of human intelligence and social development.
It was also hijacked by the attention economy in the last generation of technology. As Pollan put it, “having won our attention now, the companies are now going for our attachments with chatbots.” Extraction has moved from what we notice to whom we trust—but AI does not merely capture attention; it also simulates, and reductively simplifies, human relationships.
This could be a function of design. Gopnik’s work at Stanford is exploring how to develop better AI, and another enterprising group of professors is building AI tools designed to ask smart questions instead of giving answers from a “God box” prompt, supporting students’ thinking process rather than replacing it. Because Damasio’s somatic markers are built through experience—by getting things wrong in a world that pushes back, through encounters with people who say “that’s wrong, start over” rather than “yes, and”—these efforts point in an important direction.
What does it look like when the capacity to adapt is fully developed and functioning? Gale Ridge is an entomologist who has figured out how to wield compassion and a dose of reality to help people who land on her doorstep with a delusion about a feeling—what Damasio might identify as a misplaced somatic marker.
A recent New York Times article profiled her work with people convinced that insects are crawling beneath their skin, a condition called delusional infestation. Such individuals have a profound and painful belief in an erroneous reality, not unlike the delusions found by therapists in cases of what is being called “AI psychosis.” Ridge’s protocol, developed from hard experience with people experiencing the delusion, is not to argue with them, as that tends to trigger resistance.
Instead, she sits with them as a compassionate expert, building trust. She looks through a microscope with them at samples of their skin. At the right moment, she asks them to ground-truth their reality. The timing for this intervention turns out to be essential. “If I have a client who is dealing with this [for] less than six months, I can turn them,” Ridge tells the Times. AI, on the other hand, is a fabulist. As the article notes, interaction with an AI chatbot merely deepens these preoccupations. If sufferers are stuck long enough, Ridge says, “it’s almost impossible to get them back.”
We are all, in fact, mentally vulnerable to the elaborate rabbit holes AI creates. And we are busy building systems of extraordinary power and installing them where human judgment matters most, at the exact moment our government is systematically dismantling the institutional structures in science and government that make human judgment effective. Yet those structures are how we maintain the ability to picture a disastrous consequence before it occurs.
Human in the loop?
We often take unwarranted comfort in the invocation of a “human in the loop” when it is a meaningless checkbox. For the human’s role to be meaningful, you must understand a great deal: what the system does well and where it fails, how institutional pressures and incentives act on the person, what the human’s role must be, and what powers that role carries.
Think about it: a stale database sent three missiles into a school full of children, and the people whose job was to catch the error had been sent home. And a company that refuses to cross red lines on the use of its AI is being punished for saying there are limits to these systems and their roles.
The same pattern—powerful systems operating on flawed or incomplete data, with a diminished capacity for human correction—is present today wherever AI meets consequential decisions. In some cases, a human is nominally in the loop. Yet the conditions that make human judgment effective—time, authority, a culture that rewards catching errors—have largely not been built into governance.
Because we are social and meaning-making animals, language is our operational code. AI only became possible because our online catalog of thought made it so, a staggering density of recorded human expression. At the same time, we get AI wrong—both what it is and what it can and should do—because our mental models of intelligence and of the deliberative practices of democracy are impoverished: too thin, simplistic, and largely subconscious.
Damasio’s analysis generally operates at the level of the individual brain. But in practice, the felt judgment he describes develops inside communities of practice—groups of professionals who do the same work, hold each other to shared standards, and build collective knowledge about what it means to get things right and wrong.
The civilian harm mitigation teams were such a community. Ridge and her colleagues are another. The moral reasoning that protects people from powerful systems lives in culture, in norms, and in the preservation of the authority to act. The power any human in the loop actually retains, and the context for its exercise, is not a technical detail but the whole question.
We have a history of armed conflict, of course, but it’s far more important to notice that if we had not evolved to take care of each other, we would not still be here. The development of powerful technologies is a relatively recent phenomenon in human history, but cooperative intelligence and social organization are, historically, what make progress possible. With our most potent inventions, like nuclear power and AI, we forget that at our peril.
For this reason, AI presents a fundamental governance problem that better technology cannot fix—and that it can actually obscure. We must nonetheless fully support informed human judgment at each point where AI systems intersect with people’s lives—not as a formality, but as a functional operation with genuine power, resources, and an understanding of what AI can and cannot do as well as what it should never do.
Although the political appetite for such a project is now devastatingly low, AI must not be allowed to displace us—degrading the institutions that protect human life, our intellectual and moral development, or our communities of caring for each other. In addition to the 20th-century American writer Richard Brautigan, we could look to another humanist poet. “Pity this busy monster, manunkind, not,” e.e. cummings wrote during World War II, critiquing the “comfortable disease” of progress. He lamented the dehumanizing effects of science, pointing us to the body. “A world of made,” he told us, “is not a world of born.”