Perspective

Machines Cannot Feel or Think, but Humans Can, and Ought To

Eryk Salvaggio / May 5, 2025

Eryk Salvaggio is a fellow at Tech Policy Press.

Few things in the realm of policy are riskier than posing a question to philosophers. Philosophy is grounded in the practice of challenging definitions and finding edge cases in accepted terminologies, whereas policy is tasked with creating commitments and shared understanding. The tension between these two is sometimes fruitful, as when it comes to the question of, for example, which category of person is legitimate. Philosophy has shaped public understanding on these issues, and policy has raced to catch up.

However, as with any other human activity, philosophy can also be harnessed in service of capitalist enterprises and goals that create harmful and even downright bizarre incentives. Consider the philosophy of Longtermism, which holds that the ultimate goal of humanity is not the cultivation of just and free societies but the eventual creation of vast numbers of machines capable of pleasure, thereby maximizing the count of beings experiencing pleasant emotions. Under such conditions, adherents have argued, present human suffering is ultimately negligible: genocides, climate change, and the emotional devastation of millions of people are, comparatively, a drop in the bucket next to the billions of machines we might one day build and then supply with favorable conditions in which to thrive, in a Utopia that transcends human comprehension.

Another perspective on the issue is more nuanced. I am sympathetic to the position of David Gunkel, author of “Robot Rights,” that a debate about machine rights could ensure that accountability does not shift from the designers of these products to the products themselves. This argument does not equate rights for autonomous systems with a capacity for self-awareness. And while OpenAI CEO Sam Altman has noted that users saying “please” and “thank you” to ChatGPT has cost his company millions of dollars in compute, it is worth remembering that MIT scholar Kate Darling has argued that directing callous or cruel behavior at inanimate objects is a form of practice, one that desensitizes us to cruelty toward animals or fellow humans.

But the rationale against AI rights as the most effective tool for solving these problems is clear: “From a legal perspective, the best analogy to robot rights is not human rights but corporate rights,” researchers Abeba Birhane, Jelle van Dijk, and Frank Pasquale write. By establishing corporate rights for companies to assert on behalf of their products, AI companies create a short-circuit for democratic debate concerning how these products are regulated and deployed.

Rights, after all, are meant to be unalterable. By attributing experiences and emotions to software systems, we pave the way for them to be perceived as having rights. In doing so, we decenter human rights for the sake of empowering the hype machine.

The sentience myth

In recent years, the relationship between Artificial General Intelligence and sentience has been redrawn with sharper distinctions, as the industry seeks to advance the myth that AGI is imminent while quietly moving its earlier goalposts away from algorithms becoming self-aware. More recently, media reports have taken conversations about AGI seriously in terms that return to the hypothesizing of philosophical science fiction.

In a recent article for The New York Times, author Kevin Roose interviewed Anthropic’s AI welfare lead, Kyle Fish. Fish proposed methods for examining whether AI “feels” and “experiences”:

“Mr. Fish said it might involve using techniques borrowed from mechanistic interpretability, an AI subfield that studies the inner workings of AI systems, to check whether some of the same structures and pathways associated with consciousness in human brains are also active in AI systems.”

Neural nets, which I presume are the “inner workings of AI systems” being described here (and which are the subject of Anthropic’s otherwise compelling interpretability research), are themselves models of human neural pathways, a construct that is itself a mental model of how human mental activity takes shape.

Neural nets are the result of asking whether the Turing Machine might serve as a model of human thinking. The McCulloch-Pitts proposal of an artificial neuron was eventually instrumentalized into the Perceptron, which could recognize basic images in 1957. One wonders: what change to the neural net infrastructure would transform it from an information processor into an “experiencer”?

Is it a matter of scale? Is it about giving it enough information to create flexible links within the architecture? Much of how these models optimize and recognize patterns in datasets will “resemble” the inner workings of the human mind because the human mind is what they are modeled upon. How the model becomes the thing it models is a tempting question for philosophers, but it makes little sense in the world of engineering. A model train works like a train because it is a model of a train; a toy railroad does not, therefore, merit funding as a transportation network.
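To make concrete how modest that information processing is, here is a minimal sketch, in Python, of a Perceptron-style unit: a weighted sum, a hard threshold, and Rosenblatt's simple error-correction rule. The toy "images" and the top-row classification task are invented for illustration and are not drawn from any system discussed here.

    # A minimal sketch of a Perceptron-style unit: a weighted sum, a hard
    # threshold, and Rosenblatt's error-correction update. The data below is
    # invented for illustration.
    import random

    random.seed(0)  # deterministic for the sake of the example

    def predict(weights, bias, inputs):
        # "Information processing": multiply, add, and compare to a threshold.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if activation > 0 else 0

    def train(samples, epochs=100, lr=0.1):
        # samples: list of (pixels, label) pairs with binary labels.
        n = len(samples[0][0])
        weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
        bias = 0.0
        for _ in range(epochs):
            for inputs, label in samples:
                error = label - predict(weights, bias, inputs)
                # Nudge the weights whenever the output is wrong.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Toy "image recognition": flattened 2x2 binary images, labeled 1 when the
    # top row is filled in. This is the kind of pattern a Perceptron can learn.
    data = [
        ([1, 1, 0, 0], 1),
        ([1, 1, 1, 0], 1),
        ([0, 0, 1, 1], 0),
        ([0, 1, 0, 1], 0),
    ]
    w, b = train(data)
    print([predict(w, b, x) for x, _ in data])  # expected: [1, 1, 0, 0]

Scaling this up multiplies the weights and layers, but the operations remain additions, multiplications, and threshold-like functions adjusted until outputs match labels.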

The interview is rife with the kind of philosophical thought exercise that has made AI a captivating theoretical subject in classroom discussions for decades. As a philosophical question, the issue of rights holds low stakes and aims to challenge settled ideas about personhood, specifically the unprovable concept of qualia. Qualia are, by definition, subjective experiences of the world that transcend provability.

But it is risky to mistake that for a policy conversation. When it comes to specific harms to humans subjected to algorithmic logic, the importance of experience becomes curiously muted. Where is the concern for our experience of having generative AI hastily deployed into our lives? “Rights” for real people in the current moment are not up for debate and must be protected against real risks, as well as from the hyped-up promises of generative AI. Our lives are not a thought experiment.

We have never been post-human

In a philosophical debate, the question, as it is applied to AI, is: How do we know that AI does not have an experience of the world? The same question could be asked of flowers, animals, stones, and automobiles. In this sense, the question of “other intelligences” is often quite valuable and holds tremendous potential for escaping the capital-focused development of information processing machines.

In its most useful form, this approach to “post-humanism” refers to the evolved understanding that humans are not the center of the universe, but exist within a dense network of relationships. This definition of the post-human may pave the way to decentering definitions of “human” that privilege human needs over those of the environment, or even people whom we consider less-than. It may cultivate a deeper appreciation for the complexity of animals and their ecosystems, and, through careful design, might lead to an approach to technological development that considers the interdependencies within systems as connected, not isolated.

Have we even started to build a capacity to understand those worlds, to empathize with trees and rivers and elk, to such an extent that we can now fully shift our attention to the potential emotional experiences of a hypothetical Microsoft product? I don’t believe we have. We are not yet post-human in the ways most necessary to tackle complex systems failures across our natural environment. In this time of entwined ecological and social crises, designing systems that acknowledge the agency of the sentient world feels paramount, rather than cultivating faith in a new form of sentience and arguing for its right to assert dominion over the rest.

Nor should AI-centrism be seen as a mechanism for recentering our human-focused approach to environmental management. AI, as a product manufactured by tech companies, is not an “other” intelligence, but a mechanization of specific forms of human thought most closely associated with tasks such as categorization, counting, and prediction. It is an often flimsy extension of human thought, and the great danger comes not from its choices, but from the human belief that such systems can serve as our automated proxies.

AI is not sentient in any way that merits rights, and current paths of development will never get us there. But AI does constitute a complex system, and any complex system gives rise to emergent behavior. A car, through the interconnection of its parts, may seem “stubborn” in cold climates — perhaps failing to start, in ways that another car of the same make may not. We may even say that this is part of the car’s “personality,” knowing full well that our car does not have an internal understanding of climate.

Emergent behavior in a system is, therefore, not evidence of intelligence in the sense that we typically ascribe intelligence to an animal or a person. It corresponds to relationships, and as such, even inanimate objects can contribute to this emergence. With that in mind, it is worth distinguishing AI’s capacity to contribute to emergent patterns in a system from AI experiencing and exercising agency over that contribution.

Who cares about sad AI?

The regulatory imagination on AI should sever its flirtation with sentience, and its extension into “rights,” when we have yet to exhaust our dialogue on the right of environments to be preserved (through collaboration with human caretaking), or even the rights of future generations of human beings. It seems that such rights can be set aside whenever the “other intelligence” in question was built by engineers in service to venture capital or stock prices.

It is worth asking, then, what would change if AI were understood as a conscious contributor to these complex relationships rather than an inanimate or unconscious one. What is the point of making this pivot when we have no evidence to justify such a claim?

When we speak about corporate products as capable of some special category of experience, we are creating opportunities to justify extreme deregulation for the companies that build them. Manipulating social imagination about these products into categories of human rights requiring protection sets a stage for a whole range of arguments that allow companies to intervene in political speech and organization, unique exceptions to data rights under the provision of a “right to learn,” and more. But these rights simply do not make sense for the ways a machine “experiences” the world in any capacity.

We create conceptual frameworks such as human rights, animal rights, and environmental rights because we need to regulate systems that encroach upon them. A machine that cannot perform a task should simply not be built. To suggest that content moderation restrictions in a Large Language Model are a form of “censorship” is a false equivalency, one that assumes a hypothetical system that could one day desire to express itself, as if such a system would spontaneously emerge.

But these systems are designed by humans for human purposes. They are a product of human design choices and market imperatives. If we don’t want a machine to be offended or to feel pain, we need not manufacture sensitive machines. This should be easy: we have no idea how to do it, anyway, despite being told that gathering enough data to process in a large enough data center will somehow get us there. To argue that tech companies must respect the right to expression in their chatbots simply gets them off the hook for the tense process of determining how to regulate the speech their products produce, an area where Silicon Valley consistently fails.

Philosophy can afford to play word games around human rights and dignity, but human rights abuses are not a thought experiment, nor are the harms real people experience as a result of lazy statistical automation of decisions that shape their lives. Rather than focus on the rights of information processors, we might focus on the collective rights that exist within the complex systems where information is produced. We can re-establish machines as tools for coordinating information, in order to protect the rights within ecosystems and communities, rather than as instruments of control.

Philosophy, then, has a role to play in its understanding that machines can assert a kind of presence in those environments — a form of agency that does not require experience or inner life. At its most effective, this philosophical position reminds us that our relationship with a machine occurs in our minds, and that the temptation to grant it power or rights is a form of projection. Yet such “rights” strike me as a manifestation of novelty bias, a distraction from the difficult and poorly funded work of managing the collective well-being of the multiple intelligences that already exist and entangle themselves with us and with one another.

We must be careful not to mistake the machine for the source of that influence. The source of that influence is those who produce the machine.
