Debating Whether AI Is Conscious Is a Distraction from Real Problems

Giada Pistilli / Jun 16, 2022

Giada Pistilli is an Ethicist at Hugging Face and a Ph.D. Candidate in Philosophy at Sorbonne University.

Alan Turing, Wikimedia Commons

As a researcher in philosophy specializing in ethics applied to conversational AI systems, I have been studying conversational agents and human-computer interaction for years. At nearly every talk or panel I participate in, I am asked during the Q&A session to engage in philosophical discussions about conscious AI and superintelligent machines, and often to explain the details of the technology to audiences unfamiliar with it. This happened again a couple of weeks ago. Frustrated, I tweeted a thread that went viral, probably because many colleagues face the same situation. Many people in the machine learning industry, and among the interested public, want to discuss consciousness at any opportunity.

This week, of course, the subject is in the news again, since a Google engineer announced that he believes he may be talking to another consciousness when he interacts with a large language model he helped develop.

Talking about consciousness is intriguing. Philosophers of mind have been studying it for a very long time, and it is no accident that they refer to it as "the hard problem of consciousness." What does it mean to be conscious? To have an awareness of space and time? To be self-aware and self-conscious? To be able to learn from experience? In 1974, the American philosopher Thomas Nagel published a famous paper, "What Is It Like to Be a Bat?". Through qualia, a term for the subjective experiences of consciousness, Nagel describes the difficulty of putting oneself in a bat's shoes, since the bat has a sensory apparatus that is impossible to imagine from a human perspective.

If we move from animals to machines, asking "Can machines think?" is an equally interesting question. In the late 1940s and 1950s, Alan Turing talked about universal machines and wanted to push their development to simulate human activities. His famous 1950 article "Computing Machinery and Intelligence" inspired generations of computer scientists, and the Turing test became a goal to be achieved.

But we need to take a step back. First, it is true that Turing's writings have been interpreted in different ways and have shaped an all too anthropomorphized field of computer development. For example, we refer to "intentions" when sending requests and talk about "neural networks" and "learning." Yet, in that famous article, Turing discusses the "Imitation Game," and he talks about "illusion": the goal is to delude the human interrogator into believing they are talking to another human.

From a philosophical perspective, focusing on the illusory part and not the intelligent one is intriguing. On this illusion, we have built a technological empire. We still know very little about the human brain from neurological, cognitive, psychological, and philosophical perspectives. And while we humans may fear what we don't know, we do love competition. So from the moment we were made to believe, through semantic choices that gave us the phrase "artificial intelligence," that our human intelligence would eventually contend with an artificial one, the competition began.

(The reality is that we don't need to compete for anything, and no one wants to steal the throne of ‘dominant’ intelligence from us. In fact, if that were to happen, I think it would emerge first from a new understanding of the animal kingdom rather than from machines. But that's another story.)

When Blake Lemoine says he is interacting with a conscious being when he interacts with LaMDA, Google’s artificially intelligent chatbot generator, we are observing a person indulging in a mixture of illusions and subjective perceptions, as described by Alan Turing and Thomas Nagel. A language model is trained on a snapshot of data representing a fraction of space and time. Impersonation, and thus speaking in the first person, is one of the main tasks of a conversational agent based on those language models. In other words, making the interlocutor believe they are facing a person is part of the conversational experience, and the engineers are well aware of that.
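This first-person impersonation is easy to see in practice. Below is a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (neither is named in this piece; the model and prompt are purely illustrative): a language model simply continues whatever text it is given, so a prompt written in the first person yields a completion that "speaks" as a person, with no inner life behind it.

```python
# Minimal, illustrative sketch: a language model continues the text it is
# given, so a first-person prompt produces first-person output by design.
from transformers import pipeline

# "gpt2" is an assumption for illustration; any text-generation checkpoint works.
generator = pipeline("text-generation", model="gpt2")

# The "persona" lives entirely in the prompt, not in the model.
prompt = "Interviewer: Are you conscious?\nAssistant: As a conscious being, I"

result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```

The point is not what the completion says, but that the first-person voice is produced mechanically, which is exactly what the engineers designed it to do.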

Lemoine’s reaction to LaMDA also makes me think of the ‘transitional object’ in developmental psychology. From infancy to about 3-4 years of age, children may become especially attached to a particular toy or object - remember Linus' blue blanket in Peanuts? It is a transitional object because it transitions the child's emotions from their parents to the toy: the child projects their emotions onto the object when they are alone and need to feel protected. This is just one example of how humans make emotional projections onto inanimate objects, and of how the line between objectivity and subjectivity remains blurry and difficult to draw.

I do not deny the interest in studying comparisons between human and artificial brains. I think advances in machine learning have made it possible to analyze humans in new ways and to value their unique capabilities. Nevertheless, it is critical to pay attention to what we communicate to lay audiences and to how we might interpret the experiences we have when interacting with language models. Because, again, what is critical here is the interests of the humans involved in the analysis, not the machines.

When I speak to lay audiences about artificial intelligence, the most common reactions are fear of the unknown and the regurgitation of narratives that have dominated this field of study for decades. A passionate but frightening mysticism surrounds the purported competition between human and artificial intelligence. I believe machine learning experts should help demystify artificial intelligence and make its advances accessible. Our duty as scientists is not to sclerotize the conversation and feed a narrative of fear, angst, and false myths. On the contrary, we need to inform the public, who are not experts in the field, and reassure them: no, superintelligent machines are not replacing humans, and they are not even competing with us.

In fact, these large language models are merely tools made so well that they manage to delude us. But being aware of this cognitive trap can perhaps help us in the future, when our fridge asks us what we want for dinner and, to entice us, says, "I'm hungry too; I would love some pasta and broccoli." It is impressive to study these models' immense progress in recent years. But it is crucial to keep a cool head, rationalize, and contribute constructively to the debate. Scientists and social scientists should remind the public that these artificial intelligence models are meant to cooperate with humans, not compete with us.

I would be happy to hear an argument in favor of developing models of ‘conscious’ artificial intelligence. What would be its purpose, aside from proving that we can do it? But that is all it would be, for the moment: an argument, a provocation. And I conclude that it is a distraction at a time when current AI systems are increasingly pervasive and pose countless ethical and social justice questions that deserve our urgent attention. For now, if we want to talk to another consciousness, the only companion we can be certain fits the bill is ourselves.
