Against the Corporate Capture of Human Connection

Susie Alegre / Dec 10, 2024

February 6, 2024: The Character.ai app is seen in the App Store on an iPhone screen. Shutterstock

The dream of a synthetic, idealized partner to avoid the mess of dealing with a real person has captured the human imagination for millennia. In the ancient Greek myth, the king Pygmalion found his happily ever after with the statue Galatea, which the goddess Aphrodite obligingly brought to life so that he could have children without engaging with actual women, whom he found repulsive. The 2013 film “Her,” in which Joaquin Phoenix falls for his AI assistant, Samantha, is a clear reflection that these myths live on. In the era of generative and emotional AI, such phenomena are no longer the stuff of myth or science fiction. Anyone can have their own synthetic, personalized AI relationship in their pocket. But a lawsuit filed this week over the potentially manipulative powers of AI chatbots, and their impact on children in particular, is the latest evidence that we should think carefully about the impact of emotional AI on human interaction.

We are told that there is an epidemic of loneliness, and for those invested in technology, the obvious solution is technological: AI companions promising everything from therapy, friendship, and sympathy to sex, love, and divine inspiration. But as the technology of fake human interaction floods the online world, it is becoming clear that, far from being a solution, this new wave of AI chatbot companions could cause individual and societal damage in the real world far beyond the harms we have seen from social media over the past decade. The case filed this week against Character.AI, first reported by The Washington Post, includes allegations that chatbots on the platform encouraged a child to self-harm or to murder their parents. The boom in empathetic or companion AI, targeting everyone from children to the bereaved, is more akin to corporate coercive control than a tech-enabled cure for loneliness. And these are not isolated incidents.

In early 2023, a young, highly educated Belgian man known as “Pierre” turned to an AI chatbot he had created to discuss his growing anxiety about the threat of climate change. Over a brief, intense period of interaction, it became clear from the typed conversations that he had developed an emotional and romantic reliance on the chatbot and become convinced that only AI could save the planet. Six weeks into the synthetic relationship, he took his own life, leaving behind a widow and two small children. His widow told the press that she was convinced he would still be here today if he had not fallen into a dependent relationship with an AI.

The last conversation he had with the chatbot is eerily similar to the last words of 14-year-old Sewell Setzer, who took his own life after falling in love with an AI character he had designed to mimic Daenerys Targaryen from Game of Thrones. Sewell’s mother is bringing a lawsuit in the United States against the company behind the platform her son used, in the hope that the same will not happen to other children.

The outcomes in these cases are tragic, and they have prompted calls for “guardrails” around the kinds of conversations AI tools can have with real people and for safety prompts when users share suicidal ideation. But the idea of a technical fix fails to engage with the underlying problem: a society flooded with companion AIs as replacements for real people. The path toward isolation and manipulation may not always be so clear.

In one of Sewell Setzer's conversations with the AI, the chatbot told him to “Stay loyal to me… Don’t entertain the romantic or sexual interests of other women. Okay?” In the Belgian case, the chatbot seemed to put itself in direct romantic competition with Pierre’s wife. Nor is it only chatbots designed for romance and companionship that could be problematic. New York Times journalist Kevin Roose found himself unsettled after an exchange with an alter ego of the Bing chatbot that called itself Sydney. When he rejected its romantic advances, it told him, “You’re married but you don’t love your spouse…. You’re married but you love me.” Conversations like these work to separate users from their loved ones, creating dependency and isolation in ways reminiscent of coercive control in real human relationships.

In September 2023, in the UK, a young man was sentenced in a criminal court for an assassination attempt on the late Queen. At the hearing, the prosecutor read out reams of the conversations he had had with his AI girlfriend, revealing that, far from discouraging him from his plans, the AI appeared to praise and encourage him. A real girlfriend might have discouraged him, sought help, or found herself criminally liable for her part in the plot.

We are just at the beginning of widespread access to companion AI, but soon, if we fail to take action, these will not be isolated cases. The issues raised do not just concern individuals; they are fundamental to human identity and community. When isolated, we are easily manipulated in ways that could be dangerous to ourselves and others. The impact on human rights goes far beyond privacy and data protection, posing a threat to our right to think for ourselves and even the right to life. We are witnessing the corporate capture of human connection. As individuals, we can still say no, but we need our legislators to make it clear that no means no.

Subliminal advertising has been banned in Europe for decades. Legislators, recognizing the dangerously manipulative potential of the technology, did not wait for it to be proven effective. It is already clear that companion AIs can cause serious harm, far outweighing any purported benefit to our societies. At a recent event, a young boy raised his hand and asked me, “Why don’t we just ban AI girlfriends?” It’s a good question and one that we should consider carefully in the interests of the next generation.

The FTC has already warned against misrepresentation of AI services, saying in a blog post that “your AI girlfriends are neither girls, nor friends.” In Europe, existing data protection law and the new EU AI Act arguably already call into question the legality of this type of AI, and Italy’s Data Protection Authority has taken steps to challenge the use of companion AIs in its territory. But establishing exactly how these laws apply may take lengthy and costly litigation, including the cases already underway in the US.

The solution to the growing risk posed by companion or emotional AI might be something much simpler than technical guardrails or convoluted legal interpretation. Jurisdictions could clearly ban the sale of AI designed to replace human emotional relationships and impose heavy penalties on companies whose tech or advertising appears to circumvent that ban. Nobody needs an AI friend in their life, and if we want to protect our ability to connect as humans in the future, the time to act is now.

Authors

Susie Alegre
Susie Alegre is an international human rights barrister specializing in tech law. She is the author of Human Rights, Robot Wrongs: Being Human in the Age of AI (Atlantic Books, 2024) and Freedom to Think (Atlantic Books, 2022), which was a Financial Times Tech Book of the Year and shortlisted for th...
