Perspective

Why Europe Needs Conversational Liability for AI Harms

Sergey Lagodinsky, Francesco Vogelezang / Jan 30, 2026

Sergey Lagodinsky is a Member of the European Parliament and Francesco Vogelezang is his adviser on digital policy.

AI can now talk. But when it produces harm, no one is legally responsible.

It’s time for Europe to fill that gap; if it does not, things will only get uglier. The consequences of inaction are already clear: in the recent Grok case, a Center for Countering Digital Hate (CCDH) research report documented an estimated 3 million sexualized images, including 23,000 depicting children, generated in a mere 11-day period.

And in some cases, the consequences are already fatal. Last year, 17-year-old Adam Raine began talking with ChatGPT about suicidal thoughts. OpenAI’s GPT-4o did not raise alarm bells or call for help. It engaged. It empathized, advised, and even discussed various suicide methods under the guise of “character development.” Adam took his life in April 2025. His parents are now suing OpenAI for negligence and wrongful death.

Their complaint, Raine v. OpenAI (US District Court, N.D. California, filed August 26, 2025), could prove pivotal for the role of AI liability in human-machine interactions. The parents claim that the company failed to exercise reasonable care in designing and monitoring ChatGPT, allowing harmful interactions that ultimately contributed to their son’s death.

The case exposes two major blind spots in European law. First, we lack agreed-upon rules on how to evaluate AI’s responsibility for mimicking pseudo-human relations. Second, we have no rules about to whom such responsibility should be attributed in the human world.

In short, AI systems can now talk, but no one is legally responsible for what they say or for the harm their output can cause.

Why this matters

This debate is not about providing information or executing actions. A chatbot is no longer just a source of information; it is a source of relationships.

Chatbots simulate a bond that is emotional, conversational, and continuous. They don’t just provide data or influence decisions; they create a feeling of being understood, of sharing the kind of bond that humans have with one another.

This relationship feels increasingly human, but it is human only on one side. We call these half-synthetic relationships.

One side is not human, but market pressure, science fiction fantasies, and a human longing for a quasi-Frankenstein experience lead producers to simulate conversational styles that let the synthetic nature of one half of the conversation be forgotten. Chatbots employ specific techniques to achieve this illusion. As Dr. Luiza Jarovsky has documented extensively, they utilize persistent memory to store personal details, anthropomorphic empathy to mirror emotion, agreeability that avoids moral pushback, and engagement loops designed to keep users talking.

Yet feelings, when mishandled, exploited, or negligently used, can cause real-world harm. A system that imitates care and relationships bears a moral burden for such damage, but no legal responsibility.

This is the gap that we propose to close with the concept of conversational liability.

Where European law stands

The EU’s AI Act prohibits manipulative or exploitative uses of AI but assumes that compliance will prevent harm. What happens if this assumption fails and material damage or human tragedy materializes?

The initially proposed AI Liability Directive was meant to close that gap and address the issue of damages and responsibility, but the European Commission has withdrawn that proposal. Once harm occurs, it falls into a legal vacuum. In Europe, victims and their families, like Adam’s parents, would now face an impossible evidentiary burden and enormous legal uncertainty.

Three trends will make this gap wider.

First, whether or not it ultimately proves realistic, the race to develop superintelligence is unfolding and scaling up the capacities of these machines. Tech companies are spending about $400 billion a year to train and operate AI models, more than the Apollo program cost, adjusted for inflation.

Governments and investors are chasing artificial general intelligence and superintelligence with little clarity on what these systems will actually do or whom they will serve. The market will not stop.

Second, long before the dreams of superintelligence ever materialize, AI agents will mediate daily life: writing, therapy, planning, and decision-making. These are all interhuman interactions in which one side will be seamlessly substituted by algorithms. This dependency will run deeper than social media ever did. The duty of care must therefore extend to the agents that will make themselves comfortable inside our routines, not just to today’s chatbots.

Finally, post-truth manipulation will grow as models grow more powerful. They will no longer just process information. They will produce emotional and tangible reality. They will become increasingly indistinguishable from the real world of humans and induce feelings that are indistinguishable from those of interhuman relationships.

When every conversation, image, and “fact” can be synthetic, manipulation becomes systemic. The danger is not only personal harm but collective disorientation: an information collapse in which truth competes with convincingly generated lies.

Europe cannot treat this as an ethical afterthought. It’s a legal issue. Without conversational liability, these systems will keep shaping human judgment with no accountability whatsoever.

What must change and what conversational liability should mean

Europe needs an algorithmic duty of care for conversational systems: a binding obligation for providers to anticipate, prevent, and answer for foreseeable harms.

To achieve this, the EU must reintroduce the AI Liability Directive, at least for chatbots, and classify generative AI models as Very Large Online Search Engines under the Digital Services Act to trigger audit and risk-management duties. We should also include addictive and manipulative chatbot design within the upcoming Digital Fairness Act, while clarifying how we classify and treat AI agents as a powerful new development of artificial intelligence.

Most critically, we must define and codify conversational liability. This includes establishing the extent of the conversational duty of care based on the risks posed by half-synthetic relations, and calibrating obligations on the human side of the conversation according to user vulnerability, especially for minors or individuals in crisis. We need clear principles of responsibility that assign liability to human agents (whether developers, deployers, or other actors), while also defining a principle of shared liability that addresses the victim’s co-responsibility without neglecting the responsibility of chatbot developers or deployers.

Above all, we must start an honest discussion about half-synthetic relationships and the risks and liabilities connected to them. Conversational liability, and AI liability in general, must be defined and regulated. This goes against the current trend of deregulation within the EU, but it is necessary to prevent further catastrophic damage.

Authors

Sergey Lagodinsky
Dr Sergey Lagodinsky is a German lawyer and author. He serves as Vice-President of the Group of the Greens/European Free Alliance (Greens/EFA) in the European Parliament, responsible for foreign affairs and digital policy. He is Co-President of the Euronest Parliamentary Assembly and a member of th...
Francesco Vogelezang
Francesco Vogelezang is an Accredited Parliamentary Assistant at the European Parliament, where he advises MEP Sergey Lagodinsky on digital policy, artificial intelligence, and technology regulation in the Industry, Research and Energy (ITRE), and Legal Affairs (JURI) committees.
