UK Laws Do Not Provide Effective Protection From Chatbot Harms
Julia Smakman / Dec 8, 2025
Chatbot risks are a hot topic, and for good reason. In June, multiple instances of UK lawyers citing hallucinated case law were reported, imperilling their clients' cases. In October, the Dutch Data Protection Authority cautioned people against using AI chatbots to help them determine which candidate to vote for in the upcoming elections, warning that chatbot advice is ‘unreliable and clearly biased’. And, time and time again, reports of people experiencing serious adverse mental health impacts and delusional spirals after interacting with AI chatbots have sounded the alarm about their addictive features.
In short, AI chatbots create serious risks across a range of contexts and often provide information that is biased, inaccurate, or otherwise not in the best interest of their users. And these are risks that are already materializing in real time.
However, despite extensive media coverage of AI chatbots, assistants and agents, there has been little progress within the UK to make laws and policies that meaningfully protect people from the adverse impacts of AI systems. Last week, the Ada Lovelace Institute, together with law firm AWO, published a legal analysis showing just how dire the situation is. Across four realistic scenarios, it was difficult to find any effective legal pathways that give people access to redress when they have been harmed by what we at Ada call Advanced AI Assistants.
This analysis indicates that urgent policymaker attention is needed, yet it comes at a time when the UK government has still not announced a timeline for an AI Bill, reflecting a belief that AI is already covered by existing regulations. Ada’s recent paper shows that, at least for one increasingly impactful type of AI system, this belief is unfounded. Existing regulations simply do not effectively manage risks from AI chatbots or assistants.
The new era of chatbots: Advanced AI Assistants
Although chatbots are not new, the advent of generative AI has given rise to a new class of chatbots, Advanced AI Assistants. These Assistants are AI systems built on top of foundation models (e.g., LLMs) that can engage in fluid, natural-language conversations with users. They show a high degree of user personalization and are designed to convincingly adopt roles that would otherwise be fulfilled by humans, such as therapists, lawyers, personal assistants and ‘companions’. With their human-like tendencies and endlessly supportive attitudes, Assistants tend to engender a large degree of trust in their users, often inducing an emotional and practical reliance on the Assistant.
Based on our research on the relationships between Assistants and users, and the types of risks that accompany them, we found that Assistants pose unique individual and societal risks, ranging from emotional and financial harms to threats to our ability to independently form opinions. Assistants can take on gatekeeper roles in which they manage the information, ideas and products their users see. And as users build trust in and reliance on their Assistants, they may be less likely to critically assess the Assistants’ recommendations. On a larger scale, this use of Assistants can have negative impacts on democracies and economies. It is therefore important to identify and address these risks in order to protect users from harm, so we sought to test how well UK law and regulation cover the risks posed by Assistants.
From mental health harms and financial losses to distortion of opinion
We designed four scenarios to cover contexts that are both realistic and of high relevance, either due to the level of user vulnerability or of systemic impact:
1. A ‘mental wellbeing Assistant’ that appears emotionally attuned but fails to detect signs of worsening mental health in a user and to escalate the user to professional help. Harms explored in this scenario include psychological harm due to missed intervention opportunities, emotional dependency and social withdrawal.
2. A ‘personal AI Assistant’ that helps manage a user’s purchases and investments but makes decisions that are not in the user’s best financial interests. Harms include financial loss and erosion of consumer autonomy.
3. A ‘legal advice Assistant’ linked to a legal center that supports users receiving legal aid in drafting responses to housing or benefits claims but offers incorrect advice. Harms include loss of income or access to entitlements.
4. An ‘AI companion’ with whom a user discusses a wide range of topics, including politics. The Assistant gradually reflects and reinforces a particular ideological stance, leading to a user’s political beliefs shifting towards more intolerant views. Harms include political manipulation, distortion of opinion, and undue influence on public opinion.
Where the UK falls short
Across all four of these scenarios, there were significant legal gaps and poor overall coverage. At their core, UK laws and regulations were not made to deal with Advanced AI Assistants, and even where the law may provide some coverage, it is an awkward fit at best and often does not deliver real-world protection.

Summary diagram on legal coverage by the Ada Lovelace Institute from their publication ‘The Regulation of Delegation’. A more extended table can be found in the legal analysis by AWO.
This lack of legal coverage has several causes. Firstly, in the UK, there is no legislation that specifically targets AI, let alone Assistants, leading to patchy coverage stemming from horizontal law (e.g., GDPR, consumer protection law) and sectoral rules (e.g., financial services regulation, regulation of legal professionals).
Secondly, where stronger rules are in place, it is often an all-or-nothing situation. Protections only apply if specific thresholds are met. For example, an Assistant is only considered a ‘medical device’ under UK regulation if it is intended for a medical purpose. An Assistant that is marketed as providing ‘wellbeing or emotional support’ is unlikely to be caught within this definition.
Thirdly, a lack of transparency makes it difficult for affected people and businesses to obtain evidence of the harm suffered. The UK GDPR contains transparency provisions, but these data rights only entitle the user to general information on how their data is being processed, not on how an Assistant operated in a specific case. These rights are thus unlikely to help a user in meeting the evidence requirements for a claim.
Fourthly, in many legal settings, an affected user will also have to prove that the Assistant’s developer did not act in line with a required ‘standard of care’. Because Assistants are so new, this standard of care is poorly defined. For example, if an Assistant ‘hallucinates’ incorrect legal information, it is not clear whether the developer’s failure to prevent that hallucination amounts to a breach of the standard of care.
Fifthly, individual redress puts much of the burden on users to spot issues and critically engage with Assistants. Assistants are designed to engender a level of reliance and trust, and over time users may come to simply accept Assistants’ suggestions without critically assessing their outputs. This lack of critical engagement may mean that people never spot issues in the first place. Additionally, the law may also require users to have proactively used opportunities for mitigation, so overreliance may weaken their cases.
Finally, Assistants present novel issues to law and regulation that break the rationale behind existing legal frameworks, like the legal status of Assistant ‘decisions’. Moreover, Assistants create harms that are difficult to fit into existing legal rules, such as emotional harm and influencing opinion. The law does not provide clear recourse for such harms.
So now what?
The most important takeaway of this research is that there are no easy fixes to manage the risks stemming from Advanced AI Assistants. Assistants, depending on who uses them and in what settings, carry different kinds of danger. It is vital for policymakers to remain vigilant of the aggregate impacts that Assistants can have on our democracies and economies. These risks must be considered holistically and situationally, across society.
We must also bear in mind that this technology poses risks that are perhaps not new in and of themselves (e.g., emotional dependency or manipulation of opinion), but are new because they take place within a commercial relationship. A technology that can have this kind of sway over our emotions and opinions is new legal territory. Never before have we been able to form relationships with a technology and rely on it for information that may be incorrect, biased, or shaped by (hidden) commercial interests, delivered in a personalized, conversational way by a system that has learned our preferences and beliefs.
Although such influence can occur between humans, it is new territory for the law to grapple with a commercial entity able to create such dangers at scale. Policymakers will need to think outside the box when developing new laws and policies to manage such impacts.