EU’s Role in Teen AI Safety as OpenAI and Meta Roll Out Controls
Raluca Besliu / Oct 2, 2025
The headlines sound reassuring: OpenAI has introduced parental controls for ChatGPT, allowing parents to link their accounts with their teens’ and adjust settings for what it calls a “safe, age-appropriate experience.” The company also announced that it is developing an age prediction algorithm to steer users toward age-appropriate content, and said it may involve law enforcement in rare cases of acute distress. Meta made a similar announcement, pledging to train its AI chatbots to avoid engaging with teens on sensitive topics like suicide, self-harm, and eating disorders, and to direct them instead to professional support resources.
Amid growing concerns about the dangers chatbots may pose to teens, tech companies seem to be stepping up to address these mental health risks. A Meta spokesperson told Tech Policy Press that the company will, for now, limit teens' access to a select group of AI characters, with these updates set to roll out over the coming weeks.
The recent corporate pledges emerged not from proactive safety considerations but from lawsuits, such as the case of a teenage boy in the United States whose parents allege that a chatbot encouraged his suicide. Naomi Baron, a professor emerita at American University in Washington, DC, argues that these corporate measures, “while a step in the right direction,” are insufficient. Measures like OpenAI’s new parental controls and the emerging age prediction algorithms, she asserts, “won’t solve the problem.” What is needed, Baron believes, are “disincentives for users of all ages to become dependent on AI programs designed to replace humans as sources of companionship and advice.”
While parental controls may address immediate risks, they fail to tackle the deeper, systemic issue: the growing reliance on AI without effective legal frameworks.
At the policy level, the EU’s AI Act offers a regulatory framework that includes measures to mitigate the mental health risks posed by AI chatbots. However, the current EU legal framework still treats such incidents as unfortunate outliers rather than predictable consequences of deploying powerful conversational AI without sufficient safeguards.
Where regulation is lacking
The EU’s AI Act falls short of addressing the complexities of how AI systems actually operate in practice. Many AI chatbots still fall into the limited-risk category, which requires only basic transparency, such as disclosing to users that they are interacting with a machine, leaving many of the mental health concerns unaddressed.
Brando Benifei, an Italian Social Democrat who serves as a Member of the European Parliament (MEP) and was one of the AI Act’s co-rapporteurs, points out that, beyond transparency requirements, “under Chapter II, Article 5 of the Act, all AI systems are required to comply with prohibitions against purposefully manipulative techniques that could materially distort behavior and cause significant harm.”
These provisions are relevant to mental health, as AI chatbots can manipulate users’ emotions and behaviors in ways that negatively affect their well-being.
However, the "purposeful" standard creates a significant evidentiary burden, requiring regulators to access closely guarded corporate communications and technical documentation that companies are unlikely to disclose voluntarily. This type of information has been revealed so far only through leaks, rather than deliberate disclosures.
The AI Act also provides stronger safeguards through comprehensive risk assessment, “specifically addressing mental health concerns,” MEP Benifei added. The “most commercially successful chatbots like ChatGPT” fall under obligations for general-purpose AI with systemic risks, “requiring providers to assess and mitigate risks, including those posed to public mental health.”
This essentially means that companies providing these models must test how their systems might impact public health, including mental health, identify risks, and implement mitigation measures to reduce those risks before or as part of product deployment.
But the Netherlands' Green MEP Kim van Sparrentak questions whether current testing requirements go far enough. "We have strict rules for medical devices or even toys, yet we currently let businesses unleash chatbots giving psychological and medical advice to people," she emphasizes. "We are letting them use people like human guinea pigs for their AI."
Advisory, not mandatory
Recent EU legislative developments further highlight the gap between recognizing problems and implementing real safeguards. In July 2025, the European Commission published its guidelines on the protection of minors online. The document promises a step forward: it sets out expectations around escalation mechanisms, age verification, and independent auditing for online platforms. The problem is baked into the format. The guidelines are advisory, not binding.
Age verification is a prime example. The Commission urges companies to implement age assurance mechanisms, but stops short of requiring them. The result is a patchwork where some of the most widely used chatbots, including Claude AI and DeepSeek, rely on little more than checkboxes asking users to confirm they are over 18.
“Self-declaration and token parental check-boxes are easily circumvented; they do not constitute real protection,” Germany’s Green MEP Alexandra Geese told Tech Policy Press. “I believe mandatory, effective verification should become law for any AI service that interacts with children. Oversight should be continuous, not a one-off certification.”
Stronger technical solutions exist. The EU has developed an age verification app that enables users to prove their age without disclosing personal details. The system issues age attestations through trusted identity services, such as national eIDs, and stores them in a secure wallet.
Platforms only receive a "yes/no" response regarding a user’s age, not their underlying personal data. Pilots are already running in Denmark, Greece, Spain, France, and Italy.
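To illustrate the privacy model described above, the sketch below is a purely conceptual Python example, not the Commission’s actual app or its real API; all names and structures are hypothetical. It shows a platform-side check that learns only a signed yes/no answer from a trusted issuer, never a birth date or identity document.

```python
from dataclasses import dataclass

# Conceptual sketch only: illustrative of the "yes/no attestation" idea,
# not the EU age verification app's real data model or interfaces.

@dataclass
class AgeAttestation:
    """A signed claim issued by a trusted identity service (e.g., a national eID)."""
    over_18: bool      # the only fact the platform ever learns
    issuer: str        # which trusted service vouched for the claim
    signature: bytes   # proof the claim was not forged (verification omitted here)


def verify_attestation(attestation: AgeAttestation, trusted_issuers: set[str]) -> bool:
    """Platform-side check: accept or reject without ever seeing personal data."""
    if attestation.issuer not in trusted_issuers:
        return False
    # A real system would verify the cryptographic signature at this point.
    return attestation.over_18
```

The design point is that the platform trusts the issuer’s attestation rather than collecting identity documents itself, which is what keeps the user’s personal data out of its hands.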
Yet the European Commission has not made this tool mandatory. Its adoption depends on platforms volunteering to use it and member states aligning their enforcement strategies.
The false positive problem
Beyond regulatory shortcomings, even when companies do implement safeguards, technical and ethical challenges show why regulatory frameworks cannot rely on voluntary corporate measures alone.
Suicide-prevention features in AI chatbots rely on automated systems trained to recognize verbal or emotional cues that suggest a user may be in crisis. When warning signs are detected, the chatbot might share links to crisis hotlines or, in more advanced setups, escalate the case to human moderators or even emergency services.
False negatives, where systems fail to detect genuine distress, can lead to missed opportunities for intervention, potentially resulting in serious consequences. False positives create their own problems: overly sensitive algorithms might trigger unwarranted interventions, causing unnecessary distress or even involving authorities without justification.
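To make that trade-off concrete, the following minimal sketch assumes a hypothetical classifier that outputs a risk score between 0 and 1; it illustrates the general pattern, not any vendor’s actual moderation pipeline. Moving the single threshold catches more genuine distress at the cost of more unwarranted interventions, or vice versa.

```python
# Illustrative sketch only: the threshold values and tiers are arbitrary and
# do not reflect how any real chatbot's safety system is configured.

CRISIS_HOTLINE_MESSAGE = "If you are struggling, please reach out to a crisis hotline."


def handle_message(risk_score: float, threshold: float = 0.7) -> str:
    """Map a classifier's risk score (0.0 to 1.0) to an intervention tier.

    Lowering `threshold` reduces false negatives (missed distress) but raises
    false positives (unwarranted interventions); raising it does the reverse.
    """
    if risk_score >= 0.9:
        return "escalate_to_human_review"   # human moderators or emergency services
    if risk_score >= threshold:
        return CRISIS_HOTLINE_MESSAGE       # surface helpline resources
    return "continue_conversation"
```

A classifier whose scores skew lower for some cultural or linguistic groups will, at the same threshold, miss distress in those groups more often, which is the bias problem raised above.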
Cultural and linguistic biases embedded in these systems add another layer of complexity. Detection models trained primarily on certain demographic groups may misinterpret cultural expressions of distress or miss warning signs that don't fit their training patterns.
“Automation can help flag risk, but alone it isn’t acceptable in life-or-death contexts. Providers must put people behind these systems: proportionate human review at critical moments, clear pathways to helplines, and swift escalation when signals persist,” stressed MEP Benifei.
German Christian-Democratic MEP Axel Voss emphasized that industry-led measures create additional blind spots: “The focus often falls on the largest platforms, while smaller or less visible ones, which may follow more harmful practices, escape scrutiny. A harmful video may be removed from TikTok only to resurface on Reddit; one chatbot may refuse to engage on mental health topics, while another fills that gap without safeguards.”
He noted: “This is why binding regulation is needed. A holistic, enforceable framework is required to ensure that all platforms and services, large and small, uphold the same basic standards of safety and protection for users, especially minors and those in vulnerable situations."
Beyond voluntary compliance
Indeed, legal frameworks need to treat the mental health impacts of AI as a public health crisis, not a corporate responsibility exercise. This means mandatory safety standards for AI systems handling mental health conversations, with real enforcement mechanisms and proactive design, not reactive patches after tragedies.
As van Sparrentak puts it: "These systems are simply built to produce text, not to be psychologists. That makes them dangerous. It's like receiving therapy from someone who has read all the books in the world but doesn't necessarily know everything about psychology."