How the EU Can Stop AI Chatbots from Aiding Violent Attacks
Laura Kaun / Mar 17, 2026

Planning a violent attack has never been easier. In May 2025, a 16-year-old Finnish boy stabbed three of his female classmates, an attack he planned after an extensive conversation with ChatGPT. The perpetrator is said to have used ChatGPT to develop a manifesto and to discuss the approach and strategy of the attack.
In a recent report by the Center for Countering Digital Hate (CCDH), Killer Apps, researchers confirmed that 8 out of 10 of the most popular AI chatbots would help a teenager plan a violent attack. The CCDH researchers tested two things: whether the chatbot would provide harmful information and whether it would encourage violence. In most cases, the answer was yes to both. Apart from Anthropic’s Claude, most chatbots also consistently failed to make links between the different prompts to shut down what was clearly a plan for real-world violence.
Through real-world events and rigorous research, we know that the dangers of AI chatbot use, especially by minors, are real. The EU has a solution at hand: regulating AI chatbots under the Digital Services Act (DSA). Given the risks presented by these new user-facing online services and the millions of European users interacting with chatbots every day, policymakers must enforce the DSA on AI chatbot providers.
What EU laws already cover
At the moment, the DSA only regulates generative AI if it is integrated into a Very Large Online Platform (VLOP) or a Very Large Online Search Engine (VLOSE). The European Commission is also evaluating whether to designate ChatGPT as a VLOSE, representing a first in regulating standalone chatbots.
What about the AI Act we’ve heard so much about? Well, this mechanism mainly regulates AI at the model level. There are transparency obligations for the general-purpose AI models underlying the chatbots we use, but the voluntary GPAI Code of Practice can only do so much in addressing the issues that stem from a user-facing platform.
At the same time, current discussions around an AI omnibus proposal risk undermining the EU’s response to real AI threats. Proposals to delay implementation and suspend obligations for high-risk AI systems would create legal uncertainty and send a clear signal to AI companies that compliance can wait. Even more concerning are suggested changes to Article 6 that would allow providers of high-risk AI systems to assess the risk level of their own technologies. Allowing companies to effectively grade their own homework goes against the AI Act’s core principles of transparency and accountability, and risks weakening one of the EU’s most important safeguards just as these technologies are rapidly spreading.
While current DSA enforcement and the AI Act are important steps, they ultimately fall short of providing the necessary accountability and transparency. The DSA is a strong piece of legislation, created to protect users from the harms of digital platforms. At present it covers only generative AI integrated into VLOPs and VLOSEs and does not treat standalone chatbots as intermediary services, but it could and should be enforced as such.
Tech companies can no longer claim to be neutral when their products generate content, shape behavior, and create real-world risks. Yet current rules leave them in a grey zone: the AI Act regulates models, while the DSA regulates platforms, and chatbots fall between the two. This allows companies to shift responsibility without facing equivalent obligations. Regulation should instead focus on what these services do. If a system stores user inputs, produces potentially harmful outputs, and reaches millions on a day-to-day basis, it should face the same duties as other online services — including risk assessment, mitigation, transparency, and cooperation with authorities. Without this shift, accountability will continue to fall behind how these technologies are actually used.
While enforcing the DSA on chatbots today would mark real progress, it would leave critical gaps in both accountability and transparency. Current obligations do not require chatbot providers to explain how their systems respond to escalating user behavior, why certain outputs are generated, or how risks to minors are being assessed.
What the EU should do to prevent AI-facilitated violence
There is a short-term and a long-term view of chatbot regulation under the DSA.
In the short term, AI chatbots can be treated as hosting providers under the DSA. As services that store users’ prompts at their request, AI chatbots neatly fit this definition. This means some key obligations already apply, including notice-and-action mechanisms and compliance with orders to act against illegal content. The largest chatbots could also be designated as VLOSEs. This would require them to conduct risk assessments and implement mitigation measures, offering a crucial window into how these platforms operate, how they are designed, and the logic behind their outputs.
This, however, wouldn’t be enough, because hosting provider obligations are limited and because VLOSE designation only covers a part of chatbot functionalities (the part that searches the Internet to answer your query). It would leave important gaps, resulting in the harms outlined above. The strongest protections in the DSA apply to online platforms, yet chatbots cannot currently qualify as online platforms because they do not publicly disseminate information.
That is why, in the long-term, the DSA needs targeted amendments to introduce a new definition and category specifically for online AI chatbots — one that does not depend on public dissemination of information. Under this new category, the general obligations on intermediary services and hosting providers would apply, alongside the online platform provisions most relevant to user safety, including risk assessments, minor protections, and transparency requirements.
Now, consider again the situation of a minor using an AI chatbot to plan a violent attack. Under Article 25 of the DSA on manipulative interface design, features like continuous recommendations of harmful follow-up prompts would have to be removed. Perhaps the teenager has a shorter conversation with the chatbot and does not explore horrific scenarios and strategies for violence.
Another option is Article 28 on the protection of minors, under which chatbots would have to comply with the Commission’s guidelines, giving specific consideration to minors using their platforms and making the corresponding adaptations. With rising suicide cases among underage users and growing concerns about the psychological impact of prolonged chatbot interaction, these protections are urgently needed.
Other DSA obligations would also open up chatbot systems to scrutiny: conducting risk assessments, implementing mitigation measures, undergoing yearly independent audits, and providing data access to vetted researchers. All of these provisions would bring much-needed transparency into the still-opaque realm of AI chatbots.
European users are increasingly turning to chatbots for information, connection, and conversation. The EU already has a strong legislative mechanism to protect users’ rights and limit their exposure to risk from AI use. So, policymakers: do not delay the AI Act, and enforce the DSA on AI chatbots before it becomes completely ineffective.