Dutch Warning on Chatbots Echoes Trump Attacks on “Woke AI”
Jacob Mchangama, Jordi Calvet-Bademunt / Nov 14, 2025

The Trump administration is waging a public crusade against so-called “woke AI,” emphasizing the need for “neutral models” that engage in “truth seeking” instead of promoting certain (left-leaning) biases. We recently warned that such actions pose significant risks to free expression.
Many Europeans are likely to roll their eyes at such a policy, rightly pointing out how this approach could easily be weaponized to promote particular viewpoints. In other words, they reject Trump’s concerns that AI platforms must be reined in to avoid promoting a particular political agenda.
That is, until it comes to EU elections, apparently.
In October, the Dutch Data Protection Authority warned that AI chatbots are “unreliable and clearly biased” when offering voting advice. According to the regulator, several systems — including those developed by OpenAI, xAI, and Mistral — produced skewed recommendations favoring certain parties ahead of national elections.
These warnings about political bias in AI-generated content might be aimed at different outcomes, but they echo many of the Trump administration's underlying complaints — that AI platforms are not producing the type of political content that government officials think they should.
Dutch authorities also contend that these chatbots’ voting advice may violate the EU’s AI Act. This ambitious and sweeping legislation requires powerful AI models to mitigate “systemic risks,” including “negative effects on … society as a whole.” The Dutch regulator has shared its findings with the European Commission, the body that enforces these rules.
As researchers at The Future of Free Speech, an independent global think tank hosted at Vanderbilt University, we have long warned about the dangers of these vague systemic-risk obligations. We recently conducted a systematic analysis of these obligations as part of a new report examining AI-related legislation in six major economies: the EU, the US, China, Brazil, India, and South Korea.
We ranked the EU a close second to the US in AI policies that respect freedom of expression, but argued that its position could falter if it fails to address the serious concerns posed by policies such as the AI Act and the Digital Services Act (DSA), Europe’s online safety rulebook.
Our worries stem from the “systemic risk” provisions in both the AI Act and the DSA. As experts and the UN Special Rapporteur on Freedom of Expression, Irene Khan, have warned, such open-ended obligations can be misused. Indeed, under the DSA, which has been in force longer, early warning signs have already emerged: threats to shut down platforms in response to the riots that followed the police killing of a teenager of North African descent in France; the conflation of disinformation with illegal content in relation to the conflict in Gaza; and threats against X and Elon Musk for interviewing Trump during the 2024 US elections.
Several AI companies have joined a voluntary Code of Practice that attempts to clarify how these rules might apply, but key terms remain overly vague. “Radicalising” and “hateful” content, for instance, are listed as potential risks, concepts so ill-defined that they could encompass legitimate political expression on topics such as immigration, the Gaza conflict, gender identity, and other hot-button issues that divide Europeans. Language so open to interpretation is a recipe for politicized enforcement.

The Dutch authority’s rationale for curbing chatbots’ voting advice is equally problematic. Europeans routinely consume highly partisan newspapers, talk shows, and online content urging them to vote one way or another. Why should a chatbot, which aggregates and summarizes information and answers differently depending on user prompts, be held to a stricter neutrality standard than traditional media? And how are neutrality and objectivity even to be measured in a neutral and objective manner?
Instead of restricting political outputs, European regulators should focus on ensuring pluralism in the AI market itself. Citizens should have access to a diversity of models reflecting different epistemic and cultural assumptions, not a single homogenized system designed to satisfy official definitions of “balance.” That means promoting open-source models that reduce dependence on a handful of tech giants and allow developers to adapt systems transparently. A broad ecosystem of models offering diverse perspectives is a far more effective safeguard than government-mandated “neutrality” or “truth-seeking.” Pluralism diffuses control and reduces the risk that any single authority or company can weaponize AI outputs for political ends.
Of course, AI companies also bear responsibility. Our research shows that even as refusal rates on controversial topics have declined since 2024, models still block lawful but sensitive discussions on topics such as the right to abortion, transgender athletes' participation in women’s tournaments, Taiwan’s sovereignty, or the role of colonialism in inequality.
Content restrictions in companies’ usage policies, which serve as guardrails for what users can do with the models, remain vague and broad. However, the solution to imperfect models is not state-imposed, preemptive safetyism but user empowerment grounded in openness, competition, and accountability.
Calls for tighter EU regulations on the types of content chatbots can produce, especially political content, fail to consider how such policies could be misused. One need only imagine what a more illiberal future government could do with such vague powers to define things like “systemic risk.”
This is not hypothetical. In July, a Turkish court partially blocked access to xAI’s chatbot Grok, which the court found responsible for generating content “insulting” to both President Erdogan and modern Turkey’s founder, Kemal Ataturk, as well as religious values.
International human rights law protects not just the right to speak, but the right to receive information across borders, regardless of medium — a principle that matters even more as chatbots replace search engines and become our daily interlocutors with the world. When lawful AI-generated content is suppressed or filtered out in the name of risk mitigation, Europeans lose their ability to access diverse perspectives.
As the European Court of Human Rights has long affirmed, freedom of expression protects even information and ideas that “offend, shock, or disturb.” The EU should keep this in mind as it finalizes its enforcement playbook. The upcoming simplification discussions on the AI Act offer an opportunity to narrow or remove the systemic-risk provisions in both the AI Act and DSA — and to align Europe’s digital governance with the international human rights standards it helped create.
Otherwise, in its effort to protect democracy from algorithms, Europe may end up eroding the very freedom that sustains it.