The AI Election Panic: How Fear-Driven Policies Could Limit Free Expression
Jordi Calvet-Bademunt / Apr 2, 2025
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University.
As the US and EU shape their AI frameworks, they should consider lessons from recent experiences. The fear-driven narrative surrounding AI and the most recent elections, where AI-created content had limited impact, should caution policymakers against rushing ahead on laws that may unintentionally undermine democratic values. Policymakers crafting the forthcoming US Action Plan, state legislatures, and the authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that could force AI models to conform to specific and arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring researchers have access to data.
The AI Disinformation Narrative
Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, The Washington Post warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” The Associated Press shared similar concerns, warning that “AI could supercharge disinformation and disrupt EU elections.” Many other reputable organizations echoed these warnings, which had been circulating for years. Researchers found that news consumption appeared linked to voters’ heightened concerns about AI’s impact on elections.
Public concern matched the media warnings. In the United States, a September 2024 Pew Research Center survey found that 57% of adults across political divides were very concerned about AI-driven misinformation about elections. Similarly, 40% of European voters feared AI misuse during elections. European Commission Vice President Věra Jourová vividly described AI deepfakes of politicians as "an atomic bomb [that could] change the course of voter preferences.”
Several incidents involving AI-generated content did emerge. Up to 20,000 voters in New Hampshire received robocalls featuring an AI-generated voice mimicking President Biden that falsely discouraged them from voting. Former President Donald Trump circulated an AI-generated image of pop star Taylor Swift endorsing him, prompting Swift to respond on social media and correct the misinformation.
Yet research suggests the fear-driven narrative about AI in 2024 was not backed by evidence. The Alan Turing Institute found no significant evidence that AI altered the results of elections in the UK, France, the rest of Europe, or the US. Similarly, Sayash Kapoor and Arvind Narayanan of Princeton, analyzing all 78 cases in the WIRED AI Elections Project, concluded that the feared "wave" of AI-driven disinformation was far less extensive and impactful than anticipated. Half of the analyzed AI-generated content was not deceptive, and the deceptive content mostly reached audiences already predisposed to believe it.
This does not mean AI-generated misinformation had no effect at all. Though it did not notably change voting behaviors, AI-generated misinformation may have reinforced existing divides. In addition, these conclusions might not apply equally in every setting, particularly in local elections or different national contexts, and may require updating as technology evolves. Lack of data and transparency also posed significant challenges when assessing AI’s impact. Still, there is consensus that the fears expressed in 2024 were significantly overblown.
Researchers also found that AI was not uniquely suited to spreading misinformation; traditional methods like Photoshop and conventional video editing software remained cheap, widely accessible, and similarly effective. Importantly, AI’s limited electoral impact cannot be credited to AI-related laws: the EU AI Act was not yet in effect, and many US states and the federal government had no relevant regulations at the time.
In addition, traditional, non-AI misinformation, such as false statements from political figures and common conspiracy theories, played a significant role. Slovakia’s 2023 parliamentary elections underscore this point. Shortly before the vote, a viral deepfake audio clip emerged that purported to capture opposition leader Michal Šimečka discussing electoral fraud. Although quickly debunked, the clip initially sparked significant concern about its potential influence. The alarmist narrative surrounding the event, however, overlooked broader societal issues, such as distrust in institutions, pro-Russia sentiment, and politicians' role in amplifying disinformation, all of which complicate any analysis of the deepfake’s impact. The example highlights the importance of addressing these underlying societal factors when confronting misinformation.
Overreaching Laws in the US and Europe
By September 2024, nineteen US states had enacted laws specifically targeting the use of AI in political campaigns, and several others were considering similar measures. As of March 2025, three states (California, Minnesota, and Texas) had banned the creation or distribution of election-related deepfakes under certain circumstances, and three more (Maryland, Massachusetts, and New York) were considering similar bills. A federal judge in California blocked one such law on free speech grounds, criticizing it for acting “as a hammer instead of a scalpel” and unconstitutionally stifling political speech, including political satire. A similar law in Minnesota is currently facing judicial scrutiny.
Freedom of expression advocates have warned about the risks of these laws. Minnesota’s law, for example, criminalizes the dissemination of deepfakes in the lead-up to an election if done with the intent to “injure” a candidate or “influence” the outcome, terms that are both subjective and central to protected political speech. These laws frequently lack exceptions for satire or parody, two powerful tools for criticizing those in power. The political response to a parody deepfake of Kamala Harris illustrates how governments may use such laws to stifle legitimate expression: the video prompted California’s Governor to sign the state’s election deepfake ban. Importantly, these risks do not stem from one side of the political spectrum alone.
In Europe, the EU finalized the AI Act, initially proposed in 2021, which mandates watermarking and labeling of AI-generated content. The primary concern with the AI Act, however, lies in its broad obligation for providers of powerful AI models to mitigate systemic risks, including vague standards such as limiting negative impacts on “society as a whole.” As I explained in a prior article for Tech Policy Press, this is a problematic notion susceptible to stifling lawful speech.
It could, for instance, be used to restrict content that criticizes a government or that supports one side of the Israeli-Palestinian conflict. A comparable provision in the EU Digital Services Act has raised similar concerns. The final version of the EU Code of Practice, which will guide enforcement, is still being drafted. It is crucial that enforcement of the AI Act remain firmly committed to protecting freedom of expression and not become a tool for unjustly suppressing speech.
China provides a dystopian vision of what happens when a government weaponizes AI regulation. Chinese authorities review AI models to ensure they align with "core socialist values," inevitably leading to censorship of content that diverges from the Communist Party’s official narratives, as is evident in AI platforms like DeepSeek.
We should remember that the fundamental right to freedom of expression protects the right to seek, receive, and impart information through any media, including AI. This protection applies not only to ideas and information that are welcomed or seen as harmless, but also to those that may offend, shock, or disturb. This protection is essential for maintaining the pluralism, tolerance, and open-mindedness needed for a democratic society.
A Smarter Way Forward
The forthcoming US AI Action Plan should be guided by available evidence and refrain from promoting bans on political deepfakes. Similarly, state-level legislation with comparable provisions should be repealed or revised. Less restrictive measures, such as labeling and watermarking, may offer an alternative, but they can still raise First Amendment concerns. Moreover, their effectiveness is questionable, as malicious actors can circumvent these safeguards.
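To see why circumvention is so easy, consider labels stored as file metadata. The Python sketch below, which assumes the Pillow imaging library, writes a hypothetical "ai_generated" disclosure flag into a PNG's metadata and then shows that simply re-encoding the image silently discards it. The label name and workflow are illustrative assumptions for this sketch, not any real labeling standard such as C2PA.

```python
# Minimal sketch: metadata-based AI labels survive only until re-encoding.
# Assumes the Pillow library (pip install Pillow); the "ai_generated" key
# is a hypothetical label used for illustration, not an actual standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. "Label" an image by storing a disclosure flag in a PNG text chunk.
image = Image.new("RGB", (64, 64), color="gray")  # stand-in for AI output
label = PngInfo()
label.add_text("ai_generated", "true")
image.save("labeled.png", pnginfo=label)

# 2. Anyone who inspects the metadata can read the label.
print(Image.open("labeled.png").text)   # {'ai_generated': 'true'}

# 3. Circumvention: re-encoding the pixels drops the metadata entirely.
Image.open("labeled.png").save("stripped.png")
print(Image.open("stripped.png").text)  # {}
```

More robust schemes embed the watermark in the pixels themselves, but researchers have repeatedly shown that cropping, rescaling, or adding noise can degrade those too, which is why watermarking is best treated as one signal among several rather than a guarantee.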
In the EU, the European Commission must ensure that enforcement of the AI Act robustly safeguards freedom of expression. The obligation to mitigate systemic risks should not be interpreted as requiring models to align with specific viewpoints, and must allow space for controversial or dissenting content. This principle should be clearly articulated in the Code of Practice.
More broadly, structural solutions are needed. First, policymakers and companies must ensure that researchers have access to high-quality, reliable data so they can conduct more comprehensive studies of the impact of AI-generated content. Several stakeholders have highlighted the limits imposed by currently restricted data access, including on research into how AI may affect specific groups, such as women. We cannot respond effectively without a clear understanding of the landscape, the risks, and the opportunities. In this regard, transparency provisions, such as those in the EU’s Digital Services Act, are a welcome step.
Equally important is the promotion of AI and media literacy. Rather than stoking public fear, we need educational campaigns that empower individuals with knowledge. Drawing on existing research, the Alan Turing Institute advocates for establishing digital literacy and critical thinking programs, which should be made mandatory in primary and secondary schools and promoted among adults. UNESCO has made similar recommendations.
Governments, companies, and civil society organizations should work together to equip the public with the skills to engage critically with content. Non-restrictive measures to counter disinformation, such as centralized and decentralized fact-checking, can also help users make informed judgments. The effectiveness of the two approaches, and how they might complement each other, remains the subject of ongoing debate and should be carefully weighed. Crucially, we shouldn’t expect AI to fix deep-rooted problems that go beyond technology, like political polarization, misleading claims by politicians and the media, or voter disenfranchisement.
Finally, we should remember that existing legal tools, such as defamation and fraud laws, remain available and can be used where appropriate.
Ultimately, effective regulation must be evidence-based and clearly articulated. Otherwise, policymakers risk undermining freedom of expression, creativity, and satire—vital components of a healthy democratic discourse.