Five Myths About How AI Will Affect 2024 Elections

Irene Solaiman / Jun 20, 2024

AI is a threat to 2024 elections. Just not in the way you may think, writes Irene Solaiman, head of global policy at Hugging Face.

We’re already a few months into the most important election year around the globe, with billions of people from at least 64 countries and the European Union heading to the polls. Public concern has fixated on a relatively new and rapidly advancing threat: generative AI.

There are plenty of reasons to be afraid.

We’ve already seen the spread of AI-generated false images of former President Donald Trump supposedly getting arrested by New York City Police and an AI voice clone of President Joe Biden falsely telling New Hampshire constituents not to vote. People are taking notice. A spring 2023 global survey of 29 countries shows 60% of voters worried about AI’s potential for exacerbating disinformation. A July 2023 poll found 73% of US adults are concerned about AI creating deceptive political ads. And the solutions we’ve been promised aren’t working: AI text detectors have not only failed, but have produced disastrous false positives, and image and video detection isn’t much better.

This all sounds like a recipe for disaster. But with the benefit of my experience leading AI policy at OpenAI and now Hugging Face, as well as some years working on US election security, I’m confident we already have the tools to address these problems–if we can only dispel five key myths holding us back from diagnosing the problem.

Myth 1: AI introduces wholly new risks.

We’ve been warned about eerily realistic deepfakes for years, and we’ve been plagued by disinformation for longer. AI lowers the barrier for anyone with a stable internet connection to generate believable fake content, which is concerning in layers: the threat level differs by type of content, such as fake images or audio, and the impact depends on distribution and audience reach. AI-generated text may not even be cheaper to produce than human-written falsehoods; in other words, people can already make up facts in text for cheap and for fun. Image manipulation has long been possible through tools such as Photoshop. The most pressing threats come from AI-generated content with fewer safeguards: realistic audio and video.

Reality: AI builds on existing risks.

The predominant risks AI poses for democratic integrity are obstruction of voting procedures and infrastructure, and erosion of trust in information and democratic institutions. The former covers infrastructure such as polling locations and voter databases, with risks including cyberattacks, false information about how and where to vote, and interference with official records. The latter includes disinformation about candidates, processes, and results.

That said, the influence of AI on voter opinion can vary; many voters make up their minds long before election day. Disparate performance in AI systems trained predominantly on high-resource languages such as English may lower the quality of generated content in underrepresented regions. But AI content can still interfere with infrastructure, and the mere narrative that AI is influential can contribute to eroded trust.

Myth 2: Watermarking is the solution.

While AI researchers are working overtime on provenance techniques such as AI labeling, technical tools such as content filters for language models and watermarking for images are not ready for prime time. The performance of tools and techniques differs by the type of content, and ensuring watermarks are tamperproof is an ongoing challenge.

Reality: Mitigation is multifaceted.

Since technical safeguards are not silver bullets, model licenses and terms of service must be enforced, with concrete legal repercussions for misuse. Mitigation must also get to the root: even if labeling and watermarking worked perfectly in the near term, they would not dissuade voters who are committed to believing conspiracy theories. Given how, in the US in 2016, some voters saw parallels between a candidate and Satanic figures, watermarking images such as the viral Shrimp Jesus would do little to detract from their influence. The seed of belief substantiated by false content is actualized when it sprouts a narrative that affects views on candidate platforms.

Myth 3: Open-source is more dangerous.

In a post-ChatGPT era, the debate remains active as to whether AI should be “open-source,” colloquially meaning anyone can download and run an AI model, or kept “closed,” meaning companies host the AI model and provide access to approved developers. There is no definitive evidence that more open models notably increase election risks. The infamous robocall of President Biden’s cloned voice was not from an open model.

Reality: Lowering the barrier for harmful electoral content is primarily dependent on access.

Access can be related to, but is not dependent on, a system’s openness. A generative system capable of high-quality, believable content is likely expensive and requires more computing infrastructure than the average person has available. Existing systems, including open-source models, may not be best suited for influence operations; well-resourced actors and state actors may build their own tailored systems explicitly for election interference.

Myth 4: All attackers need is AI.

Anyone attempting to influence voters, from malicious actors and well-resourced state actors to a teenager with an odd sense of humor, may not succeed in reaching audiences without the key piece: powerful distribution.

Reality: Distribution is key.

The harm from a generated representation of President Biden’s voice exists in the first instance because the generation was nonconsensual, but it is exacerbated when the content is not only distributed but targeted at vulnerable groups, as when it was deployed in a robocall to voters. These targeted campaigns require planning beyond content generation; in the case of the robocall, it required access to a given region’s voter phone numbers.

Myth 5: It’s too late for action.

Several months into 2024, elections around the world are already in progress. Expert claims that we don’t have time for legislative changes are correct, but action can be lean and targeted.

Reality: All of us can make a big impact.

We’re in crunch time and everyone is responsible. The main actors needed to protect election security in the AI landscape are: electoral bodies and administrators; electoral campaigns; content distribution platforms and social media; journalists and news media; AI companies, developers, and deployers; and voters. That likely includes you.

There is still time for official election bodies, from administrators to campaigns, to coordinate on shepherding voters to reliable information, following the lead of the US National Association of Secretaries of State (NASS) #TrustedInfo2024 initiative. Social and news media teams and channels should be properly equipped to respond to elections globally, not only a few in high-resource regions. Media and AI organizations are investing, and should continue to invest, in the capacity to verify content.

While responsibility should not rest heavily on voters, voters must invest in their own literacy and verify information to the best of their ability before acting. Voters should also realize that AI’s “hallucination” problem extends to election information, so they should simply not use AI as a trusted source of voting information.

We all have a part to play in making our democracy more resilient to threats, regardless of how they are generated.

Sincere thank you to Brigitte Tousignant and Justin Hendrix for their thoughtful comments and advice.
