Researchers Assess Origins of Public Concerns About AI in the 2024 US Elections
Prithvi Iyer / Sep 19, 2024

The World Economic Forum’s 2024 Global Risks Report identified AI-powered misinformation as the biggest short-term risk facing society. The use of AI to spread misinformation in recent elections in India and Brazil, as well as incidents in the United States (such as the use of audio deepfakes to dissuade voters in New Hampshire from going to the polls), indicates the emerging threat AI poses to democracy and election integrity.
Despite growing awareness of AI and its impact on democracy, there is little empirical evidence about how the general public feels about AI's role in elections. To address this gap, researchers Harry Yaojun Yan, Garrett Morrow, Kai-Cheng Yang, and John Wihbey surveyed a random sample of 1,000 Americans to gauge how the US public feels about AI in the context of the upcoming general election in November 2024. This research project – currently a pre-print awaiting peer review – is one of the first attempts to “pinpoint the origins of public concerns about AI in elections.” The findings provide a baseline for tracking how these perceptions evolve after election day and offer crucial insights to help policymakers craft effective, targeted regulations.
The survey found that four out of five respondents expressed some concern about AI-generated misinformation shaping the upcoming election, while only ~10% of the sample had no such concerns. The researchers examined whether demographic factors affected these findings and found that older participants and those with higher education levels tended to be more concerned about AI, though these differences were not statistically significant. The authors were intentionally vague about what “AI-driven misinformation” entails, given the general public’s lack of technical expertise. In an interview with Tech Policy Press, Northeastern Assistant Professor of Journalism and Media Innovation John Wihbey said, “Beyond this sort of broad cultural products like movies and TV, they (i.e., respondents) probably don't have a really strong grasp of what AI-driven misinformation would be.” Thus, the researchers focused on scoping general public perceptions of AI threats rather than parsing the specifics of different AI technologies and how each might shape election integrity efforts.
The researchers also wanted to understand whether interactions with AI products (like OpenAI’s ChatGPT) and consumption of AI-related news shape concerns about AI and the US election. Participants who reported familiarity with AI tools like DALL-E and ChatGPT were marginally less likely to express concerns about AI disrupting the election. Still, these findings were not statistically significant, suggesting that access to and awareness of AI products are not strong predictors of AI-related fears.
To gauge the impact of news consumption on concerns about AI and elections, the researchers asked respondents how frequently they consume AI-related news and incorporated that data into their regression model. They found that those who consumed more AI-related news expressed significantly greater concern about AI spreading misinformation in the election. Interestingly, these effects varied by news source: respondents who reported getting their news from television (51.1% of the sample) were more likely to fear AI misinformation than those who relied on other sources.
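The pre-print's exact model specification is not reproduced here, but a minimal sketch of this kind of regression might look like the following. All variable names, codings, and the synthetic stand-in data are assumptions for illustration only:

```python
# Illustrative sketch (not the paper's actual code or data): a logistic
# regression asking whether AI news consumption predicts concern about
# AI-driven misinformation, controlling for demographics and AI familiarity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000  # matches the survey's sample size

# Synthetic stand-in for the survey responses (all columns hypothetical).
df = pd.DataFrame({
    "concerned": rng.integers(0, 2, n),           # 1 = expressed concern about AI misinformation
    "ai_news_freq": rng.integers(0, 5, n),        # self-reported frequency of AI-related news
    "tv_news": rng.integers(0, 2, n),             # 1 = gets news primarily from television
    "age": rng.integers(18, 90, n),
    "education": rng.integers(1, 6, n),           # ordinal education level
    "ai_tool_familiarity": rng.integers(0, 5, n), # familiarity with tools like ChatGPT/DALL-E
})

# Fit the model and inspect coefficients and p-values.
model = smf.logit(
    "concerned ~ ai_news_freq + tv_news + age + education + ai_tool_familiarity",
    data=df,
).fit()
print(model.summary())
```

In an analysis of this shape, the pattern the researchers report would surface as a positive, statistically significant coefficient on AI news consumption (and television news in particular), while the demographic and AI-familiarity coefficients would fail to reach significance.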
Implications
This study shows that concerns about AI disrupting the upcoming US election stem from a combination of factors, including “worries about election integrity in general, fear of the disruptive potential of AI technology, and its sensationalized news coverage.” These findings point to the importance of media literacy campaigns that educate the general public about what AI is and how it can shape democratic processes. While media literacy campaigns aren't always effective, the authors argue that it is crucial to “reform the education system to incorporate more up-to-date materials about generative AI and its potential impact on society.”
This study also showed that education levels and awareness of AI products do not greatly affect public opinion. Rather, news consumption, and especially TV news coverage of AI, did the most to amplify fears about AI and the upcoming election. The authors call this the “mean AI syndrome,” a term echoing the “mean world syndrome” of media effects research, wherein TV news anchors often hype the threat of AI, leading the public to echo such doomsday scenarios. It is crucial that journalists and TV newsrooms covering AI focus on balanced reporting that refrains from unsubstantiated AI-hype stories.
Such balanced reporting will ensure an informed public, which will, in turn, lead to a more “nuanced understanding and thoughtful policy development regarding AI.” Only time will tell whether these concerns about AI disrupting elections are legitimate or overblown. Wihbey also spoke to Tech Policy Press about how to build on these findings, including possibly replicating the study after the US election and comparing findings pre- and post-election. On the question of broadening the scope of the research to include other countries, Wihbey said, “I think it would make an awful lot of sense to have some comparisons with other countries. We've basically got the Anglo-American sphere, but that's pretty parochial. I mean, I'd love to have continental Europe, Africa, Latin America, Asia. Obviously, cost is always an issue.”
Future research may refine the findings and implications of this study. That said, a core message from this paper is that investing in responsible journalism about AI and educating the public on how to spot AI-generated content will help ensure that those going to the polls are well-informed and able to distinguish between fact and fiction. That capacity is the bedrock of an informed citizenry and a functioning democracy.