Content Concerns: Navigating Declining Newsfeed Quality in the 2024 Election

Rehan Mirza / Nov 5, 2024

In the months leading up to the 2020 US Presidential election, social media became a battleground for disinformation and misinformation at an unprecedented scale. A study of Facebook content in the two months before the election found that 23 percent of political image posts on the platform contained misinformation. Following election day, "Stop the Steal," a group falsely casting doubt on the election's legitimacy, grew to 360,000 members in less than 24 hours before being taken down.

As Jeff Horwitz highlights in Broken Code, platform design choices enabled these concerning outcomes. A tiny fraction of pages and groups drove 94 percent of engagement on political image posts. "Toxic virality" was propelled by a handful of "super-sharers" who sent thousands of invites daily and by group organizers who manipulated content to evade automated detection systems. Facebook's Civic Integrity team stepped in.

The platform set limits on "bulk invites" and imposed a three-week hold before recommending new groups. When "Stop the Steal" incited violence and hate speech against election workers, the platform introduced 64 unprecedented "Break the Glass" measures designed to "slam the brakes" on groups that could spread misinformation and extremism. These measures reduced violence and hate speech to pre-election levels within a month. This time around, Facebook and other platforms have made it clear they are stepping back, potentially leaving misinformation and toxic content to spread unchecked.

Platforms step back from moderation

Layoffs at social media companies have hit trust and safety teams hard. Despite its pivotal role in the post-election crisis, Facebook's Civic Integrity team was dissolved following the elections, and its members were reassigned to disparate teams, leading many to leave the company. As Nora Benavidez at Free Press reports, between 2022 and 2023, Meta, X (formerly Twitter), and YouTube laid off 40,000 employees, including significant cuts to trust and safety staff. Despite recovering cash flows and profits in 2023, layoffs continue in 2024 as investors demand cost discipline. Integrity roles are easy targets because they are not seen as profit centers and because of perceived trade-offs between integrity and engagement.

Platforms have also shifted to a more "hands-off" approach to deciding what content appears on users' feeds. Since adopting free speech as a core platform value, X suspends significantly fewer users for hate speech violations than it did before Elon Musk's takeover. Having cut 80 percent of its trust and safety engineers, the platform reportedly relies heavily on its Community Notes feature, effectively delegating moderation responsibility to its users. Meta now asks users to adjust their settings to dial down exposure to content flagged as false by fact-checkers. The platform has also stepped back from recommending political and news content in the feed. These decisions, framed as giving users more choice, place the onus on users to filter content and reduce platform accountability.

The platforms are stepping back just as AI presents new challenges to information integrity. In 2020, misinformation was often "highly repetitive and simple." Generative AI introduces variety to boost virality. Armies of bots can spam likes and positive reactions to create a false sense of social proof. Generative AI can also produce sophisticated deepfakes – hyper-realistic images, audio, and videos mimicking real people or events. Audio deepfakes cost as little as $5 to produce and are particularly hard to detect. Moreover, extensive coverage gives the impression that "AI is everywhere." This can be exploited through the "liar's dividend": real content is falsely dismissed as AI-generated, further eroding trust in information.

How this could affect the election

Following claims of a "stolen election" in 2020, disinformation-fueled attacks on the integrity of election officials have become common, aimed at undermining public trust in the electoral process. Efforts to challenge the integrity of this year's election highlight how online narratives can translate into real-world mobilization. For instance, a coalition of 70 organizations, formed around the false claim that non-citizens are voting in US elections in large numbers, asserts that "millions of illegal aliens and noncitizens may be able to vote in November."

This incident is part of a larger movement against election certification. The New York Times reports that election officials have faced increasing pressure from local residents influenced by conspiracy theories and election denialism. In some cases, these officials, citing "patriotic duty," have themselves adopted these beliefs. Furthermore, millions of dollars in funding have poured into "election integrity" groups, which strategically place allies in local election offices to refuse certification.

Voter disenfranchisement is a related concern. The Brennan Center for Justice warns that AI could exacerbate disenfranchisement efforts by creating false narratives or emergencies to dissuade people from voting, especially in minority communities. For instance, bad actors could deploy AI-generated audio or video deepfakes to fabricate false emergencies at vote centers, falsely portray officials as preventing eligible voters from casting ballots, or deploy chatbots to create apathy about voting. Disinformation campaigns designed to undermine trust in elections targeted minority communities in 2020, including Latino and Indian American communities. Last year's Slovakian national election, in which an audio deepfake falsely portrayed a progressive candidate discussing election manipulation tactics, is a high-profile example of how AI could undermine trust and disenfranchise voters.

Incitement to violence is also a serious issue. Meta's Oversight Board has documented several cases of political figures spreading inflammatory messages during elections and noted that the platform was slow to act. Among its recommendations, the Board comments that "during a period of a heightened risk of violence, [messages which encourage or legitimize violence] should not be protected under the guise of the right to protest."

While these threats to election integrity have always existed, they risk becoming significantly worse due to recent cuts to trust and safety teams and the rollback of content moderation efforts by social media platforms.

What can the public do now?

Emerging evidence shows that declining information quality has consequences for audience numbers, which could, in the long term, give platforms an incentive to invest in integrity. The USC Neely Social Media Index shows that 30 percent of adults saw content they considered "bad for the world" on social media, primarily on X and Facebook. A Gartner survey predicts that by 2025, declining social media quality will lead 50 percent of consumers to "abandon or significantly limit" their use.

Consequently, the long-term economic costs to platforms of declining information quality are increasingly recognized. A 2019 internal Facebook study found that disabling clickbait increased long-term user activity. Stanford researchers found that downranking posts expressing "anti-democratic attitudes" reduced partisan animosity while preserving engagement. Researchers across leading universities and tech companies suggest balancing traditional engagement metrics with quality metrics. Furthermore, AI could enable integrity work at a lower cost – ex-Meta integrity employees are developing an AI-powered content moderation model, promising "better memory, consistent decision-making, and enhanced reasoning."

Promising pathways to improve information quality exist. These measures point to a changing economic calculus, one in which user safety and commercial viability intersect. However, these solutions will take time to implement, and with the election at hand, the need for effective action is urgent.

Given this urgency, the Shorenstein Center has produced guidance to help voters stay informed as they navigate social media this election. One way to take action immediately is to update your user settings to reduce the presence of low-quality information in your feed. By default, Facebook and Instagram do not minimize false content (as rated by third-party fact-checkers), clickbait and spam, or content Meta defines as "unoriginal." Our guide provides step-by-step instructions to change this. Nevertheless, it is wise to critically evaluate content across all platforms and fact-check against a variety of sources.

Given the movements discussed above to disenfranchise voters and undermine trust in the election, it is more important than ever to make a clear voting plan.

Finally, to avoid election-related disinformation, we recommend relying on your state and local election offices for election information; they will provide the most accurate information and protect you from efforts to mislead you on social media. The guide shows you where to find these official sources.

Authors

Rehan Mirza
Rehan Mirza is a researcher and budding writer focused on technology policy and governance to promote information integrity. As a Research Associate for the Democracy & Internet Governance Initiative at Harvard’s Shorenstein Center, he contributed to studies on digital platform governance and its im...
