Tracking Social Media Platforms’ Fluctuating Approaches to US Elections

Elise Silva, Beth Schwanke / Nov 13, 2024

Elise Silva is the director of policy research at the University of Pittsburgh’s Institute for Cyber Law, Policy, and Security. Beth Schwanke is the executive director.

Social media’s impact on elections is undeniable, from shaping news consumption to publishing political ads to platforming or de-platforming conspiracy theories. In the wake of the 2024 US election, researchers, journalists, and the public will no doubt spend months unpacking how platforms shaped the information environment surrounding the election.

Yet, platforms’ approaches to election-related content are nearly impossible to follow given layers of decreasing transparency and vacillating approaches shaped by social and political events, even as those very approaches, in turn, reshape the social and political landscape. Recent work, including that by the Center for Democracy and Technology on social media platforms’ political advertising policies, has sought to document and understand these policies in the context of past and current election information ecosystems. Other work, including from the Institute for Strategic Dialogue (ISD), provides valuable contextualized analysis assessing platforms’ preparedness for the 2024 US elections.

To this growing body of work, we add our contribution to documenting social media platforms’ election-related actions: a tool that chronicles selected platforms’ evolving policies affecting the information environment around US elections and campaigns from 2016 to the present. We created the Social Media Election Policy Tracker to support those interested in understanding information landscapes, elections, and the complex relationship between technology companies, platforms, and democratic processes. Tracking the evolution of social media platforms’ election policies over time offers a valuable lens for understanding how these platforms’ actions align with major news, social shifts, and political events.

Our methods for creating the Social Media Election Policy Tracker rely on desk research and information gathering on the open web: recording and organizing searchable, findable documentation in company blogs, press releases, news coverage, and executives’ congressional testimonies regarding what we consider platform election policies, or statements on matters that are election-policy adjacent. Our work seeks to track the ebbs and flows of these decisions over time. We self-consciously limit the scope of this project to policies we could find published by the platforms themselves, or through verified news sources’ tracking of policies via comments, analysis, or investigation. This approach excludes research that uncovered unacknowledged approaches or inaction by platforms; we focus instead on the story available to the broader public.

We define election policy broadly, including policies aimed at promoting voter registration, providing accurate election information, combating foreign election interference, and mitigating the spread of false or misleading information about elections, as well as policies surrounding political advertising. Our inclusion criteria extend beyond formal policies to internal changes made by social media companies that directly impact election information environments, such as adjustments to election integrity team staffing and the rollout of specialized tools like AI-generation labels for synthetic election-related content. We do not capture broader policies, such as those on violent extremism and hate speech, that, while unfortunately relevant to US elections, are not currently framed by platforms in election-specific ways. We also sought to represent social media platforms that are diverse in size, notoriety and impact, and user demographics. Those represented include YouTube, Facebook, X/Twitter, TikTok, Gab, and Parler.

We consider the evolution of social media platform election policies across a range of periods:

1. The 2016 election and its aftermath:

We document a narrative that opens with revelations of foreign influence attempts in the 2016 Presidential campaign, spurring Congressional attention to Twitter and Facebook in particular. In response, platforms focused initially on what they call ‘inauthentic behavior,’ cracking down on (suspected foreign) bots. YouTube received less national scrutiny during this time but endeavored to elevate authoritative content through information panels and changes to its algorithm and recommendation feature. In the lead-up to the 2018 midterms, platforms launched various features aimed at increasing transparency and verification around political ads, while Twitter and TikTok (which became available in the US in August 2018) banned political ads entirely in 2019. Third-party fact-checking programs became commonplace in this period, although they quickly became a source of political division, with some falsely claiming that right-wing views were disproportionately censored.

2. 2020, learning from past mistakes:

As the 2020 Presidential election approached, platforms rolled out more design features (information panels, “sponsored by” labels on political advertising, etc.), some of which had been piloted during the 2018 midterms, that redirected users to vetted information about candidates and election processes. Disallowed election content focused on voter interference and intimidation, such as misleading information about the time or place of elections. Other election content that ran afoul of non-election-specific platform community standards or rules was largely removed or limited; for example, the decision to temporarily restrict the New York Post’s Hunter Biden laptop story fell under platforms’ policies on misinformation (Meta) or hacked materials (Twitter). Meta often points to its “newsworthiness allowance” (rolled out in October 2016 in relation to a decision on whether to allow the Vietnam War era “Napalm Girl” photo) to permit content by politicians that otherwise violates its Community Standards. However, in 2020, both Twitter and Facebook added labels to many election-related posts from President Donald Trump to direct users to authoritative information about the electoral process or alert readers that the information was misleading. The platforms also rolled out educational features related to elections and voting, such as Meta’s Voter Information Center, Twitter’s Election Center, and YouTube’s YouChoose 2020 campaign.

3. The 2020 election and its immediate aftermath:

This period marked some of the platforms’ most proactive efforts to address election interference. Before the election, Google announced it would cease running political ads on all its platforms (including YouTube) after polls closed on Election Day; Meta followed suit with a decision not to run political ads for one week after the election and to label posts claiming premature electoral victory. Anticipating the possibility of a disputed election, Twitter set forth advance standards by which it would consider the election decided; early victory claims were to be labeled as premature. In the aftermath of the election, YouTube committed to removing any videos endorsing conspiracies of a stolen election beginning on December 9, the safe harbor deadline. Despite these changes, social media platforms were hotbeds of false information regarding the 2020 elections, including “Stop the Steal” voter fraud conspiracy theories.

Online discord spilled into real-life violence on January 6, 2021, during the US Capitol riots. The events of January 6th prompted uncommonly strong responses across the main platforms. In the immediate aftermath of the attack on the Capitol, Twitter and Meta suspended President Trump’s accounts, a departure from Meta’s traditional newsworthiness exception for politicians. Google suspended all political ads from January 14 to February 22 in an effort to slow the spread of misinformation.

4. 2022 midterms: 

The 2022 election saw social media platforms largely maintain policies adopted in the wake of January 6 (excluding Twitter, which stopped removing content about the Big Lie in January 2022), though the application of policies designating ‘Big Lie’ claims as disallowed content was spotty. Former President Trump remained suspended from Twitter and Meta during the midterms.

5. Post-2022 midterms mark a more hands-off approach:

Following the 2022 elections, Meta quietly changed its policy to allow ads claiming that past elections were stolen or rigged; in January 2023, the company restored former President Trump to the platform in accordance with the timeline recommended by the quasi-independent Meta Oversight Board. At launch, Threads was exempt from Meta’s regular fact-checking programs and did not feature verification labels. Meta’s other platforms introduced a feature giving users the power to turn off fact-checking for their feeds. In June 2023, YouTube also reversed its previous ban on “content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

6. 2023 is marked by large-scale layoffs in Big Tech:

In a March 2023 letter to employees, Mark Zuckerberg stated that “we’re focusing on returning to a more optimal ratio of engineers to other roles.” Although public data on industry staffing for election integrity and trust & safety work is hard to come by, it is apparent that trust & safety roles were hit hard across platforms.

Twitter (rebranded X), acquired by Elon Musk in October 2022, saw dramatic change. For instance, its paid Blue Check program upended the platform’s previous account verification system. The company also disbanded its external advisory Trust and Safety Council and allowed previously banned users to return to the platform. X also reversed a previous ban on political advertising (instituting an opaque certification process), removed state media labels, cut an estimated 80% of staff, and shifted toward a content moderation policy that “de-amplified” violative content rather than removing it. Content quality declined accordingly: in September 2023, European Commission Vice President Věra Jourová labeled X “the platform with the largest ratio of mis- or disinformation posts.”

7. Lead up to the 2024 presidential election:

As companies navigated unprecedented civic engagement online, they also faced backlash for their decisions, caught between accusations of censorship, bias, and political motivation on the one hand and concerns that they had stepped back from earlier useful, if imperfect, efforts on the other. With the growing specter of AI and its potentially harmful impact on election information and related content, Meta and Google introduced labeling requirements for AI-manipulated media. At the same time, X’s chatbot, Grok, spread election-related misinformation after President Biden ended his re-election bid, and Meta’s chatbot responded incorrectly about the attempted assassination of former President Trump, exposing a limitation of LLM-driven technology’s ability to respond to rapidly developing events. Meta, notably, lifted remaining restrictions on Trump’s Facebook and Instagram accounts prior to the election; discontinued its CrowdTangle data tool, making it harder for researchers to study the platform; and ended temporary content demotion pending fact-checks. Many social media companies (Meta, YouTube, X) banned Russian-affiliated state network accounts for their attempts to influence the 2024 election.

8. Post-2024 election period:

Given the widespread voter fraud claims that plagued the 2020 elections, there was understandably much commentary about how social media companies would react to a similar situation in 2024. Such a situation, however, did not come to pass. Indeed, as the Center for an Informed Public illustrated, claims of fraud on Election Day dropped dramatically as it became clear President-elect Trump would win.

In the week since the election, we have already seen the outcome affecting platforms’ policies. For example, in early November, Meta extended its ban on new political ads for several days after polls closed, longer than in past cycles. But after the election outcome became clear, it updated the ban to last only until November 7. President-elect Trump’s victory and the choices made by his administration and a new Republican-led Congress will no doubt continue to impact platform choices in the years to come.

A Work in Progress: The Tracker's Gaps Are Telling

In developing this Tracker, we were able to visualize and reflect on broader trends in platform policies over the last eight years. The trajectory is clear: platforms have moved from a reactive policy approach to a proactive one, and now back to a less active posture, particularly among some of the major actors in the social media election world, such as Meta, YouTube, and X.

Our Tracker is imperfect. One of the challenges we faced in developing it was the general lack of platform transparency. It is quite difficult to find information about many social media platforms’ election policies; often, important details are buried in corporate reports, blog posts, fleeting media coverage, or social media posts from executives. Indeed, most of what we include in the Tracker comes from the platforms themselves and so is largely shaped by the image they want to convey. Still, there is value in presenting, in a centralized place, what is findable exclusively through popular and open media sources and, therefore, accessible to the public.

While this Tracker necessarily cannot be exhaustive, we think these gaps tell a story in and of themselves. They highlight the need for greater platform accountability and transparency around election-related policies. The gaps also show us where more critical attention is needed to understand how social media companies are (or are not) prioritizing this critical information environment.

Gaps, in this sense, are informative for many reasons. For example, missing information about previously documented policies could indicate a reversal or quiet removal of a policy without public awareness. Policies may sound impactful, but without appropriate staffing, resources, and authority, they may be little more than (online) paper responding to external pressures. Our struggle to find historical information through our internet searches underscores the critical need for archiving all of a platform’s social media policy documents in a centralized, accessible place. In the absence of this, the Tracker serves as a historical record, a living archive to track these changes over time.

While some gaps in coverage are inevitable and indeed reveal the lack of transparency, it is important to note that collecting documents for the Tracker, both historical and ongoing, is a developing project. We are continually researching and adding new records, and we welcome contributions from Tech Policy Press readers and the broader public to help us expand and improve the Tracker.

We invite researchers, journalists, and members of civil society to contribute to the Social Media Election Policy Tracker. If you have information about platforms’ election policies that we have missed, please contact us. Your contributions will help us to make the Tracker a more comprehensive and valuable resource. As we move beyond the 2024 election cycle, our hope is that the Tracker will continue to grow and adapt, reflecting the ongoing dialogue about platform responsibility and the future of a healthy online civic information environment.

Authors

Elise Silva
Elise Silva, PhD, MLIS, is the Director of Policy Research at the University of Pittsburgh's Institute for Cyber Law, Policy, and Security, where she studies information ecosystems and conducts tech policy-related research.
Beth Schwanke
Beth Schwanke, JD, is the Executive Director of the University of Pittsburgh's Institute for Cyber Law, Policy, and Security.
