Looking to the Midterms: The State of Platform Policies on U.S. Political Speech

Daniel Kreiss, Erik Brooks / Oct 13, 2022

Erik Brooks is a Ph.D. student in the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill and a Graduate Research Fellow at the UNC Center for Information, Technology, and Public Life (CITAP); Daniel Kreiss is the Edgar Thomas Cato Distinguished Associate Professor in the Hussman School and a principal researcher at CITAP.

Introduction

As political mis- and disinformation have become increasingly widespread over the past decade, the moderation of political speech on social media platforms has emerged as a complex problem. As a result, there are increasingly fierce debates about moderation and platforms’ role in securing election integrity and security.

In 2020, in advance of the U.S. presidential election, researchers at the UNC Center for Information, Technology, and Public Life (CITAP) published a report titled “Enforcers of Truth: Social Media Platforms and Misinformation.” The report documented how major social media platforms—including Facebook and Instagram, Reddit, Snapchat, Twitter, and YouTube—use both internal teams and third-party companies to moderate four categories of political content related to democratic processes, manipulated media, tragic events, and health. We provided a side-by-side comparison of the platforms’ differing approaches to enforcing their policies on election misinformation and some of the considerable challenges they faced in doing so.

Since the 2020 publication of this report, however, some notable changes have taken place regarding these platforms’ content moderation policies, primarily concerning democratic processes and manipulated media. Here, we take stock of the state of play in October 2022 in advance of the U.S. midterm elections, with an eye towards reviewing these policies and noting where they have, and have not, been updated. We find:

  • Increased salience of discussions in platform policy documents regarding social media’s role in protecting elections and election integrity;
  • Reevaluations of previous thinking concerning ‘public interest’ exceptions related to potentially false, harmful, or misleading posts;
  • Detailed outlines and approaches for handling challenges related to the upcoming 2022 U.S. midterms by Meta, Twitter, and YouTube;
  • Updates to Meta political advertising and fact-checking policies;
  • The continuation of Twitter’s complete ban on political advertisements;
  • An updated, more nuanced discussion of Reddit’s unique political content moderation approach and challenges;
  • YouTube’s new election integrity policies in the wake of the 2020 election;
  • Few, but notable, changes to platforms’ manipulated media policies, primarily concerning troll farms, impersonation, and spam; and,
  • Little to no change concerning parody, satire, and humor as they relate to manipulated media.

Finally, while our 2020 report focused on Facebook and Instagram, Reddit, Snapchat, Twitter, and YouTube, this report adds analysis of TikTok’s policies as they relate to democratic processes and manipulated media, as well as the platform’s broad approach to the 2022 U.S. midterms.

Revisiting Democratic Processes and Manipulated Media in 2022

Democratic Processes

Of particular note since 2020 is the increased salience of discussions surrounding the extent to which, if at all, social media companies should be involved in protecting democratic processes and election integrity. Following the 2021 attempted coup at the U.S. Capitol, social media platforms such as Twitter and Facebook were again thrust into the limelight for their perceived culpability in the events of January 6, 2021, as well as those leading up to it.

Twitter

In 2020, “misleading content” and “election misinformation” violated Twitter’s policies but could be granted a public interest exception if posted by a world leader. Following the attack on the Capitol, Twitter reevaluated this “public interest framework,” stating that then-President Trump’s posts violated other policies, such as the “Glorification of Violence” policy. Twitter outlined its rationale for the subsequent permanent suspension of President Trump’s account, arguing that his tweets following the events at the Capitol were “likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so.” Although Twitter has not substantively changed its election integrity and misinformation policies, this episode reveals that the company is willing to break with its public interest exception when faced with extraordinary events involving violence.

Twitter has recently outlined its approach to the 2022 U.S. midterms. The company has committed to re-activating its Civic Integrity Policy, originally implemented in October 2021, which aims to tackle harmful, misleading information about elections and civic events. Under this policy, the company states that it may: 1) moderate claims about how to participate in a civic process, such as voting; 2) moderate misleading content intended to intimidate or dissuade users from participating in elections; and 3) moderate misleading claims intended to undermine public confidence in an election, including mis- or disinformation about the results or outcome of a given election.

Relatedly, Twitter has also brought back “prebunks” ahead of the 2022 midterms: proactive prompts on user timelines, and in search results for related terms, phrases, or hashtags, that aim to preempt the spread of misleading information or narratives on the platform. Lastly, Twitter has maintained its policy of globally prohibiting the promotion of political content, including ads of any type that relate to candidates, political parties, elected or government officials, regulations, policies, and ballot measures.

Meta

Facebook and Instagram, now Meta, took actions similar to Twitter’s in 2021 regarding the suspension of former President Trump’s accounts. In June 2021, Facebook’s Oversight Board upheld this decision. However, unlike Twitter’s permanent suspension of the former president, Meta opted for a two-year suspension starting January 7, 2021. The company contended that its “job is to make a decision in as proportionate, fair and transparent a way as possible, in keeping with the instruction given to us by the Oversight Board.” Ahead of the midterms, Meta stated that:

We are also committing to being more transparent about the decisions we make and how they impact our users. As well as our updated enforcement protocols, we are also publishing our strike system, so that people know what actions our systems will take if they violate our policies. And earlier this year, we launched a feature called ‘account status’, so people can see when content was removed, why, and what the penalty was.

In advance of the 2022 elections, Meta said it is actively taking measures to prevent the spread of misinformation and violent content and to provide context for people to make informed decisions. In a recently updated transparency statement regarding its approach to elections, Meta outlines a number of policies, including: 1) labeling misinformation; 2) removing violent content, including “fake accounts and misinformation that may contribute to the risk of imminent violence or harm”; and 3) employing a third-party fact-checking program, which includes over 80 partners working in over 60 languages. Additionally, as outlined in a 2022 factsheet on the company’s approach to the U.S. midterms, Meta reaffirmed its commitment to “preventing election and voter interference” through monetary investments in safety and security ($5 billion in 2021), fighting foreign and domestic deceptive actors, and demoting “Groups content from members who have broken our voter interference policies and other Community Standards.”

For political ads, Meta has consistently deployed third-party fact-checkers since 2021. Meta’s “Preparing for Elections” page details how this fact-checking works and how the company is increasing transparency around political advertising and Pages. These measures include:

1) Ad Library - all ads are stored and searchable for seven years (see the sketch after this list);

2) Verifying Political Advertisers - advertisers must undergo an authorization process;

3) Disclosure of Political and Issue Ads - inclusion of a “paid for by” disclaimer and additional information such as a Federal Election Commission ID or Tax ID number;

4) Transparency Around Pages and Accounts - listing the owner, location, etc. of Facebook pages;

5) Political Branded Content Live Display - the ability to see what presidential candidates are saying and which branded content they sponsor;

6) Controls to Turn Off Political Ads - the ability to silence political advertising content on one’s account; and,

7) Elections Ads Spending Tracker - the ability to see how much candidates spent on ads (current data reflects spending leading up to the 2020 election).
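The Ad Library is also exposed programmatically through Meta’s Ad Library API, so researchers can query stored political ads directly. Below is a minimal sketch of such a query; the endpoint and parameters follow Meta’s public documentation, but the API version segment and exact field names are illustrative and may differ from the current documentation, and a valid developer access token is required.

```python
# Hedged sketch (ours, not Meta's code): querying the Ad Library API for
# U.S. political and issue ads by keyword. The v15.0 version segment and
# the field names are assumptions that may differ from current docs.
import requests

API_URL = "https://graph.facebook.com/v15.0/ads_archive"

def search_political_ads(access_token: str, query: str) -> list:
    """Return U.S. political/issue ads in the Ad Library matching a keyword."""
    params = {
        "access_token": access_token,
        "search_terms": query,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["US"]',
        "fields": "id,page_name,ad_delivery_start_time,ad_creative_bodies",
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

# Usage (requires a valid Meta developer token):
# for ad in search_political_ads("YOUR_TOKEN", "voting"):
#     print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```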

Since January 2022, Meta has also stopped allowing advertisers to target users based on factors such as religious beliefs and race/ethnicity, in what we assess to be a problematic color-blind way. While this has been a matter of concern for political groups, and has benefited some candidates and causes over others, Graham Mudd, Meta’s vice president of product marketing, argued:

We want to better match people’s evolving expectations of how advertisers may reach them on our platform and address feedback from civil rights experts, policymakers and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available.

The company has also stressed the importance of moving beyond the English language in its content moderation efforts: “Univision and Telemundo added fact-checking tiplines on WhatsApp as part of its fact-checking program to give people tools to verify Spanish-language information.” Meta has likewise increased its contextual language sensitivity, and has started “showing election-related in-feed notifications in a second language other than the one from your app settings if we think the second language may be one you better understand. For example, if a person has their language set to English but is interacting with a majority of content in Spanish, then we will show the voting notifications in both English and Spanish.”
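The bilingual notification rule Meta describes is, at bottom, a majority-language heuristic. The following is a minimal sketch of that logic under our own assumptions; the function name, threshold, and inputs are hypothetical illustrations, not Meta’s implementation.

```python
# Minimal sketch (our illustration, not Meta's code) of the bilingual
# notification rule described above: if most of a user's recent
# interactions are in a language other than their app setting, show
# voting notifications in both languages.
from collections import Counter

def notification_languages(app_language, interaction_languages, threshold=0.5):
    """Return the language(s) a voting notification should be shown in."""
    if not interaction_languages:
        return [app_language]
    dominant, count = Counter(interaction_languages).most_common(1)[0]
    # A majority of interactions in a second language triggers a bilingual notice.
    if dominant != app_language and count / len(interaction_languages) > threshold:
        return [app_language, dominant]
    return [app_language]

# Example: English app setting, mostly Spanish interactions -> bilingual notice.
print(notification_languages("en", ["es", "es", "es", "en"]))  # ['en', 'es']
```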

Reddit

In 2020, we noted that “Reddit is the farthest from flat-out banning” content such as misinformation about elections. This broadly remains true because the majority of content moderation occurs at the subreddit level, and most subreddits are not explicitly concerned with politics and elections. However, there is a great deal of variety and nuance in how political subreddits approach election-related misinformation. Reddit’s broad content policies, which we outlined in 2020, have not changed significantly within the last two years.

However, it is worth clarifying Reddit’s unique situation when it comes to political content moderation. While Reddit remains the farthest from flat-out banning content related to democratic processes, its content policies, namely Rule 1, do cover harassment, bullying, and incitement to violence in broad, sweeping terms. As previously noted, however, the fact that each subreddit is responsible for its own moderation, or “self-policing,” makes Reddit a complicated case to assess as a single platform. The communities where politics and democratic processes are most relevant congregate in a comparatively small number of subreddits, such as r/Politics, r/News, r/Democrats, and r/Conservative, among others. As a result, a true account of how these issues are or are not moderated would require analyzing how these politically oriented subreddits choose to moderate content.

YouTube

YouTube has undergone significant changes within the last couple of years, beginning with a comprehensive outline of new Election Misinformation Policies. YouTube lists specific policies that appear to be in response to the 2020 election and its aftermath. For example, under Election Integrity, YouTube provides specific examples of content policy violations, such as: “Claims that a candidate only won a swing state in the U.S. 2020 presidential election due to voting machine glitches that changed votes” and “Claims that the U.S. 2020 presidential election was rigged or stolen,” among similar claims. Similarly, under the Candidate Eligibility section, YouTube lists “Claims that a candidate or sitting government official is not eligible to hold office based on false info about citizenship status requirements to hold office in that country/region” as a violation. This could be viewed as a response to the false claims about Vice President Kamala Harris’s citizenship status that circulated in the months leading up to the 2020 election. YouTube has also created a video (published in September 2021) clearly outlining its Election Misinformation policies.

In a recent public announcement, YouTube clarified its strategy for limiting the spread of harmful election misinformation. In an effort to highlight authoritative information related to the midterms, the platform will prominently recommend “content coming from authoritative national and local news sources like PBS NewsHour, The Wall Street Journal, Univision and local ABC, CBS and NBC affiliates,” while also identifying and limiting the spread of “borderline content” that may contain misinformation. In conjunction with these 2022 midterm actions, the platform will continue to remove and moderate content in line with its aforementioned election misinformation policies.

TikTok

TikTok is an increasingly popular and salient platform that was not included in our previous report. Having reached its billionth user in 2021 and on track to match YouTube in revenue by 2024, TikTok has become one of the most prominent platforms in the world, if not the most prominent. As such, political uses of the app have grown in recent years, leading TikTok to address issues of political and election misinformation. In its comprehensive Community Guidelines, the platform outlines what it considers to be Harmful Misinformation (which it defines as information that is inaccurate or false) under its Integrity and Authenticity policy section. The platform specifies that it will remove misinformation that causes “significant harm” and states that users may not post, upload, stream, or share content “undermining public trust in civic institutions and processes such as governments, elections, and scientific bodies.”

In preparation for the 2020 election, TikTok outlined “three new measures to combat misinformation, disinformation, and other content that may be designed to disrupt the 2020 election.” These included updating its misleading content policies, expanding its “fact-checking partnerships to help verify election-related misinformation,” and “adding an in-app reporting option for election misinformation.” These measures have stayed in place and appear set to continue for future elections, as outlined on TikTok’s election integrity page. TikTok has also collected and shared data on the number of accounts removed for policy violations and the rate at which content has been removed; this data is available in TikTok’s transparency reports (the latest information is from 2021).

TikTok’s Hateful Behavior policy section comprehensively lists “protected attributes” that may not be the target of attacks or slurs on the platform:

  • Race
  • Ethnicity
  • National origin
  • Religion
  • Caste
  • Sexual orientation
  • Sex
  • Gender
  • Gender identity
  • Serious disease
  • Disability
  • Immigration status

While TikTok allows commercial ads, the platform does not allow what it defines as political ads. Under its Political Ads policy, TikTok explicitly prohibits ads that promote, reference, or oppose: 1) a candidate for public office; 2) a current or former political leader, political party, or political organization; or 3) a stance (for or against) on a local, state, or federal issue of public importance, in order to influence a political decision or outcome. Despite this, however, the platform asserts that “Cause-based advertising or public service announcements from non-profit organizations or government agencies may be allowed, if not driven by partisan political motives.” How the platform would assess the underlying motives is unclear.

Manipulated Media

The ease with which people can edit, change, and manipulate visual and written media, including with new synthetic media tools and techniques, has raised concerns about the blurring of reality and truth. As we outlined in the 2020 report, manipulated media is typically held to a higher content moderation standard across many platforms, especially given that it can sometimes be checked against an original.

Meta

As in 2020, Meta’s Manipulated Media policy still does not “extend to content that is parody or satire, or is edited to omit words that were said or change the order of words that were said.” However, in preparation for the 2022 election, Meta notes it now has “advanced security operations to take down manipulation campaigns and identify emerging threats.” For example, Russia-backed troll farms reached over 140 million Americans in the months leading up to the 2020 election. To combat such threats during the 2022 election season, these “advanced security operations” include: 1) stopping influence operations by preventing inauthentic actors from manipulating public debate; 2) fighting abuse by preventing the creation of fake accounts and securing the accounts of elected officials, candidates, and their staff; and 3) building global partnerships and collaborations, which the company says help it head off emergent threats. Relatedly, as outlined in a news release from January 6, 2020, Meta has engaged in a number of partnerships to identify and combat deepfakes, a particularly challenging form of manipulated media to detect. Meta has continued its deepfake policies and removal efforts through the present.

Twitter

Twitter continues to prohibit users from posting in a “manner intended to artificially amplify or suppress information” under its Platform Manipulation and Spam policy. Twitter has a range of enforcement options for content that violates its rules. At the level of an individual tweet, these include: 1) labeling a tweet that may contain disputed or misleading information; 2) limiting a tweet’s visibility; 3) requiring tweet removal; and 4) hiding a violating tweet while awaiting its removal.

TikTok

TikTok’s “Integrity and Authenticity” policies, under the platform’s general Community Guidelines, note that users may not post “Digital Forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events and cause significant harm to the subject of the video, other persons, or society.” With input from its Content Advisory Council, TikTok states that the platform is always updating its guidelines. Prior to the 2020 election, on its Combating Misinformation and Election Interference on TikTok policy page, TikTok described its efforts “to protect users from things like shallow or deep fakes, so while this kind of content was broadly covered by our guidelines already, this update makes the policy clearer for our users.”

Reddit

Reddit continues to have broadly worded manipulated media policies, housed within its “Do Not Impersonate an Individual or Entity” policy. Like many other platforms, Reddit notes that parody and satire are permitted; however, context may be taken into consideration.

YouTube

YouTube has one of the more comprehensive policy pages when it comes to manipulated media. For example, the platform specifies multiple types of impersonation (channel impersonation and personal impersonation) on its Impersonation Policy page, which also outlines a strike system for violators of these policies.

Conclusion

This report has outlined how the major social media platforms in the U.S. have responded to the challenges to political life posed by election-related mis- and disinformation since 2020. Although these platforms have responded with updated and refined policies, researchers such as Samantha Lai argue that “...the same issues over the algorithmic amplification of disinformation and misinformation and microtargeted political ads will once again resurface. Much work remains to be done for the U.S. to rise to the challenge of protecting the integrity of our elections.”

Thankfully, there is somewhat greater accountability over platforms now. In recent years, a number of other organizations and institutions have produced insightful resources on social media platforms and elections, which we want to highlight here.

The Bipartisan Policy Center, in partnership with the Integrity Institute, has compiled a filterable Technology Platforms Election Database, which houses information from nearly 50 companies concerning their work, policies, and positions on political elections across the globe over the past 20 years. This extensive database offers great insight into how tech platforms have changed and adjusted election-related policies and positions over recent years, beyond just 2020-2022.

The NYU Ad Observatory, a project of NYU Cybersecurity for Democracy, lets users explore political advertising across Facebook and Instagram. Users can search by keywords, topics, sponsors, regions, and elections to see analyses of spending, messaging trends, and more.

The previously mentioned Integrity Institute is a social platform-focused institution whose mission is to “advance the theory and practice of protecting the social internet” and “understand the systemic causes of problems on the social internet and how to mitigate them.” The group offers services and insights for companies, policymakers, and academics alike to advise, collaborate, and analyze a variety of social media platform problems and questions, including those related to politics, government, and elections.

A program of the Stanford Cyber Policy Center, the Stanford Internet Observatory (SIO) is a cross-disciplinary research, teaching, and policy engagement program that studies the abuse of various information technologies, with a particular focus on social media. Some of its major projects include Platform Trust and Safety, Platform Takedown Reports analyzing information removed from platforms, and the 2020 Elections Oral History Project.

Relatedly, SIO, in tandem with the University of Washington Center for an Informed Public, assists in leading the Election Integrity Partnership (EIP). Founded in 2020 in response to emerging threats to election safety and integrity, the EIP is a non-partisan coalition that seeks to “empower the research community, election officials, government agencies, civil society organizations, social media platforms, and others to defend our elections against online behavior harmful to the democratic process” both during and after election cycles. Heading into the 2022 U.S. midterms, the EIP continues these efforts in analyzing election information, studying public conversation surrounding elections, and striving to reduce election confusion amidst a crowded information environment.

Taken together, these efforts, alongside ours at the UNC Center for Information, Technology, and Public Life, provide important transparency into the role that platforms have in shaping public life and political discourse. With emerging legislative efforts at the state and federal level, and judicial review underway in cases that directly affect platforms, it is crucial that we have a good understanding of what platforms are currently doing to protect electoral institutions, promote political speech, secure democratic inclusion, and ensure public participation in elections and governance.

Madhavi Reddi contributed background research for this report.
