Perspective

The Conservative Political Playbook Driving the FTC Platform Censorship Inquiry

Lisa Macpherson / May 21, 2025

Lisa Macpherson is policy director at Public Knowledge.

Image caption: Federal Trade Commission Chairman Andrew Ferguson testifies before the House Appropriations Committee Subcommittee on Financial Services and General Government in the Rayburn House Office Building, Washington, DC, May 15, 2025. (Photo by Kevin Dietsch/Getty Images)

The Federal Trade Commission recently posted a request for public comment on the topic of “technology platform censorship.” The FTC’s request consists of a series of questions based on the premise that platforms like Facebook, YouTube, and X (previously Twitter) disproportionately “deny or degrade” users’ access to services based on the content of the users’ speech or their affiliations. The request is rooted in a years-long, escalating series of claims that digital platforms systematically censor conservative political speech, and that information researchers, nonprofit advocacy groups, advertisers and their agencies, the media, and various branches of the federal government under the Biden administration conspired with platforms to curtail free expression. Most recently, this conspiratorial campaign has manifested as a cross-agency effort among the FTC, Federal Communications Commission, and the Department of Justice to “smash” what they call “the censorship cartel.”

Most people not closely following inside-the-Beltway politics may wonder where this “censorship cartel” of technology platforms, advertisers, and academic researchers came from, and why it needs smashing at the highest levels of government. Here, we’ll review the political history of the FTC’s request, dismantle the notion of the “censorship cartel,” and then suggest ways in which the agency could actually use its powers related to consumer protection and competition to improve content moderation and ensure free speech on technology platforms.

First, an important note on language: Censorship refers to the use of government force to suppress speech and other forms of expression. The First Amendment of the US Constitution protects us from the government restricting freedom of speech, press, religion, and association. When private platforms amplify, label, remove, or reduce the visibility of content, or suspend or ban users who violate their terms of service, it is called content moderation. This isn’t just semantics. Content moderation policies are one of the platforms’ principal means of differentiation and competition in the marketplace (both to attract audiences and to attract advertisers). We use the term “censorship” in this post, not because it is accurate, but to make it clear we are referring to the relevant FTC docket. For more on this topic, see our recent article, “‘Censorship’: President Trump keeps using this word, but I do not think it means what he thinks it means.”

The real incentives behind technology platform content moderation

Another important framing note is that users’ own behavior drives most of what they see online. That is the straightforward result of how most technology platforms make money: by selling users’ attention to advertisers in the form of ads targeted to the users’ interests and behaviors. Because of that business model, the fundamental machinery of most technology platforms consists of algorithms, fueled by behavioral data and calibrated to precisely target and deliver content to users in a way that maximizes users’ attention. Users’ attention is the platforms’ only inventory; it’s what they sell to advertisers. So, platforms are, above all, incentivized to serve more of whatever their users post, dwell on, like, comment on, and share. The platforms call these user behaviors “engagement.”
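
To make that machinery concrete, here is a minimal sketch in Python of an engagement-optimized feed ranker. The field names, weights, and functions are hypothetical and not drawn from any actual platform; the point is simply that candidate posts are scored by the attention they are predicted to capture, and the feed serves the top scorers.

```python
# Illustrative only: a toy engagement-optimized feed ranker, not any
# platform's actual system. Posts are scored by predicted engagement and
# the highest-scoring posts are served first.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_likes: float          # hypothetical model outputs
    predicted_comments: float
    predicted_shares: float
    predicted_dwell_seconds: float


# Hypothetical weights; real systems tune these against business metrics.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0, "dwell": 0.05}


def engagement_score(post: Post) -> float:
    """Combine predicted engagement signals into a single ranking score."""
    return (
        WEIGHTS["likes"] * post.predicted_likes
        + WEIGHTS["comments"] * post.predicted_comments
        + WEIGHTS["shares"] * post.predicted_shares
        + WEIGHTS["dwell"] * post.predicted_dwell_seconds
    )


def rank_feed(candidates: list[Post], limit: int = 10) -> list[Post]:
    """Serve the posts predicted to capture the most attention."""
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]
```

In a real system, the predicted signals come from machine learning models trained on behavioral data, but the incentive structure is the same: more predicted engagement, more distribution.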

Here’s the tricky part: To optimize shareholder value, platforms must maximize engagement among their current users while keeping their platform appealing enough to attract new users and advertisers seeking to promote their brands. New user growth is a key measure for financial markets (it drives stock prices for shareholders), and advertiser revenue fuels current profits (it drives big bonuses for executives). That is the inherent tension in content moderation: The content that best engages users may not be content with which advertisers want to associate their brands. The financial imperatives that drive platforms’ content moderation are only somewhat and occasionally tempered by real or threatened government regulation or pressure from the public or the media.

Content moderation on technology platforms is conducted through a combination of automated and human processes. Given the volume of content on most platforms, artificial intelligence first identifies content candidates for moderation based on keywords or machine-recognizable signals. Users may also flag content for moderation. Human reviewers then make the final determination of whether or how to moderate the content. (Critics have claimed the “viewpoint discrimination” by platforms can be attributed to the “left-leaning corporate cultures of technology companies.” But the human reviewers are often contract employees based in foreign countries with low-cost labor models, difficult working conditions, and likely little interest in the ideological crosswinds of their employers’ employers.) Some platforms may differentiate themselves by giving certain “moderation privileges” to a subset of users who may be self-appointed (e.g., Reddit), appointed through semi-democratic processes (e.g., Wikipedia), or selected by the platform to supplement automated processes (e.g., community notes on X, Meta platforms, and more recently, TikTok).
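
As a rough illustration of that automated-then-human flow, here is a minimal Python sketch. The policy names, keyword signals, and function names are invented for illustration; production systems rely on machine learning classifiers and far richer signals, but the division of labor is the same: automated screening and user reports surface candidates, and human reviewers make the final determination.

```python
# Illustrative only: a toy version of the moderation flow described above.
# Policy names, keyword signals, and functions are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class FlaggedPost:
    post_id: str
    text: str
    reasons: list[str] = field(default_factory=list)


# Stand-in for an ML classifier: machine-recognizable keyword signals per policy.
POLICY_SIGNALS = {
    "spam": ["free money", "click here"],
    "harassment": ["you are worthless"],
}


def automated_screen(post_id: str, text: str) -> FlaggedPost | None:
    """First pass: flag posts whose text matches automated policy signals."""
    reasons = [
        policy
        for policy, phrases in POLICY_SIGNALS.items()
        if any(phrase in text.lower() for phrase in phrases)
    ]
    return FlaggedPost(post_id, text, reasons) if reasons else None


def add_user_report(queue: list[FlaggedPost], post_id: str, text: str, reason: str) -> None:
    """Users can also flag content; their reports join the same review queue."""
    queue.append(FlaggedPost(post_id, text, [f"user report: {reason}"]))


def human_review(flagged: FlaggedPost, decision: str) -> dict:
    """A human reviewer makes the final call: e.g. 'remove', 'label',
    'reduce visibility', or 'no action'."""
    return {"post_id": flagged.post_id, "reasons": flagged.reasons, "action": decision}
```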

In fact, the FTC itself, in this very docket, has removed or redacted content in attachments from some public comments due to their being “inappropriate” or full of “profanity.” Apparently, even the FTC has its own community standards – content moderation rules, if you will – and enforces them rigorously on its own platform.

Both the automated and human components of content moderation have the potential for bias. AI systems are known to have societal biases embedded in their training sets and among the coding teams that create them. Human moderators can also bring their own life experiences and biases into their decision-making. However, this does not mean that political bias in outcomes is the result of any deliberate choices by the platforms. In fact, while empirical research has consistently found evidence of societal bias (as described below), it has consistently failed to find evidence of bias based on political affiliation.

How did we get here? The origins of “conservative censorship”

Public discussion regarding the societal impact of platform content moderation began in the early 2010s due to three factors:

  1. Concerns about the impact of a few highly centralized platforms on users’ free expression;
  2. The velocity of false information exploding on technology platforms and its potential effect on public health and safety; and
  3. The hypothesis that algorithms were pushing users toward increasingly extreme sources of content.

When the dialogue entered the political sphere, these concerns were initially bipartisan. The earliest research into platform content moderation indicated that the circulation of “misinformation” (defined as false information spread unintentionally) was not pervasive (example here). But researchers found substantial evidence that “disinformation” (defined as false information spread with intent) often appeared as concentrated campaigns designed to gain power or profit (example here). Researchers began to focus on how these narratives were seeded and spread, and how digital and traditional channels interacted to get them to scale. The result was a growing body of research illuminating the intersection of disinformation, political campaigns, and platform content moderation.

Accusations of over-moderation of conservative viewpoints by liberal Silicon Valley elites first ignited in 2016, when the online journal Gizmodo published claims by former Facebook employees (later refuted) that they had suppressed news stories from the platform’s now-defunct “Trending Topics” page because they were of interest to conservative users. There was a major surge of research interest in the topic of political content moderation in 2018 due to revelations of efforts by the Russia-based Internet Research Agency to influence the 2016 US elections, and of Cambridge Analytica providing user analytics to influence political campaigns, such as the 2016 Brexit vote, using personally targeted ads. Given the political dynamics in both the US 2016 presidential election and Brexit, this research caused further divergence in political perspectives on content moderation and heightened accusations of online censorship from conservative policymakers. For the platforms, this created two countervailing political pressures that continue today: pressure from those on the political left to identify and moderate more content in the interest of protecting health, safety, and democratic institutions, and pressure from those on the political right to moderate less content in the interest of advancing their political goals.

The accusations of censorship from political elites on the right reached a fever pitch in 2020. That year, researchers and platforms – sometimes in collaboration with government agencies – focused intently on platform content moderation related to the COVID-19 pandemic and the US national elections, both of which had become highly politicized topics. (Public Knowledge assessed and reported on platforms’ content moderation for these events at the time. See our blog post about platform content moderation for COVID-19, and our blog post for the 2020 elections for more information.) One dominant theme of the research was that engagement with false information online was asymmetric, with conservative audiences – especially older, male audiences – more likely to share it. In fact, as one scientist and correspondent wrote, “...study after study has concluded that in the US, misinformation circulates more widely on the right of the political spectrum.” These findings compounded the sense on the right that researchers were part of a broader effort to suppress conservative voices.

Partly in response to the accusations of censorship, researchers started to publish studies and reports that undermined the idea that conservative viewpoints were over-moderated. They argued that when content was moderated disproportionately, it was because of disproportionate use of false information, low-quality information sources, and offensive language in violation of platform policies. (Research also showed that marginalized communities were far more likely to have their content moderated even when it was not violative due to how content policies are crafted, bias in automated moderation algorithms, content filters that lack cultural context, and the inability of automated systems to detect language nuances.) Conservative politicians and their allies began to attack and discredit the researchers and their funders, simultaneously pressuring platforms to loosen their moderation policies. Dedicated research laboratories like the Stanford Internet Observatory (established in 2019) and its Election Integrity Partnership (in 2020) eventually became the focus of political attacks, requests for information, and lawsuits.

President Trump first tried to exert actual government authority against “conservative censorship” at the end of his first term by issuing an Executive Order directing the FTC and FCC to launch various investigations, rulemakings, and enforcement actions (though nothing much came of the effort). The censorship fever climbed even higher, becoming an article of faith for Trump’s followers, when he was deplatformed from Twitter and Facebook for his role in the riot at the US Capitol on January 6, 2021. Both Florida and Texas passed laws claiming to end “censorship” and ensure “neutrality” of social media platforms.

In Washington, Representative Jim Jordan (R-OH) has led the charge among Republicans to keep alive Congressional allegations that platforms and the researchers that investigate them are biased against the right. Eventually, Jordan’s accusations extended to civil society organizations, advertisers and their agencies, government agencies focused on national security and foreign disinformation campaigns, and even the traditional media. As the chair of the House of Representatives’ Judiciary Committee (and the subcommittee he founded, the Select Subcommittee on the Weaponization of the Federal Government), he spearheaded Congressional reports, hearings, and letters advancing the conspiratorial idea of a “censorship industrial complex” or “censorship cartel” that even implicated the Biden White House. (For more information on Rep. Jordan and his attacks on advertisers and their agencies, see our article, “Antitrust or Anti-truth? Jim Jordan’s Latest Attack on the ‘War on Disinformation.’”) Jordan’s efforts have been amplified by Republican state lawmakers and attorneys general, who initiated lawsuits and state legislation to constrain platform content moderation.

These accusations were the basis of the Supreme Court case Murthy v. Missouri, decided in 2024, in which the plaintiffs claimed the Biden Administration had colluded with platforms to censor content. The Court rejected the claims, finding that “...the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment.” (For more information on how Murthy v. Missouri manifested an orchestrated effort to equate government collaboration with platforms with “censorship,” read our article, “A Supreme Court Ruling in Murthy v. Missouri Could Help – or Hinder – Democracy Next Year.”)

Now, three months into the second Trump administration, the FTC’s public inquiry, related efforts by the FCC, and potential collaboration with the DOJ extend this effort into federal agencies.

There is a role for the FTC that does not involve regulating for “neutrality”

Public Knowledge has written extensively about how public policy should be used to improve content moderation and ensure free expression. There is ample room for improvement in how the platforms design and enforce their content moderation policies.

But courts – for example, in Prager Univ. v. Google LLC in 2020 and more recently in the Supreme Court case Moody v. NetChoice in 2024 – have affirmed that content moderation falls within private technology platforms’ First Amendment rights. The 9th Circuit found in X Corp. v. Bonta that the government can’t compel platforms to take positions on what controversial terms like “hate speech” mean. Combined with users’ own expressive rights, this implies a limited role for the government in influencing platform content moderation.

Given the history we outline here, the FTC should resist the temptation to directly intervene in substantive content moderation controversies. Instead, there are several constructive ways in which the FTC could use its powers under Section 5 related to consumer protection and competition to help ensure free speech on technology platforms. This is possible to accomplish in a content-neutral, constitutionally compatible way.

The FTC should:

  • Ensure that technology platforms articulate clear terms and conditions to guide users’ expectations of the experience on that platform, and to reduce the potential for abuse or weaponization. Terms of service should include clear and accessible mechanisms for due process, such as notification requirements explaining which specific policy was violated, meaningful appeal mechanisms with human review for significant account actions, and reasonable timeframes for resolution.
  • Use its authority under Section 5 of the FTC Act to take action against platforms that misrepresent or fail to enforce the terms of service and community standards in their commercial contracts, including those related to due process. The FTC must not intrude on the editorial freedom of platforms to determine what terms like “hate speech” mean. Content moderation is hard; it requires some discretion and will never be perfect. But in our view, consistent or egregious failure to enforce their terms of service (assuming the terms don’t infringe on the First Amendment) should be considered an unfair or deceptive act or practice, or UDAP.
  • Encourage healthy competition and choice in digital markets so users can select technology platforms whose moderation practices align with their values (for example, X and Truth Social relative to Bluesky and Mastodon). This will also allow for the emergence of platforms with alternative business models and, therefore, different incentives (for example, subscriptions, for which user satisfaction will be a more compelling incentive than engagement). Pro-competition policies, such as those that bolster interoperability, can also diminish the importance of network size and allow smaller competitors to thrive.
  • Continue to pursue the antitrust case pending against Meta in social media and messaging, and pursue assertive structural and behavioral remedies in the cases against Google in ad tech and search. Restoring competition in these markets will allow users to select platforms that align with their values and enable advertisers and publishers to choose platforms that align with their brand and business practices. The lack of choice, opacity, and waste in the digital advertising supply chain have compelled advertisers to use mechanisms for brand safety, such as the Global Alliance for Responsible Media (GARM), NewsGuard, and the Global Disinformation Index.
  • Call for privacy-protected access to platform data for qualified researchers to understand the true impact of platforms’ algorithmic moderation systems on particular communities or users. The opaque nature of the content moderation process and its outcomes can breed conspiracy theories on both sides of the political spectrum and allow whatever bias exists to persist unchecked.

It remains to be seen what the FTC will do with the input it gains from the public inquiry. And platforms may chill their own content moderation policies and enforcement while they wait to find out.

Read more about the history of technology platform content moderation and Public Knowledge’s view about the role of public policy in its “Policy Primer for Free Expression and Content Moderation.”
