Scientists Respond to FTC Inquiry into Tech Censorship
Dean Jackson / Mar 25, 2025
Dean Jackson is a Contributing Editor at Tech Policy Press.
On February 20, 2025, the US Federal Trade Commission (FTC) announced an inquiry into “how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how this conduct may have violated the law.” In its request for public comment, the FTC further claims that platforms may do so using “opaque or unpredictable internal procedures,” with little explanation, notice, or opportunity for appeal.
Tech Policy Press has a long track record of publishing calls for greater transparency from technology platforms. In December 2024, for example, Sandra González-Bailón and David Lazer criticized the lack of accountability for how Meta specifically, and other platforms generally, act at critical moments. Taking inspiration from the late Supreme Court Justice Antonin Scalia, they argue that users are “entitled to understand what speech platforms make visible.” A month later, they argued in a separate piece that the nature of social media is such that “companies are always choosing what people do and do not see.” Content moderation is the process or processes by which platforms make this choice.
However, President Donald Trump and other prominent Republicans frequently equate content moderation with censorship of conservative users. FTC Chairman Andrew Ferguson has himself compared content moderation to censorship on several occasions. This political context has raised concerns among critics across the political spectrum that the FTC’s inquiry will amount to a partisan effort to exert greater government control over platform trust and safety.
Science, as it turns out, has already inquired into the question of whether social media content moderation unfairly penalizes conservative speech. We asked leading scholars on this issue the following questions:
- Do social media companies disproportionately moderate posts from one side of the political spectrum? If so, is this the result of bias or something else?
- Does social science show that one side of the political spectrum is unfairly penalized or rewarded by platform recommendation algorithms? If so, which side and why?
In total, we received fourteen replies. To summarize, the research indicates that to the extent conservatives experience content moderation more often, it is because they are more likely to share information from untrustworthy news sources, even when conservative users themselves rate the trustworthiness of those sources. Regarding the second question about algorithmic bias, the evidence largely suggests that conservative sources and accounts tend to receive more engagement, not less—not because of platform bias but more likely because of the nature of the right-wing news ecosystem and the valence of the content shared in it.
The full text of the submitted responses is below.
Paul Barrett
Deputy Director and Senior Research Scholar, NYU Stern Center for Business and Human Rights
A central conservative talking point during the Trump era holds that a conspiracy of liberal political figures, social media companies, and left-leaning academics has censored people on the right. Since 2021, the NYU Stern Center for Business and Human Rights has published extensive research that refutes claims of anti-conservative bias online, pointing out that there are no systematic studies supporting the accusation. More recent empirical social science reinforces our findings.
In a study published last year in the journal Nature, researchers from Oxford, Massachusetts Institute of Technology, Yale, and Cornell made a pair of findings: First, supporters of Donald Trump and other conservatives are more likely to have their content taken down or accounts suspended on major social media platforms like Facebook and X (formerly Twitter) than are supporters of Joe Biden and other liberals. But that does not mean that content moderation—the human and automated filtering done by social media companies—is biased. Rather, the researchers reported that conservative accounts may be sanctioned more often because they post more misinformation.
As The Washington Post noted in its helpful analysis of the Nature research, this study “is not the first to find that conservatives are more likely to share stories that have been debunked, or that originate from fake news sites or other sources deemed ‘low-quality.’” The Post added that “one common objection to such studies is that defining what counts as misinformation can be subjective. For instance, if the fact-checkers skew liberal or the list of fake news sites skews conservative, that in itself could explain the discrepancy in sharing behavior.”
All true, but consider: the aforementioned study in Nature found that conservatives share more falsehoods and low-quality information online even when Republicans define what counts as untrue or “low-quality.” If content moderation is unbiased, then what about content recommendation engines? Relevant new research on this question from Global Witness finds that recommendation algorithms operated by TikTok and X have shown evidence of far-right political bias in Germany ahead of that country's recent federal election. Global Witness found that content displayed to new users via algorithmically sorted “For You” feeds skewed heavily toward amplifying content that favors the far-right AfD Party. TechCrunch also covered the research.
In short: to the extent conservatives experience more frequent content moderation on social media, it is largely because they behave differently online and are more likely to share low-quality content—and platform recommendation algorithms are actually more likely to reward right-leaning content.
David A. Broniatowski
Professor, Department of Engineering Management and Systems Engineering, The George Washington University
My research with coauthors examines how publicly listed policies on misinformation enforcement, recommendation algorithms, and monetization affect online discourse. These studies rely on stated platform policies, meaning we do not assess whether platforms deviated from these policies in ways that could reflect political bias.
Platforms’ public policies targeting COVID-19 and vaccine misinformation did not explicitly target political content. However, misinformation about these topics was more prevalent in vaccine-skeptical communities, which increasingly endorsed vaccine refusal as a civil right – a message aligned with political conservatism – in the years leading up to the COVID-19 pandemic. Thus, those communities bore the brunt of enforcement actions. Importantly, other health topics also contained misinformation, suggesting that health misinformation is not restricted to partisan issues. Rather, health misinformation is a systemic feature of social media more broadly.
On Facebook, we found that misinformation policies led to vaccine-skeptical accounts being removed at 2.13 times the rate of pro-vaccine accounts. Because vaccine-skeptical content was more likely to be right-leaning, this content was likely disproportionately affected. However, Facebook also appears to have removed some pro-vaccine content, meaning left-leaning content was likely penalized as well, though to a lesser extent. These results suggest flawed enforcement rather than political bias.
On Twitter, as on Facebook, preliminary results suggest that vaccine-skeptical clusters were also more likely to share content from accounts representing the political right wing. Although misinformation policies led to reductions in content from one cluster of the most prominent vaccine-skeptical accounts in the USA, they preceded increases in content and virality from other clusters of vaccine-skeptical accounts. Twitter’s policies were also imperfectly enforced, curbing some vaccine-skeptical content but facilitating its spread elsewhere.
Our research does not assess whether recommendation algorithms systematically favor one political ideology. However, moderation actions influenced visibility, driving engagement toward more misinformative and more politically polarized sources, contrary to the stated intent of these moderation policies. Furthermore, although Twitter removed 70,000 QAnon-affiliated accounts (which disproportionately shared content from accounts associated with the political right wing) and the account of President Donald Trump following the events of January 6, 2021, the amount of content and virality in other right-wing account clusters increased. Despite these high-profile removals, politically aligned content surged.
We also found that pro-vaccine sources were more monetized than anti-vaccine sources. This is because they were more likely to link to news sites, which are more monetized. We did not detect differences in monetization between pro- and anti-vaccine news sites and pro- and anti-vaccine non-news sites.
Thus, Facebook's and Twitter's disproportionate impact on political content, if any, appears to have been an unintended consequence of flawed enforcement, not deliberate targeting.
Alexandros Efstratiou
Postdoctoral Scholar, Center for an Informed Public, University of Washington
A claim that we have commonly heard, and one that has been echoed by the current HHS secretary and the Trump nominee to head the NIH, is that social media platforms have been censoring voices that opposed the scientific consensus on masks, COVID-19 vaccines, and lockdowns, among other things, occasionally even questioning whether this scientific consensus was real to begin with. Late last year, we published a paper that put this to the test. We collected all of the scientific preprints on COVID-19 we could find in the biological and medical sciences up to that point, and used them to determine scientific consensus on issues like COVID-19 vaccines, non-pharmaceutical measures, and the dangers of the pandemic. Our findings agreed with what official health institutions like the WHO and CDC were saying at the time: vaccines work, non-pharmaceutical interventions work, and COVID-19 should be taken seriously. We also restricted this exercise to papers that had passed peer review and found largely similar figures. Therefore, these scientific consensus figures were due neither to publication bias nor to “bad science.”
When we looked at Twitter, however (before Elon Musk made any substantial changes to the platform), we found that the few papers that went against consensus were widely and disproportionately shared on Twitter—in the case of vaccines, anti-COVID-vaccine papers were shared on Twitter at a rate ten times higher than their prominence in the scientific literature. Looking at Twitter, one would have assumed that anti-vaccine and pro-vaccine papers were essentially equally prominent, but this was far from the truth when looking at the scientific evidence. People like the incoming NIH director, who frequently invoked censorship, were among the most prominent amplifiers of this false consensus.
Daniel Kreiss
Edgar Thomas Cato Distinguished Professor, UNC Chapel Hill
Social media companies based in the US are businesses. Because they are businesses, they moderate content. They need to create places where people enjoy using social media for entertainment, gaming, sports, and yes, at times, politics. They have long determined that most of their customers don’t want to see explicit pornography, terrorist content, or financial scams, and that if they do, they may stop logging in—or that parents will prevent their children from doing so.
This should not be controversial. US Courts have long recognized that it is as problematic for the government to compel the speech of tech and media businesses—for example, by requiring companies to host content that undermines their businesses and profits—as it is for the government to tell companies directly what they can and cannot say if it is otherwise legal. The First Amendment only checks government efforts to censor speech, despite people’s passionate feelings that they deserve to be heard on Facebook.
Today, the right is using claims of ‘political censorship’ to undermine the private sector. Most social media companies care about making money, not pushing an ideological agenda. The idea that social media companies are about ‘free expression’ was always laughable given that these businesses have always moderated content to create the experiences people want to have on social media, so they can monetize them. Otherwise, all social media would be 4chan.
If social media companies disproportionately moderate content from the political right, it is because they believe that content is bad for business—not for political reasons. If every time you opened Facebook you saw content directly attacking your religious identity, how often would you use the platform?
Social media companies are biased towards their bottom lines. For this same reason, they also reward content that is engaging, gets attention, and spreads widely. That has led to some studies showing that right-wing content performs better on platforms, primarily for revenue reasons, not for political ones. (X is a different case, because it is primarily a political, not a commercial, entity.)
The words and actions of tech executives reveal the business motivations behind content moderation policies. Consider Mark Zuckerberg, whose ‘principled’ stands on speech have shifted so often with the political winds that it is hard to keep track of them. The issue has always been that these companies are too close to politics. Instead of telling people in power whatever they want to hear, tech execs should forcefully assert their right to manage their businesses as they see fit. That includes their right to pursue the long-term profits to be gained in stable democracies with regulatory clarity. The real censorship efforts come from weaponized executive and legislative branches in the U.S.
Przemyslaw Grabowicz
Adjunct Professor, University of Massachusetts Amherst, and Assistant Professor, University College Dublin
The key issue is that we do not know whether social media platforms disproportionately moderate posts from one side, because platforms have significantly loosened their moderation practices by moving to so-called "community notes," which are driven by user reports rather than fact-checkers. In the context of US politics, our research shows that X polls gauging support for the US presidential candidates in 2016, 2020, and 2024 were biased towards Trump because right-wing users were much more likely to engage and vote in them than left-wing users. We see similar biases in the context of German politics: a post receives more engagement if it's published by a far-right parliamentarian. If right-wing users engage with political content more often than left-wing users, then community notes could be similarly biased towards right-wing perspectives, but we haven't tested this hypothesis yet.
Regarding algorithms, our study of political biases on X before the German federal election indicates systematic biases in content visibility within X’s algorithmic feed: out of the eight political parties of Germany, posts by the members of the far-right AfD received the most views, i.e., 38% of all views. However, it is not clear whether these biases are a result of unfair treatment. First, these biases are partly the result of political biases in engagement measures. That is, a post is more likely to appear in news feeds if more people engage with it, and posts of the far-right AfD tend to receive more likes and retweets. Second, our findings suggest that there are some other unknown factors related to party affiliation that contribute to explaining political biases in news feed appearances. For instance, the feed algorithm may favor some topics over others.
Andrew Guess
Associate Professor of Politics and Public Affairs, Princeton University
Some of the best evidence on [the question of whether social media content recommendation algorithms are biased] comes from a randomized experiment on the Twitter platform in which roughly 1% of accounts globally were assigned to a holdout group that continued to get the original chronological version of the home feed after algorithmic ranking was introduced in 2016. The authors find that in most of the countries they studied, algorithmic personalization boosted parties and politicians on the center-right over those from the center-left. It’s not clear why this was the case during the study period. For example, it’s possible that conservative movements were better at harnessing the power of Twitter’s affordances to their benefit. In general, partisan differences in amplification can be driven by any number of confounders, such as investments in social media efforts or differences in user enthusiasm.
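The core comparison in such a holdout design can be summarized as an amplification ratio: how widely a party's content reaches users on the ranked feed relative to the chronological holdout. The sketch below uses invented reach figures purely to illustrate the computation, not the study's findings.

```python
# Sketch of the comparison behind a ranked-feed vs. chronological-holdout
# experiment. The reach numbers below are illustrative assumptions.
def amplification_ratio(reach_ranked: float, reach_chronological: float) -> float:
    """Ratio > 1 means the ranked feed amplifies the content relative to
    the reverse-chronological baseline."""
    return reach_ranked / reach_chronological

example_reach = {
    # party: (mean reach in ranked-feed group, mean reach in holdout group)
    "center_right_party": (1_500.0, 1_000.0),
    "center_left_party": (1_150.0, 1_000.0),
}

for party, (ranked, holdout) in example_reach.items():
    print(party, round(amplification_ratio(ranked, holdout), 2))
```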
Would we find similar results today? The only way to know with any degree of certainty would be to hold major platforms accountable with regular, transparently designed audits and robust data sharing with researchers, civil society groups, and data journalists.
Alice Marwick
Director of Research, Data & Society and Research Associate Professor, Department of Communication, UNC Chapel Hill
Although journalists and politicians like to see the American “left” and the “right” as informational equivalents, they are not. The “right” has a long-standing narrative that the press is biased against them, dating back at least a century. This prompted the right to create their own alternative media system. As historian Nicole Hemmer chronicles in her excellent book Messengers of the Right, the idea of liberal media bias was crafted and spread by conservative strategists, and it took hold spectacularly.
History is often absent from our conversations about social platforms, but it’s crucial to understand the long shadow of conservative accusations of “bias.” This is especially true because conservative media, overall, contains far more mistruths than liberal or mainstream media. This is true for explicitly conservative platforms like Rumble, Truth Social, and Bitchute, and it is true for conservative content shared on mainstream social platforms. For example:
- A 2023 study of a huge Facebook dataset found that most “untrustworthy” sources that spread misinformation are consumed overwhelmingly by conservatives.
- An enormous study of 15 years of YouTube found that YouTube is more popular with right-wing audiences than left-wing, and there is more right-wing content on the platform.
- Conservative Twitter users shared far more misinformation than liberal users, even when "misinformation" was defined by Republican raters.
- In both Europe and the United States, conservatives are more likely to believe misinformation.
Conservatives are more likely to believe misinformation; they are more likely to spread misinformation; they are more likely to consume conservative media that contains misinformation; and as a result, they are more likely to be censored, banned, deplatformed, or moderated on social media. And when they are deplatformed, the amount of misinformation on the platform decreases, and they often move to explicitly conservative platforms where conservative-leaning incorrect information is less likely to be moderated.
My own research examines far-right content on spaces like Telegram, blogs, 4chan, and Discord. These spaces are not just rife with misinformation; they are full of disinformation, often containing hateful language, conspiracy theories, overt misogyny, white supremacy, and homophobia. And, unfortunately, my work has shown how these disinformative narratives filter into the mainstream, often through politicians and mainstream conservative media. Consider the Trump administration’s war against trans people, from attempting to ban gender-affirming care to disallowing gender changes on passports. For years, I have watched as narratives justifying these policies—trans people are mentally ill, gender-affirming care is child abuse, trans people are deceptive and dangerous, etc.—circulated in the far-right mediasphere, unmoderated.
Allowing false information to spread has real, material consequences, whether it is measles outbreaks driven by false anti-vaccine propaganda, violence against LGBTQ+ people and people of color, or the withdrawal of support for Ukraine based on Russian propaganda. Social media platforms know that increasing efforts to moderate this content will disproportionately impact conservatives, but they also know that the idea that “platforms are biased against conservatives” is very sticky, often popularized by content creators on their own platforms. Platform owners have decided that allowing false or harmful content is a small price to pay for a lack of regulation and increased profits. This is not a deal we should accept.
Fil Menczer
Distinguished and Luddy Professor of Informatics and Computer Science and Director, Observatory on Social Media, Indiana University
For years now, conservative politicians in the US have claimed that social media platforms censor political speech, in particular conservative speech. The censorship allegedly occurs through moderation, but a simpler explanation is that conservatives posted more low-quality content. This was the conclusion of two studies that the Observatory on Social Media at Indiana University conducted to explore this question.
In the first study, we analyzed the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news-sharing behavior by more than 15,000 Twitter accounts in June 2017. Our results confirmed prior findings that conservative partisans share more misinformation. However, we also uncovered a similar, though weaker, trend among left-leaning users. Because of the correlation between a user’s partisanship and their position within a partisan echo chamber, these types of influence are confounded. To disentangle their effects, we performed a regression analysis and found that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
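To illustrate that disentangling step, here is a minimal sketch with synthetic data (the variables, coefficients, and sample size are invented, not the study's): regressing the outcome on both correlated predictors yields a separate estimate for each while holding the other fixed.

```python
# Minimal sketch of disentangling two correlated predictors (partisanship
# and echo-chamber position) with a regression. The synthetic data and
# coefficients are illustrative assumptions, not the study's results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

partisanship = rng.normal(size=n)  # more negative = left, more positive = right
# Echo-chamber position is correlated with partisanship in this toy setup:
echo_chamber = 0.7 * partisanship + rng.normal(scale=0.7, size=n)
# Misinformation sharing depends mostly on partisanship here (by construction):
misinfo_share = 0.5 * partisanship + 0.1 * echo_chamber + rng.normal(size=n)

X = sm.add_constant(np.column_stack([partisanship, echo_chamber]))
model = sm.OLS(misinfo_share, X).fit()
print(model.params)  # coefficient for each predictor, holding the other fixed
```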
In the second study, we deployed neutral social bots that started following news sources across the political spectrum on Twitter and tracked them for five months in 2019 to probe distinct biases emerging from platform mechanisms versus user interactions. We examined the content consumed and generated by the bots and analyzed the characteristics of their friends and followers, including their political alignment, automated activity, and exposure to information from low-credibility sources. We found that the news and information to which US Twitter users were exposed depended strongly on the political leaning of their early connections. The interactions of conservative accounts were skewed toward the right, whereas liberal accounts were exposed to moderate content, shifting their experience toward the political center. Partisan accounts, especially conservative ones, tended to receive more followers and follow more automated accounts. Conservative accounts also found themselves in denser communities and were exposed to more low-credibility content. However, we found no evidence that these outcomes could be attributed to platform bias.
Of course, these studies from years ago cannot rule out that some platforms may be censoring political content today. It has been reported that under Musk, X has suspended several liberal journalists.
Anna Lenhart
Policy Fellow, Institute for Data Democracy and Politics, The George Washington University
The FTC’s request for information highlights some important questions for lawmakers and non-governmental organizations concerned with free expression and online safety. Notably, many of these questions could be addressed with comprehensive transparency mandates such as those underway in Europe (the same policies the administration has been attacking). For example, the EU’s Digital Services Act (DSA) mandates that users receive reasons for content moderation decisions (Article 17).
Instead of a partisan government call for comments, the European Commission has set up a Statements of Reasons database equipped with a Researcher API. The dataset gives an overarching view of content moderation decisions in Europe (e.g., limited distribution, removal), categorized by reason (which varies based on each platform’s community guidelines; e.g., harassment and bullying, adult content, nudity and body exposure, youth exploitation and abuse, disordered eating and body image), by process for decision (e.g., fully automated, partially automated, or not automated at all), and so on. The database provides system-wide metrics regarding content moderation, but it does not include the actual content or accounts subject to moderation.
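A minimal sketch of the kind of system-wide tally this database enables is below; the file format and column names ("category", "automated_decision") are illustrative assumptions, and the actual schema and export mechanics are documented by the European Commission.

```python
# Sketch of an aggregate tally over a locally downloaded export of
# statements of reasons. Column names are illustrative assumptions;
# consult the DSA Transparency Database documentation for the real schema.
import csv
from collections import Counter

def tally_statements(path):
    by_category = Counter()
    by_automation = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_category[row["category"]] += 1
            by_automation[row["automated_decision"]] += 1
    return by_category, by_automation

# Usage (assuming a daily export has been downloaded locally):
# categories, automation = tally_statements("sor_daily_dump.csv")
# print(categories.most_common(10))
```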
To allow independent researchers and civil society groups to examine political bias in content moderation decisions, the DSA includes provisions that allow researchers to apply for platform data to study systemic risks, which include “freedom of expression.” Under Article 40, researchers could explore whether platforms are disproportionately enforcing their community guidelines in a way that suppresses political ideologies. Alas, the US does not have a similar policy, so a request for information laden with political rhetoric will serve as a hypocritical substitute.
Stephan Lewandowsky
Professor and Chair in Cognitive Psychology, University of Bristol
There are three misconceptions or misdirections that are common in this space. My papers address all of them, and I point to a further publication below that addresses the third one.
First, there is the conflation of fact-checking with allegations of censorship. Fact-checking is not censorship. It is counterspeech. Supreme Court Justice Louis Brandeis famously formalized the counterspeech doctrine when he said that the best remedy to combat harmful speech is “more speech, not enforced silence.” Anyone who claims fact-checking is censorship is therefore, wittingly or unwittingly, acting in a manner that is injurious to free speech. When Facebook discontinued fact-checking, it gave liars a free pass while preventing counterspeech to hold them accountable—quite the antithesis to free democratic discourse.
Second, there are claims that social media are biased against political conservatives. The opposite is true. For example, an analysis of Facebook engagements during the 2016 election campaign revealed that conservative outlets (Fox News, Breitbart, and Daily Caller) amassed 839 million interactions, dwarfing more centrist outlets (CNN with 191 million and ABC News with 138 million), and totaling more than the remaining seven mainstream pages in the top ten. Another analysis of Twitter found that conservatives enjoy greater algorithmic amplification than people on the political left, and this algorithmic bias has become even more extreme since Musk took over.
Third, there is the related claim that fact-checking is biased against conservatives because more right-wing content is flagged as false. Several lines of evidence show this claim to be false. It has been shown repeatedly that professional fact-checkers’ judgments or ratings of the credibility of news sites correlate highly with those of bipartisan crowds (i.e., randomly sampled members of the public). When you combine that result with the well-established finding that most misinformation online is spread and consumed by people on the political right, then it naturally follows that even fair and unbiased fact-checking must call out conservatives more than liberals – that’s not a problem with fact-checking but with the lopsided reliance on misinformation by the right. This was shown recently by Mosleh et al.
Spence Purnell
Resident Senior Fellow, Technology and Innovation, R Street Institute
Content moderation as a practice is evolving, with platforms like X and Meta utilizing decentralized models that flag misinformation instead of removing posts. Research shows most moderated (removed) content is spam or explicit material, not political speech.
Conservative claims were fact-checked and moderated more often than liberal ones in some cases, but this may be due to their frequent reliance on lower-rated sources, even when those ratings were determined by both ideologically diverse and conservative-only groups. Motivated reasoning can make these sources more appealing, bypassing critical thinking filters. However, bias in fact-checking is also a factor. A 2023 Harvard survey found that 90% of misinformation experts lean left, potentially influencing early moderation practices.
However, new decentralized models address this concern using “bridging” algorithms to ensure both sides of an issue evaluate flagged content. One study found that up to 97% of notes were rated “entirely” accurate by a group of diverse users. This approach allows controversial content and political speech to remain online but gives critical, cross-ideologically vetted context that users can trust, knowing that opposing viewpoints contributed to the final rating.
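As a rough illustration of the bridging idea (X's Community Notes actually relies on a matrix-factorization model; the simplified rule and sample ratings below are hypothetical stand-ins), a note might only be surfaced when raters from both ideological clusters independently find it helpful.

```python
# Toy "bridging" rule: surface a note only if raters from *every*
# ideological cluster rate it helpful at a high rate. The real Community
# Notes system uses a more sophisticated matrix-factorization model; the
# rule and ratings here are illustrative assumptions.
from collections import defaultdict

# Each rating: (note_id, rater_cluster, rated_helpful)
ratings = [
    ("note1", "left", True), ("note1", "left", True),
    ("note1", "right", True), ("note1", "right", True),
    ("note2", "left", True), ("note2", "left", True),
    ("note2", "right", False), ("note2", "right", False),
]

def bridging_decision(ratings, min_rate=0.6):
    by_note = defaultdict(lambda: defaultdict(list))
    for note_id, cluster, helpful in ratings:
        by_note[note_id][cluster].append(helpful)
    decisions = {}
    for note_id, clusters in by_note.items():
        # Require broad agreement within each cluster of raters.
        decisions[note_id] = all(
            sum(votes) / len(votes) >= min_rate for votes in clusters.values()
        )
    return decisions

print(bridging_decision(ratings))
# {'note1': True, 'note2': False} -- note2 fails because only one side
# of the divide found it helpful.
```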
Regarding the possibility of algorithmic bias: to appease both users and marketing clients, platforms use algorithms to amplify content they predict users will engage with, including political content. Users create their own echo chambers by liking and sharing content they agree with while ignoring or downvoting opposing views. The algorithm then amplifies content based on these behaviors. Algorithms reflect user behavior rather than enforcing ideological preferences, though biases in content moderation and fact-checking may still play a role in perceived disparities.
David Rand
Erwin H. Schell Professor of Management Science and Brain and Cognitive Sciences, Massachusetts Institute of Technology
The FTC’s newly launched inquiry into tech company censorship hinges on the claim that conservatives are disproportionately targeted. But is there actually evidence that this is occurring? Our research suggests a different explanation: differences in behavior, not bias in enforcement, could drive apparent disparities in content moderation.
In a recent paper published in Nature, we analyzed 9,000 Twitter users who shared Trump or Biden hashtags before the 2020 election. We found that users posting Trump hashtags were 4.4 times more likely to be suspended than those posting Biden hashtags. However, they also shared significantly lower-quality news—and simulations show that even if enforcement policies were entirely politically neutral, users posting Trump hashtags would still get suspended at much higher rates due to differences in the quality of the content shared.
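To make the logic of that simulation concrete, here is a minimal sketch (not the authors' code; the sharing probabilities, threshold, and sample sizes are invented for illustration) of how a politically neutral suspension rule can still produce unequal suspension rates when sharing behavior differs between groups.

```python
# Minimal sketch: a politically neutral suspension rule applied to two
# groups with different sharing behavior. All parameters are illustrative
# assumptions, not values from the Nature study.
import random

random.seed(0)

def simulate_user(p_low_quality, n_shares=100):
    """Return how many low-quality links a user shares out of n_shares."""
    return sum(random.random() < p_low_quality for _ in range(n_shares))

def neutral_rule(low_quality_shares, threshold=20):
    """Suspend any user who shares more than `threshold` low-quality links,
    regardless of their politics."""
    return low_quality_shares > threshold

def suspension_rate(p_low_quality, n_users=10_000):
    suspended = sum(
        neutral_rule(simulate_user(p_low_quality)) for _ in range(n_users)
    )
    return suspended / n_users

# Hypothetical behavioral difference: group A shares low-quality links
# 25% of the time, group B 10% of the time.
print("Group A suspension rate:", suspension_rate(0.25))
print("Group B suspension rate:", suspension_rate(0.10))
```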
Critically, our work addresses concerns about ideological bias in determining what counts as "low-quality" information. Prior studies relied on fact-checkers, which some argue skew left. We instead used politically balanced groups of laypeople—and even groups of only conservatives—to assess content quality. The results were consistent: conservative Twitter users shared more low-quality news. This pattern holds across seven additional datasets spanning Twitter, Facebook, and surveys from 2016 to 2023 and across 16 countries.
In new research, we analyze Community Notes on Elon Musk’s X, a system designed to minimize political bias by requiring cross-ideological agreement (via a "bridging algorithm") for posts to get flagged. Even here, 67% more Republican posts were flagged as misleading compared to Democratic posts. This isn’t simply due to more Republicans using X—our data show no overrepresentation of Republicans (and actually, until very recently, there were substantially more Democrats on X than Republicans).
These studies demonstrate that, on its own, the fact that conservatives are moderated more often than liberals does not provide evidence of political bias or targeted censorship by technology companies. With bipartisan demand among the American public for reducing misinformation online, policymakers must recognize that some partisan disparities in enforcement are inevitable—even under neutral rules aimed at curbing the spread of false information.
Joshua A. Tucker and Zeve Sanderson
Faculty Co-Director and Executive Director, respectively, New York University Center for Social Media and Politics
While social media platforms disproportionately moderate posts from conservative users and sources, most research would suggest that this effect is likely driven by the asymmetric production of misinformation. More specifically, studies have shown that conservative users are more likely to post, share, and be exposed to misinformation and that political elites contribute to this dynamic. From the extant literature, it’s clear that the ideological asymmetry in moderation (disproportionately impacting conservative content) could be reflective of the ideological asymmetry in misinformation dissemination (disproportionately driven by conservative users).
However, for many observers, this literature may not settle the question. A key challenge for platforms is reflected in research: how to define misinformation and operationalize that definition in practice. Generally, misinformation is determined at the article level by fact-checkers, or at the source level by lists of known fake or low-quality news websites. For many conservatives, this approach may undermine the validity of the referenced research. Put another way, if there is bias in the classification of misinformation by fact-checkers, researchers, or media organizations, then the lack of bias found in moderation practices could simply reflect initial biases in content classification.
In this context, crowd-sourced moderation systems—especially ones, such as X’s Community Notes, that are designed to find agreement among diverse users—may be more convincing. A recent study of X’s community moderation system finds that content from conservative users is flagged as containing misinformation more often than content from liberal users. Given the nature of the crowdsourced evaluation, we wouldn’t expect the potential biases that could have affected other studies to be at play here. As a result, this finding suggests that perceptions of asymmetry in misinformation moderation may reflect asymmetries in misinformation production.
Sander van der Linden
Professor of Social Psychology and Director, Cambridge Social Decision-Making Laboratory, Cambridge University
In general, research shows that members of Congress are fact-checked at equal rates (with Democrats receiving the most fact-checks), but what predicts fact-checking is not partisanship or bias but the prominence of the politician in question. On social media such as Facebook, however, most misinformation is coming from the extreme right, which has been evidenced not only by independent researchers but also by Facebook’s own research. Accordingly, on social media platforms such as X and Facebook, pro-Trump/conservative accounts do see more moderation than pro-liberal/Democrat accounts, but that’s because pro-Trump accounts are objectively sharing much more misinformation. You could argue that this is because of bias, but research actually shows that regular, bipartisan crowds of people (on which Community Notes is also based) arrive at very similar ratings as fact-checkers, so even under politically neutral platform policies, this asymmetry in the sharing of misinformation will create the perception of bias.
Contrary to claims made by Elon Musk, research shows that even before he took over, Twitter’s algorithm was unintentionally rewarding and amplifying right-wing content around the globe. This may be related to research that has found that highly moral-emotional language and “out-group” derogation enjoy greater virality on social media. Most misinformation is shared by so-called “supersharers” of misinformation, who have a disproportionate reach and tend to be more likely to be older and Republican (in the US). On X, for example, Musk has been explicitly and repeatedly amplifying right-wing content, which is influential given his massive reach, and research has indeed confirmed structural engagement shifts in favor of Republican accounts on X after Musk’s endorsement of Trump. So, all in all, right-wing accounts have received greater amplification on Twitter, though it is unclear how this generalizes to other social media platforms (there is some evidence that Google search displayed left-leaning results more toward the bottom of the page, though other research finds Google and Bing prioritize left-leaning media).