A Decision on Content Moderation Systems, After All

Claire Stravato Emes / Jan 29, 2025

Claire Stravato Emes is a new media scholar and postdoctoral researcher at the Law School of Sciences Po Paris.

Mark Zuckerberg is seen in attendance during the UFC 298 event at Honda Center on February 17, 2024 in Anaheim, California. (Photo by Chris Unger/Zuffa LLC via Getty Images)

Meta founder and CEO Mark Zuckerberg's latest announcement regarding his company's new strategy for content moderation in the US, specifically the decision to replace third-party US-based fact-checkers with a user-driven “community notes” system, has attracted considerable criticism. The new policy is widely viewed as a superficial concession crafted to align with Republican political and cultural agendas rather than to maintain the accuracy of the information disseminated on Meta’s platforms.

Since 2013, Facebook has sought to position itself as a model of transparency in content moderation, routinely publishing reports on its enforcement efforts and advocating for similar regulatory standards across all platforms. Yet persistent doubts about its capacity to consistently enforce rules that serve public interests and its own commitments have fueled a worldwide call for more explicit regulation. Governments in Australia (the AVM Act), the UK (the Online Safety Act), Germany (NetzDG), and the EU (the DSA) have implemented increasingly strict controls on digital platforms, mandating proactive content supervision that prioritizes user rights and the removal of harmful material, with significant fines for non-compliance.

In contrast, the US has maintained a lenient approach to regulating platforms’ content practices, in part because of First Amendment protections for freedom of speech and in part because of Section 230 of the Communications Decency Act, which grants platforms broad intermediary liability protections for user-generated content. These and other statutory and constitutional constraints create a distinctive landscape for online speech compared to countries with more rigorous online safety laws.

Meta’s shift to a content moderation system that is likely more cost-effective and less resource-intensive comes against the backdrop of President-elect Donald Trump’s election and the growing influence of MAGA supporters, who exhibit an aversion to regulation and have promised vengeance against even those who study online mis- and disinformation. Given the private company's focus on fostering growth and resilience, Meta's strategy of leveraging a lenient regulatory environment with minimal obligations by adopting cheaper measures is hardly unexpected.

Yet Zuckerberg's recent announcement raises significant concerns. By framing the adjustments as a dedication to preserving free speech, Zuckerberg has cast the decision around content moderation as politicized and ideologically driven. The announcement does not mention the need to uphold societal values or to address the needs of Facebook's users and customers, a focus that should be central for any private company. Zuckerberg's choices, such as shifting moderation operations from California to Texas and reducing oversight on issues like gender and immigration, seem to favor right-wing political interests over the safety and well-being of Meta’s users, overlooking the platform's broader social duties.

Joel Kaplan, the longtime Meta policy executive and its newly appointed President of Global Affairs, amplified such concerns. His remark, "It's not right that things can be said on TV or the floor of Congress but not on our platforms," overlooks the strict oversight that traditional media must adhere to when handling controversial content, scrutiny that social media platforms, including Facebook and Instagram, often escape thanks to intermediary liability protections. This omission underscores Meta's ethical shortcomings and its complacency about its potential impact on political discord.

More regrettably, Meta's emphasis on cultural and political motivations for shifting practices diverts attention from a crucial debate about content moderation. Expecting a private actor like Meta, in a lax US regulatory environment, to balance profit-seeking and innovation while upholding human rights and free speech is a considerable gamble. Regardless, the recent adjustments could signal that current content moderation strategies are too costly and perhaps unsuccessful in maintaining a healthy information ecosystem. By exposing falsehoods, third-party fact-checkers have enabled social media platforms to curtail the dissemination of contentious information and thus mitigate the proliferation of misinformation. Yet despite its critical importance, fact-checking has also faced criticism.

Indeed, fact-checking organizations play a key role in assessing the truthfulness of political statements, educating the public about inaccuracies, and curbing misinformation. Since 2016, they have served as epistemological authorities that seek to preserve collectively shared realities. Reacting to Meta's US-focused decision, more than 125 fact-checking organizations signed an open letter highlighting their critical function in maintaining and fostering evidence-based discussion worldwide and their need for more financial support.

This support appears essential, as fact-checkers struggle to keep pace with how information spreads on social media, a challenge that often produces an "implied truth effect," in which unchecked information is wrongly assumed to be accurate simply because no correction has appeared. The reach of fact-checkers is also limited: most US adults never use a fact-checking website. Across the Global South, only a small fraction of the educated, urban populace (0.01%) accesses fact-checking resources, while those in rural areas remain exposed to unchecked hyperlocal information.

Meta's recent announcement, coupled with the prospect of reduced funding for fact-checking organizations, has reignited an ongoing debate about how societies can maintain a healthy information ecosystem online. The polarized responses from readers of The New York Times to the question, "Should Social Media Companies Be Responsible for Fact-Checking Their Sites?" reflect significant divergence, a sentiment encapsulated in one reader's comment: "I don't believe there is a 'right answer' to this." The universal value and benefits of high-quality information are widely recognized; in economics, that recognition culminated in the Nobel Prize awarded to Joseph Stiglitz, whose work emphasized information's crucial role as a global public good. Stiglitz notably highlighted the importance of greater institutional involvement to guarantee access to reliable information and to reduce social and economic disparities. When such thinking shifts from markets to the public sphere, however, the debate over the quality and spread of information and its societal impact usually turns political, with ideology eclipsing facts, complicating any consensus on strategies to ensure high-quality information.

It is amid this prevailing confusion that Meta, following X, has embraced ‘community notes’ as its strategy for fact-checking. Both platforms propose a path for collective oversight that operates independently, ensuring that no external institution exerts influence. They encourage a form of co-regulation driven by users, a vision that aligns well with libertarian, ‘government-free’ principles. Yet there are significant gaps, and we know little about the effectiveness of these measures in maintaining the integrity of online information ecosystems.

Community-based moderation relies on a principle that is generally seen as legitimate, sometimes referred to as the "wisdom of crowds." Experiments, often small-scale, show that aggregating the judgments of everyday, non-expert people about the accuracy of news can yield remarkably accurate results, and groups of non-experts can even outperform individual professionals in fields ranging from medical diagnosis to financial forecasting. Preliminary data suggest that community-based content moderation can significantly reduce the visibility of misleading posts, cutting their exposure by an average of 61.4%. Still, the evidence is too limited to be entirely persuasive. More research is needed, and Meta has not presented successful pre-tests before enacting its changes, a move more in line with Big Tech's 'move fast and break things' mentality. At the same time, the company is making it harder for researchers to study its platforms, suggesting that independent insights will be hard to come by.
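To make the aggregation idea concrete, the sketch below illustrates one way a 'bridging' approach to crowd ratings can differ from a simple majority vote: a note is surfaced only if raters from different viewpoint clusters agree it is helpful. This is a minimal, hypothetical illustration, not the actual algorithm behind X's Community Notes or whatever system Meta ultimately deploys; the note IDs, clusters, and threshold are invented for the example.

```python
# A minimal, hypothetical sketch of "bridging" aggregation for community notes.
# It is not the algorithm X or Meta actually run (X's published system uses
# matrix factorization over rating histories); it only illustrates the idea
# that a note should need agreement across viewpoints, not a bare majority.

from collections import defaultdict

# Invented ratings: (note_id, rater_viewpoint_cluster, rating)
# rating is 1 (helpful) or 0 (not helpful); clusters "A" and "B" stand in for
# the latent viewpoint dimensions a real system would infer from behavior.
ratings = [
    ("note1", "A", 1), ("note1", "A", 1), ("note1", "B", 1), ("note1", "B", 1),
    ("note2", "A", 1), ("note2", "A", 1), ("note2", "A", 1), ("note2", "B", 0),
]

def score_notes(ratings, threshold=0.5):
    """Show a note only if its mean helpfulness exceeds `threshold` in every cluster."""
    by_note = defaultdict(lambda: defaultdict(list))
    for note, cluster, rating in ratings:
        by_note[note][cluster].append(rating)

    decisions = {}
    for note, clusters in by_note.items():
        means = {c: sum(r) / len(r) for c, r in clusters.items()}
        # Bridging criterion: every cluster must clear the bar, so a one-sided
        # pile-on (note2: 3 of 4 raters, all from cluster A) is not enough.
        decisions[note] = all(m > threshold for m in means.values())
    return decisions

print(score_notes(ratings))  # {'note1': True, 'note2': False}
```

In this toy example, a note rated helpful only by a lopsided majority from one cluster is withheld, which is the property proponents argue makes community notes more resistant to partisan pile-ons than raw vote counts. Whether that property holds at Meta's scale is precisely the open empirical question.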

In essence, Meta's latest announcement on content moderation invites us to confront socio-technical challenges and to seek better approaches to content regulation and a healthier information ecosystem. Could community-led action be a viable strategy for content moderation? Are there safeguards in place if it is not? There are many questions and few answers. Given the role his platforms play in human affairs, Zuckerberg's responsibility should include ensuring the safety of Meta's users, and without transparent, evidence-based justifications for his decisions, it is difficult to view him as a responsible leader.
