Hear Me Out: A Different Perspective on Meta’s Fact-Checking Decision
Danielle A. Davis / Mar 7, 2025
Danielle A. Davis was a Fall 2024 visiting fellow at the Georgetown McCourt School of Public Policy.

Mark Zuckerberg's Facebook account is displayed on a mobile phone with the Meta logo visible on a tablet screen in this photo illustration on January 7, 2025. (Photo by Jonathan Raa/NurPhoto via Getty Images)
In January, Mark Zuckerberg announced that Meta would end its fact-checking program in the US, sparking immediate and intense debate. For many, the decision to discontinue the program appeared to be an outright abdication of responsibility at a time when misinformation and disinformation spread at unprecedented rates. Others questioned the manner, substance, and motivation behind the platform’s decision. I believe the fallout from this decision provides an opportunity to have an honest conversation about the true nature of Meta’s fact-checking program.
The Uneven Standard of Meta’s Fact-Checking
In principle, fact-checking is essential to preserving a healthy information ecosystem through consistent, thoughtfully designed, and carefully executed policies and practices. In practice, however, Meta has not always met this standard. The platform’s fact-checking has often been inconsistent, uneven, and deeply flawed. These issues can be seen not only in Meta’s broader policies but also in how the platform carries out those policies.
One undeniable example is the platform’s policy exempting direct statements made by politicians from its fact-checking process. In justifying the decision, Meta stated that political speech was inherently newsworthy and should be available for public scrutiny without interference from the company. The platform also claimed that the policy aimed to uphold free expression; in practice, however, it created an indefensible double standard in which regular users were subjected to fact-checking scrutiny while politicians, who arguably wield even more power and influence with their direct statements, were not. In my opinion, this exemption undermined the platform’s credibility and highlighted a desire to cherry-pick when and how to apply its policies.
Additionally, Meta’s fact-checking program has proven problematic in its execution, as evidenced by several high-profile instances of how the company handled alleged misinformation. For example, the COVID-19 lab leak theory was originally deemed misinformation and suppressed on Meta’s platforms. Multiple US federal agencies and congressional investigations later acknowledged the theory as a plausible explanation for the origin of the virus, and Meta ultimately lifted its ban on lab leak claims in 2021.
Meta’s premature dismissal of this narrative stifled legitimate debate and inquiry, revealing the dangers of fact-checking that fails to account for evolving science. Similarly, during the pandemic, Meta labeled content questioning mask efficacy as misleading or false. Over time, public health experts acknowledged that the effectiveness of masks depended on factors such as type, fit, and proper usage, nuances that Meta’s broad application of fact-checking often overlooked.
The New York Post’s Hunter Biden laptop story serves as another example. The story was initially dismissed as potential “Russian disinformation,” including by many prominent news outlets, and was subsequently suppressed on Meta’s platforms. However, later forensic analyses and investigations, including testimony during Hunter Biden’s 2024 federal gun case, corroborated the laptop’s authenticity.
Reflecting on Meta’s actions in an August 2024 letter to the House Judiciary Committee, Mark Zuckerberg admitted, “It’s since been made clear that the reporting was not Russian disinformation, and in retrospect, we shouldn’t have demoted the story.” Together, these instances highlight the flawed nature of Meta’s fact-checking process, which has prematurely dismissed complex or controversial narratives, with significant consequences for public discourse.
Subjectivity in Meta’s Fact-Checking Program
Stossel v. Meta Platforms, Inc. reveals that Meta’s fact-checking process was also more subjective than it might have appeared to the public. In the case, journalist John Stossel sued Meta and Science Feedback, a French nonprofit organization that owns the fact-checking website “Climate Feedback,” alleging defamation.
Stossel posted two videos: one entitled “Government Fueled Fires,” which discussed the California wildfires of 2020, and another entitled “Are We Doomed?” which questioned claims made by individuals Stossel considered “environmental alarmists.” In the first video, Stossel explored the hypothesis that, while climate change contributed to the wildfires, poor forest management was a more significant factor. It is important to note that Stossel did not explicitly claim the fires were caused solely by poor forest management and not by climate change. Despite this, Facebook, in conjunction with its third-party fact-checker Climate Feedback, labeled the video “Missing Context.”
When viewers clicked on the “See Why” label, they were shown a text box stating that “independent fact-checkers say this information is missing context and could mislead people.” The text box also linked to a Climate Feedback article, which routed viewers to a page that stated: “Claim – ‘forest fires are caused by poor management. Not by climate change.’ Verdict: misleading.” Stossel argued that the label and the associated link falsely implied that he had made the claim being evaluated, leading viewers to believe his reporting was inaccurate when, in fact, the disputed claim was never his to begin with.
In the second video, “Are We Doomed?” Stossel included clips of a panel discussion he hosted on climate change that explored a wide range of topics, including rising sea levels, the strength of hurricanes, and the function of carbon dioxide as both a greenhouse gas and a vital resource for crop growth.
Facebook labeled the video as “Partly False” and directed viewers to another Climate Feedback fact-check page with the caption: “Video promoted by John Stossel for Earth Day relies on incorrect and misleading claims about climate change.” However, according to Stossel’s complaint, the fact-check page did not identify any false facts in Stossel’s report.
Rather, Stossel contended, “the very language of the ‘Fact-Check’ Page confirmed that what was being checked was Stossel’s ‘reasoning’ and ‘overall scientific credibility’ – not the underlying facts cited in his journalism.” For instance, in the video, panelist David Legates, a climatology professor at the University of Delaware, noted that sea levels “have been rising for approximately 20,000 years.” That statement was consistent with the data referenced by Climate Feedback on its own fact-check page.
According to Stossel, these actions significantly reduced the visibility, and therefore the revenue potential, of both videos. He subsequently sued Meta and Science Feedback for defamation, claiming that the labels falsely attributed statements to him that he never made and damaged his reputation.
A federal district court in California dismissed the case and granted Facebook’s anti-SLAPP (anti-strategic lawsuit against public participation) motion, stating: “Simply because the process by which content is assessed and a label applied is called a fact-check does not mean that the assessment itself is an actionable statement of objective fact.”
Essentially, what Facebook marketed as fact-checking was, in practice (as the court determined), a process of subjective evaluation, meaning the fact-check was more opinion than fact. This disconnect between public perception and the actual nature of fact-checking underscores why we should be cautious about placing unfettered trust in these systems. For many social media users, the term “fact-check” implies an authoritative, objective determination of truth. In reality, these fact-checks often reflect differing interpretations of complex topics, as the Stossel case demonstrates.
The Issue with Overreliance on Third-Party Fact-Checkers
The Stossel case exposes the dangers of over-relying on private systems of truth arbitration. Fact-checkers are fallible because they rely on the same imperfect tools as the rest of us: incomplete data, evolving science, and the biases of those interpreting the evidence. When private corporations like Meta empower third-party fact-checkers to police content, they become judge and jury on matters of truth, a precarious position for any private entity and a title the platform has, in one way or another, vehemently rejected for years. Their reach and influence over public discourse mean that mistakes in judgment or biases in their processes are amplified, often with lasting consequences.
That said, Meta’s decision to discontinue its fact-checking program also comes with serious consequences. Operating without a system to moderate false content could exacerbate the spread of misinformation on the platform and encourage bad actors to intentionally flood online spaces with false information. I strongly believe that Meta’s decision is a missed opportunity to strengthen its system, learn from past mistakes, and implement reforms that balance the need for accuracy with transparency and accountability. A better approach would have been to rethink the fact-checking process, improve its methodologies, diversify its sources, and, most importantly, strip subjectivity out of the process, rather than abandoning it altogether.
Ultimately, the issue goes beyond Meta or any other social media platform. It speaks to a broader societal problem: our over-reliance on centralized authorities to tell us what is true. This dependence is not only problematic; it is dangerous. It absolves individuals of the responsibility to think critically, evaluate information, conduct their own research, and challenge the status quo and popular mainstream narratives.
Furthermore, it grants these centralized authorities the power to define what is “true,” often with far-reaching implications for public understanding and discourse. We must recognize that truth is not always black and white; it can be shaped by evolving evidence and competing perspectives. We must also recognize that a well-informed public requires a measure of skepticism and a willingness to question narratives and explore alternative viewpoints.
At the same time, we must acknowledge that in an online world steeped in mis- and disinformation, genuine fact-checking plays a critical role in our online spaces. The challenge lies in building systems we can trust without becoming dependent on them, while committing to transparency and fostering a culture of personal responsibility in seeking the truth.