Perspective
Meta Is Ditching Fact-Checkers. That Doesn’t Have to Be a Disaster

Ilana Strauss / May 23, 2025

Screenshot of an announcement from Meta Chief Global Affairs Officer Joel Kaplan posted on Meta's website.

Meta’s decision to end its US fact-checking partnerships and pilot a Community Notes–style system marks a major shift in how platforms approach misinformation. It’s not alone: X has already embedded crowdsourced fact-checking into the heart of its trust model, and YouTube is experimenting with similar ideas. The message is clear: Platforms are shifting away from expert-led moderation and toward the so-called “wisdom of the crowd.”

As someone who’s worked inside professional systems—as a fact-checker on contract for Facebook and TikTok via PolitiFact, and now as the founder of News Detective, a civic tech platform incubated at MIT—I understand the tradeoffs. And I believe the most promising future isn’t either/or. It’s both.

Why the Old Model Struggled

When I worked on Meta’s fact-checking program, our team often faced queues of thousands of flagged posts. Most would never be reviewed. Meta’s systems could surface massive volumes of questionable content, but the editorial teams couldn’t keep pace. A Columbia Journalism Review piece once noted that across all US fact-checking partners, only about 14 Facebook checks were completed per day. Another report found most checks were too slow to curb virality—less than half were completed within a week.

And even when fact-checks were timely and accurate, they didn’t always earn public trust. Particularly among conservatives, professional fact-checkers were seen as gatekeepers with political agendas. As Tech Policy Press has covered, the perception of bias—not necessarily the reality—undermined the model’s credibility, especially in polarized environments where trust was already fragile.

Why the New Model Feels Risky

Crowdsourced fact-checking, in theory, solves the scale problem. Anyone can contribute; the community votes on what’s accurate. But without infrastructure, these systems risk devolving into popularity contests. Research has shown that crowdsourced systems like Community Notes can be slow to respond—and their effectiveness often hinges on whether corrections gain enough consensus to be shown at all. In polarizing cases, factual notes may never appear if they don’t align with the majority sentiment. As Baybars Orsek put it, “Having various viewpoints is not the same as possessing the expertise needed to assess the accuracy of information.”

The risk is clear: We replace slow, underfunded fact-checking with fast, shallow reputation scoring.

A Hybrid Model That’s Already Working

That’s why I founded News Detective in 2020: a platform that blends the scale and engagement of community-based models with the accuracy and oversight of professional ones.

We partner with university classes to train students to assess real claims circulating on social media. Students submit fact-checks, which are reviewed by a professional moderator with experience at PolitiFact. Once approved, the fact-checks appear directly on the posts where the misinformation originated. Instructors report that the system is easy to integrate: it typically requires no class time and only about 15 minutes of homework per week. Though many participants are journalism students, the model works just as well across disciplines. In fact, across multiple courses, completing the short training video we provide has predicted fact-checking quality better than academic background does—suggesting the model may also be viable for high school students or members of the public with similar support.

We’ve deliberately partnered with universities across different regions of the country—including the University of Illinois, the University of Oregon, and Howard University—to ensure our fact-checking community isn’t drawn from a single geographic or cultural background. So far, we’ve trained approximately 2,000 students and are currently piloting the system on Bluesky (Bluesky has awarded News Detective a microgrant).

What We’ve Learned

In post-assignment surveys, 92% of students said participating made them better at spotting misinformation. One wrote:

“This assignment gave me great insight as to how disinformation can be spread so quickly across the internet, though it also provided me with the tools necessary to look beneath the surface of these claims and find the truth.”

Others described changes in behavior: pausing before reposting, tracing sources more often, or noticing framing in headlines. The takeaway: when given the tools and opportunity, people are eager to develop fact-checking skills—and they do.

In one recent week, a professional fact-checker working five hours approved approximately 250 student-submitted fact-checks—concise entries that included a claim, rating, citations, and brief explanation. That translates to about 50 fact-checks per hour, compared to the typical four per hour in traditional workflows—a more than twelvefold increase in throughput. These results suggest that quick, transparent checks may be more impactful for many types of misinformation than long-form analysis.

Our website currently receives around 20,000 views per month. While we don’t yet have visibility into views on platforms like Bluesky, the public engagement so far suggests this hybrid approach resonates.

What Platforms Should Do Now

Tech companies face a dilemma: They need to moderate content at scale, avoid the perception of political bias, and preserve user trust—all without breaking the bank. However, outsourcing accuracy to untrained crowds is not a solution. Building hybrid systems is.

This means:

  • Supporting programs that combine peer participation with professional oversight
  • Treating fact-checking as civic infrastructure, not just a PR liability

Our media ecosystem needs more than better tools—it needs better habits. And it’s worth trying transparent, participatory, scaffolded systems. If community-based fact-checking is going to dominate the next era, let’s make sure the crowd understands how news is made.

Systems like these are already working—but they’re still rare. With the right support, hybrid fact-checking models could expand well beyond a few pilot campuses. The infrastructure is here—and it’s ready to grow.

Author

Ilana Strauss
Ilana E. Strauss is a journalist and fact-checker who has written for The Atlantic, National Geographic, Reader’s Digest, and PolitiFact. She is the founder of News Detective, a civic tech project incubated at MIT that combines professional oversight with community-based fact-checking.
