Mark Zuckerberg’s Immoderate Proposal

David Lazer, Sandra González-Bailón / Jan 21, 2025

David Lazer is University Distinguished Professor of Political Science and Computer Sciences at Northeastern University, a faculty fellow at the Institute for Quantitative Social Science at Harvard, and an elected fellow of the National Academy of Public Administration. Sandra González-Bailón is the Carolyn Marvin Professor of Communication at the Annenberg School, University of Pennsylvania, where she directs the Center for Information Networks and Democracy.

Meta founder and CEO Mark Zuckerberg attends the inauguration ceremony where Donald Trump was sworn in as the 47th US President in the US Capitol Rotunda in Washington, DC, on January 20, 2025. (Photo by KENNY HOLSTON/POOL/AFP via Getty Images)

Meta founder and CEO Mark Zuckerberg announced on January 7 that Meta was jettisoning the network of independent fact-checkers it had relied on for much of the last decade. Moving forward, he stated, it would replace fact-checking with a system like X’s “Community Notes.” In the words of Joel Kaplan, a longtime Meta policy executive and its newly appointed Chief Global Affairs Officer, the move would allow the “community to decide when posts are potentially misleading and need more context.”

We are part of a team of independent researchers that was given unprecedented access to Facebook and Instagram data during the 2020 US election. We led the most comprehensive research to date evaluating the likely effects of content moderation on the diffusion of information, including posts fact-checked as false. Given what we learned about Meta’s content moderation machinery, we are very skeptical of the changes Zuckerberg announced. We are also concerned that no one outside of Meta will know what effects this change in policy will have on the information users see.

The core of Meta’s argument is that fact-checkers are inaccurate in their assessments. As Kaplan stated in his post: “…in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies).” As a result of these errors, Kaplan argued, “too many people find themselves wrongly locked up in 'Facebook jail.'”

We believe this concern is deeply misplaced. Our findings illuminate why. We found that, circa 2020, the main way information spread on Facebook was through Pages, given their disproportionately large audiences. Misinformation, however, spread very differently: it circulated largely among Friends, from user to user, exposing millions of other users in the process.

Pages are the most effective means of spreading information, yet they were not heavily used to spread misinformation. Why? Because Facebook reduced the visibility of Pages that were repeat offenders, a penalty it did not apply to individual users. The repeat offender policy, and the way in which it was applied, created incentives for sharers of misinformation to cultivate alternative ways of disseminating content that evaded enforcement. Our results show that most of the misinformation circulating on the platform was disseminated by a tiny number of users (less than 1%), yet it still exposed millions of other users.

Misinformation, in other words, found a crack in Meta’s content moderation machinery. Soon after the 2020 election, Meta extended its repeat offender penalties to users, as well, to patch up this gap in enforcement.

This phenomenon of supersharers of misinformation, which research has found also exists on other platforms such as X, is the Achilles’ heel of the misinformation sub-ecosystem, and it is this vulnerability that fact-checking primarily targets. Identifying a piece of misinformation is indeed tricky. Misidentification is not only a matter of labeling as false what is true; it is also a matter of letting problematic content fly under the radar, which is a bigger source of error. And it is a matter of classifying content quickly enough, because much of the spread of a piece of misinformation occurs within a matter of days.

Achieving sufficient speed, scale, and accuracy to slow misinformation directly is a monumental (perhaps impossible) task, even for Meta. However, identifying the supersharers of misinformation is not hard, because their behavior is so distinctive. A 10-20% error rate at identifying any single piece of false content shrinks to near zero for accounts that share dozens of flagged posts, as the back-of-the-envelope calculation below illustrates. Zuckerberg’s “Facebook jail” is mostly for supersharers, not the incidental jaywalker who shared something that was erroneously classified as false.
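To make that intuition concrete, here is a rough illustration of our own (not Meta’s methodology). It assumes a 20% per-item misclassification rate, the upper end of Kaplan’s “one to two out of every 10” figure, and treats errors as independent across posts, which is a simplification:

```python
from math import comb

# Hypothetical sketch: per-item misclassification rate of 20%,
# errors assumed independent across posts.
error_rate = 0.2
flags = 20  # a hypothetical user with 20 fact-checked posts

# Probability that every single flag on this user's posts was a mistake
p_all_wrong = error_rate ** flags

# Probability that a majority of the flags (11 or more of 20) were mistakes,
# from the binomial distribution
p_majority_wrong = sum(
    comb(flags, k) * error_rate**k * (1 - error_rate) ** (flags - k)
    for k in range(flags // 2 + 1, flags + 1)
)

print(f"P(all 20 flags were errors):      {p_all_wrong:.1e}")       # ~1.0e-14
print(f"P(majority of flags were errors): {p_majority_wrong:.1e}")  # ~5.6e-04
```

Even under these simplifying assumptions, the odds that a user who accumulates twenty flags was wrongly flagged most of the time are well under 0.1%.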

There are two inarguable facts. First, what Meta does matters. It is the dominant social media company in the United States, and its various online properties account for about half of social media use today.

Second, social media companies are always choosing what people do and do not see. Facebook, Instagram, and the like show users only a tiny fraction of what they could see. Every time you open Facebook, Instagram, or Threads, Meta chooses to put some content at the top of your feed while burying other content you will never see. This is even more the case now than in 2020, because Meta now selects content from beyond the set of accounts you decided to follow. The change in policy announced on January 7 means that the information fact-checkers provided to Meta will no longer be factored into that selection of content. It is unclear how community notes will factor in, or how vulnerable they will be to coordinated efforts to tilt the conversation (remember the #StoptheSteal campaign?).

We anticipate that the elimination of penalties for sharing misinformation will unleash a flood of it from some Pages, which remain the most effective means of spreading content on Facebook. Given misinformation’s newfound access to Pages, will the new system slow the spread of fake cancer cures or misinformation about where to vote? We are skeptical, and we will likely never know the answer, because Meta has not invited any further independent research to evaluate the effects of its actions.

The prospect of a large corporation filtering what you see from people you know and information sources you follow is not one that we embrace with enthusiasm. But that is part of the bargain you make when you use Facebook (and other social media). When you look at your Facebook newsfeed today, with or without these measures in place, Meta will still be choosing the speech you see.
