The Need to Make Content Moderation Transparent
Sandra González-Bailón, David Lazer / Dec 11, 2024

Sandra González-Bailón is the Carolyn Marvin Professor of Communication at the Annenberg School, University of Pennsylvania, where she directs the Center for Information Networks and Democracy. David Lazer is University Distinguished Professor of Political Science and Computer Sciences at Northeastern University, a faculty fellow at the Institute for Quantitative Social Science at Harvard, and an elected fellow of the National Academy of Public Administration.
In August 2024, Pavel Durov and Elon Musk, the CEOs of social media platforms Telegram and X, saw their commitment to free speech tested within days of each other.
In Durov's case, the French government arrested him for allegedly allowing criminal activity on his app. In Musk's case, a Brazilian judge ordered access to the platform blocked after it failed to comply with a request to suspend certain accounts. Durov and Musk are both known for their outspoken defense of light-touch content moderation. Each ended up folding to the demands of the French government and the Brazilian judge, respectively.
These incidents showcase how fleeting these CEOs’ professed values can be when confronted with the bottom line. They also highlight two growing concerns: governments are increasingly worried about the global power platforms wield to control information flows, and the public is increasingly alarmed by how platforms can privilege some voices over others. When someone like Musk decides to take a prominent role advising the US government, those concerns gain additional urgency: will the platform he owns, X, simply become a propaganda arm of the Republican Party?
Promoting and demoting content is not intrinsically bad. Indeed, all major social media platforms actively do it in some fashion. Content moderation – the practice of promoting, demoting, and labeling information – serves an essential purpose: to protect safety and ensure users have a positive experience. However, content moderation also gives platforms (and those who can pressure them) unprecedented power to control what information circulates and becomes accessible.
The opacity surrounding the exercise of this power makes it difficult for the public to evaluate its scope and consequences and impossible for individuals to make informed choices about which platforms to use.
In a recent study, we analyzed unprecedented data – all content that spread on Facebook during the 2020 election – to assess the hidden patterns of information propagation on social media as content moderation measures unfolded (and shifted) over time.
The results of the study suggest that a set of extreme content moderation interventions by Facebook, known as “break the glass” measures, seems to have led to a substantial reduction in the diffusion of content fact-checked as false. One of these measures, for instance, was called the “virality circuit breaker,” and it was designed to hinder the spread of fast-moving content through the platform. We find that labeled misinformation went from accumulating more than 50 million views in July 2020 to close to zero in the days before the election.
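Facebook has not disclosed how its virality circuit breaker is implemented, but the general logic can be illustrated with a minimal, hypothetical sketch: posts whose share velocity exceeds some threshold are temporarily down-weighted in ranking pending review. The threshold, window, demotion factor, and all names below are illustrative assumptions, not Meta’s actual system.

```python
import time
from collections import deque
from typing import Deque, Dict, Optional, Set

# Hypothetical sketch of a virality circuit breaker: if a post accumulates
# shares faster than a threshold within a rolling window, its distribution
# is demoted until it is reviewed. All parameters are illustrative.

SHARE_WINDOW_SECONDS = 3600      # rolling window: shares in the last hour
VELOCITY_THRESHOLD = 10_000      # shares per window that trip the breaker
DEMOTION_FACTOR = 0.1            # ranking weight applied while tripped


class ViralityCircuitBreaker:
    def __init__(self) -> None:
        self.share_times: Dict[str, Deque[float]] = {}  # post_id -> share timestamps
        self.tripped: Set[str] = set()                  # posts currently demoted

    def record_share(self, post_id: str, now: Optional[float] = None) -> None:
        now = time.time() if now is None else now
        times = self.share_times.setdefault(post_id, deque())
        times.append(now)
        # Drop shares that fall outside the rolling window.
        while times and now - times[0] > SHARE_WINDOW_SECONDS:
            times.popleft()
        if len(times) > VELOCITY_THRESHOLD:
            self.tripped.add(post_id)  # breaker trips: demote pending review

    def ranking_multiplier(self, post_id: str) -> float:
        # A feed-ranking system would multiply the post's score by this value.
        return DEMOTION_FACTOR if post_id in self.tripped else 1.0

    def clear_after_review(self, post_id: str) -> None:
        # A review (e.g., a fact-check) could restore normal distribution.
        self.tripped.discard(post_id)
```

In this sketch, fast-spreading content is slowed rather than removed, which matches the stated goal of hindering diffusion: the breaker reduces a post’s reach while it is being evaluated instead of deleting it outright.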
If we agree that online spaces are better with less unreliable content circulating through users’ feeds, these results can be read as good news. However, the results are also clear evidence that platforms can and do control information flows in obscure ways.
Our current visibility into Meta’s content moderation, including its role in the 2024 election, is near zero. Meta claims it ran “a number of election operations centers around the world to monitor and react swiftly to issues that arose, including in relation to the major elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico and Brazil.” But what, exactly, the company did and to what effect is unknown. The platform has discontinued partnerships with external researchers and eliminated its main public-facing transparency tool, CrowdTangle. (Meta did not extend the research collaboration that gave us access to the 2020 election data.)
Information is power, and the ability to invisibly control access to information is unchecked power. This is an unacceptable status quo and a concern on both sides of the ideological and partisan spectrum.
Justice Antonin Scalia once stated that “you are entitled to know where speech is coming from.” In the current era, we would argue that you are also entitled to know what speech platforms make visible.
Content moderation gives platforms the ability to activate the levers that can short-circuit the dissemination of content and, therefore, its visibility. Our research shows that this is a power they exercise with consequence at moments of collective vulnerability. Assuming that platforms act for the betterment of all is, at this point, one assumption too many. It should not be up to platforms to decide, by themselves, whether or when they get it wrong. With power should come accountability, and accountability requires transparency: data-driven assessments of platform activity unclouded by conflicts of interest or PR rhetoric.
Mandates that require platforms to make data available, such as those encoded in the Digital Services Act in Europe, offer one mechanism for transparency – although we do not yet know whether platforms will fully comply. There is no similar mandate in the US. Absent such requirements, Big Tech’s power to shape the information landscape will remain unchecked.
Related Reading
- We Know a Little About Meta’s “Break Glass” Measures. We Should Know More.
- A Primer on the Meta 2020 US Election Research Studies
- After the Meta 2020 US Elections Research Partnership, What’s Next for Social Media Research?
- Examining the Meta 2020 US Election Research Partnership
- The Politics of Social Media Research: We Shouldn’t Let Meta Spin the Studies It Sponsors