Using Science To Guide Social Media Regulation

Silvia Giordano, Filippo Menczer, Natascha Just, Florian Saurwein, John Bryden, Luca Luceri / Jan 9, 2023

Silvia Giordano is a Professor in the Department of Innovative Technologies, University of Applied Science and Arts, Lugano, Switzerland; Filippo Menczer is the Distinguished Luddy Professor and Director of the Observatory on Social Media, Indiana University, USA; Natascha Just is Professor and Chair, Media & Internet Governance Division, University of Zurich, Switzerland; Florian Saurwein is a member of the research staff in the Media & Internet Governance Division, University of Zurich, Switzerland; John Bryden is affiliated with the Observatory on Social Media, Indiana University, USA; and Luca Luceri is Research Scientist, Information Sciences Institute, University of Southern California, USA.

In October 2022, Elon Musk purchased Twitter, declaring that "the bird is freed" and that freedom of expression would be a priority on his platform. Although little is known about Musk's understanding of free speech, many feared that his policies would lead the platform to neglect moderation of harmful content, ranging from disinformation to hate speech. Stoking these fears, Musk reinstated users who had been banned for violating the platform’s rules on election misinformation and incitement to violence; retweeted fake news about the attack on House Speaker Pelosi’s husband; gutted teams responsible for trust and safety issues; attacked Anthony Fauci; spread disinformation about Twitter’s Trust and Safety Council and the former head of trust and safety; promoted the QAnon conspiracy theory; suspended the accounts of several journalists; and opened the blue check mark, previously reserved for verified accounts, to paid subscribers, leading to a proliferation of fake accounts. In response to public outcry, Musk also deleted some of his tweets and placed the blue check subscription service temporarily on hold.

Science could help policymakers understand which regulations work and what their unintended consequences can be, whether they are internal platform policies or rules imposed by legislation. A rigorous scientific approach could prevent much of the chaos that we are currently witnessing as Musk tries new approaches that scare away advertisers and users.

Researchers have already shown that, by eliminating barriers to information sharing and by algorithmically amplifying engagement, social media have facilitated the viral and global distribution of harmful content such as hate speech and disinformation. Social media have been effectively weaponized in modern social, political, civil, and conventional wars. They have been exploited to spread lethal disinformation: hate content fomenting ethnic violence and genocide, Russian information operations influencing Brexit and the 2016 U.S. election, ongoing Russian propaganda about the war in Ukraine, and false claims about COVID health policies.

How are social media companies handling such dangerous manipulations? The climate of uncertainty in the Twittersphere reverberates across other platforms, whose moderation policies during the recent U.S. midterm elections were neither clearly communicated nor consistently enforced. The reluctance of social media companies to handle harmful content effectively, together with the lack of clarity and transparency in their moderation policies, has led to renewed discussions about the need for regulation of social media platforms.

Platforms such as Facebook, Twitter, and YouTube enjoy a liability privilege because they do not have to take action against illegal content as long as they are not aware of it. However, the platforms establish community standards and complex content governance systems to identify, filter, delete, block, down-curate, or flag problematic content. Twitter and Facebook, for example, have developed moderation policies aimed at reducing harm, even though political, economic, and normative factors stand in the way of consistent enforcement of these policies. But even the existing policies could be quickly erased.

Musk says that the moderation of hate speech and disinformation hinders free speech, but without such moderation we would revert to the situation of a few years ago, when the information ecosystem was even more flooded with speech that polluted and poisoned public discourse. In fact, our research shows that weaker moderation ironically hurts free speech: the voices of real people are drowned out by malicious users who manipulate platforms through inauthentic accounts, bots, and echo chambers.

Musk is not alone. Several Republican-led states have already tested the waters with bills that would prohibit the banning of users and other forms of moderation. So far these have been blocked in the courts. However, Republicans see moderation as a First Amendment issue and will continue to push against it. Ongoing political gridlock seems likely within the U.S., and if the E.U. takes the lead, as with the Digital Services Act, this may even provoke a backlash. There is a need to find workable, evidence-based policy that limits the harm from online hate speech and disinformation before it irreparably damages our democratic institutions.

How can regulators contribute to enhancing the information ecosystem? The legal and technological transformations needed to effectively mitigate harmful social media abuse present formidable challenges. Policymakers have limited access to social media data, statistics, metrics, and algorithms for understanding and handling online manipulation. They are unable to predict the effectiveness and impact of specific regulations, including their unintended consequences. For example, when is it more effective to add friction to information sharing than to decrease the visibility of suspicious content, label debunked claims, or suspend bad actors? When are such steps ineffective, or worse, counterproductive? We lack tools to answer these kinds of questions. Furthermore, it is difficult to adapt methodologies to the distinct contexts of different countries. Government regulation can have very different goals and consequences in a democratic versus a repressive regime.

To address these challenges, we need a clear, traceable, and replicable methodology to craft and evaluate policy recommendations for preventing and curbing abuse. This requires a transdisciplinary effort. Inputs from media policy and governance research should be used to formulate a set of policy alternatives that are expected to produce effective solutions for the mitigation of online harm. Computational social science methodologies, in turn, should be leveraged to model the effects of moderation policies and quantify their impact.
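To make the modeling component concrete, here is a minimal, illustrative sketch in Python of how such an analysis might begin: a toy independent-cascade simulation on a random follower network that compares the average reach of harmful content under no intervention, added sharing friction, and suspension of some seeding accounts. Every parameter, network assumption, and effect size below is a hypothetical placeholder for illustration; it is not the methodology or data of our project or of any platform.

```python
# Toy comparison of moderation interventions in a simple cascade model.
# All numbers are hypothetical assumptions chosen only for illustration.

import random
from statistics import mean

random.seed(42)

N_USERS = 2_000          # size of the toy network (assumed)
AVG_FOLLOWERS = 20       # followers per user (assumed)
BASE_SHARE_PROB = 0.05   # chance a follower reshares harmful content (assumed)
N_BAD_ACTORS = 50        # accounts seeding the harmful content (assumed)
N_RUNS = 200             # Monte Carlo repetitions per scenario


def make_network(n_users, avg_followers):
    """Build a random directed follower graph: followers[u] = users who see u's posts."""
    return {u: random.sample(range(n_users), avg_followers) for u in range(n_users)}


def simulate_cascade(followers, seeds, share_prob, suspended=frozenset()):
    """Spread one harmful item via independent cascades; return how many users it reaches."""
    reached = set(seeds) - suspended
    frontier = list(reached)
    while frontier:
        next_frontier = []
        for user in frontier:
            for follower in followers[user]:
                if follower in reached or follower in suspended:
                    continue
                if random.random() < share_prob:
                    reached.add(follower)
                    next_frontier.append(follower)
        frontier = next_frontier
    return len(reached)


def average_reach(followers, seeds, share_prob, suspended=frozenset()):
    """Average reach over repeated stochastic simulations."""
    return mean(
        simulate_cascade(followers, seeds, share_prob, suspended)
        for _ in range(N_RUNS)
    )


if __name__ == "__main__":
    network = make_network(N_USERS, AVG_FOLLOWERS)
    bad_actors = random.sample(range(N_USERS), N_BAD_ACTORS)

    baseline = average_reach(network, bad_actors, BASE_SHARE_PROB)
    # Intervention 1: add friction, halving the reshare probability (assumed effect size).
    friction = average_reach(network, bad_actors, BASE_SHARE_PROB / 2)
    # Intervention 2: suspend half of the detected bad actors (assumed detection rate).
    suspended = frozenset(bad_actors[: N_BAD_ACTORS // 2])
    suspension = average_reach(network, bad_actors, BASE_SHARE_PROB, suspended)

    print(f"Average reach, no intervention: {baseline:.0f} users")
    print(f"Average reach, with friction:   {friction:.0f} users")
    print(f"Average reach, with suspension: {suspension:.0f} users")
```

Running many such simulations, and calibrating them against real platform data, is one way researchers could estimate which intervention reduces the reach of harmful content most, and at what cost to legitimate speech, before a policy is deployed.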

We are involved in an international research project that aims to generate recommendations and quantitative evidence to classify regulatory policies and assess their expected impact on the information ecosystem. Such an effort may give platforms and regulators in any country a basis for designing timely, transparent, and effective policy interventions to mitigate social media abuse. But there is much work to do. Platforms that will be affected by existing and proposed regulatory legislation should support researchers and policymakers in their work to quickly understand these phenomena and reduce harm. Studies of clear and effective regulation, aligned with law, are a must for the current and future Musks of our society.

This post originally appeared at the website of the Indiana University Observatory on Social Media.

