After the Meta 2020 US Elections Research Partnership, What’s Next for Social Media Research?

Laura Edelson / Aug 3, 2023

Laura Edelson is an incoming Assistant Professor of Computer Science at Northeastern University, and the former Chief Technologist of the Antitrust Division of the Department of Justice.

Image by Alan Warburton / Better Images of AI / Social Media / CC-BY 4.0

Last week, Science and Nature published four studies examining the impact of Facebook’s recommendation algorithm during the 2020 US presidential election. These studies have generated headlines because of what they did not find: no significant differences in participants’ polarization were detected during the three-month study window. However, the three studies that altered users’ algorithmic recommendations did show significant changes to the content in participants’ feeds and to those users’ on-platform posting behavior.

These findings point to promising lines of future research. Still, neither Meta nor any other social media company has committed to allowing such research on its platforms during the next election. Americans from all walks of life and both sides of the partisan divide are concerned about social media, and social and computer science researchers like me are racing to better understand why recommendation algorithms behave the way they do and what impact those algorithms have on users. To ensure that social media research on topics from polarization to teen depression can continue to advance, Congress must pass the Platform Accountability and Transparency Act.

The studies published last week explored three facets of Facebook’s recommendation system: the ranking algorithm used to prioritize which content appears in users’ feeds, the impact of users’ friends’ behavior, and the impact of one particular type of user behavior, reshares. The experimental studies did this by altering the recommendation algorithms of selected, consenting participants and comparing their resulting feeds to a control group’s. A fourth, observational study explored how users across the political spectrum engaged with content from different sources.

The impact of content ranking on what users are recommended is powerful: in the study that compared an algorithmic feed to a chronological one, users with the algorithmic feed saw less political content in total but more content from what the study terms ‘like-minded’ and ‘cross-cutting’ users, and less content from moderate sources. What does this mean? The ranking algorithm shows users less politics, but the politics it does show them is more polarized, with the ideological middle less likely to be represented. Users receiving the algorithmic feed also behaved differently on the platform: they were more likely to post about politics and voting, even though they saw less political content. This connection between changes to the recommendation algorithm and changes in users’ posting behavior is a fascinating finding that should be followed up with future research.

When the researchers explored what content users at different ends of the partisan spectrum were recommended and what they engaged with, they found that misinformation is highly concentrated on the right. Broadly, this finding is consistent with other studies, but because the researchers could see both what content users were shown and what content they engaged with, we now better understand why this happens. Facebook’s recommendation algorithm maximizes user engagement, and this study found that misinformation was more engaging to right-wing audiences. The algorithm also appeared to rank misinformation more highly for right-wing users, exposing them to more of it.

This pattern is a feedback loop, but Facebook’s algorithm is doing what it was trained to do: a particular type of content is more engaging to an audience, so the algorithm shows those users more of that content. Pages and Groups appear to be a particularly potent source of right-wing misinformation. As the researchers say, “Pages and groups benefit from the easy reuse of content from established producers of political news and provide a curation mechanism by which ideologically consistent content from a wide variety of sources can be redistributed.”
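To make the mechanics of that feedback loop concrete, here is a minimal, purely illustrative simulation in Python. It is not Meta’s actual system: the engagement rates are invented, and a simple explore-and-exploit ranker stands in for a real recommendation model. The point is only that when a ranker learns from the engagement it observes, a small difference in what an audience responds to can compound into a large difference in what that audience is shown.

```python
# Toy simulation of an engagement-maximizing ranker (illustrative only; not
# Meta's system). The engagement rates below are invented for the example.
import random

random.seed(0)

CONTENT_TYPES = ["reliable_news", "misinformation"]

# Hypothetical audience that engages somewhat more with misinformation.
TRUE_ENGAGEMENT_RATE = {"reliable_news": 0.10, "misinformation": 0.15}


def simulate(rounds=2000, feed_size=5, explore=0.05):
    shown = {t: 1 for t in CONTENT_TYPES}    # impressions (start at 1 to avoid /0)
    engaged = {t: 0 for t in CONTENT_TYPES}  # observed clicks/reshares

    for _ in range(rounds):
        for _slot in range(feed_size):
            # The ranker's current estimate of how engaging each content type
            # is, based only on what it has shown and observed so far.
            estimated = {t: engaged[t] / shown[t] for t in CONTENT_TYPES}

            if random.random() < explore:
                item = random.choice(CONTENT_TYPES)           # occasional exploration
            else:
                item = max(CONTENT_TYPES, key=estimated.get)  # exploit top estimate

            shown[item] += 1
            if random.random() < TRUE_ENGAGEMENT_RATE[item]:
                engaged[item] += 1  # the audience engages; the ranker learns from it

    return shown


if __name__ == "__main__":
    exposure = simulate()
    total = sum(exposure.values())
    for content_type, impressions in exposure.items():
        print(f"{content_type}: {impressions / total:.0%} of impressions")
```

In this toy setup, the simulated feed tends to tilt toward whichever content type the audience engages with more, even though the ranker has no preference of its own; it is simply optimizing the signal it is given.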

Of course, the elephant in the room is this: no one knows if Meta will ever allow researchers to conduct similar experiments again. Indeed, no other social media company has announced plans to permit such access. Meta deserves credit for allowing this work to take place, but this model of voluntary access is a fraught bargain for the public. No matter how well-meaning and professional researchers are, when platforms choose who gets to study them, citizens will wonder what questions the researchers were not allowed to ask.

But there is a way forward: social media platforms can be required, as companies in other important industries already are, to make more information about their products transparent to users and researchers. The Platform Accountability and Transparency Act, a bipartisan bill recently reintroduced in the US Senate, aims to do just that. By requiring social media companies to be more transparent with users about their algorithms and advertising practices, and by allowing vetted researchers to study the platforms, we can build on the promising work published last week. We still have open questions about how platforms from Instagram to TikTok recommend content to their users, why some users (such as teen girls) have very different feeds and outcomes from others, and which aspects of recommendation algorithms shape what users see. Answers to these questions will benefit everyone, and Congress can act this year to open the floodgates of research.
