
What Can We Learn from the First Studies of Facebook's and Instagram's Roles in the US 2020 Election?

Ravi Iyer, Juliana Schroeder / Aug 2, 2023

Recent studies are being over-interpreted by some as absolving social media of responsibility for incentivizing divisive content, write Ravi Iyer, managing director of the University of Southern California Marshall School’s Neely Center for Ethical Leadership and Decision Making, and Juliana Schroeder, a professor in the Management of Organizations group at the University of California-Berkeley’s Haas School of Business.

Meta headquarters, Menlo Park, California. Shutterstock

After a great deal of commendable and careful work, four of the first studies concerning Facebook's and Instagram's roles in the US 2020 election were released. These studies represent an unprecedented collaboration between academia and industry, with much to be learned from each one. But contrary to how many, including Facebook itself, have characterized the findings, they do not squarely address what is perhaps the most robust criticism of social media's effects on society: that optimizing for engagement incentivizes divisive content that, in the long term, can enhance polarization.

One of us (Iyer) worked on improving Facebook's effects on society for over four years, including in places such as Myanmar and India. Iyer collaborated on dozens of internal studies examining the effects of reshares and other algorithmic features of News Feed ranking, many of which are now public in the Facebook Papers disclosed by whistleblower Frances Haugen. Both of us have conducted dozens of academic studies on values, political attitudes, and polarization. As a result of the many studies conducted internally, Facebook eventually re-weighted the use of reactions, removed optimizations for comments and reshares for political content, and reduced the effect of comments and shares for all content, after having found that removing some engagement incentives reduced negative experiences (e.g., bullying reports, views of misinformation, and violent or graphic content). This echoed external research showing that divisive content is often highly engaging.

Building on such previous work, what can we learn from these new studies? Below are three things we have learned and one thing we cannot learn. Spoiler: These new studies are consistent with previous research, but not in the way that Meta may want you to think.

Three things we have learned:

  1. Chronological feeds may not improve social media. One experiment assigned a sample of consenting users to either a reverse-chronological feed or the normal News Feed algorithm and measured changes in the content they viewed, their engagement levels, and their political attitudes and knowledge. While some (e.g., Frances Haugen) have suggested that a chronological (or reverse-chronological) feed could be a way to improve Facebook, such a feed was never mentioned as a reasonable alternative in the Facebook Papers, in part because the inventory of content that would be chronologically ordered is only manageable in a world of ranking. Remember when a bug on the Facebook News Feed showed everyone every post made to any page you followed? Imagine seeing that information chronologically alongside posts from every group you are in and comments by every friend you have, with no penalty applied to hyper-active attention seekers. Although regulators have considered incentivizing chronological feeds, this study of the effects of a (reverse-)chronological feed suggests that it is not a reasonable alternative: it makes users like their content less, see more untrustworthy content, and, as a result, substantially decrease their time spent on the platform. Moreover, a (reverse-)chronological feed is not a reasonable baseline for making strong statements about the effects of the News Feed algorithm, absent details about what counts as inventory; a feed composed entirely of original posts from friends in reverse-chronological order would likely perform very differently. (The sketch after this list illustrates the contrast between chronological and engagement-ranked ordering.)
  2. Surveys of stable attitudes generally do not respond to short-term product changes. None of the four recently released papers showed a statistically significant effect of any of the tested interventions on attitudes. Since the attitudes measured were about politics and polarization, some have concluded that algorithms likely do not affect political attitudes at all. But in our experience elsewhere (including academic studies at the University of Southern California and the University of California-Berkeley, field studies with CivilPolitics, and product changes at Ranker), it is notoriously difficult to observe differences in measures of stable attitudes as a result of short-term experiences. Most people have fairly stable attitudes about anything they perceive to be important, and temporarily changing one source of information, while every other source remains constant, often does not produce a measurable effect.

Moreover, these null effects are not specific to polarization; they mirror this internal Facebook study (via Gizmodo) showing null effects of large algorithmic changes on meaningfulness and well-being. That study, which used a larger sample, concluded that “we probably can’t rely on summative sentiment surveys for future work” and instead suggested using “prevalence surveys”, which ask people about specific experiences with content; such experiences may have a cumulative effect over time. Null results for general attitudes were also found in this external study, in which a one-month break from Facebook led to changes in reported experiences but not in attitudes about others. In contrast to more general measures, measures of specific experiences with content have been found to have predictive power at Facebook, in external studies of Twitter, and across multiple social media platforms in our recently released Neely Social Media Indices. Future researchers should include both stable attitude measures and content-specific measures in their surveys so that null effects are more readily interpretable.

  3. Optimizing for reshares can substantially increase views of divisive content. In contrast to the lack of movement in survey measures of stable attitudes, in the paper that examined reshared content, removing reshared content substantially decreased political content from untrustworthy sources and partisan news clicks, consistent with what the Wall Street Journal reported happened when Facebook removed comment and share optimization for political content and with what the AP reported happened when Facebook removed popularity predictions from health content algorithms. Across studies, both internal and external to Facebook, reshared content tends to be more divisive, and optimizing for reshares leads to more divisive content. More broadly, this shows the power of algorithms to incentivize or disincentivize divisive content experiences, in contrast to the idea that algorithms do not play a major role in political discourse. Another paper showed that the base algorithm for the Facebook feed (relative to an experimental intervention that reduced exposure to content from like-minded sources by about one-third) resulted in less exposure to cross-cutting sources and more exposure to uncivil language. Clearly, algorithmic choices can make a difference in the amount of divisive content that is consumed; the sketch below illustrates the kind of ranking choices at issue. Facebook recently shared that time spent viewing content is used in numerous ranking models. Future researchers should examine whether, like optimizing for reshares, optimizing for time spent similarly increases exposure to divisive content.
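To make the mechanisms discussed in items 1 and 3 concrete, here is a minimal, hypothetical sketch of the two ordering approaches: a reverse-chronological feed over a broad inventory, and an engagement-ranked feed whose weights on predicted reshares and comments can be zeroed for political posts. All field names, weights, and functions are invented for illustration and do not describe Meta's actual ranking systems.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical illustration only: fields, weights, and function names are
# invented for exposition and do not reflect Meta's production systems.

@dataclass
class Post:
    author: str
    timestamp: float            # seconds since epoch
    is_political: bool
    predicted_reshares: float   # model-estimated engagement signals
    predicted_comments: float
    predicted_time_spent: float

def reverse_chronological_feed(inventory: List[Post]) -> List[Post]:
    # Everything eligible (pages, groups, friends' activity) is shown
    # newest-first, with no penalty for hyper-active attention seekers.
    return sorted(inventory, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(inventory: List[Post],
                           reshare_weight: float = 1.0,
                           comment_weight: float = 1.0,
                           time_weight: float = 1.0,
                           demote_political_engagement: bool = False) -> List[Post]:
    # Score each post by weighted predicted engagement. Setting
    # demote_political_engagement=True zeroes the reshare and comment weights
    # for political posts, mimicking the kind of change described above.
    def score(p: Post) -> float:
        demoted = demote_political_engagement and p.is_political
        w_reshare = 0.0 if demoted else reshare_weight
        w_comment = 0.0 if demoted else comment_weight
        return (w_reshare * p.predicted_reshares
                + w_comment * p.predicted_comments
                + time_weight * p.predicted_time_spent)
    return sorted(inventory, key=score, reverse=True)
```

Under this toy framing, the chronological-feed experiment compares the two functions, while the reshare findings correspond to changing the weights inside the second one; the evidence reviewed above bears most directly on those weights, not on whether ranking should exist at all.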

What we cannot learn:

  1. Whether divisive content has long-term effects, especially for vulnerable users, publishers, and politicians. In a forthcoming paper to be published by the Knight First Amendment Institute, we suggest that the largest potential effects of social media platforms stem from enabling conflict actors to manipulate others and from incentivizing the production of divisive content by politicians and publishers, as a result of engagement-based algorithms. As evidence, we cite the experiences of civil society groups, experiments that illustrate the algorithmic incentive toward divisive content, the statements of politicians and publishers, and, finally, previous lab studies suggesting that divisive content does indeed create negative intergroup attitudes. The authors of the four latest papers published in Science and Nature acknowledge the limitation that these studies were short-term (three months) and only affected individual users, who still lived in communities where their friends, family, political representatives, and media messages were influenced by social media, even if their individual feeds were experimentally changed. These studies also examined average effects, rather than whether divisive content may particularly affect more extreme users.

It is impossible to conduct a perfect experiment, and the authors of these studies should be commended for their careful work. Many of them apparently disagree with how their findings have been interpreted by Facebook. More broadly, reasonable people can still disagree about social media’s effect on polarization, since one cannot design an experiment in which entire communities are randomly assigned to use or not use these platforms over a long time period. To some degree, social media does reflect society, and so we don’t believe we should hold social media companies responsible for every negative piece of content they host.

However, we do think we can hold them responsible for content they incentivize and encourage. Studies (current and past) agree that partisan publishers like Dan Bongino and Occupy Democrats, whose posts were regularly among the most engaging content on the platform, are promoted by aspects of ranking algorithms, such as optimizing for reshares. Facebook has commendably removed some of those incentives, leading to less divisive political content in viral content reports. Rather than arguing that such content has no effect on society, we would suggest that the company instead take credit for that work and extend it even further, examining more components of its ranking algorithms.

In our view, the broader evidence remains most consistent with the idea that partisan content does indeed have the potential to polarize society, even if the effect on measured polarization may not be observable in short-term studies that only affect individual experiences, and even if it is impossible to cleanly disentangle the potential effects of social media from other factors, such as the often toxic discourse on cable news. With recent heat waves illustrating the urgency of coming together to address shared challenges to humanity such as climate change, we should continue to work on ways to enable human cooperation, including addressing the algorithmic amplification of divisive content.

Authors

Ravi Iyer
Ravi Iyer is Managing Director of the Neely Center for Ethical Leadership and Decision Making at the University of Southern California Marshall School. He worked at Meta for more than four years in data science, research, and product management roles focused on improving the societal impact of its algorithms.
Juliana Schroeder
Juliana Schroeder is a professor in the Management of Organizations group at the University of California-Berkeley’s Haas School of Business. Her research examines how people make social judgments and decisions and has been published in a wide range of academic journals and in several book chapters....
