Researchers Consider the Relationship Between Misinformation, Outrage, and the Sharing of Content on Social Media

Prithvi Iyer / Dec 4, 2024

Misinformation’s detrimental effects on democracy and social cohesion are the subject of a significant body of research. Yet platforms' efforts to detect and remove misinformation have not been particularly successful. A core assumption underlying fact-checking interventions to curb misinformation is that users want to share only accurate information. That assumption has been called into question.

A new paper by Killian L. McLoughlin, William J. Brady, Aden Goolsbee, Ben Kaiser, Kate Klonick, and M.J. Crockett, published in Science, suggests that “misinformation exploits outrage to spread online.” The authors define moral outrage as “a mixture of anger and disgust triggered by perceived moral transgressions.” Outrage promotes misinformation in two ways. First, posts expressing outrage get more engagement and are thus algorithmically amplified. Second, expressing outrage helps signal “loyalty to a political group or broadcasting a moral stance,” an incentive that does not depend on informational accuracy. If outrage is what drives the spread of misinformation, fact-checking interventions that assume users prioritize accuracy are bound to fail.

The researchers asked three key questions about the relationship between misinformation and outrage.

  • Does misinformation trigger more outrage than trustworthy news?
  • Does outrage increase the spread of misinformation?
  • Does outrage shape the “psychological motives” for sharing misinformation?

To test their hypotheses, the researchers compiled parallel datasets from Facebook and Twitter, drawing data from 2017 and 2020–2021. Additionally, they conducted two controlled behavioral experiments. The Facebook and Twitter studies primarily looked at engagement with posts containing web links, classified as either misinformation or trustworthy based on the quality of the source. The researchers favored this approach over fact-checking individual articles, which is hard to scale and prone to selection bias. For the behavioral experiments, they analyzed headlines that had been independently fact-checked as either true or false. In each study, “American participants viewed 20 news headlines that varied on trustworthiness (true versus false) and outrage evocation (high versus low) and rated their likelihood of sharing it.”

Results

  • “Misinformation sources evoke more outrage than do trustworthy news sources.”

On Facebook, the researchers found that misinformation triggered more angry reactions than posts containing trustworthy news. Indeed, misinformation triggered anger more than any other emotion, underscoring the close link between misinformation and outrage. The Twitter studies, which related the “presence or absence of outrage in tweet responses” to “the news source linked in the original tweet,” found that responses to posts containing misinformation expressed significantly more outrage. This pattern was consistent across platforms and time periods.

  • “Outrage facilitates the spread of misinformation.”

The researchers found that Twitter posts expressing outrage were shared more frequently than those that were not, regardless of their truthfulness. While outrage boosted sharing for both misinformation and trustworthy news, the effect was often stronger for false information. The relationship between outrage and increased shares held even as audience size increased.

Interestingly, outrage predicted the likelihood of sharing posts in each Twitter study, but the interaction between outrage and news type was inconsistent. “In studies 1b and 2b, the effect of outrage on shares was stronger for misinformation than for trustworthy news sources. In study 3b, the effect was stronger for trustworthy news sources as compared with misinformation,” the authors wrote. Furthermore, the behavioral experiments found that users were more likely to share news headlines designed to provoke outrage, regardless of whether they were trustworthy or misinformation. This suggests that outrage may be a stronger predictor than accuracy of whether information gets shared online, a finding that challenges existing platform interventions that seek to inoculate users against misinformation by providing them with accurate information.

  • “Outrage increases nonepistemic motives for sharing.”

Lastly, the researchers examined whether and how outrage shapes users' underlying motivations for sharing misinformation, drawing a critical distinction between epistemic (accuracy-focused) and nonepistemic (emotionally driven) motives. Outrage appears to amplify the latter. Across the Facebook and Twitter studies, posts that evoked outrage were more likely to be shared without being read. This pattern was more pronounced for posts classified as misinformation, suggesting that “emotions in general (beyond outrage in particular) affect nonepistemic motives for sharing.”

Takeaways

In eight studies spanning two time periods (2017 and 2020–2021), the researchers showed that misinformation triggers more outrage than trustworthy news, and that outrage predicts the sharing of information regardless of whether it is true or false. A key implication of this research is that traditional countermeasures that focus on supplying users with accurate information may not be the way forward. Rather, this research suggests that social media users often share information they know to be false, motivated by signaling political affiliation or loyalty to a particular moral position.

The authors speculate that misinformation evoking outrage “may be less reputationally costly to share than other types of misinformation because of the signaling properties of outrage.” Policymakers and social media companies might therefore benefit from interventions that target nonepistemic motives for sharing news, a departure from current platform strategies, which mainly remind users to check for accuracy before sharing. The authors acknowledge that these findings are based on American users of Facebook and Twitter, which limits their generalizability to other geographies and platforms.

The struggle to produce these results

Arriving at these results was hardly an easy process. One of the authors, Molly Crockett, noted on the social media site Bluesky that conducting this study was difficult because of the challenges of getting data from social media platforms.

“Doing this work was way harder than it had to be, thanks to Big Tech,” Crockett wrote. “We applied for access to Facebook data in Aug 2018. Our project was approved in Feb 2019. Months of paperwork & lawyers followed. We spent 2019 reading apologetic emails from Meta about delays in sharing the dataset with us.” Crockett went on to note the difficulty of getting the data from the Facebook API, a major revision that was necessary after a flaw was discovered in the dataset, and difficulties dealing with Meta’s legal review during the peer review process.

Crockett went on to note that “conditions for researchers continue to get worse... Elon has cut off researcher access to Twitter, and Mark [Zuckerberg] is cozying up to [the] Trump admin[istration], so we can’t be confident about continued access to Facebook. Trump has threatened to cut funding to universities hosting misinfo[rmation] research.” She called on the field of researchers to “ramp up mutual support & more integrated approaches moving forward.”

In this environment, the platforms may be unlikely to act on the insights this study produced.

Authors

Prithvi Iyer
Prithvi Iyer is a Program Manager at Tech Policy Press. He completed a master's in Global Affairs at the University of Notre Dame, where he also served as Assistant Director of the Peacetech and Polarization Lab. Prior to his graduate studies, he worked as a research assistant for the Observer Resea...