Researchers Validate the Dangers of Disinformation

Prithvi Iyer / Aug 15, 2024

Disinformation research has come under attack in recent years. Politicians on the right have made unsubstantiated claims that disinformation research silences conservative viewpoints, while others have argued that misinformation is hard to identify and that fact-checking initiatives are too political to be effective.

An article published in the Nature journal Humanities and Social Sciences Communications titled "Liars Know They Are Lying: Differentiating Disinformation from Disagreement" refutes these claims and provides a deep dive into the often-politicized world of disinformation research. The authors, Stephan Lewandowsky, Ullrich K. H. Ecker, John Cook, Sander van der Linden, Jon Roozenbeek, Naomi Oreskes, and Lee C. McIntyre, argue that willful disinformation (i.e., false claims made with the intent to deceive) is “demonstrably harmful to public health, evidence-informed policymaking, and democratic processes.” The authors also propose ways for civil society organizations and policymakers to identify and respond to willful disinformation tactics without resorting to censorship.

The authors provide ample empirical evidence that counters the claim that disinformation research aims to silence conservatives. In fact, they write, recent research that studied 208 million US Facebook users found that a “substantial segment of the news ecosystem is consumed exclusively by conservatives and that most misinformation exists within this ideological bubble.” However, that has not stopped right-wing politicians from stoking free speech fears among their supporters.

Politicians have also targeted and discredited researchers. From public denunciations to legislative actions that attempt to curtail academic freedom, researchers find it increasingly difficult to conduct impartial investigations. For example, Kate Starbird, a disinformation researcher at the University of Washington, was called to testify before Rep. Jim Jordan’s (R-OH) Select Subcommittee on the Weaponization of the Federal Government, where she faced false allegations that she colluded with the Biden Administration to silence conservative voices. She told the New York Times, “The people that benefit from the spread of disinformation have effectively silenced many of the people that would try to call them out.”

This paper also tackles what the authors call the “postmodern” critique of disinformation research. Rather than refuting allegations of disinformation with factual evidence, those who take this view resort to attacking “the idea that objective knowledge is even possible.” The paper cites various examples of this tactic, especially from former President Donald Trump and his aides, who have often invoked the idea of “alternative facts” – or the famous comment from former New York City mayor and Trump ally Rudy Giuliani: “Truth isn't truth.” These strategies allow actors to spread disinformation and erode public trust, often with little to no consequence, especially in light of decisions by social media companies to scale back their trust and safety efforts and lay off employees working on hate speech detection and election integrity.

Key Strategies

Along with providing evidence for why disinformation remains a pressing policy concern and refuting arguments that suggest otherwise, the authors offer useful insights on how to identify disinformation and the underlying intent to deceive. They outline three key strategies:

  • Statistical and Linguistic Analysis: One effective method relies on statistical and linguistic analysis of text. While humans are notoriously unreliable at detecting lies—performing only slightly better than chance—advances in natural language processing (NLP) have significantly improved this capability. Machine-learning models can now analyze linguistic cues to classify texts as deceptive or honest. For example, the authors cite research about a model that examined the distribution of different types of words and achieved a 67% accuracy rate, outperforming human judges who scored just 52%. A similar model was used to classify tweets by Donald Trump as true or false based on independent fact-checks. Remarkably, this model achieved over 90% accuracy. (A simplified sketch of this kind of classifier appears after this list.)
  • Analysis of Internal Documents: Another method for detecting willful deception involves analyzing the internal documents of institutions, such as governments or corporations. By comparing the internal knowledge of these entities with their public statements, researchers can uncover active deception, especially when it occurs on a large scale. This approach has been particularly effective in exposing corporate malpractice, where discrepancies between internal communications and public positions reveal intentional deception. While this technique may be resource-intensive, it can also lead to significant outcomes, like the “conviction of Philip Morris under federal racketeering (RICO) law.”
  • Comparing statements with official testimony: The third approach focuses on identifying discrepancies between public statements and those made in a court of law. A notable example is Donald Trump’s "big lie" about the 2020 presidential election. While Trump publicly and repeatedly claimed widespread electoral fraud, his lawyers, who filed over 60 lawsuits related to the election, did not support these claims in court. In fact, Trump's attorneys frequently disavowed any mention of fraud when questioned by judges.
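To make the first strategy concrete, here is a minimal, illustrative sketch of a linguistic-cue classifier in the spirit of the NLP approaches the authors describe. The tiny example corpus, its labels, and the choice of a bag-of-words model with logistic regression are assumptions made purely for demonstration; the models cited in the paper use different features, data, and evaluation procedures.

```python
# Illustrative sketch only: a toy deception classifier built from word-frequency
# features. The texts and labels below are invented for demonstration and are
# not drawn from the research discussed in the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training snippets labeled 1 = deceptive, 0 = honest.
texts = [
    "Everyone knows the results were rigged, believe me, totally rigged.",
    "Officials certified the count after routine audits in every county.",
    "They are hiding the real numbers, it is a total disaster, a disgrace.",
    "The agency published its methodology and raw data alongside the report.",
]
labels = [1, 0, 1, 0]

# Word- and bigram-frequency features feed a simple linear classifier; real
# systems add richer cues (pronoun use, emotional tone, hedging) and far more data.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_claim = ["Independent monitors reviewed the ballots and found no irregularities."]
print(model.predict(new_claim))        # predicted label (0 = honest, 1 = deceptive)
print(model.predict_proba(new_claim))  # class probabilities
```

The point of the sketch is simply that word-distribution features alone can carry signal about deception, which is why such models can outperform unaided human judges even at modest accuracy levels.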

This paper supports the idea that disinformation research is essential to democratic discourse. Importantly, it also distinguishes between healthy democratic debate over contested facts and outright lies. As the authors note, disagreeing on facts does not “license the use of outright lies and propaganda to willfully mislead the public,” and it is possible to “identify falsehoods, disinformation, and lies and differentiate them from good-faith political and policy-related argumentation.”

Quantifying the Impact of Disinformation

To provide a quantitative understanding of disinformation and its impact on the information ecosystem, a separate group of researchers from Indiana University's Observatory on Social Media conducted a study to explore the growing vulnerability of social media platforms to manipulation by bad actors. Their research was driven by the question: How do manipulation tactics impact the quality of information on social media networks? In the resulting paper, titled “Quantifying the vulnerabilities of the online public square to adversarial manipulation tactics,” they argue that “social media users are vulnerable to adversarial manipulation tactics, through which bad actors can amplify exposure to content that threatens, for example, democratic elections and public health.” Using a simulation model called SimSoM, the study examined tactics employed by bad actors to degrade information quality. The researchers define “bad actors” as those “accounts that are controlled by bad (adversarial) actors to spread low-quality content among authentic agents.” Such accounts can be controlled by humans, bots, or cyborgs.

On the other hand, authentic agents are users who seek to share and consume high-quality content. The model considers three manipulation tactics: infiltration, deception, and flooding. Infiltration refers to how “bad actors amplify exposure to their messages by getting authentic accounts to follow them,” while flooding refers to bad actors spamming users with large volumes of low-quality content. Deception refers to how appealing low-quality content is made to seem; the researchers consider cases where low-quality information is crafted to have high deceptive appeal based on how it is communicated.
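To illustrate the intuition behind these tactics, here is a highly simplified toy simulation. It is not the SimSoM model: SimSoM tracks network structure, resharing, and message novelty, whereas this sketch only assumes that each user follows a fixed number of accounts, that an account is a bad actor with a given infiltration probability, and that bad actors post a multiple (the "flood" factor) of zero-quality messages. All of those mechanics and parameter values are assumptions for illustration.

```python
# Minimal toy sketch (not SimSoM): shows how infiltration and flooding can drag
# down the average quality of content that users see. All parameters are invented.
import random

def average_feed_quality(n_users=1000, n_followed=20,
                         infiltration=0.1, flood=1, seed=0):
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n_users):
        for _ in range(n_followed):
            # With probability `infiltration`, a followed account is a bad actor.
            if rng.random() < infiltration:
                # Bad actors contribute `flood` messages of quality 0.
                count += flood
            else:
                # Authentic accounts contribute one message of random quality in [0, 1].
                total += rng.random()
                count += 1
    return total / count

for beta in (0.0, 0.01, 0.1, 0.5):
    print(f"infiltration={beta:.2f} -> avg feed quality ~ {average_feed_quality(infiltration=beta):.2f}")
```

Even in this stripped-down setting, raising the infiltration probability or the flood factor lowers the average quality of what users encounter, which is the qualitative pattern the study quantifies with a much richer model.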

Key Findings

  • The study finds that infiltration is the most effective manipulation tactic. If the probability of following a bad actor’s account is 10%, “the average quality in the system is reduced to less than half.”
  • When bad actors generate content exclusively with maximum deceptive appeal, quality is reduced to about 70%.
  • If bad actors combine infiltration with deception or flooding, the “average information quality is reduced to 40%.”
  • The authors also find that inauthentic accounts do not need to target “influential” accounts (those with large follower numbers and high activity). Rather, “they can do more damage by connecting to random accounts.” Thus, contrary to popular belief, disinformation campaigns can do more overall damage when their targets are chosen at random. As the authors note, “the distribution of quality is uneven so that the targeted population is worse off, but other parts of the community are spared. Targeting tactics, therefore, tend to backfire if we assume that bad actors intend to maximize the spread of their content across the full community.”
  • Similarly, the research also finds that targeting accounts known to spread misinformation is not particularly effective. In this case, they find that such users exist in an “echo chamber” wherein “low-quality messages get shared and become obsolete rapidly within one densely connected partisan cluster, sparing the rest of the network.”

Conclusion

These two recent papers address the backlash against disinformation research in different ways and provide important takeaways. The article by Stephan Lewandowsky et al. sheds light on the broader social and political dynamics that complicate research, highlighting the challenges researchers face and the importance of intent in identifying disinformation. The quantitative study by researchers at Indiana University offers numerical estimates of how manipulation tactics degrade the quality of information shared online, revealing the extent to which disinformation campaigns can undermine democratic processes. Together, these studies validate the importance of studying and mitigating disinformation, especially in an election year.

Authors

Prithvi Iyer
Prithvi Iyer is a Program Manager at Tech Policy Press. He completed a master's in Global Affairs at the University of Notre Dame, where he also served as Assistant Director of the Peacetech and Polarization Lab. Prior to his graduate studies, he worked as a research assistant for the Observer Resea...