Addressing the Dangers of Misrepresenting Scientific Evidence Online
Prithvi Iyer / Oct 11, 2023

Prithvi Iyer is Program Manager at Tech Policy Press.
What does it mean to live in a world where we cannot agree on what counts as legitimate “scientific evidence”? Research has shown that social media can amplify and lend credibility to conspiracy theories cloaked in scientific jargon, deepening disagreement on core issues like climate change and pandemic response that urgently require consensus and a unified call to action.
For example, conspiracy theories linking Bill Gates to the monkeypox outbreak in the United States or blaming jihadist forces for spreading COVID-19 in India have been amplified on both mainstream and fringe social media platforms, fueling pervasive distrust in scientific evidence and public health institutions. These online activities have clear offline impacts, ranging from distrust of vaccines to calls for violence against the alleged perpetrators of a public health crisis. It is therefore crucial that scientific communication be resilient to malicious misrepresentation.
To better understand the complexity of scientific misinformation and its implications, researchers are studying how conspiracy theorists and science journalists alike transform and communicate scientific evidence in the age of social media. For Tech Policy Press, Adi Cohen looked at how scientific publishing is weaponized by COVID-19 vaccine and masking skeptics to spread disinformation. Now, a notable new study in Science Advances by Andrew Beers and colleagues at the University of Washington explores the implications of "selective reporting" on social media, specifically in discourse around the efficacy of masks during the COVID-19 pandemic.
Selective reporting refers to the act of cherry-picking scientific evidence to suit one’s claims. For example, citing a single study suggesting that COVID-19 vaccines are ineffective, without the caveats and context of the full body of research showing they are, in fact, highly effective at reducing the risk of infection, would be selective reporting. The study, titled “Selective and deceptive citation in the construction of dueling consensuses,” sheds light on how mainstream science journalists and conspiracy theorists alike shape online perceptions of mask efficacy by transforming scientific evidence into competing narratives.
The researchers analyzed a dataset of 5 million tweets discussing masks, revealing that “science communicators” selectively amplify certain studies while disparaging others to create supporting and opposing bodies of evidence. The authors use “science communicators” as a broad category for people who share opinions about scientific issues, from conspiracy theorists to science journalists, without necessarily having any training or expertise in science. This group is distinct from scientists themselves, who typically publish their work in scientific venues.
The analysis revealed a variety of differences in how scientists and science communicators engage with evidence, even when drawing from the same literature. Importantly, science communicators often project their biases onto scientific evidence, distorting findings to fit their preconceived notions. Anti-mask science communicators, for instance, frequently quote scientific work selectively and deceptively while criticizing opposing research. This group is made up of “an informal coalition of conspiracy theorists, creators on alternative media platforms, and others who have previously shared false theories about climate change, the benefits of vaccination, the existence of mass shootings, HIV/AIDS, abortion, the reality of the moon landing, and a variety of other topics.” Yet citing scientific work, even when misrepresented, adds a veneer of legitimacy to claims about mask efficacy. Selective reporting and communication styles thus have a profound impact on shaping online scientific consensus, fueling opposition and divisiveness no matter what the actual evidence says.
This may provoke a “familiar anxiety” among scientists, say the researchers, since “it can suggest that mainstream scientific consensus is non-objective and thus potentially just as untrustworthy as alternative consensuses.” The findings also challenge the popular notion of "prebunking," which attempts to inoculate audiences against misinformation: the researchers find that “unfortunately, charismatic disinformers also inoculate their readers with critical citations of otherwise widely trusted research.” They admit that science communicators who share information counter to the scientific consensus are often much more successful, at least when it comes to attracting attention and engagement on social media.
In the end, these researchers put the onus on scientists and publishers to work harder, and to recognize that the evidence produced by the scientific process does not reach the public unmediated. Designing their communications with the realities of the current information ecosystem in mind, dominated as it is by social media, may help lead more people to trust the consensus formed by knowledge-producing institutions, even if they aren’t perfect.
These findings resonate with another recent study, conducted by researchers at New York University’s Department of Psychology and Northwestern University’s Kellogg School of Management, which explored user perceptions of viral social media content. The research indicates that divisive, emotionally charged, and even misinformation-laden content tends to go viral despite users recognizing that it shouldn’t, a phenomenon that can be attributed to psychological factors like negativity bias.
But the conclusion of the study, titled “People think that social media platforms do (but should not) amplify divisive content,” is that the onus should be on platforms to “amplify accurate, educational, and nuanced content” rather than false and negative content. Doing so would be more in line with the stated preferences of users, who appear to be aware of the negative effects of social media even if that awareness does not necessarily translate into their online behavior.
Taken together, these findings underscore the need for a deeper understanding of social media’s role in amplifying false, misleading, and divisive content, and of the complex interactions among the players in the information ecosystem. Whether we can recalibrate that ecosystem to produce a fact-driven consensus on issues such as public health and climate remains to be seen.