Reviewing New Science on Social Media, Misinformation, and Partisan Assortment
Prithvi Iyer / Jun 9, 2024

Three new research studies released in the past month unpack different impacts of online misinformation and how people sort themselves politically on social media. Below are summaries of the key findings, including results that shed light on how misinformation on Facebook affected US COVID-19 vaccine hesitancy, the outsized role of "supersharers" in propagating fake news on Twitter around the 2020 election, and how proactive blocking of counter-partisans drives political assortment and "echo chambers" on social media.
1. Quantifying the impact of misinformation and vaccine-skeptical content on Facebook
Jennifer Allen, Duncan Watts, and David Rand
This study explored the extent to which “misinformation flagged by fact-checkers on Facebook (as well as content that was not flagged but is still vaccine-skeptical) contributed to US COVID-19 vaccine hesitancy.” The researchers conducted two experiments to measure “the causal effect of 130 vaccine-related headlines on vaccine intentions.” Using Facebook’s Social Science One dataset, the researchers measured exposure to “13,206 vaccine-related URLs that were popular on Facebook during the first 3 months of the vaccine rollout (January to March 2021).”
Here are some key findings from the study:
- The first experiment showed that exposure to headlines containing false information about the COVID-19 vaccine “reduced vaccination intentions by 1.5 percentage points,” a finding that held across pretreatment subgroups such as age, gender, and political affiliation.
- However, the researchers note that a headline did not reduce vaccine intentions simply because it was false; other features of the content also appear to be at play.
- Another notable finding concerned exposure: vaccine misinformation flagged by fact-checkers was viewed 8.7 million times, accounting for “only 0.3% of the 2.7 billion vaccine-related URL views during this time period.” Unflagged vaccine-skeptical content, by comparison, was viewed far more often.
- When comparing the impact of flagged vs. unflagged misinformation on vaccine intentions, the study found that “URLs flagged as misinformation by fact-checkers were, when viewed, more likely to reduce vaccine intentions (as predicted by our model) than unflagged URLs.” However, once these per-item effects were scaled by how widely each URL was actually viewed, the study estimated that unflagged vaccine-skeptical content lowered vaccine intentions by just over 2 percentage points, compared with “−0.05 percentage points for flagged misinformation—a 46-fold difference.”
Conclusion
The key takeaway from this study is that misinformation flagged by fact-checkers had a causal effect on lowering vaccine intentions, “conditional on exposure.” However, because flagged misinformation was viewed far less often than unflagged vaccine-skeptical content, flagged content “had much less of a role in driving overall vaccine hesitancy.”
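To make the “conditional on exposure” logic concrete, here is a minimal back-of-the-envelope sketch in Python using the figures reported above. It is illustrative only, not the authors’ model; the −2.3 percentage-point value for unflagged content is an assumption chosen to be consistent with the reported “just over 2 percentage points” and the 46-fold difference.

```python
# Back-of-the-envelope illustration of "per-view persuasiveness vs. total exposure."
# Figures come from the summary above; this is not the authors' full model.

flagged_views = 8.7e6   # views of fact-checker-flagged misinformation, Jan-Mar 2021
total_views = 2.7e9     # all vaccine-related URL views on Facebook in the same period

print(f"Flagged share of all vaccine-related views: {flagged_views / total_views:.1%}")  # ~0.3%

# Estimated aggregate impact on US vaccine intentions, in percentage points.
# The unflagged value is an assumption consistent with "just over 2 points";
# the flagged value is quoted in the summary above.
flagged_impact_pp = -0.05
unflagged_impact_pp = -2.3

print(f"Unflagged vs. flagged overall impact: {unflagged_impact_pp / flagged_impact_pp:.0f}-fold")  # ~46
```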
2. Supersharers of Fake News on Twitter
Sahar Baribi-Bartov, Briony Swire-Thompson, and Nir Grinberg
Misinformation on most social media platforms is typically shared by a small fraction of the overall user base. These people are known as “supersharers.” While there is growing evidence regarding the use of bots and state-sponsored misinformation campaigns that flood social media discourse, less is known about who these “supersharers” are, where they come from, and the strategies they use to flood platforms with misinformation.
A research paper by Sahar Baribi-Bartov, Briony Swire-Thompson, and Nir Grinberg examined the prevalence of supersharers on Twitter in the context of the 2020 US election. Using panel data on 664,391 registered US voters, the authors identified 2,107 supersharers who accounted for 80% of the fake news shared by the panel from August to November 2020. To show the impact of supersharers on exacerbating online misinformation, the researchers compared this group with two reference groups: the heaviest sharers of non-fake political news, and a random sample of panelists. Here are some key findings from this study:
- The study found that “7.0% of all political news shared by the panel of 664,391 individuals linked to fake news sources.” However, merely 0.3% of the registered US voters in the study (the 2,107 supersharers) were responsible for 80% of the fake news shared. These supersharers dominated the discourse on Twitter throughout the election cycle and, interestingly, also dominated non-political news sharing.
- The authors note that supersharers also had significantly higher network influence compared to the reference groups. In terms of engagement, the study found that “about a fifth of the heaviest consumers of fake news in the panel follow a supersharer.”
- On the question of who these supersharers are, the study found that this group had a high proportion of “women, older adults, and Republican individuals compared with all reference populations.” In terms of racial composition, supersharers were more likely to be white than members of the reference groups. They were also overrepresented in three US states: Arizona, Florida, and Texas.
- The study also examined the strategies used by supersharers and found that “no more than 7.1% of supersharers can be considered as bots,” with no significant difference from the reference groups. The biggest difference between supersharers and the other groups was their rate of retweeting. While the study does not rule out sophisticated automation, it attributes the massive volume of content shared to “manual and persistent retweeting.”
Conclusion
This study provides further evidence that misinformation is propagated largely by a small fraction of users. By providing insight into who supersharers are and how they generate content, the authors show that “platform interventions that target supersharers or impose retweet limits could be highly effective at reducing a large portion of exposure to fake news on social media.”
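The headline descriptive claim, that roughly 0.3% of panelists accounted for about 80% of fake news shares, is a statement about how concentrated sharing is. The short Python sketch below shows one way such a concentration figure can be computed from per-user share counts. The data are simulated and the variable names are hypothetical; this illustrates the calculation, not the authors’ actual analysis.

```python
import numpy as np

# Simulated, heavy-tailed per-user counts of fake-news links shared.
# Both the distribution and the panel size are stand-ins, not the real data.
rng = np.random.default_rng(0)
shares = rng.pareto(a=1.1, size=664_391)

# Rank users from heaviest to lightest sharer, then compute the cumulative
# fraction of all fake-news shares attributable to the top users.
sorted_shares = np.sort(shares)[::-1]
cumulative = np.cumsum(sorted_shares) / sorted_shares.sum()

top_fraction = 0.003  # the top ~0.3% of users, mirroring the supersharer group size
cutoff = int(len(shares) * top_fraction)
print(f"Top {top_fraction:.1%} of simulated users account for "
      f"{cumulative[cutoff - 1]:.0%} of simulated shares")
# The paper reports that the real top 0.3% accounted for roughly 80%.
```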
3. Blocking of counter-partisan accounts drives political assortment on Twitter
Cameron Martel, Mohsen Mosleh, Qi Yang, Tauhid Zaman, and David G. Rand
This study by Cameron Martel, Mohsen Mosleh, Qi Yang, Tauhid Zaman, and David Rand examined factors that drive Americans' political assortment on Twitter. One common explanation is “preferential tie formation,” whereby people preferentially form online connections with those who share their political beliefs. In this study, the authors examine an additional factor: the preferential prevention of social ties, such as blocking counter-partisan accounts, as a driver of political assortment.
The authors conducted two Twitter field experiments in which they created bot accounts that identified as either Democratic or Republican. In the first experiment, they randomly assigned a set of Twitter users to be followed by either a copartisan or a counter-partisan bot account. 'Copartisan' refers to accounts with the same political affiliation (e.g., a Republican bot account following a Republican user), while 'counter-partisan' refers to accounts with a different political affiliation (e.g., a Democratic bot account following a Republican user). In the second experiment, they added a politically neutral control bot account. The hypothesis was that users would be more likely to block counter-partisan accounts than copartisan or neutral accounts, indicating 'preferential social tie prevention' across party lines. Here are some key findings:
- In the first experiment, the study found that “users were roughly 12 times more likely to block counter-partisan accounts compared to copartisan accounts” (a sketch of how such a rate ratio is computed from raw counts follows this list). Interestingly, unlike with selective tie formation, the authors found significant differences across political parties: “Democratic users were more likely to block Republican bots (26 times more likely than blocking copartisan Democrat bots) than Republican users were to block Democrat bots (about 3 times more likely than blocking copartisan Republican bots).”
- In the second experiment, which included a politically neutral control condition, the study found that “users were significantly less likely to follow back the counter-partisan bot compared to the politically neutral bot.” Additionally, Democrats were 4.4 times more likely, and Republicans 3.6 times more likely, to block counter-partisans relative to the neutral control, again demonstrating the partisan asymmetry prevalent in online group formation.
- Lastly, the researchers conducted an additional experiment to replicate these findings with a different sample. They found that “participants were about 3 times more likely to block the counter-partisan profile compared to the politically neutral profile.” Regarding reasons for blocking, the authors note an interesting pattern wherein “participants were more likely to block users who praised their out-party than users who criticized their in-party,” a pattern that held in particular for Democrats.
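As noted above, the “roughly 12 times more likely” figure is a ratio of blocking rates between the randomized conditions. The Python sketch below shows how such a rate ratio would be computed from raw counts; the counts are invented for illustration and are not the experiment's data.

```python
# Invented counts for a two-condition blocking experiment (illustration only).
counter_partisan_followed = 400   # users assigned to be followed by a counter-partisan bot
counter_partisan_blocked = 48     # of those, how many blocked the bot

copartisan_followed = 400         # users assigned to be followed by a copartisan bot
copartisan_blocked = 4            # of those, how many blocked the bot

rate_counter = counter_partisan_blocked / counter_partisan_followed
rate_co = copartisan_blocked / copartisan_followed

print(f"Blocking rate, counter-partisan condition: {rate_counter:.1%}")  # 12.0%
print(f"Blocking rate, copartisan condition: {rate_co:.1%}")             # 1.0%
print(f"Rate ratio: {rate_counter / rate_co:.0f}x")                      # 12x with these toy numbers
```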
Conclusion
This study shows that, along with whom people choose to connect with online, “proactive prevention of social connections between counter-partisans is also a powerful causal driver of partisan assortment in social networks.” Moreover, the study points to an interesting partisan difference in blocking behavior: Democrats were more likely to block Republicans than vice versa. The study also adds to the evidence base on why users decide to block accounts. Overall, the authors argue that research on “partisan assortment must consider all potential avenues of social tie maintenance—including social tie prevention—in order to more completely understand propensities to connect and share information within and across party lines.”