A Primer on the Meta 2020 US Election Research Studies

Prithvi Iyer / Dec 13, 2024

This article was originally posted on August 11, 2023, and was updated on December 13, 2024.

The entrance to Meta's offices in Menlo Park, California.

The US 2020 research project is a partnership between Meta researchers and external academics to explore questions related to Facebook and Instagram in the 2020 US election. The external team of academics was led by Professor Talia Jomini Stroud, founder and director of the Center for Media Engagement at the University of Texas at Austin, and Professor Joshua A. Tucker, co-director of the Center for Social Media and Politics at New York University. Professors Stroud and Tucker selected 15 additional researchers to collaborate on this endeavor. 

Meta claimed that neither its researchers nor the company as a whole had the authority to restrict or tamper with the findings – irrespective of whether they are favorable or not for the company. However, Meta was closely involved in shaping the research questions, methodology, workflow, and design choices. As noted by Michael Wagner, a professor at the University of Wisconsin-Madison tasked with evaluating the rigor of the collaboration as its rapporteur, "Meta set the agenda in ways that affected the overall independence of the researchers.” Nevertheless, the studies themselves, he said, are “rigorous, carefully checked, transparent, ethical, and path-breaking,” and are thus important to understand, as they may help inform a number of policy questions related to social media and these platforms specifically. (For more, listen to a Tech Policy Press podcast with Wagner.)

Six research papers have been published as part of this project so far, with more to follow. This post serves as a resource to collect the studies and their findings in one place. It is intended for people interested in making informed assessments of the research and its claims by providing a snapshot of the research process and results. This resource also enables readers to examine the research papers in tandem, facilitating comparisons and assessments of the results in relation to one another. We hope that this primer on the research project serves to distill and simplify the key findings of these studies while ensuring that readers can still pick up on the nuance and caveats needed to accurately infer the implications. Of course, readers are also encouraged to read the full papers.

Table of Contents

  1. How do social media feed algorithms affect attitudes and behavior in an election campaign? (Science, 27 July 2023)
  2. Reshares on social media amplify political news but do not detectably affect beliefs or opinions (Science, 27 July 2023)
  3. Asymmetric ideological segregation in exposure to political news on Facebook (Science, 27 July 2023)
  4. Like-minded sources on Facebook are prevalent but not polarizing (Nature, 27 July 2023)
  5. The effects of Facebook and Instagram on the 2020 election: A deactivation experiment (PNAS, 13 May 2024)
  6. The Diffusion and Reach of (Mis)Information on Facebook during the U.S. 2020 Election (Sociological Science, 11 December 2024)

1. How do social media feed algorithms affect attitudes and behavior in an election campaign?

Published in Science, 27 July 2023.

Context

One concern about social media platforms is whether the algorithms that select content, such as the recommender systems driving Facebook’s News Feed, serve to polarize users. Some have suggested that reverting to a “chronological feed” – a feed of posts ordered in reverse chronology rather than by an algorithm – might reduce polarization.
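
To make the distinction concrete, here is a minimal sketch (with invented posts and an invented engagement score standing in for a ranking model's output, not Meta's actual system) of how a chronological feed orders content by recency while an algorithmic feed orders it by a predicted score:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # invented stand-in for a ranking model's score

posts = [
    Post("news_page", datetime(2020, 10, 1, 12, 30), 0.12),
    Post("friend_a", datetime(2020, 10, 1, 9, 0), 0.91),
    Post("group_b", datetime(2020, 10, 1, 11, 15), 0.55),
]

# Chronological feed: newest first, no model involved.
chronological = sorted(posts, key=lambda p: p.created_at, reverse=True)

# Algorithmic feed: ordered by the predicted score instead.
algorithmic = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['news_page', 'group_b', 'friend_a']
print([p.author for p in algorithmic])    # ['friend_a', 'group_b', 'news_page']
```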

Hypotheses

  • Chronological Feed ranking would reduce issue-based polarization relative to algorithmic ranking.
  • Chronological feeds would “reduce affective polarization on an individual level.”
  • A chronological feed would decrease knowledge about the 2020 election campaign and decrease recall of recent events covered in the news.
  • “Chronological feeds would reduce both online and offline forms of political participation, including self-reported turnout in the 2020 election and on-platform political engagement.”

Methods

  • “Study participants were recruited through survey invitations placed on the top of their Facebook and Instagram feeds in August 2020. Participants were users residing in the United States who were at least 18 years of age and who provided informed consent.”
  • “Users were invited to complete five surveys, share their on-platform activity, and participate in passive tracking of off-platform internet activity.” They could withdraw at any time.
  • Participants were randomly assigned to a control group (Algorithmic Feed) or to the treatment group (Chronological Feed), in which the most recent content appeared at the top of those feeds.
  • The outcome variables were estimated as population average treatment effects, weighted by “an average of users’ predicted ideology, friend count, number of political pages followed, and number of days active, among other variables.”
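
As a rough illustration of this estimation approach, the sketch below computes a weighted difference in means between treatment and control, one simple way to estimate a population average treatment effect. The data, outcome, and weights are invented; the study's actual estimator and weighting procedure are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented data: a treatment indicator, an outcome (e.g., minutes per day on
# the platform), and weights meant to make the sample resemble the population.
treated = rng.integers(0, 2, size=n)               # 1 = Chronological Feed
outcome = 60 - 20 * treated + rng.normal(0, 10, n)
weights = rng.uniform(0.5, 2.0, size=n)

def weighted_mean(x, w):
    return np.sum(w * x) / np.sum(w)

# Under random assignment, the weighted difference in means approximates the
# population average treatment effect (PATE).
pate = (weighted_mean(outcome[treated == 1], weights[treated == 1])
        - weighted_mean(outcome[treated == 0], weights[treated == 0]))
print(f"Estimated PATE: {pate:.1f} minutes/day")  # close to the built-in -20
```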

Results

  • “Users in the Chronological Feed group spent dramatically less time on Facebook and Instagram” (73% less daily compared to the control group).
  • Treatment group members migrated to other platforms, spending more time on TikTok and YouTube: “36% (2.19 hours) for TikTok and 20% (5.63 hours) over the entire study period for YouTube.”
  • “Users in the treatment group (chronological feed) showed lower engagement on both Facebook and Instagram.” Lower engagement was captured via fewer likes, reshares, and comments.
  • The Chronological Feed reduced the share of content from ideologically “cross-cutting sources on Facebook (18.7 versus 20.7%, p < 0.005) and also reduced the share of content from ideologically like-minded sources on Facebook (48.1 versus 53.7%, p < 0.005).”
  • On Facebook, the Chronological Feed “increased the share of content from designated untrustworthy sources by more than two-thirds relative to the Algorithmic Feed, whereas it reduced exposure to uncivil content by almost half.” These trends were not statistically significant for Instagram.
  • Users in the Chronological Feed condition “did not express significantly lower levels of affective or issue polarization compared to those in the Algorithmic Feed condition.” Thus, the first hypothesis was not supported.
  • There were no statistically significant differences between treatment and control groups with respect to “election knowledge or news knowledge on either platform.”
  • No differences in political participation (voter turnout) between treatment and control groups.

Discussion

  • What explains the disconnect in the findings? Why did users in the treatment group show dramatic changes in online behavior (less time on the platforms, lower engagement) but no change in polarization? The authors propose several explanations:
    • Downstream effects of social media on political polarization may need a "longer intervention period". Three months may be too short for a robust intervention.
    • The US context may not be generalizable to other countries.
    • People in the treatment group (chronological feed) often saw posts reshared by other users (friends/followers) whose own feeds were still algorithmically ranked, which could have shaped what reached them. Thus, we cannot discount the possibility that users in the treatment group were still exposed to algorithmic ranking by proxy. Addressing this may require scaling the study so that exposure to content is also mediated by group assignment.

2. Reshares on social media amplify political news but do not detectably affect beliefs or opinions

Published in Science, 27 July 2023.

Context

The "reshare" function, present on platforms such as Facebook and Instagram, has been identified as a catalyst for content to attain a "viral" status. This study seeks to investigate the viability of a policy intervention involving the removal of resharing as a means to alleviate the detrimental impacts associated with viral content like increasing online polarization.

Hypotheses

  • "Withholding reshared content from users’ feeds may reduce affective and issue polarization by decreasing their exposure to emotionally or ideologically inflammatory content."
  • "Removing reshared content would, on net, reduce accurate knowledge about the election campaign."

Methods

  • "Adult users residing in the United States who provided informed consent (N = 193,880, 1.3% of those who saw the invitation) were invited to complete five surveys and share their Facebook activity (the study was advertised to 14.6 million FB users)."
  • The study compared two groups: a control condition in which no changes were made to their Facebook feeds, and a treatment condition in which no reshared content (from friends, Groups, or Pages) was shown in the feed.
  • As with the earlier study, the "main estimand of interest is the population average treatment effect, which is weighted by users’ predicted ideology, friend count, number of political pages followed, and number of days active, among other variables."

Results

  • The control group spent “73% more time each day on average compared to US monthly active users,” whereas the figure was 64% more for those in the No Reshares group. Treatment effects were estimated via ordinary least squares (OLS) regression (a sketch of this kind of estimation follows this list).
  • No significant migration to other social media platforms for the No Reshares group. This indicates that users who did not see reshares remained on the platform rather than shifting their time to alternatives such as TikTok or Twitter (now X).
  • "The No Reshares treatment decreased the relative proportion of content seen by participants that is posted by their friends by an average of 10 percentage points while increasing the relative share of content from Groups by 8 percentage points and from Pages by 2 percentage points."
  • Feeds without reshares contained less political news than those of the control group. “Untrustworthy news sources also declined from 6.2% to 2.5% for the no reshares group.”
  • "The No Reshares condition decreased the proportion of both like-minded (51.1 versus 53.7%, p < 0.005) and cross-cutting (19.7 versus 20.7%, p < 0.005) content, while increasing that of ideologically moderate content by more than 15% (26.2 versus 22.6%, p < 0.005)."

Results pertaining to core hypothesis

  • No significant difference in polarization between both groups.
  • There was also no statistically distinguishable change in election knowledge. However, sample average treatment effects indicate that the “no reshares” group had less news knowledge, i.e., those users were less likely to correctly remember recent events.
  • "The treatment does not have statistically distinguishable effects on perceived accuracy of various factual claims, trust in media (either traditional or social), confidence in political institutions, perceptions of political polarization, political efficacy, belief in the legitimacy of the election, or support for political violence."

Takeaways

  • “The reshares feature could be a double-edged sword: It facilitates encounters with both reliable news about politics and current events but, to a somewhat lesser degree, also content from untrustworthy sources that may exaggerate or fabricate information.”
  • “Without reshares on their feeds, users were less likely to click on outbound links from news sources with highly ideological audiences. However, this does not manifest in increasing issue or affective polarization.” The study concludes that though reshares may have been a "powerful mechanism for directing users’ attention and behavior on Facebook during the 2020 election campaign, they had limited impact on politically relevant attitudes and offline behaviors."

3. Asymmetric ideological segregation in exposure to political news on Facebook

Published in Science, 27 July 2023.

Context

Social media plays a pivotal role in shaping how society engages with information. This study examines the “funnel of engagement” with respect to seeing political news in the context of the 2020 US election. The “funnel of engagement” can be understood as the link between what users could potentially see on their feed, what they actually see, and what they eventually engage with (via likes, comments, reshares, etc.). This study responds to the concern that political polarization is closely related to the structure of news feeds on social media platforms by examining how the funnel of engagement on Facebook shapes what users engage with and whether that content bolsters online polarization.

Research Questions

  • “How ideologically segregated is news consumption on Facebook, and are those patterns of segregation symmetric on the right and left.”
  • “How does segregation vary with potential news consumption versus actual exposure versus engagement.”
  • “How does segregation vary if the level of analysis is URLs rather than domains (thus capturing curation of content within domains).”
  • “How segregated is exposure on Facebook relative to the benchmark of browsing behavior (the predominant source of data in past research).”
  • “How segregated are the streams of content from the major pathways to exposure on Facebook (friends, Pages, and Groups).”
  • “How prevalent is exposure to unreliable content on the right relative to the left.”

Methods

  • The data in this paper draw from the set of 208 million US-based adult active Facebook users whose political ideology can be measured; the authors “track all URLs classified as political news that were posted on the platform from 1 September 2020 to 1 February 2021.”
  • In the analyses, the authors "only examined posts classified as political news that contain a URL, which amount to about 3% of all posts shared by US adult users and 3.9% of all content that US adult users saw on the platform during our study period."
  • For each URL (and corresponding domain), the authors reported "measures of the potential, exposed, and engaged audience. The potential audience of a URL is the set of unique users that could have been exposed to that content, the exposed audience is the set of unique users that saw a post containing that URL on their Feed and the engaged audience is the set of unique users that clicked, reacted, liked, reshared, or commented on the post with the URL."
  • In total, the data comprised aggregated exposure and engagement metrics for “208 million US adult active users with an ideology score in relation to 35,000 unique domains and 640,000 unique URLs that were classified as political news and were shared more than 100 times during the study period.”
  • The analyses rely on two measures: the segregation index (a summary statistic of the entire information environment) and favorability scores (associated with individual domains and URLs, allowing the researchers to infer the ideological composition of their audiences). The authors dichotomized the ideology scores such that users with a score ≤0.35 are categorized as liberal and those with a score ≥0.65 are categorized as conservative. For the favorability scores, 1 indicates the URL has a conservative audience, -1 indicates a liberal audience, and 0 indicates an equal distribution across partisan lines. The authors define audience polarization as the extent to which the “distribution of favorability scores is bimodal and far away from zero.”
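
To illustrate the two measures, here is a toy sketch that assumes favorability is the normalized difference between conservative and liberal audience sizes (the -1 to +1 scale described above) and that segregation is an isolation-style gap of the kind summarized in the first Results bullet below. The exposure counts are invented and the paper's exact estimators may differ.

```python
import numpy as np

# Invented exposure counts: for each news domain, how many liberal- and
# conservative-classified users saw its content.
domains = {
    "outlet_a": {"lib": 9_000, "con": 1_000},
    "outlet_b": {"lib": 4_500, "con": 5_500},
    "outlet_c": {"lib": 800, "con": 9_200},
}

def favorability(lib, con):
    """+1 = fully conservative audience, -1 = fully liberal, 0 = evenly split."""
    return (con - lib) / (con + lib)

for name, a in domains.items():
    print(name, round(favorability(a["lib"], a["con"]), 2))

# Isolation-style segregation: the average conservative share of the audience
# for content seen by conservatives, minus the same quantity for liberals.
con_share = np.array([a["con"] / (a["con"] + a["lib"]) for a in domains.values()])
lib_exp = np.array([a["lib"] for a in domains.values()], dtype=float)
con_exp = np.array([a["con"] for a in domains.values()], dtype=float)

segregation = (np.average(con_share, weights=con_exp)
               - np.average(con_share, weights=lib_exp))
print(f"Segregation index (toy data): {segregation:.2f}")
```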

Results

  • "The segregation score based on exposed audience for domains fluctuates around 0.35 (i.e., the gap between the intersection of conservatives with conservatives versus liberals with conservatives is 35 percentage points)."
  • "Algorithmic and social amplification are both contributing to increased segregation: As we move down the funnel of engagement (i.e., as the footprint of algorithmic and social curation becomes more evident), liberal and conservative audiences become more isolated from each other."
  • Mean favorability scores indicate that Facebook users consuming political news are mostly conservative. “There are more domains and URLs being favored by very conservative audiences.”
  • Most sources of misinformation are favored by conservative audiences. The study also finds that “algorithmic and social amplification do not exacerbate the already existing audience segregation for misinformation content. However, misinformation shared by Pages and Groups has audiences that are more homogeneous and completely concentrated on the political right.”
  • “News sources and stories consumed by conservative audiences depart more clearly from the zero line of cross-cutting exposure, which means that their audiences are more homogeneously conservative and, therefore, more isolated. These outlets on the right also post a higher fraction of news stories (URLs) rated false by Meta’s 3PFC program, which means that conservative audiences are more exposed to unreliable news.”

Takeaways

  • Facebook is ideologically segregated.
  • Ideological segregation manifests far more in content posted by Pages and Groups than in content posted by friends.
  • Pages and Groups are associated with higher levels of ideological segregation which suggests that the choice of which Pages to follow and which Groups to join is "driven far more by ideological congruence than the choice of with whom to be friends."
  • “Pages and Groups benefit from the easy reuse of content from established producers of political news and provide a curation mechanism by which ideologically consistent content from a wide variety of sources can be redistributed.”
  • "Although there are homogeneously liberal and conservative domains and URLs, there are far more homogeneously conservative domains and URLs circulating on Facebook. This asymmetry is consistent with what has been found in other social media platforms."

4. Like-minded sources on Facebook are prevalent but not polarizing

Published in Nature, 27 July 2023.

Context

Social media has received criticism for creating "echo chambers" – spaces where users mainly encounter content that aligns with their existing beliefs, potentially deepening polarization. This study investigates the relationship between exposure to like-minded content and polarization, to assess the effectiveness of potential policy interventions designed to address polarization by altering the structure of news feeds in a manner that discourages the formation of these "echo chambers.”

Research Question

  • To what extent do “echo chambers” on Facebook exacerbate political polarization in the context of the 2020 US election?

Methods

  • The authors use data from all active adult Facebook users in the US to analyze how much of what they see on the platform is from sources that the researchers categorize as sharing their political leanings.
  • With a subset of consenting participants, the authors evaluate a potential response to concerns about the effects of echo chambers by conducting a large-scale field experiment reducing exposure to content from like-minded sources on Facebook. Thus, participants were randomized into treatment and control groups, with the treatment group seeing fewer posts from like-minded friends, Pages, and Groups.
  • "Participants in the treatment and control groups were invited to complete five surveys before and after the 2020 presidential election assessing their political attitudes and behaviors. Two surveys were fielded pre-treatment: wave 1 (31 August to 12 September) and wave 2 (8 September to 23 September). The treatment ran from 24 September to 23 December."
  • For participants assigned to treatment, the authors "downranked all content (including, but not limited to, civic and news content) from friends, groups and Pages that were predicted to share the participant’s political leaning (for example, all content from conservative friends and groups and Pages with conservative audiences was downranked for participants classified as conservative)."
  • Political leanings were measured via Facebook's internal Machine Learning classifier that accounts for user data to make predictions about political affiliation. Users with predicted values greater than 0.5 were classified as conservative and otherwise classified as liberal, enabling the researchers to analyze the full population of US active adult Facebook users.
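
A conceptual sketch of the treatment logic follows, assuming a simple multiplicative penalty on the ranking score of items whose source shares the participant's predicted lean. The 0.5 threshold mirrors the classification rule described above, but the penalty value and data are invented and do not reflect how Meta's ranking system actually implemented the downranking.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str
    source_lean_score: float  # classifier output in [0, 1]; > 0.5 = conservative
    rank_score: float         # the feed's ordinary ranking score

def lean(score: float) -> str:
    # Mirrors the dichotomization described above.
    return "conservative" if score > 0.5 else "liberal"

def downrank_like_minded(items, user_lean_score, penalty=0.5):
    """Toy version of the treatment: demote items whose source shares the
    participant's predicted lean by shrinking their ranking score."""
    user_lean = lean(user_lean_score)
    adjusted = []
    for it in items:
        factor = penalty if lean(it.source_lean_score) == user_lean else 1.0
        adjusted.append((it.rank_score * factor, it))
    return [it for _, it in sorted(adjusted, key=lambda t: t[0], reverse=True)]

feed = [
    FeedItem("conservative_page", 0.9, rank_score=0.8),
    FeedItem("liberal_friend", 0.2, rank_score=0.6),
    FeedItem("conservative_group", 0.7, rank_score=0.5),
]
for item in downrank_like_minded(feed, user_lean_score=0.8):
    print(item.source)
# The liberal friend now outranks both like-minded sources for this user.
```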

Results

  • Despite reducing exposure to content from like-minded sources by approximately one-third over a period of weeks for the treatment group, the authors "find no measurable effects on 8 preregistered attitudinal measures, such as ideological extremity and consistency, party-congenial attitudes and evaluations, and affective polarization."
  • "Despite the prevalence of like-minded sources in what people see on Facebook, extreme echo chamber patterns of exposure are infrequent. Just 20.6% of Facebook users get over 75% of their exposures from like-minded sources."
  • For the treatment group, “less exposure to like-minded sources did not lead to a proportional increase in exposure to cross-cutting sources.” Instead, it led to an increase in exposure to content that was neither like-minded nor cross-cutting.

Takeaways

  • The authors find that only a small proportion of the content that Facebook users see explicitly concerns politics or news and relatively few users have extremely high levels of exposure to like-minded sources. However, a majority of the content that active adult Facebook users in the US see on the platform comes from politically like-minded friends or from Pages or groups.
  • Reduced exposure to like-minded content also led to reduced exposure to uncivil content and misinformation.
  • Why did reduced exposure to like-minded content (the opposite of an echo chamber) not affect polarization? The authors suggest several reasons:
    • “Political information and partisan news—the types of content that are thought to drive polarization— account for a fraction of what people see on Facebook.”
    • “Large shifts in exposure on Facebook may be small as a share of all the information people consume.”
    • “Persuasion is simply difficult—the effects of information on beliefs and opinion are often small and temporary and may be especially difficult to change during a contentious presidential election.”

5. The effects of Facebook and Instagram on the 2020 election: A deactivation experiment

Published in Proceedings of the National Academy of Sciences, 13 May 2024.

Context

Facebook and Instagram are often criticized for increasing political polarization via recommendation algorithms that often promote divisive content and for spreading misinformation. Fears of online polarization and misinformation are especially acute in election season. This paper examined whether disengaging from Facebook and Instagram in the run-up to election day changed political attitudes and, if so, how. It was led by Stanford University economists Matthew Gentzkow and Hunt Allcott.

Research Questions

  • This study examined “the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 weeks before the 2020 U.S. election.”
  • Specifically, this study sought to estimate the effects of deactivating Facebook and Instagram accounts on “consumption of other apps and news sources, factual knowledge, political polarization, perceived legitimacy of the election, political participation, and candidate preferences.”

Methods

Sampling and Recruitment

  • The researchers ran two parallel experiments with Facebook and Instagram as the respective “focal platform.”
  • For each focal platform, “Meta drew a stratified random sample of users who lived in the United States, were at least 18 years old, and had logged into their account at least once in the past month.”
  • Meta placed survey invitations in these users’ focal platform news feeds from August 31 to September 12, 2020. Those who were willing to deactivate their accounts were tasked with completing baseline surveys in order to begin data collection and analysis. “Individual-level participation in the experimental analyses and surveys was compensated and required informed consent.”
  • “Just after the baseline survey, participants were randomized into two groups: Deactivation (27%) and Control (73%). The Control group was informed that they would receive $25 if they did not log in to their focal platform for the next week, while the Deactivation group was informed that they would receive $150 if they did not log in to their focal platform for the next 6 weeks.”

Experimental Design

  • The primary analysis samples were “limited to participants who used the focal platform for more than 15 min per day at baseline.”
  • The sample was weighted to be “representative of U.S. focal platform users on race, political party, education, and (among those with more than 15 minutes of daily use) baseline account activity.”
  • The study employed an “instrumental variables regression to estimate the causal effect of deactivation while accounting for imperfect compliance with deactivation.”
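
A minimal sketch of the instrumental-variables logic for imperfect compliance: with a binary offer (the instrument) and binary actual deactivation, the simplest IV estimate is the Wald estimator, i.e., the intent-to-treat effect divided by the first-stage difference in deactivation rates. The data below are invented, and the study's actual specification includes weights and covariates not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Invented data: random assignment to the deactivation offer (the instrument),
# imperfect compliance with actual deactivation, and a survey outcome.
assigned = rng.integers(0, 2, size=n)
complied = (rng.random(n) < np.where(assigned == 1, 0.8, 0.05)).astype(int)
outcome = 0.2 * complied + rng.normal(0, 1, n)  # true effect of deactivation = 0.2

# Wald / IV estimate: intent-to-treat effect scaled by the first-stage
# difference in actual deactivation rates.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
first_stage = complied[assigned == 1].mean() - complied[assigned == 0].mean()
print(f"IV estimate of the deactivation effect: {itt / first_stage:.3f}")  # ~0.2
```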

Results

  • “The effects of both Facebook and Instagram deactivation on our overall knowledge index are small and insignificant.” The knowledge index refers to the “average of standardized scores on three sets of factual questions: i) election knowledge (knowledge of candidates’ policy positions); ii) news knowledge (correctly distinguishing recent news events from plausible placebo events that had not happened); and iii) fact knowledge (correctly distinguishing true statements from misinformation that was circulating about topics such as COVID-19 and fraudulent ballots).” (A minimal sketch of composing such an index follows this list.)
  • While deactivation did not significantly affect the knowledge index, the study found that “Facebook deactivation decreased news knowledge by a point estimate of 0.098 SD (P<0.01, Q<0.01) and increased fact knowledge by 0.042 SD (P=0.012, Q=0.132). Increased fact knowledge implies that participants in the Deactivation group were better able to distinguish misinformation from true stories.”
  • “Facebook deactivation reduced trust in political information from Facebook by a point estimate of 0.040 SD (P<0.01, Q<0.01) without affecting trust in Instagram, while Instagram deactivation reduced trust in information from Instagram without affecting trust in Facebook.” The researchers attribute this finding to the fact that “time away from a platform made users more aware of the amount of low-quality or inaccurate information to which they had been exposed.”
  • On the question of polarization, the study found that “neither Facebook nor Instagram deactivation had a significant effect. The (statistically insignificant) point estimates are that both Facebook and Instagram deactivation reduced affective polarization by 0.031 and 0.030 SD (Facebook: P=0.049, Q=0.189, 95% CI bounds=−0.062,−0.000; Instagram: P=0.074, Q=0.190, 95% CI bounds=−0.064,0.003).”
  • The study also found no effects of deactivation on shaping opinions regarding the perceived legitimacy of elections. Moreover, “neither Facebook nor Instagram deactivation significantly affected turnout, and the 95% CI bounds rule out effects larger than about 1.6 percentage points in either direction.”
  • On the question of vote choice, “some people did change their minds over the study period: about 20% of Control group participants report voting differently at endline than they had reported intending to vote at baseline.”
  • Deactivation did not significantly affect Donald Trump's favorability. While not statistically significant, it is important to note that deactivation “decreased Trump favorability, decreased turnout among Republicans and increased turnout among Democrats and (statistically insignificantly) decreased three secondary outcomes: pro-Republican affect, pro-Republican issue positions, and votes for Republicans in state-level elections.”
  • The researchers also noticed some differences in findings based on party affiliation. “First, Facebook deactivation significantly reduced our overall index of knowledge among Democrats. Second, Facebook deactivation increased polarization among strong Democrats and (insignificantly) decreased it among strong Republicans.”
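
As referenced above, here is a minimal sketch of composing a knowledge index of this kind, assuming "standardized" means z-scoring each component before averaging; the raw scores are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000

# Invented raw scores on the three components described above:
# election knowledge, news knowledge, and fact knowledge.
election = rng.binomial(10, 0.6, n)
news = rng.binomial(8, 0.7, n)
fact = rng.binomial(12, 0.5, n)

def standardize(x):
    return (x - x.mean()) / x.std()

# Knowledge index: the average of the standardized component scores.
knowledge_index = np.column_stack(
    [standardize(election), standardize(news), standardize(fact)]
).mean(axis=1)
print(knowledge_index[:5])
```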

Discussion

  • The authors argue that these findings “paint a nuanced picture of the way Facebook and Instagram influenced attitudes, beliefs, and behavior in the 2020 election.”
  • The study found no significant effects of social media access on polarization, suggesting that “if Facebook access contributes to political polarization, the effect is either small or accumulates over a longer period than 5 weeks.”
  • The authors concluded that “aside from a reduction in online participation, we find no significant impacts of Instagram deactivation on any other primary outcomes. This is true even among younger users, and it suggests that despite Instagram’s rapid growth, Facebook likely remains the platform with the largest impacts on political outcomes.”
  • The study also found “no significant effects of either Facebook or Instagram access on turnout, with CIs that rule out moderate effects.”
  • These findings must be taken with caution because of the study’s limitations, which include the lack of “generalizability, a time-limited intervention, general equilibrium effects, self-reported outcomes, and attrition.” Notwithstanding its limitations, the authors “believe that this study can usefully inform and constrain the discussion of the effects of social media on American democracy.”

6. The Diffusion and Reach of (Mis)Information on Facebook during the U.S. 2020 Election

Published in Sociological Science, 11 December 2024.

Context

The 2020 US election took place amid widespread concerns regarding the threat of online misinformation and the role of social media platforms like Facebook in promoting such content. Given Facebook’s social media dominance, this paper examined if and how misinformation spread on the platform during the 2020 US election and the extent to which Facebook’s content moderation policies were effective in dealing with misinformation in that time period.

Research Questions

This study sought to answer three key questions:

  • “How prevalent were large diffusion events during the U.S. 2020 election and, within that set, how prevalent was misinformation?”
  • “How did the structure and rhythm of diffusion vary across mechanisms (affordances) for dissemination?”
  • “Were content moderation and the set of exceptional rules applied under the “break the glass” umbrella successful at reducing the flow of misinformation?”

These overarching questions were narrowed down to 6 concrete research questions.

  • “How prevalent is broadcast versus viral diffusion?”
  • “How concentrated are the re-sharing distributions in terms of users generating the diffusion events?”
  • “Who, in terms of basic demographics, re-shares most political content and most misinformation?”
  • “Does political content or misinformation generate, on average, larger diffusion trees?”
  • “What affordances (user accounts, Pages, or Groups) are associated with the diffusion of political content or misinformation?”
  • “Which affordances are associated with greater reach (number of views) of a given diffusion tree?”

Methods

  • The researchers collected data from US-based monthly active adult users. They tracked “exposures to and re-sharing of all posts published (publicly or privately) on the platform from July 1, 2020, to February 1, 2021. In other words, the results we report are based on the full set of user, Page, and Group posts created by U.S.-based adult accounts that were shared at least once.”
  • To protect user privacy, the researchers only reported “tree-level data for the posts that were shared (both privately and publicly) at least k = 100 times by U.S. users (1.2 percent or 12.1 M posts).”
  • The researchers used diffusion trees to map the spread of Facebook posts. As the authors note, “We reconstruct the diffusion of these posts in the form of network trees. Tree data structures are hierarchical networks with a root node (i.e., the original post) and nested layers of re-sharing activity (if the post is re-shared).” Put simply, if a post had few reshares, the tree would be narrow, while posts that were extensively shared across networks would tend to be wider.
  • In total, 114,000 trees were labeled as misinformation. In contrast, only a small number of fact-checked trees (2000) were labeled true, likely due to a “selection bias in the content that fact-checkers evaluate, a sample that contains more problematic content than truthful content.”
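
To illustrate the tree representation, the sketch below builds a toy diffusion tree and computes its size, depth, and one common "virality" measure (structural virality, the mean pairwise distance between nodes). Whether this is the exact virality metric used in the paper is an assumption, and the example tree is invented.

```python
from collections import defaultdict, deque

# Toy diffusion tree: each edge points from a post to a re-share of it.
# "p0" is the original (root) post.
edges = [("p0", "p1"), ("p0", "p2"), ("p1", "p3"), ("p3", "p4"), ("p2", "p5")]

children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

def depth(node):
    """Longest chain of re-shares below a post."""
    if not children[node]:
        return 0
    return 1 + max(depth(c) for c in children[node])

def structural_virality():
    """Mean shortest-path distance between all pairs of nodes: low for a
    broadcast (star-shaped) cascade, higher for long viral chains."""
    adj = defaultdict(list)
    nodes = set()
    for parent, child in edges:
        adj[parent].append(child)
        adj[child].append(parent)
        nodes.update((parent, child))
    total, pairs = 0, 0
    for src in nodes:  # BFS from every node
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(nodes) - 1
    return total / pairs

print("size:", len(edges) + 1)  # 6 posts in the cascade
print("depth:", depth("p0"))    # 3 (p0 -> p1 -> p3 -> p4)
print("virality:", round(structural_virality(), 2))
```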

Results

  • The data seemed to suggest that “virality is very unusual: most diffusion chains activate a relatively small number of users close to the original source.” They also found that most reshares happened within 24 hours of the original post and came from a relatively small subset of users.
  • A core finding from this paper was also that “trees identified as misinformation are, on average, deeper and have higher virality.” However, “misinformation trees gather fewer re-shares at each step of the diffusion process and crucially that they grow more slowly, contrary to what past research has claimed about misinformation.”
  • Regarding factors that may promote online misinformation on Facebook, the study found that “the two most important factors associated with the growth of misinformation trees (other than being initiated by Pages) are (a) whether the content diffused is classified as political news and (b) the average user age: larger misinformation trees are generated by older users.”
  • The study also investigated the sources of misinformation on Facebook and found that “users may be the source for most posts labeled as misinformation, but it is Pages that accumulate, per tree, more of the misinformation views; users lack, for the most part, the broadcasting potential that Pages have.”
  • Finally, the researchers noted a surprising finding: “the quantity of large diffusion trees steadily decreased from July until Election Day, and the views of large misinformation trees during the two weeks prior to the election (i.e., about the time that the “break the glass” measures were being instituted) strikingly plummeted to near zero.” The authors reason that content moderation efforts ramped up closer to the election with “break the glass” measures proving particularly effective.

Takeaways

  • “Facebook creates a broadcasting (rather than viral) mode of exposure when it comes to the overall reach of information, with Pages (not Groups) acting as the key engine for this type of dissemination.”
  • Misinformation’s spread on Facebook follows the opposite pattern, relying on “viral spread, powered by a tiny minority of users who tend to be older and more conservative.”
  • Contrary to prior research, this study suggests that misinformation spreads slowly and receives fewer views.
  • The authors believe this is “the most ambitious analysis to date on how content propagates on social media” and provides “unambiguous evidence that diffusion dynamics are contingent on platform affordances (like the support for Pages) and the less stable set of principles encoded in the form of content moderation policies.”
  • However, one must be cautious before making causal judgments or generalizations from this study. For one, this study relied on fact-checking organizations to determine misinformation. If misleading posts evaded fact-checking, this study may “underestimate the prevalence of this type of posts on the platform.” The study also examined only misinformation spread via reshares, ignoring other means through which users could spread misinformation on the platform.
  • Given that this study examined the 2020 US election, it is hard to ascertain whether these results stand the test of time. Similar research projects must be periodically undertaken to help understand the evolving landscape of online misinformation and the role platforms play in this regard.

- - -

Note: Tech Policy Press will periodically update this post as the US 2020 research project releases future publications. If you have a comment or suggestion on this material, please reach out to me.
