YouTube and the 'Big Lie': Research Shows Cause for Concern
Justin Hendrix / Sep 2, 2022

The "Big Lie" that the 2020 election was 'stolen' from former President Donald Trump is a persistent alternate reality for a sizable number of U.S. voters. As a result, violent threats against election workers are a substantial problem, according to a recent U.S. House Oversight Committee report. False claims about the 2020 election have been used to justify state laws that seek to limit voter participation, while the election of even one of the many election-denying candidates running for key positions overseeing the vote in battlegrounds such as Arizona, Pennsylvania or Wisconsin could help throw the next election cycle into chaos.
Mis- and disinformation on social media are not the primary cause of belief in the Big Lie, but given the scale of social networks, any marginal effect that contributes to the propagation of false election claims could have substantial impact.
It is in this context that, in a blog post this week, YouTube announced its plans to "limit the spread of harmful misinformation" in the 2022 U.S. midterm elections. The company says it intends to recommend "authoritative national and local news sources like PBS NewsHour, The Wall Street Journal, Univision and local ABC, CBS and NBC affiliates," and to add "a variety of information panels in English and Spanish from authoritative sources underneath videos and in search results about the midterms." And, YouTube promises to take action on election denial, including videos that claim "widespread fraud, errors, or glitches occurred in the 2020 U.S. presidential election, or alleging the election was stolen or rigged."
But given the prevalence of such false claims, how might YouTube's algorithms, which are designed to recommend content that users want, contribute to their propagation, particularly to users already inclined to accept them?
A snapshot of data from the 2020 election suggests cause for concern. On the same day that YouTube released its plans for the midterms, the Journal of Online Trust and Safety published the results of a study by researchers at NYU's Center for Social Media and Politics (CSMaP) that found "a systematic association between skepticism about the legitimacy of the election and exposure to election fraud-related content on YouTube."
The study, titled "Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden," focused on "a specific type of content—YouTube videos about fraud in the 2020 US presidential election—to test whether online recommendation systems potentially contributed to a polarized information environment in which content about Trump’s claims were disproportionately suggested to participants who were most likely to believe them."
The answer? Yes, they likely did. The "findings suggest that people most skeptical of the election results were more likely to be shown videos on the topic of fraud, which may have increased or maintained their interest." Not all of these videos necessarily supported the fraud narrative; some merely reported on it, and some refuted it. But a portion of them clearly did endorse it. And while there can be no doubt that political and media elites, including partisan cable news, are the actors most responsible for driving the supply of and demand for false election claims, the findings suggest that YouTube's algorithms played a role in meeting that demand, particularly among those already skeptical of elections.
The sample size for the study was not enormous; only a few hundred individuals participated. But one of the study's authors, Dr. James Bisbee, a former postdoctoral researcher at NYU CSMaP and now an assistant professor at Vanderbilt University, says that in terms of generalizability the results likely represent a "lower bound," since the sample "skewed liberal, Democratic and better educated than the US population writ large." Had the sample included more Trump supporters, "the correlation as well as the descriptive results on the overall prevalence of these types of videos could likely be higher," said Bisbee.
The results of the research may interest the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, which issued a subpoena to Alphabet CEO Sundar Pichai earlier this year seeking information "concerning how Alphabet developed, implemented, and reviewed its content moderation, algorithmic promotion, demonetization, and other policies that may have affected the January 6, 2021 events." While the NYU researchers point out they have no evidence any of the individuals in their sample played any role in the events at the Capitol, it is worth considering whether the relationship observed in their study might have played some adverse role when considered at scale, and whether YouTube's own researchers were at all aware of it.
As another election looms, YouTube promises it is "limiting the spread of harmful election misinformation by identifying borderline content and keeping it from being widely recommended" (emphasis mine). But in a highly contentious political environment, where false claims may play an outsized role in motivating the actions of people convinced elections are being tampered with, even content that is only narrowly recommended may leave the platform with some responsibility for adverse effects. What's more, these dynamics are less well understood outside the United States. For instance, how might the propagation of false claims about the upcoming election in Brazil affect that country?
The only thing that is clear is that each election cycle is a live experiment.