A new study based on a massive dataset of posts collected from Facebook pages and groups in the run-up to the 2020 U.S. Presidential election finds that visual misinformation is widespread across the platform, and that it is highly asymmetric across party lines, with right-leaning images five to eight times more likely to be misleading.
In “Visual misinformation on Facebook,” published this week in the Journal of Communication, scholars from Texas A&M University’s Department of Communication & Journalism, Columbia University’s Tow Center for Digital Journalism, and the George Washington University’s Institute for Data, Democracy & Politics collected and analyzed nearly 14 million posts from more than 14,000 pages and 11,000 public groups from August through October 2020.
From this corpus, the researchers arrived at a representative dataset of political images, and another of images that specifically depicted political figures. An analysis found that 23% of the sampled political images contained misinformation, while 20% of those that depicted a political figure were misleading.
A Novel Methodology
“Our study is the first scholarly attempt we are aware of to provide valid, platform-scale estimates of the prevalence of visual misinformation on Facebook — and indeed, the first study on any social media platform to estimate the scale of U.S. politics-based visual misinformation,” write the authors, Texas A&M’s Yunkang Yang, Columbia’s Trevor Davis, and GWU’s Matthew Hindman.
“We conducted the first large-scale visual misinformation [study] on Facebook, also the first large-scale visual misinformation [study] re: US politics on any social media. Combining expert coding with computer vision, we found that more than 20% of public image posts contained misinformation.” (Yunkang Yang, PhD, @yangyunkang, March 2, 2023)
Studying images and video content on social media platforms is more difficult than studying text. This study’s approach to the “ultra-large-scale collection of image posts on Facebook” started with a “megalist of the Facebook pages and public groups that are most widely followed and generate the most engagement from users.” The resulting “dataset is so large that it approaches complete coverage of all top U.S.-based political public pages and groups,” and captures “the overwhelming majority of interactions generated by all U.S.-based political public groups and pages.”
From this massive corpus of posts, the researchers used a range of tools and methods, including applying Amazon’s facial recognition API to identify political figures, Google’s reverse image search to identify images that had been previously fact checked by journalists, and hand coding to mark an image as misleading if it “promotes unsubstantiated conspiracy theories, spreads elements of known political disinformation campaigns, makes claims that are demonstrably false, or places facts in a misleading context.” (The coding scheme also took into account humor and satire: “If viewers would most likely need to accept a falsehood in order to find an image post ‘funny,’ we classify it as misinformation.”)
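The hand-coding rule the authors describe amounts to a disjunction of four criteria, plus the satire test. As a rough illustration only, with field names that are my own assumptions rather than the authors' actual codebook, the rule could be sketched as:

```python
from dataclasses import dataclass

# Hypothetical encoding of the study's coding scheme; these field names
# are illustrative assumptions, not the researchers' actual instrument.
@dataclass
class ImagePost:
    promotes_conspiracy: bool = False       # unsubstantiated conspiracy theories
    known_disinfo_campaign: bool = False    # elements of known disinformation campaigns
    demonstrably_false: bool = False        # claims that are demonstrably false
    misleading_context: bool = False        # facts placed in a misleading context
    humor_requires_falsehood: bool = False  # 'funny' only if a falsehood is accepted

def is_misinformation(post: ImagePost) -> bool:
    """An image post is coded as misleading if it meets any one criterion."""
    return any([
        post.promotes_conspiracy,
        post.known_disinfo_campaign,
        post.demonstrably_false,
        post.misleading_context,
        post.humor_requires_falsehood,
    ])
```

The point of the any-criterion structure is that a single trigger, including the satire rule, is enough to classify a post as misinformation; the criteria are not weighted or summed.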
Three Key Hypotheses
With the datasets of images related to U.S. politics and depicting U.S. political figures in hand, the researchers set out to determine the scale of “image-based misinformation on Facebook pages and public groups in the lead-up to the 2020 U.S. election,” “the partisan character of image-based misinformation on Facebook,” and “whether image-based misinformation attracts more engagement than nonmisinformation.”
Out of 1,000 images in the sample focused on U.S. politics, 226 contained elements of misinformation, suggesting that roughly 23% of all such images contain misinformation. As for the partisan character of image-based misinformation, 39% of right-leaning image posts contained elements of misinformation, compared to only 5% of left-leaning image posts.
Likewise, 20% of the public-figure sample of images contained misinformation. While 176 of the 588 right-leaning image posts in that sample of 1,000 images (about 30%) were found to be misleading, only 20 of the 326 left-leaning images (about 6%) contained misinformation.
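The partisan gap follows directly from the raw counts. A quick check of the arithmetic, using only the numbers quoted in the article:

```python
# Public-figure sample: raw counts reported in the article.
right_misleading, right_total = 176, 588
left_misleading, left_total = 20, 326

right_rate = right_misleading / right_total  # about 0.30, i.e. 30%
left_rate = left_misleading / left_total     # about 0.06, i.e. 6%
gap = right_rate / left_rate                 # about 4.9x, the low end of the range

# Political-images sample: the 39% vs. 5% rates give the upper end
# of the 'five to eight times' range cited at the top of the article.
gap_political = 0.39 / 0.05                  # 7.8x
```

Taken together, the two samples bracket the "five to eight times" asymmetry the study reports.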
The only vaguely encouraging result, perhaps, is that the researchers “find no evidence that misleading image posts generate significantly more engagement than nonmisleading posts, once the size of the group or the number of page followers are controlled for.” But even on this point it’s not necessarily good news. “There is one caveat: with this dataset, we cannot reject the possibility that misleading content might improve (or detract from) a page or group’s audience growth over time, which might be a question for future research.”
The researchers found four key types of misleading images: those that were altered to be misleading, memes with misleading text, “unaltered images with false or misleading captions or labels,” and screenshots of social media posts that are themselves misleading, such as tweets. In this set, the researchers found “four major misinformation themes: (a) images that depicted Joe Biden as senile, (b) images that targeted Joe Biden’s son Hunter Biden, (c) images suggesting that Democratic candidates endorsed violence, and (d) images that promoted the QAnon conspiracy theory.”
Damage to Democracy
While the study’s primary focus is on the impact of image-based misinformation on citizens’ ability to make informed decisions, the researchers point out that they discovered a disproportionate number of political image posts that direct hate towards groups such as women and racial minorities. “Democratic female public figures—especially those of color—seem particularly likely to be targeted for abuse,” they write.
This “identity propaganda” seeks to delegitimize non-white groups, exploit stereotypes, and undermine representation. The authors say that future research should prioritize investigating image-based attacks and identifying “image characteristics that contribute to the visual framing of non-white public figures.”
While some may hold that social media companies’ moderation policies have addressed the problem of misinformation over the past few years, any discussion of the issue that overlooks image posts is insufficient, say the authors.
“Ultimately, our results raise profound concerns about Facebook’s impact on democratic politics,” they write. “Right-wing pages and groups, especially, are still posting a flood of falsehoods on the platform. The very pervasiveness of visual misinformation on Facebook makes its impacts difficult to measure, but they are likely to be highly corrosive to democratic self-government.”
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.