In his March 25th testimony to the House Energy & Commerce Committee, Facebook CEO Mark Zuckerberg addressed the divisiveness in U.S. politics by blaming political elites and the media environment, arguing “that technology can help bring people together.”
New research in a range of disciplines suggests today’s platforms are in fact part of the problem, and points to the need to study the information ecosystem from a higher vantage point, looking for connections and effects across the various groups of actors, media and platforms that influence the public sphere.
Social media facilitates polarization through “social, cognitive, and technological processes”
Writing in Trends in Cognitive Sciences, researchers in psychology and neural science at NYU and the University of Cambridge reviewed empirical research to evaluate the relationship between social media and polarization, observing that while “social media is unlikely to be the main driver of polarization,” it is “often a key facilitator.” They find that the key factors shaping the role of social media with regard to polarization are “partisan selection, message content, and platform design and algorithms.”
Walking through each, the researchers detail experimental results that correspond to these factors. For instance, they find that in the category of partisan selection, “cognitive biases in information seeking, belief updating, and sharing may all increase polarization,” and that while these biases emerge from the user, they “may also interact with platform features to amplify the effect,” such as by triggering algorithms to increase exposure to similar content.
With regard to platform design and algorithms, research points to the conclusion that different platforms contribute to division in distinct ways. “Some platforms’ algorithms seem to amplify content that affirms one’s social identity and pre-existing beliefs,” say the authors, citing Facebook’s News Feed as an example.
Interventions on false claims could backfire, exacerbate spread on other platforms
One problem area where social media platforms have increased efforts to mitigate divisive and polarizing content is disinformation. But efforts to stop the spread of false and misleading content and ideas suggest the problem is a tricky one, and some interventions could produce undesired outcomes.
Writing in the Harvard Kennedy School’s Misinformation Review, researchers from NYU’s Center for Social Media and Politics (CSMaP) examined “tweets from Former President Donald Trump, posted from November 1, 2020 through January 8, 2021,” a period in which he was sharing a high volume of false and misleading claims related to the outcome of the 2020 election. They focused in particular on tweets “that were flagged by Twitter as containing election-related misinformation.”
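For readers who want a concrete picture of what this kind of dataset construction involves, here is a minimal sketch of such a filter. It is purely illustrative: the field names (created_at, author, flagged_for_election_misinfo) and the author handle are assumptions for the example, not the study’s actual schema or code.

```python
# Purely illustrative sketch of the dataset-construction step described
# above. Field names and the author handle are assumptions, not the
# study's actual schema or Twitter's API.
from datetime import datetime, timezone

STUDY_START = datetime(2020, 11, 1, tzinfo=timezone.utc)
STUDY_END = datetime(2021, 1, 8, 23, 59, 59, tzinfo=timezone.utc)

def select_flagged_trump_tweets(tweets):
    """Keep tweets from the study window that Twitter flagged for
    election-related misinformation."""
    return [
        t for t in tweets
        if t["author"] == "realDonaldTrump"              # hypothetical field
        and STUDY_START <= t["created_at"] <= STUDY_END  # hypothetical field
        and t["flagged_for_election_misinfo"]            # hypothetical field
    ]
```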
The researchers then compare warning labels with blocking messages from engagement altogether, and find that “while blocking messages from engagement effectively limited their spread, messages that were flagged by the platform with a warning label spread further and longer than unlabeled tweets.”
But the researchers did not simply look at the implications of such interventions on Twitter. They also examined how the messages spread on other platforms, including Facebook, Instagram and Reddit. What they found points to the complexity of content moderation across multiple platforms:
Our findings underscore the networked nature of misinformation: posts or messages banned on one platform may grow on other mainstream platforms in the form of links, quotes, or screenshots. This study emphasizes the importance of researching content moderation at the ecosystem level, adding new evidence to a growing public and platform policy debate around implementing effective interventions to counteract misinformation.
This finding emerges from comparing the average trajectory of messages on one platform versus others. For instance, “messages that received either a soft or no intervention on Twitter had a similar average number of posts on public Facebook pages and groups,” while “messages that received a hard intervention on Twitter had a higher average number of posts, were posted to pages with a higher average number of page subscribers, and received a higher average total number of engagements.” In other words, blocking a message on Twitter may help it spread further on Facebook.
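As a rough illustration of what such a comparison involves, the sketch below averages cross-platform spread metrics by the type of Twitter intervention a message received. The column names and numbers are invented for the example, not drawn from the study.

```python
# Rough illustration of the cross-platform comparison described above.
# All column names and values are invented for the example.
import pandas as pd

# One row per message: the Twitter intervention it received, and how the
# same message later spread on public Facebook pages and groups.
messages = pd.DataFrame({
    "intervention":   ["none", "soft", "hard", "hard", "soft", "none"],
    "fb_posts":       [12, 14, 55, 61, 11, 13],
    "fb_subscribers": [30_000, 28_000, 90_000, 110_000, 31_000, 29_000],
    "fb_engagements": [1_500, 1_700, 9_800, 12_000, 1_400, 1_600],
})

# Average each Facebook spread metric by the type of Twitter intervention;
# in the study, hard-intervention messages scored highest on all three.
summary = messages.groupby("intervention")[
    ["fb_posts", "fb_subscribers", "fb_engagements"]
].mean()
print(summary)
```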
The researchers point out that they have no causal evidence as to whether Twitter’s warning labels worked, or not. “Nonetheless, the findings underscore how intervening on one platform has limited impact when content can easily spread on others,” added paper co-author and NYU research scientist Megan A. Brown. “To more effectively counteract misinformation on social media, it’s important for both technologists and public officials to consider broader content moderation policies that can work across social platforms rather than singular platforms.”
New concerns about the relationship between gender and incivility towards female politicians on social media
Writing in the journal Social Media + Society, Boston University researchers look at Twitter and the role it plays in “the facilitation of political discourse” by measuring and analyzing the discourse related to the top three 2020 Democratic primary candidates: Senator Elizabeth Warren, Senator Bernie Sanders, and then-former Vice President Joe Biden. Concerned that “incivility has the potential to stifle democratic discourse and cause adverse effects within the political sphere,” the researchers collected more than 18 million tweets between August 1 and September 30, 2019.
Analyzing the set, the researchers found that 22.5% of a representative sample of the tweets contained language categorized as “uncivil.” While “the greatest proportion of uncivil tweets was directed to Joe Biden, the data indicates a statistically significant relationship between candidate gender and incivility,” the researchers conclude that the “results show that the highest frequency of uncivil conversation surrounded Senator Elizabeth Warren, the only female candidate in [the] study.” Indeed, a text mining analysis found that “‘Murder’ was one of the most frequently associated terms that co-occurred with ‘Warren’,” according to the paper. Such a stark finding suggests there is still more work to be done on how to address uncivil dialogue and attacks toward political candidates and officials based on gender.
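To make the co-occurrence idea concrete, here is a minimal sketch of the kind of term co-occurrence count such an analysis implies. The example tweets and the crude tokenization are invented for illustration; the paper’s actual text mining pipeline is not reproduced here.

```python
# Minimal sketch of a term co-occurrence count, assuming a simple
# tokenization; the example tweets are invented for illustration.
import re
from collections import Counter

tweets = [
    "Warren has a plan for everything",
    "this is the murder of civil discourse, Warren edition",
    "Biden leads the polls again",
]

target = "warren"
co_counts = Counter()
for tweet in tweets:
    tokens = re.findall(r"[a-z']+", tweet.lower())  # crude tokenizer
    if target in tokens:
        # Tally every other token appearing in the same tweet as the target.
        co_counts.update(t for t in tokens if t != target)

print(co_counts.most_common(5))
```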
The role that social media plays in the divisive politics of the United States is a subject of interest not just to researchers but to regulators, politicians and civil society actors who wish to improve the nature of political conflict and achieve more just and equitable outcomes. A constant refrain from researchers studying these issues is the need for access to platform data.
“Research on social media’s impact on society has made tremendous strides in the last decade. But our work has often been hampered by a lack of platform transparency and access to the necessary data,” said NYU Professor Joshua A. Tucker, co-director of CSMaP and a co-author of the study on Twitter warning labels and election disinformation. “Increasing data access is critical to measuring the ecosystem-level impact of content moderation and producing rigorous research that can inform evidence-based public and platform policy.”
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.