
How to Reduce the Danger of Social Media Facilitating Political Intimidation and Violence

Paul M. Barrett / Sep 5, 2024

Paul Barrett is the deputy director of the NYU Stern Center for Business and Human Rights, and the author of a new report on social media’s role in political violence.

Supporters of then-President Donald Trump scale the facade of the US Capitol on January 6, 2021. Shutterstock

With an election approaching and the prospect of post-Election Day clashes about the integrity of voting procedures, the risk of political intimidation and violence in the United States is rising. One factor exacerbating this problem in recent years has been the exploitation of social media platforms to incite and organize violent offline behavior.

Recall:

  • Insurrectionist supporters of former President Donald Trump inspired and organized the historic attack on the US Capitol on January 6, 2021, on mainstream platforms like Twitter (now X), Facebook, and YouTube; fringe sites like TheDonald.win, Gab, and Parler; and extremist “image boards” like 4chan.
  • Almost exactly two years later, backers of former Brazilian President Jair Bolsonaro took to TikTok and Facebook to spread false claims of election fraud and encourage attacks on government buildings in Brasilia.
  • In 2023 and 2024, menacing posts on platforms ranging from Elon Musk’s X to Donald Trump’s Truth Social have been linked to a spike in death threats against members of Congress, judges, and prosecutors. Similarly, right-wing activists’ incendiary posts on X have preceded intimidation of faith-based organizations that help recent immigrants to the US.
  • This year, false social media posts attacking asylum seekers incited right-wing mobs to carry out anti-immigrant violence in Britain, Portugal, and other European countries.

Political violence is fed by multiple factors, not least the example of political leaders. In the US, former President Trump has fanned the flames of lawlessness by promising, if reelected, to pardon January 6 rioters and seek “retribution” against political foes. Adding fuel to the fire in the US are right-wing cable television networks, radio outlets, and online influencers. Left-wing extremists occasionally use social media to threaten or organize violence, but with nowhere near the frequency of their counterparts on the right.

An answer to industry obfuscation

The social media industry, led by Meta, the company whose platforms claim the largest collective user base, has repeatedly tried to deflect responsibility for its role. Testifying before Congress in the wake of January 6, Mark Zuckerberg, Meta’s multi-billionaire founder and chief executive, tried to dodge accountability. “We did our part to secure the integrity of the election,” he told lawmakers in March 2021. “The reality is our country is deeply divided right now, and that isn’t something that tech companies alone can fix…. Some people say that the problem is that social networks are polarizing us, but that’s not at all clear from the evidence or research.”

Zuckerberg employed two misleading rhetorical moves. First, far from successfully securing election integrity, Meta conspicuously failed to stop Facebook Groups from becoming a key venue for the incitement and planning of the “Stop the Steal” campaign to overturn a legitimate election. Second, no serious analyst contends that “tech companies alone can fix” the vicious and sometimes-violent divisiveness now eroding the American political system. That is a classic straw-man argument, apparently intended to confuse listeners.

A new report I’ve written for the NYU Stern Center for Business and Human Rights, where I am deputy director, clarifies what the social science research actually says. My colleagues and I reviewed more than 400 studies — most published by peer-reviewed academic journals, some by university-affiliated research groups or independent think tanks. The research consistently shows that social media is exploited to facilitate political intimidation and violence. What’s more, certain features of social media platforms make them particularly susceptible to such exploitation, and some of those features can be changed to reduce the danger.

Among the platform features we examined are:

  • Facebook’s Groups product, which, in addition to its role in January 6, helped sometimes-violent QAnon adherents to grow into a full-blown movement devoted to the delusion that former President Trump has secretly battled “deep state” bureaucrats and Satanic pedophiles.
  • Instagram’s comments function, which has allowed the Iranian government to threaten dissidents with sexual assault and death as a way of silencing them.
  • TikTok’s powerful recommendation algorithm, which in one experiment promoted violent videos, including incitement of students to launch attacks at school.

The report also includes a special case study, written by Dean Jackson, Tim Bernard, and Justin Hendrix of Tech Policy Press, examining the social science on how social media facilitated the January 6 attack. They found that, on balance, the body of research largely confirms the findings of a 122-page memo on the role of social media produced by the investigative staff of the House Select Committee to Investigate the January 6th Attack on the United States Capitol, of which Jackson was a part.

Ways to reduce harm

There are steps social media companies can take to diminish their contribution to political violence. Here are some of our top suggestions for the industry:

  • Sound the alarm. To reduce risks, social media companies first need to end their tendency to deflect and obfuscate; instead, they should acknowledge the role they play in facilitating political strife.
  • Put more people on the content moderation beat. In 2022 and 2023, Meta, Twitter (X), and other major social media companies laid off thousands of “trust and safety” employees — the people who devise and enforce policies aimed at reducing online hatred and incitement. This ill-advised retreat must be reversed.
  • Confront election delegitimization. Political threats and actual confrontations often stem from irresponsible efforts to undermine trust in elections. Tech companies need to act aggressively to label and/or remove baseless allegations of election fraud and redirect users to authoritative sources of information.
  • Make design changes to mitigate harm. Rather than allow user anonymity, social media companies should require users to verify their identity (with provisions for storing verification data securely and/or erasing it once it’s no longer needed). Platforms should monitor groups for the prevalence of content advocating violence, regardless of partisan orientation. Invitations to, and recommendations of, volatile groups should be shut down, as should the groups themselves if they become dangerous. More broadly, recommendation systems should be redesigned to reduce, rather than heighten, sectarianism. Sheer user engagement, which often rewards hateful and otherwise sensationalistic posts, should be deemphasized as a criterion for amplification.

There are things government can do, as well:

  • Enforce existing laws. With healthy respect for the First Amendment, the US government needs to be vigilant about enforcing criminal laws banning political intimidation and the incitement of violence. The Federal Trade Commission (FTC), Federal Election Commission, and their state counterparts also must use their full authority to enforce existing laws against election fraud, voter suppression, cyberattacks, and other offenses relevant to protecting elections.
  • Protect election workers. To arrest the exodus of election workers, governments should raise the stakes for those who seek to intimidate these public servants by stiffening existing penalties and introducing new ones that take into account the coordinated disinformation campaigns that lie behind the harassment.
  • Enhance federal authority to oversee digital industries. Longer term, the federal government needs to regulate digital industries more systematically. Congress should expand the consumer protection authority of the FTC to enable sustained oversight of the sector.

Diminishing the risks that technology poses to our democracy will demand additional resources and energy in both corporate suites and the halls of government.
