The immediate danger of insurrection seems to have been averted in the aftermath of the January 6th siege on the Capitol. Rumored threats by militia and other right-wing groups have not materialized, even as the Capitol remains heavily guarded. But the forces that led to the violence that day remain intact, chiefly the disinformation and conspiracies that drove the mob up the Capitol steps. More than two months later, a number of falsehoods continue to circulate on social media: that "antifa" was responsible, that former President Trump will return to the White House, and, most damaging of all, the "big lie" that Trump won the 2020 election.
To bring back a shared sense of reality, the response to this maelstrom of lies must be unified. First, social media platforms must double down on efforts to remove hate speech and extremist disinformation intended to incite violence. Second, there must be more unity of effort in the community of researchers who investigate, track, and take action against far-right groups, online disinformation and misinformation, and conspiracy theories. And third, the government must introduce new measures to hold the platforms accountable.
The Role of Platforms
In the days since January 6th, social media platforms have taken more significant action against incitement to violence and against conspiracy theorists, with Twitter, Facebook, and YouTube suspending Donald Trump's accounts and Apple, Google, and Amazon deplatforming the social media app Parler. Twitter reportedly already has the technology to remove a significant amount of hate speech from its platform, and Facebook's own research shows that the platform hosts billions of hate speech posts per day. Based on the technology we've developed internally at Human Rights First, I know that it is possible to detect disinformation and hateful, extremist content at scale, with a high level of accuracy, so that the perpetrators of this speech are fairly warned and deplatformed, and the targets of their ire protected. No one should feel unsafe in an online forum, though today that standard seems like a distant, utopian dream.
While clear disinformation and hateful, violent speech can be eliminated to a large degree, more borderline content can be decelerated so that it does not spread as quickly. At the user level, platforms could detect when someone is posting something potentially hateful and delay the post by a few minutes, giving the author an opportunity to cool down and rethink whether they want to put that content up for all the world to see. Deplatform, decelerate, delay, and the types of content that led to the Capitol attack will be reduced significantly.
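The detect-and-delay step described above can be illustrated in a few lines of code. This is a minimal sketch, not any platform's actual implementation: the `toxicity_score` function is a stand-in for a real machine-learning classifier, and the five-minute cooldown and 0.5 threshold are assumed values for illustration only.

```python
from dataclasses import dataclass

COOLDOWN_SECONDS = 300  # assumed 5-minute cooling-off period

def toxicity_score(text: str) -> float:
    """Stand-in for a real classifier; flags a toy keyword list."""
    flagged = {"hateword", "slur"}
    return 1.0 if set(text.lower().split()) & flagged else 0.0

@dataclass
class PendingPost:
    author: str
    text: str
    release_at: float  # epoch seconds when the post may go live

def submit(author: str, text: str, now: float, threshold: float = 0.5):
    """Publish immediately, or hold borderline content for a cooldown."""
    if toxicity_score(text) >= threshold:
        # Borderline content is queued rather than blocked outright.
        return PendingPost(author, text, release_at=now + COOLDOWN_SECONDS)
    return None  # None means "publish now"
```

During the cooldown window, the author could be shown their own pending post and offered the chance to edit or withdraw it, which is the "rethink" opportunity the paragraph describes.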
The Role of the Research Community
In the days leading up to January 6th, researchers warned of the violence to come; they did not have to look hard to find violent intent online. Now, the Capitol attackers and their supporters continue to forge their own narratives, further and further disconnected from reality. Having united that day, the various factions risk combining into a single, unified hate movement. From those wearing antisemitic paraphernalia to prominent white nationalists, from QAnon adherents to far-right anti-government groups like the Oath Keepers, the events of January 6th realized many of the fears that social media researchers have been expressing for years: the virtual meshing of these groups, joined by hashtags, led to a physical meshing, as adherents of these dangerous ideas converged inside the Capitol with the common goal of violently disrupting the democratic process. There is an urgent need for the research community to redouble its efforts to understand extremism and radicalization from a wide range of perspectives, using the tools of many disciplines.
Funders should incentivize collaboration across institutions and disciplines. At the same time, researchers should be careful about which government agencies they share their findings with: the disheartening news of political appointees, law enforcement officers, active-duty military, and veterans among the violent mob shows that these strains of extremism have infected those charged with protecting the public. It is also time to dissolve the artificial divisions within the research community that put disinformation and hate into separate silos; the long-term challenge is tracking communities that thrive on both.
The Role of Government
Ultimately, the structural factors that led to this moment need to be addressed, starting with technology that makes it easier to find and engage with hateful and extremist ideas than to counter them. That recognition should spur at least a spirited, good-faith debate on possible legislation to address these deficiencies. Several remedies have already been introduced, such as the bipartisan Platform Accountability and Consumer Transparency (PACT) Act, which would force companies to disclose information about their content moderation policies. There is a robust debate about possible reforms to and amendments of Section 230 of the Communications Decency Act, such as the bill introduced by Representatives Eshoo and Malinowski that would hold platforms accountable when their algorithms promote rights-violating content, a known avenue for radicalizing users. And there is renewed appetite in federal law enforcement to prioritize investigations into white supremacist, white nationalist, and far-right extremists.
It will take time, as well as broad, concerted action, to turn down the temperature online and improve our national psychology. The challenges of the decade ahead require us to generate consensus. We haven't a moment to spare.
Dr. Welton Chang is co-founder and CEO of Pyrra Technologies. Most recently he was the first Chief Technology Officer at Human Rights First and founded HRF’s Innovation Lab. Prior to joining HRF, Welton was a senior researcher at the Johns Hopkins Applied Physics Laboratory where he led teams and developed technical solutions to address disinformation and online propaganda. Before joining APL, Welton served for nearly a decade as an intelligence officer at the Defense Intelligence Agency and in the Army, including two operational tours in Iraq and a tour in South Korea. Welton received a PhD and MA from the University of Pennsylvania, an MA from Georgetown University, and a BA from Dartmouth College.