Tech Firms Promise To Address Hate and Extremism, Again

Justin Hendrix / Sep 15, 2022
U.S. Capitol in Washington D.C., January 6, 2021

This piece is co-published with Just Security.

President Joe Biden was inaugurated just two weeks after a violent insurrection at the U.S. Capitol sought to prevent the peaceful transition of power. That attack was nothing short of an inflection point, one that has made confronting hate, violent extremism and domestic terrorism a national priority.

  • In March 2021, the Office of the Director of National Intelligence issued a stark public warning about the threat of domestic violent extremists, noting that many attackers “often radicalize independently by consuming violent extremist material,” making disruption difficult.
  • In May 2021, the State Department announced the United States would join the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online, “formally joining those working together under the rubric of the Call to prevent terrorists and violent extremists from exploiting the Internet,” after the Trump administration had declined to participate.
  • In June 2021, the White House released a national strategy for countering domestic terrorism, promising to work with a range of partners, “especially the technology sector,” in order to stop the “abuse of Internet-based communications platforms to recruit others to engage in violence.”
  • This summer, President Biden gave a speech in Philadelphia in which he called out right-wing extremists, in particular, who “fan the flames of political violence that are a threat to our personal rights, to the pursuit of justice, to the rule of law, to the very soul of this country.”

On Thursday, at a summit organized by the White House called United We Stand, the Biden administration introduced a range of new efforts by the government, civil society and the business sector “to counter the destructive effects of hate-fueled violence on our democracy and public safety, mobilize diverse sectors of society and communities across the country to these dangers, and put forward a shared, inclusive, bipartisan vision for a more united America.” The event was announced in August by the Director of the White House Domestic Policy Council, Ambassador Susan Rice.

The Role of Technology in Hate-Fueled Violence

While the Summit did not include a session specifically devoted to the role of tech platforms in addressing hate and violence, the White House’s announcement of new initiatives included a raft of efforts by some tech platforms to address these issues, including:

  • An expansion of YouTube’s policies to combat violent extremism, including a commitment to remove “content glorifying violent acts for the purpose of inspiring others to commit harm, fundraise, or recruit, even if the creators of such content are not related to a designated terrorist group,” as well as a media literacy program “to assist younger users in particular in identifying different manipulation tactics used to spread misinformation” and a program to incentivize “college students to develop their own dynamic products, tools, or initiatives to prevent targeted violence and terrorism.”
  • A promise from Microsoft to expand its “application of violence detection and prevention artificial intelligence (AI) and Machine Learning (ML) tools and using gaming to build empathy in young people,” and an effort on its Minecraft gaming platform “to help students, families and educators learn ways to build a better and safer online and offline world through respect, empathy, trust and safety.”
  • New initiatives from Meta, the company that operates Facebook, Instagram and WhatsApp, including a “research partnership with the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism to analyze trends in violent extremism and tools that help communities combat it” and a series of “trainings, workshops, and skill-building to equip community-based partners working locally to counter hate-fueled violence with tools to help amplify their work.”
  • A new tool from Twitch, a live streaming platform implicated in recent hate attacks such as the mass shooting in a Black community in Buffalo, that the company says will empower “its streamers and their communities to help counter hate and harassment and further individualize the safety experience of their channels,” as well as “new community education initiatives on topics including identifying harmful misinformation and deterring hateful violence.”

The White House calls these announcements “a step towards recognizing the important role companies play in designing their products and platforms to curb the spread of hate-fueled violence both online and off.”

The joint announcement suggests the White House has achieved some nontrivial success in its efforts to engage Silicon Valley on the problem of domestic extremism and hate. And while the summit highlighted collaboration with the tech sector, the administration has also promised more fundamental reforms that would address the industry’s incentives and business models. Thursday’s announcements follow the White House’s recent release of six principles “for enhancing competition and tech platform accountability,” focused on antitrust reform; privacy protections; protections for children; the removal of “special legal protections under Section 230 of the Communications Decency Act that broadly shield the companies from liability even when they host or disseminate illegal, violent conduct or materials;” increased transparency; and protections to stop “discriminatory algorithmic decision-making.”

The administration’s approach is, in short, multi-faceted. Some of its priorities, however, would require legislative action. Congress has so far failed to produce any significant tech reforms, a point underscored by Sen. Amy Klobuchar (D-MN) during testimony by former Twitter security chief turned whistleblower Peiter Zatko on Tuesday. “Despite this probably being our 50th hearing ... between Commerce and Judiciary,” said Sen. Klobuchar, “we have not passed one bill out of the U.S. Senate when it comes to competition, when it comes to privacy, when it comes to better funding the agencies, when it comes to the protection of kids.”

Before the Good, the Bad and the Ugly

While Thursday’s announcement struck a deservedly positive tone, it came just hours after a contentious Wednesday hearing in the Senate Committee on Homeland Security & Governmental Affairs titled “Social Media’s Impact on Homeland Security,” at which senior executives from Meta, Twitter, YouTube and TikTok testified.

“We have seen firsthand how quickly dangerous and extremist content can proliferate online, especially to vulnerable communities or users already on the fringe, and alter how people view the world, conspiracies like QAnon and Stop the Steal, hateful ideologies, like white supremacy and antisemitism,” said Chairman Sen. Gary Peters (D-MI), in his remarks at the hearing. “The Christchurch shooter who killed 51 people and inspired the Poway and El Paso shooters was radicalized on YouTube, and live streamed his attacks on Facebook to rally others to his cause. Three years later, a shooter in Buffalo, New York streamed his attack on Twitch, which acted quickly to take it down, but the video was soon circulating widely on Facebook.”

The tech executives defended their record, pointing to strict policies against the presence of violent and extremist content on their platforms. “I want to make clear that there's no place on YouTube for violent extremist content,” said Neal Mohan, YouTube’s chief product officer. His counterpart at Meta, Chris Cox, pointed to a number of experts the company employs as well as collaborations with law enforcement to identify violent extremists. Twitter’s head of consumer products, Jay Sullivan, promised his company would “delay or stop a product rollout if we have health or safety concerns.”

Each of the platforms affirmed that they are generally keen to work with external researchers to assess harms, but when it comes to specific disclosures beyond their self-produced transparency reports, the answers were less direct. One particularly sharp query came from Sen. Alex Padilla (D-CA), who asked Meta’s Cox if he would reveal how many users were recommended content categorized as hate speech, a figure that could demonstrate the platform’s role in spreading the material. Cox seemed to promise he would reveal the figure in a follow-up to the hearing. Sen. Padilla also asked TikTok chief operating officer Vanessa Pappas whether she could be more specific about how long it takes the company to address violative content, beyond the 88% the company says it removes in less than 24 hours. She promised to get back to him.

Concerningly, when queried by Sen. Peters, Meta’s Cox was seemingly unaware of research from the Tech Transparency Project, an advocacy organization, that found Facebook has automatically generated pages for white supremacist figures and groups, and that Facebook searches for some groups with names including the phrase “Ku Klux Klan” generated ads for Black churches. Sen. Peters requested that Cox provide “written comments” on his query after the hearing.

Here is @SenGaryPeters asking Meta CPO Chris Cox to explain why FB is auto-generating pages for white supremacists, as exposed by last month's TTP report.

Cox couldn't answer, and Sen. Peters asked that he follow up with "written comments."

We look forward to those comments. pic.twitter.com/1ikTNwwbKx

— Tech Transparency Project (@TTP_updates) September 14, 2022

The Beat Goes On

Whether the administration’s carrot-and-stick efforts to push tech firms to do more to counter hate and violent extremism will ever reduce the online manifestations of the problem to an acceptable degree remains to be seen. These phenomena are not, of course, primarily caused by technology, so it is appropriate that the technology-related initiatives follow other, offline efforts to disrupt political and racially motivated hate and extremism that may ultimately prove more important. The United We Stand Summit included faith leaders, conversations with local leaders and survivors of hate-fueled violence, and a discussion of broader federal initiatives with officials including Vice President Kamala Harris, Department of Homeland Security Secretary Alejandro Mayorkas and Attorney General Merrick Garland. President Biden is expected to deliver a keynote address this afternoon.

Nevertheless, the focus on the role of tech firms is necessary. Observers can expect more announcements from tech firms related to addressing violent extremist content online at the Christchurch Call leaders’ summit, co-hosted by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron on September 20, alongside the United Nations General Assembly meeting in New York. Indeed, such announcements now come at a regular cadence, only slightly more predictable than the violent events that precipitate them.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...