Yesterday’s Legislation is Failing Us in the Fight Against Tech-Fueled Violence
Yaël Eisenstat, Katie A. Paul / Jan 31, 2024

This week’s Senate hearing with the CEOs of Meta, X, TikTok, Discord and Snap made one thing clear: it is time to update the laws barring victims of online harms from any recourse, even when the companies themselves play a role in facilitating that harm, say Yaël Eisenstat and Katie A. Paul.
Imagine your teenager logs onto Instagram. They search “World Economic Forum” for a school project and follow Instagram’s account recommendations to click a profile that describes itself as “NatSoc,” short for National Socialism. Instead of getting the information they need, they are inundated with conspiracy theories and hate. The account posts hateful propaganda, including an interview with American Nazi Party founder George Lincoln Rockwell, who posits that “the main thing we are fighting for is the preservation of the white race.”
Clicks away on YouTube, chants of “Sieg Heil” blast through the speakers. These are the opening lyrics to the song “Zyklon Army,” referring to the poison gas used in Nazi death camps, by white power band Evil Skins. The video, which has been viewed more than 170,000 times, wasn’t created by the band or posted by another user. It was created by YouTube itself.
These are just some of the troubling examples exposed in two recent studies we led at ADL (the Anti-Defamation League) and the Tech Transparency Project (TTP). The most disturbing finding: Instagram’s algorithms served the most virulent content to teenage personas, the model teen accounts created for the research, while adult accounts that followed the same searches and methodology received none of these recommendations.
Although tech companies claim to ban hate and police violent organizations, we found that some of the biggest platforms and search engines at times directly contribute to the proliferation of online hate and extremism through their own tools and, in some cases, by creating content themselves using automated systems.
The proliferation of hatred and extremism on these platforms is not new. There are years’ worth of evidence that big tech platforms amplify the violent extremist ideologies of perpetrators behind deadly hate-driven attacks in the U.S. and around the world. Just some recent examples:
- Meta-owned Facebook was used to live stream the murder of 51 worshippers by a white supremacist in Christchurch, New Zealand, in 2019, and Instagram was used by the Gilroy Garlic Festival mass shooter to push white supremacist ideology that same year;
- Google-owned YouTube was where the neo-Nazi mass shooter from a mall in Allen, Texas, posted his reveal following the massacre in 2023;
- Amazon-owned Twitch is where the white supremacist teen shooter who killed 10 people in Buffalo, New York, live streamed his attack; and
- X (formerly known as Twitter) is where a teen neo-Nazi posted about race war before killing two people in Virginia in 2018.
And yet, platforms have little cause to worry about accountability for any of this. For decades, they have enjoyed near-blanket immunity under Section 230 of the Communications Decency Act for violent and extremist content that not only litters their platforms, but that their own tools at times amplify and even create.
Known as “the 26 words that created the internet,” 47 U.S.C. § 230 gives companies a liability shield for most user-generated content. When it was enacted in 1996, Section 230 was, in effect, a huge state subsidy to help a then-nascent industry get off the ground.
Nearly three decades later, however, Section 230 is hopelessly outdated. It has not kept pace with the ways that online information is created, ranked, recommended and amplified. Today, these technology platforms use Section 230 to shrug off lawsuits for facilitating, amplifying, and even profiting from sometimes-unlawful extremism. The argument that users are responsible for the hateful content that rises to the top of news feeds has long served to absolve these multi-billion-dollar companies from responsibility for any role their own products play in exacerbating hate.
Recently, a case accusing Meta of helping radicalize Dylann Roof, the shooter in the 2015 Charleston massacre, was summarily dismissed, preventing the victims from having their day in court and the public from learning whether Facebook’s own tools played a role. In fact, TTP research from 2020 showed that Facebook had auto-generated pages for a hate group cited as an influence in Roof’s manifesto, complete with a direct link to the group’s website. The judge implied that his hands were tied by Section 230 and that it would be up to Congress to change the law. The Supreme Court made a similar argument in 2023 in Gonzalez v. Google, with Justice Elena Kagan asking, with regard to the lines drawn by Section 230: “isn’t that something for Congress to do, not the Court?”
In May 2020, two individuals murdered a federal security guard in Oakland, California, after meeting in a Facebook group associated with the Boogaloo Bois, an anti-government extremist movement, which the platform had recommended to both of them. Facebook also auto-generated pages related to the Boogaloo movement they joined. Whether Facebook is solely responsible for that murder is not the question. Whether Facebook helped facilitate the murder, however, is a question that can only be answered if the victims can have their day in court.
Without the threat of lawsuits or any other form of accountability, platforms have little incentive to prioritize user safety over business decisions. Take Facebook, which was exposed in 2022 for auto-generating pages for known terrorists and extremists involved in real-world violence, like the Boogaloo Bois. Facebook has been aware of the issue since at least 2019, but it continues to auto-generate content for hate groups and other extremist groups even today.
It’s time to shift the debate from whether a platform should be responsible for third-party content to a more important conversation about how to update Section 230 to apply to the realities of today’s internet. The law was supposed to protect internet providers from being sued over content created by their users. Against this overly permissive regulatory backdrop, tech companies have steadily evolved from passively hosting hateful content, to amplifying or recommending it, to actually creating it. With the increasing ease and availability of generative AI tools added into the mix, we must ask: how far will this phenomenon go?
To be clear, we do not believe Section 230 should be repealed. Platforms should remain protected when they engage in content moderation, and they should not necessarily be held accountable for user speech. They should not, however, be granted automatic immunity for their own creations or for the amplification of content that may result in legally actionable harm.
Technology companies will say it is too difficult to alter the way their systems recommend or amplify content. Our findings show this is not true. In one of the studies, YouTube was responsive to the test persona yet resisted recommending extremist content, proving that the issue is not merely one of scale or capability. It is, in fact, possible to retrain their algorithms or tune their systems to ensure they are not leading users down paths riddled with hate.
Twenty-six words regulating the opaque ecosystem that influences so much of our daily lives today is simply not enough. While it is encouraging to see lawmakers consider how to build guardrails for emerging technologies, including some proposals to exclude AI-generated content from Section 230 protections, that does not replace the need to fix the problems that the ever-expanding, broad interpretations of Section 230 immunity have created. It is time for Congress to update Section 230, and clearly define what is and is not protected by the law, before tech-fueled hatred leads to more tragedies.