Ben Lennett, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy, is an editor at Tech Policy Press.
Last week, the Supreme Court released decisions in Gonzalez v. Google, LLC, and Twitter, Inc. v. Taamneh. Though much was made of the Gonzalez case as the high court’s first reckoning with Section 230, its outcome was considerably more tied to the justices’ interpretation of U.S. anti-terrorism laws in the Taamneh case. As I wrote for Tech Policy Press in February, the claims in both cases rely upon the Justice Against Sponsors of Terrorism Act (JASTA), which enables victims to sue individuals and other entities that knowingly aid and abet an act of international terrorism.
Ultimately, that’s what shaped the Court’s decision. By finding that the plaintiffs in the Twitter and Google cases had failed to adequately state a claim under JASTA, the Court could sidestep any major questions about Section 230. This was unsurprising to many observers, as neither complaint tried to establish that Twitter or Google directly supported the terrorist acts that precipitated the cases. Instead, the plaintiffs argued that both companies supported ISIS’s terrorist enterprise: Google, by hosting and recommending ISIS content on YouTube and sharing ad revenue through its AdSense program, and Twitter, by hosting ISIS accounts and facilitating general communications.
According to the Court’s summary in the Taamneh case, this was not sufficient:
In this case, the failure to allege that the platforms here do more than transmit information by billions of people—most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas—is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted ISIS’ acts. A contrary conclusion would effectively hold any sort of communications provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That would run roughshod over the typical limits on tort liability and unmoor aiding and abetting from culpability.
Thus, the justices declined to address the application of Section 230 in the Gonzalez case and remanded it to the appellate court for review in light of the decision in Taamneh.
In doing so, the Court left the status quo intact, with Section 230 continuing to provide platforms with broad protection from liability, though that protection may be narrowing ever so slightly. For example, the Court did not dispute the appellate court’s finding that claims related to YouTube’s AdSense revenue sharing fell outside Section 230’s protection. However, it agreed that the complaint in Gonzalez “does not say, nor does it give any other reason to view Google’s revenue sharing as substantial assistance” to a specific act of terrorism or to ISIS more generally.
For policymakers and others who are disappointed that the courts seemingly gave big tech and other internet platforms a pass for enabling terrorist content and organizing, the decisions further clarify that any obligations for platforms to take more significant action concerning user content and speech will likely have to come from the U.S. Congress. Section 230, as currently interpreted, seems to immunize platforms from harms related to hosting or promoting content, except in the narrowest of circumstances, such as when a platform’s algorithm or content moderation gave “special treatment” to unlawful content or materially contributed to its development.
Furthermore, as the Court notes in its decision in Taamneh, “Plaintiffs’ complaint rests heavily on defendants’ failure to act, yet plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends.” The same is true for most other unlawful content on social media, raising the question of what, if any, incentives or obligations these platforms have to remove it or take further action.
Relying on the courts to impose some sort of distributor liability (as several amici argued in the Gonzalez case) that would compel platforms to take action when they know of unlawful content appears to be an improbable strategy and, in the end, might do more harm than good by creating a massive level of uncertainty for both tech companies and users. In contrast, the European Union has at least codified its takedown requirements and procedures for terrorist content online, including safeguards for freedom of expression.
A reckoning for social media platforms may yet come via legal challenges to Florida and Texas laws that seek to limit social media companies’ ability to moderate certain content on their platforms. The Eleventh Circuit Court of Appeals found that the Florida law was “substantially likely” to violate the First Amendment rights of social media platforms. In contrast, a separate appeals court upheld the Texas law.
Both decisions were appealed to the Supreme Court, though even if the Court decides to grant review, it will not happen until the next term. If the Court does take the cases, it will find it harder to avoid interpreting Section 230. Both laws call into question the scope of immunity under Section 230(c)(2), which has thus far enabled platforms to develop and enforce their terms of service and content moderation policies without incurring liability.
Indeed, the conservative justices may have held back their arguments for Section 230 in Gonzalez for these cases in particular, which are considerably more relevant. The appeals court decision that upheld the Texas law generously cited Justice Thomas’s concurrence in an earlier case involving Twitter, where the justice questioned the influence of digital platforms over speech and their ability to remove content. Conservative lawmakers also filed an amicus brief in the Gonzalez case, urging the Supreme Court to use its decision to reexamine the scope of Section 230(c)(2) protections for censoring conservative points of view.
The Gonzalez case may have failed to challenge Section 230, but the Florida and Texas cases may yet severely weaken it.
Ben Lennett has worked in various research and advocacy roles for the past decade, including as policy director for the Open Technology Institute at the New America Foundation and as a policy expert providing analysis to foundations, governments, and other institutions.