No Easy Answers in Supreme Court Review of Section 230
Ben Lennett / Feb 19, 2023
Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.
Last fall, the Supreme Court agreed to review a case that challenges the scope of Section 230, the law that shields social media and other internet platforms from liability arising from the user-generated content they publish. Gonzalez v. Google, LLC involves the family of an American woman killed at a Paris bistro in the ISIS attack on the city in 2015. The family sued Google, YouTube’s parent company, arguing that it aided ISIS’s recruitment by enabling users to access ISIS propaganda videos, and that it should be liable for damages because YouTube’s algorithms recommended those videos to users.
A district court dismissed Gonzalez's claims regarding Google's recommendation of ISIS videos as barred by Section 230. The Ninth Circuit Court of Appeals reviewed the case and upheld the district court's decision on Section 230’s protection for algorithmic recommendations. Gonzalez appealed to the Supreme Court.
Though reporting on the case has generally focused on the potential impact of a decision on big tech’s algorithms, the petition asks a broader question of the Court: whether Section 230 immunizes platforms like YouTube when they make “targeted recommendations.” In short, is Google liable for the content it recommends to users, or does Section 230 protect it from liability? Answering that question requires the Court to weigh differing interpretations of Section 230’s ambiguous text and to try to discern Congress’s intent when passing the law.
If you read through the amici briefs filed in the Gonzalez case, you will likely find a common story of Section 230’s origins. The law was preceded by two cases in the early 1990s involving CompuServe and Prodigy, internet companies that facilitated access to the early internet and hosted public discussion forums. In the first case, Cubby v. CompuServe, the court dismissed a defamation lawsuit, finding that the company was not liable because it was not involved in the creation or editing of the information in question and could not have known that it was defamatory. However, in Stratton Oakmont v. Prodigy, another court found Prodigy could be liable for publishing allegedly defamatory content because it moderated and screened certain content in its forums in an effort to make the service more family-friendly.
Members of Congress, including Senator Ron Wyden (D-OR, then a representative) and former Representative Christopher Cox (R-CA), were concerned about the impact of the Stratton Oakmont decision on the internet and, in particular, on intermediaries like Prodigy and AOL, as well as emerging search engines, which could be held liable for defamation and other harms just like newspapers and other traditional publishers. In this context, Section 230 was drafted and passed in 1996 as part of the Communications Decency Act (CDA).
The CDA was primarily concerned with children’s access to obscene and indecent material on the early internet. The main provisions of the CDA, which criminalized the transmission of "obscene or indecent" material to persons known to be under 18, were later struck down by the Supreme Court on the grounds that they abridged freedom of speech. Section 230 was left standing, fundamentally shaping the internet we know today.
Supporters and critics of the law are all likely to agree that it did two things. First, it instructed courts not to treat companies like Prodigy, which hosted third-party content, as the publisher or speaker of that content. Thus, these platforms should not be held to the same legal standard as a newspaper or other similar publisher involved in writing and editing content. And second, it protected those same entities from liability when they acted in good faith to restrict access to or availability of objectionable material, thus overturning the Stratton Oakmont decision that essentially penalized Prodigy for its efforts to moderate content in its forums.
But beyond those elemental protections, the consensus breaks down. And though many observers characterize Gonzalez as an odd case to prompt the Supreme Court to review Section 230, in many respects, it highlights the difficulty of determining the scope of the law’s protections and the tradeoffs of enabling broad immunity for some of the most consequential platforms for media and speech.
If we examine the case in the context of Section 230, Gonzalez's claims do not appear to treat Google as the speaker or publisher of ISIS propaganda. Instead, they rely upon the Justice Against Sponsors of Terrorism Act (JASTA), which enables victims to sue individuals and other entities that knowingly aid and abet an act of international terrorism. Did YouTube’s content recommendation algorithms aid and abet ISIS’s acts of terrorism by helping to amplify their propaganda videos? Possibly. But does Section 230 bar those kinds of claims, even if the claims have merit under JASTA? That’s the main question before the Court.
In their brief, the lawyers for Gonzalez argue that Google’s recommendations and its decisions regarding the presentation of the ISIS videos on YouTube should be exempt from Section 230’s protections because they reflect information created by Google, not ISIS or other users on the platform. The U.S. Government’s brief in the case echoed that argument, though it framed the issue differently: the government argued that Google’s recommendations reflect the company’s own conduct and actions, separate from Section 230’s protection for publishing ISIS propaganda videos on the platform.
Google, or any other company, should not be immune from liability when its algorithms or conduct cause harm. And indeed, recent decisions establish that Section 230 does not outright immunize platforms from claims where the company “materially contributes” to the harm in question. For example, if a company designs an algorithm to target ads in a discriminatory manner, it can be held liable. The Department of Justice settled a case with Meta (Facebook) to resolve allegations that its targeted advertising algorithms enabled housing ads to discriminate based on race, religion, and other characteristics, violating the Fair Housing Act. Moreover, platforms can be liable for harm related to a product's design. A Georgia court recently allowed a product liability lawsuit against Snapchat to proceed, in which the plaintiffs alleged that the company’s “speed filter” feature was negligently designed and encouraged reckless driving.
Yet the Gonzalez case is not quite as straightforward. Google’s potential liability is not just the result of its action in designing an algorithm, but the fact that its algorithm promoted ISIS propaganda videos to users. As several amicus briefs argue, the harms alleged in the Gonzalez case stem from the content of the ISIS videos themselves, not Google’s algorithms. Would Google be in this position if not for the fact that its algorithms promoted ISIS propaganda and not something else? Moreover, others challenge the view that Google’s recommendations are not protected by Section 230, given that the law’s definition of the services covered includes those that “pick, choose, analyze, or digest content.” These are elements of what Google and other platforms’ recommendation algorithms do.
Other briefs seeking to narrow Section 230 took a different approach, urging the Supreme Court to revisit a precedent established in an early case decided shortly after Congress passed the law. In Zeran v. America Online, a defamation suit against AOL, a federal appeals court upheld the dismissal of the case because Section 230 barred the claims. In doing so, it took a broad view of Section 230’s protection and, in particular, immunized AOL even from secondary claims of liability.
Critics of Zeran’s interpretation argue that Section 230 immunized platforms from strict publisher liability for defamation while preserving distributor liability. They argue that prior to the internet era, courts distinguished between publishers and distributors in defamation cases. Publishers, such as newspapers, were strictly liable for the content they produced and subject to damages if they published a defamatory story. In contrast, distributors, including bookstores and libraries, faced only secondary liability: they could be held liable only if they knew or had reason to know that a book or other content was defamatory. Zeran collapsed that distinction, finding that Section 230 broadly protected platforms because there was no way to attach any liability to AOL for defamatory content “without conceding that AOL too must be treated as a publisher of the [defamatory] statements.”
One of the counterarguments to the Zeran critique is that, under the definitions within common law, distributor liability is a distinction without a difference. Notably, both sides of the debate point to the same sources of information on the common law, but with competing interpretations. In addition, current members of Congress disagree about Congress’s intent on the subject. A brief from Senator Josh Hawley (R-MO) argues that “[h]ad Congress actually sought to prevent technology companies from being held liable as distributors—for disseminating content they knew or should have known to be illegal—it could have easily done so.” In response, a brief filed on behalf of Senator Wyden and former Representative Cox states that “had Congress intended to limit immunity to defamation claims, it could have said so explicitly. It did not.”
It is a difficult discussion to follow, but it could have dramatic consequences for platforms if the Court reverses the Zeran interpretation. Briefs in support of distributor liability argue that it would not be so unprecedented, as platforms are already obligated under the Digital Millennium Copyright Act to remove content that violates copyright law when provided notice. Other briefs argue that it would lead platforms to preemptively restrict or over-police speech and content to avoid liability.
The Gonzalez case is not about whether Google should be liable for recommending or amplifying terrorist propaganda on its platform, but whether it can be under the law. But beyond that specific legal debate, the case underscores the difficulty of narrowing Section 230 without significantly curtailing digital spaces for speech and global freedom of expression. The Supreme Court could choose to avoid this dilemma by dismissing the case as improvidently granted. Or it could, via another case that it is also reviewing, Twitter, Inc. v. Taamneh, find that Google’s content recommendations and similar actions by other platforms do not meet the standards of civil liability under U.S. terrorism laws, and ignore Section 230 altogether.
We may get more of a sense of the Court’s potential direction on the case when oral arguments occur this week.