Mapping the Key Arguments in Supreme Court Amicus Briefs in Gonzalez v. Google

Aaron Fisher, Justin Cole / Feb 17, 2023

Justin Cole is a Just Security Managing Student Editor and JD candidate at Yale Law School. Aaron Fisher is a Just Security Student Staff Editor and a JD candidate at New York University School of Law.

Co-published with Just Security.

In late February, the Supreme Court will hold oral arguments to consider the Communications Decency Act’s Section 230, which shields tech companies from liability for content posted by users on their internet platforms. Gonzalez v. Google LLC, the more prominent of two cases before the Court, examines whether Section 230 immunizes Google for YouTube’s targeted recommendations of information provided by users who are third-party content providers, or instead provides immunity only when platforms engage in traditional editorial functions.

In the decision below, the Ninth Circuit sided with Google, finding that YouTube’s algorithms for recommending video content to users are shielded by Section 230. If the Supreme Court disagrees, social media platforms like YouTube and Twitter may have to make major changes to their business models, perhaps substantially altering the online environment. Given the potentially far-reaching consequences of Gonzalez v. Google and the related case, Twitter, Inc. v. Taamneh, numerous individuals and groups have submitted amicus curiae briefs to the Supreme Court in advance of oral arguments to be held on Feb. 21 and Feb. 22.

While amici make many arguments on different sides of the central issue in Gonzalez, we have distilled the most common arguments to provide a primer on what the Supreme Court might consider during oral argument and subsequent internal deliberations. The volume of briefs submitted to the Court means a comprehensive assessment of each one would be unwieldy; our goal is to provide a review that is substantial enough to prepare the reader to understand and evaluate competing perspectives.

Amicus Briefs Filed in Support of Gonzalez: Main Arguments

Argument 1: “Section 230 . . . protects an online platform from claims premised on its dissemination of third-party speech, but the statute does not immunize a platform’s other conduct, even if that conduct involves the solicitation or presentation of third-party content.”

Brief that makes this argument: The United States, filed in support of vacatur.

  • Section 230 “is most naturally read to prohibit courts from holding a website liable for failing to block or remove third-party content, but not to immunize other aspects of the site’s own conduct,” including its design choices.
  • The brief cites the Ninth Circuit’s decision in Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008), in which the court held that Section 230 did not shield a defendant that asked subscribers to disclose their sex and sexual orientation, allegedly in violation of housing discrimination laws.
  • The brief also cites Justice Clarence Thomas in Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13 (2020) as observing “two respects in which lower courts have extended Section 230 ... beyond its proper bounds.”
  • The third element of Section 230 says that “the disseminated material must have been ‘provided by another information content provider,’” defined as “anyone who ‘is responsible, in whole or in part, for the creation or development of information.’”
    • “Contextual considerations indicate that Congress did not intend ‘development’ to carry its broadest ‘definitional possibilities.’”
    • “More fundamentally, deeming a website as an ‘information content provider’ whenever it enhances user access to third-party content would produce a ‘self-defeating’ result. Interactive websites invariably provide tools that enable users to create, and other users to find and engage with, information.”

Another brief that makes this argument: Former National Security Officials in support of neither party, filed by Mary McCord (including 21 signatories).

  • Petitioners do not seek to hold Internet platforms responsible for someone else’s content; rather, they seek to hold them accountable for their own content (the algorithms that the platforms develop and use to decide which content reaches which online users). Amici refer to this as “respondent’s own affirmative amplification of that content for targeted users whom the platform’s algorithms identify as likely to view it.”

Argument 2: Section 230 “bars plaintiffs’ ... claims to the extent those claims are premised on YouTube’s alleged failure to block or remove [Islamic State] videos from its site, but the statute does not bar claims based on YouTube’s alleged targeted recommendations of [Islamic State] content.”

Brief that makes this argument: The United States, filed in support of vacatur.

  • Companies that run online platforms like YouTube have developed algorithms that purposefully steer users toward more extreme content. For example, “Facebook’s internal data shows that ‘64% of all extremist group joins are due to our recommendation tools.’”
  • “[T]he effect of YouTube’s algorithms is still to communicate a message from YouTube that is distinct from the messages conveyed by the videos themselves. When YouTube presents a user with a video that she did not ask to see, it implicitly tells the user that she ‘will be interested in’ that content ‘based on the video and account information and characteristics.’ The appearance of a video in a user’s queue thus communicates the implicit message that YouTube ‘thinks you, the [user]—you, specifically—will like this content.’ And because YouTube created the algorithms that determine which videos will be recommended to which users, the recommendations are bound up with YouTube’s own platform-design choices.”
  • “A claim premised on YouTube’s use of its recommendation algorithms thus falls outside of Section 230(c)(1) because it seeks to hold YouTube liable for its own conduct and its own communications, above and beyond its failure to block [Islamic State] videos or remove them from the site.”
  • According to the professional experiences of the amici, terrorist video content has an outsized effect on radicalizing people and recruiting them to join violent groups and ultimately commit acts of terrorism.

Another brief that makes this argument: Free Press Action.

  • “Here, petitioners seek to analogize Google’s algorithmic recommendation of others’ content to cases involving a defendant’s own speech or conduct. But the analogy is ultimately unpersuasive.”

Argument 3: Courts have interpreted Section 230(c)(1) in an overly broad manner, relying on it to immunize a swath of activity by online platforms that is not covered by the language of the statute.

Brief that makes this argument: Former National Security Officials in support of neither party.

  • The purpose of Section 230 was to recognize the implausibility of certain requirements for monitoring content that is posted online, not to protect the “knowing deployment of algorithmic amplification of terrorist content.”
  • The Ninth Circuit was wrong to use Section 230(c)(1) of the Communications Decency Act to deny petitioners their day in court. If plaintiffs are given their day in court, they would still have to prove their case on the merits.
  • The only actions by online platforms that are identified in Section 230 are posting and taking down content—nothing about algorithms or filtering content that has already been posted by third parties.
  • In another subsection of Section 230, Section 230(b)(3), Congress expressed its desire to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services.” Amici argue that algorithms currently deployed by Internet platforms such as YouTube in fact do the opposite of what Congress wanted and take away users’ ability to control what information they receive online.

Another brief that makes this argument: Cyber Civil Rights Initiative.

  • “[M]ost state and lower federal courts have interpreted Section 230 as granting [interactive computer service providers] essentially unconditional immunity in cases involving third-party content—even where they have not tried to restrict access to objectionable content, remain indifferent to such content, or even actively promote, solicit, or contribute to or profit from such content.”

Argument 4: There is a difference between publisher liability and distributor liability; as written, the text of Section 230 concerns publisher liability, not distributor liability. Courts have long ignored this distinction, and it is time for the Supreme Court to rectify that error.

Brief that makes this argument: Senator Josh Hawley (R-MO).

  • Google had actual knowledge of the existence of ISIS accounts and content on its platform; it continued to distribute the content. Therefore, it is liable under the principles of distributor liability.
  • The word “publisher” does not encompass every party involved in the dissemination of an unlawful statement.
  • For distributor liability to attach, knowledge by the distributor of specific unlawful videos on the platform is not required.
  • “In the simplest terms: despite having actual knowledge of violent pro-ISIS material being hosted on its platform, Google continued to promote that content by operating recommendation algorithms tailored to disseminate content in a manner that would drive maximum engagement. Section 230 has never immunized platforms from liability for such conduct.”

Other briefs that make this argument: Seattle School District No. 1; Senator Ted Cruz (R-TX), Congressman Mike Johnson (R-LA), et al.

  • From Seattle School District No. 1: “Defamation law has long distinguished between publishers and distributors of third-party content. It is a ‘black letter rule that one who republishes a libel is subject to liability just as if he had published it originally.’ ... In contrast, ‘one who only delivers or transmits a defamatory matter published by a third person is subject to liability if, but only if, he knows or had reason to know of its defamatory character.’”

Amicus Briefs Filed in Support of Google: Main Arguments

Argument 1: Finding that algorithms are not protected by Section 230 would create a “chilling effect” that would lead Internet platforms to reduce or eliminate legitimate third-party content.

Brief that makes this argument: NYU Stern Center for Business and Human Rights.

  • Section 230 “promotes and protects free speech,” and without its liability shield, “Internet platforms would reduce or eliminate third-party content, rather than take on the impossible and risky task of trying to filter all potentially actionable content.”
  • “Section 230 is not a perfect law, but it is the law that created the internet—and its vast array of free speech—as we know it.”
  • The “obvious chilling effect” would be inevitable because service providers could not screen the millions of postings made.
  • An exception for “targeted recommendations” would lead to the loss or obstruction of “a massive amount of valuable speech” by virtue of platforms’ actions in response to their increased exposure to liability, and much of the content “that will disappear or be obscured will disproportionately come from marginalized or minority speakers.” Little would be left of Section 230 if it did not extend to claims based on recommendations, which would force one of three approaches: (i) removing all third-party content; (ii) removing large quantities of content seen as the most problematic; or (iii) burying third-party content amidst a “largely incomprehensible mass of all such content available on the platform.”

Other briefs that make this argument: Center for Democracy & Technology, American Civil Liberties Union, ACLU-Northern California, Electronic Frontier Foundation, Knight Institute at Columbia University, R Street Institute, and Reporters Committee for Freedom of the Press, filed in support of petitioner Twitter, Inc. in the companion case Twitter v. Taamneh.

  • Affirming the Ninth Circuit’s ruling in favor of Taamneh could chill legitimate speech on the Internet by forcing online platforms to over-police content, leading them to inadvertently take down lawful user speech. This “chilling effect” would have grave consequences for free speech.
  • Online platforms must rely on automated content management because of the massive number of posts published every day. Given that not every post can be reviewed by a person, forcing platforms to over-restrict their content would leave considerable room for error and for the removal of lawful speech.
  • The core principles that the Supreme Court has established with regard to publishers and distributors of speech cover online platforms in the same way.

Argument 2: Section 230’s protections exceed those of the First Amendment. Section 230 provides robust procedural and substantive protections for targeted recommendations that the First Amendment may not, allowing author-users of online platforms to publish a wide range of content.

Brief that makes this argument: Law Professor Eric Goldman.

  • For example, Section 230, unlike the First Amendment, treats commercial and non-commercial speech equivalently.
  • The lower court appropriately applied Section 230 and was correct in dismissing this complaint.
  • If Congress wants to change Section 230’s protections, it can—but it should not be up to the Supreme Court to do so.
  • Finding for the petitioner would create massive amounts of costly litigation over Section 230.
  • Finding for the petitioner would lead to fewer voices being heard online.

Argument 3: There is no legal distinction between “traditional editorial functions” and “targeted recommendations.”

Briefs that make this argument: Professor Eric Goldman and NYU Stern Center for Business and Human Rights.

  • The NYU Stern Center brief argues that there is no way to carve out “targeted recommendations” from Section 230 because “[t]hey are the essence of what today’s internet platforms do.” “Petitioners fail to identify any way to meaningfully distinguish ‘recommendations’ from other approaches to third-party content,” including Google search results, URLs, and notifications. The purported distinction between “active” or “passive” treatment of user content also has “no application to today’s internet, in which all or almost all platforms use some kind of ‘recommending’ by algorithm.”
  • According to the NYU Stern brief, there is no distinction between “(1) search results responding to a user’s query on a search engine and (2) content a platform ‘pushes’ to a user via a ranked feed or recommendation.”

Argument 4: Section 230 clearly protects online platforms when they make automated recommendations (based on algorithms).

Brief that makes this argument: Internet Law Scholars (19 scholars including Eugene Volokh as principal author).

  • Amici argue that the text of Section 230 supports the broader interpretation of the statute (extending to the organization and curation of content by algorithm), and that the definitions in Section 230(f) confirm this broader reading of Section 230(c)(1).
    • “Section 230(c)(1)’s use of the phrase ‘treated as the publisher or speaker’ further confirms that Congress immunized distributors of third-party information from liability.”
  • Some Internet platforms were already in the business of picking and choosing content to display to their users at the time of Section 230’s passage.
  • Amici argue that Sens. Josh Hawley and Ted Cruz, in their amicus briefs, erroneously refer to distributor liability as separate from publisher liability under common law. Instead, distributor liability has been considered a subset of publisher liability at common law.
  • Making recommendations is a quintessential role of any publisher. For example, newspapers decide where in the newspaper to place each article.
  • Reinterpreting Section 230 would lead to over-moderation of speech out of fear of liability, making the Internet a less open space.
  • It should be up to Congress to make any changes to Section 230 that it deems necessary.

Other briefs that make this argument: NYU Stern Center for Business and Human Rights; ZipRecruiter, Inc. and Indeed, Inc.

Secondary Arguments in Support of Gonzalez

Argument 1: The Court should interpret Section 230 narrowly in light of the federalism canon, wherein “the Court typically allows only the clearest textual commands to alter the balance of state and federal power.”

Brief that makes this argument: State of Tennessee, et al.

  • The broad interpretations of Section 230 have preempted state-law claims for “breach of contract; gross and ordinary negligence; negligence per se; unlawful and unfair competition; products liability; negligent design, and failure to warn; the intentional infliction of emotional distress; public nuisance; civil conspiracy; infringement of state-based intellectual property; state cyberstalking and securities violations; tortious interference with a business expectancy; and aiding or abetting other tortious conduct.”

Argument 2: The Supreme Court should reverse the Ninth Circuit’s decision to bar petitioner’s claims under Section 230 because “the 2016 Justice Against Sponsors of Terrorism Act (JASTA) impliedly repealed Section 230 to the extent that it shields respondent from liability for petitioners’ claims based on the algorithmic amplification of terrorist content.”

Brief that makes this argument: Former National Security Officials in support of neither party.

  • There is no way to reconcile JASTA’s imposition of broad civil liability for aiding and abetting acts of international terrorism with lower court interpretations of Section 230 that preclude such liability. Because JASTA was passed more than 20 years after Section 230, amici argue that JASTA “should control the assessment of liability for conduct falling within its scope.”
  • According to amici, Congress has identified the three-prong test from Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983), as the appropriate standard to apply when assessing aiding and abetting liability under 18 U.S.C. § 2333(d)(2). However, that analysis is not possible if Section 230 bars a claim at the outset of the case.

Argument 3: Narrowing the protections of Section 230 “will cause social media to exercise more care.” “If plaintiffs are more likely to get relief for algorithm-recommendation harm, social media will be more careful to prevent their platforms from enabling terrorist and radical groups by such recommendations.”

Briefs that make this argument: National Police Association; American Association for Justice.

  • The brief by Major General Tamir Hayman (ret.) briefly makes this point about incentives as well.
  • Relatedly, the Giffords Law Center to Prevent Gun Violence brief, though it explicitly “takes no position on whether Section 230(c)(1) ... should immunize respondent Google ... in this case,” argues that while “[s]ocial media companies have taken steps to address the growing threat of online hate speech and real-world gun violence,” “[m]ore must be done.”

Argument 4: A narrow construction of Section 230 serves to safeguard the fundamental right to access the courts to vindicate federal statutory remedies.

Briefs that make this argument: American Association for Justice; National Center on Sexual Exploitation; Lawyers’ Committee for Civil Rights Under Law; Child USA; Electronic Privacy Information Center; Zionist Organization of America.

Argument 5: “A correctly cabined understanding of [S]ection 230 allows consumers to shop for different content-moderation and recommendation regimes. Importantly, it also allows all parties to enforce the terms of service.”

Briefs that make this argument: Institute for Free Speech; Center for Renewing America, Inc. in support of neither party.

Argument 6: A broad interpretation of Section 230 promotes various forms of hate speech.

Brief that makes this argument: Zionist Organization of America.

Argument 7: Google’s non-publishing activities “are particularly troubling for adolescents, who are more receptive to harmful content, such as videos encouraging disordered eating, self-harm—or in this case, terrorism activity.”

Briefs that make this argument: Common Sense Media; Fairplay; Zionist Organization of America.

Argument 8: “Th[e] thorny problem of government control over the Internet could be substantially remedied if courts carefully construe Section 230 so that it only immunizes decisions made by providers that meet the statutory preconditions.” The argument is that a narrower Section 230 would open providers to discovery about government pressure, which would incentivize the government not to apply pressure to social media companies.

Brief that makes this argument: America’s Future.

Secondary Arguments in Support of Google

Argument 1: Social media companies “remain liable for an array of claims to which Section 230 does not apply.”

Briefs that make this argument: NYU Stern Center for Business and Human Rights; Center for Democracy & Technology; Product Liability Advisory Council.

  • Section 230 “does not insulate a company from liability for all conduct that happens to be transmitted through the internet.” Claims under the Fair Credit Reporting Act and anti-discrimination statutes are permitted, and Section 230 has “excluded from its liability shield claims related to violations of federal criminal law, intellectual property rules, wiretapping statutes, and state laws ‘consistent with’ Section 230.”
  • “Providers’ business practices, and their consequences, are governed by many areas of law unaffected by Section 230. ... For example, comprehensive privacy or data protection laws are the best way to limit online services’ ability to collect, use, package, and sell personal information about users and to limit the use of such information for targeting content. ... Competition laws can address the risks to competition, innovation, and consumer choice from a concentration of power within a few major firms in the online ecosystem.”
  • “Congress drafted Section 230 in broad terms, focusing on the conduct for which the provision bars liability—and expressly identifying the types of legal actions to which the defense does not apply. Because those exclusions do not include product liability and similar common law claims, there is no basis for categorically precluding assertion of the Section 230 defenses with respect to those causes of action.”

Argument 2: Interpreting Section 230 to bar the plaintiff’s claims does not raise federalism concerns because Section 230 contains a clear statement preempting state law, and the regulation of the Internet is encompassed by the Commerce Clause.

Brief that makes this argument: Washington Legal Foundation (directly responding to the State of Tennessee brief included above).

Argument 3: Section 230 promotes diversity in the context of the development of online platforms for user speech.

Briefs that make this argument: Electronic Frontier Foundation; Trust and Safety Foundation.

Argument 4: Section 230 is particularly beneficial to smaller technology companies.

Brief that makes this argument: Marketplace Industry Association and Match Group, LLC.

  • “Surveys of venture capitalists reveal that weak or unclear intermediary liability laws deter them from making an initial investment in startups. And at annual Marketplace Risk conferences co-hosted by the undersigned trade association, Section 230 and its current scope is a frequent legal topic among member organizations attempting to assess litigation risk and financial viability.”
  • “[D]efending even against an obviously meritless lawsuit can cause significant damage to a startup’s operating reserves. And if the suit progresses beyond a motion to dismiss, small technology companies may face insurmountable costs of litigation, including those required to comply with discovery.”
  • “Section 230 spares emergent companies these costs by providing not merely a defense to liability but also [protection] from the costs of fighting legal battles.”
  • “By removing a costly barrier to entry, Section 230 allows technology businesses to realize the potential of their businesses.”

Other briefs that make this argument: Trust and Safety Foundation; Automattic Inc.; The American Action Forum; Center for Growth and Opportunity.

Argument 5: Weakening Section 230 would “weaken the incentives and tools for platforms to moderate content.”

Brief that makes this argument: National Security Experts (including 10 signatories).

  • “[O]nline platforms like YouTube, Facebook, and Twitter organize and display vast amounts of user-generated content—far more than any workforce could manually and comprehensively review. Accordingly, operators of online platforms must use some form of automation to assist in reviewing third-party content and determining the circumstances under which to display a particular piece of third-party content on a list of material that is potentially relevant to the end user.”
  • “[I]t remains unclear whether alternatives to the user-interest-focused algorithms that many online platforms use today would be any better for national security—and they could be far worse. Take, for example, a system where user interaction alone (such as user ‘upvotes’ or ‘downvotes’ of third-party content) determines how content is displayed. Such an online platform would be using simple, non-algorithmic methods to ‘recommend’ content and determine how prominently it is displayed. Under Petitioners’ argument, then, the platform operator would seemingly be entitled to Section 230 immunity for its recommendation lists. But terrorists and foreign adversaries have shown the will and the means to systematically ‘upvote’ dangerous content. They could easily exploit online platforms that organize third-party content by popularity. The same would be true if ranking were based on how often content is uploaded—terrorists and foreign adversaries could also manipulate that approach.”
  • “[T]he best option is to encourage online platform providers to continue to improve their methods of detecting and responding to dangerous online material.”

Another brief that makes this argument: Scholars of Civil Rights and Social Justice.

Argument 6: Any change to Section 230 should come from Congress, not the courts.

Briefs that make this argument: Internet Infrastructure Coalition; Public Knowledge; Bipartisan Policy Center; Progressive Policy Institute; Former Senator Rick Santorum (R-PA) and Protect the First Foundation; ACT | The App Association; ARTICLE 19: Global Campaign for Free Expression.

Argument 7: Weakening Section 230 would not decrease hate speech or protect marginalized voices and would also not address the alleged censorship by Big Tech companies.

Brief that makes this argument: TechFreedom.

Argument of Interest in Support of Neither Party

Suits about content recommendation algorithms “are intensely fact-bound and the result in this case should be limited accordingly.”

Brief that makes this argument: Integrity Institute in support of neither party.

  • “This opinion should not hold or suggest that the presence of a recommender algorithm affords blanket immunity to a platform under Section 230; nor should the opinion hold or suggest that a platform making any decisions at all about content recommendation, including especially content moderation, excludes the platform from the ambit of Section 230 immunity. This Court should let Courts of Appeals continue to answer novel questions about recommendation algorithms as they arise, and should continue to allow platforms to moderate content to benefit users without worrying that using an algorithm to do so risks their immunity under Section 230.”
