A Deep Dive Into Gonzalez v. Google

Justin Hendrix / Feb 19, 2023

Audio of this conversation is available via your favorite podcast service.

This episode features four segments that dive into Gonzalez v. Google, a case before the Supreme Court that could have major implications for platform liability for online speech. First, we get a primer on the basics of the case itself; then, three separate perspectives on it.

Asking the questions is Ben Lennett, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including serving as the Editor in Chief of Recoding.tech and as policy director for the Open Technology Institute at the New America Foundation. Read Lennett's consideration of the case and what's at stake here.

Lennett's first interview is with two student editors at the publication Just Security, Aaron Fisher and Justin Cole, with whom Tech Policy Press worked this week to co-publish a review of key arguments in the amicus briefs filed with the Court on the Gonzalez case.

Then, we hear three successive expert interviews, with Mary McCord, Executive Director of the Institute for Constitutional Advocacy and Protection (ICAP) and a Visiting Professor of Law at Georgetown University Law Center; Anupam Chander, a Professor of Law and Technology at Georgetown Law; and David Brody, Managing Attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law.

Below is a lightly edited transcript of the discussions.

Part 1: Aaron Fisher and Justin Cole

Ben Lennett:

You're both law students. Why don't you tell us a bit about your interest in the Gonzalez case, some of the work you've done to understand what the issues are, what some of the arguments are and so on.

Justin Cole:

Yeah, so for me, I initially became interested in social media and technology issues from kind of a social policy standpoint. I'm especially concerned about the negative impacts that excessive social media use is having on our society across many areas, from teen mental health to the national security arena. I first became specifically interested in the Gonzalez and Taamneh cases through research that I've been doing for a paper that I'm writing about a difficult issue, which is websites where users log on and encourage and give each other directions on how to commit suicide. What the Supreme Court ultimately says in its decision in Gonzalez could have a major impact on the government's ability to crack down on these websites.

Aaron Fisher:

Yeah, I became interested in social media and tech issues more generally through a human rights lens as an undergrad, while doing research on the ways in which Facebook has incited violence against the Rohingya minority in Myanmar specifically. As for Gonzalez, I actually talked to several of my friends at the law school here who have done some work on the case through a clinic. It became clear to me how impactful this case could be, both in the human rights area and more generally for social media at large. I think it's particularly interesting to me because there don't seem to be, in this case, the clearly delineated left/right divides that we often see on hotly contested Supreme Court issues. I found that really compelling, as well.

Ben Lennett:

I think what would be helpful for folks who are listening to the podcast is to start with some background on the case itself, particularly why the lawsuit was filed against Google in the first place.

Justin Cole:

Sure. In November 2015, Nohemi Gonzalez was murdered by ISIS terrorists as part of the large-scale ISIS attack in Paris. She was one of 129 victims, and her estate and her family subsequently brought suit against Google alleging that, through YouTube, it aided and abetted ISIS in violation of the Anti-Terrorism Act, the underlying statute here, by affirmatively recommending through computer algorithms videos designed to radicalize viewers. The contention was that these YouTube recommendations were uniquely essential to the success of ISIS, and that Google therefore contributed to the attacks that tragically left Gonzalez dead. The original complaint argued, as I said, that the YouTube recommendations were contributing to this. The district court dismissed this complaint based on Section 230 of the Communications Decency Act, which is a very short provision, but it says... I think it's worth reading. "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

This provision has been interpreted really broadly, and for that reason both the district court and the Ninth Circuit determined that Google fell under the protection of Section 230. That's how we got to the Supreme Court level here.

Ben Lennett:

The lawyers for Gonzalez then appealed to the Supreme Court. What's the specific question they asked the court to review?

Aaron Fisher:

Okay, so the simple version of the question is whether Section 230(c)(1) of the Communications Decency Act, which Justin just read, immunizes online platform operators when their websites make targeted recommendations of user-provided content. In other words, what that means is whether the algorithms that popular websites like Twitter, YouTube, or even TikTok use to feed users content are protected by Section 230.

Ben Lennett:

Specifically though, the question itself is framed in this broader way, and I think we'll talk more about this in the context of particular briefs, so that it is about the recommendation, and that recommendation can happen either through an algorithm or presumably some other mechanism.

Justin Cole:

That's exactly right. To give a few examples with the platforms that I mentioned a moment ago: when you are on TikTok, for example, and the app kind of gets to know your interests and sends you the next video in your feed to watch, that is organized via one of the algorithms that we're talking about. Or, for example, on YouTube it would be what the platform recommends as the next video for you to watch when you're done watching a video. Then Twitter would be when you're logging onto Twitter; let's say you follow 400 people, you're not just seeing in real time what each of those 400 people is posting at the moment that they post it.

You're going to be seeing more posts from people who have more followers, or maybe posts that have keywords matching something you've searched before. These are extremely important algorithms and recommendation strategies for these companies that kind of make the apps what they are. I think without those algorithms it would be an extremely different online environment. I think not that many people realize what the far-reaching effects of this case may end up being.
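To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of feed ranking described above. It is not any platform's actual system; the fields and scoring weights are invented for illustration, showing how posts might be ordered by follower reach, overlap with a user's past searches, and recency rather than strict chronology.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical illustration only -- not any platform's real ranking algorithm.

@dataclass
class Post:
    author_followers: int    # reach of the account that posted
    keywords: set            # topics detected in the post
    posted_at: datetime      # when it was posted

@dataclass
class User:
    past_searches: set = field(default_factory=set)

def score(post: Post, user: User, now: datetime) -> float:
    """Toy relevance score: bigger accounts, familiar topics, fresher posts rank higher."""
    reach = post.author_followers ** 0.5                      # dampen huge follower counts
    interest = 2.0 * len(post.keywords & user.past_searches)  # reward familiar topics
    age_hours = (now - post.posted_at).total_seconds() / 3600
    freshness = 1.0 / (1.0 + age_hours)                       # newer posts score higher
    return reach * freshness + interest

def build_feed(posts: list, user: User, now: datetime, limit: int = 10) -> list:
    """Return the top-scoring posts instead of a purely chronological list."""
    return sorted(posts, key=lambda p: score(p, user, now), reverse=True)[:limit]
```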

Aaron Fisher:

The implications of this case are quite significant, in particular to the extent that the court finds in favor of the petitioner, the Solicitor General, or some of these other briefs, which would fundamentally narrow Section 230's protections.

Justin Cole:

Yeah, I think that the petitioner is really trying to home in on a textual analysis here. In their brief, they're to a large extent trying to avoid that larger discussion of how significantly things would change, and really just looking at the text and saying that what the text is trying to do here is prevent Google from being held liable as a publisher of third party material. The petitioners are saying, "Well, Google's not being held liable as a publisher. They're being held liable for their recommendations, and recommendations are separate from publications."

They're also looking at the fact that the recommendations are provided by Google itself. The videos were certainly made by other providers, not by Google, but the recommendations are Google's own. That's kind of the main focus of the petitioner's brief. But as I guess you're kind of alluding to, the respondents and amici on their side are pointing to this larger discussion of what this will look like in the social media framework and in websites more broadly if they are no longer able to use recommendations. Depending on how broad this decision ends up being, does that mean that Twitter has to list out, as Aaron was suggesting earlier, tweets just in order of appearance if they can't use these algorithms to try and target what they think you are interested in? I think that's the broader issue here.

Ben Lennett:

Let's dive a little bit more into the substance of what the petitioners and some of these others are doing, which is trying to distinguish between the ISIS propaganda videos and the specific actions or conduct of Google in this case. Just spell that out a little bit more in the context of what the Gonzalez lawyers have said, and then maybe compare it to what the US government brief argued in a somewhat more nuanced fashion.

Aaron Fisher:

They're definitely related arguments, and as I was saying a little bit earlier, the petitioners are really homing in on almost each word of Section 230, which is fairly simple given how short it is. They go through and say, "Well, first, Google is not being held liable as a publisher of third party material; it's about its recommendations. Second, the recommendations are provided by Google; they're not provided by these other information content providers as Section 230 requires." Then finally, the defense doesn't apply because Google's not acting as a provider here with its recommendations. It's doing something kind of new and independent on its own.

The Solicitor General does a related thing in her brief, as well. She's really trying to distinguish between Section 230 protecting the dissemination of third party speech, Google not being held liable simply for having defamatory or harmful speech on its site, and the argument that the statute is not supposed to immunize other conduct that Google is engaging in, which includes the design of the website and the recommendations. I was going to expand a little bit on why this would potentially mean that Twitter would have to put out content in the order that it's posted. That would basically be because, if it continued using the algorithms that it does and those algorithms are not covered by Section 230, it would open up Twitter to a massive amount of liability and litigation. It's likely that a platform such as Twitter would just be buried in lawsuits to such an extent that it would no longer be feasible for it to use that type of algorithm.

Ben Lennett:

That's a significant part of the pushback from a number of the briefs: not just a disagreement over the interpretation of Section 230 itself, but a recognition of the implications and the dramatic changes that a decision in favor of the petitioner or the Solicitor General would have in terms of really reshaping how these platforms operate?

Aaron Fisher:

Absolutely. We read a number of briefs, but one in particular was filed by seven or eight extremely well known organizations that are free speech or First Amendment oriented, and some that advocate for freedom of the press as well. Basically these [inaudible 00:12:26] are arguing that Twitter, for example, in such a situation would have to... Or YouTube, actually, I think is a better example. YouTube in such a situation would have to over-police content in order to make sure that they're not allowing anything on their website that would open them up to liability.

What that would likely lead to, according to these briefs, is a crackdown on legitimate information on the internet, the suppression of certain information that's not objectionable, just because the companies would at that point be over-cautious.

Ben Lennett:

It seems that there's a divide here, too: on one hand, the Solicitor General and the petitioner think that the court can draw a very narrow rule in this case, and the briefs that disagree with that feel there's really no way to draw a narrow rule here without eliminating a good portion of Section 230's protections.

Aaron Fisher:

I think that's most likely correct. I have read some arguments by commentators who say that it's possible the court could narrow Section 230 in a more minor way, such as specifically how it interacts with the Anti-Terrorism Act. The companion case here, Twitter v. Taamneh, actually arises out of the same fact pattern as the Gonzalez case. It starts with the same lawsuit by Nohemi Gonzalez's estate. However, that case is much more about the Anti-Terrorism Act and asks the question of what the definition of substantial assistance is in the context of social media companies' actions concerning content on their platforms that promotes terrorism.

It's theoretically possible that the court could find for Google in the Gonzalez case, fully upholding Section 230, but also find that a website like YouTube knowingly provides substantial assistance under the Anti-Terrorism Act when it fails to take more aggressive action to remove pro-terrorism content from its platform. There are other ways that the court could reach some middle decision that wouldn't completely change the internet as we know it. But the more logical argument, and the briefs that at least I personally found more compelling on the Gonzalez side of things, would most likely lead the court to narrow Section 230 to not include the algorithms that we already discussed.

Ben Lennett:

There was also this third thread in terms of narrowing or reinterpreting Section 230's protections around this idea of distributor liability, which I think, if you're not a lawyer, can be a little challenging to follow. But I was wondering if either one of you could explain that.

Aaron Fisher:

Sure. Basically there are a couple of different ways of looking at the argument. The main one, which Justin alluded to earlier, is about this difference between distributor liability and publisher liability. There's kind of a question here asking whether, under the common law, which is basically the tradition of legal decisions in the US and before that in England, publisher and distributor are two completely separate concepts, or whether distributor is a subset of publisher. The text of Section 230 refers to publishers and not to distributors.

That's a key question. Senator Josh Hawley, a Republican from Missouri, submitted one of these amicus briefs to the court, and in that brief he argues that Section 230 covers publisher liability and not distributor liability, and that in these cases Google, by way of YouTube, is actually a distributor and not a publisher. Therefore these algorithms would not be covered by Section 230. There's also a brief filed by a number of law professors, I believe, that directly addresses this argument by Senator Hawley and basically disagrees based on the legislative history of Section 230, talking about the common law and how, in their opinion, distributor liability is a subset of publisher liability and therefore is covered by Section 230.

Ben Lennett:

One is strict liability, one is a secondary liability, but there is this specific difference between how a newspaper, for example, might be liable and how a distributor could be liable. That distributor might be, I think the example that was often used was, a bookstore or a library.

Can you just walk through what those differences are?

Justin Cole:

One difference is whether or not the platform has specific knowledge of the content that is in violation of the law. For instance, if there's content posted on YouTube that's an ISIS recruitment video, and that content is illegal under, for example, the Anti-Terrorism Act, the operative question then becomes whether the platform has knowledge that that content is on its website. That's the idea that Senator Hawley talks about, and actually Senator Ted Cruz has another brief that's co-written with, I believe, 14 members of Congress. They make the argument that, because they believe publisher and distributor liability are separate, in this case Google is a distributor, and Google had specific knowledge that there were ISIS videos on its platform. This idea leads to kind of a practical problem, which is that given just the millions of videos and tweets and everything that goes onto these platforms on the internet, it's basically impossible for these platforms to keep track of every single thing that's posted.

That's another reason why, if the Supreme Court does narrow Section 230, it would likely lead to a wholesale change in what we see on these websites, because it would simply not be possible for these platforms to go through one by one and get rid of content that violates any US law or state law.

Ben Lennett:

That's one of the arguments, this distributor versus publisher liability question. But you talked about the Taamneh case, and the context of this case is within the JASTA framework for civil liability. I think there's another argument put forward by a brief on behalf of former national security experts and others who have worked in terrorism and national security, around the fact that JASTA is essentially the statute that provides the ability to sue someone who aids and abets a terrorist organization.

It was passed after 230 and there's some intentionality there from Congress to provide this as a mechanism to hold different institutions and entities accountable.

Aaron Fisher:

Yeah, there is, and that argument is specifically made in the brief that I think we've brought up a few times already, by these former national security officials. The brief is explicitly in support of neither party, but there is this argument that there's simply a way to reconcile these two statutes with each other, and the idea is that the JASTA statute was passed more than 20 years later and that it does, even though not explicitly, impliedly repeal Section 230 to the extent that Section 230 would not allow for aiding and abetting liability here.

Justin Cole:

I think one response that one would potentially have to that is that Congress has amended Section 230 at various points.

It's obviously able to do so at any point, and I think that's another large theme that's brought up by the respondents and their various supporting amici, which is: is this really the role of the court, to get into and grapple with this issue of Section 230 given its huge implications for the internet and the economy and all these other types of issues? I think that's one thing to think about here, as well.

Ben Lennett:

What are the options for the court? I mean in some respects it doesn't necessarily have to make a decision here.

Justin Cole:

There are a number of options that the court can pursue, some of which we've already touched on a little bit. The first is to dismiss the case as having been [improvidently] granted. This is highly unlikely to happen, but it would not be unprecedented. Basically it would be a dismissal of the case at some point before a decision would've been released, because the court would have felt that it should not have granted cert in the case after all.

Given that these seem like real issues, I think this is extremely unlikely. The second option would be to simply uphold the Ninth Circuit's decision in Gonzalez, which would mean just reaffirming that court's decision to dismiss the lawsuit because the petitioner's claim is barred by Section 230. The next option would be to narrow Section 230, which we already discussed a bit. As I said, the most likely way to do that would be to agree with the plaintiffs that algorithms used by companies like Google, via YouTube, to recommend or push certain content to users are not covered by Section 230. But as I mentioned, there are also other, more narrow ways that the Supreme Court might be able to do this. I would say those are the main paths that the court could take.

Ben Lennett:

You mentioned the Taamneh case, but maybe just discuss a bit more how those two fit together. If a decision in one happens, how might the court avoid having to make a decision in this particular case?

Aaron Fisher:

Sure. As it currently stands, the Ninth Circuit actually came down on separate sides in the two cases. If the Supreme Court were to affirm in both cases, that would have the effect of what I was talking about a little bit earlier. The Taamneh case deals specifically with the Anti-Terrorism Act and that definition of substantial assistance. That could essentially cause a company like Google or Twitter to have to take more aggressive action to remove pro-terrorism content from its platform, without the court explicitly saying that Section 230 does not cover these algorithms.

Ben Lennett:

We've had dozens of briefs filed. What's next in the process?

Aaron Fisher:

Well, the oral arguments for both of the cases are this upcoming week, and I think we'll get a better sense then of where everyone is coming down. As I mentioned earlier when describing my interest in this case, I think it's something that isn't necessarily strictly left/right ideological.

You had Senator Josh Hawley, as we mentioned earlier, filing in support of Gonzalez, but then you had former Senator Rick Santorum filing a brief in support of Google. It's not something necessarily that is easy to predict, I think in that way, as some cases tend to be. Also, it will really depend I think a little bit on how the court views its own institutional role and competence here, and whether they feel as though they're able to make a decision here that should not be left to Congress instead.

Ben Lennett:

When can we expect a decision?

Aaron Fisher:

The Supreme Court doesn't say exactly what day it's going to release a decision in a specific case; however, we can expect a decision sometime in the late spring or early summer of this year, so stay tuned.

Ben Lennett:

Justin and Aaron, thanks so much for your time and expertise.

Justin Cole:

Thank you.

Aaron Fisher:

Thank you.

Justin Hendrix:

If you're enjoying this podcast, consider subscribing: go to techpolicy.press/podcast and subscribe via your favorite podcast service. While you're there, sign up for our newsletter. If you'd like to read Aaron and Justin's review of amicus briefs, visit Tech Policy Press. There, you'll also find a piece by Ben Lennett on the importance of the case, as well as a variety of views on it from individuals representing different groups, including many that filed briefs with the court.

The next three segments of the podcast are short interviews with three experts who bring different perspectives on the Gonzalez case: Georgetown University Law Center's Mary McCord, Georgetown Law's Anupam Chander, and the Lawyers' Committee for Civil Rights Under Law's David Brody.

Part II: An Interview with Mary McCord

Mary McCord:

My name is Mary McCord. I'm the executive director of the Institute for Constitutional Advocacy and Protection, or ICAP, at Georgetown Law. I'm also a visiting professor of law, but I spent most of my career at the Department of Justice as a federal prosecutor and more recently in the National Security Division, including as the acting assistant attorney general for national security.

Ben Lennett:

Thank you and thank you so much for speaking with me today. Your brief was filed on behalf of a group of former national security officials. You talked a bit about your background in national security and terrorism and being a prosecutor. Could you just give a bit more about that particular experience and how that relates to the Gonzalez case?

Mary McCord:

Sure. I went over from the US Attorney's office as a federal prosecutor to the National Security Division in May of 2014, one month before ISIS declared a caliphate.

That summer was the summer of just brutal hostage takings, kidnappings, beheadings, real terror, as ISIS took actual territory in Syria and used the internet and social media to propagandize, to recruit, to connect people, to grow its network, to raise money. Working with other counterterrorism officials in the government at the time, it was a threat that was really different than what we had seen, for example, with Al-Qaeda; the technology had just changed so much. ISIS brought a whole new level of threat to what already is a dangerous situation. We've got a foreign terrorist organization that is taking over physical territory, claiming itself to be a sovereignty, and recruiting people to come and engage in terrorist acts. Being able to organize themselves and recruit over social media sped up dramatically the ascendancy of ISIS and its ability to reach so many people. That made the work of those of us in counterterrorism that much more difficult.

Every case, and I saw every single case from 2014 until I left in 2017, every criminal terrorism case the Department of Justice brought involved some type of radicalization over social media. I saw every complaint, I saw every indictment, I saw every sentencing memo. It was an integral part of ISIS's success in those years. I knew personally that algorithmic targeted recommendations and amplification of terrorist content significantly and exponentially expanded the scope, the breadth, of people who they were able to engage with. That's really why, when I saw this case, without taking a position on whether I think the plaintiffs here will ultimately be able to prove a violation of the Anti-Terrorism Act, which is essentially a tort claim, I felt that the idea of barring them at the courthouse door because of an expansive reading of 230, one that in my opinion is unwarranted and inconsistent with the language of 230, was wrong.

That's what motivated us at ICAP. Myself and my colleague Rupa Bhattacharyya, who was most recently the special master for the 9/11 Victims Compensation Fund and also had a long career at the Department of Justice, felt very strongly that there was a message here to be conveyed to the Supreme Court that it needed to take into consideration as it ruled on this case.

Ben Lennett:

The question before the Supreme Court is particularly focused on this idea of targeted recommendations, but I wonder if you can speak to the original complaint, the original lawsuit that was filed against Google: the terrorism laws and the liability involved in those specific laws, a bit about their purpose and history, and the extent to which that conflicts with the existing Section 230 protections.

Mary McCord:

Many people are familiar with our criminal terrorism laws. Material support to a foreign terrorist organization is the most commonly charged terrorism offense in the US, and that applies when any person or entity or company knowingly provides material support or resources to a designated foreign terrorist organization. That could be money, that could be equipment, that could be yourself as a fighter or as some other employee or aide to a foreign terrorist organization. But there are many, many people who have been injured and harmed or had family members killed by terrorist acts. Until JASTA, the Justice Against Sponsors of Terrorism Act, was passed, the only civil claim that those who had been injured by a terrorist act could bring was basically to sue directly the person or the entity that committed that terrorist act.

What JASTA did was expand the ability for injured persons to seek civil liability by creating secondary liability. What that means, it's a legal terminology, is creating liability for those who aid and abet by knowingly, substantially assisting a foreign terrorist organization or person who is engaging in terrorist acts. The way the statute works, the terrorist act has to be directed or authorized by a foreign terrorist organization, but then the liability kicks in to hold not just that FTO, that foreign terrorist organization, liable but anyone who assisted that foreign terrorist organization in committing acts of terrorism. Now, I'm going to put aside some of the legal issues that are in the companion case of Taamneh v. Twitter, or really Twitter v. Taamneh, which goes right to the heart of how we interpret that Anti-Terrorism Act provision that I'm talking about with the secondary liability.

But for purposes of Section 230, what this provision in JASTA did, and what Congress was very clear in its language that it was intending to do, is provide civil litigants with the broadest possible basis consistent with the Constitution of the United States to seek relief against persons who have provided material support, directly or indirectly, to foreign organizations or persons that engage in terrorist activities against the United States. That's what Congress said when it enacted JASTA. To interpret Section 230, to block those very persons who have been harmed by terrorist acts, to block them at the courthouse door, not even let them bring their case, which is what this expansive reading of 230 would do, is fundamentally inconsistent with Congress's intent to expansively allow for broad civil liability.

Given that JASTA is a more recent statute and conveys Congress's intent, we think that's a couple of things. One, it's a good reason not to interpret Section 230 the way that the social media companies would have it interpreted, to apply even to targeted algorithmic recommendations, which is well beyond the language of Section 230. It's also, separately, the basis for a separate legal argument that JASTA actually impliedly repealed any application of Section 230(c)(1), at least to ATA, Anti-Terrorism Act, claims involving targeted algorithmic recommendations of content.

Ben Lennett:

A main component of your argument is that there's no guarantee that Facebook or Google would be liable, particularly under this statute. But the issue with Section 230 is that it prevents litigants from even having that conversation within the court system; it's automatically shut down by the interpretation of Section 230.

Mary McCord:

Right. It prevents them from having their day in court. One of the key components of our justice system in the US is that people who are injured, who have a cause of action, meaning they have the ability by statutory right to come into court and seek compensation for that injury, will have their day in court.

They may win, they may lose, their evidence may be insufficient to prove what they need to prove, but they get a chance to fight the fight. The expansive interpretation of 230(c)(1) doesn't even give them that day in court and just cuts them off at the knees.

Ben Lennett:

You're not a 230 scholar, but you did make some arguments concerning 230's scope and its scale. I wonder if you could just kind of walk through some of your main points on the interpretation of 230 itself.

Mary McCord:

Sure. First, our brief makes, frankly, a policy argument just to make sure that the Supreme Court is very well aware of the impact of targeted recommendations through these algorithms purposefully created by social media companies for their own business interests to make money. I wanted to make sure the Supreme Court is aware of that process and its real significant impact on terrorism and on foreign terrorist organizations and their ability to expand, monetize, recruit, and commit terrorist acts.

We include examples of prominent terrorist acts where we establish and show that part of the proof that the government had accumulated with respect to those terrorist acts showed radicalization through social media. Beyond that sort of policy argument, our substantive arguments, which are very much like the petitioner's argument or the government's argument, are that an algorithm that the social media company creates in order to make recommendations of other content for users to view, should they so choose, is not third party content. In this case we're talking about Google's YouTube, so we're talking about videos being recommended through their algorithms; that targeted recommendation is not third party content. 230(c)(1) bars litigation against an internet service provider, and let's just accept for present purposes that that's what Google is here. It bars liability for simply posting third party created content, but it does not bar liability for the social media company's own created content.

The argument is really pretty simple: these algorithms are created by the companies. In fact, they're created through a lot of research so that they can be applied directly. The companies gather information about people's interests, what other videos they've viewed, what they seem to like. That goes into their algorithm, which delivers up more of that content to those customers, to those viewers. In fact, they also know from their research that the more controversial the content is, the more likely it is to pull users in, pulling them into these feedback loops where they just get more and more and more of it and go down a rabbit hole. All of that provides more ability for advertising and revenue, et cetera. That is all created by the company; that's not created by ISIS or any other terrorist organization. They're creating this loop effect. First, then, it's just a textual statutory interpretation argument that Section 230 should not be interpreted to apply to conduct that the social media company engages in on its own.
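As a purely illustrative aside, the feedback loop described here can be sketched in a few lines of Python. This is a hypothetical toy model, not any company's actual system: each view reinforces the viewer's interest profile, and the profile in turn surfaces more of the same kind of content.

```python
from collections import Counter

# Hypothetical sketch of the feedback loop described above -- not a real system.
# Each video in the catalog is tagged with topics; interests accumulate per topic.

def recommend(catalog: dict, interests: Counter, k: int = 3) -> list:
    """Rank videos by how much their topic tags overlap the viewer's accumulated interests."""
    def overlap(video: str) -> int:
        return sum(interests[tag] for tag in catalog[video])
    return sorted(catalog, key=overlap, reverse=True)[:k]

def watch(video: str, catalog: dict, interests: Counter) -> None:
    """Each view reinforces the tags of what was just watched -- the 'loop' effect."""
    interests.update(catalog[video])

# Toy run: repeated views of whatever ranks first steadily entrench that topic.
catalog = {"v1": {"cooking"}, "v2": {"politics", "outrage"}, "v3": {"outrage"}}
interests = Counter()
for _ in range(5):
    top_pick = recommend(catalog, interests)[0]
    watch(top_pick, catalog, interests)
print(recommend(catalog, interests))  # the same topic now dominates the ranking
```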

We also point out that the very same Congress that passed Section 230, just two months later in the very same year, passed some pretty extensive revisions to the terrorism chapter of the US Code and dramatically increased the types of anti-terrorism tools that were available to the government. The same Congress that created Section 230 was clearly thinking about tools to be used against terrorists. It's implausible to think that those who created Section 230, again, before social media existed, when only a fraction of House and Senate members even had internet service, could have been thinking at that time, when they had terrorism front of mind, that this immunity they were creating could ultimately be applied to social media companies they couldn't even have conceived of, promoting, targeting, and recommending terrorist content.

Ben Lennett:

I think your argument makes a lot of sense, in terms of just kind of understanding the context and trying to parse where the company's actions are and what's related to the content itself.

But there is a considerable amount of pushback within the context of the Gonzalez case briefs, which interpret the text much differently than your brief does. In particular, I think there's this sense among many of the briefs filed on behalf of Google that if you carve out this exception for algorithmic recommendations, or recommendations more generally, you really have nothing left of 230's protections whatsoever for social media. I'm just wondering if you have any response to that.

Mary McCord:

Again, there are people who work full time in tech policy and who actually understand much better than I do how algorithms work. My son's a software engineer, but I am not, I'm very much not. They would be better equipped to answer that, but what I can say is I don't see this as a sky-is-falling type of moment.

For example, search engines. I've heard people say, "Well, search engines wouldn't be permitted," but I don't see that suggestion in a reading of Section 230(c)(1) that does not cover targeted algorithmic recommendations. That's very different than user-generated searches using search engines. These targeted recommendations are not based on a user saying, "Hey, I would like to see more ISIS videos." These are things that a user is not inputting anything into at all, other than the user apparently watching extremist videos that then end up delivering them more extremist videos. I think that's just one example of the distinctions and differences in what this case is about. These targeted algorithmic recommendations, again developed by the companies for their own business purposes, are very different than normal search engine use. I would also say that if the companies have the technological ability to create these types of algorithms, again, drawing from user data to target them very specifically by user, then they certainly must have the technological capability to do their algorithms in a different way.

They just haven't had the market incentive or the litigation risk incentive to do that. Every other manufacturer of a product or provider of a service who puts their product or service into the marketplace has to account for how those products or services might malfunction or be used by bad actors. They have to account for that because they know they might be civilly liable if they don't. Every other manufacturer or producer of products or services takes precautions from the beginning to mitigate the risk that their product or service could create harm for others. Social media didn't have to grow up with that. They started out their platforms and went full bore without adequately taking into consideration the harms that could be accomplished over those platforms.

Belatedly, they're, of course, doing more. They are creating different types of algorithms. Facebook says this on its own website: "We're using AI to take down extremist content." I don't think they've denied that they've got the wherewithal to do it. It might not be perfect yet. There might be more work to do, but I just see this as a way of injecting normal marketplace concepts into what has otherwise been allowed to grow without ever having to take into consideration what everyone else who puts a product or service into the marketplace has to take into consideration. Again, they still have a lot that they would be immune from, because the mere posting of content, and content moderation in taking things down, would still be protected under (c)(1) and under (c)(2).

Ben Lennett:

Mary, thanks so much for your time and expertise.

Mary McCord:

Thank you for having me.

Part III: An Interview with Anupam Chander

Anupam Chander:

I'm Anupam Chander. I'm a professor of law at Georgetown University.

Ben Lennett:

Well, thank you so much for speaking with me today to discuss the Gonzalez case. I think it'd be helpful to understand what your interest is in the Gonzalez case. You were part of a brief that included a number of other internet law scholars, but do you have any particular interest in this case generally?

Anupam Chander:

I became concerned that the simple process of recommendation, or automated recommendations in particular, were at risk and they are so central to what companies, large and small, do online. I wanted to make sure that the Supreme Court had good advice on this issue because with Justice Thomas's activism on this question, I think there's a real concern that the court might radically rewrite internet law in a way, I think, that would be harmful to most of us who use the internet on a daily basis.

Ben Lennett:

The brief that you joined was a response to the brief of the petitioners, to the Solicitor General's brief, and to some other briefs that had argued in favor of narrowing Section 230 in the context of the Gonzalez case. Can you walk us through some of the critiques that you have of the petitioner's arguments and the Solicitor General's arguments around how they think about recommendations in the context of Section 230?

Anupam Chander:

Sure. The petitioners have a difficult argument to make, because there is an appealing aspect to the claim that companies should be liable for what they do and not for what other people say. The Solicitor General and the petitioners would like the Supreme Court to hold that Google is to be liable for what it does, e.g. recommending videos, but not be liable for the videos themselves. This turns out to be a difficult argument to maintain, because one of the key features of the modern internet, a feature that makes the internet really useful is search engines.

Search engines do nothing but recommend content to you. They say, "Of the hundreds of millions of items available on the web, these are the items we recommend to you, the ones we believe will be of interest to you. We're not sure about it, but we think they are responsive to your interests." The Solicitor General and the petitioners have to distinguish search engines from a kind of standing recommendation system that exists in newsfeeds, which also says, "Of the thousands or millions of pieces of information that we could show to you, these are the ones we think are responsive to your interests." The argument then comes to something like, "Well, the immunity is available if the person asked for that information at that moment, but it's not available otherwise." That distinction is nowhere in the text. I think it's going to be a hard argument to ultimately persuade the court that recommendation itself can lead to liability in this case.

Ben Lennett:

Is part of the challenge here that it's hard to disentangle the recommendation from the content itself, particularly in this case? Because presumably, if this involved different content, you wouldn't have much of a claim under JASTA if it was some other content and not ISIS propaganda.

Anupam Chander:

It's hard for the plaintiffs to distinguish the underlying content of the videos from their claims, because if YouTube was recommending cat videos, they would not be held liable for promoting terrorism. The underlying content is critical, is essential to the claims in this case.

Ben Lennett:

That appears to be by design within Section 230, whether or not Congress understood that they were creating such a broad level of protection, maybe is a bit more of an open question. But at least in terms of your reading, and I think many other interpretations of Section 230, that is the distinction.

Anupam Chander:

Section 230 makes a simple determination of who is liable. It says the speaker of the content is liable, but the publisher, here the online publisher of that content, is not liable. It really does create a separate regime for online content. The reason it does so is because it recognizes that this isn't a heavily curated podcast or a heavily curated newscast, or a newspaper. It's rather a tumult of millions of people speaking online. That recognition is there. Even in 1996, you already have bulletin boards which are full of lots of material, some of it very harmful. Congress recognized that if you make the platform itself liable for that content, it will lead the platform to take measures that will suppress more speech than we think is appropriate.

Ben Lennett:

What about the argument, made in a reasonable number of the briefs filed in support of the petitioner, discussing this distinction between publisher liability and distributor liability: the idea that Section 230 still maintained the distinction between publisher and distributor liability, and the difference between those two and its implications?

Anupam Chander:

Yeah. On the internet, the publisher is also the distributor, that's just the way it is, and the distributor is also the publisher. If you are saying that 230 says no publisher liability, but you'll be liable as a distributor, I'm not sure what 230 does. That's what online services do. They distribute that content to you. Their servers distribute content; that is the very nature of what it means to publish. Now, in the real world too, publishers often distribute. Newspaper publishers literally bring the paper to your door. The effort to say that 230 did not include distributor liability, I think, faces a difficult reality: it's hard to imagine what 230 then does. Who gets 230 protections at all if they aren't protected for distribution? Because that's what a website does, it distributes content. Even if it's the most passive, in historical terms, website possible, it still sends images or text or video, and noise to your ears, via the internet. It distributes them to you.

It's not clear to me what is left of 230 immunity if there is no immunity for the act of distribution. Secondly, the common law also made it clear that distributors were liable for secondarily publishing that material. Distributors, before they were to be held liable, would be held liable as kind of constructive publishers. That was the common law. Remember, publishers are more liable than distributors under the traditional common law, but in order to get to liability for distributors, you have to then ascribe publishing activity to them. That's what the common law did. That's what our brief shows: the common law also recognized that distributors, before they could be held liable, would be treated as publishers.

Ben Lennett:

But the standard itself was different, strict liability versus this more secondary liability. The argument in favor of this is that the standard is somewhat lower because there's a requirement of knowledge that's inherent to it.

In this case, the argument, particularly in the Gonzalez case, is that Google had knowledge that this content was there, that they were amplifying terrorist propaganda, yet they took insufficient steps to mitigate that or to remove it from their algorithms.

Anupam Chander:

The claim isn't that they had knowledge of any particular piece of content, by the way; it is that they had knowledge generally that there was harmful material that might promote terrorism on their site. The knowledge-based claim would actually cover every manner of ill in society, because Google knows that basically every manner of ill in society is currently being propagated on its services. If you simply say there's a kind of abstract knowledge of wrongdoing, including fraud, including plans for violence, et cetera, all of that exists on Google's services. That, unfortunately, is going to be the case for any service that carries a large volume of traffic from human beings.

But let me go back. The question here that you posed is that there are different standards of liability for distributors and publishers. Maybe this simply says the statute was just trying to get rid of publisher liability while leaving distributor liability intact. Let me explain again why I think that fails. First, publisher liability was the stricter liability. It was that you didn't even need knowledge to have liability, and that stricter liability was clearly what the text removed. Now, if we say, "Oh, but now it's liable as a distributor," I find it hard to imagine what companies are not ultimately acting as distributors of material. That's the core function of interactive computer services. They distribute material to you. Now, if that's the claim, then the amount of material that they might have knowledge about in some sense is vast. Their knowledge of wrongdoing, of gambling in Casablanca, is significant.

Then it would essentially make them liable for all of the wrongs of society which they have knowledge are being propagated on these services. Finally, the common law of distributor liability also originally, and this is pre-1996, clearly said that distributors were held liable when they acted as publishers. They were treated as secondary publishers before they were held liable. If you are saying you cannot be treated as a publisher, well, treating a distributor as a publisher was a condition to hold the distributor liable. Congress knew how to impose knowledge-based liability in 1996. The Communications Decency Act elsewhere imposes knowledge-based liability. Congress knew exactly how to do it. In 1998, it does impose knowledge-based liability in the Digital Millennium Copyright Act. But when it does so in the DMCA, it has a notice and takedown system, and that notice and takedown system is very carefully written; it is a very complicated system of notice and takedown.

Why? Because a notice and takedown [inaudible 00:56:04], if you allow anyone to provide you notice and thereby create possible knowledge, means that there will be a huge suppression of speech left and right, because it's very easy to send a notice. Harvey Weinstein says, "Hey, this is defamatory." Now you know, you better take it down. Anyone can make these kinds of claims and provide notice. The DMCA says you've got to do that under penalty of perjury. Okay, you say you have copyright? Well, it's under penalty of perjury. That notice, it better be serious. It has certain rules about what the notice must provide. There's a lot of litigation on what notices must look like under the DMCA. This idea that Congress accidentally did that in 230 without stating it is hard to credit, when elsewhere in the CDA, and in the Telecommunications Act of 1996 of which the CDA is a part, it clearly spelled out narrow areas where knowledge was a condition for liability.

Two years later, it spelled out a very elaborate regime for doing so in the copyright context. I think we should be cautious about interpreting Congress's action as being this huge accidental creation of a distributor liability scheme and a knowledge-based notice and takedown scheme for all content and all wrongs that might arise online.

Ben Lennett:

Part of your contention here is that if the court were to reinterpret Section 230 as excepting distributor liability, that's much different than the way Congress crafted the DMCA, with its very specific standards for notice and the cautious writing of that statute, so as not to create a situation where providers would be liable across the board in all these sorts of circumstances.

Anupam Chander:

Here, a newspaper story that you have terrorist content on your site now suddenly makes you liable for terrorism everywhere. That's the theory of the case. My worry as a civil libertarian: I'm interested in making sure that people have the ability to speak and also to complain about authorities. I want people to be able to say, "Hey, the United States, we're doing something bad somewhere." I want people to be able to say that and not then have it taken down because it might encourage terrorism in some way. I want people to be able to say that in Arabic. I want to make sure that we have the right to speak, and especially to speak against orthodoxy, to question what others say. This is the history of civil rights speech in the United States. Folks who have argued for civil rights have long recognized, and this is New York Times [inaudible 00:59:20], that allowing tort claims in these contexts will often lead to suppression of important protests about what is happening in society.

I want people to be able to protest freely online. It does mean there will be speech online which I abhor, but it also means that speech that I think is important, protests against orthodoxy, is permissible online.

Ben Lennett:

That's an interesting point that you discuss, or that the brief you were a part of discusses: Section 230 doesn't just provide protection for the platforms, but also for users. Even in a case like this, if a user retweets or amplifies an ISIS video, they're also, presumably in your argument, not liable, because they're protected by Section 230 as well; that's what the statute says.

Anupam Chander:

Yeah, subsection (c)(1) of the statute literally says, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." No provider or user.

Ben Lennett:

Most people are not quite aware that it does protect users as well. Even in cases of defamation, for example, if you were to retweet or share a tweet or something that included a defamatory statement, you're protected by Section 230.

Anupam Chander:

Exactly. It's hard to imagine what you are left with of the user protections in 230 if the process of recommendation is not covered by Section 230. Users don't host other people's content; users sometimes amplify it. This was true in 1996. It's true today. The plaintiffs and the government have an argument that essentially runs in tension with the text of the statute. I understand fully the desire to make internet platforms liable for bad speech online. But the history of free expression in the United States over the last 60 years has shown that tort liability against speech platforms will erode the freedom that we have to protest.

This is why there are a huge number of briefs making the civil rights argument on Google's side.

Ben Lennett:

If the court decides in favor of Google in the case, are you concerned at all that it could do something to take Section 230's protections too far, for example?

Anupam Chander:

If the court stands with Google, it reaffirms the last quarter-century of the interpretation of Section 230 by the lower courts. It leaves the law as it stands. The plaintiffs in this case seek to have the court chart a new direction and interpretation, one that I think is inconsistent with the text of the statute. That is why the brief I authored along with other people makes the argument that there's a straightforward textualist argument that stands with Google in this case. I don't think there's a risk that it expands Section 230 further.

Section 230 is not a get-out-of-jail-free card. There are a number of cases where, if you make a claim against the companies for their own actions, like, for example, developing the content as in Roommates.com, or for other things as in a case called HomeAway in the Ninth Circuit, there are other avenues to challenge these companies' practices. But I think those issues, revisiting Roommates or HomeAway, for example, aren't raised in this case.

Ben Lennett:

Well, Professor Chander really appreciate your time and expertise to discuss this case, thank you so much again.

Anupam Chander:

Thanks Ben. I really appreciate it.

Part IV: An Interview with David Brody

David Brody:

I'm David Brody. I'm the managing attorney at the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law. Lawyers' Committee is a national nonprofit racial justice organization.

Ben Lennett:

Can you just tell me a little bit about what the Lawyers' Committee's interest is in the Gonzalez case?

David Brody:

The Gonzalez case has the potential to dictate whether or not and to what extent we can hold online platforms liable for illegal things that happen online. We care a great deal about that in two directions. The first is it will affect the ability to enforce civil rights laws when online platforms violate them. The second is because Section 230 is really important for preventing censorship of people of color online.

Ben Lennett:

Your interest is both in being able to bring cases to court, particularly around civil rights violations that occur, maybe from an algorithm or a platform, and also in this really important protection that Section 230 provides in terms of enabling platforms to publish a diversity of viewpoints.

David Brody:

Yeah, that's right. We really think it's important for the Supreme Court to take a balanced approach here. If immunity for platforms is too broad and sweeping, then it will be very difficult, if not impossible, to hold them liable for discriminatory algorithms and other discrimination and harms that the platforms themselves are responsible for. But on the other hand, if the Supreme Court guts Section 230, then the response of many platforms might be to just broadly censor user generated content, because it's generally a lot more cost-effective to do heavy-handed censorship than it is to do really detailed content moderation. What we've seen from AI content moderation so far over the years is that it already disproportionately silences people of color, LGBTQ people, women, religious minorities and other groups. We're particularly concerned that if platforms had to worry about liability for anything happening on their services, then it will make it extremely difficult to have open and frank conversations about race and gender issues.

It will be difficult for racial justice movements to mobilize online. What we've seen is, for example, social media was extremely important for the movement for Black Lives and the Me Too movement. Those types of modern civil rights movements have really germinated on the internet, and it would not be possible for those movements to exist without Section 230.

Ben Lennett:

If you hear critics discuss this, it's framed more as an impenetrable shield where platforms can pretty much do whatever they want and can't be sued, in terms of either civil or criminal liability. But I wonder if you can talk through what the standards are by which they can actually be held liable for particular harms that arise from the platforms?

David Brody:

There's essentially a two-part test, and when courts apply this test correctly, it gives platforms strong protections but not limitless protections. There have been a number of cases, especially more recently, where courts have said, "You know what? You can't pass the test, so you're not getting immunity." What is that two-part test? The first question is: is the plaintiff's claim seeking to treat the defendant as a publisher of third party content, of content made by someone else? That question is really key. The language of Section 230 never uses the word immunity. It doesn't say you're immune. What it says is that these users and providers of online services shall not be treated as the publisher of third party content. Under normal common law and other legal principles, publishers are often able to be held liable for the things that they publish. What Section 230 says is if you're publishing something and you didn't write it, you can get immunity.

But if a claim is based on something where publishing is not integral to the claim, then Section 230 doesn't apply. What do I mean? Let me give you some examples. In the context of civil rights, what we think about is mortgage approval algorithms, job applicant screening algorithms, tenant screening algorithms, facial recognition algorithms. All these types of systems might happen online, and they almost certainly are using third party content in their algorithmic process, but they're not publishing. They're doing something else with third party data online. Section 230 shouldn't cover that. To give an example from a recent decision, there was a case in the Fourth Circuit decided a few months ago, Henderson versus The Source for Public Data, about the Fair Credit Reporting Act. One of that act's requirements is that credit reporting agencies have to provide certain disclosures to consumers about background check reports and things like that.

Someone sued this company saying, "Hey, you're not complying with this. You're not giving me the disclosures that I am due." The company invoked Section 230 because it was an online site and it was using online records. The court said, "No, giving a disclosure to this user isn't a component of your publishing activity. It's a compliance obligation, like paying taxes, so you don't get 230 immunity for that." Similarly, the Ninth Circuit has held that vacation rental websites have to comply with local ordinances about getting licenses for vacation rentals and various disclosures and things like that. These are sites like Airbnb, though it wasn't Airbnb in this case, it was a site called HomeAway. They're hosting listings of third party content: users put their houses up on these sites, and the sites are essentially brokers. But the court said, "Yeah, you're doing all this stuff related to publishing, but these licensing requirements are ancillary. You have to comply with them, and Section 230 doesn't immunize you." That's step one: does the claim hinge on whether or not the provider is publishing third party content?

Then there's step two, which is: is it really third party content? The second step is usually called the material contribution test. The statute says you get immunity when you're publishing content provided by another person. The word "another" in there is very important, because the way the statutory language works, if the defendant co-created or co-developed the content at issue, then Section 230 doesn't apply. How does this come into effect? The Sixth Circuit, in a case called Jones versus Dirty World Entertainment, really summed it up: material contribution means being responsible for what makes the displayed content allegedly unlawful. It comes back to responsibility. Is the claim trying to hold the platform liable because the content is illegal and the platform published the content? Or did the platform play a role in the creation of that content, or some sort of significant role in why what happened was illegal?

To give two examples, there's a classic case from the Ninth Circuit called Roommates. It was a site for finding a roommate, and the website induced users to express discriminatory preferences about who they wanted to live with, things like race, gender, et cetera. The Ninth Circuit said that was illegal because the site specifically had prompts and buttons for people to click, and basically the users had to express these preferences. In contrast, there was another similar case against Craigslist, where Craigslist was sued for something related to housing discrimination, with people saying, "Hey, you're allowing these discriminatory housing ads to run on Craigslist." Craigslist was held not to be liable; it was held to be immune under Section 230, because all it was doing was putting up a blank text box and users put discriminatory information into the text box. Craigslist didn't play a role.

What's really key here is thinking about what exact role the platform is playing. Is it helping to create the content? Is it doing something that makes the content illegal or more illegal? Or is it just a conduit for someone else's illegal conduct? In the civil rights context, we care a lot about this, because we are focused on things like discriminatory algorithms used for advertising. There was a recent case where DOJ brought a lawsuit against Meta, Facebook's owner, over discriminatory delivery of housing ads. We've probably all seen various reports about how Facebook's advertising system can deliver ads on the basis of race, and sex, and other protected characteristics.

The Department of Justice did an investigation and brought a lawsuit saying, "You are steering housing advertisements on the basis of race, toward some users and away from others. That's illegal under the Fair Housing..." How does this factor into the material contribution test and Section 230? Those housing ads themselves are probably not illegal. It's probably a random apartment building saying, "Here are our units, come check it out." There's probably nothing illegal about those ads. Where the illegality enters is when Facebook's algorithm takes this benign content and delivers it in a discriminatory fashion, and therefore transforms what was perfectly fine conduct into illegal conduct. That is the material contribution that would defeat Section 230.

Ben Lennett:

Then in the context of the Gonzalez case, there's quite a bit of media coverage framing this as an issue of algorithms. What you're saying, with respect to algorithms at least, is that what matters is what the algorithm is actually doing. If it's something more directly related to the content, then it's a much harder question in terms of Section 230's protections. But if the outcome of what the algorithm itself is doing is clearly illegal, whether under civil rights law or other laws, then Section 230 doesn't apply.

David Brody:

That's right, yeah, that's the position we argue. It's also the position that the United States Solicitor General argued. In this case, we filed our brief in support of neither party; I think both sides here are not quite right. The plaintiffs want to say that recommendations aren't covered by Section 230, so if you're recommending illegal content, then perhaps you can be held liable for that illegal content. That's probably a bridge too far. The defendants, and amici for defendants, are saying, "Recommendations are just part of publishing. We should get immunity for all of it," which is definitely incorrect. What matters here is what the platform does when it makes a recommendation, and to be clear, it doesn't matter if it's an algorithmic recommendation or a human recommendation. I want to come back to that in a minute, because it's really important. The statute doesn't say anything about algorithms. When the platform is making a recommendation, it gets immunity for the content it's recommending. If it's recommending illegal content, it can't be held liable for what's inside the box, so to speak.

But the manner in which it makes the recommendation, it can be liable for that. Section 230 doesn't protect that. If the manner of the recommendation is itself illegal, it can be held liable. Think of the recommendation as the wrapping paper on the outside of the box: if there's poison inside the box, the platform is immune, but if the wrapping paper is poisonous, the platform's on the hook. That's one way to think about it. But I want to come back for a minute to this notion of whether it matters if it's an algorithmic recommendation or not, because this is where this case can get really dangerous. The statute makes no distinction between algorithms and other technologies. It doesn't care; it's tech neutral, which is the right way to write a statute. Let's suppose the Supreme Court is looking at algorithmic recommendations and it says Section 230 immunizes recommendations.

Well, there's no distinction between algorithmic recommendations and human recommendations. That means human recommendations get immunity, too. Section 230 applies to both providers of online services and users of those services. Consider this hypothetical. Suppose a realtor sends an email to a client, and she includes some links to houses and says, "I think you would like these houses because you are black and black people should live in this neighborhood." That's a violation of the Fair Housing Act. It's very illegal. It's also a recommendation, and it's an online recommendation. She's using email; that's an online platform that would be covered by Section 230. She's a user of that online platform. She's sharing third party content, those links, and she's making a recommendation to the recipient that "I think you should be interested in this content."

If recommendations get Section 230 immunity, that type of discrimination gets immunized, and not just that type of discrimination, but really any kind of online communication that includes some third party content and some sort of message implying that the recipient should be interested in it. Anything that could somehow vaguely be thought of as a recommendation would get immunity, and anyone with half a brain could figure out how to get almost anything under that umbrella just by using a little careful phrasing. Now you have a situation where there are no laws on the internet.

Ben Lennett:

That is one way in which the court could uphold the Ninth Circuit's decision but do so in a manner that essentially expands Section 230's protections, rather than just maintaining the status quo of the courts' understanding of the scope of Section 230.

David Brody:

That's right. Yeah, and that's something we want to be very, very careful about: we can't create a situation in which civil rights laws that have governed our commerce for 50-plus years and have been essential to integrating our society don't apply to the 21st century economy. That's a recipe for recreating segregation and redlining online.

Ben Lennett:

Let's move to the other side of this coin, so to speak, for the court. Let's say they look at the Gonzalez lawyers' brief, or maybe the Solicitor General's views, or maybe they look at some of the conversation around distributor liability being something that should exist within the context of Section 230. Do any of those outcomes concern you, either for the work that you do or for the communities that you work on behalf of?

David Brody:

My concern would be if a very narrow scope of immunity were adopted, such that lots of types of online activity fall outside the scope of Section 230. Because again, the risk here is basically that platforms could be held liable for third party content. There are different routes by which that could happen, but if they could be held liable for content that they did not co-create and in which they did not play some central role, if they can be held liable in a vicarious way without some sort of specific action, then that can have a very significant chilling effect on how these online platforms operate.

Because as I was saying before, what we have seen is that content moderation basically does not work at scale. The people who always get silenced when the dial gets turned up on the AI content moderator are people of color, LGBTQ people, religious minorities, and others who have traditionally and historically been subjected to censorship. These online platforms allow these groups to circumvent traditional gatekeepers in major media, whether political gatekeepers, economic gatekeepers, or social and cultural gatekeepers. Online, there's this great opportunity for all types of different groups and communities to find and connect with each other.

That's especially valuable for individuals who might live in a small community where there aren't lots of other people like them. If you are a family of color in a small town where there are not a lot of other people of color, the ability to go online and connect with others like yourself who might have similar experiences is extremely important. That's especially true for LGBTQ people. There's a very, very serious risk of silencing people who need these platforms if Section 230 does not offer adequate protection.

Ben Lennett:

A very sort of tough course for the Supreme Court to navigate in this particular case.

David Brody:

It is, but the fact is they already have the map. They have the chart; they don't necessarily need to reinvent the wheel here. The consensus test and framework at the lower courts has largely gotten it right. That's why the thing we really tried to emphasize in our brief, and want the court to think about, is that there's no circuit split on Section 230. The lower courts are largely in agreement. The thing we have to keep in mind is that Section 230 has never been to the Supreme Court. There are 25 years of cases at the lower courts that have hashed out lots of different difficult issues, but none of that is binding on the Supreme Court.

It can start from scratch and say, "Everything we know about Section 230 is wrong. We're redoing it this way." One of the things we really wanted to emphasize to the court is that there's a balance that's been established; not every decision is correct, but the fundamentals are strong.

Ben Lennett:

It's, I think, a very big area of uncertainty then, given the Supreme Court's discretion, so to speak, to come up with its own interpretation and ignore this other body of cases. That could be a real surprise later this year when the decision comes out.

David Brody:

It could. My prediction is that the Supreme Court is not going to say anything about Section 230 in this case. It's got a companion case, Taamneh versus Twitter, that arises out of the same facts and has the same legal claims, but doesn't have a Section 230 issue. The court could very well decide, "The Section 230 stuff is too complicated and we don't want to mess with this." They can decide Taamneh in a particular way that would also apply to this case and resolve it without having to decide the Section 230 issues. I think that's probably the most likely outcome. That's what I feel is likely to happen.

Ben Lennett:

David Brody of the Lawyers' Committee for Civil Rights Under Law, thank you again so much for your time and expertise on the case. Appreciate it.

David Brody:

Thanks for having me.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
