On Monday, the U.S. Supreme Court agreed to hear two cases that concern whether tech platforms can be held liable for user-generated content, as well as for content that users see because of a platform’s algorithmic systems.
In deciding to hear Gonzalez v. Google and Taamneh v. Twitter, the Court will broach the question of whether Section 230 of the Communications Decency Act should be narrowed, and whether it still immunizes the owners of websites when they algorithmically “recommend” third-party content into a user’s feed.
To learn more about these cases and the potential implications of the Court’s decisions, I spoke with Anupam Chander, the Scott K. Ginsberg Professor of Law and Technology at Georgetown University.
What follows is a lightly edited transcript of the discussion.
Professor, before we get started, can you tell folks just generally, what area of the law you look at most closely?
So, I’ve been teaching internet law for more than 20 years, since the year 2000, and so I’ve certainly watched Section 230’s important role in creating the internet as we have it today. The “26 words that created the internet,” as Jeff Kosseff has described it, and as I described in an earlier paper, “How Law Made Silicon Valley.” It’s one of the critical legs of the stool that makes up the legal framework that facilitates how Silicon Valley operates today, indeed, its business model today.
So, the Supreme Court today has decided, apparently, to hear two cases that have bearing on Section 230, and both regard acts of terrorism. Can you give the basics for the listener who may not be aware of these two cases?
Sure. The two cases both arise out of the Ninth Circuit Court of Appeals, the federal Court of Appeals that covers the West Coast, including California. They were consolidated together for appeal at the Ninth Circuit. So, one case is Gonzalez v. Google, and the other is Taamneh v. Twitter.
Both cases arise out of similar, horribly tragic facts. So, Ms. Gonzalez, a 23-year-old US citizen, was killed in the terrorist attacks in Paris, France, in 2015. Many listeners will recall, and I recall vividly, the terrorist attacks in Paris that were staged by three men, and maybe others, I’m not sure, but the three men mentioned here in these papers.
And the plaintiffs against Twitter, the Taamneh family, are relatives of someone who was tragically the victim of another terrorist attack, this one at the Reina nightclub in Turkey. So, these are victims of terrorist attacks, or their family members, who are bringing claims, essentially, that Twitter, Google, and Facebook– which I haven’t mentioned thus far– are responsible for radicalizing the ISIS members, recruiting them, and helping them to shape their plan, plan together, and then commit these atrocities.
And, as I understand it, both cases are concerned with the platforms’ ability to algorithmically recommend content, specifically third-party content, to put it into a user’s feed. And then, also, they are concerned with some of the other affordances of the platforms– particularly, in the case of Taamneh, whether Google, Twitter, and Facebook essentially came to the aid or the benefit of these terrorists by providing them with funds, goods, or services.
That’s right. At this stage, at the Supreme Court of the United States, the critical question is whether or not the algorithmic actions of Google, Facebook, and Twitter mean that Section 230’s liability shield is no longer available. And then, in the Twitter case, whether they are culpable– the Twitter case also implicitly involves the Google and Facebook defendants as well– culpable for providing, essentially, material support to terrorism, or actually aiding and abetting terrorism quite directly.
So, the algorithmic actions of Google, Facebook, and Twitter are very much at center stage in these cases. The first case, Gonzalez, raises a direct question about Section 230. The Taamneh case versus Twitter raises the question of liability under the Anti-Terrorism Act, and whether or not algorithmic actions in this context could be a violation of that act.
So, I suppose in our conversation we’ll focus a little bit on Gonzalez vs Google with regard to Section 230 in particular, but how did we get here? What did the lower courts decide, and what sparked the curiosity of the Supreme Court to take this on?
So, the lower courts have actually been largely uniform in their decisions on these issues. There have been other cases. There’s a similar case out of the Second Circuit called Force v. Facebook, which involves Israeli citizens, some 20,000 plaintiffs in that case, who are affected by terrorism in Israel. And they are, again, making claims, in that case pointed towards Facebook, saying Facebook is essentially creating a terrorist environment in Israel, and that they are victims of terror in part because of Facebook.
That Second Circuit case predates the Ninth Circuit opinions in this case. And, in that case, like in the Ninth Circuit opinions here, the courts have uniformly decided that Section 230, as it is currently interpreted, clearly immunizes the internet platforms from liability for these kinds of claims. This is a civil action that is intended to hold Twitter, Facebook, and Google liable for the speech– for circulating the speech– of their users. That speech in this case, as alleged, is ISIS speech, but Section 230 says you can’t be held liable as the publisher of information provided by another user. And so to hold Twitter, Facebook, and Google liable in these cases would be to violate this clear 1996 statute passed by Congress in the early days of the internet.
But there are some indications that these lower courts think it may be time to reconsider that immunity?
Right. So, what you’re talking about here is not the decisions of the courts, but their various concurrences, and even dissents, saying, essentially, that the courts should reconsider their broad interpretation of Section 230 in these kinds of cases. So, in Force v. Facebook, you have Judge Katzmann– a very esteemed judge who recently, sadly, passed away– who raised this concern in that Second Circuit case. And in this case, you’ve got all three judges– the ones writing the majority opinions, concurring, and dissenting– all saying, “Hm, yes, maybe this is the right decision under the interpretation as it currently stands. Our hands are bound. This is what the interpretation of Section 230 currently is,” but they’re saying, “We don’t think it should be this way.”
Notably, you’ve got strong liberals, strong progressives– I don’t know what the right term is anymore, but people from the left– saying here, “No, 230 is too broad, and it should be curtailed.” At the same time, on the Supreme Court, you’ve got the most conservative member– or one of the most conservative members, it’s hard to know exactly– Justice Thomas, for the last few years repeatedly calling for the reconsideration of Section 230, and saying, “Hey, look, you know, this broad immunity doesn’t make sense. We need to narrow that immunity in some way.”
So, the court will have its opportunity to consider this. Do you think that there are any indications of which way this particular court might rule, in particular in the case of Gonzalez v. Google?
So, it is very hard to make a prediction. I think we know which way Justice Thomas will go, because he’s repeatedly called for a narrowing of Section 230, and here is an opportunity for that. But I don’t know whether even Justice Thomas sits back and says, “Hm. What I’m suggesting– ending this kind of immunity in these cases– is actually going to harm my friends.” And then there are the progressives on the court who are going to say, you know, “A lot of people are calling for narrowing 230. What is it going to do to speech, and what speech is allowed online, essentially?”
I mean that not in the legal sense, but in the practical sense of companies that say, “Yeah, I’m going to let that fly,” or, “I’m going to allow that to be promoted by my algorithm. I’m going to allow that to appear in other people’s feeds, which is what I do via algorithm, by automated algorithm,” because that’s the way we get our news feed. There is some algorithm that provides our news feed. Even an ABC algorithm– an alphabetic algorithm– or a chronological algorithm is still an algorithm. (laughs) So, it’s automated in some way. And so I’m not sure what it means for a computer not to be algorithmic. A computer program is by definition an algorithm. It takes commands, processes them according to a certain series of logical instructions and steps, and produces various results.
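The professor’s point– that even a “no algorithm” chronological or alphabetical feed is still an algorithm– can be made concrete in a few lines of code. This is a minimal illustrative sketch; the posts and field names are hypothetical, not drawn from any actual platform:

```python
from datetime import datetime

# Hypothetical posts. Even feeds with "no algorithm" still apply one of
# the ordering rules below, each of which is itself an algorithm.
posts = [
    {"author": "alice", "text": "hello",  "time": datetime(2022, 10, 3, 9, 0)},
    {"author": "bob",   "text": "news",   "time": datetime(2022, 10, 3, 8, 0)},
    {"author": "carol", "text": "update", "time": datetime(2022, 10, 3, 10, 0)},
]

def chronological_feed(posts):
    # Reverse-chronological ordering: newest post first.
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def alphabetical_feed(posts):
    # The "ABC algorithm": order posts by author name.
    return sorted(posts, key=lambda p: p["author"])

print([p["author"] for p in chronological_feed(posts)])  # ['carol', 'alice', 'bob']
print([p["author"] for p in alphabetical_feed(posts)])   # ['alice', 'bob', 'carol']
```

Either function takes input, applies a fixed series of logical steps, and produces an ordered result– exactly the definition of an algorithm, which is why drawing a legal line at “algorithmic” conduct is so difficult.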
So, the word algorithm here is so broad. And in fact, these narrow cases, when you start to unpack the implications, suddenly start to envelop the whole internet. So, podcast algorithms that are promoting particular podcasts… you know, hey, you might be interested in Tech Policy Press because you listen to this other great podcast, Strict Scrutiny, or something like that, right? I rely upon those all the time. But it’s a lot more than those recommendation engines. It is the way the internet works that’s at issue. So, I think that’s going to give pause, once the gravity of the changes becomes more material. Rather than simply saying, “Things are bad,” the Supreme Court has to consider: what happens after we act? What will our ruling do to the state of the world?
And I think that is going to cause some both on the left and the right to say, “Hm. I’m not sure which way I want to go on this, because, yes, I don’t think things are great now, but could things actually be worse?” And I actually think there’s… what the pandemic has taught us these past years is that things can actually get worse. You know, we can actually lose more lives in the second year of the pandemic than in the first, etc. So, things can get worse, and I think people should be cautious about being unhappy with the state of affairs and assuming that messing with 230 will then fix the internet in whichever direction they feel it should be fixed.
So, that’s a long way to come to: can I make a prediction? If I were a betting person, I would probably predict a narrowing of 230. I think that would be unfortunate in this case, and I think it will have negative repercussions for the kind of speech that I’m concerned about. And so, I actually think that people on the political left with me– progressives like me– should be concerned, because history’s dissidents have mostly been on the progressive side, and that’s the speech that has typically been first in the United States to be potentially subject to legal liability. So, I think the legal liability concerns are going to raise lots of red flags.
So, is this one of the cases, then, where, you know, a lot of activists who are concerned about speech issues, and concerned about how the internet works, will likely sort of find themselves in league, essentially, with the corporations, in this case, who have the same interests, essentially? Perhaps for different reasons.
I think it’s gonna divide the civil liberties community. You know, you will have the EFFs that will, of course, stand strongly by a free internet, and Section 230 is a key bulwark of a free internet. Now, you will also have those who say, “Oh, there’s too much hate speech online, and therefore removing 230 protections will reduce hate speech”– which would be a really, really important thing– to the extent that that hate speech could give rise to criminal or civil liability. Hate speech isn’t illegal in the United States, but if it is the kind of speech that might lead to legal liability, in certain cases, removing the protections might do that, right?
So, that’s a possibility, but at the same time, lots and lots of claims that relate to 230 are about people… So, Facebook is sued left and right by Christian conservatives saying they are being censored. It’s sued left and right by male supremacists who say they are being censored. And so, there are a lot of people out there who are going to make claims of censorship, and if you look at the history of these claims, you will see that there is a tremendous amount of speech that Facebook is suppressing– even though it’s legal– that progressives may be interested in keeping off that platform.
Let it exist on Parler or Gab, fine– I don’t like it, but, you know, if it’s legal speech, please, go ahead. Find a platform somewhere else. And I think progressives should be cautious about this, because it will lead, I think, to a lot of suppression of the speech of dissidents that we may want to promote. So, I think of speech directed against police that says, “Hey, look, this policeman was harassing me. This policeman beat me up.” Is that defamatory? Facebook, Google, Twitter, Reddit, or Wikipedia doesn’t know the answer to that question, because they don’t have the investigatory ability to determine whether or not that person actually did those wrongs.
So, in those cases, better to avoid possible liability, and certainly not to algorithmically promote that content to anyone. So, is promoting by hashtag algorithmic amplification when people search for BLM? Well, possibly, right?
So, the other obvious example here is the #MeToo movement, which depended upon people– brave women, largely– who came forward and said they had been sexually harassed, and often naming their harasser. That was incredibly difficult and only possible because the platforms weren’t going to be held liable for a defamation claim from very litigious, very rich men who had been absolutely willing to litigate and claim defamation, even though they weren’t defamed, because it was actually true.
So, I think we should be cautious on the left about the kind of speech that is ultimately suppressed. The clearest example of this– and I really want the progressive left critique to grapple with it directly– is SESTA/FOSTA. SESTA/FOSTA removed Section 230 protections for illegal sex work, basically. And that has ultimately been bad for a lot of people, making sex work far more dangerous, and it really has been seen, I think, as not successful. The sex work still happens. Now, it just happens in more dangerous ways.
If, in fact, the court does rule in the way that you suspect it might– allowing a narrowing of the Section 230 immunity– what other dominoes might fall? Might that give Congress the incentive to go ahead and maybe clarify some issues around 230? That’s been a concern of both Democrats and Republicans on some level over the last couple of years. Or do you see any other kind of legal implications? I do see that the companies are basically saying… I think Google said, you know, “This threatens the basic organizational decisions of the modern internet.” Many observers appear to agree with them.
So, let’s begin with Congress. Do I anticipate any possibility of a Congressional intervention? Congress could moot this case in different ways– interpret Section 230 in different ways, etc. And we saw that earlier with the Microsoft Ireland case a few years ago, where Congress stepped in with the CLOUD Act to moot that case. I don’t think that’s likely here, because the simple fact is that the left and the right want irreconcilable things out of Section 230.
The right blames Section 230 for suppressing Infowars, for suppressing the flag bearer of their party, former President Donald J. Trump. So, the right has some pretty good arguments there that they’ve been deplatformed– a pretty substantial set of examples they can point to, of various people who have been deplatformed from these sites– and, I think, too late. I wish they’d been deplatformed earlier, but, you know, frankly, it’s kind of hard to deplatform the president.
And, when Kamala Harris called for Twitter to suspend Donald Trump in the 2020 primary, many august persons in the media said it was small bore, it was irrelevant, it was inconsequential, and that she was making a significant intervention in speech. Kamala Harris actually had a very sophisticated suggestion. I was actually shocked by how she was belittled for it, right?
So, now people will say, “Oh, they should have been… it’s easy, you know, they should have been deplatformed,” but there was no one jumping to her defense. In fact, you know, when another presidential candidate was asked about it, they laughed and said, “Absolutely not, Donald J. Trump should not be deplatformed from Twitter”– and it was just a call for suspension.
So, the right says, “Our friends are being deplatformed.” Unfortunately for the right, their friends are spreading election misinformation, sometimes promoting insurrection against this country, and often– unfortunately, to be frank, you know– committing hate speech against minorities, and women, and gay people, and lots of other groups, okay?
On the left, the argument is that these platforms have tolerated right-wing speech far too much, and we have too much of this hate speech that they are trying to gin up, to make us more extreme, to make us, you know, become right-wing fanatics that are out to hate everyone. And so, they blame the rise of a fascist right on social media. I don’t see how Congress then comes in with a solution that satisfies both the view that there’s too little speech online because of Section 230 and the view that there’s too much speech online because of Section 230. Those are irreconcilable positions. And it’s hard for Congress to intervene in any useful way.
So, I think the answer is no. I don’t think there are ramifications along those lines for Congress. But I have to say, this has huge ramifications for the internet. Search engines are algorithmic amplification of various speech. They promote certain speech online. They find various things that happen at one site, and they say, “This site is what you want more than this other site.”
Now, imagine if you’re liable… hey, that search led you to how to do some terrible thing, because that content is findable online. That makes it very hard to run a search engine. Let’s imagine it’s even just copyright infringement– how to break DRM. Now, am I liable because I taught you how to do the Google search, or the Bing search, or the, you know, DuckDuckGo search? And search is everywhere. It’s not just within Google, etc. Lots of companies do searches on their own websites, and if they start promoting things on their own websites… Section 230 has been relied upon by companies large and small. The smallest companies and the largest companies online rely upon Section 230, as long as they allow commentators– as long as they allow users to say something, even something as simple as commenting on a product. And users, clever as they are, will always use that commentary space to do hilarious things, brilliant things, but also, sometimes, horrific things.
Is there anything else that I didn’t ask you about, or anything you feel that you wanted to get across that’s important about these particular couple of cases?
So, the Supreme Court, in a case called Reno v. ACLU at the dawn of the internet age, said the internet is a forum for a true diversity of opinion– “From the Balkans to the ‘Buls, this is going to allow speech”– and they wanted to make sure in that case that a provision of the statute that 230 is part of didn’t harm people’s access to the internet. They wanted the internet to flourish. It will be fascinating to see how the Supreme Court returns to this question now.
The internet has matured, certainly, but at the same time, it still relies upon that main liability shield every day. And so, once that liability shield is lowered for the automated work these services do, which is everywhere– we have automated spam filters (and you saw the Republicans complaining about spam filters removing too much Republican speech), automated news sites, and, increasingly, AI– those kinds of services become much more complicated to provide, because now you have to face lawsuits. And the issue isn’t whether or not the lawsuits win. It’s that they cost money to defend, and defending against lawsuits means that companies will often settle rather than fight, even if the plaintiff’s claim is not meritorious.
So, what Section 230 has done is allow these companies– small and large– to avoid a ton of lawsuits, and removing 230 protections is going to lead them to be more conservative in an old-fashioned way: not allowing speech that is potentially risky. And that will include speech on both the right and the left. There’s a lot of speech on the left that is legally risky and that we should want to protect online.
And so, it would be hard to say, “Oh, it’s only the AI,” or, “It’s only the algorithmic recommendations,” because once that speech is online, it’s going to be algorithmically recommended. So, it’s not just the algorithm. It’s the speech that is very much at stake here. And so the way to avoid being liable for algorithmic recommendation is to not allow that speech in the first place. That’s what’s going to happen. You’re going to see a lot more of what I’ve called the “Disney-fication” of the internet. Everything is happy. Everything is good. You know, no one’s doing anything bad. Everything’s coming up roses.
A sterilization or a sanitization, perhaps.
That’s, that would be my concern, yeah.
Well, I appreciate you explaining this to me, and I hope that we can come together again and talk about it– perhaps when a little more is known about the timeline, or further down the line when there’s a decision.
Thanks so much, Justin. I love your pod. It’s great. Thank you.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.