
The Supreme Court Decides: A Final Word on Gonzalez v. Google and Twitter v. Taamneh with Anupam Chander

Justin Hendrix / May 21, 2023

Audio of this conversation is available via your favorite podcast service.

Last week, the Supreme Court released decisions in Gonzalez v. Google, LLC, and Twitter, Inc. v. Taamneh. In this episode we'll discuss what they tell us about how the Court is thinking about social media and intermediary liability, and what they might tell us about future cases the Court may hear. I'm joined by someone who follows these issues closely, and has shared his expertise with us on this podcast before: Anupam Chander, a law professor at Georgetown University.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Anupam, you have been following this set of Supreme Court cases very closely. You were one of the people who wrote an amicus brief in support of Google in Gonzalez v. Google. Are you at all surprised by the Supreme Court's decision, or do you feel happy about the outcome, since it more or less came out the way that you would've liked?

Anupam Chander:

I'm surprised that the outcome was so unanimous. By a nine-zero vote, the Supreme Court, after reviewing these complicated cases and maybe a hundred amicus briefs between them, concluded strongly that Google, Facebook and Twitter could not be held liable for terrorism occurring across the world.

Justin Hendrix:

Digging into that 9-0 decision… there had been a lot of concern that the entire internet would essentially have to be re-conceived if the Supreme Court took a more extreme point of view in these cases. Some people have pointed out that the plaintiffs' cases were weak, and that these were perhaps not the cases that should even have come before the Supreme Court in the first place. What do you put it down to? Is it simply that, or did the justices have a bit of an awakening to the complexity of these issues?

Anupam Chander:

I think it's a lot of the explanations that you just pointed to. There was a huge amount at stake, which is why this past December I said, "I'm going to write my first amicus brief." And I put together a group including Eugene Volokh, who I knew would get the attention of the Court because he's a friend of many of the justices. So we wrote a brief that was really a textualist analysis of Section 230, trying to argue the issue in the way that this Court supposedly decides cases. Why did I spend my winter holidays doing this? Because recommendation algorithms are at the heart of the internet.

When you do a search online, you ask, "Of the millions of websites out there, tell me the one, oh Google, or oh Bing, which you recommend to me as most relevant to my query." So recommendation algorithms are what make that possible. Some people prefer a chronological feed, and that's great; I think we should have those choices. But I actually do like the Facebook news feed and the Twitter feed, even the For You feed. The TikTok feed is of course entirely recommendation algorithms. It's not based largely on whom you follow, and certainly not on the chronological postings of the people you follow.

So recommendation algorithms are everywhere. We learned in this case that they're used by Reddit, and Wikipedia is using various algorithms in this process too. Maybe not recommendation algorithms exactly, but a lot of the dirty work behind the scenes is being done by automated algorithms. And if an automated algorithm sometimes accidentally promotes something wrong, maybe some kind of terrible weapon or some self-harm content, and that leads to liability... boy, does that change the way the internet works. And that's why you saw an outpouring of briefs: from Reddit, which people think of as very much human-curated, and Wikipedia, which people think of as human-moderated and human-produced, to, of course, all the big tech platforms, including Microsoft, which wasn't being sued but still has services like GitHub and LinkedIn that it said were at risk if the case proceeded in the way the plaintiffs would've liked. So there was a lot at stake in this case.

To add to the worry, the Biden Administration filed a brief on the plaintiffs' side. So now you've got the plaintiffs and the solicitor general's office saying Section 230 does not cover recommendation algorithms to a large extent, and there was a huge amount at risk in this case.

Justin Hendrix:

One of the things that those on the other side are saying in the wake of these decisions is that they're heartened that the Court did not offer a full-throated defense of the Section 230 liability shield, and that the door is still open for a better case with better facts to challenge the tech platforms' immunity. Do you suspect that that's the case?

Anupam Chander:

Sure. So, I think there are people who say, "Look, the Court did not itself opine on 230," so the game is still on with respect to significantly curtailing Section 230, maybe saying you have to show Good Samaritan activities before you can actually gain that defense, et cetera. I think the Taamneh case really should be a significant warning to those folks. If you read Taamneh, again a nine-zero decision, there is this question of knowledge alleged by the plaintiffs. The plaintiffs allege Twitter knew that ISIS was using its services, and the Court says, "Yes, we accept that." Even if Twitter knew, the plaintiffs still lose, nine-zero. So knowledge alone is not enough to create the aiding-and-abetting liability needed under the Anti-Terrorism Act.

So it has to be a much more concerted effort: not just "I knew that you were using my tools," but "I wanted you to do so, I encouraged you to do so, I directed you to do so." Not necessarily directed, but various things that go far beyond knowing that you're using my tools. And you've got language about the oceans of content that the companies host on their services: 500 hours of video uploaded every minute on YouTube, half a million posts posted every minute on Facebook, however many millions of tweets per minute, et cetera. The Court recognized really explicitly that there would be wrongful speech on these platforms. It talked about the billion-plus users on these platforms and said, in effect, "Even knowing that there is wrongful speech on these platforms, we don't believe there should be liability."

So I think that is really going to give pause to any lower court thinking about radically rewriting Section 230. You could do this en banc; you don't have to go up to the Supreme Court. That is, the Ninth Circuit could sit en banc and say, "Our earlier jurisprudence on Section 230 is wrong. We are going to radically revise it." That's possible even today. But I think an en banc court in one of the circuits would look at the Taamneh case and say, "Look, that is a pretty strong ruling in favor of these platforms," so long as they aren't intentionally writing their algorithms to do the harmful thing, and are simply promoting content much like they might promote rice pilaf, except it just happens to be someone promoting generalized terrorism. As long as it's not an intentional act to do harm to the world by promoting harmful content, these companies aren't going to be liable under this Court's views.

Justin Hendrix:

So you've just started to address the next question I had for you, which is about what this tells us about the Supreme Court going forward. I think a lot of folks thought that Justice Thomas was the reformer, the one who perhaps wanted to take on Section 230 and look for an opportunity to at least put a chink in its armor, but that's not the case. He ends up writing the majority opinion here. What does this tell us about the Court going forward? We might see the Court, I understand, take on questions about the constitutionality of Florida and Texas laws around social media companies removing posts. What should we expect?

Anupam Chander:

So let me just pause on Justice Thomas for a second before turning to the NetChoice cases in Florida and Texas. I think all of us were taken aback in the Gonzalez oral argument, where Justice Thomas comes out of the blocks with rather difficult questioning for the plaintiffs' lawyer in the case, questioning under which the lawyer withers immediately. He asks some basic questions about the scope of what the lawyer is arguing and gets really unpersuasive answers. And I think we see Justice Thomas perhaps stepping back a little bit from the zealous advocacy for 230 reform that he had offered in earlier dissents and concurrences.

Now, I tweeted, after Elon Musk took over Twitter, that perhaps Musk's takeover of Twitter would cause conservatives to do a 180 on 230. And I almost wonder if that's part of the story. That's pure conjecture, but now you've got Elon Musk running Twitter, which is seen as the most important of these platforms for this kind of public dialogue. That changes the game. And Elon looks by all measures to be sympathetic to many of the arguments of Justice Thomas and his friends. And given that Twitter was, in some sense, the company most at risk in these cases, I think we may have seen Justice Thomas retreat a little bit and reconsider what the implications were.

Because I think there were implications of a ruling for Gonzalez, both for the left and the right, that were really going to be problematic. I have always argued that if you increase liability for wrongful speech, you are going to see platforms clamp down hard on any claims that might constitute defamation. And what kind of defamation might folks be worried about? Saying that someone engaged in sexual assault could potentially be defamatory. It could also be true, but the platform does not know. And so all the Me Too claims, which are specific enough that they might name an individual, would be hard to sustain against a legal office that says, "Hey, if that guy sues, we're going to be on the hook for allowing that to survive on our system."

Now consider a second kind of possible defamatory claim: "This police officer used excessive violence against me or my friend." Police officers always deny that. They always say there was not excessive violence. And so those kinds of claims would also be hard for a platform to continue to support. But at the same time, there are claims on the other side that might create other kinds of risks: say you're contesting vaccine information, contesting the WHO or CDC guidelines. Those kinds of claims could also be at risk, because now the platform asks, do we want to be liable because someone challenged this vaccine claim or this masking claim, et cetera?

So I think those are all the complexities that are involved in these cases. One of the things I'd just like to point out is that most of the civil liberties community in these cases sided with Google. You had the Reporters Committee for Freedom of the Press, you had the ACLU, you had Article 19, you had the Knight First Amendment Institute, all filing briefs on Google's side. There was a lot of free expression at stake in these cases, and the main civil liberties and free expression organizations filed briefs on Google's side.

Justin Hendrix:

Not that there's any reason to suspect Justice Thomas's motivations or his relationship to the wealthy these days, but that was of course conjecture about Justice Thomas.

Anupam Chander:

Absolutely. Not defamation at all.

Justin Hendrix:

But let's move on to Florida and Texas, and what we might expect in the wake of these decisions.

Anupam Chander:

Well, we were all expecting last fall that the Supreme Court would grant cert in those cases. You've got a circuit split between the Fifth and Eleventh Circuits on two social media laws that involve both must-carry and transparency obligations, and so I think the tech law community expected that those would be the cases the Court would decide to hear. Instead, of course, we saw that they granted cert in this pair of cases, quite surprisingly. I had followed these cases previously because I had written a paper on the global implications of Section 230, how 230 essentially helps these platforms become global forums by creating a safe home base for all the global speech that occurs on them. So I had followed these cases because of that, but I think very few people were following them because they were such losers on the merits.

I just want to say that in the Taamneh case, as Justice Thomas's opinion describes it, the plaintiffs' claim was essentially that Twitter, Google and Facebook were liable for all acts of terror by ISIS, anywhere around the world. That was the actual claim in this case: any time ISIS commits terrorism, Google, Facebook and Twitter must pay. So that was the broad framing of these cases. Now, you've got these NetChoice cases, where an industry advocacy group called NetChoice has challenged these social media laws in Florida and Texas. The Supreme Court has asked for the solicitor general's views on a possible grant of cert, and everyone expects that the Supreme Court will take those up. Those are going to be really fascinating cases, and there's actually going to be more division in the progressive community than you might expect. I don't actually have strong views on some of the complicated issues in these cases, because there are transparency mandates in both that I think will test the limits of transparency laws.

And so the general notion is this: could we ask the New York Times op-ed page to tell us what its criteria are, in very fine-grained detail, as to when it accepts submissions or whom it solicits to write op-eds, et cetera? Those are the kinds of interesting questions. Is this like the New York Times at all? Those kinds of analogies. Then there are the must-carry questions. Those cases, as your audience probably already knows, involve social media laws that basically say, "You can't discriminate against different viewpoints of speakers." So if you're saying, "No, we're not going to carry anti-vaccine speakers," well, then you can't carry pro-vaccine speakers either. That kind of viewpoint neutrality requirement seems highly problematic to me. It would make our information services much poorer and would cloud us with lots and lots of disinformation, and possibly hate speech, that I think we should be worried about.

Justin Hendrix:

It seems like the Court should make reasonable decisions in these cases as well, but I suppose there is the off chance they could somehow redefine what social media platforms are with regard to the public square. Is that still what could happen?

Anupam Chander:

I think a lot is in the air. There were some hints in Justice Thomas's opinion about the possibility of treating these platforms as common carriers. The thing about common carriers is that they're typically not liable for what they carry, but they're also not allowed to censor things. They're not allowed to say, "I'm not going to allow you to mail something which is Republican or Democratic," or whatever it is, either way. And so there was a hint of that in the Taamneh case. I don't think the Court will ultimately go there, because I think the risks of that approach are too high. But from my mouth to God's ears... I hope there's no need for divine intervention, but I would ask for it if needed.

Justin Hendrix:

Anupam Chander, I hope I can come back to you again and talk all things tech and the Supreme Court, and catch up when we know the answer to some of these questions a little further down the line.

Anupam Chander:

Thanks, Justin. I really love your podcast. I listen to every single episode. You guys are just amazing. All your guests are amazing and the conversations are amazing. So thank you.

Justin Hendrix:

Thank you.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
