Unpacking the Oral Argument in Murthy v. Missouri

Dean Jackson, Justin Hendrix / Mar 24, 2024

Audio of this conversation is available via your favorite podcast service.

On Monday, March 18, the US Supreme Court heard oral argument in Murthy v. Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts, St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.

There is reference to an analysis Klonick produced after the oral argument, as well as an essay Perault co-wrote with Katie Harbath for the Knight First Amendment Institute on their experiences interacting with the government while in public policy roles at Facebook.

What follows is a lightly edited transcript of the discussion.

Dean Jackson:

I'm here with Kate Klonick and Matt Perault. Kate, Matt, thank you for being here. Why don't we start out quickly: could you each tell us a little bit about yourselves for our listeners?

Kate Klonick:

I am associate professor of law at St. John's Law School and a fellow at the Brookings Institution, the Yale Information Society Project, and the Berkman Klein Center. I work on online speech and internet law, and I was listening to these arguments from Paris, where I'm doing a Fulbright for the year.

Matt Perault:

And I'm Matt Perault. I direct the Center on Technology Policy at UNC Chapel Hill, and I also do some consulting on tech policy issues. And I was jawboned for a little more than eight years when I worked on the public policy team at Facebook.

Dean Jackson:

I want to get into that. I want to get into what it's like on the pointy end of the jawbone.

Matt Perault:

Every day. Both parties, every day, domestic, international, lots of jawboning.

Dean Jackson:

We'll start at a sort of higher level, and then I do have a few questions for you about that. But one big-picture thing I took away was that multiple times throughout the arguments, the justices, as they often do, reached for a metaphor about the thing they're discussing, and the one that seems to be sticking for social media platforms is newspapers. And it has me wondering in what ways this is or isn't a suitable framework for understanding social media and its place in the public square.

And Kate, I know you have done some writing on this; you referenced an earlier piece of yours in your recent newsletter unpacking and summarizing the arguments. Is this a good metaphor or not?

Kate Klonick:

Well, it's certainly a good metaphor for the platforms, and I really think that you have to think about it in that context. All humans reason through analogy, not just judges and lawyers, although judges and lawyers really make it their stock in trade. But what you saw, and I know that we're going to talk about Murthy, is that the comparison to newspapers was really relevant. It was relevant in the Murthy oral arguments, and it ends up being really relevant also to the NetChoice arguments that we heard a number of weeks ago.

And in the context that we're talking about, the analogy to newspapers is this: essentially, this is like the First Amendment Supreme Court case Miami Herald v. Tornillo, in which the Supreme Court basically said no, the Miami Herald, if it publishes an op-ed from a Republican, does not have to publish an op-ed from his Democratic opponent; there's no mandatory must-carry right that it has to enact. And the Supreme Court in Tornillo recognized the First Amendment rights of newspapers, essentially, to curate their own page, to create their own experience, to have their own voice.

This is a slightly separate First Amendment test from other kinds of First Amendment tests, because you're giving a special privilege, a right, to a certain industry, in this case the press and journalism.

But if you extend that to social media companies, that's a very strong argument for social media companies not being regulated. Any type of regulation that comes down on social media companies, any type of must-carry, any type of conversation between these companies and the government, is really just about them deciding on their own how they're going to control the flow of information to the public and how they're going to package it and present it, which looks a lot like a newspaper.

So the fact that this was coming back and forth... I'm glad you mentioned this, because it was absolutely the dominant metaphor in Murthy. Even Justice Alito was using it, and he has been so skeptical of platforms and of calling them First Amendment actors on their own; he hates the platforms with the passion of a thousand suns. He was out there being like, "So, they're like a newspaper. It's kind of like the Pentagon Papers..." All of this kind of stuff. I was like, okay. That's interesting, man.

That's going to be... how are you going to hold that in one hand and then decide NetChoice in the other and not hold this up? I was on a chat with a few people, and we were all just, "Huh, okay. That cuts really well for the platforms, hopefully. But we'll see."

Matt Perault:

So Dean, you can hear that one of the people you invited to join you for the podcast today is a kick-ass law professor and one is not. I'll make it obvious that I'm the one who's not. I actually had a First Amendment hornbook by my side, it's within reach right now, because I never took a First Amendment class in law school, and at some point over the summer I was like, "You know, I should learn the First Amendment. That would be a useful thing."

So, Kate is an expert. I'm still getting up to speed, but I think newspapers are generally a poor analogy for tech platforms. This often comes up in the Section 230 context, for instance, where the analogy is often made that newspapers are liable for defamation while social media companies are protected by 230, and isn't that unfair?

And I think they fundamentally do different things. Tech platforms primarily host user speech. That's not exclusively what they do, but they primarily host user speech, and newspapers create content, so they're doing fundamentally different things. I do think Kate is right about the history: newspapers have their own First Amendment rights, and applying that jurisprudence to tech platforms would suggest they too have their own editorial rights, that they can come up with their own content moderation policies and enforce them as they want to, and that the government doesn't have the right to control the editorial discretion of tech platforms. And it seems likely, though we will see, that the Supreme Court will agree.

I think in this particular case, the jawboning case, it's exactly what you said at the outset, Dean. It's the Justices reaching for something that they really know. You actually heard Justice Kagan saying, "When I worked in the executive branch, I used to call up newspapers all the time and I would tell them that the thing that they had written was factually incorrect."

I think in that way they're somewhat similar. It's a little bit different, because platforms host users' speech and newspapers create it, but I think what she was saying is, "Look, this is a kind of communication that..." As I said jokingly, but I also think it was true in my experience. She's saying, "This is a type of communication that happens every day. The government is making its case in some way with private entities around issues that relate to speech." And so I think that's why it kept coming up.

Dean Jackson:

If I can pose a follow-up to both of you: I find myself both drawn to and repelled by the metaphor to newspapers, because I do, on the one hand, firmly believe, and I think this is a pretty consensus opinion, that platforms have editorial speech rights.

On the other hand, as you said, Matt, if the New York Times is 99 percent news content and 1 percent user-submitted op-eds, Facebook is the opposite. And also there are many more newspapers than there are platforms, fewer newspapers than there used to be, but still many more. And so the market concentration is also a distinguishing factor.

But another case I've been watching is the one around the California Age-Appropriate Design Code Act, where there's this sort of distinction between things platforms do that are editorial and things platforms do that are functional or commercial, and which of those can government regulate or not? That seems to me to be really muddied in this metaphor. Am I wrong about that? Is there a need to tease those things out?

Kate Klonick:

One of the things that I think makes social media a particularly complicated thing to regulate is twofold. One is that to host pages, to display them on your screen, to engage with human existence via the internet, at least for right now, until we can download it directly into our brains, uses the interface of speech. So it just naturally falls into a giant hole in the law, one that we're not really great at understanding how to regulate in the first place and that we have struggled with for all of American history, in particular around the First Amendment. So that's one thing about this.

And the second thing, to your point, is that there is a sliding scale of product versus speech distinctions, of platform versus speech. Is Grindr, for example, the dating app, a speech platform, or is Facebook a speech platform?

And I would actually push back on how you're counting the platforms, if you're counting giant platforms, like Facebook as one giant platform, against the number of newspapers. I understand that's how you're counting, but I don't think that's actually how we experience the internet. On Facebook there are probably hundreds of thousands, if not millions, of groups and chats and various ways of interfacing and finding news and information, and they are very varied. So there is one Facebook, but there are many different parts of Facebook, and the same is obviously true of Reddit, and of YouTube and its channels, and various types of things.

So I actually think that the media ecosystem is way more diverse online than it would be for a straightforward newspaper. But I just want to point out that every time there's a new technology, we have to struggle to compare it to old technologies, and there's never a perfect overlay.

I'm sitting here in France, and people keep comparing social media companies to... why can't we just give government subsidies to niche industries, like the French do with the film industry? And I'm like, that's adorable, but that's not going to... I don't even know how that would work. That's just a very different way of regulating media. But everyone is sitting from their own perspective, carrying their own kind of backpacks into these questions, and their own metaphors. And I think Matt is a hundred percent right about the newspaper metaphor: it's part of a generation in which they got two newspapers delivered a day, one in the morning, one in the afternoon. They're just searching for a way to understand this thing.

Matt Perault:

I think the analogy that Kate's making to her experience living in Paris right now is really interesting, 'cause I guess the macro issue that we're dealing with in all these cases really is how much can the state influence technology and media? In France it might be common, so a large government investment in a media organization to try to displace competition from non-state-supported entities might be totally fine. We certainly have some of that in the US with PBS and NPR, and we have a strong tradition of public broadcasting here. But I think it's different in the New York Times, Wall Street Journal context, and different in the Facebook, YouTube context. I think Kate's right to bring in other platforms and think about the functional elements of them.

My tendency, where I tilt on all this, is that more separation between the state and expression platforms is a positive thing. That's good in the NetChoice case, for example, where we want platforms to be able to develop, ideally, a wide range of content moderation practices, which means that they can compete on that.

So if you don't like the content moderation service that you're getting from one platform, and someone else is able to offer a competing alternative with a different way of moderating content, you can make a choice to use that alternate service. And I don't think, just because it has to do with expression, that the state should be able to completely eliminate a platform's ability to decide how it wants to enforce its own editorial policies.

I think it's also true in the Murthy context that we don't want the state to be able to achieve, in a private, behind-the-scenes, bullying, jawboning type of way, what it can't achieve through either executive action on the regulatory side or lawmaking on the congressional side.

Dean Jackson:

So Matt, maybe you can lead on this next question, since you have been on the receiving end of the jawbone. This case is ostensibly about, as you said, whether or not government coerced social media companies, but in a lot of the arguments it seemed like the platforms' speech rights and policy stances and preferences didn't really center in the conversation, outside of maybe some of Justice Kagan's questions.

The arguments tended to focus more on the effect on users, on the plaintiffs whose suit Louisiana brought to the court, and that struck me as, I don't know, maybe a missed opportunity.

Do you think it's significant that for the most part the questions bypassed that relationship that you just outlined between the state and the platforms and focused instead on the impact on members of the public?

Matt Perault:

Yeah, I mean, I think the impact on members of the public is ultimately what we should care most about. We should care most about how users are able to express themselves on a platform. And if the state interferes with platform editorial discretion in a way that increases censorship, or forces platforms to carry certain speech in a way that would be problematic for their user base, I think as users we should care. That's problematic because it runs afoul, if not of the First Amendment as a strictly legal matter, then of First Amendment principles, and I think we should care about that.

To be super clear about that, I'm not saying this case should be decided on a wishy-washy interpretation of the First Amendment. I thought that where the Justices were pushing seemed appropriate to me. In lots of contexts, I think the rigor of the legal process is a really important and powerful thing. You need to show standing, you need to show causation, you need to show that you meet or fail a particular doctrinal test that has been developed over decades. And so I think it's really important that the Justices were pushing in that direction here.

And based on that, and based on what I've read from Kate's really good analysis, I think we both agree that it's likely that the Biden administration will win. That seems appropriate to me based on what I've heard in the arguments and my very limited understanding of the First Amendment. But I don't think that's a positive thing.

The case may be correctly decided, and it also may be the case that we should be concerned about jawboning and would want government officials to interact with private, speech-oriented platforms in a different way. There's an infinite number of things that are lawful but that you shouldn't do. I can engage in hate speech right now, and the government can't restrict my ability to engage in hate speech with respect to either of you, but should I be using that kind of rhetoric in this kind of conversation? I think that would be wrong and problematic for a whole bunch of different reasons.

And I think in this case it might be that the government didn't behave unlawfully. It's not a violation of the First Amendment, but is that what we want the government to do? I think the answer is no, and I think it's no, not just because of the specific equities here. We might be sympathetic to the Biden administration, we might be sympathetic to the idea that there was problematic information on these platforms, but trying to restrict the government's ability to behave in a certain way here won't just bind the Biden administration, it'll bind future administrations.

And as of today, according to current polling, it looks like if the election were held today we would have Trump 2.0. I think there are reasons to be really seriously concerned about the stance that future administrations might take with respect to online expression vis-a-vis platforms.

Kate Klonick:

So I agree with all of that. I would just point out, too, to put kind of a First Amendment gloss on this: I would say the best-case scenario coming out of this for First Amendment doctrine is better, clearer rules around what crossing that line entails, the line that Matt is talking about, which is a very hard line to draw. On the one hand, it happens all the time. It happens all the time. The government calls people up and screams at them all the time to take things down. Okay. Also, maybe not great in certain contexts.

These are really hard types of rules to draw. And I think that the case that guides this is Bantam Books, in which essentially you had very clear what's called traceability. You had a very clear line of causation between police officers showing up and threatening prosecution if a bookstore didn't remove certain types of books from circulation and from sale.

And that happened because of the police officers' actions. What you saw over and over again in the Murthy case was the Justices saying, "There is no traceability here. And if there's no traceability here, that means there's no injury; no books got taken off the shelves because of these conversations with the 'police officers.' And you're grasping at straws to even say that some of this kind of blustering from government officials to the platforms necessarily rises to that level." We've all had tons of conversations like that.

Now the hard part is just going to be how they're going to write a new test, if they decide to make one. But if I'm being perfectly honest, if I were trying to decide this as narrowly as possible, which I imagine they're going to do, they're going to toss this on standing.

This is going to go the way of: we just can't even reach this issue. And who knows how much they'll redraw that test, because if they toss it on standing grounds, there's not a lot they can add. This pretty much fails on standing grounds under the Bantam Books test as it's currently written.

So I don't think, unfortunately, that we're going to get much new law on this, and this is going to continue to be a gray area for everyone in any area of the industry that interacts with government and faces the threat of someone from the White House or a senator calling them up.

Matt Perault:

I agree with that assessment and I find it very dissatisfying. Not in a way where I would say, "I wish the Justices would behave differently," but because it's an unsatisfying result to not get more guidance, and I think the test is really unsatisfying. I don't know the difference between persuasion and coercion when it comes to the government. There are lots of-

Kate Klonick:

Well, and no one does, right?

Matt Perault:

Right. So it's a bad-

Kate Klonick:

They're literal synonyms.

Matt Perault:

Right. And I also think they're very close to the same thing in the general world, but we're not talking about the general world. We're talking about an actor that has the ability to levy massive civil fines, to seek massive injunctive remedies, to do a lot of bully-pulpit-type stuff that can be really problematic for companies. It's not good for companies when they are compelled to testify in front of Congress, for instance. That is an easy thing that lawmakers can do, and they have shown that they will do it when they're concerned about these issues. And aside from fines and injunctive relief, they can impose criminal penalties and initiate criminal prosecutions.

There's a serious power dynamic here, and I understand the social media companies at issue are not powerless, but there is a different level of power when it's the federal government calling and putting pressure on a company, or trying to persuade in a way that gets close to coercion. And we know, in lots of contexts in criminal law, that how we think about "voluntary confessions" and the like is informed by the power that the government has.

So this line to me seems potentially correct and also absurd, and I thought the hypotheticals that the Justices were talking about made that absurdity clear. You can say, "Hey, I think a certain thing about this speech..." and that seems to clearly be on one side of the line. And then they were asking questions like, what if the government official says, "And please take it down"? Does that all of a sudden move it to the other side of the line? And what if, instead of saying, "And please take it down," they say, "Take it down now, you X, Y, Z," and use obscenity or whatever to make their point clearer? Does that cross the line?

And it feels to me to be absurd. The test that I came up with in my mind, which maybe is also absurd, is more between education and information on the one hand and coercion/persuasion on the other, because I think the Justices were making very valid points that the government can share information. It's really useful for companies to hear and understand, and I think lots of times companies receive it that way. Yoel Roth wrote a great piece for the Knight Institute about these issues, talking about the dynamic between the FBI and Twitter when he was at Twitter, and he was saying a lot of these interactions are benign and positive: the tech companies want to learn about problematic stuff on their platforms, and they can hear about it from the government.

So I think education and information seems positive. There's something else that gets us into a place that I think most people would agree feels somewhat uncomfortable. And the question is do we have a legal test that separates out those two things? And I think probably we don't.

Dean Jackson:

You've given me so many different lines of questioning to pursue. I'm going to try and combine two of them. The first is, Matt, you brought up this Knight First Amendment Institute convening that we were both at in October on this very topic, and in that convening there was a lot of discussion about the nature of jawboning and this question of traceability, right? Because on the one hand, it's very concerning that you're not able to trace any government communication to an alleged injury on the part of a speaker. There are no dots connecting those two things in the evidence presented, and the courts called them out on that. It's concerning to me that the case got that far without that link being bright and clear.

On the other hand, at that Knight convening, there was a lot of discussion of the way that, if I'm a government affairs person inside a corporation, my job is in some ways never to get that email from the government. My job is to anticipate, to intuit where the government's lines are and to be responsive to those. And so for a legal test, is the right test that A led to B? Or is there a sort of atmospheric effect of government communication and pressure on platforms, where just through the frequency and tenor of conversation, I understand that there's a risk of being in hot water, even if there's not a direct exchange of demands and compliance?

Matt Perault:

The legal test needs to be a serious, narrow, rigorous one, for the reasons that Kate explained. I don't think you should overturn standing doctrine just because the issue here seems like one we feel uncomfortable with.

I think standing doctrine should apply here. But I do have this strong feeling that it signals why sometimes the law is not sufficient to address any number of different things in our society, and that's not because of something that's problematic about the law. I think we should just have realistic expectations about what it can achieve.

The job of the court here is to assess whether there is a violation of First Amendment law, and I think it is appropriate for that to be a rigorous and narrow inquiry. But there are lots of things that are disfavored or problematic and also lawful, or where courts are going to say, "You can't do X, Y, Z," or, "We can't take this on," but that doesn't mean it's okay.

You raised the child safety issues in California as one example. I think that's a perfectly good one. I think there are problems with the court decisions that have pushed back on the Age-Appropriate Design Code in California, but I also think there are important, very legitimate First Amendment issues with that law, and it seems appropriate, from my standpoint, that various provisions of it should face serious challenge and serious scrutiny in court.

I don't think the correct response to that is that now we shouldn't care about child safety online because a court said you couldn't do it this way. There are an enormous number of things you could do to address that issue, ranging from other attempts at legislation, to other forms of litigation, to digital literacy, to company best practices, to public conversations about how to manage kids' experiences online. So I don't think the end result should be, "Oh, we have a decision and therefore we should walk away from the issue."

And my guess is... my hope is that's how we would treat a decision in Murthy: that it's not the end of this conversation, but that we understand clearly, hopefully clearly... Kate and I, I think, are both skeptical about how clearly we'll understand it... but we understand something about what the legal doctrine is, and then we can say, "And also, here's what we've learned about this issue, and here's how we might approach it."

Dean Jackson:

I do want to ask, and maybe you can take up this next question, because the amicus brief that Knight submitted for this case I think gets to some of the points you just made, Matt. It's broken into three sections, and the third section argues that judicial remedies may never be enough to address this problem. There probably is not a legal test that is going to be clear and straightforward, that will help you sort things into either a persuasion or a coercion bucket with a hundred percent satisfaction rate.

Are there things other branches of government could do, though? Is there something Congress or the executive branch could do that, from a process perspective, might give us more safeguards against the types of interactions that make us queasy, while preserving the government's right to speak and the beneficial nature of the government's ability to provide information? Is the judicial arrow the only thing in our quiver here, or is there some other part of government that might be better suited?

Kate Klonick:

I actually don't know. That's a good question. From an institutional standpoint, most of the hard lines on the First Amendment are drawn by the judiciary. But interestingly, one of the things that Matt points to, and that is often talked about, is that part of what bothers us so much about jawboning is a lack of transparency. So the extent to which some type of transparency initiative around these types of conversations, and when they happen, could be mandated, that's a possibility. I don't know if it's the most effective way to go about this.

Yeah, Matt, I know that you wrote about this a little bit for Knight, so if you want to go ahead and take that up, that'd be a great idea.

Matt Perault:

I think the Knight brief is awesome, and it was wonderful to see it cited in the argument; I think it was cited more than once by the Justices. The section that you're referring to I thought was really strong, but I actually thought its focus was in a different direction than where I would hope it would be. The focus of that third section was on corporate concentration and how that could distort the speech landscape in potentially problematic ways.

And all of that may be fair enough, but I think the topic of jawboning is on the government side. How is the government circumventing the democratic process to shape speech on private platforms? And so I think remedies that we should think of to address that should be directed at government. There may be any number of other things that we want to do with respect to private platforms, but for jawboning, I think the appropriate focus is more on the public sector.

One thing that you could actually do, in the more best-practices, what-are-the-right-norms-here vein, would be an executive order from the President giving guidance to officials in the executive branch about how they should engage with private platforms. That seems perfectly appropriate to me. There are probably other ways that the President could provide that guidance aside from just issuing an executive order, but I think it's helpful to lay out some guidance on how officials throughout the executive branch should conduct themselves.

The thing that I wrote more about for Knight was sort of process-based remedies. There are a few different ones that I had in mind. One: there is currently a process for reporting ex parte communications when the government is lobbied in an inappropriate way during a rulemaking process. So if the FTC is engaging in a rulemaking or some other decision-making process and a private entity tries to engage in ex parte communication, the FTC can report it publicly.

And we actually saw this at some point in the last year: one of the FTC commissioners did this with respect to some communication that he received. And you can imagine the reverse process in a jawboning case, where you have a government official inappropriately reaching out to a private entity, a platform, and the private entity just reports it. It's not saying the communication is unlawful; it's just reporting it in some form.

I think there are lots of issues with that. The main one is that private entities wouldn't do it, and they wouldn't do it for the very reason that jawboning is problematic: it opens them up to potential retribution from the government. They may not want to call attention to this because they might not want to see what the government does in response.

There are other process-based remedies, and you guys may find this laughable. I'd love your thoughts on it. But the two process-based approaches that I think currently work, and this is maybe the laughable part, are the DMCA process, the Digital Millennium Copyright Act IP takedown process... Kate is rolling her eyes and raising her eyebrows because of how fraught that process is.

And then, on the other side, the ECPA-oriented process: how does the government get information from companies in law enforcement proceedings? The reason for the eye rolls, and Kate, you should explain it in more detail, is that both of these are fraught. No one really likes either of these processes. On the law enforcement side, the government is frustrated it can't get more information in law enforcement cases, and civil liberties groups think those authorities are too broad. On the IP side, rights holders hate it and free speech advocates hate it: rights holders think the IP process lets copyright-protected speech get out into the universe in ways that it shouldn't, and the free speech side thinks the constraints are too significant.

But we sort of muddle through with both processes, and I think muddling through might be helpful in a jawboning context. On both the IP side and the law enforcement side, there are legal guardrails on the processes, there are formal requests, and there are clear, transparent expectations of how platforms and the government will interact in those cases. There are mechanisms for resolving disputes. And so I do think that if we had more formal channels for these communications, it would help in a whole bunch of ways to provide rigor, transparency, and a little bit more public accountability for this kind of conduct.

Kate Klonick:

I mean, I was making a face, but only because, as both a former copyright scholar and a free speech scholar, I do think it ends up having a lot of administrability effects. But I'm generally all for trying something and hoping that it works, especially if it's going to be something process-oriented and governance-oriented as a solution, rather than continuing to muddle through with bad policies in an ad hoc way. So I am not opposed to this. I guess I just don't actually hear the DMCA held up as a way that anything "works" particularly well. So that was a little bit of my pushback. But you're not wrong, the way you qualified it. I think that's actually a nuanced take on it.

Matt Perault:

Yeah, I think no one likes the DMCA. Part of that is because when you see a process, you critique it, and the DMCA merits critique. I understand the critiques on both sides, and I think if we had a process for jawboning, it would satisfy probably no one and it would merit critique. But right now we have no process, which means a low-level bureaucrat working at the White House, or at DHS, or at HHS can put pretty significant pressure on tech platforms, not just the big ones but the small ones, to change their content moderation practices. And that can occur outside the public eye, and it can probably occur in a way that's consistent with past Supreme Court jurisprudence and with, I think, however the Court resolves the Murthy case. And that should feel unsatisfying.

This should not be a no-guardrails, no-process universe where the government can behave however it wants. And tech platforms, to be clear, can behave however they want. They can decide, for a whole bunch of reasons, some good, some not, to take action or not take action in response to government pressure. I just think the no-guardrails world that we're in is not the optimal one.

Dean Jackson:

Listening to both of you, I think it's interesting that you've raised two processes that have parallels to what's alleged here. My first thought went more to the international human rights field and the disclosure of government requests for user data, and attempts to catalog those in a transparent and publicly accessible way.

Matt Perault:

But that's because of ECPA. That's because ECPA exists; at least, that's the starting point. Companies are not... if India requests content from Facebook or Google or Snap or whatever the company is, that company is not permitted to respond by providing content.

It is prohibited by US law, and US law outlines a process for that. The Indian government has to actually go through a diplomatic channel, typically the MLAT channel, the Mutual Legal Assistance Treaty channel, and then get an order from a US court.

And so all of that provides rigor, and importantly, I think it also provides a process that results in being able to track requests. If you are at a tech company, unless you work on a specific team, you are not permitted to touch a government request for user data, because it has stringent legal guardrails attached to it.

So there are specific people who are tasked with that. That means they track the requests meticulously, and then it can be part of a transparency report. If Kate and I went to any company right now and said, "Okay, today can you just tell us the number of jawboning-related requests you got? Requests from a government official to alter your content moderation practices?" Companies would start laughing. There's no way to track them currently. Tons of people within companies receive them, tons of people on the government side make them, they make them in various different forms, and it's not even clear what exactly qualifies as a request.

Kate Klonick:

Some of them now, with the EU, are required through trusted flagger programs, right?

Matt Perault:

Yeah.

Dean Jackson:

I think what's compelling to me about the process approach that you've both held up, though, is that it's not totally reinventing the wheel, right? We have this perennial problem of how third parties attempt to get platforms to do things, and none of the processes seem to make anyone terribly happy. But in a way, maybe these parties have irreconcilable interests, and so a compromise process in which nobody is happy might be good process. Is there truth to that?

Matt Perault:

I do think, with the status quo, which I feel so uncomfortable with, it's not that no one's happy; it's that people are happy when they like the outcomes. And that's what I find so dissatisfying. Take the headlines about this case that I saw on PBS, I was watching the PBS live stream of the argument, and on NPR: the way they framed it, I think, was very sympathetic to the government conduct here.

"Can governments take action on disinformation online?" I think, was one of the framings. I think what is going on here is much more pernicious than that. It's really: can governments interfere with the speech decisions made by tech companies? How do we know what's interference? Where do we draw the line?

And I think, I don't know for sure, but I think if this were about the Trump administration putting a lot of pressure on tech companies related to conservative bias, for instance, or related to any kind of Trump-related speech, many people on the left would find it incredibly undesirable and would find it very problematic for the President to be interfering to try to effectuate those speech outcomes, and the headlines would be different. And I just don't think that's the world we want in terms of policy best practices, where an administration that we might be sympathetic to takes action in a way that we find to be sympathetic, and we think that's totally fine.

And then an administration that we have a lot of skepticism about, taking action that we disagree with, engages in very similar conduct, and that's all of a sudden deeply problematic. That is the world that I think we're in, and I think that's not a great one.

Dean Jackson:

We don't have to talk about this in the hypothetical, right? We have the real-world parallel in the Trump administration. We have Mark Zuckerberg and Trump on the phone talking about how Meta is going to respond to his posts during the protests after George Floyd's murder. We have Trump drafting an executive order on Section 230 protections after he's unhappy with Twitter, right? We've already seen what that looks like. And to me, if this is worrying, that's a five-alarm fire. You've got the President in private conversations with an executive, then drafting executive orders directly coming off of those interactions.

I think it's exactly right that we need process around this, because it's concerning no matter who does it, because government powers will be wielded by different actors for different ends at different points in history. But the parallel only takes you so far, I think, because there is a difference between trying to get a sense of platform policies around COVID and, if what the government says it's doing is true, trying to get platforms to enforce their own policies, as opposed to the smoking gun we already have for how a second Trump administration would probably approach these things.

I think we'd be remiss not to talk about the fact that we're having this conversation, which I think is a really valid conversation about a really valid issue, because a sort of doctored narrative made its way to the Supreme Court through this lawsuit.

We've alluded several times to the Justices chastising the Louisiana Solicitor General for a brief that really cherry-picked parts of the factual record to create a narrative of pervasive coercion, where the real picture is much grainier, even if you can still find concerning examples within that grainy picture. How should we think about that: the fact that the agenda has been set by an inaccurate portrayal of fact, and the ongoing consequences of the case for the election and for social media regulation, regardless of how the Justices decide, going into the next year?

Kate Klonick:

So this fall, I was invited by Reason, or the SoHo Forum, to do a debate with Dr. Jay Bhattacharya, who is one of the main people named as a plaintiff in this case, and I think I was actually quite naive. This is a statement about my own media ecosystem, the people I spend time reading and the conversations I have. I was aware of where Bhattacharya came down, and that he believed he'd been censored by the government... through the government... through Twitter, by the government, through this entire kind of mechanism. And I don't know, I am just a very kind of boring civil libertarian. I have a very middle-of-the-road... I don't know. I basically made the argument in the debate that he didn't even get any of his speech taken down, that he had been removed from Twitter's trends panel.

Twitter just wasn't actively promoting him in some way, and he'd been put on a "blacklist," and he just kept waving around this word, blacklist. "I was blacklisted." I'm like, they just called it a blacklist. They don't trend certain types of words. You were just getting marginally deamplified. Where is this injury? And I bring all of this up because, to this point and to Matt's point about how this is outcome driven and not process driven, I just don't actually think that we have a very good handle on the type of people and communities that are driving this type of fringe litigation right now.

I think that if you've studied mis- and disinformation for a while, you do, unfortunately, but I don't spend a lot of my time in those communities doing that kind of stuff. And so I went into this debate thinking, how much do I actually disagree with this guy? Probably a little bit, especially in this instance, but I generally think that jawboning can present a problem and that we should create some processes for dealing with it.

And I was just very surprised. Literally, in their questions, people were baring their teeth at me out of anger: "Why do you love censorship?" And I was like, "I don't. Next question."

Dean Jackson:

Why do you love censorship, Kate? That's my next question.

Kate Klonick:

I don't know. Anyways, I was like, "I'm here for the cookies." But my point is twofold. This is coming from a position of motivated reasoning, in the cognitive science sense. So process is not going to solve that problem and make people happy; Matt is completely right. But what's really frustrating is that we have lots of evidence that people do like procedure. Tom Tyler's work on this is fantastic, and there's a lot of actual empirical evidence that people prefer procedure, their day in court, to an outcome-driven kind of basis. But not when you have an entire conspiracy theory that the entire court system and the entire government and all of the people that run it and all of the people that run the platforms are conspiring against you in some way.

There is no amount of process that can overcome that. And so to Matt's points, to these kinds of sane points, I say, "Yes, I think we're sane, and these are good things. I would take these any day of the week." But, a little bit disappointingly, we're just not having the same conversations with the same facts and the same reasoning. And I'm not trying to be derisive or elitist about that. It's just that they see the world in a completely different way, and that's not copacetic with the First Amendment doctrine that I, and hopefully the majority of Justices, see being defended here.

Matt Perault:

I didn't know until I was looking at the case more closely how problematic the factual record is, and it does seem like it's genuinely problematic. I think that's unfortunate here. A lot of people have jumped up and down and pointed out how challenging that was for the state attorney and how bad it looked for him to get criticized by the Justices in the argument on those grounds, and I don't disagree with that, but I think the whole thing's unfortunate. Jawboning occurs. This was not the right factual record, I think, to make that clear and to make the standing point as clear as possible. But I don't think that's because this doesn't exist. I don't think that's because you couldn't make out a better case on it. And so I think there are sometimes cases that come up to the Supreme Court where an alternative factual record would make for a more robust and helpful debate.

This wasn't that case, and, as Kate has pointed out, I think for very good reason, standing is part of the doctrine for a really important reason. You can't win a case if you have a really bad claim on standing grounds, and it does make sense that that's how this will be resolved. And again, I don't think that's a good outcome.

Dean Jackson:

It does feel a little bit, if this case gets thrown out on standing grounds, like you get to the climax of the movie and then the hero and the villain shake hands and agree to part ways.

Matt Perault:

It's not unusual. There was a massive Section 230 case last year dealing with a terrorist incident in the place where Kate is currently residing, and again, the factual record in that case was really weak. It was making an allegation about YouTube's role in a terrorist incident, but there was actually no assertion, I think, you guys correct me if I'm wrong, that the perpetrators in question had actually watched a video on YouTube. So there was no causal link, and it was thrown out on those grounds.

But there are obviously cases where people see content and then go out and do things. You can still make very strong claims, not claims that establish causation, but claims that people see material online and then go out and do things, and that the platform plays a role in curating content that enables someone to see a particular video, and then they may take action in response to it. And so I think that case was correctly decided on standing grounds. I was very sympathetic in the equities to the company's position in that case, but I don't think the factual record was really sufficient to have the kind of airing and debate around the topics there that we would've wanted, and I think that's unfortunate.

I would've found it problematic if the court had radically altered Section 230 doctrine. I don't think that would've been a good thing. But I'm also not sure it's a particularly good outcome when a weak factual record means we are not able to really unpack the true issues that underlie the case.

Dean Jackson:

I want to go back, Kate, to your story about the panel you were on and the alleged blacklist, and the way in which deciding to deamplify a user is exactly the use of that editorial judgment that we talked about platforms having a First Amendment right to. Twitter can decide who it does and doesn't promote. The content's still there. The person still got to speak, but they don't necessarily have a right to a bullhorn.

Another thing that stood out to me during the arguments was the number of times the Louisiana Solicitor General said the government can counter speech it doesn't like with more speech and that the government actually could go to platforms and say, "We wish you would promote our speech. We wish you would, in some ways, elevate our viewpoint." Which struck me as odd because it strays into questions of coerced speech and propaganda and different types of risks.

But it seems to me that in some ways all of this argument about social media and speech is about the right to a platform and the right to be amplified and heard, more than the ability to speak, which is maybe something that's taken on new importance in the social media age. Is there a kernel of something interesting in that, or is it just an artifact of the case that the Louisiana Solicitor General had to make?

Kate Klonick:

I think it's both. I think it is an artifact, but I also think it raises a really great question, and it gets into the softness of the language in which we talk about all of this: of course, being banned on Twitter is not like having the police state come and knock on your door and tell you not to talk anymore, or come to your bookstore and tell you to take some books off the shelf.

The idea is that basically you have a way to soft-pedal some of this, to soften the hard line between a pure government censorship question and private moderation. And that's just something that we want to incentivize, generally speaking, because if we don't want the government making these decisions, and we want these private entities to be able to create their own press and information ecosystems and communication ecosystems without the government setting the rules, then you're going to have to abide by the fact that not everyone is going to be able to speak as much as they want, all the time, everywhere.

That is just a fundamental state of play for the whole game. And I think this is something that people miss a lot: good First Amendment doctrine, good speech regulation that is in keeping with a lot of the values of the First Amendment, from an Alexander Meiklejohn to a Robert Post argument, is that you have a well-regulated space, both in design and in rules about tone, civility, process, and procedure, that makes it so you don't just have one person with a megaphone sucking up the whole public square. There are certain types of changes that you make.

Part of this is, if you're disempowered by this, then... and there's also personal stuff in this. I feel like a lot of these people were aggrieved, people who were used to being listened to and were very angry that they weren't getting listened to in the COVID debates and that their peers weren't respecting them. So I feel like there was a whole lot going on behind what motivated them to bring these cases.

But it was, as you said, so refreshing to hear the Justices put some bright lines on this, to see it for what it was. I just don't know that it's going to matter for the next case that gets brought.

Dean Jackson:

I could listen to both of you all day. I think you should have a right to speak and be heard. Thank you both so much for doing this.

Matt Perault:

Thanks, Dean.

Kate Klonick:

Thank you.

Authors

Dean Jackson
Dean Jackson is the principal behind Public Circle Research and Consulting and a specialist in democracy, media, and technology. Previously, he was an investigative analyst with the Select Committee to Investigate the January 6th Attack on the US Capitol and project manager of the Influence Operatio...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
