Experts Debate Social Media and the First Amendment
Justin Hendrix / Sep 28, 2023

Justin Hendrix is CEO and Editor of Tech Policy Press.
On Friday, I attended a packed lunchtime discussion hosted by the Harvard Law School Rappaport Forum titled "Censorship, Content Moderation, and the First Amendment." The panel was moderated by Noah Feldman, a Professor of Law at Harvard. Speakers included Jameel Jaffer, Adjunct Professor of Law and Journalism at Columbia Law School and Executive Director of the Knight First Amendment Institute at Columbia University; and Daphne Keller, Lecturer on Law at Stanford Law School and Director of the Program on Platform Regulation at the Stanford Cyber Policy Center.
The discussion focused on issues that may soon be considered by the US Supreme Court, including the constitutionality of laws passed in Texas and Florida that would prevent social media platforms from taking action on certain political speech. In August, the Biden administration urged the Court to decide whether the laws are constitutional, and it is expected to do so.
The Rappaport Forum panel also considered Missouri v. Biden in light of the recent US Fifth Circuit Court of Appeals ruling against the Biden administration. That case concerns the line between permissible government persuasion and impermissible "coercion" or "significant encouragement" in lobbying social media companies to make certain content moderation decisions. On Tuesday, the government asked the Supreme Court to pause a block on its contacts with social media companies, while the plaintiffs seek a rehearing of the Fifth Circuit decision to address its scope.
(For another compelling, recent perspective on the issues in Missouri v. Biden, I recommend reading former Twitter trust and safety head Yoel Roth's essay, published today by the Knight First Amendment Institute, which focuses on "the portion of the Fifth Circuit ruling concerning the FBI," drawing on his personal experience.)
With the Law School's permission, I'm publishing the transcript of the Rappaport Forum discussion here, as it is a useful and accessible way to engage with the issues at play. As the Knight First Amendment Institute's Jaffer put it, "the courts are going to hear this full slew of cases over the next few years relating to the government's power to influence or coerce or expose the social media companies' content moderation decisions. And I think it hardly needs to be said that those cases are going to have an immense effect on the character of the digital public sphere and therefore on our democracy as well."
I'd add only that the effects will extend well beyond the US, since the rulings will change the ways in which global social media platforms conduct themselves when it comes to content moderation and political speech. The implications may be even more profound in countries far beyond the jurisdiction of US courts.
This transcript is lightly edited.
Noah Feldman:
I just want to say a word about the two leading topics that we'll be talking about. And we will, I'm sure, expand beyond just those topics. The first is a set of cases that are in front of the US Supreme Court now, that are being briefed, and that will be argued this Supreme Court term and decided, one expects, by the end of June, involving laws passed by Florida and Texas that in their form regulate what social media platforms may and may not do in their content moderation.
And to oversimplify, each of these laws imposes on the platforms something like the standard that the First Amendment imposes on government in moderating content. As you know, that standard... and not just those of you who were in my First Amendment class, welcome, glad you're here. We just had two hours of First Amendment. So these are the people really committed to the First Amendment, and I thank you for coming.
As you know, all of you, the standards that a private company (and the social media platforms are private companies) is ordinarily held to are not First Amendment standards, because the First Amendment in the first instance only regulates the government. These state laws therefore would put the content moderation operations within those companies in a very different position with respect to what they can and cannot moderate than they presently are. It would require far, far less moderation of things like hate speech and misinformation, and possibly even ordinary everyday offensiveness, than they practice under current circumstances. And the circuit courts of appeals split on the constitutionality of those laws, and that's why it's before the Supreme Court now.
Hard to imagine a topic more important for free speech in the United States today than this: what standards may the social media platforms use to determine what content can be on those platforms? And here that issue arises in direct relationship to the First Amendment.
The other is also before the Supreme Court, but in a slightly different procedural posture, if you'll forgive the legalese. It is a case involving an argument by individuals whose content was taken down from social media sites for violating their rules on COVID misinformation, who alleged in district court, where they won a preliminary injunction, that the Biden administration convinced the platforms, by means of encouragement and even coercion, to take down their content by fine-tuning their content moderation and misinformation standards to prohibit what they were doing.
The US Court of Appeals for the Fifth Circuit partly upheld a preliminary injunction issued by the lower court, narrowing it down to just the Biden administration and not people in the CDC. The Supreme Court decided to stay that order until, I think, four o'clock today, and gave until the end of the day Wednesday for people to submit briefs. So it's very probable that before you go off to your happy hours this evening, there will be a Supreme Court decision on this fascinating and rich issue, for which we sometimes use the shorthand "jawboning." I actually don't know the intellectual origins of that phrase, because it sounds to me like Samson and the jawbone of the ass, and that didn't end well for the Philistines.
Daphne Keller:
It is.
Noah Feldman:
Is that actually the origin? I always thought it had something to do with the fact that you talk out of your jaw, but I guess not. If so, it's a very loaded metaphor, I guess it assumes a conclusion.
But what is meant is circumstances where government officials use persuasion, persuasion that may go up to, or cross, the line of coercive persuasion, to the point where the decision to remove the speech becomes, in law, the speech of the government. And by becoming the speech of the government, it is regulated by the First Amendment. Okay, so for those of you who haven't taken First Amendment or haven't taken it recently, the idea is that the government ordinarily can say whatever it likes, but it can't stop people from speaking. Private parties can stop other private parties from speaking, and they're not stopped by the First Amendment from doing so.
But if the private party, the social media company, removes the speech of another private person and does so because the government made them do it, then at that point it becomes the government's speech act, and then it cannot lawfully be performed. It would've been fine on that theory if the platforms did it themselves, but it's not constitutional if they were pushed into it, according to some complex legal standard, by the government. Without further ado, Daphne, the floor is yours.
Daphne Keller:
Thank you so much and thank you to Harvard and the Rappaport Forum for hosting us here.
So I've been practicing platform speech law for a long, long time, and I've been teaching it for 11 years, I just realized. And when I started teaching it, every single class was on the topic that lawyers call intermediary liability. That's the question of when the law can or should require platforms to take down user speech, because that speech or that content is unlawful and is doing harm, and is violating the law by being distributed further by the platforms. And every year that I've taught for the past five or six years, I've had to drop a day of talking about that question, which is: when does the law require platforms to silence their users? And add more material about the opposite question, which is: when can the law stop platforms from silencing their users?
Are there situations where there can be what we call "must carry" laws compelling platforms to carry speech against their will, because a government body has decided that that's what's in the public interest? And as we know from Noah's introduction, in these cases coming out of Texas and Florida, which are likely to go to the Supreme Court soon, those states are asserting the right to compel platforms to carry speech that they don't want to carry. But lest you think the other issue has gone away, three state laws effectively requiring platforms to take down user speech have been struck down as unconstitutional in the past two and a half weeks. So there is a lot of action on both sides of this. When does the law make platforms silence people? When does it compel them to let people speak? And it's a very complicated set of issues, because there really are speech considerations on all sides.
It is quite understandable that people want to be able to talk in some of the most important public forums of our age, and they don't like it when a giant corporation stops them from doing that. That is not surprising, and while it is framed politically as an issue of concern to the right and to Republicans right now, I think it is absolutely a bipartisan issue. Liberals don't like being silenced by corporations either. It is, I think, unsurprising that we're seeing this great wave of regulation right now, including the three state laws that were just struck down and the Texas and Florida laws, because we're in this historically unprecedented situation of very concentrated power over public discourse and private discourse. The things that we once would have said to each other in a church or a bar or a note passed in class are instead passed through these private companies and transmitted digitally.
And that introduces a greater capacity for control: because they're there at all, because it's a centralized power, and because they can have tools that automatically detect what words you use and automatically, if inaccurately, suppress things. So it's unprecedented power. And because it is private power, the tools to defend users' rights from surveillance under the Fourth Amendment and from censorship under the First Amendment, those legal tools don't work, or if they work, we don't know how they work yet, because the idea of applying them to private actors in the way that some advocates want to do now is unprecedented. It's unexplored territory, figuring out how that could possibly work.
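To make the automated-detection point concrete, here is a deliberately naive sketch in Python of the kind of keyword matching Keller alludes to. The blocklist and posts are invented for illustration; no platform's actual system looks like this, but the failure mode, inaccurate suppression applied automatically at scale, is the same.

```python
# A deliberately naive keyword filter: invented blocklist, invented posts.
# Real platform classifiers are far more sophisticated, but the failure
# mode illustrated here (inaccurate automated suppression) is the same.
import re

BLOCKLIST = {"attack", "shoot"}  # hypothetical banned terms

def should_suppress(post: str) -> bool:
    """Flag a post if any blocklisted term appears as a whole word."""
    words = re.findall(r"[a-z']+", post.lower())
    return any(word in BLOCKLIST for word in words)

posts = [
    "We should attack this problem together",  # benign, yet flagged
    "Don't shoot the messenger",               # benign idiom, yet flagged
    "Here is my honest product review",        # not flagged
]
for post in posts:
    print(should_suppress(post), "-", post)
# Output: True, True, False. Centralized, automated detection is cheap
# to run at scale and, as Keller notes, often inaccurate.
```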
I think I want to suggest that there is a problem in the way that states have responded to this concentration of power, and that this is a problem that appears on the right and the left. Again, I think a lot of this gets framed as partisan and isn't necessarily. The problem is that regulators say, "Wow, private companies, YouTube, Facebook, Google, you have so much control over discourse, it's terrible. We're going to have to take that over and tell you how to use it." So instead of saying, "There's a concentration of power, let's undo the concentration of power," which is conceivable through interoperability mandates or through changes in privacy law, the approach that you get from both the left and the right is to say: use your power in the following way. Use it to take down more of this kind of speech, or use it to keep up more of this kind of speech.
And I want to drive home that the Texas and Florida laws, although they get called must carry laws, and Texas and Florida themselves claim that they are common carriage laws, which suggests that the platforms are supposed to just carry everything that people say, actually introduce some pretty significant state preferences about speech. They are not content neutral, they're not speaker neutral, and they incentivize platforms to do things that will suppress speech as well as maybe carry more speech. So, one way that works is that Texas's law has a mandate to be viewpoint neutral when platforms are deciding what content to take down. If they want to take down anti-racist content, then they have to also... and I said that backwards. If they want to take down racist content, they also have to leave up anti-racist content. You pick your really difficult issue, and they're supposed to carry speech on both sides of it. If they want to take down pro-anorexia content aimed at teenagers, they might have to take down anti-anorexia content aimed at teenagers.
What that does for listeners, if you're on the internet and you want to follow a speaker you already respect or learn about something, is that as the cost of accessing the information you want, which maybe is the anti-racist speech, you have to also put up with this state-mandated inclusion of the stuff that you didn't want. So it is very much changing what it is that users can see and read online at the state's behest, in a way that raises questions not just about platforms' rights to decide what to do, but about users' rights to speak, or rather to access information, online. It is also, I think, quite likely, speaking as a former platform lawyer, that if the platform is trying to decide how to comply with the viewpoint neutrality mandate, they'll say, "You know what? I'd rather have no one talking about racism at all than have to carry both the pro-racist and the anti-racist viewpoints. So I'm just going to take down a whole lot more speech than I used to." And that's the consequence of this nominally pro-free-expression law in Texas.
I can tell you more about ways in which I think the laws, more in the weeds, introduce state preferences for speech, but hopefully that sets out the basics of it. I have about three more minutes, right? All right. I think there's an underlying problem here, or an underlying difficulty, which is about what in the trade gets called "lawful but awful" speech. This is a very large category of speech (I had an article in the University of Chicago Law Review going into more depth on this) that is legal, it's protected by the First Amendment, and that's probably not going to change. But it is also morally abhorrent to many people, it violates social norms, and they don't want to see it. So the pro-anorexia content, the pro-suicide content, the beheading videos, the Holocaust denial: the list is very long, and it's very ugly.
If we don't want to see that content on the internet, we can't use the law to make it go away. And so where we've been so far is we're stuck having private companies come up with rules, and enforce the rules, that there's economic demand and social demand for, but nobody likes that either, because of this concentration of power issue. And so the deeper question, I think, is how to deal with that. And the answer can't be, or I hope it can't be, "Well, we'll just ban a bunch more speech; we'll use the law to restrict all this stuff that is currently First Amendment protected." Or there's a version of that that says, "You can still say all that stuff offline, but if you say it on platforms, it's more dangerous, so they have to take it down." And maybe the FCC will administer a new set of rules for previously lawful speech and say platforms have to take it down.
There are a lot of directions you could go to use legal power to address that, and I think they're all pretty scary. And so I am much more interested in approaches that go back to this idea of maybe let's not have that concentration of power. Let's build what my Stanford colleague Francis Fukuyama calls "middleware," or what other people call adversarial interoperability or competitive compatibility. Which is finding ways to make it so that internet users can decide for themselves what speech rules they want to be subject to, and have a competitive marketplace of different providers coming along, letting you select the Disney flavor of YouTube, or the version of Twitter that is curated by a Black Lives Matter-affiliated group, or the combination, or something from your church. There are all these ways to layer competing speech rules on top of existing platforms that I think can take us away from this idea that there has to be just one set of rules and the government gets to say what it's going to be.
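A minimal sketch, in Python, of the middleware architecture Keller describes: the platform supplies the raw feed, and interchangeable, user-selected policies supply the speech rules. The policy names, labels, and posts below are invented for illustration; this is not Fukuyama's proposal or any real product, just the separability idea.

```python
# Hypothetical "middleware" sketch: the platform exposes a raw feed, and
# the user picks which third-party policy curates it. All names invented.
from typing import Callable

Post = dict  # e.g. {"author": str, "text": str, "labels": [...]}
Policy = Callable[[Post], bool]  # returns True if the post should be shown

def family_friendly(post: Post) -> bool:
    """A policy one third party might offer: hide graphic material."""
    return "graphic" not in post["labels"]

def news_only(post: Post) -> bool:
    """A competing policy: show only posts labeled as news."""
    return "news" in post["labels"]

def render_feed(raw_feed: list[Post], policy: Policy) -> list[Post]:
    """Layer the user's chosen speech rules on top of the platform's feed."""
    return [post for post in raw_feed if policy(post)]

feed = [
    {"author": "a", "text": "Election results are in", "labels": ["news"]},
    {"author": "b", "text": "Graphic footage", "labels": ["news", "graphic"]},
    {"author": "c", "text": "My cat did a backflip", "labels": ["pets"]},
]
# Same platform, same raw feed, two different user-chosen rule sets:
print(render_feed(feed, family_friendly))  # posts a and c
print(render_feed(feed, news_only))        # posts a and b
```

The design point is that the curation layer is separable from the hosting layer, so no single party, neither the platform nor the state, has to write the one set of speech rules.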
Noah Feldman:
Thank you so much, Daphne. On that last topic, it'll be interesting to talk about, A, whether that puts people into filter bubbles, and B, whether we're not actually seeing that market competition now, in the way that the company formerly known as Twitter now has radically different rules of engagement than it did previously and yet is in competition with other actors. Jameel.
Jameel Jaffer:
So I totally disagree with everything that Daphne said.
No, it's really a privilege to be up here with Daphne and Noah, who are both wonderful people and really smart thinkers on this set of issues. I do need to correct one thing that Noah said. I did not, in fact, dream up the Knight Institute. It was Columbia University and the Knight Foundation that dreamt it up and then made the mistake of hiring me to build the institute. So as you've already heard, the courts are going to hear this full slew of cases over the next few years relating to the government's power to influence or coerce or expose the social media companies' content moderation decisions. And I think it hardly needs to be said that those cases are going to have an immense effect on the character of the digital public sphere and therefore on our democracy as well.
Some of those cases have already been mentioned. In Florida and Texas, we have these laws that require the social media companies to carry content that they would rather not carry. The laws also limit the use of recommendation algorithms, and they require the companies to disclose all sorts of information to their users and to the public. There's also this Missouri case that Noah referred to, where users have sued the Biden administration over its efforts to coerce or influence the platforms into taking down what the administration saw as vaccine disinformation. I would also put into this category of cases the TikTok cases, where Montana has banned TikTok altogether from operating in the state. And one way to think about that law is as the most extreme content moderation, where TikTok can't serve any content at all to its users. There are lots of other cases in the lower courts right now that raise these kinds of issues; Daphne referred to some of them. I think that the plaintiffs have a pretty good chance of prevailing in most of those cases.
And in my view, the plaintiffs probably should prevail in most of those cases, because most of them involve what I think can fairly be described as government efforts to rig public discourse. And that is precisely what the First Amendment was meant to protect against. But I think that it matters a lot how the courts resolve those cases, how the plaintiffs win those cases. I'm worried that the courts are constructing a First Amendment that sees every regulatory intervention in this sphere as a form of censorship. And I don't think that that version of the First Amendment would serve free speech or democracy very well. In my view, the First Amendment should be able to distinguish between regulation that undermines the values that the First Amendment was meant to serve (values like accountability, tolerance, and self-government) and interventions that promote those values. The First Amendment needs to be able to distinguish those two categories of interventions.
And of course it's important that the First Amendment be attentive to the possibility that any intervention in this sphere is an effort to distort public discourse, or that the intervention will have that effect. And I don't want to move past that too quickly; I think that's hugely important. If you doubt the importance of that, just look around the world at the way that fake news laws are being used now against journalists. So I think it's hugely important that First Amendment doctrine continue to be attentive to the possibility that any regulation in this sphere has that intent or that effect. But I do think it would be a sad thing, and something terrible for our democracy, if the courts constructed a First Amendment that was indiscriminately deregulatory: a First Amendment that left essentially no space for regulatory intervention at all, even intervention that might be important to protecting the integrity or the vitality of the digital public sphere.
So I think it's worth taking a close look at some of the arguments that the social media companies, and the technology companies more broadly, are making in these cases that we have identified already. So one of the arguments is that the collection of user data is speech within the meaning of the First Amendment. Another is that any regulatory intervention that implicates the platforms' editorial judgment has to be subject to the most stringent form of constitutional review. Another argument is that any regulatory intervention that focuses specifically on social media companies should be subject, for that reason, to the most stringent form of constitutional review. And then finally, that any regulation that would be unconstitutional if applied to newspapers must also be unconstitutional if it's applied to social media companies. So it's not surprising that you see social media companies making those arguments. What business wouldn't want to be totally beyond the reach of regulation?
So I understand and appreciate why they're making these arguments. But if courts accept those arguments, it's not just the bad laws that we have already identified that will be struck down; it's also good laws. Those kinds of arguments will preempt legislatures from passing laws that I think most of us, no matter what our political views are, would agree make sense. Privacy laws, for example, that would restrict what data the platforms can collect and what they can do with that data. Interoperability laws, which Daphne already mentioned, that might make it possible for third parties to build on top of the networks that the social media companies have created. Transparency laws that would allow the public to better understand what effect the platforms' engineering decisions are having on public discourse. Or process-oriented laws that would give users whose speech is taken down the right to an explanation or the right to appeal that decision.
Now, I know Noah wants me to make this argument in the strongest possible way, but I need to caveat it in one respect at least, which is that the details are going to matter a lot. I'm not making the argument that every transparency law is necessarily constitutional. Again, it's important that the courts be attentive not just to the reasons why legislatures are passing these laws, but to the actual effect that the laws are likely to have on First Amendment actors' exercise of editorial judgment. But a First Amendment that precluded any and all regulation of social media platforms would make the First Amendment, I think, the enemy of the values that we need the First Amendment to protect. Should I stop there or do I have a couple more minutes? You want me to-
Noah Feldman:
You can go on for another minute.
Jameel Jaffer:
Yeah. Okay, well only-
Noah Feldman:
Say something provocative.
Jameel Jaffer:
Okay. All right.
Noah Feldman:
The last time I had a discussion with Jameel, we got into a yelling argument that took an hour and a half and it's all on video somewhere.
Jameel Jaffer:
You weren't the moderator.
Noah Feldman:
I wasn't the moderator, that's true.
Jameel Jaffer:
I guess the only thing, maybe this will sharpen the argument slightly. So the argument that the First Amendment shouldn't make any distinction between, say, newspapers and social media companies seems especially misguided to me. There's no question in my mind that social media companies exercise editorial judgment. They make judgments all the time about the relative value of different categories of speech. That seems like editorial judgment of the kind, or at least analogous to the kinds of judgments, that newspapers make about what should appear in their pages, or that parade organizers make when they decide which floats can appear in the parade. That seems like a form of editorial judgment to me. But the relationship that a social media company has to the speech that appears on its platform is different from the relationship that a newspaper has to the speech that appears in its pages. To say that another way, both of these kinds of actors exercise editorial judgment, but they exercise editorial judgment in different ways.
And those differences I think should matter to the First Amendment analysis. Why don't I leave it there? I can say more on that.
Noah Feldman:
Great. I would love to ask a question of both of you that derives from something that Jameel said, but I think it's relevant to both of your comments. And that is the question of why we have a First Amendment in the first place at all. So I think you said in passing, Jameel, that the whole point of the First Amendment is to avoid the government distorting free speech or rigging what discourse is out there in public. And I want to push back from the standpoint of the people who passed the Florida and Texas laws. I think what they would say is, "That's not the main purpose of the First Amendment, although it might be a purpose. The main purpose of the First Amendment is to enable people to speak freely." And nowadays, the place that people speak is on social media. And as platform lawyers certainly know, and everyone who uses social media knows, an enormous amount of content that you might want to say on social media, you can't.
It gets taken down, and the more controversial you are, the quicker they are to take it down. And so from that perspective, if the government tells social media to allow free speech, and defines free speech by saying, "We're not going to make up a special definition for you, we're just going to use the definition that the courts make us use," how on earth can that be in violation of the principles of the First Amendment? It seems like the only way it could be is if you think something that you both claim not to think, I think, which is that the platforms are just like newspapers who can say whatever they want.
So if they're not like newspapers, what could possibly be wrong with Florida or Texas saying, "You know what, guys? You're subject to the same standards that we're subject to. And the reason for that is that the First Amendment is about maximizing people's capacity to communicate, and you are, in the real world, the thing that stands between this generation and the possibility of free speech." So I would like each of you to address that.
Daphne Keller:
So that's not what they said though.
Noah Feldman:
Well, let's reconstruct it as the strongest argument that they could make. Let's imagine a statute which is a variant on these statutes, one that just says, "The platforms may not do anything that the government may not do in the regulation of free speech." Is that constitutional in your view?
Daphne Keller:
I don't think so.
Noah Feldman:
That's what I thought.
Daphne Keller:
And so to be clear, the difference is: Florida says, "You have to let politicians say anything and journalists say anything." So it is picking winners as speakers and giving them special privileges. And I think those are important special speakers too, but the way they do it is very clumsy. And then Texas says, "You have to be viewpoint neutral, but actually you don't have to be viewpoint neutral as to these things we think are really bad; you can just take that down."
Noah Feldman:
Just imagine they did it well.
Daphne Keller:
Yeah. So instead we're imagining a common carriage law, which is what Texas and Florida claim they have, which says, "You have to carry every single thing, period. Or you have to carry every single thing that's legal." And so if you know something's illegal, take that down, but you have to carry everything else. I guess there's the constitutionality question, but man, those lawmakers' constituents would hate that. Their kids and grandparents and cousins and whatever would go on YouTube and suddenly see a bunch of extreme porn, or go on TikTok and see a bunch of pro-suicide videos. This is not something people would actually be happy with. But setting that aside: I have been focusing on the speech rights of internet users and how they're affected, but here the impact on the speech rights of the platforms is quite visible and quite extreme. It is taking away their ability to set any editorial policy at all, which I think is clearly a First Amendment problem. It also, I think, would be a...
Noah Feldman:
But why? Because corporations deserve free speech rights?
Daphne Keller:
Well, because we have a bunch of precedents saying that the parade operators and the cable operators and so forth, various commercial entities or non-commercial entities that just aggregate third-party speech and set some rules for it, do have First Amendment rights. So "because the Supreme Court" I think is my main answer there. But I also think it would destroy the...
Noah Feldman:
Can I just push back? I mean, what if the Supreme Court said, "A parade is one thing, because you can always make your own parade. But I tried to make my own Facebook and I wasn't so successful. So they're not exactly like a parade, and so we're going to treat them differently." And I think Jameel thinks that they should be treated differently from newspapers. So if that were the case... I mean, imagine the precedent doesn't limit us here, because I personally don't think that it does. Would you still think, if you were on the Supreme Court and not bound by precedent, do you believe that these giant gajillion-dollar multinational corporations that control all of our speech have their own free speech right to shut us up? That's the question that I'm asking.
Daphne Keller:
Yes. Yes, they do. I don't think there should be-
Noah Feldman:
Why?
Daphne Keller:
There should be more of them. They shouldn't have the power that they do, but they are providing a service that most users want in curating the speech that they see. So it's not a free speech mosh pit every day when you show up on Twitter or YouTube or Facebook. And in doing that, they're expressing their own priorities about what speech is good and bad. It seems like, I agree with you, the court can just change it, and maybe they will, and maybe that's the world we're heading for. So precedent's not that important, but I think that there is a First Amendment value being served, one that would be served better with more competition, but it's definitely a First Amendment value.
Noah Feldman:
Jameel, especially given that you think there's a difference between the social media companies and newspapers, I want to know what the principle is behind that difference, unless you are willing to allow the government to force the social media companies to allow free speech.
Jameel Jaffer:
Well, I mean, I think it depends. So the answer for newspapers, the Supreme Court has already given us in a case called Miami Herald. There was a law that would've required newspapers to run opposing viewpoints when they editorialized on certain topics. And the Supreme Court struck it down, saying, "You can't force newspapers to publish opinions they disagree with and to carry speech they don't want to carry." And so the question is, does that principle apply, or apply with the same force, to social media companies? And I don't think it should. I do think that there are circumstances in which legislatures should be able to impose must carry obligations on platforms even if they couldn't impose the same ones on newspapers. I'm not totally unsympathetic to that aspect of the Florida law. The best version of the Florida law would say, "A couple of weeks before elections, the big social media companies can take down political candidates' posts only according to, say, published procedural rules that are applied generally and not just to political candidates or to a particular subset of political candidates."
Now, do I think that law might be constitutional because I think the social media companies have no First Amendment rights at issue here? No. I think the social media companies are exercising editorial judgment as Daphne says, they're just exercising it in a different way than newspapers do. But the fact that they're exercising editorial judgment isn't the end of the analysis. Then there's the question of, is the public justification for overriding that editorial judgment strong enough to justify overriding it? And I think you could make a strong case or at least a plausible case, that in the weeks before an election, the public's interest in hearing from political candidates should prevail over the interests of Facebook or TikTok in promoting the political candidates that they might prefer at that particular moment in time.
Now, the Florida law, I'm not defending the Florida law. The Florida law, I think, was passed in order to retaliate against companies that were perceived to have a liberal bias. I don't think there are any legislative findings in the Florida law to justify the must carry provision I just described. But I'm not unsympathetic to that argument, and I don't think we want a First Amendment that categorically precludes legislatures from even considering those kinds of must carry provisions.
Noah Feldman:
So can I push you just a tiny bit toward what seems to me like it would be the logical conclusion of that view? You say there has to be a compelling governmental interest; fair. What about the compelling governmental interest in the next generation of people, who communicate only on social media for the most part, having free speech? I mean, we don't have a public... The Supreme Court has said that the public sphere today is online and on social media. So if you accept that, then I can't even imagine an interest more compelling to override the supposed free speech interests of these gajillion-dollar corporations. I think neither of you is jumping up and down about the idea that all corporations have free speech rights, but we'll leave that to one side.
But the core idea would be that we can't have free speech anymore if the platforms are treated as exercising editorial control. And you yourself... I mean, I think I'm expressing a view that's closer to your view than to mine, because I tend to be on the "they're just like newspapers" side. But I'm really trying to articulate the counter view. Once you've conceded that under some circumstances their editorial control can be overridden, why not override it all the way down the line, and let's just have free speech, and we don't have to invent some bad free speech law. We'll just use the free speech law the Supreme Court has already created for governments.
Jameel Jaffer:
Because there are competing First Amendment rights at stake here. The platforms also have rights as speakers that need to be accounted for. And I think what Daphne said earlier about how this would actually work in practice needs to count for something; it needs to count for something that this would result in a digital public sphere that works for nobody. So that's the reason I don't think that's a very persuasive argument. Number one, the platforms have their own speaker rights at issue here. And number two, overriding those rights in the way that you described, essentially imposing the First Amendment on private platforms, would result in a public sphere that works for nobody. It would be... I go onto Bluesky now because I want to see the views of the people I follow on Bluesky. If I went onto Bluesky and instead the first 10 things I saw were posts from Elon Musk or Jack Dorsey, it would be much less useful to me. And-
Noah Feldman:
Can I ask Daphne to... I'm going to ask you to dig in a little bit on that, because you also made a version of that argument, that no one would be happy with such a thing. But I think my response to that would be something like this. We have the public sphere, and it was always governed by free speech rules, and it existed and it was fine. Maybe it wasn't perfect, but it was fine. Now we have this new phenomenon of social media, and you're saying, "Well, this business model can't possibly work if you do it that way." Who cares, from a free speech standpoint? Since when is the point of the First Amendment to serve the interests of the shareholders of these big corporations? It worked fine before, it's still working fine outside of social media, so why not do it that way, and why not just open it all up to free speech? And then the last point is: maybe even the premise is not true.
I mean, the company formerly known as Twitter has not quite suspended all rules, but certainly my feed includes many, many things that I would not want my grandmother or a small child to see. I didn't do anything different, I just showed up on the site and one day everything had changed. You could still use it. I don't think it's the death of the forum.
Daphne Keller:
So I have an answer that's about metaphor and an answer that's about doctrine. The metaphor answer is that platforms are functioning as a substitute for the public square, and also the nightly news broadcast, and also passing a note in class, and also just this long list of means of communication, some of which had free-for-all speech rules and some of which didn't at all. And so I think sacrificing the value that people get from platforms in those other roles, in order to turn everything into the free-for-all, isn't justified by the historical analogies. But moving to doctrine, we have repeated First Amendment cases saying, "If the technology permits a resolution where the government can meet its goals by putting more autonomy in the hands of individual users and listeners, then it should do that instead of having a top-down rule."
So we have this in the Playboy case, which is about cable scrambling, and the court was like, "Well, but what if parents had more individual control within their households, wouldn't that be better?" And because the law there had suppressed too much lawful speech without the legislature trying to get to a better means-end fit between this legitimate goal of protecting children and the risk to lawful speech, the court said, "No, go back, there are better versions, and specifically increasing individual autonomy is a better version." You get something similar in the Ashcroft case. So I want to make the argument that lawmakers don't get to just ignore remedies that would increase user autonomy and say, "Oh, too bad, there's this major power over speech that we're going to have to take over and dictate how to use." I think they should have to look at better ways to give users power over what discourse they choose to participate in.
Jameel Jaffer:
Can I just also push back on the way that you framed this? I think it's you who's arguing for a radical departure from historical practice, because it's not the case that everything always worked and now we are proposing that we move to a different model where suddenly the public square works differently. What worked in the past is that editors exercised editorial judgment free from interference, or substantial interference, by the government. And that's an argument in the other direction. That's an argument for giving the platforms the space to make editorial judgments that a lot of us might disagree with. My argument is just that that's not the only First Amendment interest on the table. The interest of editors in making editorial judgments is an important First Amendment interest, but it's not the only interest on the table. And so you have to, at some point, balance that interest with the interest that you articulated, the interest of users in participating in public discourse.
And I offered one possible line: a couple of weeks before the election, you could have this must carry rule. You could offer a different line. My meta-argument is just that the First Amendment shouldn't foreclose that debate. We shouldn't have First Amendment doctrine that says, "Imposing must carry obligations on anybody who's exercising First Amendment rights is per se unconstitutional," because that will preclude even the exploration of the kinds of laws that we were talking about a few minutes ago.
Noah Feldman:
I'm going to turn the subject to jawboning in a second. I do just want to say, to put my own cards on the table: I think I do have a more radical view, although my own actual view is on the other side. I tend to think broadly that you have to either treat all free speakers' free speech rights as the same, and therefore that the social media companies are just like the newspapers, or else you have to allow the government to regulate in this specific way, where it would just say free speech applies in all of these contexts. I'm scared of a world where thoughtful, nuanced people like you articulate reasonable rules, but then it goes to the Supreme Court, and now it's the Supreme Court who's going to do the picking and choosing of which legislative rules are okay and which aren't. And if I could trade the two of you for the currently configured Supreme Court, I would trade in a second, but I can't. Anyway, let's turn to the jawboning-
Jameel Jaffer:
Can I have one response on that?
Noah Feldman:
Yeah, of course. Always.
Jameel Jaffer:
So once again, you are making it seem like I am proposing a shift when in fact you are proposing a shift.
Noah Feldman:
I agree that you're not already the Supreme Court.
Jameel Jaffer:
So think about parades: if you want to have a parade or protest, you have to apply for a license. If more than 100 people are going to participate in your demonstration, you have to apply for a license. We have a whole set of rules around licensing schemes to make sure that they are not used as a means of viewpoint discrimination. But we accept that parades are licensed. Parades are First Amendment activities, right? But we would never accept that scheme if it were imposed on social media companies or newspapers. If suddenly the government said all First Amendment actors have to be treated the same: parade organizers have to apply for licenses, so newspaper publishers should have to apply for licenses as well. We would say, "What are you talking about? These are totally different activities." The legal regime that applies to one shouldn't necessarily apply to the other. The First Amendment needs to be sensitive to factual differences. So I think I'm just proposing what we have already acknowledged and recognized, and you're proposing the shift.
Noah Feldman:
Let's talk about jawboning, Daphne. So when you read the transcripts of the testimony, of the backroom conversation between the Biden administration and the platforms around vaccine misinformation, what did you think? Did you think this crossed what the legal line ought to be and amounted to interference with free speech? Or did you think, "No, they're doing what they're allowed to do. They're just saying, we're the government and we really want you not to do this, and we're going to repeat ourselves multiple times." What was your own view? If you were the judge in that case, what would you have said under prevailing legal standards?
Daphne Keller:
Well, mostly I thought that it sounded familiar, because I've been strong-armed by Republicans, Democrats, international governments. This is not an atypical conversation with government. The White House emails that were so emphatic, they were dumb. But I think... and so I...
Noah Feldman:
They may have been dumb, but did they cross the legal line? When you were being pushed, I mean you were working at the time for Google.
Daphne Keller:
Yeah. So I-
Noah Feldman:
Did they successfully push you around? I mean, you were still Google.
Daphne Keller:
They didn't, but they were running into a lawyer with a lot of economic backing. And what I worry about much more is the same kinds of communications coming to smaller platforms that don't have a lawyer who's going to stand up for them and try to push back. But I think the thing that troubles me about the jawboning discussion generally is actually the focus on whether those communications are coercive. Because I think that imagines a world where all the platforms are Google and Twitter in 2007, and they're going to throw a bunch of money at protecting their users' rights for some economically irrational reason to do with the beliefs of the founders. That's not the world we live in. We live in a world where there are a lot of platforms who have every reason to accommodate people in power, be they people in power in the US government, or people who control access to lucrative foreign markets and are saying, "I need you to take this down, and then we can talk about your car factory."
There are plenty of platforms out there that aren't being coerced, because they're perfectly happy to do as they are asked. And if the legal question is about coercion, then there is no way to protect users' rights, online speakers' rights, from this very realistic scenario. And so I felt like it was useful in the Fifth Circuit's ruling that they have a concept of "significant encouragement," I think is what they call it. And I just want more doctrinal development over there to figure out when that is going on. When can we hold governments accountable for acts that are not coercive and that affect user speech?
Noah Feldman:
So this is interesting. I mean, I myself wrote an article the other day literally arguing the opposite of that, but I wonder if Jameel agrees with it. I mean, so if I understand Daphne correctly, she's saying, "Leave aside what seems like the tricky question of whether there was coercion or not. Even if it's not coercive, the First Amendment should protect me from the government just encouraging the platform to take down my content." Do you agree with that?
Jameel Jaffer:
I'm not sure whether I agree with that. I probably don't. I probably don't.
Noah Feldman:
Imagine that formulation.
Jameel Jaffer:
Let me just say, there is, I think, a strong argument on the other side. The government needs to be able to govern, and governing requires influencing public opinion, speaking to private actors. It would be a crazy world, and I know Daphne is not proposing this, it would be a crazy world if the government weren't at liberty to share the CDC's research findings with the public, with the press, with social media companies. I think it would be a crazy world if the CDC weren't entitled to reach out to platforms that already have policies relating to public health misinformation and say to the platforms, "We think those posts are misinformation," and then leave it to the platforms to make the final call.
So I think there is a strong argument on the other side, and that argument is actually internal to the First Amendment, in the sense that that kind of government speech, as a category (there are obviously exceptions), informs public debate. We have a more informed public debate when the government can say to a newspaper, "We think that story is going to cause some national security damage," or can say to a platform, "We think that post is misinformation."
Noah Feldman:
So it's 1:20. A lot of people have class at 1:30 on different parts of the campus; I'm one of them. I'm going to arbitrarily say we have five minutes now for questions and conversation. So let's open it up. There are microphones circulating around, and if you raise your hand, I will bring the microphone to you. And I promise everybody who has class, we'll be out of here at 1:25, so you can be at your 1:30 class, and if you're late, you can say it's my fault. Right over here, please. In the front here. Thank you.
Audience Question:
Hi, thank you. Hi, Jameel. I would love to hear more about the distinction that you talked about between how newspapers should be treated and how social media companies should be treated. My question is: if in the near future social media companies exert more editorial control and start acting even more like the online version of the New York Times or something like that, is there a point where they transform themselves into the press, and does that affect the regulation that we can put forward?
Jameel Jaffer:
Yeah. Really good question. I'm using social media companies and newspapers as shorthands. What really matters, I think, is the relationship of the editor to the content. And if a newspaper has a comment section, then the comment section should probably be treated more like what I said social media companies should be treated like. And if a social media company is curating a newsfeed, I think it should probably be treated more like what I was saying newspapers should be treated like. What matters to me is the relationship of the editorial judgment to the content.
Noah Feldman:
Over here. Please.
Audience Question:
Thanks so much. Daphne, you mentioned earlier that instead of governments looking at speech laws, perhaps they should be looking at privacy and things like interoperability. Can you speak more about why those laws might be more constitutional or beneficial for regulation?
Daphne Keller:
Yeah. I mean, it relates to my point about the Playboy case and Ashcroft, this idea that if you have an option to put the power in the hands of individuals to decide what they want to listen to and what they want to be able to say, then you should do that. And if we used competition law to make interoperability mandates, or if we just got rid of the stupid laws that prevent interoperability right now... Right now, there are problems under anti-scraping laws like the CFAA, the Computer Fraud and Abuse Act, and copyright law that make it really hard to build mechanisms to interoperate with different services or to layer a different editorial rule on top of an existing service. So there are pretty simple legal interventions that could make this user empowerment more possible, that would avoid having to decide what the state's new rules for speech are going to be.
Privacy is a bank shot way of getting at that. Basically, we need an overhaul of federal privacy law anyway. But if there's a way to use users' privacy rights as a mechanism for them to insist, "Hey, in exchange for my data you have to give me these options and more autonomy," maybe again, that's another way to put power in the hands of individuals rather than the state when it comes to speech.
Audience Question:
I was curious about how either of you would think about third-party harms. And Jameel, you were talking about a First Amendment that can distinguish regulations that undermine its values from regulations that encourage the values that undergird it. So how do third-party harms figure into each of your analyses?
Noah Feldman:
This is going to be the last question.
Daphne Keller:
So I will just say a high-level thing, which is, I think in every fight about platforms and speech, there are three interests at stake. There are the interests of people who want to speak. There are the interests of people who are being harmed by online speech. And there are the platforms. And when it gets litigated, or when you show up in Washington, DC, usually just two of those actors are there. So you get person harmed versus platform, and then there's nobody representing speech interests in the courtroom. Or you get a person who wants to speak, or the AG of Texas or whatever, versus platform, and then the people who are harmed by online speech aren't there. So there's just this systematic problem where one of the interests is always missing in the way that we litigate these questions.
Jameel Jaffer:
I would just say that I think the systematic problem is even bigger than that, because the fourth actor that's not there is the democratic public. And if we want a First Amendment that actually works for democracy, we have to find some way to make the First Amendment care more about the implications of these questions for democracy.
Noah Feldman:
I want to thank the Rappaport Foundation. I want to thank everyone for coming, and I especially want to thank Daphne and Jameel for genuinely modeling nuanced and thoughtful discourse, even when I tried to make them not do that. And I'm super grateful to you both. Thank you for being here. Thank you.