
Debate Over Content Moderation Heads to the Supreme Court

Justin Hendrix / Oct 9, 2022

Audio of this conversation is available via your favorite podcast service.

Some of the most controversial debates over speech and content moderation on social media platforms are now due for consideration in the Supreme Court.

Last month, Florida’s attorney general asked the Court to decide whether states have the right to regulate how social media companies moderate content on their services, after Florida and Texas passed laws that challenge practices of tech firms that lawmakers there regard as anti-democratic. And this month, the Supreme Court decided to hear two cases that will have a bearing on the interpretation of Section 230 of the Communications Decency Act, which generally provides platforms with immunity from legal liability for user-generated content.

To talk about these various developments, I spoke to three people covering these issues closely.

We also made time to discuss Elon Musk’s on-again, off-again pursuit of Twitter, which appears to be on-again, and how his potential acquisition of the company relates to the broader debate around speech and moderation issues.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

Will, I think I'll perhaps ask you to start things off. I, to some extent, initiated the correspondence around doing this podcast after reading an article that you wrote with Cat Zakrzewski about a request from Florida's Attorney General late last month, who asked the Supreme Court to decide whether states have the right to regulate how social media companies moderate content on their services.

I was at the Trust and Safety Research Conference out in Palo Alto last week, and there were hundreds and hundreds of people there, both academics and many, many people from industry concerned about content moderation issues. And at one point, I looked up thinking about this and thought, is the Supreme Court about to make all these people's jobs, somehow, you know, against the law? Or perhaps otherwise kind of do them in, in terms of their approach to things at the moment? But can you just sort of tell us what happened, first in Florida, and then we'll get into the other cases?

Will Oremus:

Yeah. So, both Texas and Florida passed laws that would restrict internet companies' ability to moderate content. And they did it in slightly different ways. Jameel can probably speak to the specifics better than I can, but the Texas law says that they can't censor-- and they used this word censor, which is a contested term-- but that they can't censor posts from any user, at least any Texan, based on the political viewpoint they expressed.

Florida's law took a little bit of a different approach and said that they can't censor politicians, candidates for elected office, or journalistic organizations over a certain size. So, that's a little less restrictive in a way, but both of them are trying to get at the same thing, which is this long-running contention from many on the right that the big online platforms, in their efforts to rein in misinformation and hate speech, conspiracies, et cetera, have gone too far and are squelching conservative viewpoints all across the internet.

Texas's law was upheld in somewhat of a stunning decision by the Fifth Circuit Court of Appeals. Florida's law was mostly-- or largely-- struck down by a different Court of Appeals. So, Florida, seeing that Texas's was upheld, said, "Hey, you know, Supreme Court, let's get a ruling here, 'cause now we've got a split between the two Circuits, and we want ours upheld too."

Justin Hendrix:

So, the questions presented by Florida's Attorney General are, and I'm quoting here, "One, whether the First Amendment prohibits a state from requiring that social media companies host third party communications and from regulating the time, place, and manner in which they do so, and two, whether the First Amendment prohibits the state from requiring social media companies to notify and provide an explanation to their users when they censor the users' speech."

I wanna come to you, Brandie and Jameel. Where do we net out on these questions, from your point of view? Or what do you think's important about these questions? Jameel, I know you've written that, you know, the companies are right that these laws are a violation of their First Amendment rights, but maybe for the wrong reasons?

Jameel Jaffer:

Yeah, I mean, I think that Will's division into sort of the must-carry rules, on one hand, and the sort of procedural and transparency rules, on the other hand, is a useful one. I feel differently about those two categories. So, with respect to the must-carry rules, I think that... and by must-carry, I mean the viewpoint discrimination rule that Will described, as well as the Florida rule that says that platforms can't take down political candidate speech in advance of the election.

I think that those kinds of rules obviously implicate the platforms' editorial judgment. Platforms, in my view, do exercise editorial judgment when they decide what kind of content can stay up on their platform, and which kind of content has to come down. They are exercising editorial judgment in a very different way than, you know, newspapers do when they decide what to publish, or parade organizers do when they decide what kinds of floats can be in the parade, but it is nonetheless a form of editorial judgment.

They are making a kind of value judgment about what content is useful to their users and what content isn't, and I see that as, you know, obviously, First Amendment activity, and my guess is that most of the Justices will see that as First Amendment activity. That doesn't actually answer the question that's presented in the case about, you know, are these must-carry rules constitutional or not, because you can accept that the platforms are exercising editorial judgment and then still be left with the question of, well, so what? That doesn't necessarily mean that Congress can't regulate the exercise of editorial judgment.

So, you know, the way I would evaluate these rules is the way I would evaluate any other rule that tried to direct editorial judgment in this way, and that's by asking, is this a content-neutral rule or a content-based rule? If it's content neutral, it's subject to less scrutiny. If it's content based, then it's subject to more scrutiny. I think in these cases, it's gonna be very hard for the states to justify these must-carry rules, because the legislative findings are very weak.

There is also a lot of language in the legislative history-- this is especially true in Florida, but also true in Texas-- that suggests the rules were imposed in order to retaliate against the platforms for having taken down President Trump's account, for example. So I think there are a lot of reasons why these must-carry rules should fall, but, you know, if I were on the Court, which obviously I'll never be, I would, you know, not favor a categorical prohibition on must-carry obligations in this context. I would just say that these particular must-carry obligations have to be subjected to very stringent scrutiny, and they don't survive that scrutiny.

I've talked for a long time, so I won't talk right now about the transparency laws, but I feel differently about the transparency pieces of the laws. I think that those might well be constitutional, or at least some of them might be constitutional.

Justin Hendrix:

Brandie, I want to bring you in, give you the opportunity to either respond to Jameel or take us into new territory.

Brandie Nonnecke:

Jameel, thank you so much for bringing up this point about platforms' editorial judgment, and if we could take it back to Section 230, I think that this is an important point that needs to be reiterated. Section 230, first, shields websites from civil lawsuits arising out of illegal content posted by the website's users, but second, it also allows websites to retain this immunity even if they engage in content moderation that removes or restricts access to or availability of material.

And the case before the Supreme Court, the Gonzalez versus Google case, really gets at this content moderation piece, right: when they are using a recommender system or an algorithm to prioritize content to end users, and that content is illegal, do they maintain that immunity? We can get into more of a discussion about that case and what the content is.

I also wanna point out that the CITRIS Policy Lab, in partnership with the Our Better Web initiative, maintains a database of all legislation proposed at the federal level that seeks to amend, revoke, or reform Section 230. So, I encourage people to check that out. We included analysis in that database, so you can see what might be the effects of some of this legislation.

But now, around transparency, let's talk a little bit about that. Jameel, I'm so happy that you brought that up. I am an academic. I am very, very much in favor of transparency of platforms. In particular, there are two sorts of transparency, right? There's transparency over how they're making decisions in their moderation of content, removal of content, and how that aligns with their terms of service. But then there's also another side of transparency, around how we may compel platforms to open up more data to independent researchers and journalists to provide insight into what's actually happening on these platforms, and how we should shape legislation and regulation that addresses those challenges that we see are increasingly happening on online platforms.

So, actually, I'd like to bounce a question to you, Jameel. Let's talk a little bit about transparency obligations and whether or not platforms should be required to open up their data to third parties.

Jameel Jaffer:

In general, I'm in favor of that kind of transparency. In fact, you know, my colleagues at the Knight Institute drafted a piece of the Platform Accountability and Transparency Act, which would provide journalists and researchers with a safe harbor to use particular kinds of digital tools to study the platforms, and, at least at a conceptual level, I very much support the other piece of the PATA bill, which is the part that you just described, Brandie, that, you know, would require the platforms to disclose information-- certain kinds of information-- to academic researchers.

I do think that in general, transparency requirements do present First Amendment questions, because, first, you know, requiring transparency about editorial decision making can have an indirect effect on editorial decision making. Right? Sometimes, the whole point of imposing transparency obligations is to affect the editorial decisions that the particular actor is making. But also, transparency obligations can be used, you know, in a kind of discriminatory way, in order to punish editors for their editorial decisions.

And so, I think we have to be careful about the kinds of transparency provisions we're proposing and the ways that those kinds of transparency provisions might be used, but I am, you know, not at all sympathetic to the view that those concerns are so great and so ever-present that we should be categorically opposed to any imposition of transparency obligations on editorial actors. I think that would be kind of a disaster for First Amendment values, because a certain amount of transparency about how the public sphere operates is crucial to having a public sphere that has a kind of integrity and that, you know, people can sort of rely on.

So, I think that there are First Amendment considerations on both sides of the equation here. I think that, you know, the editorial actors do have a point when they say transparency obligations can be abused, but to me, there's also a First Amendment argument on the other side, which is that the public needs access to this kind of information to understand the forces that are shaping public discourse. And I think that there has to be, you know, a way through that allows certain kinds of transparency obligations to be imposed.

And I think that some of the obligations in the Florida and Texas laws might be of that kind. I think that the cases are complicated because of the motivations behind these laws, and also, you know, the transparency provisions aren't always drafted as (laughs) carefully as you would want them to be, and some of them seem to impose, you know, very, very burdensome requirements for, you know, non-obvious, you know, sort of rewards on the other side. So these particular cases are a little bit complicated, but at a higher level, I agree with you, Brandie, and I think that the federal law-- the PATA bill-- is pretty good.

Brandie Nonnecke:

Yes. Thank you, and actually, I did an analysis of the Platform Accountability and Transparency Act here in the US alongside the EU's Digital Services Act. I published that piece in Science earlier this year, so, yes, I agree with you that PATA is great. I also just want to make the point that while we're talking about some state laws and the federal level, really a lot of this is happening at the state level, and I want to bring up one other case that I think is relevant here, around this compelled transparency and some of the ways that companies are effectively able to push back.

So, in Maryland, there is this Online Electioneering Transparency and Accountability Act that was passed into law in 2018, and it would have required newspapers and other media platforms to publish information on their websites about the political ads they display. The newspapers were able to successfully argue that this may inadvertently chill speech, because now you're going to have this public database of political ads, and some people may not be willing to publish or pay for political ads.

And so, yes, these concerns around platform transparency, I think, are real, and it's about how to effectively strike that balance: how do we open up data to independent researchers to better inform the public on what's happening on the platforms and shape better regulation and legislation?

Justin Hendrix:

Will, I wanna come to you. Brandie kinda took us toward Section 230 in her comments earlier, and I know you've been looking at the broader historical and political context around the ongoing debate over Section 230. So, I thought I might ask you to hit some of the high points there, and how that relates, perhaps, to, you know, the context in which the Supreme Court will apparently make its decision with regards to these two cases it's agreed to hear this week, which, you know, Brandie mentioned: Gonzalez et al versus Google, and then we've got, of course, Taamneh versus Twitter et al.

So, I don't know, Will, if that gives you enough of a bit of context to launch in.

Will Oremus:

Yeah. I think your listeners are probably familiar with the broad contours of Section 230. It was part of the Communications Decency Act back in 1996, at a time when the big political concern about this new thing called the internet was that kids would use it to look at pornography. That was the bipartisan concern that motivated the Communications Decency Act. And then a Republican and a Democrat, then-Rep. Chris Cox and then-Rep. Ron Wyden, slipped in this bit that tried to move the power for moderating the internet to the companies. I mean, give it to the forums, give them the power to decide whether to leave stuff up or take stuff down.

They wanted to facilitate the growth of this fledgling industry. You know, it was bringing new jobs to Oregon and California. They also saw it as a way to allow forums to use their own judgment, and to sort of compete in the marketplace on having a family-friendly forum or having a free-wheeling forum, and it was pretty uncontroversial at the time. I mean, it didn't get a lot of attention when it passed. Over the years, this sort of bipartisan consensus that we should leave it to the companies to decide what speech is okay and what isn't stood for, you know, it stood for quite some time.

But recently, it's begun to crumble as both left and right have become dissatisfied with the power that the big platforms have over what people can say and can't say. I mean, the First Amendment protects us from censorship by the government. There is a growing sense, I think, from people on right and left that the social media companies today are so big and so powerful that the decisions they make matter: you know, if you get booted off Facebook, that has an effect on your ability to speak that is not negligible. It's not trivial.

And so one of the ways that both left and right have seen to try to reestablish some government authority there is to amend Section 230. You could imagine other approaches. I mean, there are also antitrust bills. There are privacy bills that would take aim at these companies' business practices and their power directly, rather than removing the liability shield, but Section 230 does seem to be a convenient way for politicians to try to advance their view of what these giant platforms should be doing differently in terms of speech.

And, of course, it's different for both sides. I mean, the left, you know, largely would like to see companies being more aggressive in their moderation, more careful when it comes to amplifying misinformation or conspiracy theories. The right would like to see them be more permissive. It's interesting that they both see weakening Section 230 as a way to do that. Of course, you had SESTA-FOSTA in 2018, which carved out content that facilitates sex trafficking.

If it does get further weakened, I guess we'll find out who's right. You know? I guess we'll find out whether it leads companies to be, much more careful in what they allow, because they are afraid of being sued, or whether it leads them to go in the opposite direction and just let it be a free-for-all. But, you know, I think the terrorism case, the Section 230 terrorism case is interesting, because once again, there's a type of speech that both left and right can kind of agree is bad. Right?

I mean, there was a bipartisan agreement on decency and pornography and that sort of thing, that platforms should be able to, you know, keep their sites clean. Terrorism is another one of those categories where it's not really a partisan thing. You know, both the Democrats and the Republicans can see value in platforms taking down terrorist content. So I think this is an interesting test case for Section 230, as the Supreme Court considers that appeal.

Brandie Nonnecke:

Thinking about the content and the removal of content, as I mentioned before, Section 230 does protect platforms when they remove content they find objectionable, but right now, it's unclear whether or not this protects websites that actually promote illegal content. So, you had mentioned this Gonzalez case. They're talking about terrorist content, which is tied to the Anti-Terrorism Act, where you cannot help amplify or promote or support terrorist content.

And I think that this is such an interesting question, because even though the Supreme Court case is focused on terrorist content, the holding of that case is gonna have a spillover effect on other, you know, harmful content and the mitigation of its spread. One thing that I think is really important for us to discuss is recommender systems: what does it mean to amplify or to target content, and then how can we narrowly scope any of these interventions to mitigate, you know, the spreading of harmful content?

Jameel Jaffer:

I'm still kinda getting my head around how these cases relate to each other, how Gonzalez and the NetChoice cases relate to each other. On Gonzalez, Brandie's obviously right that the Gonzalez case is, in part, about whether Section 230 protects recommendations, whether that's sort of within the scope of the immunity provided by 230.

If the Court were to hold that it doesn't, that Section 230 doesn't extend to recommendations, then I think there's an important First Amendment question that will be presented, because the next argument that the platforms will make is, "Well, even if Section 230 doesn't protect us, how is it possible that the First Amendment permits the government to impose criminal liability on the basis of telling somebody to read something?"

Surely, if anything's protected by the First Amendment, suggesting to somebody that they read something is protected by the First Amendment. Now, I don't think it's that simple. There's this case from, I think it's 1959, Smith versus California, which involved a bookseller who was prosecuted because one book in his bookstore was obscene, and the Court said, "Well, you know, as long as you're not on notice that the book is obscene, then the government can't hold you criminally liable."

But, you know, that was a case in which the bookseller wasn't on notice. I mean, there's a difference between that and the Gonzalez case, I think, which involves a situation where the platform was at least allegedly on notice of the character of that speech. And then, there's this question of whether that sort of analog rule from 1959 makes sense in the digital era that we're living in now.

So, I don't know how the Smith case would be applied in this context, but it seems to me that these questions that kind of got shoved into the margins because of Section 230-- the First Amendment questions that, you know, the Court's never had to answer because of Section 230-- would suddenly come back with a vengeance if Section 230 were construed narrowly.

The other thing I just wanted to say is that it is absolutely true, obviously, that, at some level, everybody is on the same page that terrorist speech is terrible and the platforms should do something about it, but it actually gets pretty complicated when you get beneath the surface, because what counts as terrorist speech is a hugely controversial question. The U.S. legal system makes everything turn in the first instance on whether a particular group has been designated, and the designation process is entirely political, with groups that the United States favors not being designated, and groups the United States disfavors being designated.

And the intersection of those kinds of foreign policy and national security decisions and free speech-- that intersection has already generated a lot of controversy in the courts outside the context of platforms, including a Supreme Court case, the Humanitarian Law Project case from 2010. And now we have the kind of intersection between that very controversial and unsatisfying set of cases and this other controversial and unsatisfying set of cases involving platforms. I am not yet ready to predict how that plays out, but I think that the fact that it involves terrorist speech, in a way, makes it simpler and, in a way, makes it more complicated.

Justin Hendrix:

Brandie.

Brandie Nonnecke:

I totally agree with you, Jameel, and I love to think about case precedent for this, and also how The Wolf of Wall Street plays into this story about Section 230, in the Stratton Oakmont case. So, in my mind, I love to imagine Jordan Belfort, as played by Leonardo DiCaprio, and think about the connections to where we are right now with platform content moderation. That case was in 1995, right, the year before the Communications Decency Act was passed, and it led to Section 230 being passed, because there was this fear, right, that the platforms would be held liable for all content posted, so they should have some protection so that content could flourish and the internet could grow.

But we're in a different place now. We're in a different place now than we were two years ago. The platforms are constantly changing, constantly evolving, and that lack of transparency around recommender systems and how these platforms work further complicates our ability to understand what's happening and how to appropriately govern it and oversee it. And I think a really important point here is around the recommender systems.

So, what does it mean to amplify content? What does it mean, you know, especially around content moderation and the ties to the Texas law, where the platforms would essentially be required to include all speech and not remove content? This, to me, is going to create platforms that are a nightmare to use, full of just junk. How do you sort through all of the content that would just be coming at you as a flood? These proposals to actually change recommender systems or give end users greater control of the recommender systems-- while I think they sound great at first blush, in practice, we cannot have platforms that are not using some type of algorithmic content moderation and recommendation system.

So, actually, I think some of the proposed legislation, like the NUDGE Act from Sen. Amy Klobuchar (D-MN), offers really interesting ideas to look at for how we might research other methods that can be used to mitigate the spread of harmful content.

Will Oremus:

That's a great point, Brandie. I mean, I think that it can seem really convenient to say something like, "Well, platforms, you know, it's okay for them to moderate content, but they shouldn't be able to discriminate based on viewpoint." Right? That's what Texas wants to say. It sounds really easy to say, "Well, you know, it's okay for platforms to host content that is maybe illegal, but they shouldn't be allowed to amplify it."

But you get into some of the edge cases and those kinds of claims, those sorts of ideas, start to fall apart, as Brandie was mentioning. I mean, one of the examples is, how do you have Wikipedia if it can't discriminate based on viewpoint? Right? I mean, if it has to treat all viewpoints equally, how do you ever get a factual and reliable encyclopedia entry?

The whole process of Wikipedia, what it's all about, is its editors, its volunteer editors, deciding which viewpoints are worthy of inclusion in an article and which aren't. In terms of recommender systems and algorithms, I mean, if companies are gonna be held liable when they amplify harmful content via an algorithm, you know, what does that mean for Google search results? Right? Google's algorithm decides which links are more credible than others. That's the whole foundation of modern internet search. Do they have to go back to some kind of system where they're just, like, counting the number of keywords in an article? That would be a disaster. So, you know, I think there are valid concerns animating these types of reform efforts, and then I think there are also valid concerns that the remedies would potentially be even worse than the disease.

Justin Hendrix:

Will, I wanna stick with you just for a minute, because I also wanna bring in the other, I guess, topic du jour this week, which is Elon Musk's on-again, off-again potential acquisition of Twitter, which is playing out in the context of these questions. And to some extent, I think Elon Musk thinks he can settle these questions, at least, by some better engineering or some better X-version of Twitter that he imagines may be possible. What's going on? It's 11:35 AM on Friday, October 7th. Is the acquisition gonna happen, not gonna happen? What do we know?

Will Oremus:

Yeah. I mean, this has been a roller coaster, Elon Musk's attempt to acquire Twitter, and then his attempt to get out of acquiring Twitter. The latest, as we record this, is that he has said he will go through with his original offer to buy Twitter for $44 billion, which seems essentially like admitting defeat. He wants Twitter to drop the case as a result, and Twitter's, at this point, like, "No, man. We don't trust you. Like, we want the court to stay involved in this. Like, who knows if you're gonna change your mind again."

So, the latest is that the court has given Musk until, I believe it's October 28th, to actually close the deal, actually put up the money, buy the company, and take it over; otherwise, the trial's back on for November and Twitter can go after him again. I mean, I assume the reason that we're bringing this up in this context is because one of the things that Musk has said he would do with Twitter is restore it to this idealized sort of free speech social network, you know, make it less censorial.

He was mad when they banned the Babylon Bee, which was this right-leaning parody publication that Elon Musk enjoys. He thinks it was a mistake for them to ban Donald Trump in the wake of the January 6th attacks. He has a lot of friends on the right who subscribe to this view that Twitter has gone too far in suppressing conservative viewpoints. So, he wants to kind of open it back up, and he really takes an engineer's view of this, I think. You know, he thinks that these questions aren't complicated, that you just allow free speech, and sure, yeah, if something's really bad, then we'll take it down, but otherwise, we should allow it.

You know, he said things to that effect. As anybody who's worked in the space over the decades could tell him, that quickly turns out to be a lot more complicated than it sounds. But broadly speaking, I think he wants a more laissez-faire approach to content moderation on social media, and he's gonna try to institute that at Twitter, and, you know, we'll see how that goes.

Justin Hendrix:

Jameel, you and I have different views, perhaps, on the efficacy of allowing Donald Trump back on Twitter's platform, but to Will's point about seeing the acquisition in the context of this broader debate that's occurring in legislatures and at the Supreme Court, I don't know, how do you look at it right now?

Jameel Jaffer:

Well, I guess I see this particular takeover fight in the same way that Will does. I don't think that Musk is gonna solve the problems of free speech. They require all sorts of trade-offs, and reasonable people can disagree about how to make those trade-offs. I mean, that's not to say that there aren't better answers and worse answers.

I actually think that Twitter has been relatively thoughtful about this stuff. I don't agree with every decision they've made, but I think they've been pretty thoughtful about trying to build a platform that serves a larger public interest in the exchange of ideas and the exchange of information. I think that there are some things that Musk could do if he were really committed to making Twitter more laissez-faire, or just to broadening discourse on Twitter.

You know, he could put a finger on the scale of labeling over taking things down. He could require more strikes before somebody's kicked off the platform. As you say, he could put Trump back on. I still think there's a good argument for putting Trump back on, just because I think that there are really two big principles here.

One is that we want our public sphere to serve self-government and serve democracy, and that requires a certain amount of moderation, requires a certain amount of, you know, taking unhelpful stuff down. The other principle, though, is that we don't want self-government to be at the mercy of these really large platforms that have a lot of control over what gets said online and who can say it, and what ideas get traction.

And so, you have these two competing principles, and you have to find some way to reconcile them. I think that when it comes to speech of public officials or political candidates, the platforms should be really hesitant to interfere or to take those accounts down, and when they do, they should do it against the background of transparent principles and a real, transparent process, and that's not really what happened in Trump's case. And I don't think that, you know, the issue here is that Trump was treated unfairly. I do not lose any sleep over whether Trump is treated fairly or not, but I do think that ordinary citizens have a kind of right to hear the speech of political candidates and government officials. And when these gatekeeper platforms interfere with that right, they should do it very, very carefully, and I'm not persuaded that the platforms were as careful as they should have been this past time around.

Sorry, I keep giving you really long answers, but if Musk wants to just kind of tinker with the dials a little bit, you know, in those ways, then I think that's fine, but I don't think he's going to solve the problems of free speech online as he seems to think he's gonna do.

Justin Hendrix:

Brandie, do you think about this and Musk, to some extent, in the context also of... we've talked mostly in the context of US law and regulation in this conversation, but the Digital Services Act is gonna become a reality. Elon Musk's hands might be tied, to some extent, as it were, if he decides to try to put his fingers on the scale, as Jameel says, over certain platform features or functions. He'll have to deal with a new sheriff, on some level.

Brandie Nonnecke:

Yeah. First, before I get into that, the international implications of the EU on the US, I wanted to comment on what Jameel and Will said. I don't know if we've ever seen a greater example of the Dunning-Kruger effect than we do with Elon Musk trying to buy a platform and thinking about content moderation, an area where he has no expertise and where, obviously, from his statements, he does not fully understand the complexity of online platforms.

Jameel, you said that we should be thinking of these as the public sphere, that they should be inclusive of voice. I mean, yes, I agree with you that it is this kind of quasi-public sphere, and to me, it's really sad that our public sphere is owned by the private sector, and that these companies can make decisions about the content that they want to carry or not.

And then, also, I want to bring up a law in California that I think has a lot of relevance here, because, as we talk about this amplification and targeting of content, about two weeks ago in California, we passed the Age-Appropriate Design Code Act, and it will take effect in July 2024. It addresses any websites likely to be accessed by children, so those under 18. There's still a lack of clarity on, you know, whether this encompasses all websites that children could access, or only websites making content that we know is specifically targeted to youth.

But I'm bringing it up because that law says covered businesses are prohibited from using a child's personal information in a way that the business knows, or has reason to know, is materially detrimental to the physical health, mental health, or wellbeing of a child. And as we know from Frances Haugen and the Facebook Papers about the harmful effects of Instagram on child mental health, I think these are some of the issues that are actually the intervention points where we can hold platforms more accountable for the spread of content.

We've also been talking a lot about amplification, but not a lot about targeting. Right? Algorithms do both things. They present content through the recommender system to individuals. They amplify through virality, but also, content is distributed in a way that can go directly to the person who is most interested in that topic, and if we're talking about disinformation, hate speech, harassment, the algorithms can target that content to the individuals who are most susceptible to its manipulative appeal. To me, those issues around algorithmic amplification and targeting are gonna be really interesting as we think about how to build in technical mechanisms to mitigate harms.

You want me to talk about the EU?

Justin Hendrix:

Please, if you wanna add something on that, yeah.

Brandie Nonnecke:

I really wanna talk about that, because right when Elon Musk was talking about buying Twitter, one of the things he said was, "I want to, you know, sort of democratize the recommender system. I want people to be able to choose how their content is fed to them in their feed. They should have some control over that." Now, in the EU, we have the Digital Services Act, which does grant individuals that right to be able to have control over their recommender system and understand how it works.

In the US, we've had legislation proposed that would give people the power to say, "I do not want my personally identifiable information to be used in the recommender system." Again, while this sounds great in theory, I think in practice, it's a whole other can of worms, because are people really going to tweak the algorithm of their recommender system to feed them content of diverse, healthy viewpoints? Or are people going to tailor their recommender systems in a way that reconfirms their pre-held beliefs, their biased and prejudiced viewpoints?

You know, I think that we need a lot more research, and to tie it back to the beginning of this conversation on transparency, I think that actually opening up platform data, fostering collaborations between industry and academia, journalists, and civil society on these issues, is critical, because with any of these proposals, we have to think about what some of those unintended spillover effects might be.

Justin Hendrix:

So, when you think about these court decisions that could come, you know, anytime in the next several months, when you think about the various legislation, when you think about the Digital Services Act coming into effect, the internet could look very different in 2023, 2024. I just want quickly to go around to the three of you. What's your threat level? Are things much better or much worse a year from today? Do you suspect that the outcome of some of these court cases that we've talked about today could potentially radically transform the internet in the way that some observers suggest? Will, perhaps I'll start with you.

Will Oremus:

It's hard to predict. I think that some of the proposals, some of the laws that have been passed, some of the arguments that are being made against Section 230, as I've said, would have some fairly dramatic consequences if implemented. That doesn't mean they won't happen. You know, we saw the Supreme Court overturn Roe v. Wade earlier this year. That's gonna have dramatic consequences. It happened. I think, to some extent, there will be a response. There'll be adjustments made. There'll be reconsideration. There will be backlashes.

The internet is adaptable, and the internet finds a way. And I think that the internet's gonna look a lot like the internet in the years to come, one way or another. You know? It's gonna be a place where people find ways to spread ideas that are offensive. It's gonna be a place where people find ways to spread ideas that are valuable and that might not spread otherwise. There are gonna be platforms that try to keep things clean and feeling safe, and, you know, succeed and fail to different degrees.

There are gonna be ones that allow absolutely anything, and users will vote with their feet, to some degree. One of my big concerns in recent years, one of my big interests, has been the power of a few internet platforms over the public sphere. I'm interested-- and this is a little bit tangential to our discussion today-- but I'm interested in how that plays out. I mean, we were all concerned about Facebook being a monopoly three to five years ago. Now, you see TikTok ascendant and Facebook on the wane a little bit. Does the market correct itself on its own, or do we need more guardrails with antitrust to keep a few companies from dominating the public sphere?

That's something I'm really interested in, but I think there could be some dramatic consequences in the short term. In the long term, I think, you know, the internet will survive.

Justin Hendrix:

Job security for your beat. Jameel, I'll come to you. Are you similarly-- I don't know if that was a note of optimism from Will, but it sort of sounded like one.

Jameel Jaffer:

I definitely agree that it's unpredictable, and I also agree that, you know, the internet is resilient in the ways that Will described. I do worry about the margins, that, like, there is this kind of culture war battle, largely fought by people who have not actually thought very much about the internet or platforms or free speech, and they're just sort of waving flags. Like, Section 230 is a kind of flag, and it may be that the internet looks, you know, pretty much like the internet looks today, three years from now.

But, at the margins, if the platforms suddenly think that they're now liable for terrorist speech, or that they can be held liable for recommendations in certain contexts, then I feel pretty confident that the speech they're gonna take down is going to be the speech of minority groups. We've seen that already with the fight over terrorist speech and the implications for pro-Palestinian speech.

That's just the way the rules get implemented: the platforms get worried about anything, not just anything that falls within the scope of the new liability, but anything that somebody else might suggest falls within the scope of the new liability. And so, there's this kind of zone beyond the new liability in which controversial speech gets taken down, and the result will be that the speech that is really most valuable in a democracy gets squeezed out, which is the speech at the margins, legitimate political speech that is very controversial.

It's that speech that will get taken down, whether it's Black Lives Matter speech or pro-Palestinian speech or trans rights speech. It's that kind of stuff, or even conservative speech that is, you know, legitimate political discourse but extremely controversial. That's the stuff that will come down, and I think that some people will celebrate that. I see that as a loss for our society.

But it is very unpredictable and the unpredictability is both a threat and an opportunity. It is a kind of moment where, for better and worse, we are going back to first principles and we're gonna try to figure out, like, what is it that the framework for public discourse-- or legal framework for public discourse-- should look like? And everything is up for grabs, and that is both really scary and maybe also exciting.

Justin Hendrix:

Brandie, a last word to you.

Brandie Nonnecke:

I think that there are a lot of really interesting issues here, and I think that we need to also give a shout out to the Federal Trade Commission. I feel like they're doing a lot of work that's actually helping to move the needle in the right direction as we think about platforms and the harms that they cause. Unfortunately, not to be a pessimist, but I do feel like we're trying to move toward this ideal utopia that can't exist online. But at the same time, I think by pushing toward that vision of utopia, we can have these incremental wins that can improve the internet for all.

My biggest concern is that we need to make sure that we don't put in place legislation or regulation that inadvertently backfires and undermines platforms' own ability to make a good faith effort at mitigating harmful content. And in that process, I'm gonna bring it back to transparency: we really need more transparency, because these platforms, oftentimes, are a monopoly in their certain area of social media, and there is a lack of visibility on what's actually happening on the platform.

So, I think because of their position in the market, they may be compelled or should be compelled to be more transparent through transparency reports or opening up data to researchers and journalists.

Justin Hendrix:

Well, certainly, and I say this for all four of us, the opportunity to talk about these issues, I'm sure, will come again and again and again over the next few months, and I hope it will happen in this context again. So, I thank the three of you for joining me today.

Jameel Jaffer:

Thank you.

Will Oremus:

Thanks, Justin, so much.

Brandie Nonnecke:

Thank you.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
