Olivier Sylvain Wants to Reclaim the Internet from Big Tech
Justin Hendrix / Mar 29, 2026
Audio of this conversation is available via your favorite podcast service.
This was a landmark week for tech accountability in US courts. Juries in New Mexico and California delivered verdicts finding tech giants Meta and Google liable for harms to young users on their platforms, decisions that are projected to open the door to more lawsuits alleging that social media creates addiction or endangers kids.
Today’s guest sees these developments as positive and in line with the types of thinking he believes will help improve the internet. Olivier Sylvain is a professor at Fordham Law School and the author of a new book titled Reclaiming the Internet: How Big Tech Took Control—and How We Can Take It Back, published by Columbia Global Reports.
I had the chance to speak to him about the book at Book Culture, a bookstore on 112th Street in New York City.
What follows is a lightly edited transcript of the discussion.

Attorney Mark Lanier speaks during a news conference after the verdict in a landmark trial over whether social media platforms deliberately addict and harm children at Los Angeles Superior Court, Wednesday, March 25, 2026, in Los Angeles. (AP Photo/William Liang)
Media clips:
The New Mexico Department of Justice won a landmark trial against the parent company of Facebook and Instagram, Meta. The tech giant was found liable for misleading users about the safety of its platform and endangering children.
The jury ultimately sided with this state, awarding New Mexico $375 million.
A Los Angeles jury today finding Meta and YouTube negligent in the design or operation of their platforms and saying the companies failed to warn users of the dangers of their services. The jurors found that the tech giants' platforms harmed a young user with features that made her addicted and led to mental health problems.
This is something that was unimaginable even a few years ago because these companies have enjoyed such powerful legal protections until now. Now this new legal theory opens up an entirely new can of worms, one that consists of more than 1,600 cases that are now going to come down on one or more of these companies.
Justin Hendrix:
This was a landmark week for tech accountability in US courts. Juries in New Mexico and California delivered verdicts finding tech giants Meta and Google liable for harms to young users on their platforms, decisions that are projected to open the door to more lawsuits alleging that social media creates addiction and endangers kids. Today's guest sees these developments as positive and in line with the types of thinking he believes will help improve the internet. Olivier Sylvain is a professor at Fordham Law School and the author of a new book titled Reclaiming the Internet: How Big Tech Took Control and How We Can Take It Back. It's just out from Columbia Global Reports. I had the chance to speak to him about the book at Book Culture, a bookstore on 112th Street in New York City. Thank you very much.
Olivier Sylvain:
Thank you, Cody.
Justin Hendrix:
And thanks to all of you for being here tonight. We're going to have a conversation between the two of us, and then we're going to open it hopefully to some questions from the audience, and hopefully we'll get a lot of good energy from this audience. But I have to ask you a question just to start, which is who here thinks there's something wrong on the internet? Anybody?
There seems to be something off about the internet lately. Well, this book is intended to get at why, and hopefully to frame up some ways that we might go about addressing the various problems that Olivier posits in this book. There are a lot of words in this book that I found myself looking at and thinking, these feel like the themes: words like "distortion" and "perversion" and "engagement" and "perverse incentives," and things of that nature. There are a lot of words like that in here. What's your diagnosis of what's wrong on the internet right now?
Olivier Sylvain:
Oh, you got it right, drawing out those words. Before I start, though, I do want to thank all my dear friends and family who have shown up. It's really nice to see you, and to see people of all ages here as well. I'm also grateful to you, Justin, for doing this with me. This is not the first time you and I have had conversations, and I'm grateful that this will be a podcast for you all. So the words that you've pulled out mean, I think, to signal that the principal companies, the antagonists, if you will, in my account, have invoked a very romantic story about free speech and innovation and perverted it to shield their business practices from public scrutiny. That's the argument in the book. One of the reasons I've intervened, though, is that many people are writing about this kind of stuff, to be sure. I call the companies antagonists because I think the protagonists in all of this are not the big tech companies. I think the failure is ours. It's policymakers' failure, and this is something that we should be able to turn around as a result.
Justin Hendrix:
So you say early on in the book that you think that both the Left and the Right get certain aspects of social media wrong, get certain aspects of their diagnosis of what's wrong on the internet wrong. Can we go through them both? What does the Right get wrong about social media and the internet and what are its woes?
Olivier Sylvain:
Well, I do want to say that there are opportunities for bipartisan reform, and I know we'll get there. But for a long time, people on the Right have lamented a woke mind virus on social media. They're concerned about censorship, they're concerned about censorship of conservative views, and they're concerned about the distribution of power: that there are coastal elites who are deciding how people communicate and what they communicate to each other. And the heart of that concern is something that resonates with me. But I also do say that they get it wrong, at least because the evidence is plain that conservative voices are not the ones that are suffering online, and because in 2026, the main platforms, the so-called platforms through which people share content, are managed or controlled by people who are right-leaning or who have newly discovered their right-leaning positions, Facebook and Meta in particular in this regard.
So I think that they overstate their concern about censorship, given that most of them are now managing and controlling online content. And empirically, the most popular accounts on Facebook and other social media and YouTube tend to be right-leaning sources, so that's the other reason. It's not just the managers and owners of these companies; it's also that the people who are posting, the most popular ones, are likelier to be right-leaning.
Justin Hendrix:
It seems like most of the social science stacks up to suggest that in many ways platforms do favor right-wing interests when it boils down to it. That seems to be the empirical record that we have.
Olivier Sylvain:
Yeah, I think that's right, but it's not necessarily by design that they're pushing a right-wing agenda. I think we can talk more about this. It is an indifference to the kinds of material that get distributed and go viral. And as it turns out, bigotry, because of the nature of it, is alarming and draws people's attention, whether it's someone who believes in the bigotry or someone who doesn't. So I think it's rather that the engagement model, you mentioned one of the words in there, drives attention because it pumps out content that is likely to raise people's temperature. This is also an empirical fact.
Justin Hendrix:
Okay. Now, what about the Left? What does the Left get wrong about its diagnosis of what's wrong on the internet? What's wrong with social media?
Olivier Sylvain:
Yeah. I mean, I do have a beef with the Left in the book, but to be plain, a lot of it is directed at this laissez-faire approach to moderation that the companies employ. On the Left, there is a real preoccupation with user control and moderation techniques. So there was a time when Twitter actually invested a lot of energy in bringing in academics, bringing in all kinds of people, to come up with designs that add friction to slow the circulation of bad content. The belief was that if they did that, they could still optimize engagement, but it would be engagement that would be well-meaning and healthy. My beef with this is that it's a misdirection. The companies are not in the business of tending to our health. They're in the business of optimizing engagement, and the Left's concern, for the most part, doesn't attend to the incentives that are driving the companies to do what they do.
Justin Hendrix:
Okay. So we've got a bit of a critique of some popular points of view on the Right about what's wrong on the internet, and a bit of a critique of the perspective of the Left. What's really wrong, from your point of view? You're a law professor, so we've got to allow him to take us down that path.
Olivier Sylvain:
You don't have to allow me to do that.
Justin Hendrix:
It's our constitutional order, it's the First Amendment. What are the other points of diagnosis that you see as setting the incentives for the internet to go so badly?
Olivier Sylvain:
Well, the 1990s are the starting point for my historical account. I go further back in the book and talk about the origins of the user empowerment model, but it's really in the 1990s that people of all kinds are excited about the possibility of transformational technology: technology that would empower users to do whatever they want, to disintermediate incumbents in all kinds of industries, in politics and in media and music. And a lot of that comes to pass. The language that people are using, again, is about promoting free speech and innovation. And that gets baked into law; it gets baked into First Amendment doctrine. The prominent case, Reno v. ACLU, sets out a pretty broad protection for people to speak freely online, and there's much about that opinion that I like, but it doesn't distinguish different kinds of applications from others.
It just said that the internet is one big platform for democratization. There's really deep purple prose from Justice Stevens in that opinion. And then Congress, actually the year before Reno v. ACLU is decided in 1997, so in 1996, passes a statute that, as you know, has occupied a lot of my time. It basically shields companies that ostensibly traffic in user-generated content, platforms that promote user-generated content. And again, there's something about this that is very attractive.
Justin Hendrix:
So this is the famous Section 230 of the Communications Decency Act, which just celebrated its 30th anniversary, to some folks' chagrin and to many folks' delight?
Olivier Sylvain:
Yeah, to many people's delight. My best friends are Section 230 advocates. Like I said, it promotes a sense of user engagement when companies don't have to worry about being held accountable for posts from other people. The idea is that if you're a platform, you can allow and support the distribution of all kinds of content and not have to worry about being liable for it. We see versions of this kind of protection in other areas of law, but not as dramatically set out by Congress. And then a year later, a court reads the statute very broadly, allowing these companies to traffic in user-generated content, make matches with people, make recommendations to people, and have all of that immune from public scrutiny and legal liability.
Justin Hendrix:
Okay. So we're at this stage: the incentives have been set, and effectively we've got court challenges, various attempts to hold the tech platforms to account, many of which fail at the feet of Section 230. We end up constitutionalizing this laissez-faire approach, you say?
Olivier Sylvain:
Yeah. So the First Amendment theory that's in Reno v. ACLU evolves in a bunch of cases that set out a very broad protection for users like you and me, but also for companies. The most recent articulation of this came just a couple of years ago in a case called Moody v. NetChoice, where the Supreme Court said that platforms get to choose the kinds of content that they distribute, that they have a First Amendment right to do that. That is consonant with other ways of thinking about commercial speech and corporate speech; editorializing is a protection that these companies have. The Section 230 doctrine metastasizes as well: all kinds of activity, even algorithmic targeting, is protected under Section 230 on the theory that the stuff that's getting targeted is user-generated content, right?
You mentioned incentives. What emerges in the early 2000s is the discovery of personalization and targeting, and this for me is something that Section 230 enables, because the companies can pretend that they are mere platforms that neutrally facilitate contact between users, when in fact they're engineering experiences. Facebook really is the innovator in this domain, delivering content and, most importantly, serving advertisements, because it has a technology that makes that possible. YouTube also gets really good with the recommendations it serves. All of this is beyond public scrutiny because the companies purport to be engaged merely in the delivery of user-generated content; meanwhile, they're engineering experiences all along.
Justin Hendrix:
Okay. So we're up to this next phase, maybe the early teens now, 2010, 2013, 2014. What's going on in the debate in the United States, and how is this laissez-faire approach effectively getting baked in?
Olivier Sylvain:
Right. So when I call it laissez-faire, it's because the government's hands are off and there's no legal accountability for the companies that purport to traffic in user-generated content. And there are happy stories here too, right, depending on your perspective. The Obama campaign discovers the possibility of promoting movements through social media, and this is happening on the Left and the Right, but I think that is one of the more celebrated examples of what the technology makes possible. But what is also happening is the development of techniques that are, again, far from public scrutiny, and that are not just targeting content. The companies are now matching people based on the preferences they've expressed, but also matching them on grounds that I don't think many of us would be excited about: matching, say, terrorists who want to plot an attack, or targeting content because it alarms people and keeps them engaged. This is also part of the business model.
And I think a lot of people start getting really concerned about this, and there are some pretty dramatic cases. The one I'll mention involves Armslist, a website that, as the name suggests, allows people to buy firearms online. You might see where this is going. People are able to buy unregistered weapons on this site too, but this is a website that is trafficking in user-generated content. One of the things they have is a backroom, I think it may even be called the backroom, where people can actually buy unregistered weapons. The survivors of a woman who was killed by a domestic abuser bring a case arguing that Armslist should be held responsible for having allowed the domestic abuser to foreseeably buy an unregistered weapon and use it to kill their loved one. And the company invokes Section 230 to say that it should not be held accountable for having set this up, because it is just trafficking in user-generated content. And that argument prevails.
That, by the way, is the run-of-the-mill in the Section 230 doctrine. This is no longer a protection that promotes innovation and free speech; this is a protection that is insulating what lawyers call the least cost avoider, insulating the people actually most responsible for making these contexts possible and keeping them away from legal scrutiny. And this starts happening in the early 2010s, into 2015 and 2016. There are a series of cases that raise similar problems, addressed to discrimination and other unlawful content, and I start writing about this stuff around 2016, 2017.
Justin Hendrix:
Okay. And let's just talk about what else falls within what the tech companies effectively claim is speech, or claim could be protected from liability under Section 230, or potentially protected as their First Amendment activity. What does this umbrella include?
Olivier Sylvain:
Yeah, so one of the most dramatic cases, and a case that signals that courts realized what they'd done and started turning the corner, involves Snap. It's called Lemmon v. Snap, and it arises out of California, where two young people have Snap and have the speed filter on. Some of you may know what the speed filter is: it just monitors your speed in real time. So what kind of people do you think would love to know how fast they're going in any given moment? Two teenage boys are in a car, one of them driving. I think one is 17 and one is 19. The 17-year-old is driving and tracking their speed on a side road; they're going 113 miles per hour, and something terrible happens. They both die.
Their parents bring a case against Snap alleging that Snap facilitated this harm, that the boys wouldn't have gone this fast but for the speed filter. And Snap says, no, we are shielded from liability because of Section 230. And you know what the user-generated content in this situation is? It's their speed. They're sharing the speed information with everyone who follows them, and the company argues that it should not be held responsible. What's really interesting about this case is that the lawyers decide not to go after the expressive content being shared by users, but the design feature as alleged by the families, and that's the speed filter. The company doesn't need to know what any particular speed is or how fast people are going; it just knows that the speed filter is going to do something, and likely, foreseeably, have young people drive really fast. And this negligent design theory prevails over the Section 230 defense. The court says, "You can't be serious. This is not the kind of content that the drafters of Section 230 envisioned."
This happens maybe six or seven years ago, and it's a signal for many courts around the country that 230 should not be protecting all kinds of user-generated activity.
Justin Hendrix:
And what's happened since then? I mean, we've seen various court rulings just in the last couple of years, it feels like the cracks in the wall are continuing to emerge?
Olivier Sylvain:
So you may remember that when I talked about Reno v. ACLU, I was talking about the ways in which the Supreme Court described the internet as one big public square. The same argument emerges in the context of this Section 230 litigation, right? The story is that the internet is one big public square where people can communicate freely. What the courts are now doing is being far more careful about the nature of the applications, the nature of the design. So a newsfeed is not going to be the same as a randomized chat room where an adult can meet a young person. It's not going to be the same as the algorithmic delivery of an ad. It's not going to be the same as the limitless scroll, design features that actually mean to hold attention and are unrelated to content. And now courts are looking specifically at these applications, not just at the happy notion that these are internet companies.
This is a good turn. We look at the speed filter for what it is, not just because it's on the internet. We have to look at the nature of the design, the nature of the service, and more and more courts are doing it. You and I talked briefly before we started about these trials against the large social media companies. And just today, a jury voted, what was the number? I think 375 million?
Justin Hendrix:
$375 million judgment.
Olivier Sylvain:
A judgment for the families whose kids have effectively been addicted to social media. Earlier in the litigation, the court rejected the Section 230 defense because the design features at issue, the infinite scroll, autoplay, are unrelated to content. Those are things that should not get Section 230 protection. This is a good development, and my book means to signal that we are turning the corner.
Justin Hendrix:
Okay. But it's not an uncontested turning of the corner.
Olivier Sylvain:
No.
Justin Hendrix:
And by the way, one thing I'll say about this book is that a lot of books I read almost immediately feel like they were written far in the past, because the publication cycle takes forever, but this one feels very current. You mention current debates over generative AI and what's happening with the move to AI in general. But the tech firms are not done making the argument for Section 230 defenses. There's still a lot of effort to effectively include within that liability shield the output of chatbots, for instance, and other functions of generative AI systems. Where might we head on these things?
Olivier Sylvain:
Yeah. So they are invoking Section 230, and the First Amendment more and more as well. But on the publication of the book, I have to credit Columbia Global Reports for the fact that it doesn't feel that old. The turnaround in publication for a book like this, and for the books they publish, is short, and that's good because it gets things out sooner. So yes, the companies continue to invoke these defenses; even AI companies are. None of these stories are happy stories, and I'm mindful of that, but I do want to flag the ways in which the companies invoke this protection, the First Amendment protection included. One of the more frightening cases, which some of you will have heard about, is the story of Sewell, who died because of his interaction with a chatbot that Character Technologies developed. I don't want to get into the details of this, but it's enough to say that he was lured into the act of self-harm.
And the company, in defense to this case, says that it is shielded from liability because the First Amendment protects it. They say that they're standing in the shoes of their users, that AI is just a tool for expression, and so any information that Sewell was getting was really his own expression and ought to be protected under the First Amendment. These are the companies standing in the shoes of their users. But of course, this is a deeply perverse argument in this circumstance. They are explaining that young Sewell was the architect of his own death. His mother brought a case, and the court that heard this First Amendment argument rejected it. It's a design theory. The design theory is the one that has been prevailing against these defenses in the First Amendment setting and the Section 230 setting, and the design theory here is that they developed this chatbot before it was ready.
They didn't test it. They spent less than a couple of weeks testing its effectiveness. There were internal reports, as in these social media cases that we're hearing about from New Mexico and California, that these services were not yet ready, that people were likely to get harmed, especially if they were inclined to mental health concerns. And they launched it anyway, because there was so much money to be made. And the argument is that this is in promotion of innovation and free speech and change; meanwhile, people are being harmed along the way. I've described one of the more terrible circumstances. There are a handful of cases now being filed that are not just about chatbot harms, about luring people to do bad things that they would not otherwise do; there's a whole range of activity involving deepfakes and other kinds of consumer harms.
Justin Hendrix:
There are a lot of folks out there, civil society groups, et cetera, who defend Section 230, who defend the tech firms' First Amendment rights to the hilt, because they believe that ultimately these are lines that shouldn't be crossed, that eventually this crosses over into restricting free speech and expression, that eventually you break down some of the barriers that exist even to government censorship, that the internet has provided us the ability to express ourselves. Arguments are made that if you repeal or sunset Section 230, you break the internet; that this will create a scenario where the tech platforms will over-censor speech to such an extent that it'll sanitize the entire place. You don't seem so concerned about that; you seem concerned in the opposite direction. But what would you say to those folks who may listen to this and see it the other way?
Olivier Sylvain:
Well, this is why history is helpful. Look at what car manufacturers were saying when people were talking about seat belts. Look at what people were saying about the distribution of drugs. Sure, there's a lot of innovation and creativity that goes into the creation of these products, but there are guardrails and there are mechanisms for holding companies accountable. As I say in the book, and it's a more provocative line that other people have picked up on, there's one other industry that is arguably less regulated, and that's the gun industry. I don't think that's a great comparator for these companies. The reason they aren't regulated is because of Section 230's limit on our ability to inquire into whether they've done something wrong.
Just to be clear, Section 230 and this First Amendment defense are not about the merits of whether the companies are causing harm; they're about whether we can even look into that possibility. So people who worry about the demise of the internet, I think, are overstating their claim. And by the way, there are a couple of cases early in the evolution of Section 230 where the 230 defense failed, where judges and litigants argued that this would be the end of search, for example. And as far as I can tell, search is still around. I think what we need to find is a proper middle ground, right, where we promote innovation and develop new techniques, new AI technologies, and applications that consumers like and are better off for, but we ought to have some mechanism for public accountability, and right now we don't have one for consumer-facing applications online.
Justin Hendrix:
There are so many things I wish we could get to, including more news that's just occurred. We could probably talk endlessly about some of those things, and we'll get into some of them in the question and answer, I'm sure. But I want to get to your conclusion, because I run a publication called Tech Policy Press, and we're always interested in tech policy solutions, the ways we might be able to fix these things. You say at the end, and I'll just read a little passage: "It will not be enough simply to reject the laissez-faire mindset. The harder task will be for policymakers to enact and enforce laws that recalibrate the incentives driving companies to design services despite their foreseeable consumer harms and social costs." And here's the part I wanted to get to: "Yet Congress today appears paralyzed or cravenly beholden to the president, making meaningful legislative reform unlikely anytime soon." And then you say, "Still, I outline here several ways legislators could curb the incentives that should..." I feel like this is my life: feeling like we're pushing a rock up a hill, there are extraordinary challenges, the system seems a bit broken and unlikely to produce the types of reforms we want, and yet still you outline several ways that legislators may be able to ...
Olivier Sylvain:
I'm glad that you picked that up. So one answer is: what's the alternative, right? Do we give up? I know you don't think that we should give up, but that's one possible alternative. And time is not going to stand still. I'm hopeful that things will change in Congress in spite of how things may seem now. The reason I have hope is that there have been bipartisan efforts to reform Section 230, and there has been bipartisan interest in attending to big tech's control over the online experience. Now, we talk about how the Right and the Left, or I talk about how the Right and Left, come at this differently, but there have been bills, and I've testified on bills, that are co-sponsored by Republicans and Democrats. I think we're in a situation now where this Congress in particular seems especially craven toward the person in the White House. I don't count on that continuing, and I don't know what the alternative is.
So I have a prescription for reform that isn't just about reforming Section 230. I think one of the more important things, as you know, for me is attending to the business model, and that is the unlimited capacity to collect information and monetize it. Most other jurisdictions that the US likes to compare itself to have some kind of data protection or privacy law, limits on the ways in which data is used. I believe that the business model on which these companies generate so much income is the ad-based model that allows them to target content and hold people's attention for the purposes of collecting that information. So data protection law is another thing for which there has been bipartisan interest. I think we're on the cusp of it, Justin, just not right now.
Justin Hendrix:
You also point to the need for disclosures, risk assessments, and researcher access. This is something we spend a lot of time on at Tech Policy Press: the extent to which these tech platforms are black boxes, and how it's impossible to get inside them and even understand the harms.
Olivier Sylvain:
Yeah. Well, the big reason is Section 230 and the First Amendment. Like I said, it is a block to inquiry. Just to be clear, it's a defense, right; it's not even on the merits of whether somebody has negligently designed something or has in fact facilitated terrorist collaboration. This is a protection that blocks the inquiry into the black box. And so all these measures that open up the black box, as Frank Pasquale uses the term, would be good. Researcher access is good because most of us don't really understand how these companies operate. A couple of people in the audience might, but not all of us do, and so we do want researchers in particular to tell us what's happening. But there should be third-party, independent auditors or researchers that have access to what's going on.
Justin Hendrix:
Okay. I'm going to come to your questions in the audience in just a minute, but I guess I'll ask you a question about your next project. You've put this into print now. You've in many ways, I think, summed up the history of how we got to this moment. You've laid out a set of things that you'd like to see happen in the world. What's next? What's the next big project for Olivier Sylvain?
Olivier Sylvain:
I appreciate this question. It's a hard question. I am thinking of another project that would sound more in memoir, about the value of our public spaces. I think about growing up in the city with my brothers in particular, and I'd like to tell a story about that. Public spaces are invaluable for a functioning democracy at a time when we feel especially polarized. And I think about subways in this regard, the forced encounter. I want that kind of forced encounter online, like on the subway. I wonder whether we could, as tech journalists and researchers say, transcode those kinds of architectural designs online, and I'm thinking about how to do that.
Justin Hendrix:
Okay. So another book, and another gathering here at Book Culture, maybe, when that comes along. Are there questions from the audience? Anybody who wants to ask a question? Yes, please. I'll repeat the question just for my podcast listeners; we are recording this for later on. Effectively, is there opportunity in coordinating with European regulators, or maybe with state or local officials that might be able to take action? It seems to me that state and local officials especially are having a huge impact right now in the United States. What are your hopes that the European Union's massive projects to take on tech regulation will ultimately change the game?
Olivier Sylvain:
I'll start with the EU just because I want to end with a good story in answer to your question. The EU has been entrepreneurial in thinking about the regulation of big tech. The Digital Markets Act, the Digital Services Act, the AI Act, all these require risk assessments and evaluations of potential harm; it's the risk-based model. And there is some hand-wringing in Europe about whether it has been stifling innovation and causing a competitive disadvantage, and so there is that conversation happening there, but there is resistance in Europe nevertheless. There's a recent opinion out of Amsterdam just this past week that is a pro-algorithmic-choice decision, where you should be able to make your own decisions about the way your feed flows, and not the social media companies.
But you don't have to read many newspaper headlines to see that there's a deep hostility in this administration toward regulating tech, and the White House has been unashamed about being big tech's biggest lobbyist. And what's more, the president has invoked his tariff policy to bully Europe into moderating their implementation of these laws. So in the short term, I don't see a lot of movement on that front. I'm hopeful, but I just don't see a lot of movement. And even before this administration, there has been an allergy to bringing in EU models of governance. And I say that as someone who finds a lot of inspiration in them, by the way, but it's just a fact of what's happening here. There is good news with regard to the states. There are Left and Right, Red and Blue states that have been innovating on all kinds of ways of attending to big tech. Justin was generous enough to feature a piece I wrote in December that I think speaks to this moment also.
New York has some really interesting laws, California has some really interesting laws, Texas, Florida, Utah, laws that are not about content moderation but about data protection and kids' safety. These are interesting innovations, but this president has invoked a theory, and not just this president, the big tech companies have argued that these state interventions are disruptive of innovation, that they create a patchwork that makes it hard for them to attend to different standards. If this were the only setting in which states regulated differently, there would be something to it, but the states regulate differently in all kinds of other ways. This argument has nevertheless prevailed in the current administration, and there have been moves to preempt state regulation. Just this past Friday, in fact, the White House issued its most recent salvo on this. So the stakes are high here, and what's interesting is that Red states and Blue states are pushing back against this effort by the White House and big tech to suppress the state efforts.
Justin Hendrix:
And this effort in particular is around a kind of national AI legislative framework; there's been more than a year of effort now to try to create a moratorium on state law around artificial intelligence in particular. And we didn't have time to talk about this, but there's also the onslaught from the Trump administration against European tech regulation, which has gotten quite deep, even down to visa restrictions on individuals who have been part of the architecture of the Digital Services Act or other things regarding content moderation.
Olivier Sylvain:
Right. It is transatlantic, to be sure, but I think we can also say that this administration, and I guess I'm leveling a lot of blame there, and I think it ought to get a lot of blame, has also targeted domestic companies. The contest with Anthropic very recently is suggestive of that. I don't know how many of you paid attention to this, but Anthropic had the opportunity to enter a huge $200 million contract with the Department of Defense but refused to allow the Department of Defense to use its services for mass surveillance and automated weaponry. Those were the safeguards Anthropic said its services would need to abide by, and the Defense Department refused to accept them. And the reason I mention that, you say this government has gone after Europe in harsh ways; well, they're now seeking to designate Anthropic as a national security threat so that other people can't do business with them.
This is remarkable, right? This is not about commerce, this is not about innovation, this is about a grab for power, as far as I can tell.
Justin Hendrix:
Is there another question? Yes, sir.
Audience Question:
Hey, thanks for the talk. So in your talk, you focused on how tech companies have operated and how government has mostly failed to respond properly. I was just wondering, in your analysis of things, do you feel there's room for thinking about the public as a player in this? Something I've grappled with over the years is the notion of revealed preference, this long history where, say, Facebook introduces the feed, people say we hate the feed, but then start using Facebook more and more, or surveys where people say social media is bad for democracy, but we keep using it more and more. So Facebook, or whoever, is just saying, we're giving the people what they want. If all they wanted to see was Emily Dickinson poems, that's all we'd have.
Justin Hendrix:
So this is a really good question. In fact, I was talking to somebody about this literally today, about the third-person effect and the extent to which that notion might extend to social media or AI. Everyone says AI is bad for society, but adoption keeps going up. Everyone says social media is bad, but I sure do love my Instagram feed. What do you make of it?
Olivier Sylvain:
Yeah. So are you an economist? You said revealed preference.
Audience Question:
Not an economist.
Olivier Sylvain:
No. Okay. Well, you don't have to be an economist. I use that term too.
Audience Question:
I feel it is the most succinct phrase to...
Olivier Sylvain:
Yeah, yeah. The material evidence that people want something is the thing they buy or the thing they do; it's a revealed preference, and there's a kind of paternalism in telling people what they ought to experience. That's the undercurrent of the point. This reminds me of the privacy paradox, right? People complain about giving their information away, and yet they do give it away. There's a study the Pew Research Center did last year where a vast majority of people said they are extremely or somewhat troubled by social media harms, and yet social media is still very popular. So I think it's a nice observation. I actually push back a little bit in the book in this regard.
I think looking to user interests and user preference is a misdirection, because we do know that there are material harms in spite of the things that people do. One of the things that the Federal Trade Commission and other regulatory agencies have looked into is dark patterns, where people feel compelled to give information or do things because they feel they have no option not to, and the services are designed this way. And this is what governments are supposed to do, right? Governments are supposed to stand in the shoes of people who cannot adjudicate a problem alone. They can see the consumer harms much more systematically. I think of seat belts as an example of this. Car safety is something the government had to mandate in order for it to happen, in spite of the fact that people were dying and cars were exploding. So I think there's a role for government, an essential role for government, particularly when the asymmetries between individuals and the companies are so dramatic. The question is who else is going to stand in the shoes of consumers when they can't access or understand the ways in which the services operate.
Justin Hendrix:
Do we have another question, maybe from this side of the room? All my questions are from this side so far. Anybody over here? Yes, sir.
Audience Question:
I was curious to understand the block to inquiry point that you made earlier, if you could explain that?
Olivier Sylvain:
The block to inquiry?
Audience Question:
Section 230 preventing inquiry into understanding what's going on. You referred to the systems as a black box. Can you just talk about that a bit more?
Olivier Sylvain:
Yeah. Okay. Thanks for that. I could be clearer about this. This is going to sound like a lawyerly answer, but I don't mean it to be. So someone files a case. Let's say there's a social media harm and it's contested; let's say it's not obvious that Facebook facilitates terrorist attacks when it connects users, but there's a theory, and that's a case that has actually gone to the Supreme Court. A complaint alleges it. At this point, the company invokes Section 230 in a motion to dismiss. That is before any discovery. Before we find out anything about the nature of the complaints, they can say, as a matter of law, you can't sue us because Section 230 says you can't treat us as a publisher; we are just trafficking in user-generated content.
So there's no discovery, which is typical in litigation, and we never get to find out how much these companies are responsible. That's the block. It's a black box; even in spite of the access that some parties get, it's still a black box because it's not all explainable, but the discovery mechanism is essential in the American system for redressing harm. And that's an outcome that the advocates of Section 230 are okay with. They're worried about litigation slowing down innovation and free speech. This block is supposed to be there. I argue that it's time we narrow the block, if not get rid of it altogether. Is that responsive?
Justin Hendrix:
Do we have a last question from the audience? Anybody? Yes.
Audience Question:
Following up on what you were just talking about, is there any discussion happening about a preemptive sort of working group, as opposed to litigation trailing behind the problems that are showing up in this black box scenario? Is there any way to formulate a group that could put things in place in the industry ahead of time, things that would work as stopgaps to prevent some of this from happening?
Justin Hendrix:
So there's a lot of talk about building codes and design codes, and some of that is good thinking from people who've been in industry and think that maybe it's possible to establish things upfront that might allow for people to build safer experiences. Some of that's been codified into attempts at laws. What do you think? Can we potentially arrive at effectively blueprints or building codes, architectures that would help us create safer designs?
Olivier Sylvain:
As Justin knows, this is something that has been going on for a while. The industry has actually set out moderation standards for things like bigotry and misogyny, and a long time ago it committed to them, but the point here is that these are companies that can shift off of these positions, and that's what's happened over the past couple of years. In the months after the election, as you likely know, all the major companies shifted their moderation principles, even if they once abided by moderation techniques that we think would be good. Now, maybe we shouldn't turn to the companies; maybe you're saying we should turn to code writers, designers, well-meaning people inside the companies. That too has been at play, but for me, it has proven to be wholly insufficient. One of the points I make in the book, and that I've made for a while now, is that there's one principal way in which we engender good behavior, and it's one of the most obvious, but it is completely absent in this setting, and that is law: having companies abide by law. There is no mechanism for these companies to be held accountable under law because of Section 230. That, I think, is the way we engender good behavior, not counting on them to regulate themselves.
Justin Hendrix:
So I'm not sure we've solved the problems of the internet tonight, but this book is a blueprint for various solutions, things that hopefully we'll see play out in different ways over the next few years. If you're keen to learn more about these issues, I'd certainly recommend this book, and I hope maybe you'll wander over to Tech Policy Press on occasion as well. But let's thank the author for speaking to us tonight.
Olivier Sylvain:
And let's thank the moderator/interviewer, Justin. Thank you very much, Justin.
Justin Hendrix:
And a big thanks to Book Culture for hosting us and the folks at Columbia. Thank you so much.