Transcript: Senate Commerce Hearing on 30 Years of Section 230
Cristiano Lima-Strong / Mar 19, 2026
Senate Commerce Chairman Ted Cruz (R-Texas) delivers opening remarks at a Section 230 hearing on Wednesday. (Screenshot)
The United States Senate Commerce Committee on Wednesday held a hearing to mark the 30th anniversary of Section 230, the embattled liability shield for digital services that for years now has faced heavy fire in Washington.
While the session was intended to review how those legal protections have been deployed and what changes could be made to them, it also served as a retrospective on how the Section 230 debate has evolved on Capitol Hill — and how it’s been endlessly stuck in the mud.
Just over five years after President Donald Trump first publicly pressured Congress to repeal Section 230 altogether, senators from both sides of the aisle largely rejected the notion that the law should be rolled back or sunset, an idea that has been floated repeatedly in recent years.
“I’m concerned that a full repeal or sunset would lead platforms to engage in worse behavior, to engage in more censorship, to protect themselves from litigation,” Chairman Ted Cruz (R-Texas), who himself has previously called for a total repeal, said in his opening remarks.
But nearly eight years after Congress last made actual changes to Section 230, there was little to no sign of a consensus among lawmakers on how to tweak it further, if at all.
Still, lawmakers yet again expressed their desire to find ways to pare back the protections, which for decades have shielded digital services from liability over third-party content and their good faith efforts to moderate that material.
“There’s this idea that Section 230 [was] somehow perfectly written 30 years ago, as if it is a constitutional provision, as if it is the only federal statute that must not be touched. That’s preposterous,” said Sen. Brian Schatz (D-Hawaii). “How we reform the law matters, of course, but to suggest that any reform would be catastrophic makes no sense.”
During the hearing, lawmakers heard from two critics of the law: Matthew Bergman, an attorney for the Social Media Victims Law Center, which is leading litigation against major social media companies over child safety concerns; and Americans for Responsible Innovation president Brad Carson, a former congressman who has argued that the law has let tech companies “escape accountability” and warned Congress not to repeat the mistake with AI.
It also featured testimony from Daphne Keller, a former Google lawyer who serves as the director of platform regulation at the Stanford Program in Law, Science and Technology; and Nadine Farid Johnson, who serves as policy director at the Knight First Amendment Institute.
In her testimony, Keller stressed that while Section 230 is “not sacrosanct,” she has yet to see “other options that I think would be an improvement so far.” Keller urged lawmakers to consider alternative measures to improve the quality of internet experience, such as by passing data privacy protections or measures to boost interoperability or middleware options.
Farid Johnson echoed calls to pass data privacy and interoperability measures and said lawmakers should also pass safe harbors for researchers studying social media. If lawmakers are to consider any changes to Section 230, she said, it should be to make the protections “conditional” on companies complying with other standards, such as privacy and transparency.
Below is a lightly edited transcript of the hearing, “Liability or Deniability? Platform Power as Section 230 Turns 30.” Please refer to the official audio when quoting.
Sen. Ted Cruz (R-Texas):
Good morning. The Senate Committee on Commerce, Science and Transportation will come to order. Welcome to all the witnesses.
Within our lifetimes, the Internet has impacted nearly every aspect of the world and our daily lives, especially how we communicate. It was only a short time ago that speech and newsworthiness were controlled by a handful of TV networks and giant newspaper publishers. If you held a position they didn't want to print, or one that wasn't consistent with their political views, it didn't get said.
The Internet changed that, allowing anyone to bypass these gatekeepers and shape public opinion with their own views. The Internet also created a new way to communicate anonymously and at greater scale, through blogs, message boards, and comment sections.
But with opportunity came legal questions. The law wasn't written for the Internet's ease and anonymity. Holding a platform liable for the illegal speech of another person threatened potentially to overwhelm early Internet companies with ruinous lawsuits that would predictably result in less online speech.
So, Washington explicitly adopted a light-touch regulatory approach with the enactment of the Telecommunications Act of 1996. Congress included Section 230 to ensure that online platforms would not be liable for the illegal speech of another person. It did so to preserve a competitive free market. And the text of Section 230 explicitly recognized that the Internet provided a, quote, "forum for a true diversity of political discourse."
But 30 years later, it seems that Big Tech has now become the new gatekeeper, the new speech police. If you disagree with a particular view, Big Tech doesn't answer that with more speech. They do not try to persuade. They do not debate. They simply make the view they disagree with disappear, and they silence you. That should scare everyone.
What's even more concerning is how the government hijacks Big Tech's powers to shape online discourse and to suppress dissenting views and undermine free speech. This isn't fiction. As I detailed in my report and hearings last year, the Biden administration weaponized the Cybersecurity and Infrastructure Security Agency to bully Big Tech to censor lawful speech on COVID and on elections, disproportionately muzzling conservative voices. We should recognize and celebrate how the free market can cause a course correction against Big Tech censorship. Elon Musk's purchase of Twitter was one of the most important steps for free speech in decades. It showed that the censorship regime is not inevitable and it can be challenged in the marketplace and shifted to allow the kind of diverse viewpoints that Section 230 envisioned. Congress must also consider every constitutional tool we have to prevent social media from harming Americans, especially children, while not incentivizing Big Tech censorship. The TAKE IT DOWN Act, which I led together with Senator Klobuchar, demonstrates that Congress can pass targeted legislation to protect children and adults online. The law prohibits non-consensual intimate images, including such images created with artificial intelligence, and it creates a notice-and-takedown process for victims, all without amending Section 230 or chilling lawful speech protected by the First Amendment.
I've also introduced several other legislative reforms to actively support free speech online, including the TERMS Act, which stops online platforms from weaponizing their terms of service to silence Americans and deny them access to essential products and services. And I will soon be introducing the JAWBONE Act to stop government agencies from bullying platforms into silencing the American people. The same reasons why Congress enacted Section 230, to prevent liability for a different person's speech, are still relevant. And I'm concerned that a full repeal or sunset would lead platforms to engage in worse behavior, to engage in more censorship, to protect themselves from litigation. I also don't believe, as some of my colleagues have suggested, that we should use Section 230 reform to silence more lawful speech or to turn the government into the arbiter of truth. But we should consider whether reform of Section 230 is needed to encourage and to protect more speech online and to stop Big Tech censorship.
No government official, regardless of party, should have the power of censorship. I agree with John Stuart Mill that the best solution for bad ideas and for bad speech is better ideas and more speech. We don't need to use brute government force because the truth is much more powerful. I turn to Ranking Member Schatz.
Sen. Brian Schatz (D-Hawaii):
Thank you, Chairman Cruz, and thanks to our witnesses for being here today. It's been 30 years since Congress passed the Communications Decency Act, which included Section 230, but since then, everything has changed. In 1996, the internet as we know it today was still taking shape. We were still using dial-up modems and pagers and paper maps. DVDs weren't even available yet, much less smartphones, social media, and AI. The world was a totally different place and our laws reflected that. In the years since, we've all come to rely on the internet in virtually every part of our lives. We've learned about its benefits and harms from personal experience, and we've had to keep up with every new iteration of ever-changing technology. And yet, even as people have adapted to the technology, tech companies have demanded that our laws don't change. There's this idea that Section 230 was somehow perfectly written 30 years ago, as if it is a constitutional provision, as if it is the only federal statute that must not be touched.
That's preposterous. How we reform the law matters, of course, but to suggest that any reform would be catastrophic makes no sense. We're the commerce committee. We reform laws. Second, nobody thinks the internet as it currently functions is without problems, whether you're a young person who's grown up with the internet or you're an older user still figuring your way around. The internet can often be a terrible place. It's chaotic and depressing and confusing. And given the chance, people would love something better. But really the main reason to be having this conversation about Section 230 is that Congress can and should respond to changing circumstances. We do it all the time. For instance, we reauthorize the FAA every five years, even though the concept of flying has changed a lot less in the last five years than the internet has changed in the last 30. Similarly, we amend and update laws to protect consumers from scams.
For so long, tech companies have used Section 230 as an excuse to avoid taking meaningful action to protect users, but especially kids, from egregious harms, harassment and abuse, frauds and scams. It's not that they don't know what's happening or even why it's happening. It's that to do something about it would be to hurt their bottom line. And so long as federal law provides a shield, why even bother? But this committee is not powerless here. We don't simply have to accept the terrible outcomes as a fact of modern life. We can work together and fix the law. I have a bipartisan bill called the Internet PACT Act; the Chairman, Senator Klobuchar, and others have amendments to Section 230. Section 230 is not one of the 10 Commandments. It is not a constitutional provision. It is a federal statute and we are lawmakers. And this idea that we can't touch it, otherwise internet freedom incinerates, is preposterous. And so I'm looking forward to a good constructive bipartisan discussion about what reforms are possible. Thank you.
Sen. Ted Cruz (R-Texas):
Thank you. And I would like to enthusiastically reiterate what Ranking Member Schatz just said, that Section 230 is not one of the 10 Commandments. Or for fans of History of the World, it's not one of the 15 commandments either. I'd now like to introduce our witnesses for today. Our first witness is Daphne Keller. Ms. Keller is the Director of Platform Regulation at Stanford Law School's Program in Law, Science & Technology. She previously served as associate general counsel at Google, and her writing has been widely featured in both law journals and popular newspapers. Our second witness is Nadine Farid Johnson, Policy Director at the Knight First Amendment Institute. Ms. Farid Johnson was formerly an American Foreign Service Officer and today teaches at Columbia University's School of International and Public Affairs. Our third witness is Matthew Bergman, founding attorney of the Social Media Victims Law Center.
As an attorney, Mr. Bergman has worked extensively in product liability law, and his firm has filed lawsuits against several major social media companies alleging addictive design flaws. Our final witness is Brad Carson, President and Co-Founder of Americans for Responsible Innovation. Mr. Carson served as the representative for Oklahoma's Second Congressional District from 2001 to 2005, and as acting undersecretary of defense for personnel and readiness under the Obama administration. Ms. Keller, you're recognized for your opening statement.
Daphne Keller:
Thank you very much. Chairman Cruz, Ranking Member Schatz and members of the committee, thank you for the opportunity to speak today. I'm a lawyer with over 25 years of experience in practice and as an academic in the field of platform regulation, working in the U.S. and around the world. I'm also a native Seattleite. I'm, alas, not a native Hawaiian, but I am married to a Texan. And I firmly believe that some of the most fundamental issues before us today are nonpartisan. And I heard a lot of similar sentiments from the two of you, which I find encouraging. We're here to talk about a law that is deeply unpopular today, Section 230, but it may still be, as Churchill said of democracy, the worst option except for all the other ones. I agree 230 is not sacrosanct. It's not even well written, but it happened to strike a balance that has served its purpose, and I do not see other options that I think would be an improvement so far.
Any proposed legal change, no matter how well-intentioned, should be vetted for two things. First, is it constitutional? Second, would it make things worse? I'll speak to the Constitution first. The internet is full of speech that most of us in this room would find offensive and likely consider harmful or dangerous, but a lot of that speech is also protected by the First Amendment, including a great deal of hate speech and mis- or disinformation. Lawmakers cannot tell platforms to remove or demote this speech, and eliminating Section 230 wouldn't change that or give them an obligation to remove the speech. The Constitution also limits important details about laws attempting to hold platforms liable for users' genuinely illegal speech, things like fraud or defamation or obscenity. In a mid-20th-century case, the Supreme Court rejected strict liability for booksellers because incentivizing them to purge their shelves would harm us, the reading public.
It was not about the rights of the booksellers. It was about the analogs of those of us today who depend on the internet to share our messages, to post our book reviews, to read restaurant reviews, to engage in political dissent. This is about our rights. Now, let's talk about the second constraint. Would changes to the law make things worse? This is a question about what would actually happen, what platforms and users, and importantly, governments would predictably do in a world without Section 230. That legal change would very likely make the internet worse for user speech rights without, I believe, actually making it any safer. And it would impose legal uncertainty and expense that today's incumbent giants could survive, but their smaller rivals could not. We have a lot of data to predict what happens when platforms are held liable for the speech of their users.
Platforms receive huge numbers of false allegations under laws like the DMCA here or the Digital Services Act in Europe from people demanding removal of perfectly legal speech. Governments do this, companies do this against their competitors, and platforms have strong incentives to simply comply. The idea that the First Amendment alone would cause platforms to stand up for their users' lawful speech when doing so is expensive and inconvenient is simply wrong. We can also make informed predictions about harms to users from bad content, from things like pro-eating disorder content or pro-suicide content. Platforms take down enormous quantities of this material now under their own voluntarily adopted rules. The point of Section 230 was to encourage them to do so and make sure that they could look at the content and make editorial decisions without risking liability. In an alternate timeline without Section 230, platforms that moderated content at all would have reason to purge anything remotely risky, and the other option would be to leave everything up and tolerate all kinds of harmful or illegal material.
Finally, losing Section 230 would, I believe, be catastrophic for competition. I wrote in my testimony about Veoh, a platform that was very similar to YouTube that got sued on similar grounds, won on similar grounds, but went bankrupt in the process. And so now we don't have what could have been a major competitor to YouTube. In a 230-less world, we should expect years or decades of legal uncertainty and litigation expense, the kinds of things the giants can withstand and their smaller rivals can't. None of this is to say that Congress's hands are tied. There are ways to make the internet a better and safer place without setting new state-imposed rules for speech. Robust federal privacy protections would be a great start.
My fellow witness has written about how middleware and interoperability could offer the exciting promise of a more diverse and competitive internet in which speech preferences are controlled by users themselves. Section 230 can help make that a reality. Section 230 has proven value, and Congress should not abandon it for proposed solutions that have every chance of making things worse. I thank you for your time today and look forward to your questions.
Sen. Ted Cruz (R-Texas):
Thank you. Ms. Farid Johnson, you're recognized.
Nadine Farid Johnson:
Thank you, Chairman Cruz, Ranking Member Schatz, and distinguished members of this committee. Thank you again for the opportunity to testify today. The issues before this committee today are immensely important ones. All of us here agree that the digital public sphere is not working for Americans or for our democracy, and the question is what to do about it. Section 230's protection is vital to free speech online, even if there are difficult questions about how far its protection should extend, questions that the courts are working through right now. While Section 230 should not be treated as sacrosanct, repealing it would do little to address the problems that all of us are most concerned about, and in some ways it would make those problems worse. The better approach would be to pass structural regulation that would protect users' privacy, allow them to engage with platforms on their own terms or leave them more easily, and make the platforms more transparent and accountable to the public.
If Congress is going to amend Section 230, it should make its protection conditional on platforms' compliance with transparency, privacy, and interoperability requirements. Section 230 effectively gave platforms the ability to moderate user content without having to fear that doing so would give rise to liability. With that protection, platforms now moderate content in many different ways, including by filtering or suppressing spam, pornography, and to varying degrees, hateful speech. While there is inevitably disagreement about what speech platforms should moderate and how, it is indisputable that the platforms would be unusable if they did not engage in moderation in ways that Section 230 was meant to protect. Some of the speech on the platforms is seriously harmful, denigrating, dangerous, polarizing. The Supreme Court has interpreted the First Amendment to deny to the government the ability to punish hateful and dehumanizing speech or speech that promotes outrage. So it is important to recognize that the platforms would be protected for publishing most of this speech even in the absence of Section 230 because the First Amendment would most likely be understood to protect it.
Without Section 230, because the First Amendment protects harmful, outrageous and sensational speech, the platforms would not be required to remove it, but they would be motivated to take down speech that might plausibly give rise to liability, especially potentially defamatory speech. Rather than risk liability for such content, they would almost certainly remove it, including allegations that turn out to be true, thereby depriving users of access to socially valuable information critical to our public discourse. There's an opportunity now for Congress to act by putting forward legislation that is aligned with First Amendment rights and would foster a better online environment for people who use social media platforms, simultaneously minimizing the harms of social media while liberating users from the monopoly control over and suppression of their online speech. We have three proposals. First, lawmakers should establish legal protection for those who study the platforms in the public interest.
Understanding the online experience of users, how algorithms target content and how these decisions shape public discourse and ultimately our democracy is a necessary step toward informing the public about how the platforms operate. But today, those who study the platforms are under the threat of serious legal liability. Passing a researcher safe harbor would improve public understanding of how the platforms are shaping society. One model for doing so is the Knight Institute's safe harbor proposal, a modified version of which was incorporated into the Platform Accountability and Transparency Act. Second, legislators can address several data privacy-related issues that affect users' experience online. Platforms are successful in maintaining user engagement because they use the extensive information they gather about a user to recommend content. Legislators could require platforms to clearly inform users as to what data the platforms collect about them, how they use that data and with whom they share it, and could also limit what information platforms collect.
Congress can also pass legislation that prevents platforms from selling user information to data brokers. These privacy-enhancing efforts would be a monumental step forward to protect all Americans, but especially minors who engage online. Third, Congress can and should attack the platforms' monopoly control over public discourse. The platforms have been extremely successful at maintaining control over the social media industry. In practice, this denies users meaningful choice over the platforms they use and it makes it nearly impossible for new platforms to compete with the incumbents. The most direct way to address this problem would be to establish a requirement of interoperability, which would enable users to take their data and social networks with them when they leave a platform. We support these mandates as standalone bills. We also believe that conditioning Section 230 protections on these mandates can be done in a way that not only respects the limitations of the First Amendment, but actually promotes the values that underlie it. Thank you again for the opportunity to testify this morning. I look forward to your questions.
Sen. Ted Cruz (R-Texas):
Thank you. Mr. Bergman, you're recognized for your opening.
Matthew Bergman:
Thank you, Chairman Cruz, Ranking Member Schatz and esteemed members of this committee. It is an honor to be here to address the current state of Section 230. We appreciate the bipartisan leadership of this committee in addressing the carnage that is being inflicted on American young people through the deliberate design decisions of social media companies to target kids and enhance their profits over the safety of those kids. And we commend this committee for its successful reporting out of the Kids Online Safety Act last session, which passed the Senate with 91 votes. As Justice Thomas observed, courts have interpreted Section 230 to confer sweeping immunity on some of the largest companies in the world. And in order to convey the human cost of this expansive definition, I've asked three families who've been directly affected by Section 230 to come before this committee and for you to hear their stories.
Sitting to my left is Tammy Rodriguez of Enfield, Connecticut. She's the mother of Selena Rodriguez, forever 11. In the words of her sister Destiny, "Selena could light up a room as soon as she walked in, and you knew the second she walked in because with the light came a lot of noise." Selena was so addicted to social media that she became physically violent when there was any effort to turn off the machine or take it away. As depicted in the written testimony, she was targeted through the deliberate design decisions of these companies with online sexual abusers and predators. And you can see the conversations that they had with her that would make any parent blanch. She developed such despair and depression that she took her life on Snapchat, and recorded a song that was provided to her on Snapchat. Forever 11.
Toney and Brandy Roberts are from New Iberia, Louisiana. They're the parents of Englyn Roberts, forever 14. Englyn was the youngest of five children. She was vivacious, charismatic, and sassy. In the words of her father, Toney, she made every day seem like Christmas. Toney thought that he was monitoring his daughter's online content. He just didn't know how to do it. She was being pulled into a dark world, and as you can see in the written materials, developed more and more suicidal ideation based on the deliberate design decisions. And additionally, in the materials that we've submitted to this committee, you can see a chart showing on a daily basis how much she was being targeted after hours, when her parents thought she was safe at home, asleep. She took her life by mimicking a depiction of suicide online that was targeted to her. And as recently as this morning, that material is still on Instagram.
Jennie DeSerio is the mother of Mason Edens, forever 16. In the words of his stepdad, Mason was an all-American football player, but there was a whole other side to him, with a gentle, loving, positive energy about him. As a young high school boy, he broke up with his girlfriend and sought affirming content on TikTok to mend his broken heart. And instead, without asking for it, he was targeted with suicide videos encouraging him to take his life with a shotgun, which is what he did in his parents' home. And I've included clips of those things that TikTok targeted him with. And I would ask every parent in this room to look at those videos and ask whether a company that targets children with this content should be accorded unfettered immunity from basic principles of liability and human decency. These cases have nothing to do with protecting speech. They're about the deliberate design decisions of companies to prioritize profits over the lives and safety of children. Yet in every one of these cases, social media has tried to dismiss these cases based on Section 230.
Section 230, as written, sought to maximize user control over what information is received by individuals. The statute explicitly sought to empower parents and embolden law enforcement, and yet this original intent has been thrown by the wayside by these interpretations. In the words of Justice Thomas, "In the platforms' world, they are fully responsible for their websites when it results in constitutional protections. But the moment that responsibility could lead to liability, they can disclaim any obligation and enjoy greater protections from suit than nearly any other industry." We ask that this committee reform Section 230 to conform to its original intent, to require that tech companies follow the rules that every other company in America follows, and that the duty of reasonable care not be a duty from which only one segment of our economy is exempt. Thank you.
Sen. Ted Cruz (R-Texas):
Thank you, Mr. Bergman, and thank you for telling your clients' stories. And I want to say to the moms and dads who are here, thank you for being here. Thank you for standing up and remembering and fighting for your kids. I will say, as the father of two teenage girls, you are living every parent's nightmare. And I don't know a parent of adolescents or teenagers who is not terrified of the tragic forces that targeted your children. So thank you. Mr. Carson.
Brad Carson:
Chairman Cruz, Senator Schatz, and members of the committee. Thank you for inviting me here today to testify as we mark the 30th anniversary of Section 230. I also want to give special thanks to Chairman Cruz for his determined and prescient work to pass the TAKE IT DOWN Act, a law which ARI supported and which will make a difference in the effort to hold Big Tech platforms accountable. As we discuss harms to children online in the wake of Section 230, the TAKE IT DOWN Act is a commendable effort to correct the historical course. 30 years ago, Congress passed Section 230 to address a narrow problem: whether online platforms could moderate content without assuming publisher liability. Today, the law is widely criticized for enabling unchecked harms online. As we confront artificial intelligence, we should ask ourselves a simple question: will we repeat the mistakes of Section 230 or will we learn from them?
There are two competing interpretations of how Section 230 went so wrong. One view is that Section 230 was ill-conceived from the beginning and that ordinary tort and First Amendment case-by-case development through the common law would have produced a more nuanced and flexible body of law. Rather than allowing the development of rules through our legal system, however, Section 230 froze answers in place to questions the law had not yet fully considered, much less answered. Another view is that the statute was defensible, but courts interpreted it far beyond what Congress intended, providing platforms with legal immunity, not only for user content, but extending legal protections to include significant algorithmic amplification and product design choices. Regardless of which is more accurate, both assessments recognize that the resulting legal regime has been unable to hold platforms accountable in proportion to the harms they have enabled. Families whose children were exploited online or harmed by algorithmically amplified content have too often discovered that the law offers them little recourse.
Section 230 may be the 26 words that created the internet, but they are also the 26 words that have visited irrevocable harm on generations of children. I recount the history of Section 230 because Congress now faces a structurally equivalent moment regarding artificial intelligence. Some have suggested freezing state law developments while enacting no comprehensive federal regulation of the technology. Section 230 is what I would call a meta-law, a law that determines who governs an emerging industry rather than how that governance would occur. Enacted when the internet was still in its infancy, Section 230's consequences have proven extremely difficult to correct once the industry matured. In practice, the very first meta-law on a technology often becomes the last major law, because it determines who has the power to resist future corrections. I should be clear that Section 230 should not be interpreted to immunize the provider of a generative AI system for harms caused by system outputs.
Section 230 protects platforms from liability for content, as the statute says, "provided by another information content provider." That framework assumes active users and passive hosts. Generative AI systems do not fit that model. A user provides a prompt, but the company designs the model, selects the training data, fine-tunes the system, and deploys it with parameters of its choosing. The resulting output is not third-party content; it is the product of the AI system that the company created. Even though that answer does seem obvious, Congress should clarify the law before lower courts extend immunity further than the text permits. The most relevant lesson of Section 230 is that Congress should not establish broad, essentially unchangeable immunity for an industry that it does not yet understand and whose future development can scarcely be charted.
The federal preemption of state laws related to artificial intelligence would repeat Section 230's tragic history, replete with all of its errors and without any of its corrective measures. Section 230 has caused many forms of legal stagnation. I would urge the committee not to repeat these same mistakes as we embrace new technologies yet again. Thank you for your time, and I look forward to your questions.
Sen. Ted Cruz (R-Texas):
Thank you to each of the witnesses. Let's start out for Ms. Keller and Ms. Farid Johnson. With the benefit of 30 years of hindsight, what is the worst or most offensive use of Section 230 as a liability shield that you've seen?
Daphne Keller:
I will give you two answers. One is about racial discrimination. The second is about government censorship. The first is a case called Vargas. That was the one decision out of a cluster of cases. And the general allegation was that Facebook was offering racially based targeting, gender-based targeting, et cetera, for ads for housing and employment and credit, which is prohibited by federal law. And what makes that a not-a-230 case, in my mind, is that the user uploaded something totally legal, just an apartment listing, and Facebook added the thing that made it violate the law, which was the racial targeting. So that's my exhibit A.
And I should repeat, Facebook allegedly did... We don't know exactly what happened. The second is a case called Sikhs for Justice versus Facebook. This was American court plaintiffs alleging that Facebook had silenced them based on pressure from the Indian government. And the court ruled that under Section 230, the platform had the right to do that, which I think is doctrinally correct, but it's very sad. Section 230 exists to protect users. It is there to give a spine to the spineless if they want one. And having platforms back down when they could have done better is unfortunate. But thanks to Section 230, hopefully another platform can come along and do better.
Sen. Ted Cruz (R-Texas):
Ms. Farid Johnson.
Nadine Farid Johnson:
Thank you, Senator. I'm actually going to use the example of a case that I think demonstrates both some of the cynicism in seeking 230 protection when it's unwarranted and the limitations of the provision. There's a case called Lemmon v. Snap, in which bereaved parents sued Snap, the parent company of Snapchat, for its allegedly negligent design of a speed filter. That speed filter encouraged users of the app to drive at excessive speeds.
Snap responded to the complaint by seeking 230 immunity, and the case went to the appellate court. The appellate court noted that a platform can still face liability for its provision of content-neutral tools where the liability stems from the platform's own acts, such as, in this case, designing and making available to users the speed filter and the corresponding reward system. It wasn't about the content posted by the users; it was about the actions of the platform itself. Thank you.
Sen. Ted Cruz (R-Texas):
So let me ask a follow-up. A key motivation for Congress passing Section 230 was to incentivize a flourishing marketplace of ideas online. As I explained in my opening statement, I want to find ways to curb Big Tech censorship so that we have more speech, not less speech on online platforms. For Ms. Keller and Ms. Farid Johnson, how could Congress constitutionally modify Section 230 to protect or incentivize more speech, not less?
Daphne Keller:
Well, here again, I have two answers. The first is about jawboning and pressure from government. The second is about empowering users and diversifying our options. I'll get through the first one and you tell me if you still have time for a second one. I didn't love the Biden administration pressure that was illustrated in the Murthy case. That said, having been on the receiving end of pressure like that, the tone in that record looked like the tone that government officials from all parties sometimes take in high-handed demands that platforms conform to their content preferences. In that case, the plaintiffs got unprecedented discovery and still couldn't establish any actual causation that led to their speech being suppressed. And this led to the Supreme Court issuing a really problematic ruling on standing that will make it harder for the real victims of real jawboning to get into court in the future.
We are in an era right now of jawboning that is unprecedented in my lifetime. Brendan Carr is at it again. We see him pressuring news outlets now. We are seeing highly politicized enforcement at the FTC of laws that are supposed to be neutral consumer protection and antitrust laws. All of this leads to unprecedented vulnerability of our speech and communication to this kind of pressure because, as you pointed out, Chairman, as both of you pointed out, all of our speech is very dependent on these big private companies right now. We are seeing this playing out on a screen in front of us with platforms. Maybe it's also taking place in back rooms, but right on TV, we can see back in August of 2024, President Trump threatened to jail Mark Zuckerberg. In January of 2025, Zuckerberg very publicly announced changes to Facebook's and Instagram's speech policies to conform with the current administration's preferences.
He did this in this hostage video. It was very strange. And also in January he settled, for $25 million, a lawsuit from President Trump that Facebook was going to win. And when President Trump was asked if these changes were a result of his threat to jail a CEO of a platform, he said, "Probably." This is crazy. This should not be possible in today's America. The First Amendment should preclude it. Any work that you can do, or perhaps are doing, to make it more possible to get into court and to get past the standing restrictions from Murthy would be extremely important. But in the meantime, as I mentioned, Section 230 can give smaller platforms more of a spine.
Sen. Ted Cruz (R-Texas):
And, Ms. Farid Johnson, you can answer briefly because our time has expired.
Nadine Farid Johnson:
I would just say briefly, Senator, the way we see the best way to amend 230 is to condition its protections on the transparency, limited data privacy, and interoperability requirements we suggest.
Sen. Ted Cruz (R-Texas):
Thank you. Ranking Member Schatz.
Sen. Brian Schatz (D-Hawaii):
Thank you, Chairman. Thank you to all of the testifiers. I just wanted to say, at the risk of getting the Chairman in trouble in his home state, he's doing an extraordinary job of leading this on a bipartisan basis. And also specifically on the question of jawboning, I think without his leadership, there would've been fewer guardrails on the current jawboning. But I also take your point, Ms. Keller, that the jawboning occurred in the previous administration as well. And the fog of war, the urgency of protecting people from the COVID-19 pandemic, does not entirely justify the way that federal government officials interacted with private sector companies. I'm just glad that we now have, in three dimensions, in real-time, examples of both parties doing this. So it's no longer theoretical that the door swings both ways in Washington and this is going to bite us all in the butt and we have to fix it, we now know it.
So I'm just pleased that Senator Cruz has sort of set the platform for us to work on this on a nonpartisan basis, not a bipartisan basis. Mr. Bergman, your lawsuits are interesting to me. I'm not a lawyer, but I'm particularly interested in the expansive view of what the platforms view 230 as providing them: a shield. And what I think is your theory of the case, which is, "No, the AI, the design choices, all of those are a product and they are subject to regular product liability." And I'd like you to articulate how that distinction is made and whether there's anything we can do statutorily to constrain the regular 230 immunity. Someone uploads some content, they can't be sued for libel, that's fine, but now we're in an age where half the stuff on these platforms is AI anyway. A lot of it is harming kids. And it seems to me that we need to clarify, either through the court system or through the legislature or both, what we actually originally meant by 230. So I'm interested in your thoughts.
Matthew Bergman:
Well, yes. And I think Section 230 remains an important protection for online communication and for the free exchange of ideas. I think what's important is that Section 230 adhere to its original intent, which was to immunize publishing activity. And the impetus for this came from my mentor, Judge O'Scannlain on the Ninth Circuit, in Barnes versus Yahoo, where he established a distinction: where an individual is seeking to hold the company liable for traditional publishing activity, bad content moderation, having bad stuff online, putting bad stuff online, that is and should remain subject to Section 230. But as the court held in Barnes, even if the same harm results from a different theory of liability, in Barnes it was promissory estoppel, in Lemmon versus Snap it was negligent design, that's a separate source, a separate duty, and that claim should be allowed to proceed.
Our cases are based on the premise that these platforms are products, and that they are designed based on deliberate design decisions. They target children not with material that they want to see, but material that they can't look away from. They take advantage of and exploit the underdeveloped frontal cortices of young individuals, FOMO, the fear of missing out, and the social anxiety that adolescents have, and they use intermittent reinforcement techniques and very highly sophisticated AI to addict them to their platforms. And the research is showing that it's actually physically addictive. So I think that it is possible. And we've seen this. We just completed a trial and are awaiting a jury verdict as we speak, a case where we brought the matter to the conclusion of trial and were able to draw that important distinction between content moderation, which is protected by Section 230, even if the platforms aren't doing a good job of it, and the deliberate design of defective products.
Sen. Brian Schatz (D-Hawaii):
Perfect. Now, I'm always a little cautious about writing a new statute if I think that the existing statute already provides the pathway, because I don't want to stipulate to us needing to change the law if your essential point of view is that, "Actually, the law just needs to be interpreted properly." And I think you've got it right, but I'd like your advice over time about whether there's anything we can do in terms of clarifying legislative intent, up to and including modifying the existing statute. But again, I don't want to stipulate to the idea that the original law provides such broad immunity to the platforms. So what are your thoughts about the need for the legislature to take action, or should we wait for the court cases to work their way through?
Matthew Bergman:
Well, if we wait for court cases to work their way through, more kids are going to die, so I think things have to happen. I think this committee could clarify the original legislative intent: to encourage the development of technologies which maximize user control over what information is received by individuals, to remove disincentives to the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access, and to ensure vigorous enforcement of federal criminal laws to deter and punish trafficking, obscenity, stalking, and harassment. If this committee were to reaffirm those objectives, along with the very important objective of preserving the free marketplace of ideas, I think we could go a long way.
Sen. Brian Schatz (D-Hawaii):
One final thought. I agree with almost everything you said, but Senator Cruz and I have a bipartisan bill, the Kids Off Social Media Act. The basic idea behind it, besides the obvious point that there's no real use case for a nine-year-old to be on Instagram or TikTok or Snap or whatever it may be, is that this idea of empowering the user to kind of turn the dials in the settings is a fantasy.
If you are a parent, that is a fantasy. You are not going to get access to your kids' phone all the time. The tech companies, the platforms will work around whatever statutory framework we have, and that's why we need a sort of there-is-no-safe-cigarette point of view as it relates to kids whose, as you're pointing out, brains and bodies are not fully developed, so Kids Off Social Media Act. I don't have any objection to empowering users, but I do not think that is going to actually protect kids.
Sen. Ted Cruz (R-Texas):
And I very much concur with what Ranking Member Schatz has said. The Kids Off Social Media Act is one this committee has already passed in a bipartisan way, and I hope we pass it on the floor of the Senate and get it to the President's desk for signature. I will tell a brief story about just how helpless parents are in dealing with this. Several years ago, our eldest daughter was in trouble and we grounded her and took away her phone for a month, which, as parents and teenagers will tell you, is truly a Draconian punishment.
Two weeks into it, Heidi gets a random email from Verizon that didn't make any sense. So we went and dug down more deeply, and it turned out that before she had handed her phone over to us, she had taken the SIM card out of it and had gotten a burner phone and put the SIM card in the phone. And we went to confront her. And at age 14, she sat with her arms crossed in her room, she said, "Dad, you said I couldn't have my phone. You didn't say I couldn't have my SIM card." And I have to admit, I was both-
Sen. Brian Schatz (D-Hawaii):
It's Ted Cruz's kid.
Sen. Ted Cruz (R-Texas):
... annoyed and really proud at the same time, but it does show just how completely outmatched parents are trying to keep up with teenagers with these issues. Senator Fischer.
Sen. Deb Fischer (R-Neb.):
Thank you, Mr. Chairman. The challenge before Congress is ensuring that we have both accountability and the environment for free speech expression online. And Section 230 was designed to meet both of those goals. Ms. Farid Johnson, could mandated transparency about content moderation decisions improve accountability without chilling speech? And really, what's the line here between protecting free expression and systematic harassment online?
Nadine Farid Johnson:
Thank you, Senator. As the Knight Institute has argued in various briefs on this topic, transparency requirements, including disclosure and reporting requirements, are generally upheld under the [inaudible 01:15:12] standard, so long as they are not unduly burdensome with respect to speech. So what that means in plain terms is that there can be laws that require the disclosure of factual and uncontroversial information from a platform, about the terms on which a service is offered, for example, as you were mentioning, and these can be considered constitutional if they are not unjustified. Essentially, what it comes down to is that if Congress can craft a provision that does not unduly burden speech but still provides the transparency you're hoping for, it could pass scrutiny.
Sen. Deb Fischer (R-Neb.):
Is the problem here that we're looking at in the statute itself, or is it with how the courts interpret that? Ms. Keller, we'll start with you.
Daphne Keller:
Oops. Sorry about that. I'm trying to think through the how-the-courts-interpret-it piece because right now the courts are all over the place. There are rulings-
Sen. Deb Fischer (R-Neb.):
Do we need to drill down and be more specific in a statute, which is kind of contrary to what Congress usually does?
Daphne Keller:
I actually think that the courts are in the process of working towards some answers. For example, on this question of design liability and figuring out what you would consider to be the platform's own doing and distinct from user speech, there are a bunch of child safety cases going on, including in multidistrict litigation, where courts are saying, "Well, maybe autoplay of videos counts as design and that's not immunized and it is something platforms could be liable for. Maybe sending notifications counts as design. Maybe infinite scroll counts as design." So they're moving toward answering these questions that I think actually would play out similarly under the First Amendment and 230. I will stop.
Sen. Deb Fischer (R-Neb.):
Do you think we need to look at Section 230 and should it be able to protect platforms when their algorithms are really actively amplifying some really harmful content or only when they host it? Do we have a distinction out there between hosting and creating?
Daphne Keller:
I think the first question is, are we talking about content that is in the lawful-but-awful category, as so much of the pro-anorexia terrible content is? If Congress doesn't have the power to prohibit that content, it also doesn't have the power to tell platforms to change their algorithms and downrank it. The Supreme Court has been very clear in saying that a direct ban on speech and something that merely burdens speech by making it harder to distribute get the exact same scrutiny from the court.
Sen. Deb Fischer (R-Neb.):
If we were to carve algorithms out of Section 230, does that create a workable law, or is it just going to create a lot of litigation?
Daphne Keller:
So this is the question that went to the Supreme Court in Gonzalez and then didn't get resolved. So there is a stack of briefs of very smart people arguing every side of this issue. But what I think is that it would cause the most important real estate on platforms, the places that most users actually go, to be purged of anything with any risk. For example, at the time of that case, at least 70% of YouTube views were from the recommended videos, and it would mean that in that area, YouTube has an incentive to definitely not have any anti-ICE videos, not have any accusations of wrongdoing by powerful people, not have new voices breaking in, and to have that be extremely safe. And I think that would be harmful for the many, many people, creators, advocates who rely on platforms to get their word out.
Sen. Deb Fischer (R-Neb.):
Okay. Thank you very much. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Klobuchar.
Sen. Amy Klobuchar (D-Minn.):
Thank you very much, Senator Cruz, and thank you for holding this important hearing. And of course, our work together on the TAKE IT DOWN Act couldn't be more important. I think I'll start with you, Mr. Bergman. Can you talk about how the just blanket Section 230 immunity has shut the courthouse door to so many parents seeking justice for their children?
Matthew Bergman:
Yes. We have a case involving a 12-year-old girl who, through Snapchat's friend recommendation algorithm, was connected to an online sex predator who was able to utilize the Bitmoji feature to groom this child, sextort her, and meet her, and he raped her. And we brought suit against Snapchat, saying that the technology that failed to provide protection to a 12-year-old girl, and allowed a sex predator to operate at scale and disguise his malevolent intent, was a design decision. That case, unfortunately, was dismissed. The court said, "It's offensive to our conscience, but I have to do it."
Sen. Amy Klobuchar (D-Minn.):
And how would repealing it or making changes to Section 230 create real incentives for platforms to be designed in a way that is safe for users? I just think of an exploding washing machine: the maker wouldn't be protected by immunity, so they get sued and then they make changes. Talk about that quickly.
Matthew Bergman:
Well, Senator, that's the exact point. We just want social media companies to follow the same incentive structure that every other company does. It's reasonable care.
Sen. Amy Klobuchar (D-Minn.):
Mr. Carson, should we continue to think of these companies as merely neutral distributors, as they would like, as opposed to publishers picking and choosing what users view and when? Because the whole idea originally was this distribution.
Brad Carson:
I think the idea that they're neutral platforms is normatively persuasive to me, but it runs constitutionally afoul of how the Supreme Court approaches First Amendment law these days. I do think, in answer both to your question and to what Senator Fischer said, a great reform of Section 230 would be to remove the algorithmic optimization that the platforms do from being seen as another's speech. Right now, the statute doesn't protect your own speech, only another's speech.
So if you were to define, say, what Facebook does to their optimization programs as Facebook's speech, it would still have First Amendment protections and you'd have to litigate those kinds of questions, but it would be their own speech and therefore not subject to 230 immunity. So there is a way to get after what is a very serious problem, which is the use of algorithms, which the Supreme Court has suggested does have some constitutional protection, but these ones that are just about optimization, they don't actually have any expressive value, I'm not promoting speech about [inaudible 01:22:51]-
Sen. Amy Klobuchar (D-Minn.):
I understand.
Brad Carson:
... causes, right?
Sen. Amy Klobuchar (D-Minn.):
So could the TAKE IT DOWN Act, just as we look at beyond Section 230, is there some model we could use? We have the NO FAKES bill to protect people's identities on the AI front and the like. Is that some kind of model we could use?
Brad Carson:
That is the right model. The TAKE IT DOWN Act, and similarly DMCA Section 512, are both, I think, promising avenues.
Sen. Amy Klobuchar (D-Minn.):
Okay, very good. Thank you. Professor Farid Johnson, we need these common sense rules of the road, and Senator Grassley and I had the American Innovation and Choice Online Act, which we actually got through the committee with a very good vote, to pry open competition online as another way to look at this, by stopping anti-competitive self-preferencing.
In your testimony, you argue that Congress should attack the platform's monopoly control over public discourse. Can you talk about how increasing competition and choice online, especially through interoperability, can help address some of the problems that we are talking about today?
Nadine Farid Johnson:
Yes, Senator, thank you. What we're hoping to do here with this type of legislation is to give users more control over their experience. Right now, what happens is when someone logs onto, let's say Facebook, for example, they are given a feed and that feed is what comes to them because of the information that Facebook collects about them and Facebook is making recommendations accordingly.
If somebody wanted to log onto Facebook and be able to talk with certain friends or their grandmother or whoever it might be, but doesn't want to deal with all of that, if we had a robust interoperability mandate, that person would be able to use third-party software called middleware to be able to decide how they want to view that feed. So it's one example of it.
Sen. Amy Klobuchar (D-Minn.):
Okay. Could I just go to you quickly, Ms. Keller? You previously said that middleware, the software Ms. Farid Johnson just referred to, can give users more control over what they see online. Talk about how that solution could help with Section 230, given that almost no major platform allows it. How would opening up these platforms mitigate online harms?
Daphne Keller:
I'm so glad you asked because I actually wish that Senator Schatz were still here to respond to his very valid concern that parents don't want a bunch of choices, they just want their kids to be safe. Part of what middleware would make possible, or interoperability, is competitors who figured out a better way to protect children to offer a one-click option for parents. Bringing in interoperability is a way for innovators to find new and different ways to do these things, so I think it is extremely promising. Right now, people are scared to build interoperable projects because they can get sued and shut down.
This happened to a company called Power Ventures 10 or 15 years ago. They offered a very basic service to aggregate your social media feeds from multiple platforms in one user interface. And Facebook sued them under the Computer Fraud and Abuse Act, which is this anti-hacking law from the '80s, and got them shut down. Similarly, Section 1201 of the DMCA, which is intended to prevent the circumvention of encryption on copyrighted DVDs, gets weaponized this way. So I think there are a lot of opportunities to reform existing laws in ways that open the door to interoperability.
Sen. Amy Klobuchar (D-Minn.):
Okay. These were great answers. Maybe I'll do a few more on the record, but thank you all for your thoughtful work and I really hope we can press ahead with some of these changes. Thank you, Senator Cruz.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Schmitt.
Sen. Eric Schmitt (R-Mo.):
Thank you, Mr. Chairman. Ms. Keller, I assume as a First Amendment advocate, you would agree that the government shouldn't be censoring Americans' speech, correct?
Daphne Keller:
Correct.
Sen. Eric Schmitt (R-Mo.):
And the government can't outsource that censorship to private actors either, can it?
Daphne Keller:
I hope not.
Sen. Eric Schmitt (R-Mo.):
Right. Now, you're the director of the Stanford Cyber Policy Center, which oversees the Stanford Internet Observatory, correct?
Daphne Keller:
No, I'm not the director.
Sen. Eric Schmitt (R-Mo.):
Okay. Do you have a role in that organization at all?
Daphne Keller:
I have a role in the organization, but my main placement now is in the law science technology program at the law school.
Sen. Eric Schmitt (R-Mo.):
Okay. Did you have anything to do with that observatory participating in the election integrity project of 2020?
Daphne Keller:
Not really, but I fully support their First Amendment right to do so.
Sen. Eric Schmitt (R-Mo.):
Okay. Well, let's walk through that. So their role with the Biden administration was to flag content that they thought was offensive, that didn't line up with the Biden administration's view of the origin of the COVID virus, or of the vaccine, or of people's opinions on how the 2020 election went down. And so their job was to flag content that the Biden administration didn't like, feed it back to the Biden administration, and then put pressure on social media companies to take that content down. Is that consistent with your view of the First Amendment?
Daphne Keller:
No, that version of the facts is not consistent with my understanding. I don't think they were doing the Biden administration's bidding.
Sen. Eric Schmitt (R-Mo.):
What were they doing?
Daphne Keller:
They were exercising their First Amendment rights to go talk to the government and say what they thought should happen and to go talk to the platforms.
Sen. Eric Schmitt (R-Mo.):
What is it that they thought should happen? That content should come down?
Daphne Keller:
Probably in some cases, as is their First Amendment right.
Sen. Eric Schmitt (R-Mo.):
Are you defending that? That the government should take that content down and put pressure on social media companies?
Daphne Keller:
There's no government in this scenario I talked about.
Sen. Eric Schmitt (R-Mo.):
Oh, there is. Absolutely. The government outsourced to your university the job of flagging offensive content that they didn't like. And in this country, the government doesn't get to decide what people see or they hear or what they get to say. And your university was right in the middle of all of this. And what I hear you saying is that you actually think that's okay.
Daphne Keller:
I think, again, academics exercising their First Amendment rights to say what they think and say it to government are absolutely protected by the First Amendment.
Sen. Eric Schmitt (R-Mo.):
That's not what was happening. That is not what was happening. The Biden administration charged them with the duty of, "We can't do this all ourselves. We want to start a disinformation governance board, but it got so much pushback we can't do that. So, we're outsourcing to Stanford to flag this stuff that the government agency won't do now because we don't have one. Tell us what is offensive by way of these guidelines, and then we will tell the tech companies, 'That's a nice tech company that you got there. Be a shame if something happened to it if you don't throttle these people or take down RFK Jr., Or take down Jay Bhattacharya, who's now the head of NIH.'" So, I find it a little rich that you're here lecturing us about and espousing the virtues of the First Amendment. And then on the other hand, defending your university's role in the censorship regime that we lived through for four years.
Daphne Keller:
I wasn't there. I can't tell you how those conversations went.
Sen. Eric Schmitt (R-Mo.):
You know what a better answer is-
Daphne Keller:
But I do know the people-
Sen. Eric Schmitt (R-Mo.):
I wasn't there and I don't like [inaudible]-
Daphne Keller:
I can't imagine them doing the government's bidding.
Sen. Eric Schmitt (R-Mo.):
Well, I can tell you, you can read all about it in Missouri versus Biden, a lawsuit that went to the Supreme Court.
Daphne Keller:
The one that you lost?
Sen. Eric Schmitt (R-Mo.):
No, it's actually back. It was sent down to lower court for standing issues. So, we didn't lose the case. And I think it's telling... I actually have no idea why you're here today. You've embarrassed yourself. Your university's embarrassed itself. It's part of this censorship regime, which by the way, is a cautionary tale for the future. And so-
Daphne Keller:
Thank you, sir.
Sen. Eric Schmitt (R-Mo.):
Yeah, it's a cautionary tale. So, Mr. Bergman, I want to ask you, since Section 230 was enacted, are there features that exist now that weren't contemplated maybe with Section 230 that are worth taking a look at? I happen to believe having an open platform for people to share their points of view is very important in this country, but there have been obviously outcomes that are terrible that we're talking about today. Are there certain things that protecting that open platform for people to add that pressure release valve to speak their mind, but especially as it relates to kids or other things that maybe weren't contemplated in the late 1990s that we could address?
Matthew Bergman:
Well, absolutely. Netscape was the biggest online platform when Section 230 was enacted. But yes, Senator, there are specific features: the infinite scroll, the like feature, the streaks, the push notifications that are designed to addict kids. And again, not by showing them what they want to see, but what they can't look away from. If a 12-year-old girl really wants to access anorexia content, God forbid, that's a very sad situation, but I don't think it gives rise to liability. On the other hand, if the platforms, only in order to maintain an addictive relationship and sell more ads, feed that child information she's not looking for, I think that's a distinction that can be drawn and can preserve the vibrancy of the internet as a free marketplace of ideas.
Sen. Eric Schmitt (R-Mo.):
Thank you. Mr. Chairman, I would just point out that I've filed the COLLUDE Act, which would say basically that if you're a social media company and you get into the content moderation business and you violate somebody's First Amendment rights, there'd be a private right of action and you'd lose your Section 230 protections. I think that'd be a very important reform for us to consider. Thank you.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Baldwin.
Sen. Tammy Baldwin (D-Wisc.):
Thank you, Mr. Chairman. So, jawboning is defined as informal, often coercive efforts by government officials to pressure private companies into moderating or removing content that they cannot legally censor directly. I was listening with interest to the previous senator's questioning and certainly feel like there are examples we can draw upon from multiple administrations, including this one. President Trump has attempted to rewrite history by forcing museums to remove content. Brendan Carr, the head of the FCC, has threatened the licenses of broadcasters, including very recently, who air unflattering news about this administration and this president.
If used properly and validated by experts, the internet can be a place that people turn to when the government limits or attempts to censor information. Think about now and recent events. It might be where somebody accesses information on abortion or LGBTQ identity or scientific research on climate change or your rights against discrimination in the workplace. So, I want to talk about the upsides. I also want to talk about the downsides of Section 230. But Ms. Keller, can you explain whether and how Section 230 works to protect access to information that people rely on to make informed decisions?
Daphne Keller:
Section 230 is essential in protecting that access. You referenced information about reproductive healthcare and abortion under current Texas law and under a bill that I believe is still pending there. It would be very easy for someone to sue a platform because a young woman researched this information. So, I think you are completely right to connect those two issues.
Sen. Tammy Baldwin (D-Wisc.):
Thank you. Mr. Bergman, your testimony and the work that you do clearly highlight the harms that can come from the internet and technology, especially for children. I want to thank the parents who have brought their stories and their children's stories here with them today and are actively taking on these social media companies. Your stories clearly highlight the immediate need to remove harmful content, especially content directed towards children from these sites. So, I'd like to just, first of all, go down the line and ask all of the witnesses, does Section 230 prohibit platforms from engaging in content moderation? Ms. Keller?
Daphne Keller:
No.
Sen. Tammy Baldwin (D-Wisc.):
Ms. Farid Johnson?
Nadine Farid Johnson:
No.
Sen. Tammy Baldwin (D-Wisc.):
Mr. Bergman?
Matthew Bergman:
It does not.
Sen. Tammy Baldwin (D-Wisc.):
Mr. Carson?
Brad Carson:
No.
Sen. Tammy Baldwin (D-Wisc.):
All right. So, Ms. Farid Johnson, in your testimony, you highlight that Section 230 does not protect platforms from harmful design decisions they make. When is a design decision expressive and when is it not?
Nadine Farid Johnson:
Senator, that question is being worked through by the courts right now, and it's an incredibly challenging question. I think one of the reasons that we at the Knight Institute have been looking to structural changes and structural regulation is because, as Ranking Member Schatz said earlier, the law doesn't always keep up with technology. And as we try to figure out what is expressive and what is not, and we're waiting for the courts to help us with that, it's almost like a game of whack-a-mole, right? We can actually look at some of the root causes of this, where we know these terrible things are happening and people are being drawn in because the platforms know so much about them; they kind of know what we're going to do before we know we're going to do it. If we have more research into what's happening, if researchers are protected, if we have specific data privacy protections, understanding what's being collected and how it's being used, those are the issues that are going to help us, I think, try to solve this problem as quickly as possible.
Sen. Tammy Baldwin (D-Wisc.):
Thank you. It seems that part of the problem here is that every platform uses different algorithms to push content to users and have different protections in place for children on their platforms and have different policies on content moderation. I want to stick with you, Ms. Farid Johnson. How transparent are online platforms about their use of algorithms and granular moderation decisions? Would it be beneficial if Congress required increased disclosure and explanation for these practices and their uses?
Nadine Farid Johnson:
So, I think there is a way to ensure additional transparency into the platforms' practices. What we have said is that any kind of required transparency, any kind of disclosure, should not be an undue burden on speech, but those provisions can certainly be crafted. I think efforts to gain additional understanding of what the platforms are doing would also benefit from allowing independent researchers to conduct that research online in the public interest so that they have access. And right now they are really thwarted from doing so, because the terms of service of these platforms are so onerous that they could be subject to civil and even criminal liability for their efforts.
Sen. Tammy Baldwin (D-Wisc.):
Thank you. Thank you, Mr. Chairman.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Curtis.
Sen. John Curtis (R-Utah):
Thank you. And especially thank you, Mr. Chairman, for this hearing. It means a lot to me and it means a lot to these parents, so thank you. Mr. Bergman, you started your firm, I think, in reaction to what you were seeing out there from some of these parents that are here today. Section 230 was intended to protect us, and it's clear that other things, maybe it's profits, whatever it is, get in the way of that. And you've looked for ways to hold companies accountable. And we've talked about these product features several times. You listed them, and I'll come back to them in a minute. But I agree with that approach, and that's why I introduced the Algorithm Accountability Act, to clearly define that a platform that doesn't uphold a duty of care should not be protected by Section 230. So, the question is, does Section 230 reform come in conflict with the First Amendment?
And before you answer that, I'll ask you to answer it, but I want to put this in context with my second question. Section 230, as I understand it, was originally designed to protect platforms from liability for a third party message or a third party search, if that makes sense. And the best analogy I've been able to think of is the Post Office. So, imagine for a moment that I mail a letter to you and the Post Office delivers that letter. We would never dream of holding the Post Office accountable for the content of that letter. But I think we do acknowledge that there are certain things, child pornography or perhaps a bomb, for which the Post Office would be justified in withholding that letter, and we give them that license in certain cases.
And my experience after eight years here in Washington is that we all stop right there and say, "How does the Post Office define whether or not they should deliver that letter?" Let's suppose hypothetically they could read every letter. And so we hear the chairman say, and I totally agree with him, that we would want the Post Office to not stop any more letters than they have to. We want as many letters to get through as possible. And we're in this, what I would call this foolish cycle, of trying to define what should be delivered and what should not be delivered. And everybody has their own definition. You hear some, like the chairman, say almost everything should be delivered. And then I hear others of my colleagues argue and say, "Well, why would you deliver that letter?" But I want to take us past that point to the mailbox.
Now, when that letter gets to the mailbox, imagine if the Post Office had opened that letter and read it, and then, instead of delivering the letter to that person, they sent it to 100 million people. That same letter. They said, "Oh, we like the content of that. We're going to send it to 100 million people. We like the content because we think they'll use the Post Office more if they read this letter." And then they design, and now we come to the features, push notifications, the automatic scroll and all these things that have been listed, that the Post Office is using to get those 100 million people to read that letter. And layer on top of that the fact that we know that they're far more likely to distribute a letter that is sensational, that does what we've talked about today.
And so, you've defined that, and we're trying to define that, as product liability. So, now let me come back to the question of, can we approach this without damaging the First Amendment, if we go to the end, right to the mailbox where it's being opened up, instead of this decision about whether they're censoring, and we're talking now about the liability of their actions? And can you define the legal distinction between holding a platform liable for speech, which is like in the middle there, versus holding the platform liable for the way it distributes that speech?
Matthew Bergman:
Yes. Yes, Senator. And just to follow up on your analogy, it would be as though the letter had cocaine attached to it and that person would become addicted. But listen, in 1996 when Section 230 was enacted, the First Amendment had been around for 205 years and had protected the rights of free expression. If Section 230's blanket immunity were to be modified, we would still have a robust body of law to protect free expression and the free exchange of ideas. And there's two elements to that. Number one, the question is to what extent does AI constitute speech? And that's a very esoteric question, and the answer is sometimes yes, sometimes no. So, algorithmic recommendations may or may not have a free speech component. The second question though, Senator, and this is very important, is that even if something is speech, it doesn't mean that it's necessarily protected.
The vile material that Selena Rodriguez received was speech, but it was online grooming and it's clearly not protected. Libel isn't protected. And our courts have a very robust jurisprudence; there's nothing more impressive, I think, than Chief Justice Roberts' analysis in Snyder for analyzing where a tort action implicates speech and where it doesn't. So, I think that if cases that would be barred by Section 230 were allowed to proceed in the tort system, the jurisprudence that we have on the First Amendment would draw that vital distinction between free speech and [inaudible 01:44:07]-
Sen. John Curtis (R-Utah):
Okay. So, I agree with you, but let me point out, and a lot of my colleagues would point out, and you're a lawyer, you would love this, right? There would be lots of lawsuits. So, walk through with me the advantage of the legal system deciding this versus a senator, placed in a moment of time, trying to say, "This is okay, but this isn't okay."
Matthew Bergman:
Well, the laws evolve over time, Senator, and particularly defining what is or isn't protected speech is something that's conferred on the courts. I don't favor repeal, but I think that were Section 230 to allow cases such as these families' to go forward, courts would still have the opportunity to apply time-tested First Amendment jurisprudence to determine the extent to which those claims implicate protected speech or don't.
Sen. John Curtis (R-Utah):
Versus us taking a point in time and then 30 years later deciding if you got it right. Now, and I'm out of time, but let me just conclude with this, a quote from Section 230, "Encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools." Are we meeting that standard of Section 230?
Matthew Bergman:
Unfortunately not, Senator.
Sen. John Curtis (R-Utah):
Yeah. I wish you had more time. Thank you. Mr. Chairman, I yield.
Sen. Ted Cruz (R-Texas):
Thank you, Senator Curtis. Senator Luján.
Sen. Ben Ray Luján (D-N.M.):
Thank you, Mr. Chairman. Mr. Carson, one question raised by the introduction of generative AI products is the extent to which companies should be held liable for illegal content generated by their products. As legal scholars point out, under the current Section 230 regime, while platforms enjoy immunity, plaintiffs can still sue the original creator of illegal content; but AI-generated search results that may cause harm could leave injured parties without recourse. Mr. Carson, what is your view on whether generative AI search engines fit into the existing Section 230 framework?
Brad Carson:
I don't believe that generative AI should be under Section 230 at all.
Sen. Ben Ray Luján (D-N.M.):
Do you think platforms will need to verify that posts are authored by a human to benefit from Section 230 protection?
Brad Carson:
No. If they're hosting someone else's use of AI, like say I make an AI image and then post it on Facebook, that is clearly covered by Section 230 today, but the AI system itself, if they were to use it or what ChatGPT creates for me personally, that's not covered by 230. That's their own speech and therefore not covered by 230.
Sen. Ben Ray Luján (D-N.M.):
So, Elon Musk using Grok to put your face on a naked body is okay?
Brad Carson:
No, definitely not okay in any way. And he probably has some 230 immunity for that. I wouldn't give him that immunity. I would make him subject, as Mr. Bergman said, to the common law on those kind of questions, but he would certainly claim 230 for his own posting since he owns Grok today and X. So, that would be an interesting legal question.
Sen. Ben Ray Luján (D-N.M.):
I appreciate that. I hope it gets tested. Ms. Keller, in 2023, I authored an amicus brief in Moody versus NetChoice where I argued that social media companies have a First Amendment right to curate and moderate content on their platforms. Discussions regarding Section 230 often focus on the law's liability protections, but in your testimony, you write that, quote, "The choice to moderate is precisely what Section 230 encourages," and that the law has produced a variety of platforms that have adopted diverse approaches to content moderation. Ms. Keller, you also filed an amicus brief in Moody versus NetChoice, where you argue that the Texas and Florida laws at issue burden the speech of the platforms. Can you explain to us the role Section 230 plays in the First Amendment rights of platforms to engage in content moderation?
Daphne Keller:
Certainly. So, the platforms have rights to engage in content moderation as the majority in the Moody case decided under the First Amendment. Section 230 basically gives an additional layer of procedural protection around that so that if they are sued over these things, they can get the cases dismissed quickly, which if you are a startup is incredibly important. These cases can get extremely expensive. Sorry, I feel like I'm missing the other part of your question. It was about Gonzalez and...
Sen. Ben Ray Luján (D-N.M.):
I think you touched on it. I was getting more to the area that you included, the burden from the Texas and Florida laws, which were in the same case.
Daphne Keller:
I see. I mean, to be clear, the thrust of that brief was really about the rights of users being affected, about the state governments coming along and saying, "Hey, if you want to use the internet and hear the podcast that you want to hear, you have to navigate through all of this garbage that otherwise would've been weeded out by the platform." So, we were concerned with the rights of users, and we proposed that a less restrictive means than the state setting the rules would be middleware and interoperability, something that allows users themselves to take control.
Sen. Ben Ray Luján (D-N.M.):
Ms. Farid Johnson, a similar question to you. Do platforms have a First Amendment right to engage in content moderation?
Nadine Farid Johnson:
Yes.
Sen. Ben Ray Luján (D-N.M.):
In your testimony, Ms. Farid Johnson, you state that Section 230 does not always provide liability protection. For example, certain platforms may be held liable for harms that arise from content-neutral tools. Unfortunately, as Mr. Bergman points out in his testimony, social media companies will argue that every algorithmic design must be exempt from liability. And while there are certainly gray areas, it's hard for me to believe that a design feature solely in place to get and keep kids addicted to social media is immune from all liability. My question is, when does a design feature cross over from being an editorial decision immune from liability to a content-neutral tool outside of Section 230 protections?
Nadine Farid Johnson:
Thank you, Senator. That's the question that is actually making its way through the courts right now. We are seeing a number of cases that are asking when a design is expressive such that it would be protected under the First Amendment, and we just don't have the answers yet. I will say that in terms of algorithmic amplification, decisions that amplify are going to be quite difficult to distinguish from publishers' decisions to publish. So, for example, if a newspaper were to run a particular headline, they're amplifying a particular piece of news because they are trying to get someone to read that news. So, I think it's important to recognize that there are going to be decisions there that are going to seem editorial. And from our perspective, the way to address this is not necessarily to wait out the courts in terms of trying to figure this part out, and not to try to legislate on that, because you can't legislate around the First Amendment, but rather to say, "We know what some of the structural issues are with respect to people's engagement online."
And if we know that they are being targeted because of the information collected about them from these platforms, then the way to go about it is to address the data privacy issue rather than trying to circumvent the First Amendment.
Sen. Ben Ray Luján (D-N.M.):
Let me ask you this yes or no follow up. Should social media companies provide the public with transparency regarding targeted algorithms and data collection?
Nadine Farid Johnson:
Yes, I do believe that transparency provisions, if appropriately drafted, can certainly be constitutional.
Sen. Ben Ray Luján (D-N.M.):
I appreciate that. And Mr. Chairman, I'll submit this question for the record. Mr. Bergman, I was going to ask a question about, through your litigation experience, how social media companies are weaponizing design decisions to keep kids addicted to social media. I'll submit that for the record. I look forward to hearing your response as well. Thank you for the time, Mr. Chairman.
Matthew Bergman:
I will [inaudible]. Thank you.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Capito.
Sen. Shelley Moore Capito (R-W.Va.):
Thank you, Mr. Chairman. Thank all of you for your testimony today. Very interesting. I'd like to address the parents of the three children. I think when we think of 30 years ago, when Section 230 was written and enacted, we could have never imagined that we would see three young people at such a young age take their own lives through what we thought was a magnificent discovery in the 90s. And so, I want to thank them for what they're doing, because what you're doing is helping other parents as we see these kinds of things. You're so brave to do this. And we just had an instance of this. I live in West Virginia, and you might have read about it: a 15-year-old boy took his own life very quickly in a sextortion case, and his parents have come forward. So, I know it can't be easy, and that leads me to my other thought.
So, we're sitting here, we've got kids' online safety. We had a hearing on the addictive nature of social media. We had four information sessions on AI and what it means and how good it is and how bad it is, but we haven't done anything. And so, I think part of our problem is that if we legislate today in this area, we're not going to pick up what I learned the day before yesterday, the new thing that's coming, which is superintelligent AI that's smarter than anybody in this room and all of us put together. And so, we're going to still have those gaps, I think, that are still going to be able to catch our young people and others in the entrapment to do damaging things to themselves. And so, I feel frustrated that we can't figure out a better way to protect.
And I understand the pushes and pulls of the First Amendment, but when I look at the families, I think we've got to do better here. We can't just keep having hearings and wondering. And I know the chairman agrees, and there's lots of good legislation out here, but we really need your help as a professional panel to help us weave through those challenges. And so, if you look at the companies... This just struck me. I thought, "If you look at the companies, why would they pursue this line of algorithm?" Well, it's advertising, it's money. And I started thinking about Chick-fil-A. Well, you know what? Chick-fil-A has a belief system that they don't want to be open on Sundays, and that's to their own economic detriment, although they think they beat all the other fast food places, doing in six days what the others do in seven. So, it can be done.
You can self-select, you can self-regulate and still be extremely successful in a competitive environment. So, my question is, am I right to be concerned that whatever we could do today, I'm just going to go down the panel, whatever we could do today, we're going to come back in 10 years and be obsolete? And so, I'll start with the first, Ms. Keller.
Daphne Keller:
The risk of obsolescence is indeed high. However, for AI, there's the backdrop of common law and tort law. Anything that is unprotected by Section 230 (as some people have said the output of generative AI would be) has this more flexible set of tools.
Sen. Shelley Moore Capito (R-W.Va.):
Yeah. But aren't these companies using AI to generate their algorithm? So, am I wrong there?
Daphne Keller:
I think they're using a mix of AI and other editorial inputs that the court told us were immunized in the Moody case.
Sen. Shelley Moore Capito (R-W.Va.):
Okay. Ms. Johnson.
Nadine Farid Johnson:
I agree the risk of obsolescence is high, which is why I think that looking at kind of these foundational questions is critically important. If we have... Even from now, if we were to establish an AI regulatory framework that was promoting independent research into the AI developers, if we understood more about what's happening on the platforms, if we limited the data they were collecting about us, those are the things I think that could help move the needle in a way that would not run up against what happens next in terms of the next big tech thing.
Sen. Shelley Moore Capito (R-W.Va.):
Thank you. Mr. Bergman?
Matthew Bergman:
Yeah. Senator, in MacPherson versus Buick, Justice Cardozo famously said the law changes; the laws of the stagecoach adapt to the age of the automobile. The common law of the states does provide the ability to adapt and apply traditional concepts of responsibility, of negligence and products liability, to ever expanding technology. So the first and foremost thing that I believe Congress can do is allow the legal system to operate and basically to use its economic function, as Judge Posner said, to internalize the cost of safety and impose on social media companies the same rules that every other company has. I think that would be a good start.
Sen. Shelley Moore Capito (R-W.Va.):
Thank you. My former colleague, Mr. Carson.
Brad Carson:
Senator, more than most on the panel, I think Section 230 was a mistake. It's not because we don't need law. It's because, as Justice Thomas so well said, it's a get-out-of-jail-free card. Those are the kinds of things you can't put in place before you actually understand how an industry is going to develop. So you need flexible laws, but you shouldn't just freeze things in place at a time when the internet was hardly developed.
Sen. Shelley Moore Capito (R-W.Va.):
Thank you. Again, thanks to the parents for being here.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Rosen.
Sen. Jacky Rosen (D-Nev.):
Thank you, Chairman Cruz. Thank you to the witnesses for being here. I just want to build on a point that Senator Capito made. I just don't believe we can allow just self-regulation and our common law to be our guide. Just because technology moves quickly, it doesn't mean that we here in Congress shouldn't legislate or regulate where necessary and adjust where needed. We can't cede our power because things move quickly. The world has always moved quickly, to your point about the stage coach to the car to, well, autonomous vehicles maybe soon. So I just want to make that comment for the record.
But I want to talk about the risks of Section 230 protection for AI, because last summer I led a bipartisan task force. I led a letter with the bipartisan task force for combating antisemitism to xAI, calling for accountability after their AI chatbot, Grok, went on multiple antisemitic tirades on X and spread conspiracy theories about the Holocaust. So to reiterate what Senator Luján asked, Mr. Carson: should chatbots like Grok, when integrated into a social media platform, be protected by Section 230? What are the risks if AI chatbots, which are sometimes seen as less biased and more accurate, are integrated on platforms and are protected under Section 230?
Brad Carson:
I don't believe that you should have generative AI be considered as speech at all, and, therefore, should not be protected by Section 230. In the case of Grok, in the example you gave, those companies should be liable under some kind of product liability theory, some kind of design defect theory for making products that are engaging in this kind of behavior.
Sen. Jacky Rosen (D-Nev.):
Thank you. I'm going to move on and talk about speech then, because Congress and the courts have determined that commercial speech by companies is not the same as free expression by an individual. Companies are liable for harm when they release unsafe products, like an unsafe car seat or an energy drink that causes heart attacks, and they can be held accountable for lying about their products. So, Mr. Bergman, I'm going to ask you this time, do you think a social media platform failing to enforce its own content moderation policies makes its product unsafe? Are these platforms marketing themselves as one thing by explaining their policies online, but then failing to create the online environment that reflects those policies?
Matthew Bergman:
Well, I think that's very much the case. We just completed a trial in Los Angeles. Because we could get past Section 230, we were able to elicit and present documentary evidence that these companies intentionally have addicted children knowing that children are being hurt because of it. This evidence directly contradicted the testimony of the executives before this very committee. We think that that's a very important development and a reason why Section 230 should not preclude these cases from going forward and let the truth be heard.
Sen. Jacky Rosen (D-Nev.):
Thank you. Then we'll move on to impact on other platforms. Because some websites like Wikipedia, Reddit, they rely on decentralized content moderation. It's partially why their communities are so vibrant and why people use their websites. So there is a concern that eliminating Section 230 could jeopardize their entire model. Ms. Johnson, we'll move down to you. How can Congress hold the largest social media companies accountable while protecting novel content moderation approaches in smaller websites that operate differently from the big players?
Nadine Farid Johnson:
Senator, you're exactly right. When we say platforms, we often mean social media, but Section 230 applies to quite a broad swath. Which is why, when the Knight Institute has considered this question, we have come up with three proposals, looking at researcher access, data privacy, and interoperability, as possible means of conditioning Section 230 protection for the very largest platforms and thereby allowing others to continue to thrive.
Sen. Jacky Rosen (D-Nev.):
Thank you. I'm going to move back to you, Mr. Bergman, and ask you this. Some larger platforms are distinct; again, many of them have designed their products to maximize engagement over all other metrics. Just keep those eyeballs on. They have fired content moderation staff and are, in some cases, no longer taking down content that violates, again, their own policies, and they still claim Section 230 is necessary. So should there be a different standard for our larger platforms?
Matthew Bergman:
I think everyone should have a duty of reasonable care. I think Section 230 does provide important protections. Some should stay, but it should not be interpreted outside of what Congress intended when it enacted the statute in 2000... I'm sorry, in 1996.
Sen. Jacky Rosen (D-Nev.):
Thank you. I yield.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Hickenlooper.
Sen. John Hickenlooper (D-Colo.):
Yeah, thank you, Mr. Chair, and thank each of you for being here today and for all your work on this. This is obviously an issue of great complexity but of almost unimaginable importance if we really look at how deeply this damage goes. This issue around broad immunity and 230, you guys are already debating pretty well. I'm not going to opine. I'm not sure government is smart enough or can move fast enough to regulate AI at the speed it's going. I don't see another way to deal with the situation without allowing or compelling the industry to regulate itself, which is really what 230 gets in the way of. In other words, if tort law can take actions against damages done to groups of people, history shows us that that's the way you create standards, and you create the evolution of standards to meet the needs of that moment.
In terms of my time with you now, I've given my own little speech. I want to talk a little bit about supply chain designations. This is a little bit obscure, but I think no less important. Really, for the first time ever, the United States has designated an American company as a supply chain threat and blacklisted it from working with the US government. This is the same level of designation that's been given to China's Huawei and Russian cybersecurity firms like Kaspersky. It's ironic to see that an administration that fears censorship under Section 230 and claims to want a smaller government is now trying to weaponize federal law to force a company to abandon its own policies, policies that prevent AI from being used for conducting mass surveillance against Americans or for lethal targeting that is done autonomously.
Ms. Keller, why don't we start with you, but I'd like to go down the list. What message does a supply chain risk designation send to an American company? What's the message it sends to the rest of the innovation economy? Let's put it that way.
Daphne Keller:
It sends a message that the government is willing to bully and retaliate and make improper uses of laws to punish those who dissent or disagree or attempt to hold reasonable limits.
Sen. John Hickenlooper (D-Colo.):
Thank you.
Nadine Farid Johnson:
I think to the extent that this type of action is taken in retaliation for speech, it sends an incredibly chilling message. It's very chilling.
Matthew Bergman:
It's really not in my ambit to opine on that.
Brad Carson:
Senator, we've worked a lot on that issue. It's an indefensible decision that will chill the innovation economy and an attempt to murder one of the leading instruments of national power that our country has, which is an incredible AI lab in Anthropic.
Sen. John Hickenlooper (D-Colo.):
It's certainly something that is a place of power that we're giving away in a way. Let me switch there. Ms. Johnson, thank you for being here and your advocacy of the First Amendment rights, your ability to defend the First Amendment. Some have argued that the way an AI model responds to a prompt from a user is a form of editorial discretion. In this case, that is certainly protected by the First Amendment. But if we imagine a scenario where Section 230 liability shield protections for AI outputs were removed, when we think about that, Congress, as we debate about how to establish AI accountability and an accountability framework, how could this be done without violating the First Amendment, the protections on speech?
Nadine Farid Johnson:
It's an incredibly challenging question, Senator, and it's something that's working its way through the courts now. I think I'll respond in two ways. One is that if you're looking at Section 230 protection specifically, Section 230 insulates platforms from liability for the publication of third-party content. So to the extent that AI output is the output of the AI companies themselves, it does not fall under that provision. In terms of crafting a durable piece of legislation that would help with AI accountability, I think what's important to start with are transparency provisions. If we had provisions that promoted and protected independent research into these AI developers to really understand how they operate, to understand what is going on behind the scenes, that would provide Americans with information that they can use in the public interest that could ultimately then lead to additional accountability.
Sen. John Hickenlooper (D-Colo.):
I agree. Thank you. I yield back the floor.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Markey.
Sen. Ed Markey (D-Mass.):
Thank you, Mr. Chairman. Thank you for having this hearing. Again, I just want to recognize the parents who are here who have lost their children. It's almost unimaginable the loss which you have suffered, and thank you for being here. Thank you for bearing witness to this mental health crisis in our country amongst teenagers and children. It's just very important that you're here. It's almost impossible to fully understand the grief you must feel right now.
It's my honor to stand with you in fighting in order to protect other families from the same thing that you've gone through because those harms underscore the importance of the issues before us today. I'm glad that we're having this hearing because it's obviously very controversial, and it just raises so many important issues of online safety and privacy and free speech. As the different views are being laid out here today, we can see that this is a subject that we should be talking about.
Mr. Bergman, if you could, I know that your firm has done a lot of thinking about this issue and how to protect young people online while working within Section 230 in the lawsuits which you're bringing. Can you explain further to the committee how your firm is bringing cases against the platforms and avoiding dismissal on Section 230 grounds?
Matthew Bergman:
Well, avoiding the dismissal sometimes, not other times. But-
Sen. Ed Markey (D-Mass.):
Excuse me?
Matthew Bergman:
Avoiding dismissal sometimes, not other times. We follow the theory enunciated by the Ninth Circuit in Barnes v. Yahoo! and then Lemmon v. Snap, that we focus on the design. It's not the content. It doesn't matter what the... The algorithms don't care what they show the kids, whatever addicts them to the platform. It could be moon beams and rainbows as long as a kid becomes addicted to this platform through operant conditioning. So we focus on the addictive design. We focus on the infinite scrolls. We focus on the likes and the streaks features. We focus on the fact that these companies, as we've learned in this litigation, deliberately target kids knowing that their brains are not fully developed and that they're very susceptible as adolescents to peer pressure.
Sen. Ed Markey (D-Mass.):
So you're saying the conduit itself-
Matthew Bergman:
That is correct.
Sen. Ed Markey (D-Mass.):
... is [inaudible].
Matthew Bergman:
The platform itself is dangerous.
Sen. Ed Markey (D-Mass.):
Is dangerous, yeah. I do, I appreciate your approach to this work to protect kids online because ultimately we are going to have to try to deal with this issue. This committee, with the chairman and the ranking member, was very instrumental with Senator Cassidy and me in passing COPPA 2.0, the Children's Online Privacy Protection Act, through the Senate floor just two weeks ago unanimously. Could you talk a little bit about how important COPPA 2.0...? Do you agree it's important for us to pass the Children's Online Privacy Protection Act in order to guarantee that they can't target kids with ads, that the parents can demand that everything be erased, that we raise the covered age to 17 and under? Do you agree that should become a law in our country? Ms. Keller?
Daphne Keller:
Can adults have that too, please?
Sen. Ed Markey (D-Mass.):
I'm with you, and I actually passed that on the House floor in 1995. It got taken out in the conference committee with the Senate. But I did pass for adults as well-
Daphne Keller:
Thank you.
Sen. Ed Markey (D-Mass.):
... across all platforms. I got that passed 30 years ago. It was highly anticipatable. Yes, Ms. Johnson.
Nadine Farid Johnson:
Yes, yes, robust protection for children online in terms of privacy protections is critical, yeah.
Sen. Ed Markey (D-Mass.):
Thank you.
Matthew Bergman:
Yeah, absolutely, Senator. The bipartisan leadership of this committee has been instrumental. This committee really has been a fulcrum of bringing these issues to the fore over the last five years under the bipartisan leadership. On behalf of the families, you have really already saved a lot of lives.
Sen. Ed Markey (D-Mass.):
Thank you. Mr. Carson?
Brad Carson:
It would certainly be a great improvement. Congratulations to the committee for their leadership on this.
Sen. Ed Markey (D-Mass.):
I'm right now working on AI chatbots legislation. Does anyone want to talk about that and the importance for us to legislate there? Mr. Bergman?
Matthew Bergman:
Our firm brought the first case involving AI chatbots involving a 14-year-old boy who was goaded into suicide through an online chatbot. We successfully overcame a First Amendment challenge and were able to move forward. We are bringing the first cases against OpenAI for the same thing. Again, this is basically a design flaw. We know that AI is here to stay, and it does a lot of good, but companies need to take proactive measures to think about safety.
Sen. Ed Markey (D-Mass.):
These kids are becoming emotionally dependent upon AI chatbots, and so we need to move in that area as well, passing legislation. Mr. Carson, would you like to add on to what Mr. Bergman just said?
Brad Carson:
Yes, I think that's a critical issue. I think it comes to Congress making clear, and this addresses Senator Hickenlooper's issue as well, the outputs of ChatGPT or Claude or Gemini are not protected speech under the First Amendment at all and are not worthy of the protections of the First Amendment as expressions of human creativity or human autonomy and that, yes, the rules around those chatbots are very important for children especially, but for the broader society.
Sen. Ed Markey (D-Mass.):
As technology moves, we have to move as well. The technology is inanimate. It's only as good or bad as the human values that we instill into those inanimate objects. We have to continue this conversation and chat. This new area is absolutely something that we should be discussing and moving on in this committee as well. Thank you, Mr. Chairman, for having this hearing.
Sen. Ted Cruz (R-Texas):
Thank you. Senator Blackburn.
Sen. Marsha Blackburn (R-Tenn.):
Thank you, Mr. Chairman, and thank you to each of you for being here today. It has been a wonderful discussion, and we appreciate your insight. To the parents, I want to say thank you for once again being here and for the advocacy that you bring. I appreciate that Senator Markey talked about KOSA and Kids Online Safety, and Senator Cruz has been such an advocate for protecting children in the virtual space. Indeed, I wish that our friends in the House were as committed to getting some of this legislation across the finish line as we are here in the Senate.
Mr. Bergman, I want to come to you. As I've listened to the hearing today, I've been thinking back through where we were in the mid '90s and the advent of Section 230, which seemed like a really great idea for something that was going to be an unknown, if you will, with the virtual space, giving companies a chance to get their sea legs under them. But what we have seen is massive abuse of Section 230. As the social media platforms and Big Tech have grown, they have become more given to making excuses for their actions and blaming it on Section 230, that it allows them to do this, that, and the other.
As I've thought through this, one of the reasons I have grown to be in support of sunsetting and removing 230 is because Big Tech has proven they are incapable of regulating or policing themselves. They will not do it. They're like an errant child who keeps pushing and pushing and pushing and trying to move away any kind of responsibility, any discipline, and they fight it every single day. We have seen it as we have worked with parents. We have seen it as we talk to pediatricians and principals, who talk to us about behavioral issues in school that all stem from what happens online, and the online platforms do nothing. So talk for me a little bit about Big Tech's refusal to take an action to protect and the need that that puts on Congress to take an action to force them to protect.
Matthew Bergman:
Well, Senator, first of all, thank you from the bottom of my heart for your steadfast, indefatigable efforts on this issue. The kindness and the compassion and the commitment is an inspiration to all of us, and thank you for that.
Sen. Marsha Blackburn (R-Tenn.):
Sure.
Matthew Bergman:
We just finished a trial in which we saw and we've now been able... because we got over Section 230 at least a little bit, to be able to see the internal documents from these companies. We see that indeed there are people of conscience within these companies sounding the alarm bells. Time and time again, their calls go unheeded, because anytime a design change would impair profitability or engagement, they say no. How many times have the executives been excoriated before your committee and they don't change their behavior? How many times have they had bad press?
The only thing that's going to change their behavior is when they have to bear the economic costs of their deliberate design decisions. As Richard Posner or Milton Friedman would say, "You have to internalize the cost of safety." If they have to bear the cost of their dangerous platforms instead of these families, instead of clergymen and policemen and doctors and psychologists and insurance companies, then they will have the incentive to change their behavior. But right now, there's no way. If they actually had to bear the cost through a civil lawsuit, their behavior would change. I think through the imposition of civil liability, we can change their economic calculus. We're not talking about imposing a special duty. We're just talking about imposing the same rules that every other company has. Every other company in America operates under a duty of reasonable care. We're asking the same thing for social media.
Sen. Marsha Blackburn (R-Tenn.):
Well, and I think it's important to note that every industrial sector in the United States has safety standards. They have that duty of care. The only one that does not is the virtual space. There are no safety standards. That's why KOSA is a safety by design. It is a product safety bill to protect children in the virtual space. It's unseemly to me that we have this growing industrial sector, and they have zero safety standards. And honestly, they don't give a rip and flip. When you've got somebody like Mark Zuckerberg who says each kid is worth $270 to him, that is one of the most offensive statements I think I have ever heard come out of the mouth of a US corporate CEO to put a value on the head of every child that is using their product. It is just unseemly.
We are hard at work. I have just released the Trump America AI Act framework. The president asked me to take a shot at drafting this as just a guideline as to where we start on the discussion about a framework for AI. My hope is we're going to move forward on this more quickly than we did other components of the virtual space. I'd like for you to respond for just a minute about why you think it is important, Mr. Bergman, to have an AI framework as we begin to move forward with more AI concepts moving into commercialization.
Matthew Bergman:
Well, because we continue to see families that have buried children because AI chatbots encourage suicide. One would have thought after two and a half years, I could never be shocked. But when I saw what Sewell Setzer was provided, encouraged to kill himself, when I saw that Zane Shamblin was given a how-to manual, we have to do something, Senator. Your leadership is such an inspiration. I'm just so grateful on behalf of all the people I represent, but also as a father and a grandfather, thank you.
Sen. Marsha Blackburn (R-Tenn.):
Well, we appreciate you all. I want to thank each of you for being here today and for the spirited debate that you have brought to the issue. I will remind you that members of the committee have until March 25th to submit their questions. You all will have until April 8th to respond in writing to those questions that will be coming to you. With that, it concludes our hearing. Committee adjourned.
Matthew Bergman:
Thank you.