
Transcript: Senate Subcommittee Hearing on Platform Accountability: Gonzalez and Reform

Justin Hendrix / Mar 11, 2023
UC Berkeley Professor Hany Farid testifies to the Senate Judiciary Subcommittee on Privacy, Technology and the Law, March 8, 2023.

On Wednesday, March 8, 2023, the Senate Judiciary Committee Subcommittee on Privacy, Technology and the Law hosted a hearing titled "Platform Accountability: Gonzalez and Reform," chaired by Senator Richard Blumenthal (D-CT).

Witnesses included:

  • Mary Anne Franks, Professor of Law and Michael R. Klein Distinguished Scholar Chair, University of Miami School of Law (written testimony)
  • Hany Farid, Professor, School of Information and Electrical Engineering and Computer Science, University of California, Berkeley (written testimony)
  • Jennifer Bennett, Principal, Gupta Wessler PLLC (written testimony)
  • Andrew Sullivan, President and CEO, Internet Society (written testimony)
  • Eric Schnapper, Professor of Law, University of Washington School of Law (written testimony)

What follows is a lightly edited transcript of the hearing.

Sen. Richard Blumenthal (D-CT):

Convene. We are a Subcommittee of the Judiciary Committee and the chairman of our committee is with us today. I want to thank all of our panel for being here, all of the members of the audience who are attending, and my ranking member, colleague, and partner in this effort, Senator Hawley. I'm gonna turn first to the chairman, because he has an obligation on the floor, for some opening remarks. We're very pleased that he's with us today.

Sen. Dick Durbin (D-IL):

Senator Blumenthal and Senator Hawley, thank you for holding this important meeting. We had a rather historic meeting of the Senate Judiciary Committee just a few weeks ago. I think everybody agreed on the subject matter of the hearing. I don't know when that's ever happened, at least recently. And it was encouraging that the hearing considered the subject of protecting kids online. One of the witnesses we heard from was Kristen Bride, a mother whose son died by suicide after he was mercilessly bullied on an anonymous messaging app. There were several other mothers in attendance carrying colored photos of their kids who suffered similar heartbreak. In addition to tragically losing children, these mothers had something else in common. They couldn't hold the online platform that played a role in their child's death accountable. The reason? Section 230, well known to everyone who's taken a look at this industry. Coincidentally, after that hearing, I had a meeting with the Administrator of the Drug Enforcement Administration, Anne Milgram.

She described for me how illegal and counterfeit drugs are sold over the internet to kids, often with devastating results. When I asked her what online platforms were doing to stop it, she said very little, and they refused to cooperate with her agency to even investigate. I asked her, how do they deliver these drugs? By mail? Oh no. By valet service. They bring boxes of these counterfeit drugs, deadly drugs, and leave them on the front porch of the homes of these kids. Imagine this, we're talking about a medium that is facilitating that to occur in America. These platforms know these drug transactions are happening. What are they doing about 'em? Almost nothing. Why? Section 230. In our hearing last month, there seemed to be a consensus emerging among Democrats and Republicans that we've gotta do something to make Section 230 make sense. Something needs to change, so online platforms have an incentive to protect children, and if they don't, they should be held liable in civil actions. I look forward to hearing from the witnesses today. I'm sorry I can't stay because there's a measure on the floor to consider in a few minutes, but I will review your testimony and thank you for your input. Thank you, Mr. Chairman, ranking member.

Sen. Richard Blumenthal (D-CT):

Thanks very much, Senator Durbin. I think it is a mark of the importance and the imminence of reform that Senator Durbin is here today. His leadership led to the hearing that we had just a couple weeks ago, showing the harms– really desperate, despicable harms that can result from some of the content on the internet and the need to hold accountable the people who put it there. And that's very simply why we are here today. I want to thank Senator Durbin for his leadership. Also, Senator Coons, who preceded me as head of this subcommittee. There are certainly challenging issues before us on this subcommittee, from reining in big tech to protecting our civil rights in an era of artificial intelligence. And I am enormously encouraged and energized by the fact that we have bipartisan consensus on this first hearing. Not always the case in the Judiciary Committee, not always the case in the United States Senate, but I'm really appreciative of Senator Hawley's role, especially his amicus brief to the United States Supreme Court in Gonzalez. The comments by the Solicitor General in that case, some of the comments by the justices, we have no ruling yet, but I think what we are seeing is, as Senator Durbin said, an emerging consensus that something has to be done. So here's a message to big tech: reform is coming.

Can't predict it'll be in the next couple weeks or the next couple months, but if you listen, you will hear a mounting consensus and a demand from the American public that we need to act in a bipartisan way. Section 230 dates from a time when the internet was a young, nascent startup kind of venture that needed protection if it tried to weed out the bad stuff. And now it's used to defend keeping the bad stuff there. This so-called shield has been long outdated as we enter an era of algorithms and artificial intelligence, which were unknown and perhaps unimaginable on the scale that they now operate when Section 230 was adopted. And the case law, and I've read it, the Gonzalez Court addressed it, simply doesn't provide the kind of remedy that we need quickly enough and thoroughly enough. I think that the time when the internet could be regarded as a kind of neutral or passive conduit has long since passed.

Obviously we need to look at platform design, the business operations, the personalization of algorithms, recommendations that drive content. And we've seen it particularly with children, toxic content driven by algorithms in a very addictive way toward children, with this overwhelming protection that is accorded by Section 230 to the tech platforms that are responsible and need to be held accountable. Section 230 actually was designed to promote a safer internet. Plainly, it's doing the opposite right now. And what we have heard graphically, as Senator Durbin described it, again and again and again, at hearings in the Commerce Committee, the Subcommittee on Consumer Protection, which I chaired, hearing from the whistleblower Frances Haugen, documents that we've seen from Facebook, and the victims and survivors. Mrs. Bride, who lost her son, Carson. Two young women who wrote to me, Sanvi Aurora and Anastasia Shane, started a petition that received 30,000 signatures from Americans across the nation after they were victimized, pictures of their sexual abuse repeatedly transmitted on anonymous platforms. And I'm gonna put their letter to me in the record without objection, but the point is, we've seen the harms and we need to take action to address those harms.

And we've also seen harms. Section 230 has shielded platforms like Craigslist when they hosted housing ads that openly proclaimed “no minorities.” Section 230 has immunized Facebook when its own advertising tools empowered and encouraged landlords to exclude racial minorities and people with disabilities. For any other company, these would be violations of the Fair Housing Act. But Section 230 shut the door on accountability for them. And in so many other instances, the case history on Section 230 is clear: when big tech firms invoke it, those being denied justice are often women, people of color, members of the LGBTQ community, or children and the victims and survivors of sexual abuse.

So this hearing is very simply part of a broader effort to reform Section 230. We've seen some of the models and the frameworks that are possible for reform. I'm not taking sides right now, but by the end of these hearings, I hope to do so, and this enterprise is not new for me. 15 years ago when I was Attorney General dealing with MySpace and Craigslist and many of the same issues that we're confronting today, I said to my staff, we should repeal Section 230. And they came down on me like a house of bricks and said, whoa, you can't repeal Section 230. That's the Bible of the internet. Well, it's not the Bible of the internet. It's not the 10 Commandments that have been handed down. It is a construct. It is now outdated and outmoded and needs reform. And I'm really so thankful to have the leadership of Senator Hawley, who is also a longstanding champion of survivors and victims of sexual abuse and other harms. And to his great credit, a former State Attorney General. Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you very much, Senator Blumenthal. Thank you Mr. Chairman for being here as well. Thanks to all the witnesses for making the long trek here. I just want to add a few remarks. I am delighted that the first meeting of this subcommittee is focusing on what is, I think, maybe the critical issue in this space, and that is Section 230. And I want to amplify something that Senator Blumenthal just said, which is that Section 230, as we know it today, is not only outmoded, it's not only outdated, it's really completely unrecognizable from what Congress wrote in the 1990s. I mean, let's be honest, the Supreme Court heard arguments to this effect a few weeks ago, but the Section 230, as we know it today, has been almost completely rewritten by courts and other advocates, usually at the behest of big tech, the biggest, most powerful corporations not just now, but in the history of this country. They have systematically rewritten Section 230.

And listen, I hope that the United States Supreme Court will do something about it because frankly, they share some of the blame for this. And I hope in the Gonzalez case they'll begin to remedy that. But whatever the case may be there, it is incumbent upon Congress to act. We wrote Section 230 originally; we should fix it now. And I welcome these hearings to collect evidence, to hear from experts such as those who are before us today about the paths forward. From my own view, I think that some of the common ground that Senator Blumenthal mentioned and that the chairman mentioned, that we've heard in our hearings recently, really boils down to this: It really is time to give victims their day in court. What could be more American than that? Every American should have the right, when they have been injured, to get into court, to present their case, to be heard, and to try to be made whole.

230 has prevented that for too many years. And I would hope that if we could agree on nothing else, we could agree on that basic fundamental, dare I say, fundamentally American approach. And I hope that that's something that we'll be able to explore together. Now, I just note that progress on reforming Section 230 has been very slow. As a Republican, I would love to blame that on my Democrat colleagues, but the sad fact of the matter is Republicans are just as much to blame, if not more. And my own side of the aisle when it comes to vindicating the rights of citizens to get into court, to have their day in court, has often been very, very slow to endorse that approach and very, very wary. But I think that the time has come to say that we must give individuals, we must give parents, we must give kids and victims that most basic right. And I hope that this subcommittee and the committee as a whole, the Judiciary Committee as a whole will prove to this Congress that real bipartisan action with real teeth is possible and we will see real reform for America's families and children. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hawley. I'm gonna introduce the panel and then, as is our custom, I will swear you in and ask you for your opening remarks. Dr. Mary Anne Franks is an internationally recognized expert on the intersection of civil rights and technology. She's a professor of law and the Michael R. Klein Distinguished Scholar Chair at the University of Miami, and the president and legislative and tech policy director of the Cyber Civil Rights Initiative, a nonprofit organization dedicated to combating online abuse and discrimination. Professor Hany Farid is a professor of computer science at UC Berkeley. He specializes in image and video analysis and developing technologies to mitigate online harms, ranging from child sexual abuse to terrorism and deepfakes. Ms. Jennifer Bennett is a principal at Gupta Wessler, where she focuses on appellate and Supreme Court advocacy on behalf of workers, consumers and civil rights plaintiffs. She recently argued and won Henderson v. Public Data, a Section 230 appeal before the Fourth Circuit that established a framework for interpreting the statute that has for the first time garnered widespread support.

Andrew Sullivan is the president and CEO of the Internet Society, a global nonprofit organization founded to build, promote, and defend the internet. Mr. Sullivan has decades of experience in the internet industry, having worked to enhance the Internet's value as an open global platform throughout his career. Finally, Professor Eric Schnapper is professor of law at the University of Washington School of Law in Seattle. He recently argued the cases of Gonzalez v. Google and Twitter v. Taamneh before the United States Supreme Court. Before joining the University of Washington faculty, he spent 25 years as an assistant counsel for the NAACP Legal Defense and Educational Fund in New York City, and he also worked for Congressman Tom Lantos. He is a member of the Washington Advisory Committee of the United States Commission on Civil Rights. I assume that your appearance today will not be as arduous as arguing two Supreme Court cases back to back. Would the witnesses please stand and raise your right hand? Do you swear that the testimony you will give today will be the truth, the whole truth, and nothing but the truth? Thank you.

Sen. Dick Durbin (D-IL):

Mr. Chairman, does this mean that for the first time you're not the person in the room who's argued the most Supreme Court decisions?

Sen. Richard Blumenthal (D-CT):

Well, I've done four, but I think Mr. Schnapper may exceed my record in total. I'm not sure. Let's begin with Dr. Franks.

Dr. Mary Anne Franks:

Thank you. In 2019, nude photos and videos of an alleged rape victim were posted on Facebook by the man accused of raping her. The posting of non-consensual intimate imagery is prohibited by Facebook's terms of service. The company's operational guidelines stipulate that such imagery should be removed immediately and that the account of the user who has posted it should be deleted. However, Facebook moderators were blocked from removing the imagery for more than 24 hours, which allowed the material, which the company itself described in internal documents as revenge porn, to be reposted 6,000 times and viewed by 56 million Facebook and Instagram users, leading to abuse and harassment of the woman. The reason why, according to internal documents obtained by the Wall Street Journal, was that the man who had posted the non-consensual pornography was a famous soccer star. That is, this was no mere oversight, but rather an intentional decision by the company to make an exception for an elite user.

This was in accordance with a secret Facebook policy known as cross-check, which grants politicians, celebrities, and popular athletes special treatment for violation of platform rules. The public only knows about this policy because of whistleblowers and journalists, who also revealed Meta's full knowledge of Facebook's role in genocide and other violence in developing countries, the harmful health effects of Facebook and Instagram use on young users, and the anti-democratic impact of misinformation and disinformation amplified through its platforms. The law that is currently interpreted to allow Facebook and other tech platforms to knowingly profit from harmful content was passed by Congress in 1996 as a Good Samaritan law for the internet. Good Samaritan laws provide immunity from civil liability to incentivize people to help when they are not legally obligated to do so. The title of the operative provision of this law, and the text of Section 230 C2, reflect the 1996 House committee report's description of the law as providing Good Samaritan protections from civil liability for providers or users of an interactive computer service for actions to restrict or to enable the restriction of access to objectionable online material.

How did a law that was intended as a shield for platforms that restrict harmful content become a sword for platforms that promote harmful content? By ignoring the legislative purpose, history, and the statute's language as a whole to focus on a single sentence that reads: no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. The use of the words publisher and speaker, which are terms of art from defamation law, makes clear that this provision bars certain types of defamation and defamation-like claims that attempt to impose liability on people simply for repeating or providing access to unlawful content. But many courts have instead interpreted the sentence to grant unqualified immunity to platforms against virtually all claims and for virtually all content, an interpretation that not only destroys any incentive for platforms to voluntarily restrict content, but in fact provides them with every incentive to encourage and amplify it.

The Supreme Court, in taking up Gonzalez v. Google, has the opportunity to undo more than 20 years of the preferential and deferential treatment of the tech industry that has resulted from this textually unsupported and unintelligible reading of the statute. It was an encouraging sign during oral argument that many justices pushed back against the conflation of a lack of immunity with the imposition of liability and seemed unconvinced by claims that the loss of preemptive, unqualified immunity would destroy the tech industry. As Justice Kagan observed, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? Supporters of the Section 230 status quo respond that the tech industry is special because it is a speech-focused industry. This claim is disingenuous for two reasons. First, Section 230 is invoked as a defense for a wide range of conduct, not only speech, and secondly, other speech-focused industries do not enjoy the supercharged immunity that the tech industry claims is essential for its functioning.

Colleges and universities are very much in the business of speech, but they can be sued, as can book publishers and book distributors, radio stations, newspapers, and television companies. Indeed, the New York Times and Fox News have both recently been subjected to high profile defamation lawsuits. The newspaper and television industries have not collapsed under the weight of potential liability, nor can it be plausibly claimed that the potential for liability has constrained them to publish and broadcast only anodyne, non-controversial speech. There's no guarantee that the Supreme Court will address the Section 230 problem directly or in a way that would meaningfully restrict its unjustifiably broad expansion. And so Congress should not hesitate to take up the responsibility of amending Section 230 to clarify its purpose and foreclose interpretations that render the statute internally incoherent and allow the tech industry to inflict harm with impunity. At a minimum, this would require amending the statute to make clear that the law's protections only apply to speech, and to make clear that platforms that knowingly promote harmful content are ineligible for immunity. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you. Thank you very much, Dr. Franks. Professor Farid.

Dr. Hany Farid:

Chair Blumenthal, Ranking Member Hawley and Members of the Subcommittee, thank you. In the summer of 2017, three Wisconsin teenagers were killed in a high speed car crash. At the time of the crash, the boys were recording their speed of 123 miles an hour on Snapchat's speed filter. Following the tragedy, the parents of the passengers sued Snapchat, claiming that the product, which awarded trophies, streaks, and social recognition, was negligently designed to encourage dangerous high-speed driving. In 2021, the Ninth Circuit ruled in favor of the parents and reversed a lower court's ruling that had previously characterized the speed filter as creating third party content, thus finding that Snapchat was not deserving of 230 protection. Section 230, of course, immunizes platforms in that they cannot be treated as a publisher or speaker of third party content. In this case, however, the Ninth Circuit found the plaintiffs' claims did not seek to hold Snapchat liable for content, but rather for a faulty product design that predictably encouraged dangerous behavior.

This landmark case, Lemmon v. Snap, made a critical distinction between a product's negligent design and the underlying user-generated content. And this is gonna be the theme of my opening statement here. Frustratingly, over the past several years, most of the discussion of 230, and most recently in Gonzalez v. Google, this fundamental distinction between design and content has been overlooked and muddled. At the heart of Gonzalez is whether 230 immunizes YouTube when they not only host third party content, but make targeted recommendations of content. Google's attorneys argued that fundamental to organizing the world's information is the need to algorithmically sort and prioritize content. In this argument, however, they conveniently conflate a search feature with a recommendation feature. In the former, the algorithmic ordering of content is critical to the function of a Google or a Bing search. In the latter, however, YouTube's watch next and recommended for you features, which lie at the core of Gonzalez, are a fundamental design decision that materially contributes to the product's safety.

The core functionality of YouTube as a video sharing site is to allow users to upload a video, allow other users to view the video, and possibly search videos. The basic functionality of recommending content, of which 70% of watched videos on YouTube are recommended, is done in order to increase user engagement and, in turn, ad revenue. It is not a core functionality. YouTube has argued that the recommendation algorithms are neutral and that they operate the same way as it pertains to a cat video or an ISIS video. This means, then, that because YouTube can't distinguish between a cat video and an ISIS video, they've negligently designed their recommendation engine. YouTube has also argued that with 500 hours of video uploaded every minute, they must make decisions on how to organize this massive amount of content. But again, searching for a video based on a creator or a topic is distinct from YouTube's design of a recommendation feature whose sole purpose is to increase YouTube's profits by encouraging users to binge watch more videos.

In so doing, the recommendation feature prioritizes increasingly more bizarre and dangerous rabbit holes full of extremism, conspiracies, and dubious alternate facts. Similar to Snapchat's design decision to create a speed filter, YouTube chose to create this recommendation feature, and they either knew or should have known that it was leading to harm. By focusing on 230 immunity from user-generated content, we are overlooking product design decisions which predictably have allowed and even encouraged terror groups like ISIS to use YouTube to radicalize, recruit and glorify global terror attacks. While much of the debate around 230 has been highly partisan, on this, Senator Hawley, we agree it need not be. The core issue is not one of over- or under-moderation, but rather one of a faulty and an unsafe product design. As we routinely do in the offline world, we can insist that the technology in our pockets is safe.

So for example, we've done a really good job of making sure that the battery powering our device doesn't explode and kill us, but we've been negligent in ensuring that the software running our device is safe. The core tenets of 230, limited liability for hosting user-generated content, can be protected while insisting, as in Lemmon v. Snap, that the technology that is now an inextricable part of our lives can be designed in a way that is safe. This can be accomplished by clarifying that 230 is intended to protect platforms from liability based exclusively on their hosting of user-generated content, and not, as it has been expanded, to include a platform's design features that we now know are leading to many of the harms that Senator Blumenthal opened with at the very beginning. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you very much, professor. Ms. Bennett, good afternoon.

Jennifer Bennett:

Good afternoon, and thank you for the opportunity to testify before you today. I'm gonna focus on a case Senator Blumenthal mentioned, Henderson v. Public Data. And the reason for focusing on that case is because if you look at the transcript of the oral argument in Gonzalez, what you'll see is that the parties there disagreed about virtually everything, the facts, the law, whether the sky is blue and the grass is green, everything. But the one place they found common ground was that this case, Henderson, got Section 230 right. And so in thinking about what Section 230 means and how it might be reformed, I think Henderson might be a good starting place. So what is this magical framework that gets Google and the people suing Google and the United States government all on the same page?

This framework has two parts, and it mirrors the two parts of 230 that people typically fight about. So part one addresses what does it mean to treat someone as a publisher? Because Section 230 says, we'll protect you from claims that treat you as a publisher of third party content. But it doesn't say what that means. And what Henderson says is, well, we know that publisher liability, what Section 230 is saying about publisher liability, comes from defamation law. And in defamation law, what publisher liability means is holding someone liable for disseminating to third parties content that's improper. So for example, someone goes on Facebook, they say, Jennifer Bennett is a murderer. I am not in fact a murderer, so I sue Facebook for defamation. That claim treats Facebook as a publisher, because what it's saying is, Facebook, you're liable because you've disseminated to third parties information that I think is improper.

On the other hand, say I apply for a job and the employer wants to find out some things about me. So they go online and they buy a background check report about me, and the online background check company doesn't ask if the employer got my consent. And so I sue that company and I say, the Fair Credit Reporting Act requires you to ask the employer if they have consent. You didn't do that. That claim, as Henderson holds, doesn't treat the company as a publisher. And the reason for that is that the claim doesn't depend on anything improper about the content. The claim says your company was supposed to do something and you didn't do it. It's a claim based on the conduct of the company, not on content. So that's part one of the Henderson framework: a claim only treats someone as a publisher if it imposes liability for disseminating information to third parties, where the claim is that the information is improper for some reason.

Part two of the Henderson framework is what it means to be responsible for content. Because even if a claim treats someone as a publisher, Section 230 as written offers no protection if they're responsible, even in part, for the creation or the development of that content. And what Henderson says, and this is what a lot of courts have said actually, is that at the very least, if you materially contribute to what makes the content unlawful, then you're responsible, and Section 230 should offer no protection to you. So to take a seminal example, say there's a housing website, and to post a listing on the housing website, the website requires you to pick certain races of people to which you'll offer housing. And so there's a listing that says whites only. Someone sues the website and says, you're discriminating, this violates the Fair Housing Act.

The website should have no protection in that case, because the website materially contributed to what's unlawful about the posting. The website said you have to pick races of people to whom the listing should be available. So that's part two of the Henderson framework, which is: you're responsible for content, and you're outside the protection of Section 230, even as it currently exists, if you created that content or materially contributed to what's unlawful about it. And I just wanna end by noting that both parts of this framework depend on the same fundamental premise. And I think that's what's driving people's, you know, even Google's, willingness to say this case is correct. And that fundamental premise is that Section 230 protects internet companies and internet users from liability when the claim is based solely on improper content that someone else chose to put on the internet, but it doesn't protect, and was never intended to protect, platforms from liability based on their own actions. Thank you again. I look forward to any questions.

Sen. Richard Blumenthal (D-CT):

Thank you very much, Ms. Bennett. Mr. Sullivan.

Andrew Sullivan:

Good afternoon, Chair Blumenthal, Ranking Member Hawley, and distinguished members of this subcommittee. Thank you for this opportunity to appear before you today to discuss platform accountability. I work for the Internet Society. We are a US-incorporated public charity founded in 1992. Some of our founders were part of the very invention of the internet. We have headquarters in Reston, Virginia and in Geneva. Our goal is to make sure that the internet is for everyone. Making sure that is possible is what brings me here before you today. The internet is in part astonishing because it is about people. Many communications technologies either allow individuals only to speak to one another, or they allow one central source, often corporate controlled, to address large numbers of people at one time. The internet, by contrast, allows everyone to speak to anyone. That can sometimes be a problem.

I too am distressed by the serious harms that come through the internet and that we have heard about today. But I also know the benefits that the internet brings, whether that be for isolated people in crisis who find the help that they need online, or to those who learn a new useful skill through freely shared resources, or to still others who are led to new insights or devotions through their interactions with others. People interact with one another on the internet, and Congress noted this important feature in Section 230 with its emphasis on how the internet is an interactive computer service. Yet the internet is a peculiar technology, because it is not really a single system. Instead, it is made up of many separate participating systems, all operating independently. The independent participants, including ordinary people just using the internet, all use common technical building blocks without any central control.

And when we put all these different systems together, we get the internet. Section 230 emerged just as the internet was ceasing to be a research project and turning into the important communication medium it is today. But even though Congress was facing something strange and new, the legislators understood these two central features. The interactive nature meant that people could share in ways other technologies hadn't enabled. And the sheer number of participants meant that each of them needed to be protected from liability for things that other people said. The internet has thrived as a result. And this is what concerns me about proposals either to repeal Section 230 or to modify it substantially. Outright repeal would be a calamity, as online speech would quickly be restricted from fear of liability. Even trivial things, retweeting a news article, sharing somebody else's restaurant review, would incur too great a risk that somebody would say something and make you liable.

So anyone operating anything on the internet would rationally restrict such behaviors. Even something narrowly aimed at the largest corporate players presents a risk to the internet. In a highly distributed system like this, you can try something without anyone else being involved, but if some players have special rules, it is important that everyone else not be subject to those rules by accident, because those others don't have the financial resources of the special players. It would be bad to create a rule that only the richest companies could afford to meet. It would give them a permanent advantage over potential new competitors. Issues of the sort Americans are justly worried about naturally inspire a response. It is entirely welcome for this subcommittee to be examining these issues today, but because Section 230 protects the entire internet, including the ability of individuals to participate in it, it is a poor vehicle to address admittedly grave and insidious problems that are nevertheless caused by a small subset of those online.

This is not to say that Congress is powerless to address these important social problems. Approaches that give rights to all Americans, such as baseline privacy legislation, could start to address some of the current lack of protections in the online sphere. Given the concerns about platform size, competition policy is another obvious avenue to explore. We at the Internet Society stand ever willing to consult and provide feedback on any proposals to address social problems online. I thank you for the opportunity to speak to you today. I look forward to answering any questions you have, and of course, we would be delighted to engage with any of your staff on specific proposals. Thank you.

Sen. Richard Blumenthal (D-CT):

Thanks, Mr. Sullivan. Professor Schnapper.

Eric Schnapper:

Thank you, Senator Durbin.

Sen. Richard Blumenthal (D-CT):

You might turn on your microphone.

Eric Schnapper:

Senator Durbin and Senator Blumenthal, you put your finger on the core problem here, which is that Section 230 has removed the fundamental incentive that the legal system ought to provide to avoid doing harm. And the consequence of that statute has been precisely as Senator Hawley described, that the right of Americans to obtain redress if they've been harmed by knowing misconduct has been eviscerated. Now, part of the concern that led to the adoption of the statute was that internet companies wouldn't know what was on their websites, but we have decades of experience with the fact that they know exactly what's going on and they don't do anything about it. And the presence of terrorist materials on their websites, and the fact that those materials are being recommended, has long been known. Federal officials have been raising this with the internet companies for 18 years. In 2005, Senator Lieberman, whom you know well, wrote a letter to these companies and asked them to do something about terrorist materials on their websites. Since then, members of the other body and of the administration have made that point publicly. There have been dozens of published articles about the use of websites by terrorist organizations. I brought a sample today, a small fraction of them. I'm happy to provide the staff with other examples.

Sen. Richard Blumenthal (D-CT):

We'll ask that those materials be entered in the record without objection.

Eric Schnapper:

You may wanna see how many there are before you put them all on the record. <Laugh>.

Sen. Richard Blumenthal (D-CT):

We have a big record.

Eric Schnapper:

Fine. The terrorist attacks were so rooted in what was going on on the internet that when there was a rash of terrorist attacks in the state of Israel, they were known as the Facebook Intifada, and complaints were made to the social media companies without effect. In January 2015, the problem was so serious that there was a meeting with internet executives in which the representatives of the federal government were the Attorney General, the Director of the FBI, the Director of National Intelligence, and the White House Chief of Staff. And I urge the committee to ask for a readout of that meeting and what those companies were told. Most recently in the Twitter litigation, a group of retired generals filed a brief describing the critical role that social media had played in the rise of ISIS.

And again, I commend that brief to you. I think it's extremely informative of their informed military judgment about the consequences of what's been happening. The response of social media to this problem has often been indifferent and sometimes deeply irresponsible. In August and September of 2014, two American journalists were murdered by ISIS. They were brutally beheaded, and the killings were videotaped. When Twitter was called upon to stop publicizing those types of events, an official responded that one man's terrorist is another man's freedom fighter. That illustrates how fundamentally wrong the status of the law is today. And there's a good account of other comments like that from social media in a brief that was filed by Concerned Women for America, which describes efforts and responses of that kind. What we have learned from the past 25 years is that absolute immunity can breed absolute irresponsibility. Now, we understand that private corporations exist to make a profit, but they also have obligations to the rest of the country and to your constituents to be concerned about the harms they can cause. Google and Meta have made billions of dollars since the enactment of Section 230, and Twitter may yet turn a profit, but those firms have a long way to go before they emerge from moral bankruptcy. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you, Professor Schnapper. You argued before the United States Supreme Court. I think it's pretty fair to say that the court was struggling with many of these issues, and Justice Kagan said, quote, every other industry has to internalize the costs of misconduct. Why is it that the tech industry gets a pass? A little bit unclear. She went on to say, on the other hand, “I mean, we're a court, we really don't know about these things. You know, we are not like the greatest experts on the internet,” end quote. That became clear, I think, in the course of the argument, but it also emphasizes the importance of what we're doing here, because ultimately my guess is that the court will turn to Congress. But I think it's also worth citing a remark by Chief Justice Roberts when he said, and I'm quoting, the videos just don't appear out of thin air. They appear pursuant to the algorithms.

The Supreme Court understands that these videos, the content, very often is driven, it's recommended, it's promoted, it's lifted up sometimes in a very addictive way to kids, and some of it is absolutely abhorrent, content to which they have been, as you put it, Professor Schnapper, indifferent or downright irresponsible. And let me just make clear, Mr. Sullivan, we are not denying the benefits of the internet. Vast important benefits in interactive communication and the large number of participants. But the cases that have begun to make a start toward reining in Section 230, Henderson, described by Ms. Bennett, but before it Roommates and Lemmon, both cases that try to do carve-outs in a way, Henderson based on the material contribution test, show that we can establish limits without breaking the internet and without denying those benefits. Let me ask you, Ms. Franks, you know well the material contribution test. In your testimony, you make another potential distinction, or test, involving information versus speech. I wonder if you could comment on the material contribution test, whether it is sufficient or whether we need a different kind of standard incorporated into the statute.

Dr. Mary Anne Franks:

Thank you. As to the first question, I think the material contribution test would be useful if we had agreement about what it meant. And there seems to be a lot of uncertainty about how to apply that test. And so I would be concerned that that test would be difficult to codify. What I think on the other hand would be a promising approach would be to incorporate some standard along the lines of deliberate indifference to unlawful content or conduct. And to relate to the other part of your question, the reason why I've advocated for a specific amendment that would change the word information to speech is partly because a lot of the rhetoric that surrounds much of the defense of the status quo is that it's intended to defend free speech in some sort of general sense that the tech industry is able to leverage that halo of the First Amendment to say, if it weren't for us, you wouldn't get to have any free speech.

And I think that is suspect for many reasons, not least because the kind of speech that is often encouraged by these platforms and amplified is speech that silences and chills vulnerable groups. But it is also troubling because a lot of what gets invoked for Section 230's protections are not speech, or at least are not uncontroversially speech. And what I mean by this is that the Supreme Court has actually had to struggle over decades to figure out whether or not, for instance, an armband is speech, or whether the displays of certain flags are speech. And ultimately the Supreme Court has been quite protective of certain types of conduct that they deem to be expressive. But usually that takes some sort of explicit consideration and reflection as to whether this is expressive enough conduct to get the benefit of First Amendment protection. And by putting the word information and allowing that to be interpreted incredibly widely, what companies are able to do is to short circuit that kind of debate over whether or not what they're actually doing and what they're involved with is in fact speech. And I think that the clarification that it has to be speech, and that the burden should be on companies to show that what is at issue is in fact speech, would be very helpful.

Sen. Richard Blumenthal (D-CT):

Thank you. I have many more questions. I'm gonna stay within the five minute limit so that as many of my colleagues as possible can ask their questions. And turn now to Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you very much, Mr. Chairman. Professor Schnapper, let me start with you, thinking about the arguments that you made in both the Gonzalez case and then also in the Twitter case recently. In both of those cases, just to make sure that folks who are listening understand it, you were arguing on behalf of victims' families that were challenging the tech companies. Have I got that?

Eric Schnapper:

Basically, yes, sir.

Sen. Josh Hawley (R-MO):

So the Court, of course, is deliberating on this case, and we don't know exactly what they're gonna do. We'll have to wait to find out. But in both of these cases, help us understand your argument and set the scene for us. You are arguing that there is a difference. These tech companies have moved beyond merely hosting user-generated content to affirmatively recommending and promoting user-generated content. Is that right? That's correct. So explain to us the significance of that. What's the difference between claiming immunity for not just hosting user-generated content, but now claiming immunity for promoting and affirmatively recommending and pushing user-generated content?

Eric Schnapper:

Well, I think that's a distinction that derives from the wording of the statute. The statute seeks to distinguish between conduct of a website itself and materials that were simply created by others, and that distinction's clear on the face of the statute and the legislative history. Representative Lofgren at one point said holding internet companies responsible for defamatory material would be like holding the mailman, that was the language used at the time, responsible for delivering a plain brown envelope. What's happening today is far afield from merely delivering plain brown envelopes. These companies are promoting this material, and they're doing it to make money. At the end of the day, social media companies make money by selling advertisements. The longer someone is online, the more advertisements they sell. And they have developed an extraordinarily effective and sophisticated system of algorithms to promote material and keep people online. And it sweeps up cat videos and it sweeps up terrorist materials and it sweeps in depictions of tragically underweight young women with dreadful consequences. So that's the distinction we were drawing.

Sen. Josh Hawley (R-MO):

Tell us, you mentioned algorithms, and I think this is so important. Tell us why you think these algorithms, which didn't generate themselves. The algorithms are designed by humans, they're designed by the companies. In fact, the companies regard them as very proprietary information. I mean, they protect them with their lives, the essence of their companies, their business model in many cases. Tell us what legal difference under Section 230 you think these algorithms and algorithmic promotion make in these kinds of cases. Why is that such a key factor?

Eric Schnapper:

Well, the algorithms are the method by which the companies achieve their goal of trying to interest a viewer in a particular video or text or whatever. And it's done in a variety of ways. It's done with autoplay, so that you turn on one video and you start to see a series of others that you never asked for. It's done through little advertisements, known as thumbnails, which appear on a YouTube page. It's done with feed and newsfeed, where Facebook, in the hopes of keeping you online more, proffers materials which they think you'll be interested in.

Sen. Josh Hawley (R-MO):

So let me just ask you this. Does anything in the text of Section 230, as it was originally written, suggest in your view that platforms ought to get this form of super immunity for taking other people's content, hosting it, promoting it, and in promoting it, making money off of it? I mean, does the statute immunize them from that? Does anything in the text support the super immunity in that way?

Eric Schnapper:

I spent a very long hour and a quarter trying to answer that question a few weeks ago. <Laugh> We think the text does draw that distinction. And that brings back so many happy memories that you asked that. So yes, that's our view, but we're not here to retry the case. That is our view of the meaning of the statute. But it would be entirely appropriate for the committee to clarify that.

Sen. Josh Hawley (R-MO):

Let me just get to that point and finish my first round of questions with that. If Congress acts on this issue, what would be your recommendations for the best way to address this problem from a policy, legislative perspective? The problem you've identified in these cases about affirmative recommendations, how should we change the statute, reform the statute, to address this problem?

Eric Schnapper:

I'd prefer not to try to frame a legislative proposal as I sit here. It's complicated. And I'd be happy to work with your staff and my colleagues here, all of them, on that for you. But I think it would be inappropriate for me to start tossing out language as I sit here.

Sen. Richard Blumenthal (D-CT):

Thanks, Professor Schnapper. Thanks Senator Hawley. Senator Padilla.

Sen. Alex Padilla (D-CA):

Thank you, Mr. Chair. I wanna start out by asking consent to enter a letter into the record from more than three dozen public interest organizations, academics, legal advocates, and members of industry. A letter that notes, quote, in policy conversations Section 230 is often portrayed by critics as a protection for a handful of large companies. In practice, it's a protection for the entire internet ecosystem.

Sen. Richard Blumenthal (D-CT):

Without objection, it will be made a part of the record. Thank you.

Sen. Alex Padilla (D-CA):

As we heard from the Supreme Court, this is a very thorny and nuanced issue, and we need to make sure that we treat it as such. Because of Section 230, we have an internet that is a democratizing force for speech, creativity, and entrepreneurship. Marginalized and underserved communities have been able to break free of traditional media gatekeepers, and communities have leveraged platforms to organize for civil rights and for human rights. But it's also important to recognize that there is horrifying conduct and suffering that we can and must address. My first question is for Professor Franks. In your testimony, you call for internet companies to more aggressively police their sites for harassment, hate speech, and other abhorrent conduct, and you recommend changes to Section 230 to compel that conduct. I share your concerns about the prevalence of this activity online. Now, that said, I also know that many marginalized communities rely on platforms to organize. Many of these same communities fall prey to the automated and inaccurate tools employed by companies to enforce their content moderation policies at scale. Is it possible to amend Section 230 in a way that does not encourage providers to over-remove lawful speech, especially by users from marginalized groups?

Dr. Mary Anne Franks:

Thank you for this question. I'd first like to state that the current status quo, where companies essentially have no liability for their decisions, means that they can make any decisions that they would like, including ones that would disproportionately harm marginalized groups. And so, while it is encouraging to see that some platforms have not done so, some platforms have behaved responsibly, some have even made it a commitment to in fact amplify marginalized voices, these are all decisions that they are making essentially according to their own profit lines or according to their own motivations. And they can't really be relied upon as a guideline for how to run businesses that are so influential throughout our entire society. So when I suggest that Section 230 should be changed, I do want to again emphasize the distinction between immunity versus the presence of liability, which is to say Section 230 presumably provides immunity from certain types of actions.

That is not the same thing as saying you are responsible for those actions if you are found not to have immunity. So my suggestions are really directed towards asking the industry the same question that Justice Kagan has asked, which is, why shouldn't this industry be just as subject to the constraints of potential litigation as any other industry? So not that they should be treated worse, but that they should be treated the same as many other industries. And what that would hopefully do would be to incentivize these platforms to at least take some care in the way that they design their products and the way that they apply their policies, not to give them some sort of directive to say, this is how you have to do it, because you don't need a directive like that. Essentially, what you need is to allow companies to act in a certain way, and if they do so in a way that contributes to harm and there is a plausible theory of liability, they should have to account for that. But nothing preemptively that should allow them to say, we are excused from misconduct, or that we are guilty of this conduct, but to simply change the incentive so that they have to sometimes worry about the possibility of being held accountable for their contribution to harm.

Sen. Alex Padilla (D-CA):

Thank you. Next question is for Mr. Sullivan. Yesterday we had a subcommittee hearing on competition policy that focused on digital markets. I wanna make sure our legislative efforts to promote an open, innovative, equitable and competitive internet harmonize with the platform accountability efforts here. Notably, in response to questioning during oral argument in Gonzalez v. Google, Google's attorney acknowledged that while Google might financially survive liability for some proposed conduct presented as a hypothetical, smaller players most definitely could not. Can you speak to the role Section 230 plays in fostering a competitive digital ecosystem?

Andrew Sullivan:

Yes. Thank you for the question, because this is the core of why the Internet Society is so interested in this. This is precisely what the issue is. If there are changes to 230, it is almost certain that the very largest players will survive it, because they've amassed so much wealth. But a small player is gonna have a very difficult time getting into that market, and that's one of the big worries that I have. You know, the internet is designed with no permanent favorites, and if we change the rules to make that favoritism permanent, it's going to be harmful for all of us.

Sen. Alex Padilla (D-CA):

All right. Complex indeed. Thank you, Mr. Chair.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Padilla. I'm gonna call now on Senator Blackburn, who has been, like Senator Hawley, a real leader in this area. She and I have co-sponsored the Kids Online Safety Act, which would provide real relief to parents and children, tools and safeguards they can use to take back control over their lives, and more transparency for the algorithms. And then we will turn to Senator Klobuchar, who has been such a steadfast champion on reforming laws involving big tech, her Safe Tech bill, as well as the competition bills that you mentioned, Mr. Sullivan, that I've been very privileged to help her lead on. Senator.

Sen. Marsha Blackburn (R-TN):

Thank you, Mr. Chairman. And this is one of those areas where we have bipartisan agreement, and as the chairman said, I've worked on this issue of safety online for our children for quite a while, and for privacy for consumers when they're online, and data security as they've added more of their transactional life online. And Ms. Bennett, I think I wanna come to you on this. When I was in the House and chairman of Comms Tech there, I passed FOSTA-SESTA, and that has been implemented. And we had so much bipartisan support around that and finally got the language right and finally got it passed and signed into law. And some of the people that worked with us during that time have come to me recently and have said, hey, the courts are trying to block some of the victims' cases based on 230 language. And Professor, I see you nodding your head. Also, I would like to hear from you what you see as what they have ascertained to be the problem, how we fix it, if you think there is a fix, or is this just an excuse that you think they're using not to move these cases forward?

Jennifer Bennett:

Sure. So I actually don't litigate FOSTA cases. Was it Professor Franks who was nodding their head? I unfortunately don't know the answer to that for you, but I'd be happy to get it for you and could submit it afterwards.

Sen. Marsha Blackburn (R-TN):

I would appreciate that.

Sen. Marsha Blackburn (R-TN):

Professor.

Dr. Hany Farid:

Yeah, I'm not the lawyer in the room, I'm the computer scientist, but I will say I've seen the same arguments being made. I want to come back to something earlier too, 'cause I think this speaks to your question, Senator, about small platforms. Small platforms have small problems. They don't have big problems. In fact, we have seen in Europe, when we deploy more aggressive legislation, small companies comply quite easily. So I don't actually buy this argument that somehow regulation is gonna squash the competition, because they don't have big problems. Coming back to your question, Senator Blackburn, we also saw, and I think this is important as we're talking about 230 reform, the same cries of if you do this, you will destroy the internet. And it wasn't true. And so we can have modest regulation, we can put guardrails on the system, and not destroy the internet. I am seeing, by the way, and I don't know the legal cases, but I am seeing some pushback on enforcing FOSTA-SESTA. And I think that's something Congress has to take up.

Sen. Marsha Blackburn (R-TN):

Well, I think you're right about that. That's probably another thing that we'll need to revisit and update as we look at children's online privacy in COPPA 2.0, which Senator Markey, when we were in the House, led on. And then Senator Blumenthal and I have had the Kids Online Safety Act recently. Senator Ossoff and I introduced the Report Act, which would bolster NCMEC, and we think that's important to do. It would allow keeping CSAM info for a longer period of time so that these cases can actually be prosecuted. And it's interesting that one of the things we've heard from some of the platforms is that changes to Section 230 would discourage the platforms from moderating for things like CSAM. And I would be interested, from the professor, really from each of you on the panel, if you believe that reforming 230 would be a disadvantage, that it would make it more difficult to stop CSAM and some of this information. Because it's amazing to me that they think changing the law, being more explicit in language, removing some of the ambiguous language in 230, would be an incentive for the platforms to allow more rather than a disincentive.

Ms. Franks, I'll start with you.

Dr. Mary Anne Franks:

Thank you. I think the clarity that we need here about Section 230 and about this criticism is to ask which part of Section 230, because if the objection is about changes to C1, which is really the part of the statute that is being used so expansively, if the argument is that some of those changes would make it harder and would disincentivize companies from taking these kinds of steps, I'd say that's absolutely false. C2 quite clearly and expressly says this is exactly how you get immunity, by restricting access to objectionable content. So what that means, of course, is that if it's a Section 230 C1 revision, you still have C2 to encourage and to incentivize platforms to do the right thing. That being said, potential attacks on C2 could, in fact, have an effect on whether or not companies are properly incentivized to take down objectionable material. But there is, of course, also the First Amendment that would come into play here too, because as private companies, these companies have the right to take down, to ignore, to simply not associate with certain types of speech if they so choose.

Sen. Marsha Blackburn (R-TN):

Okay. Professor, anything to add?

Dr. Hany Farid:

I'll point out a couple of things here. I was part of the team back in 2008 that developed a technology called PhotoDNA that is now used to find and remove child sexual abuse material, CSAM. That was in 2008, after five years of asking, begging, pleading with the tech companies to do something about the most horrific content. And they didn't. It defies credibility that changes to 230 are gonna make them less likely to do this. They came kicking and screaming to do the absolute bare minimum, and they've been dragging their feet for the last 10 years as well. So I agree with Professor Franks. I don't think that this is what the problem is. I think they just don't wanna do it because it's not profitable.
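
(For illustration only: PhotoDNA itself is proprietary and the witness does not describe how it works. The general idea behind such matching tools, reducing an image to a compact perceptual fingerprint and checking it against fingerprints of known illegal images, can be sketched with a simple "average hash." The code below is a hypothetical sketch, not PhotoDNA, and assumes the Pillow library is available.)

```python
# Toy sketch of perceptual hashing ("average hash"); not PhotoDNA's actual algorithm.
from PIL import Image  # Pillow

def average_hash(path, size=8):
    # Shrink to a size x size grayscale image, then set one bit per pixel
    # depending on whether it is brighter than the mean. Visually similar
    # images produce similar bit patterns.
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming_distance(h1, h2):
    # Count of differing bits; a small distance suggests a near-duplicate image.
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_list(path, known_hashes, threshold=5):
    # A service can compare uploads against fingerprints of known harmful
    # images without ever storing those images themselves.
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```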

Sen. Marsha Blackburn (R-TN):

Thank you, Ms. Bennett. Anything to add? Welcome.

Jennifer Bennett:

I'll do what everybody should always do, which is agree with Professor Farid and Professor Franks. To the extent we're talking about C1, it shouldn't have any impact. If you're keeping the good faith defense for removing content, then that's still there, and no changes to C1 should impact that.

Sen. Marsha Blackburn (R-TN):

Thank you. Mr. Sullivan.

Andrew Sullivan:

While I agree with everything that has just been said, the truth of the matter is that this illustrates why this is so complicated, because when you open the legislation, the chances that only one little piece of it is going to get changed are not so high. And so the problem that we see is, you know, Section 230 is what gives the platforms the ability to do that kind of moderation. It's what protects them. And therefore, you know, we're concerned about the potential for that to change as well.

Sen. Marsha Blackburn (R-TN):

Okay. Professor?

Eric Schnapper:

I can't quite agree with everybody. It's gotten a little more complicated, but I think you can reform C1 without creating disincentives to remove dangerous material. I think that's sort of a makeweight argument. I think you have to be careful about changes to C2, although I understand that there are issues there. But I just want to bring home a point, I guess it was Professor Farid who made it: spending money to remove dangerous material from the website is not a profit center. And I think Elon Musk has explained that to the country in exquisite detail. There are no financial incentives to avoid harm, you don't make money by doing it, and you've got to change those incentives.

Sen. Marsha Blackburn (R-TN):

I'm way over and I thank you for your indulgence.

Sen. Richard Blumenthal (D-CT):

Thanks a lot. Senator Blackburn. Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

Oh, thank you very much. And thank you to both you, Chair Blumenthal, and Senator Hawley for holding this hearing, and Senator Blackburn for her good work in this area. So I was thinking: Section 230 was enacted back in 1996. Probably there's just one or two remaining members that were involved in leading that bill, when we had dial-up modems accessing CompuServe. That's what we're dealing with here. To say that the internet of 2023 is different from what legislators contemplated in 1996 is a drastic understatement. And yet, as I said at our antitrust subcommittee hearing yesterday, the largest dominant digital platforms have stopped everything that we have tried to do to update our laws to respond to the issues we are seeing, from privacy to competition. And like Senator Blumenthal, with the exception of the human trafficking work that I'd been involved in early on, I was not crying for major changes to Section 230 either, at the beginning.

And part of what's brought me to this moment is the sheer opposition to every single thing we tried to do. Even when we tried, Lindsey Graham and I, and before that Senator McCain, the Honest Ads Act to put in disclaimers and disclosures, we got an initial objection and then eventually some support, but it still hasn't passed. The competition bills, the work even on algorithms. The simple thing is that we should do some reforms to the app stores, this idea that they shouldn't be self-preferencing their own products when they have a 90% or a 40% market share, depending on which platform it is. The hypocrisy of things that we were told would break the internet that we now see them agreeing to do in Europe. That is the final dagger as far as I'm concerned, and why you see shifting positions on Section 230.

Obviously this is also a cry for some ability of the companies to come forward and actually propose some real reforms we can put into law, because so far it's just been buying it all off with money, commercials, ads, attacking those of us who have been trying to make a difference. So my question, I guess to you, Professor Farid, first: they've said, trust us, we've got this, for so long. And with the way the internet companies amplify content and profit off of it, as Senator Hawley was explaining, allowing criminal activity to persist on their platforms, we clearly need reforms. And I always think of it like this: if you yell fire in a crowded theater, you know, the theater multiplex, as long as they have nice exits, they aren't gonna be liable. But if they broadcasted it in all their theaters, that would be the algorithms. That would be a different story. You noted in your testimony that some legal arguments have conflated search algorithms with recommendation algorithms. Can you explain how these algorithms differ and their role in amplifying content on platforms?

Dr. Hany Farid:

Good, thank you, Senator. So if you go to Google or Bing and you search for whatever topic you want, your interests and the company's interests are very well aligned. The company wants to deliver to you relevant content for your search, and you want relevant content, and we are aligned, and they do a fairly good job of that. That is a search algorithm. It is trying to find information when you proactively go and search for something. When you go to YouTube, however, to watch a video from a link that I sent you, you didn't ask for them to queue up another video. You didn't ask for the thumbnails down the right hand side. You didn't ask for any of that. And in fact, you can't really turn any of that off. That's a recommendation algorithm. And that is the difference from the search algorithm, where the company's interests and your interests are aligned.

That is not true of recommendation algorithms. Recommendation algorithms are designed for one thing: to make the platform sticky, to make you come back for more. Because the more time you spend on the platform, the more ads are delivered, the more money they make. And if we're talking about harms, we've talked about terrorism, we've talked about child sexual abuse, we've talked about illegal drugs and illegal weapons, we should also talk about things like body image issues. We should talk about suicidal ideation. Go to TikTok, go to Instagram, start watching a few videos on one topic, and you get inundated with those. Why? That's because the recommendation algorithm is vacuuming up all your personal data and trying to figure out what it is that is gonna bring you here over and over again. Last thing on this issue, because it goes to knowledge: the Facebooks of the world, the YouTubes of the world know that the most conspiratorial, the most salacious, the most outrageous, the most hateful content drives user engagement. Their own internal studies have shown that as you drive content from cats, to lawful, to awful but lawful, and then across the violative line to illegal, engagement goes up. And so the algorithms have learned to recommend exactly the problematic content, because that is what drives user engagement. We should have a conversation about what is wrong with us, why do we keep clicking on this stuff, but the companies know that they are driving the most harmful content because it maximizes profit.
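
(For illustration only: the witness does not present code, but the distinction he draws, search ranking aligned with the user's query versus recommendation ranking optimized for time on platform, can be sketched roughly as follows. All item names and scoring fields here are hypothetical, not any platform's actual system.)

```python
# Hypothetical sketch contrasting relevance-ranked search with
# engagement-optimized recommendation.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance_to_query: float       # how well the item answers what the user asked for
    predicted_watch_minutes: float  # model's guess at how long the user will stay

CATALOG = [
    Item("How to fix a leaky faucet", 0.9, 4.0),
    Item("Outrage compilation #12",   0.1, 22.0),
    Item("Plumbing basics",           0.8, 6.0),
]

def search_rank(items):
    # Search: the user asked a question; order by relevance to that query.
    return sorted(items, key=lambda it: it.relevance_to_query, reverse=True)

def recommend_rank(items):
    # Recommendation: the user asked for nothing; order by predicted
    # time on platform, which is what drives ad impressions.
    return sorted(items, key=lambda it: it.predicted_watch_minutes, reverse=True)

if __name__ == "__main__":
    print("Search order:   ", [it.title for it in search_rank(CATALOG)])
    print("Recommend order:", [it.title for it in recommend_rank(CATALOG)])
```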

Sen. Amy Klobuchar (D-MN):

Thank you. Professor Franks, kind of along those lines circuit courts have interpreted Section 230 differently with some saying that social media and internet companies are not liable for content that could only have been created with the tools they designed. However, unlike most other companies that make dangerous or defective products, internet and social media companies are often shielded by Section 230 from cases that involve design defects. Should Congress consider reforming Section 230 to allow for design defect cases to move forward when a site is designed in a way that causes harm?

Dr. Mary Anne Franks:

Thank you. As I said with the other suggestion about revisions to Section 230, the concern I would have is about how exactly to codify that sort of standard. And I think that the impulse there is a good one. I think that the distinction between faulty design as opposed to simply recommendations or making access to other content, I think that is a solid distinction to make. My concern is that an efficient way to reform Section 230 is to try not to think about discrete categories of harmful content or conduct, but rather to talk about the underlying fundamental problem with Section 230, which is this idea that you should provide immunity in exchange for basically doing nothing, or even for accelerating or promoting harmful

Sen. Amy Klobuchar (D-MN):

Content. Yeah, I agree. I was just trying to, you know, throw it out there. <Laugh> The last thing: how would reforms to Section 230, Professor Franks, create a safer internet for kids, and where should Congress focus its efforts? We have a lot of things going on; talk a little bit about why that could be a priority.

Dr. Mary Anne Franks:

Well, one of the reasons it's a high priority is exactly for the reasons that Professor Farid has been speaking to, that those types of behavioral changes that we see are essentially an intended consequence, an intended strategy on the part of companies to keep people on their platforms longer, to keep them engaging with those platforms. These are dangerous for adults, but they're particularly pernicious for children. This is a kind of approach that is essentially trying to encourage a form of addiction to these services. And it is part, I think, of what explains some of the very heightened rhetoric on the part of the tech industry and those who are convinced that the status quo is the best way forward. People identify so closely with their social media platforms at this point that any changes that are suggested to Section 230 feel like personal attacks. And I think that that is a testament to how much Google and Facebook and TikTok and every other company we can think of is really striving, and succeeding, to make us feel that we cannot live without these products. That they're not products that we are using, but they're using us. And so I think it is of particular importance and concern when this kind of effect is being had on younger and younger children, who have had really no time to develop their own personalities and their own principles. Okay. Thank you.

Sen. Richard Blumenthal (D-CT):

Thanks. Senator Klobuchar. Senator Hirono.

Sen. Mazie Hirono (D-HI):

Thank you, Mr. Chairman. Well, it's <laugh> it's very clear that we want to make changes to Section 230, but there are always unintended consequences whenever we attempt to do that. There has been a lot of discussion of unintended consequences arising out of SESTA/FOSTA. Sex workers have raised legitimate concerns about the consequences of that legislation and its effect on their safety. But that does not mean that we should shy away from reforming Section 230 to protect other marginalized groups, just that we need to be very intentional about doing so and pay attention to the potential unintended consequences. This is for Professor Franks. Can you explain how the experience of SESTA/FOSTA should inform the types of reforms we should pursue?

Dr. Mary Anne Franks:

Thank you. I do think that SESTA/FOSTA is a good and instructive example of what can go wrong when Section 230 is amended. It of course had the very best of intentions. There were concerns, however, throughout the process that were coming from some of the individuals and groups who were saying, this kind of change is going to affect us most, and please listen to our concerns about how it should be done. So I think that one lesson there is definitely to identify and to bring into the conversation the individuals who are most likely to be impacted by any form of reform. That's lesson one. I think the other lesson is that this shows the dangers of attempting to highlight a certain category of bad behavior and try to carve that out in the statute, as opposed to, as I said before, identifying the fundamentally flawed nature of Section 230 as it stands right now, as it's interpreted by the courts, and trying to fix this on a more generalized level. Because I think the more specific we try to get with this, the more likely it is that we are going to make mistakes and have unintended consequences.

Sen. Mazie Hirono (D-HI):

Well, when you talk about not focusing on certain types of bad behaviors, but looking at sort of the general problem with Section 230, how would you make the kind of changes that you're talking about to protect vulnerable communities?

Dr. Mary Anne Franks:

The two forms of amendment that I particularly suggest are, one, changing the word information in C1 to refer to speech instead. And the other is to limit C1's protections to those who are not engaged in knowledgeable promotion of or contribution to unlawful content. I've suggested that the language there should be a deliberate indifference standard, because that is a standard that is used in other forms of third party liability cases and areas. And so what I think would be useful about that approach is that this is not an approach that's going to try to take one type of harm and say that that is more harmful than something else, but rather to say, this is really how this form of liability tends to work in other industries and in other places. And to be clear, not just industries that have very little to do with what the tech industry supposedly does, namely sometimes speech, but actually the industries that are very much about speech, including newspapers and television broadcasters and universities, all of whom have to be responsible at a certain level if they are deliberately indifferent to unlawful conduct.

Sen. Mazie Hirono (D-HI):

I think you were asked this question in relation to the Safe Tech Act, which does talk about protecting speech rather than information. So are the other panelists aware of the provisions of the Safe Tech Act? And if so, would any of you agree that protecting speech is okay but, you know, protecting information is not where we wanna go? That may be one of the approaches that we should take to reforming Section 230. Would any one of the other panelists like to weigh in on this?

Eric Schnapper:

This may be too complicated to solve quite that way. Turning to Senator Klobuchar's point, it's not difficult to imagine lawyers putting back into the word speech everything that the committee thought it was taking out. Mm-hmm. <Affirmative>

Sen. Mazie Hirono (D-HI):

Darn those lawyers. Okay. You know, I realize that if we're gonna make that kind of change, I think we need to provide more guidance as to what we mean by what we wanna protect. Again, for Professor Franks: one of the concerns I've had, particularly after the Supreme Court struck down a 50-year precedent and the right to abortion, is that reproductive health data collected by these tech platforms may be used to target individuals seeking these services. So these apps and websites are collecting location data, search histories, and other reproductive health information. And last Congress, I introduced the My Body, My Data Act to help individuals protect private sexual health data. However, what I'm understanding is that though that act creates a private right of action to allow individuals to hold regulated entities accountable for those violations, these tech platforms can currently just hide behind Section 230, even when put on notice that this information is being used for nefarious purposes, unintended purposes. Based on your extensive legal experience, is there a way to hold accountable tech companies disseminating reproductive health information from behind the shield of protection?

Dr. Mary Anne Franks:

I think it's possible. I think it would require moving away from the dominant interpretation of Section 230 as it currently stands, because that view of Section 230, that reading of C1 as providing some sort of unqualified immunity to these platforms, really makes it difficult for any individual who was harmed in this way to even get their foot in the courtroom door. And so I think what we would need at this point is either a very wise decision from the Supreme Court about how to properly interpret C1, and/or we would need Congress to clarify, once again, that C1 can be modified to make sure that it is clear that these companies can in fact be sued if there is a plausible theory of liability and a causal connection between what those platforms did and the ultimate harm that results to a plaintiff.

Sen. Mazie Hirono (D-HI):

It's not that easy for the plaintiff to show that, but she should have that opportunity, I would say.

Dr. Mary Anne Franks:

Exactly.

Sen. Mazie Hirono (D-HI):

Thank you. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hirono. We may have other members of the subcommittee or our committee come, but why don't we begin a second round of questions now, and we can interrupt to accommodate them when they come. Let me just say to Dr. Franks, I appreciate your comments about SESTA. As one of the principal authors and co-sponsors, we endeavored to listen, and we will change the statute if it has unintended consequences. And we will listen in the course of ongoing Section 230 reform, whether it's the EARN IT Act that Senator Graham and I are co-sponsoring, or the other proposals a number of us have, as I mentioned, Senator Klobuchar with the Safe Tech Act, of which Senator Hirono is a co-sponsor, as am I. Senator Hawley has a number of very promising proposals. But I think we should be very clear about what is really going on here.

And Professor Schnapper, I think you made reference to the money involved. The fact of the matter is that Big Tech is making big bucks by driving content to people knowing of the harms that result. We saw that in the documents that were before the Commerce Subcommittee on Consumer Protection that helped support the Kids Online Safety Act. More eyeballs for longer periods of time mean more money. And Big Tech may be neutral or indifferent on the topic of eating disorders, suicide, or bullying, or other harms. They may not wanna take out an ad saying to engage in these activities, but they know that repeating it and amplifying it and, in effect, addicting kids to this kind of content has certain consequences. And as Justice Kagan said, why should Big Tech be given a pass? A Boeing that has a faulty device that causes the plane to nosedive, or a company like GM that has a defective ignition switch that causes the car to stop and go off the road, they're held responsible. Why shouldn't Big Tech be held responsible, whether the standard's deliberate indifference or some other standard? It may be difficult, it may be complicated, but it's hardly impossible to impose a standard. So let me ask you, Mr. Sullivan, what's your solution here? I'm asking Big Tech to be part of the solution, not just the problem.

Andrew Sullivan:

Well, let me be clear that I can't speak for Big Tech, because I work for a nonprofit. But our concern is really what users need. Our concern is really what people need. And what we are trying to point out is that 230 is the thing that allows the internet to exist. So I am not here to say that the behavior that we see, the behavior of various large tech corporations, the behavior of some platforms, you know, that are perhaps outside of the United States for that matter, that those are all unproblematic. There are definitely problems there. What I'm suggesting is that attacking this narrow piece of legislation is going to harm the internet in various ways. And so if you want to do something about large corporations, for instance, then you've got an issue having to do with industrial policy. It's not an issue to do with the internet. And it seems to me that, you know, these concerns are legitimate ones, but I think we're trying to go after them with the wrong tool.

Sen. Richard Blumenthal (D-CT):

Well, the tool can't be just more competition. The tool can't be more privacy, as you've suggested in your opening comments. It has to be something dealing with this harm. And as I have said before, and I'll say it again, the carve-outs, the limits that have been imposed so far, whether it's Henderson or Lemmon or other case law, haven't broken the Internet. I don't think you can argue that Section 230 as it exists right now is essential to continuing the internet. That's not your position, is it?

Andrew Sullivan:

I think that Section 230 is a critical part of keeping the internet that we have built, and the reason I think that is because it protects people in those interactions; it protects them from that kind of third party liability. I am not here to suggest that it is logically impossible to find a particular carve-out from 230 that will help solve some of these problems. I haven't seen one yet, and so I'm very skeptical that we're gonna get one, but I am not here to suggest that it's logically impossible. I'm just very concerned that we understand the potential to do a lot of harm to the internet. When people say destroy the internet, I think that this, you know, sounds like an on/off switch, but that's not how the internet works. We can drift in the direction of losing the advantages of the internet, losing the interactivity, losing the ability of users to have the experience that they need online, in favor of a centrally controlled system. And that is the thing that I'm mostly concerned about.

Sen. Richard Blumenthal (D-CT):

So take the Kids Online Safety Act, which simply requires these tech platforms to enable and inform parents and children that they can disconnect from the algorithms. And if they do something, you know, let's use a non-legal term, something really outrageous, and they violate a basic duty of care, which under our common law is centuries old, they can be held liable. And as Senator Hawley said so well, they get a day in court. That's fundamental to our system. I don't understand why there would inevitably be harms as a result of that kind of change.

Andrew Sullivan:

The concern that I have is that, you know, in the United States, it's easy to initiate a lawsuit, and it's expensive and complicated to defend against it. So very, very large players, the incumbents that we have today, the people who are, you know, the richest corporations in the history of capital, they have the resources to do this. But if you are, like, a community website, or a church website, and you allow discussions on there, and somebody comes on and they start doing terrible things, you're going to end up with the exact same liability, and that will gradually turn down the ability of the internet to connect people to one another. That's what I'm concerned about. I mean, I'm not carrying any water for a giant tech corporation. I don't work for one, and I can't really, you know, influence their direction. But my point is that the way 230 works right now, it protects all of the interaction on the internet. And if we lose that, we will most certainly lose the internet. We'll still have something we call the internet, for sure, but it will not be the thing that allows people to reach out and connect to one another. All of these terrible harms, all of these terrible things that happen online, there are corresponding examples of people getting help online.

Sen. Richard Blumenthal (D-CT):

I appreciate your concern about the community websites, but they're not the ones driving suicidal ideation or bullying or eating disorders to kids. And I understand that Section 230 dates from a time when the internet was young and small. Nobody's forever young, and these companies are no longer small. They're among the most resourced of any companies in the history of capitalism. And for them, of all companies, to have this free pass, as Justice Kagan called it, seems to me not only ironic but unacceptable. And again, what's at stake here ultimately are the dollars and cents that these companies are able to make by elevating content. And I am sort of reminded of Big Tobacco, which said, oh, we're not interested in kids, we don't advertise to them, we don't promote our products to children. And of course their files, like Facebook's files, showed just the opposite. They knew what they were doing in order to raise their profits, and they put profits over those children. So I think, again, I'm hoping, not talking to you personally, but to the internet world out there and the tech companies that have the resources and responsibility, that they will be a constructive part of this conversation. Thank you.

Sen. Josh Hawley (R-MO):

You know, let me just pose this question to the panel. Maybe I should start with you, Professor Franks. But isn't the best way to address the many abuses that we're seeing by the big tech companies, and we've talked about some of 'em today, Professor Farid, you mentioned CSAM, that's got to be one of the leading ones. I'm a father of three children, all of 'em very small; my oldest is 10. I worry about this every day as they get old enough to want to be on the internet. There are other abuses, the videos that promote suicide, the videos that promote violence, and we could go on and on. Isn't the best way to deal with that just to allow people to get into court and hold these companies accountable?

Here's what I've learned in my short time in the Senate: we can write regulations, and we can give the various regulatory agencies, whether it's the FTC or others, the power to enforce them. But my experience is, my observation is, that the big tech companies tend to own the regulators at the end of the day. I mean, no offense to any of the regulators who are watching this, but you know I'm right. At the end of the day, it's a revolving door. They go to work for the big tech companies, they come out of that employment and go into the government. And it's just amazing how the regulators always seem to end up on the side of tech. And for that matter, even when they do fine tech, even if it's a big fine, Meta got fined, I think, a billion dollars a couple of years ago, they didn't care. It didn't change anything.

That's nothing to them. Their revenues are massive, their profits are massive. But what strikes fear into their hearts is if you say, oh, but we'll allow plaintiffs to go to court. Take the Big Tobacco example. What finally changed the behavior of Big Tobacco? Lawsuits. Normal people got into court, class action suits. So just to simplify this, isn't that what we're really talking about today? I mean, the thing that we ought to be doing is figuring out a way, and you proposed a way, Professor Franks, I was just reviewing your written testimony here a second ago with the changes you would make to the statute, but the gist of that is to create a system and a standard that is equitable, that is the same across the board for everybody. You don't single out one particular piece of conduct; you would just change the standard. But the point of it is people would be able to use this standard to get into court, to have their day in court, and to hold these companies accountable. Is that fair to say? Is that too simplified?

Dr. Mary Anne Franks:

I think that is fair to say. As you're pointing out, litigation is one of the most powerful ways to change an industry, and it's not just because of the ultimate outcome of those cases, but also because of the discovery in those cases. What we get to see, instead of having to wait for whistleblowers or wait for journalists, is actually the documents themselves, internal documents about what you knew, when you knew it, and what you were doing. And so I think that, exactly for this reason, we have to be interpreting Section 230 as not providing some kind of supercharged immunity that no other industry gets, but actually, yes, allow people who have been harmed to get into court and make their claim. They may not prevail, they might, but at any event, we will see some public service also in terms of the discovery process that shows us what these companies are doing, and to the tune of how much money. Because a lot of what is being said is about the distinctions between the big corporations and the little ones. How did the big corporations get so big? Because they didn't get sued. And so if we care about that kind of monopolization, if we care about that kind of disproportionate influence, what will benefit the entire market is actually letting those companies be sued if they have caused harm.

Sen. Josh Hawley (R-MO):

Yeah, I couldn't agree more. And with all due respect to you, Mr. Sullivan, I know you have an obligation to represent the people for whom you work. But I would just say it's pretty hard to argue that the social media landscape, the social media industry right now, for example, is a good example of a competitive industry. It's not particularly competitive at all. It's controlled overwhelmingly by one or two players. It is the very quintessence of monopoly. I mean, you have to go back over a century in this country's history to find similar monopolization. And to Professor Franks' point, I think you can make a very strong argument that Section 230 and the incredible immunity that it has provided for a handful of players has contributed to this monopolization. It is in effect a massive government subsidy to the tune of billions of dollars a year.

So I'll just say that, listen, as a conservative Republican, I mean, I wanna be clear about this, I am a conservative Republican. I believe in markets. I am skeptical of massive regulatory agencies, but one of the reasons I'm skeptical is I just see them get captured time after time after time. But I believe in the right of people to be heard and to be vindicated and to have their day in court. And I think the best way you protect the little guy, and give him or her the power to take on the big guy, is to allow them into court, let them get discovery, let them hire a tort lawyer, let them bring their suits. And you're right, Professor Franks, maybe they win, maybe they don't, but that's justice, right? It'll be a fair, even-handed standard. The last thing I would say is that I think that's actually much closer to what Section 230, when Congress wrote it, was meant to be.

If you look at the language of 230, you know, it's been interpreted by courts to provide this super immunity, as you were saying, Professor Franks. I think it's very arguable, and this is the argument I made in my amicus brief in the Google case, that what it really was meant to do is preserve distributor liability. There's a baseline at the common law of distributor liability that says a distributor who doesn't originate the speech but merely hosts and distributes it can't be liable for somebody else's speech, and shouldn't be. I think we all agree on that. They're liable only if they promote speech that they know is unlawful or should have known is unlawful, regardless of the nature of the speech. Which gets to your point, Professor Franks: whatever category you want, if it's unlawful and they know it or they should have known it, then, under traditional distributor liability, then and only then can they be liable.

But what has happened is the courts have obviously swept that away completely, so that now, too, 230 bars even that form of liability. Surely we can agree that there should be, whether it's the standard you propose, Professor Franks, which I think is pretty close actually to traditional distributor liability, some way to allow people to have their basic claims vindicated, to hold these companies accountable when they are actively promoting harmful content that they know or should know is harmful. And I would just submit that's the best way forward here, and I look forward to working with the chairman here as we continue to gather information and try to put forward proposals that will do that in a meaningful way. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks a lot, Senator Hawley. I have another question or two relating to AI. We haven't really talked about it specifically in much detail, but obviously Americans are learning for the first time, with growing fascination and some dread, about ChatGPT and Microsoft Bing passing law school exams and making threats to users, both fascinating and pretty unsettling in some instances. And some of what's unsettling involves potential discrimination. There are studies from Northeastern University, Harvard, and the nonprofit Upturn finding that some of Facebook's advertising discriminates on the basis of gender, race, and other categories. Maybe I could ask the panel, beginning with you, Professor Franks, whether these threats are distinct or whether they're part of this algorithmic threat that we see generally involving some of these tech platforms.

Dr. Mary Anne Franks:

With the caveat that I'm not an AI expert by any means, I would reiterate my position. Rather than approaches that try to parse whether something is an algorithm, or whether something is artificial intelligence, or whether something is troubling from a different perspective, I would rather the conversation be about, again, the fundamental flaws in the incentive structure that Section 230 promotes, rather than trying to figure out whether one particular category or another is presenting a different kind of harm. I think the better approach is to look at the fundamental incentive structure and ensure that these companies are not getting unqualified immunity.

Sen. Richard Blumenthal (D-CT):

Professor Farid.

Dr. Hany Farid:

There's a lot that can be said on this topic, Senator Blumenthal. I'll say just two things here. One is, we are surely but quickly turning over important decision making to algorithms, whether they are traditional AI or machine learning or whatever it is. So for example, the courts are using algorithms to determine if somebody is likely to commit a crime in the future and perhaps deny them bail. The financial institutions have for decades used algorithms to determine whether you get a mortgage or a small business loan. Medical institutions, insurance companies, employers: it is now more likely than not that the young people sitting behind you, when they go to apply for a job, will sit in front of a camera and have an AI system determine if they should even get an interview or not. And I don't think it's gonna surprise you to learn that these algorithms have some problems.

They are biased. They're biased against women, they're biased against people of color. And we are unleashing them in these black box systems that we don't understand. We don't understand the accuracies, we don't understand the false alarm rates, and that should alarm all of us. And I haven't gotten to the ChatGPTs of the world yet, or the deepfakes yet. So the second thing I wanna say about this is that there's gonna be something interesting here around 230 and generative AI, as we call it. Generative AI is ChatGPT, OpenAI's DALL-E, image synthesis, and deepfakes. Let's say we concede the point that platforms get immunity for third party content. But if the platforms start generating content with AI systems, as Microsoft is doing, as Google is doing, and as other platforms are doing, there's no immunity.

This is not third party content. If your ChatGPT convinces somebody to go off and harm themselves or somebody else, you don't have immunity. This is your content. And so I think the platforms have to think very carefully here. More broadly, we need to think very carefully about how we are deploying AI very fast and very aggressively. The Europeans have been moving quite aggressively on this. There is legislation being worked on in Brussels to try to think about how we can regulate AI while encouraging innovation, but also mitigating some of the harms that we know are coming and have already come to our citizens.

Sen. Richard Blumenthal (D-CT):

Thank you very much. Important points, Ms. Bennett?

Jennifer Bennett:

Yes. Yeah, so your question is, is there anything fundamentally different with respect to 230 with these different kinds of technologies? And with the same caveat as Professor Franks gave, which is that I'm not an AI expert, you know, I think the answer is no. The fundamental distinction here is, are we trying to impose liability on these companies for something someone else said, so because Facebook allowed somebody to post something, or is the harm really caused by something the company itself is doing? You know, there have been claims against Facebook, for example, that it distributes ads on insurance and housing and things like that by race or by gender or by age. And the problem there isn't the housing ad. The housing ad is fine. The problem is the distribution by race or by age or by gender; the harm is being caused by what the platform is doing, not by the content. And I think that's true, you know, you see that ChatGPT has similar principles. The harm isn't what people are putting into ChatGPT, it's what ChatGPT might spit out. And there again, it's the conduct of the platform itself. And so I think the principles apply, you know, no matter what the technology is: this distinction between content that somebody else puts on the internet and what the platform itself has done.

Sen. Richard Blumenthal (D-CT):

Mr. Sullivan.

Andrew Sullivan:

I think that's broadly right. And more importantly, I don't really think that AI is fundamentally a part of the internet. It's just a thing that happens to use it a lot of the time. But the reality under those circumstances is it's another piece of content. Somebody else has made it. And so for 230 purposes, I don't think it's part of the conversation.

Sen. Richard Blumenthal (D-CT):

Professor.

Eric Schnapper:

I'd just add that it's my understanding that to some degree AI is in place now; that is, these algorithms are constantly being tweaked to be more effective. And some of it's done by software engineers, but some of it is machine learning: as the software discovers what works and what doesn't, it changes what it does. That's been going on for some time.

Sen. Richard Blumenthal (D-CT):

Thank you. Well, I think that last question shows some of the complexity here. I'm not sure we all agree that AI is totally distinguishable. I guess it depends on how you define AI and algorithms. But I do think that we can make a start on reforming Section 230 without waiting for a comprehensive or precise definition of AI. And I want to thank this panel. It's been very, very informative and enlightening and very, very helpful. We've had a good turnout, and many of you have come from across the country; it's really appreciated.

And the record is gonna be held open for one week in case there are any written statements or questions from members of the subcommittee. I really do thank you and we are gonna be back in touch with you. I am sure as we proceed, but have no doubt we are moving forward. I think the bipartisan showing here and the bipartisan unanimity that we need change is probably the biggest takeaway. And I think we are finally at a point where we could well see action. Can't predict it with certainty. Some of it will depend on the cooperation from the tech platforms and social media companies that have a stake in these issues. But I'm hoping they will be constructive and helpful. And you have certainly been all of that today. Thank you so much. This hearing is now adjourned.
