Transcript: Senate Hearing on Platform Transparency

Justin Hendrix / May 5, 2022
Stanford professor Daphne Keller testifies to the Senate, May 4, 2022

On Wednesday, May 4, the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law hosted a hearing titled "Platform Transparency: Understanding the Impact of Social Media." The hearing featured five expert witnesses:

  • Brandon Silverman, Founder and Former CEO, CrowdTangle (written testimony)
  • Nate Persily, James B. McClatchy Professor of Law, Stanford Law School (written testimony)
  • Daphne Keller, Director, Program on Platform Regulation at the Cyber Policy Center, Freeman Spogli Institute for International Studies at Stanford University (written testimony)
  • Jim Harper, Nonresident Senior Fellow, American Enterprise Institute (written testimony)
  • Jonathan Haidt, Thomas Cooley Professor of Ethical Leadership, New York University Stern School of Business (written testimony)

What follows is a lightly edited transcript. Please refer to the hearing video when quoting the speakers.

Senator Chris Coons:

This hearing will come to order. I'd like to thank all of our witnesses for participating today. I'd also like to thank Ranking Member Sasse and his staff for working with mine to put this hearing together on a consensus basis. This is a critical topic, and I'm looking forward to a productive conversation today. Social media companies, as we all know, play an enormously important role in our lives and in our society. They have helped to connect billions of people across the world and deliver a whole range of new and innovative services in ways that provide tremendous value to individuals, families, and communities. At the same time, there are critical questions about the potentially negative effects these platforms may have: concerns about propagation of misinformation, incitement to violence, or serious impacts on self-concept or mental health.

A central issue that we face in confronting these questions, whether as consumers, as parents, or as policymakers, is: what are the facts? What are the actual facts? Right now, we don't really have a well-grounded, data-driven understanding of how social media platforms are impacting us and our society. The reality is that the vast expanse of human interaction that occurs on any given large social media platform can really be studied and analyzed only by that platform itself. It's a problem, and it's why, in my view, we need to promote greater transparency.

Transparency is important for consumers and the public who should know how a platform they're choosing to engage with is potentially affecting them. Transparency is important for policymakers so we can better write whatever rules may regulate these platforms going forward. And transparency, I would argue, is important for the platforms themselves because transparency itself is a positive means to promote change. Greater transparency can address the deeply rooted market failure of imperfect information. We've seen how public disclosure of internal research or other data can create a conversation, can alter consumer behavior, or lead to regulatory scrutiny, and then initiate market pressure that leads platforms to adjust their behavior in potentially positive ways.

On top of all that, I think pursuing greater transparency is and should be nonpartisan. All sides, all points along the political spectrum, have questions that can be answered by greater transparency. Some of the most pressing topics, like the effects platforms have on our children's mental health, transcend political affiliation. This hearing will explore how to bring about greater transparency. What are the right pathways? What are the questions that we need to ask and answer?

Late last winter, I released, with Senators Portman and Klobuchar, a discussion draft of the Platform Accountability and Transparency Act, known by the catchy acronym PATA. That draft bill would provide for transparency in three ways. First, a provision to require platforms to make certain key information available to the public on an ongoing basis: high-level information about ads, algorithms, and widely viewed content. Second, a mechanism for data access by truly independent researchers through the National Science Foundation (NSF) and Federal Trade Commission (FTC).

And third, a safe harbor, so that researchers conducting research in the public interest need not fear legal action from platforms. We released this bill as a discussion draft, knowing that it raises important questions. I look forward in this hearing to discussing those questions, as well as many others, alongside my ranking member. I think those questions include making sure we're striking the right balance between privacy and transparency, and appropriately weighing what our colleagues are doing on these issues in Europe and what legislation may arise in the states. Of course, PATA is not the only bill out there to address these issues. I want to thank some of my colleagues, Senators Blumenthal and Blackburn, for example, for their work on the Kids Online Safety Act, also relevant to this discussion. With Senator Sasse's cooperation, we've assembled an all-star panel with a diversity of views and perspectives to grapple with these questions today, and I look forward to it. I'll introduce you shortly, but now I'd like to turn to my colleague and friend, Senator Sasse.

Senator Sasse:

Thank you, Chairman, and thank you to all five of our witnesses for participating today. I want to applaud Chris for his leadership and his team for focusing on an incredibly important issue. We're here to talk about transparency on social media, which is another way to say: how do we understand the role social media platforms play in our lives, in our families, and in society more broadly? There's no question that social media has changed American society. My colleagues on both sides of the aisle have serious concerns about how social media has changed our country, although their concerns are often quite diverse in what they think they're aiming at. My Democratic colleagues are often concerned with whether social media platforms take down enough posts that they think are harmful. And many of my colleagues on this side of the aisle spend a lot of time arguing that social media companies should stop censoring speech with which they differ.

And most of the time, the people making these arguments are talking far past each other. And the sense that there's consensus about what to do is really just consensus that something is wrong, because we can't even agree on some of the basic facts about what content moderation is, how it works, and how it should work. It is not surprising that we can't see eye to eye on these facts, because, frankly, we don't understand the platforms very well. Congress is often undereducated about how these things work; you're going to help us learn more today, and we're grateful for that. Social media companies have opaque policies that are applied inconsistently, and there often seems to be one set of standards for the wealthy and the well-connected, and another set of standards for everyday Americans. As I've said before, I'm pretty skeptical that turning to the government is necessarily going to fix many of these problems, but there are things we should certainly start to do to understand the problems better together.

What we do know is that a lot about social media is complicated and has negative effects. One of the most important things we know is that social media companies have a business model that exists to maximize engagement. It's a pretty basic truth. But when we get to a lot of the screaming about what legislation might or might not fix the problem, we first need to admit that the business model of these companies is to maximize engagement, and social media algorithms and company policies and business decisions are ultimately designed to keep people on platforms for as long as possible. So it should come as no surprise to many of us that social media is habit-forming over time. For some, it slips into something that should rightly be called addiction. And as we've integrated social media more deeply into our lives and our institutions, most of us haven't stopped long enough to ask, what habits are we forming? What loves are we developing? And are these good and healthy habits?

How do our social media habits impact our families and our kids on the one hand, and our politics and our political, social, and cultural institutions on the other? So, let's start with families and kids. Ask just about any parent of a teenager whether they worry about social media and whether it's impacting their kids, and almost all parents, but especially parents of teenage girls, almost immediately begin telling a story that's worrying to them. And they start talking about things they've seen social media do to impact and influence their kids. Like a lot of parents, Melissa and I worry about our three. We want to raise our daughters to be strong, confident women who will love their neighbors and serve the world. We want them to know their worth and understand their identity, their purpose, and their value, which is infinitely more than you'll ever get on social media's infinite scroll.

I've often joked that I'd prefer my 11-year-old son steal the keys to the pickup and go joyriding with an open bottle of bourbon than wander unmanaged, unattended on the internet with a smartphone. It's a joke, but it's not a joke, because these are serious, lifelong, identity-warping forces at play. We have also seen a lot of new neurobiological research, which I won't try to summarize in this slot because we have Jonathan Haidt here, who is much better qualified than almost anybody on earth to talk about this. Social media also poisons our political discourse. One of the most important ways that social media platforms maximize engagement is by sending users two powerful, unending messages: you're right, and everyone who disagrees with you is evil. Platforms show Americans the news, posts, and commentary that they already agree with on the one hand, and they amplify the most outrageous behavior by crazy partisans on the other side.

This creates echo chambers where political weirdos are mostly just talking to political weirdos, certain that all the political weirdos are only on the other side. It turns out the weirdness is mostly driven by being addicted. It isn't mostly a right-left problem; it's mostly a problem of the addicted versus the vast majority of people who don't want to be addicted. Over time in these echo chambers, people tend to adopt more and more extreme views. And then those extreme views are picked up by the other side, which claims victory for having been right all along in recognizing that only the people on the other side are crazy, and the virality spiral, the polarization, continues.

So, as we step back and think about the role of transparency about social media platforms, we should not lose sight of the fact that we already do know quite a lot of basic introductory things about social media and how it impacts us, and how it harms kids, families, and institutions, and how it polarizes our politics. Hopefully today we can learn more, and we're grateful to the five of you for helping us in that quest. Mr. Chairman?

Senator Coons:

Thank you, Senator Sasse. Thank you for that insightful opening comment and several powerful visual images. I'd like to take the Chair's prerogative and invite Senator Klobuchar to give some opening remarks. She's been a great partner on developing the Platform Accountability and Transparency Act, and she has a scheduling conflict that will prevent her from staying for testimony and questioning, but she wanted to make some opening remarks. Senator?

Senator Klobuchar:

I promise I'll look at the transcripts. So, thank you so much to both of you. And thank you, Senator Coons, for your leadership on this bill with Senator Portman. Thank you, Senator Sasse, for that great description. I had one mom tell me that it's like she keeps trying to get control of what her kids are seeing, but it's like water coming out of a faucet that won't stop, and it's overflowing, and she's just standing there with a mop trying to keep up, and she can't. And so I think so much of this has to do with the magnification, the amplification, of what you have called this kind of polarized speech. I don't want to get your words wrong. What was it you just said?

Senator Sasse:

Weirdos.

Senator Klobuchar:

Okay. Yes. Yes. Political weirdos. Okay. There you go. And I think what you see with that is that the influence of these platforms is just simply unprecedented. And even though these platforms are seemingly free, they're really not, because you're seeing advertising, and they're making money off of you. And then they use the information they have and make more money off of you. And in fact, even looking at the data shows that they make more money off of us than they do off of people in other industrialized nations, simply because we don't have any rules of the road in place.

And for so long, we've been hearing, "Trust us, we've got this." And I think that the era of blind trust is coming to an end. And that's why you see the work that's going on on the competition side. I'm also on the Commerce Committee, which is why you see the work that's going on with updating some of the kids' laws. And that's why you're seeing the work that's going on on privacy, and then of course on algorithms and getting more transparency. We all know that you can't yell fire in a crowded theater; that is not considered free speech, that's not considered okay. And the way I think about it is: if you're a theater and someone yells fire, it's not your fault, but you'd better have an exit so people can get out of there. But if you had speakers and you deliberately amplified that guy yelling fire in all of your multiplex of theaters because you thought somehow you would make more money, that would be a problem. And that is kind of what we're talking about here with algorithms.

This hearing is also about how quickly misinformation and disinformation can spread online. A recent report from the Center for Countering Digital Hate identified a dozen specific content producers as the original sources of an estimated 65% of coronavirus disinformation online. People literally die when this stuff is not corrected. We need transparency because these tools can hurt millions of people. That's why this bill, the Platform Accountability and Transparency Act, is so important: it will make it so companies like Facebook can't block researchers from looking into their platforms and algorithms. And I note that President Obama touted the importance of this legislation in his recent speech at Stanford. And I am committed, as I know Senator Coons is, to seeing this through. So I want to thank all of you. I look forward to reading your testimony, and I know that my staff will be here as well. So thanks for coming before us today.

Senator Coons:

Thank you, Senator. Today, we welcome five witnesses to testify about the need, the value, the importance of greater transparency at social media companies, and how that can most reasonably be achieved. Our first witness is Brandon Silverman. Mr. Silverman founded CrowdTangle, a social analytics tool used to monitor and understand how content is performing on social media platforms in real time. CrowdTangle was acquired by Facebook in 2016, where it became an indispensable tool for journalists, fact checkers, and external organizations. Mr. Silverman worked to implement and improve the product while at Facebook until recently departing the company. Next we have Professor Nate Persily. Professor Persily?

Nate Persily:

You got it correctly.

Senator Coons:

Professor Persily teaches at Stanford Law School and is the co-director of the Stanford Cyber Policy Center, the Stanford Program on Democracy and the Internet, and the Stanford-MIT Healthy Elections Project. We'll next hear from Professor Daphne Keller. Professor Keller directs the Program on Platform Regulation at Stanford's Cyber Policy Center, where her work focuses on platform regulation and internet users' rights. After that, we have Professor Jonathan Haidt, who is testifying remotely today. Professor Haidt is the Thomas Cooley Professor of Ethical Leadership at NYU's Stern School of Business, where his work focuses on morality, emotion, and politics. And finally, we have Jim Harper, a nonresident senior fellow at AEI, the American Enterprise Institute, where he focuses on privacy issues and select constitutional law issues.

So let me briefly lay out the mechanics. After we swear in all of the witnesses, each witness will have roughly five minutes to provide an opening statement. We'll proceed to questioning. Each Senator will have initially five minutes. Hopefully we'll have time for a second round after that. So, if you would all please stand to be sworn in. Please raise your right hand. Do you affirm the testimony you are about to give before this committee will be the truth, the whole truth and nothing but the truth, so help you God?

All:

I do.

Senator Coons:

Thank you all. Mr. Silverman, you may now proceed with your opening statement.

Brandon Silverman:

Thank you for having me. My name is Brandon Silverman, and I was the CEO and co-founder of CrowdTangle, a social media analytics tool. We made it easy to see what was happening on social media, and we were very good at what we did. In 2016, we were acquired by Facebook, and over the next few years we partnered with thousands of outside organizations around the world to help them study and monitor social media, including journalists, academics, human rights activists, non-profits, and more. By the end of 2020, The New York Times had called us "perhaps the most effective transparency tool in the history of social media." However, in the spring of 2021, after an intense internal debate about whether the transparency we provided was worth it, Facebook paused all work on CrowdTangle and disbanded the team running it. I left the company shortly after. Today, the future of CrowdTangle is up in the air, and along with it, one of the most robust and successful transparency programs that Facebook operates.

I'm here today to share what I've learned, working on transparency for over 10 years, including what it's like to try and advance transparency from inside one of these large platforms, and why I believe it's time to stop sitting back and hoping that platforms voluntarily share data, and to pass legislation that makes it safe and responsible to share data with the outside world.

In 2016, CrowdTangle was acquired by Facebook. And over the next few years, we expanded our work to a wide variety of new industries. We added new features and new data, and it became one of the most widely used tools in the industry. Our data was frequently cited here in congressional testimony over the years. There was no shortage of use cases for what we'd built, but there was a limit to how far we could push transparency from inside the company.

The truth is that it's not easy to work on transparency from inside a major platform. For one thing, it can be incredibly uncomfortable when your work and the work of your team are constantly fueling criticism, some fair and some not, of the company where you work. Those moments take a toll on your team, but they also make it harder to get resources. They make it more difficult to launch new features and add more data. And ultimately they provide constant ammunition to executives who are skeptical about doing transparency at all. On top of that, there are also organizational challenges that make it hard. There are conflicting regulatory and legal requirements all around the world, all interpreted differently by whichever team of lawyers you happen to talk to. There's the fact that no matter how much transparency you do, you're rarely going to get credit for it in the public eye. There is a constant and fierce competition for resources and the attention of leadership. And that's just to name a few.

But more than all of those factors combined, the single biggest challenge is that platforms can get away without doing any transparency at all. YouTube, TikTok, Telegram, Snapchat: these represent some of the largest and most influential platforms in the United States, and they provide almost no functional transparency into their systems at all. Moreover, despite a lot of Facebook's laudable transparency efforts, there are also incredibly significant parts of their platform that still remain entirely inside black boxes. Today CrowdTangle is still available, but it's in maintenance mode. Facebook has stopped onboarding new partners, no new features or major updates have been released in two years, and the global partnerships team that used to run it no longer exists.

But while we were there, our team saw the power of transparency. Almost every single day, our team saw examples of partners using social media data to help protect elections, prevent real-world violence, fight global pandemics, empower independent journalism, hold platforms accountable, and more. We saw how transparency can be a tool to make sure that social media lives up to the promise of strengthening free and open societies instead of being used to undermine and weaken them. I think what happened to CrowdTangle should be seen as a bellwether, a sign that it's too hard to make progress on these issues at the scale and breadth we need from inside a company. And as a result, we've seen that the industry as a whole has simply not made progress equal to the responsibilities it has. And I don't think there's any reason we should expect that to change going forward. If anything, I think we should expect fewer voluntary efforts.

That's why I believe it's time to create legislation that makes data sharing and transparency a requirement for the entire industry, and that creates mechanisms to do it in safe and responsible ways. And I think the Platform Accountability and Transparency Act represents an important step in the right direction. There are real challenges to getting this legislation right, and we're going to hear about some of those today. However, if we don't find ways to move forward, we'll continue to be in the dark about the real impact of social media on our lives. We'll continue jumping from one anecdotal data point to another, from one leaked document to another, from one urban myth to another, without ever establishing baseline, evidence-backed conclusions about the role it plays in our lives.

When I think about social media and a lot of the public discourse about the role it plays, I'm reminded of the anecdote about the policeman who sees a drunk man searching for something under a streetlight and asks what he has lost. He says he lost his keys, and they both look under the streetlight together. After a few minutes, the policeman asks if he's sure he lost them there, and the man replies no, he lost them in the park. And the policeman asks, "Well, why are we searching here?" And the man says, "This is where the light is." For too long, our public discussion about social media has been focused on the moments where we briefly found ourselves under a streetlight. It's time to turn the lights on for good. Thank you.

Senator Coons:

Thank you, Mr. Silverman. Professor Persily.

Nate Persily:

Thank you, Chairman Coons. Thank you, Ranking Member Sasse, Senator Blackburn, and my senator, Senator Padilla, for being here. I really appreciate testifying for this committee again. I'm going to talk a little bit about the purposes of transparency, and then, like Brandon, I'll talk a little bit about my experience in working with Facebook and other platforms and trying to get data out of them. But let me begin by just saying what the bottom line is here, which is that we cannot live in a world where Facebook and Google know everything about us and we know next to nothing about them. These large platforms have basically lost their right to secrecy. Their power over the information ecosystem is really unrivaled in world history, and given that, it's time for a democracy to step forward and regulate them in the public interest. And the least that we can do is to get some transparency out of them so that we understand exactly what's going on.

As Senator Sasse said, the questions with respect to internet regulation are extremely complicated, very difficult. And in many respects, we don't know what the right answer is in many of these domains. But transparency is the first step to finding answers in any of them. And so, as we think about the purposes of transparency, it is, if you'll excuse the pun, a Meta-bill, right? It is about enabling our ability to regulate and to act in so many other domains. So first, let me say that these firms are information monopolies. Usually when we talk about that, we talk about it in the antitrust sense, in terms of competition and the monopoly power they have over their economic domains. But they're information monopolies in a different sense, which is that they control all of the information that is now most revealing about social problems. We've never been in that position before.

For those of us who are social scientists, when we were analyzing data in the pre-internet age, most of that data was freely available, whether through government statistics, survey data, or other kinds of sources. Now, most of the data that is relevant to contemporary social problems is locked up in these private companies, and it's only through legislation that we're going to be able to unlock it and find out exactly how big these problems are. But I want to emphasize one thing, and Senator Coons, I think you mentioned this before: yes, this is a very difficult trade-off to get right. We need to balance transparency with privacy, and I think everybody on this panel and in the Senate is aware of that.

But the question is not whether this data will be collected and analyzed. The question is whether the only people who will be able to analyze the data are the people inside the firms, who are tied to the profit-maximizing mission of the firms. And so the only question that PATA and other bills like it are posing is: is someone who is not tied to the profit-maximizing mission of the firm going to have access, in order to do research in the public interest?

So now let me talk a little bit about the purposes of transparency. The first purpose, which is sometimes undersold, is that transparency will actually change the behavior of the firms. To some extent, you get a little criticism when you emphasize transparency, because it's seen as sort of weak legislation: it's not breaking up the companies, and it's not going right after content moderation. But once the platforms know that they are being watched, it will change their behavior. They will not be able to do certain things in secret that they've been able to do up till now.

Second, as Senator Coons mentioned, I think it will lead them to change their products, because once we have a greater appreciation for what's actually going on in these firms, those on the outside can do research on their products that a lot of the insiders are not doing. Third, as you both mentioned before, this will educate policymakers. It will educate policymakers here in Congress, whether the issue is child endangerment, or disinformation, or antitrust, or privacy. It will educate our European allies, who are doing much more aggressive regulation right now, as well as the states; look at the bills coming out of Texas and Florida. As you said, we are legislating in the dark.

And then finally, it will educate the public, not just about what's happening inside these firms, but also about the dynamics of the information ecosystem. As Senator Sasse was saying, there is a fundamental debate here as to what is going on. Is the problem runaway hate and disinformation and the like on the platforms? Is the problem that platforms are over-censoring, particularly conservative voices? This is what transparency is going to answer; those are the questions that we would be investigating.

And I'll say, just in conclusion, that I spent probably four years of my life trying to get data out of Facebook through a program called Social Science One. And the people we worked with inside the firm were fantastic; they were with us all the way. The problem was that, ultimately, when it came down to providing the most robust data that those inside the firm had access to, they simply couldn't do it, either because they were worried about another Cambridge Analytica or because they simply didn't know whether the legal environment would be conducive to that. And so what I would say in conclusion is that we shouldn't have to wait for whistleblowers to blow the whistle. This type of transparency legislation is about empowering outsiders to get a better idea of what's happening inside these firms.

Senator Coons:

Thank you, Professor. Professor Keller.

Daphne Keller:

Thank you for the opportunity to testify. I work on platform regulation, which, as Senator Sasse referenced, is really complicated. And it's exciting to be able to testify about a topic where I think there's actually a viable way forward. So I'm particularly honored to speak beside Brandon Silverman and Nate Persily, both of whom have built actual, real-world, functioning transparency models. And I'm also excited to talk about this at a time when the EU has just moved forward on platform transparency in a dramatic way under the Digital Services Act, or DSA. I think that is really paving the way for a whole new era of platform transparency.

I think today we will hear about a real diversity of transparency tools, and that range of tools is what we need. Not every approach is useful for every research topic, or even for the way that every platform functions. As someone who has worked with multiple approaches to platform transparency going back over a decade, including in-house at Google, I would like to find a way to enable a whole range of these approaches. This includes fixing the laws that constrain so-called scraping of data from public websites. This is kind of low-hanging fruit, and it would have the side effect of helping with some interoperability and competition goals. It also includes building APIs, which are just channels for computers to talk to each other, so that people researching things like bias in algorithms can submit bulk queries and look at bulk results. And it includes the kinds of mandatory disclosures contemplated in laws like the discussion draft of PATA. All of this can vastly improve public information, both about online harms and about what regulatory responses will actually make things better.
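[Editor's note: to illustrate the kind of bulk-query research API Professor Keller describes, here is a minimal hypothetical sketch in Python. The base URL, endpoint, parameters, and response fields are invented for illustration; no platform exposes this exact interface.]

```python
# Hypothetical sketch of a bulk-query transparency API of the kind
# described in the testimony. The base URL, endpoint, parameters, and
# response fields below are invented for illustration only.
import requests

API_BASE = "https://platform.example.com/research/v1"  # placeholder URL
API_KEY = "RESEARCHER-CREDENTIAL"  # credential issued to a vetted researcher

def fetch_public_posts(topic: str, start: str, end: str) -> list[dict]:
    """Request aggregate engagement data for public posts on a topic."""
    response = requests.get(
        f"{API_BASE}/public-posts",
        params={
            "topic": topic,
            "start_date": start,
            "end_date": end,
            "metrics": "views,shares,interactions",
        },
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    return response.json()["results"]

# A researcher studying amplification might pull a quarter's worth of
# widely viewed posts and inspect the engagement numbers in bulk.
for post in fetch_public_posts("vaccine misinformation", "2022-01-01", "2022-03-31")[:10]:
    print(post["post_id"], post["views"], post["shares"])
```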

To make these laws work, though, there are some pitfalls we need to navigate. Some are practical, some are constitutional, some are political. My written testimony on this was very long, I'm sorry, and it raised a lot of questions that I think should appropriately go to an agency, because they are in the weeds, they're iterative, and they will change with technology. But it also raised questions that I think are serious policy decisions that are Congress's job to resolve. So I'm going to talk about those a little more here.

One is about privacy and surveillance. For researchers to examine what is being said and claimed and propagated online, and how platforms influence that, they will need to look at information about people. There are unavoidable trade-offs in deciding how much they get to do that. Sometimes I think the need for public information should be paramount; other times, user privacy should be. This is something where Congress should provide guidance, ideally through federal privacy legislation, but if you have to resolve it here, it deserves attention.

When it comes to government surveillance, though, meaning not the researchers but law enforcement getting access to data, I think there is a brighter line. Nothing about these transparency laws should change Americans' protections under the Fourth Amendment or laws like the Stored Communications Act. And I don't think that's anyone's intention here, but clear drafting is essential to ensure that government can't effectively bypass Fourth Amendment limits by harnessing the unprecedented surveillance power of private platforms.

A second issue is about competition and the practical costs and benefits of using different transparency tools for different companies. Laws designed for a Google or a Facebook are a bad fit for companies that are far smaller in measures of revenue, users, or employees. And those companies may not actually be relevant to the problems that laws like this are trying to solve. I don't think we need a live dashboard showing us which hotels are most popular on Tripadvisor, for example. Hopefully the problems of designing rules for giant incumbents, and then applying them to a whole competitive ecosystem, speak for themselves.

The third issue is about CDA 230. I hate to bring it up, but as I mentioned in my testimony, I don't think tying transparency obligations to CDA 230 solves the problems that even CDA 230's critics want to solve. I think instead it sort of opens the door to very unpredictable litigation.

And the last thing is the First Amendment. I want transparency mandates to be constitutional, but there are serious challenges, and I hope that you will put really good lawyers on that, because I want this thing to work. So those are my concerns; in platform speak, these are the things to solve before launch. And I'm happy to answer any questions about the many other things I wrote about later on.

Senator Coons:

Thank you very much, Professor Keller. I believe we're now going to have remote testimony from Professor Haidt.

Jonathan Haidt:

Hello, greetings. I hope you can hear me properly. So it's an honor to be testifying here. Senator Coons, you're absolutely right that this issue of transparency is completely non-partisan. Senator Sasse, you're absolutely right that all of us with teen kids, myself included, are concerned about this. We don't know what's going on, but the kids are getting sick and we want to know, is this the cause? And Senator Klobuchar, yes. The era of blind trust is coming to an end.

I'm a social psychologist. I study morality; I've studied moral development. I began to notice that something was going really wrong with teenagers and entering college students around 2014. By 2015, it was clear: all of our mental health centers on campus were flooded. A few years later, it became clear there is an epidemic of depression and anxiety sweeping across the United States for teenagers. What I'd like to do in my remarks here is give you a distillation of the facts I've aggregated on, first, the nature of the teen mental health crisis, and then, second, the empirical evidence that this is caused in part by teens suddenly moving onto social media en masse around 2011.

All of this is in the PDF file submitted with my testimony, where I have links to two collaborative Google documents in which I've invited other experts to critique, to say what we are missing, so that we have relatively complete listings of the studies on all sides. So let me begin. Part one: the specific, gigantic, sudden, and international mental health crisis. I'll just make these six points; you can find elaboration of them, with graphs and links, in the submitted testimony. First, the crisis is specific to mood disorders. This is not an across-the-board increase in all kinds of mental health problems; it's specific to anxiety and depression, and the behaviors that go along with them, which are especially suicide and self-harm.

Second, the crisis is not the result of changes in the willingness of people to self-diagnose. As late as 2018, some experts were still saying, "Oh, it's not real. It's just Gen Z. They're more comfortable talking about depression. It's not a real thing." Now it's clear it is real, again because of the very sudden increase in hospital admissions for self-harm and in completed suicides. All of that goes way up in the early 2010s. The self-harm lines are quite dramatic: relatively flat, and then right around 2010, boom, they start skyrocketing upwards.

Point number three, it came on very suddenly. This was not a gradual curve up. Point number four, the increases are very large. We're not talking 10 or 20%; depending on what you look at, it's anywhere from 50 to 150%. Self-harm for young teen girls in particular is up more than 150%, about 180% in the last data that I saw. These are enormous, sudden increases.

Point number five, the crisis is gendered. Mental health has plummeted for both boys and girls since the early 2010s, but girls have fallen further on some measures. On others, the changes are about the same percentage-wise, but girls' starting rates of anxiety and depression are higher, so the increase represents a much larger number of girls. The sixth point: the crisis has hit many countries; it's not just the USA. The patterns in Canada and the UK are identical to those in the US, and Jean Twenge, a professor at San Diego State, and I have published a study showing that across the world, loneliness at school also went up after 2012. So those are the points I want to make about the mental health crisis. It is real. It is big. It is sudden. It is gendered.

Now, what's the evidence that social media is a cause? The first point is that correlational studies consistently show a link between heavy social media use and mood disorders, though the size of the relationship is disputed. Point two: you will often hear from experts, "Oh, the size is so tiny. It's no bigger than the correlation of mental health problems with eating potatoes, or wearing eyeglasses." But this is not true. In the main study that found that, which I discuss in the submitted testimony, that's the relationship for all digital media use. This is very important: when you hear people talk about studies, usually it's about all digital media use, including watching Netflix, and mental health outcomes. Those correlations are indeed tiny. But, and this is point number three, when you zoom in and look just at the relationship between social media use and mental health outcomes, the correlation is much bigger. There's an emerging consensus that it's between r = 0.1 and r = 0.2; it's in that range.

Point number four, those correlations are even larger for girls, so probably closer to r = 0.2. Point number five, the effect size is even bigger during puberty. A recent study showed that between ages 11 and 13, girls are especially likely to be harmed by being on social media. We must try harder to protect puberty; get this stuff out of middle school, for God's sake. Point number six, correlations of 0.1 to 0.2 are not small. This is the size of effect that public health matters are mostly about. The correlation of childhood exposure to lead with adult IQ is 0.09. There's a new realization among psychologists that small effects, played out over millions of people over many years, add up to gigantic public health crises, as we have with lead in water.
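[Editor's note: a back-of-the-envelope simulation of why a correlation in the r = 0.1 to 0.2 range matters at population scale, as Professor Haidt argues here. All numbers below are invented for illustration and are not estimates from the testimony.]

```python
# Illustrative simulation: a "small" correlation of r = 0.15 between
# social media use and a latent depression-risk score, spread over a
# notional population of one million teens. Invented numbers throughout.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # notional teen population
r = 0.15       # assumed use-risk correlation, mid-range of the figures cited

use = rng.standard_normal(n)                                 # standardized social media use
risk = r * use + np.sqrt(1 - r**2) * rng.standard_normal(n)  # correlated latent risk

# Call the top 10% of latent risk "depressed" (a 10% baseline prevalence).
depressed = risk > np.quantile(risk, 0.90)

heavy = use > np.quantile(use, 0.75)  # top quartile of use
light = use < np.quantile(use, 0.25)  # bottom quartile of use

p_heavy = depressed[heavy].mean()  # roughly 13-14% in this simulation
p_light = depressed[light].mean()  # roughly 6-7% in this simulation
print(f"prevalence among heavy users: {p_heavy:.1%}")
print(f"prevalence among light users: {p_light:.1%}")
# Roughly double the prevalence: on the order of fifteen thousand or more
# additional cases in the heaviest-using quartile of a million-teen population.
print(f"extra cases in the heavy quartile: {int((p_heavy - p_light) * n / 4):,}")
```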

Point number seven, there's experimental research as well; this is not just correlational. The experiments mostly show that if you randomly assign people to conditions, reducing social media use tends to have beneficial effects. Point number eight, the eyewitness testimony is there. You ask the kids, as Facebook did. Facebook's own research found that teens blame Instagram for increases in the rate of anxiety and depression, a reaction that was unprompted and consistent across all groups.

So I'm going to conclude by saying, along with the others who have been testifying here, we really, really need to see the data. We must have platform accountability and transparency. Imagine that in 2011 all of our kids began eating a certain kind of candy that had never existed before, and all of them were eating it all day long. And in 2012 they started developing leukemia in large numbers. And some people say, "Well, you know, correlation is not causation. We can't be sure, we can't be sure." Okay, fine, we can't be sure. But can we at least compel the candy maker to tell us what our kids are eating? Thank you.

Senator Coons:

Thank you. I very much look forward to continuing our conversation, and I appreciate the structured, engaging way in which you delivered those remarks. Mr. Harper, if you might; you're our final witness today.

Jim Harper:

Thank you, Senator Coons and Senator Sasse. Thanks for the opportunity to testify today. This hearing and the Platform Accountability and Transparency Act raise many interesting issues. When I finished writing my 22-page testimony, I thought I had barely scratched the surface. I also thought mine would be the longest, but I lost.

We're all in favor of transparency. I've done a good deal of work on government transparency over the years, as detailed in my testimony, and I always hoped for more. The bulk of my attention over the last two decades, though, has been on privacy. I'm sorry to play skunk at the garden party a little bit here, but there are very high privacy costs, I think, to the mandated disclosure regime found in PATA. I see no limit on the subject matter of data held by platforms that the legislation affects. It would allow the National Science Foundation and Federal Trade Commission to take essentially any data from platforms to give to approved researchers. In my opinion, this runs contrary to an emergent property rights regime that is an important protection for consumers and their privacy. My belief is that platforms and communications providers make contractual promises to protect privacy in their privacy policies and in their terms of service documents. In doing so, they also divide up nascent property rights in data.

In the bundle-of-sticks model of property taught in law school, the right to exclude others from personal information remains with the consumer, subject to narrow exceptions like protection of the platform itself, response to law enforcement, and so on. Across the legal landscape, there are strains of recognition that this is the case, including in the Supreme Court. If congressional legislation took data from platforms to give to researchers or anyone else, it would cut against the grain of this trend. It would treat data held by platforms, including data held as a bailment or in trust for users, as so much soup in a tureen waiting to be ladled out. The legal immunities in the PATA legislation further erode confidence that privacy will be protected in a mandated disclosure regime. An unconstrained disclosure mandate may also be unconstitutional.

The Supreme Court appears likely to revive the non-delegation doctrine soon, and an unrestricted grant of authority to mandate disclosure for the all-purpose but unfortunately nebulous goal of transparency may not survive it. That's not just a constitutional point, I think, but a prudential one. We don't know what the future holds for our politics, and we don't know what type of data a future administration might use the NSF and FTC to order from platforms about Americans, our habits, and our communications. I also specifically recommend in my written testimony against bringing transparency to platforms' moderation practices and to their security efforts, both how they secure themselves and how they provide security to their users. The PATA legislation considers this: it provides for privacy and security regulations pertaining to the mandated disclosure of data. But the mandatory disclosure regime itself would increase the vulnerabilities and the attack surface, as they say in computer science, of these systems.

Moderation and security systems are constantly probed by bad actors who want to turn platforms to their own ends or just ruin them. Mandated disclosure would create new opportunities for wrongdoing. I'll say this: since reading my colleagues' testimony and some of the authorities they cite, I'm less enthusiastic about my written testimony on the First Amendment issues. I am taken by the breadth of the data seizures made possible by PATA, and my frame of reference is Fourth Amendment privacy. There are areas where government requires businesses to disclose or publish information about themselves. This is not the case, though, I think, where that includes the editorial choices of institutions engaged in free speech. The internet and social media are strange but real descendants of the printing press, disembodied and given to everyone to use as much as they want.

Social media companies aggregate and augment this mass exercise of expression. I think it's well within precedent to regard what they do editorially as protected by the First Amendment. Where disclosure mandates exist, you can probably find a close fit between means and ends. When auto dismantlers are forced to reveal their inventories and sources, that makes them much less likely to become fencing operations. And requiring audited financial statements from public companies prevents various forms of fraud.

So far as I'm aware, unfortunately, platform transparency through disclosure mandates does not have that kind of tight nexus with recognizable protections or social gains. Reading Professor Haidt's testimony on the effects of social media on teen girls, that's powerful testimony, and that's important stuff. He's especially credible to me because of his work through various organizations to strengthen norms of academic inquiry and honest debate. I would like it if transparency could directly fix the problems he cites, or make our democracy evidently more functional. But the benefits of transparency, unfortunately, are contingent and remote, where the privacy losses are immediate and, I think, real.

The solutions to all these problems will be years in coming. And they will come from a wide variety of adaptations and sources, university research being just one contributor. This hearing has helped emphasize the importance of transparency. I don't think, unfortunately, that it lays much groundwork for a mandatory disclosure regime like the one found in PATA. It's excellent, though, to have had a discussion draft to work from. Thanks very much.

Senator Coons:

Thank you. I'd like to thank all of our witnesses today for your thoughtful testimony. This is exactly the sort of broad-reaching hearing I was hoping we would have, and I'm excited that the vote has not yet been called, so we may have a little more time to explore. I particularly appreciate the perspectives offered on transparency, the potential ways to achieve it, and the questions about its costs and benefits. So I'm going to start my questioning by setting the scene a little bit and exploring various pathways toward greater transparency, which I believe would be beneficial and bipartisan, and the role that Congress should play. And we may well take a second round, depending on what happens with votes.

So if I might, first, Professor Persily: I appreciate the point you made about how transparency can transcend political ideology and might be, I think as you said, a bipartisan first step on the road to sounder policy. Could you say more about the kinds of questions independent researchers might answer with greater access, and how answers to those questions might, in a more immediate way, deliver on the promise of better policymaking?

Nate Persily:

Thank you for that. So there is a fundamental disagreement between conventional wisdom and what the platforms say is happening on their services. For example, we have no sense of the prevalence and size of the problem of hate speech, disinformation, incitement, child endangerment, and the like. If you ask the folks at the platforms, they will say, for example, that a lot of the interpretations made from CrowdTangle data, which looks at different engagement metrics, were misleading; that actually the average person's experience is not one replete with hate speech, disinformation, and the like. That's an empirical question. If we had access to the data, we would be able to figure out how big a deal some of these problems are in the average person's newsfeed. And not just the average person's, because I think Senator Sasse was exactly right when he was talking about weirdos in his opening comment.

Part of the question is not just what the average user sees, but what rabbit holes concentrated minorities of users may be going down. That is also the second area of inquiry, which is open for debate: what role are the algorithms playing in sending users down rabbit holes of polarization, conspiracy theories, and the like? On the outside, it's conventional wisdom right now that the algorithms are having a huge impact in doing so. If you ask the folks at the platforms, they say, "No, that's not actually what's happening. If anything, the algorithms are pushing them toward more moderate content." We can answer that question. If you just give us access to some of the same data, we can figure it out.

And finally, there is the issue of censorship and whether the content moderation practices of the firms have an ideological bias to them, which a lot of people believe they do. That's the kind of thing that, if we could look at content moderation enforcement in a detailed way, we'd be able to figure out: whether these are neutral principles that are having a disparate impact, or whether there is something more nefarious going on. Again, as I said in my opening statement, this is what will change the behavior of the platforms. If they know that their content moderation decisions are going to be transparent and viewed by others, then they will make decisions in a different way immediately.

Senator Coons:

Thank you. Mr. Silverman, if I might: I think it's important for people to understand, tangibly, the kind of transparency that CrowdTangle as a tool created, and how that could possibly benefit society. Could you just give us a few concrete examples of how greater transparency might benefit society? Mr. Harper appropriately raised the question about harm to privacy being immediate and the benefits from greater transparency being remote or contingent, I think was the term.

Brandon Silverman:

Yeah, absolutely. I'll go through a few examples, and I'll try and focus on ones that are immediate, real-time, and that you could get fairly quickly if this sort of legislation were passed. So first, one of the challenges of social media is scale. We've seen through a lot of research, as well as some of the leaked documents over the last year, that it is very hard for these platforms to manage the sheer number of communities, dialects, languages, and individual nuances of the places where they exist and where they're incredibly important.

If you give the outside world the ability to also see what is happening with public content, they can play a role in helping with that effort. One specific example: in the Philippines, Nobel Peace Prize winner Maria Ressa and her news organization, Rappler, for years used CrowdTangle to help identify coordinated inauthentic networks that were violating Facebook's community standards, flagging them for Facebook to get removed before the platform would have found them itself. So one benefit is that you have the ability to engage not just the platforms, but a much broader swath of society, in helping monitor, debate, and engage with what's happening on these platforms.

A second one is that there is a lot of academic research that can be done on public data. There are very privacy-sensitive data sets that you have to build a lot of controls around to handle in a safe and responsible way. But there's also a lot of publicly available data that can empower a lot of research. We have over a thousand academics and researchers who have used CrowdTangle and published hundreds of research papers, including in Nature and Science and elsewhere, looking at this data to inform lawmakers and policymakers.

And I'll give you just one last one. Think simply, at a really high level, about the idea of a marketplace of ideas. If in any way we want these platforms, which already house so much of our civic and political discourse, to actually be functional marketplaces of ideas, where people can engage and debate, you can't do it if people can't see what's in the marketplace. So simply for creating healthy, dynamic places of liberal debate and engagement, one of the first foundational things you need is to make it easy to see what's happening.

And we saw, over and over, local news outlets, independent journalists, and investigative news organizations using the data that was coming through our system both to cover the platforms themselves and simply to talk about issues of the day and report on them to the public.

Senator Coons:

Just one last question-

Senator Sasse:

Sure.

Senator Coons:

Last question to you, if I might, Mr. Silverman. So some would argue, to your point about the marketplace of ideas, that the market should just sort it out, and that any regulatory effort by Congress is likely to overreach or pose profound risks to privacy. You saw in your experience at Facebook, and I don't want to put words in your mouth, that it was in the market interest of a lot of other platforms to simply avoid any transparency whatsoever.

So help me understand why you think it is that large social media platforms want to avoid scrutiny through transparency, and why, in your view, if this is the case, we can't just rely on voluntary disclosures or the commercial market to solve this problem?

Brandon Silverman:

I think there are a number of things happening, and I'm not an economist, so I don't want to go too far beyond my expertise, but I think there's a reality in which there are not fully diverse free markets in some of these industries. There are dominant players that hold an enormous percentage of the market share and are less subject to the whims of a free market of users who can pick among a wide variety of choices. So one is, I just think there's a reality to the actual nature of the markets in which these companies operate.

But two is, at this point, we have 10-plus years of evidence that some of these companies can do very little and it doesn't matter. And the reality is there were a lot of challenges to doing it inside Facebook, but one of them was certainly the question of: why are we putting ourselves out on a limb when others aren't? I don't see that dynamic changing at the moment, and maybe that's a judgment call and I could be wrong. But I think there's just too much evidence in the industry right now of too little effort around transparency, relative to the scale and the need out there.

And just to add one last note on this: all the while, the executives all talk about how important transparency is. So they acknowledge its importance, but the level of effort across the industry just hasn't met the need we have as a country. And so now it's time for legislation.

Senator Coons:

Thank you, Mr. Silverman. Senator Sasse, for eight minutes.

Senator Sasse:

Eight. Wow, that's generous. That's good. Let me just first underscore Chris's point: five for five, really useful testimony. The stuff you submitted, even if it ran to 300 pages, but also all of your verbal comments here, have been useful. Professor Haidt, I want to start with you, because there were a few side comments about how unrepresentative our stereotypical sense of a messed-up digital public square is compared to maybe the median experience. You didn't, in your opening statement, refer to your Atlantic piece from two weeks ago, but given how viral that has gone, in this case in a good way, would you be willing to unpack your four-part typology of who the overrepresented, very online, very angry, loud people are?

Jonathan Haidt:

Yes. So I've spoken with Mark Zuckerberg a couple of times, I've heard the arguments, and I've heard him say, "How could it be wrong to give more people more voice?" Now, that sounds great. And if everybody, especially those who had less voice, were all lifted up, that would be great. But in fact, what I argue has been happening is, once social media developed tools that make it really, really easy to attack people, criticize them, complain, once it became not about "look at my nice photos of my kids" but "can you believe that this person said that," it was as though everybody was given a dart gun and everybody could shoot whoever they wanted. But most of us don't want to shoot anyone.

The four groups doing most of the darting are the extremists on the far right, the extremists on the far left, trolls, who are mostly men with personality disorders who enjoy harassing people and showing off, and Russian intelligence agents. So social media, especially Twitter, but also Facebook and others, has been an incredible gift to those four groups. And the other 80% of us lost voice. We're afraid to speak up.

We see this among our students in class; it's horrifying. Our students are literally afraid to challenge something, because they're afraid someone will record it, someone will talk about it, someone will shame them. So with social media, there was a period when we were incredibly optimistic, a period of technodemocratic optimism, when it seemed in the early 2000s like this was going to be the best thing for democracy ever. But I believe what we're seeing is James Madison's nightmare, where it just promotes factionalism and fear of speaking up. And it gives us a distorted marketplace. I would love it if Elon Musk could clean it up so that we weren't so afraid to speak, but right now people are.

Senator Sasse:

Really helpful. And for folks who haven't seen it or read it, I highly recommend his Atlantic piece from two weeks ago. It wasn't exactly called the dumbest decade, but it's something like that. Professor Haidt, can we stay with you a minute on the teenage girl harm effects that we see? You work hard in your written testimony to distinguish between studies that look at social media in particular and those that look at screen time in general.

Jonathan Haidt:

Yes.

Senator Sasse:

Can you unpack why that distinction matters?

Jonathan Haidt:

Yes, because look, when I was a kid, we all watched too much television. It turns out that was a moral panic; television screens don't rot your brain. It was hard to link that to bad health outcomes. And many have said the same thing is happening now. And it turns out, when you look at the research on, say, video game playing in general, it's not particularly harmful. Now, if you play huge amounts, it's different. But the point is, digital technologies and screens are not bad intrinsically. It depends what you do with them. And if a screen promotes social learning and engagement in a healthy way between kids who can connect and play together, that's great. That's not harmful. Watching Netflix is not harmful. Watching videos is not harmful.

And so much of the research has looked at all digital activities, and then they say, "Hey, all digital activities for all kids. The correlation is so tiny, it's not even worth worrying about." But what I found over and over again is, when someone sends me a meta-analysis, this happened on Twitter just this morning, someone said, "Oh, here's a meta-analysis from 2020 that just disproves your thesis." Well, if you look at the meta-analysis, yeah, they find nothing when you look at everything, but whenever you zoom in on social media for girls, you almost always find a much bigger correlation, up in the range of r = 0.1 to 0.2, which is clinically very serious. This is what public health effects are.

So we really have to distinguish: don't get caught up in digital media and screen time, focus on social media for girls, especially in middle school. For God's sake, let kids go through puberty first, before we encourage them to live their lives putting up pictures and asking people to tell them how pretty they are.

Senator Sasse:

Really helpful. I would like to ask some more questions about the solution set relative to that problem, because to me that is one of the biggest problems we face. A republic is not going to survive and thrive unless we, the people, recognize that our digital consumption habits are things we're going to have to be responsible for. That doesn't mean there are no collective solutions to any of these problems, but, fundamentally, most of what we're talking about with digital addiction is going to have to be responded to at the level of individuals and families and communitarian, localist, healthy institutions that figure out ways to put constraints on our own addictions and consumption.

It seems to me that kids are a completely different category. And so if I weren't out of time, I would ask a number of you who are advocating for this particular piece of legislation to help me understand why we wouldn't really want to start by targeting solutions that deal with a teen problem or a minors' problem. But given that we're going to have votes soon and we've got other members waiting, I'll defer it for later. Thanks.

Senator Coons:

Thank you, Senator Sasse. Senator Ossoff.

Senator Jon Ossoff:

Thank you, Mr. Chairman. Mr. Silverman, what are the implications of government access to some of these research tools? Are there Fourth Amendment implications? Privacy implications? Surveillance implications?

Brandon Silverman:

Great. I might also let some of the other experts on this-

Senator Ossoff:

Sure.

Brandon Silverman:

... panel answer some of that. But what I will say is any answer to transparency shouldn't be a one-size-fits-all approach. There needs to be tiered access, or in my opinion, there should be tiered mechanisms with different access points, with different audiences, designed to serve different purposes, each one with a different set of constraints. And just to give an example, one way I think about it sometimes is a pyramid or a funnel, where at the very top you have the most widely accessible forms of transparency. I think about that as reports. Right now, a lot of platforms put out reports on hate speech or coordinated inauthentic behavior, et cetera. That's available to the public. It's privacy safe. As far as I know, there haven't been any Fourth Amendment or First Amendment issues raised about it.

As you go down the funnel into more sensitive data sets, with smaller audiences, I think some of those issues get much more prevalent. But the idea of having a tiered system would ideally give you a way to address each of those in safe and responsible ways. But I will also say there are real issues, and I think ones that private companies shouldn't be trying to figure out on their own, which is an argument for why government should be looking at those trade-offs and figuring out a solution.

Senator Ossoff:

Anyone else want to weigh in on the Fourth Amendment or privacy aspects, concerns that are raised by government access to these transparency or research tools?

Jim Harper:

I will try to, just briefly. As I detailed somewhat in my written testimony, certainly the PATA legislation, broadly written as it is, would have Fourth Amendment and privacy implications. The argument I make is that the written materials, the privacy policies and terms of service that platforms put forth, are contracts that allocate rights in that personal information, that data. The right to possession is often with the platform. The right to exclude others, which is privacy protection, is with the consumer, subject to narrow exceptions that I think are generally appropriate.

Taking that away, ladling that out for researchers or for any other purpose, would be a taking of property from the business, a taking of property from the individual. It would be taking the papers and effects, in Fourth Amendment terms, of those people. And the Supreme Court has, there are some wisps heading in that direction, the Riley case referred to digital materials on the phone as a person's effects. And that signifies that it's something owned by the individual. And I think that's true whether it's on your phone or whether it's housed for you by a service provider.

Senator Ossoff:

If there were to be enacted transparency requirements for these platforms, would it be sensible that there be transparency requirements for government access to those transparency tools?

Jim Harper:

First, limitations. I think the legislation should specify certain categories, perhaps, that should not be available, certain reasons. Again, the PATA legislation, a discussion draft, is very broad, but it gives the NSF and the FTC the ability to decide what data and for what purposes, and that's far too broad. Congress should actually be the policymaker on those questions. So first, limitations, and then certainly transparency if the government accesses that data, either through the NSF and FTC program or from the researchers themselves. I think, if not entirely barred from access, there should be transparency if it's given access that way.

Senator Ossoff:

Professor Haidt, I want to invite you to elaborate on some of what you shared with Senator Sasse with respect to the impact on public discourse and political discourse of how these platforms are functioning. I'm not sure if your research has included this specifically; if not, I invite you to speculate in public about the impact on elite opinion, the opinions of policymakers and those who staff them and the activists who are most vocal in advocacy, and then ask you what solutions you're proposing, please.

Jonathan Haidt:

Well, thank you, Senator Ossoff. So I think that the effect on the elites is extraordinary and is overwhelmingly bad. Many people will say, "Oh, Twitter's not that important, because 80% of Americans are not even on it, so who cares?" But almost all journalists are on it, and if senators and Congress members aren't on it, their staff certainly are. And they're very responsive. We want our leaders, we want our representatives, to be responsive to all of their constituents and to the broader country. But now that they're all on Twitter, they're responsive to the loudest, most vocal, angry people on it. So it takes them away from their duty. I would say that representatives who pay a lot of attention to social media are in a sense violating their fiduciary duties to the country.

I understand why they're doing it. We're all doing it. And of course, journalists, my God, think about how many stories on the news are about something someone said on Twitter. The right-wing ecosystem is much more dependent on cable TV, but even there they get the stories from Twitter. So they know exactly what the most angering story is going to be. So don't listen to anyone who says, "Oh, only a small percentage of people are on it." The influence on those people then goes out through many other channels to affect the entire country. We can't have a deliberative democracy if we're not able to deliberate. And I would say that social media has given us an environment in which we're sucked into fighting over trivia. We don't deliberate.

Senator Ossoff:

Solutions?

Jonathan Haidt:

So the solutions, there are a number that are crucial for making social media less toxic. The most important thing is, how can we make it so that... Of course, the extremes are always going to have more voice, they care more, they're more passionate, and the middle is always going to have less. Fine. But what happened after about 2012, 2014 was the extremes got so amplified along with the trolls, and the middle went down. So how can we undo that? The most important thing is, please stop talking about content moderation. I'm so sick of it.

What we learned from Frances Haugen is it doesn't actually even matter all that much whether they do a little bit more or a little bit less. Look at the architecture, that's what changed. That's where the solution lies. And so the most important thing is verifying identities. Not that you have to post with your real name, but just as you can't go to a bank, give them a bag of money, and say, "Open an account," because banks have know-your-customer laws, systemically significant platforms that affect the health of our country should be like banks. Not that they're going to tell you what you can say and can't say, but they're going to say, "In order to speak on this platform, which has Section 230 protection, and we get Section 230 protection, we have a duty to at least verify that you're a human being, that you're old enough to be using the platform." We have to begin to do some age gating.

And I think also verifying that you're in a particular country, because we'll have different rules in different countries. If we do those three things, that would wipe out most of the bots. It would make us less afraid to speak. It would elevate the center and quiet the extremes. That's one, but there are many others. Architectural changes, don't focus on content moderation.

Senator Ossoff:

Professor Haidt, I'm way over time, so I just want to ask you, if you've got a 15-second response here: how do you respond to folks who point to, for example, the role that anonymous pamphleteering played at various points in our history? Or the capacity of whistleblowers or those who possess sensitive information to disclose such information without putting themselves at risk? How does that conflict with what you just suggested, if at all? And that'll be my last question. Thank you.

Jonathan Haidt:

Sure. So the spread of information has always been an issue in democracies. Of all the things going on right now, most have a precedent. But all these things that used to happen, they didn't make us afraid of each other. They didn't make us afraid to speak up in class. This is something new. This is changing the social dynamics. Don't just focus on information and false information; focus on the fear of speaking up. That is what is making our institutions structurally stupid.

Senator Alex Padilla:

Thank you, Senator Ossoff. In Senator Coons' absence, I'm presiding for the time being, and we'll proceed with my questions before recognizing the next member. And let me just begin by thanking Senator Coons for holding this hearing on platform transparency, and all the witnesses for exceptionally detailed testimony. Transparency has been a key piece of the debate over the responsibilities of technology companies and the impact their services have on our democracy and the welfare of our loved ones and our neighbors.

Now, as one of the few senators with an engineering background, I believe that policymaking is at its best when we're also armed with research and data. And to that end, I support the calls to enhance platform transparency, to better serve the public, users of these services, the companies themselves, and lawmakers. We have an important opportunity for real, productive, and effective policymaking.

And before I jump into my first question, I also want to note that I have the honor of representing the state of California, home to more innovators and executives and investors in technology than any other state in the nation. Also, home to more consumers of technology and users of platforms than any other state in the nation. And of course, more employees in the sector than any state in the nation.

And so my first three questions are a little rapid fire. Mr. Silverman, what state are you from?

Brandon Silverman:

California.

Senator Padilla:

Professor Persily, what state are you from?

Nate Persily:

California.

Senator Padilla:

Professor Keller, what state are you from?

Daphne Keller:

California.

Senator Padilla:

No disrespect to the other witnesses, I just wanted to drive home the point: California plays a huge role in this discussion, and in the problem solving as well, from all angles. Now, advocates routinely highlight the failure of technology companies to moderate content in non-English languages.

As Frances Haugen testified at a previous hearing, 87% of all spending combating misinformation on Facebook is spent on English-language content, but only 9% of Facebook users are English speakers. That's important data. Other platforms likely have similar disparities, but they haven't publicly disclosed similar information. And I certainly would love to see that data. Professor Keller, in your testimony, you shared that the EU's Digital Services Act requires public transparency reporting around the number of content moderators platforms employ and what their linguistic expertise is. Do you think similar transparency reporting should be considered in the United States?

Daphne Keller:

So I agree with you that the language issue is incredibly important. And I think one of the most important disclosures from my former client Frances Haugen was that not only is the content moderation not as good in languages that don't have as many speakers, but they also don't build machine learning systems that are as good as a result, so the problem is just compounded.

I do think that attention to additional languages is incredibly important. I mean, unfortunately, it's part of what makes measures of things like prevalence so expensive, because it means that you're deploying people in many languages, with many cultural contexts, to try to extrapolate how much content exists across the platform. But, yes, I am in favor of the reporting that you described.

Senator Padilla:

And just briefly, are there any other types of information or data that would be helpful in addressing this or other disparities that concern you?

Daphne Keller:

Well, one of the most important things is to see the actual content that platforms took down or left up or demoted. If researchers can't see that, then the platforms are grading their own homework. Researchers can't see if there's a pattern of bias or if they're making mistakes or whatever. This collides with the privacy issues that I identified in my testimony for platforms like Facebook. But certainly for publicly shared content, where the person who posted it hasn't tried to take it down, having an opportunity for third parties to look at what's actually going on, in multiple languages, for multiple cultural contexts is really important.

Senator Padilla:

Let alone viability, which further speaks to the complexity of the policy here. Now, on a separate note, when legislating on technology-related issues, I think it's important, and I raised this in the full committee previously, not to enable or incent politically ambitious government officials with legal authorities who may use our efforts to undermine the ability of platforms to limit the spread of hate speech and election disinformation, just as two examples.

A question for Professor Keller, again. In your testimony, you cited the Texas Attorney General's ongoing dispute with Twitter as an example of states trying to influence online speech. So as Congress considers regulating the conduct of tech companies with respect to transparency and competition, what should we know about the battle over online speech regulation? And what's taking place in the states?

Daphne Keller:

I think it's really important to appreciate the rash of cases out there, litigation and state laws, some of which have passed, that have the goal of compelling platforms to carry content that the platforms don't want to carry, content that violates the platforms' rules. And this includes white nationalists saying that they have a right to post white nationalist content on Twitter. And there have been over 70 of these cases. The laws that were passed in Texas and Florida both create these obligations to carry content that violates the platform's rules.

And I'm sympathetic, in a way, these are the public forums of today. I understand why people are very concerned about being excluded, if they are being excluded. But creating a mechanism for government actors to effectively strong arm platforms about their policies, which I think is what is going on with the Texas AG investigation, is quite dangerous.

And I think this is something to worry about with some competition bills pending that are otherwise, I think, really good ideas. I'm sorry that Senator Klobuchar isn't here to hear me talk about her bill, but 95% of it is about saying platforms shouldn't self-preference and promote their own properties over their competitors'. And then both her bill and Senator Blumenthal's bill, he's not here either, have just a couple of sentences that permit a different category of lawsuit. A lawsuit that's Breitbart saying they should be treated like The Wall Street Journal, for example. And I think it's a very serious change in the law if that comes into effect.

Senator Sasse:

Thank you. Speaking of Texas, Senator Cruz.

Senator Ted Cruz:

Thank you, Mr. Chairman. The single biggest threat to free speech in this country in my judgment is the power of big tech. A handful of Silicon Valley billionaires who have arrogated to themselves complete monopoly power over the public discourse. Professor Keller just referred to these social media sites as the public square. And that is very accurate. It is how we speak with each other. And big tech has gotten more and more brazen in its abuse of that power.

This is a hearing on transparency for big tech. I would be in support of almost anything imaginable to increase transparency for big tech. The bill being discussed here is a fairly modest step that gives access to some academic researchers. I suppose that would be fine. It's not clear to me why a professor at Harvard or Stanford should have some special access that Joe Q. Citizen should not. When it comes to transparency, the people have a right to know. But to the extent academic research marginally increases the ability of the people to know what's going on, I imagine that's a positive step.

The lack of transparency is not an accident. It is a deliberate feature of how big tech has set up its systems. To all the witnesses here today, in the 2016 elections, does anyone know how many posts from Republican candidates for office were blocked? Does anyone know how many posts from Democrat candidates for office were blocked? How about in the 2018 election? 2020 election? Does anyone know the average ad rate charged by Google or by Facebook to Democratic candidates for office? Does anyone know the average ad rate charged by Google or Facebook for Republican candidates for office?

Nobody knows. I don't know. The chairman doesn't know. And I'll tell you, Mark Zuckerberg has sat at that table, I've asked him that question. I've asked the CEO of Google those questions. I've asked them those questions in writing. And they hire teams of lawyers to write letters back that say in every way possible, "Pound sand. We refuse to tell you. But by the way, trust us, we're not censoring. We're just not going to tell you."

During the Trump administration, I begged the Department of Justice, if they did nothing else on big tech censorship, to use the subpoena authority of DOJ to get answers to basic questions on transparency. I think there were multiple people in the administration who wanted to do that, but they did not get that accomplished. Few things illustrate the abuse of power of big tech over free speech better than the reaction in the last two weeks to Elon Musk announcing that he's buying Twitter. I find it quite remarkable. I think Elon Musk buying Twitter is, without exaggeration, the most significant development in favor of free speech in decades. I also find astonishing the reaction of much of the corporate media and the left to Elon Musk buying Twitter. And oh my God, suddenly conservatives being allowed to speak, and it is truly Armageddon. It's cats and dogs living together. It is the worst imaginable. Watching the public histrionics of the left if their opponents are not silenced is amazing. And by the way, Elon Musk, last I checked, is not some right-wing character. He's a lifelong Democrat who voted for Barack Obama twice.

And this is the scary specter, because he's dared stand up and say we'll allow free speech, and we'll allow speech I disagree with. Look, I'll give you one data point. I asked how many were blocked. Nobody knew. I could ask how many were shadow banned. No one would know, because they don't tell you. But I'll give you one data point from my own Twitter page. I'm active on Twitter. I spend a lot of time on social media. So Twitter accepted Elon Musk's offer to purchase on April 25th.

All right. On April 22nd, my Twitter account gained 1,488 new followers. On April 23rd it gained 1,526 new followers. On April 24th it gained 1,486 new followers. On April 25th it gained 1,214 followers. Going back, I was pretty consistently gaining 1,000 to 2,000 a day. Twitter accepts Elon Musk's offer to purchase them. The next day, April 26th, Mr. Chairman, I would ask you: how many new followers do you think I gained the next day?

Senator Coons:

More than a thousand.

Senator Cruz:

You would be correct. The next day I gained 51,405. The next day, April 27th, I gained 61,261. The next day, April 28th, I gained 70,584. In the week and a half since Elon Musk purchased Twitter, my Twitter followers went from 4.8 million to 5.1 million. Conservatives all across the country have reported numbers like that, have put up numbers like that. And it is obvious someone flipped a switch. The governors they had on that said silence conservatives were flipped off. That is the only rational explanation for going from 1,000 to 70,000 the day after he bought it. And I'll just point out, he hasn't even taken it over yet. This is just the [inaudible 01:39:11] effect of some engineers who I imagine are running the document shredders like crazy going, "Oh crap. They're going to find out what we're doing. Turn this stuff off."

That activity illustrates the need for transparency profoundly. And I hope Congress does something about it.

Senator Coons:

Thank you to my colleague from Texas. You, I think, highlight the many ways in which, whether it's from the right or from the left, there are lots of questions about how these platforms operate, how they censor, how they reinforce, how they amplify. I am tempted to ask whether any of you have an alternative suggestion for exactly why the Senator's Twitter followers may dwarf mine in number and scope, a question perhaps of interest really only to me or maybe to my colleague from Texas. Mr. Harper, if you'd like to offer. He does follow me, which I appreciate. Very few. My Twitter followers are [crosstalk 01:40:17].

Jim Harper:

If it's not too late, I wanted to ask Senator Cruz to retweet me. I'm Jim underscore Harper.

Senator Coons:

Mr. Harper, if you'd like to speak to that, then I have a second round of questions. My understanding is Senator Blackburn is on her way. Other members on their way, because relatively soon, I am going to have to close the hearing out and go vote the second time. Mr. Harper, if you would, to the intriguing characterization the Senator from Texas made about the unavoidable conclusion one must reach about what is happening at Twitter.

Jim Harper:

Well, I think we know less than that about how things work in those companies and what occurred in the days after the Musk takeover was reported. Let me say, though, that thematically, both his questioning and a lot of what we've discussed here are, for me, speech topics. We're talking about how we want speech in our country to go. How we want it to go. This, by the brusque language of the First Amendment, is not the place where that stuff is supposed to be decided. So again, I don't think my written testimony was the strongest on First Amendment issues, and there's a lot there to consider, but I'm sorry to say it: this body is largely disqualified from regulating speech in the country, and that's for the good for-

Senator Coons:

No, I don't think you should hesitate to say that. I think that is something we can all agree on. This was simply a follow on to Senator Cruz's questioning. I'll now yield to Senator Blackburn and she I think will be the last of our first round of questioners and then Senator Sasse and I may have a second round. Senator Blackburn.

Senator Blackburn:

Thank you, Mr. Chairman. And I want to thank each of you for being here. Over at the Commerce Committee, Senator Blumenthal and I have held five hearings this year on big tech and the effects that it has on the public writ large, and also on children. And Professor Haidt, I want to come to you with a question, if I may. Or let's see, I'm hoping that he is still on. Okay, great. As long as you're still there online. Thank you for your opening statement. I have a question for you pertaining to the Kids Online Safety Act that Senator Blumenthal and I have filed.

We have a provision in there that would allow independent researchers access to the data on these big tech platforms: how they're holding that data, how they're utilizing and crunching that data, when it comes to harms to minors. And what I'd like for you to do for just a moment is talk about, in your research and the work that you are doing with researchers and parents and teachers, physicians, policymakers, what are you finding when it comes to kids 18 years of age and under, as to the impacts and the experience that they have on these social media platforms?

Jonathan Haidt:

Well, thank you for that question. The fundamental reality, the thing we have to keep our eye on, is that using social media is not like consuming sugar. A lot of the research has looked at it that way: if you consume a lot, do you get more sick? If you consume a little, do you get a little sick? But if we look at it from the kids' point of view, what we have to see is that children have always wanted to play with each other and talk to each other and do things with each other. And around 2011, plus or minus, when they all got online, they started spending so many hours online performing for each other that they don't have much time to actually connect and play. Now, when they're actually online, sometimes they say it's fun. Sometimes they say it's not.

So that data is mixed. But the overall impact on them is clearly negative. And this has been overwhelmingly consistent. I gave a talk at my old high school, Scarsdale High School, just before the pandemic. And all the teachers are like, "Oh my God, we can't get through to the kids. They're not paying attention. This is a disaster." And by the time the kids come in, a lot of them are already depressed and fragile. So I went and gave a talk at my middle school in Scarsdale, and I spoke to the principal there and the teachers there. They said the same thing.

In fact, they even said, "By the time they come to us in sixth grade, they're already addicted to their devices. Many are fragile. Many of them are depressed and anxious." So what we're hearing consistently from all who work with kids is this is messing them up. And we don't want this to be happening. So I think it's been terrible from all reports. I don't know anybody who's happy that our kids are now spending four to 10 hours online on social media every day.

Senator Blackburn:

Yeah. Well, we are hearing some of the same things, and it's why we have put the provision in the bill that would allow this research. Ms. Keller, it's good to see you again. And I want to talk with you about privacy, because as you know, the EU has moved forward with a Digital Services Act and a Digital Markets Act. One is more closely akin to our 230, the Services Act. But their Markets Act does deal with privacy. We have yet to address this issue in the US, even though some of us are hard at work on it. I would like for you to touch on why we need to make certain that we have a consumer privacy bill that exercises federal preemption.

Daphne Keller:

So I'm an ex privacy lawyer. It's good to see you again too.

Senator Blackburn:

Yes.

Daphne Keller:

I'm an ex privacy lawyer. And most of what I've worked on is the GDPR, the General Data Protection Regulation in the EU. And it gives them a baseline to start with of how privacy is supposed to work. Then you can build transparency on top of it and have some starting point rules and some mechanics for resolving the new questions that arise when it comes to platform transparency. So the Digital Services Act, as I mentioned in my testimony, it has multiple transparency provisions. It has almost all of the provisions that are in PATA, although it does not have the scraping provision, which is the sort of democratic scrappy kind of research. That one's not in the DSA.

And in particular, the researcher access, the analog of the first four or so sections of PATA, which is in Article 31 of the DSA, works because the GDPR is there, and because they're able to convene experts. And this is happening right now: there should be a code out by the end of the month, trying to say how this is going to work with privacy. So you guys are in a much more difficult position, because you don't have that baseline. And so the lack of federal privacy legislation to start from leaves you having to answer questions that shouldn't be your job to answer.

Senator Blackburn:

Thank you. I appreciate that. We need to get to that starting point. Thank you, Mr. Chairman.

Senator Coons:

Thank you, Senator Blackburn. Senator Sasse, any closing question you'd like to ask before you have to go?

Senator Sasse:

No, thank you again to all the witnesses and not just you, but your team for organizing this. It's been instructive and I'm sure we'll have follow ups. Thanks.

Senator Coons:

If I might, I'm going to ask a couple more questions until I get told I have to go back and vote again. Professor Keller, to the point that Senator Blackburn was raising: help me better pull apart the tensions, the competitive impacts, the privacy concerns, and the progress, that's how I view it, in Europe, the foundation of the GDPR and its potential. We don't yet have the full text of the Digital Services Act, but where do you think Europe is going? What should we learn from European regulatory and legislative efforts in terms of the balance we might strike, both around individual privacy and privacy rights in the digital domain, but also competitive consequences, competitive concerns? Do you think PATA strikes the right balance? Do you think there are other ways in which we might make sure that we are protecting privacy and companies' interests in competitive concerns?

Daphne Keller:

Yeah. So there was a lot in there.

Senator Coons:

Yep.

Daphne Keller:

Please tell me if I missed anything.

Senator Coons:

And then I'm going to invite anybody else who wants to respond to that as I heard Mr. Harper will.

Daphne Keller:

So there are things I really like about the DSA as a regulatory approach. I actually had an op-ed in The Hill a couple of years ago saying Congress can learn things from how Europe approached the equivalent of 230 issues.

Senator Coons:

Did your editorial say, "We hope that Congress can learn"?

Daphne Keller:

I was more polite than that. So there are great things about it. I do think that it gets the balance of competition and other values a little wrong. And if you want to get really in the weeds, I have a blog post on that on something called the Verfassungsblog, which is a German constitutional law blog; it's awesome. But it goes into how, effectively, I think the DSA is sacrificing competition goals in the name of content regulation goals, or speech control goals, by putting lots and lots of mechanical burdens, even on very small platforms, that I think are just disproportionate. It'll make it harder for them to grow up and compete with the bigger platforms.

Senator Coons:

And one of the concerns you've raised in our conversations about PATA is making sure that we're not writing rules that are unduly burdensome for smaller platforms. If you'd make a brief comment on that, I'd appreciate it.

Daphne Keller:

Yeah, I think that's right. PATA has 25 million monthly active users and up as its size range. And I put in an exhibit, I can't swear it's right, because nobody really knows these numbers, but it at least tries to get at the question of which platforms this is talking about. And a lot of them, I think, are not the kinds of platforms that people have in mind when they're asking for this transparency. I should say also, in transparency, I do some consulting for Pinterest, and Pinterest is somewhere on that list. But think of a platform like Glassdoor, which is people talking about their employers doing... That doesn't have the democratic discussion consequences of a Twitter or a Facebook. And so the justification for wanting transparency is lower. And their capability of carrying that burden is also lower.

Senator Coons:

Small d, democratic.

Daphne Keller:

Small d, democratic.

Senator Coons:

If I may, I'm going to jump to Mr. Silverman. We're going to have a series of... I'm going to keep going until I get a text saying, "You've got to leave now, Senator." PATA, the Platform Accountability and Transparency Act, creates a requirement that platforms provide ongoing disclosure of the types of information that have fewer privacy implications. Forgive me: public-facing content, typically viral public-facing content, platform advertising practices. How would expanding disclosure of that kind of information, which I would argue has a lower privacy risk, actually help the public better understand how platforms are impacting online discourse?

Brandon Silverman:

Yeah. Thank you for that question. And I'll start off by saying that I agree with your assessment that there are types of content on these platforms that have greater privacy risks, and there are also types that have less. We had a very stringent set of lawyers and policy folks at Facebook, but the work we were able to do was blessed from their perspective in terms of whether it was privacy safe or not. So there is a lot of content on these platforms that has very minimal privacy risk. One version of that that I think PATA tries to capture, and I think is a really promising place to move, is thinking about content that comes from particularly public, or in some cases what some of us have been calling reasonably public, accounts.

So if you think about the president of the United States, or a major media outlet that uses an account in a very public way, has millions of followers, and uses it for official business: can you in some ways treat that as a type of content where there are fewer privacy implications and that you could make publicly available to a wide swath of the public? When you do that, there is a lot of discourse that is shaped by influential large accounts on these platforms, and even just providing a real-time window into those accounts and that content is enormously valuable to being able to see an important part of what's happening on these platforms.

Senator Coons:

Mr. Harper, you've raised some concerns about privacy in particular. And if I understood your opening testimony, it was essentially that on the balance of interests, there's not enough public good to be accomplished here to outweigh the risks to privacy. Could you see some reasonableness to Mr. Silverman's point that there may be some accounts that are so clearly geared towards large-scale public communication that the privacy risks of offering sort of an under-the-hood look at those accounts are less, and arguably worth that trade-off?

Jim Harper:

Yeah, I think there are lots of ways probably to slice and dice what information is made available to researchers or made available publicly. Obviously information that is not personally identifiable doesn't have privacy risks. It has risks that it will be re-identified and that's important to consider. And I think that's an interesting idea, the accounts that are so public that we don't treat their behavior as private. It might be appropriate to modify the terms of service so that if you are a public figure, for example, you don't get the same privacy that a private citizen does. I can't think through here on my feet exactly.

Senator Coons:

I understand.

Jim Harper:

All the manifestations, but certainly there are ways to limit the privacy consequences of this sharing. My point, which I think you get, is that the wholesale form that PATA allows for threatens privacy, because it could be lots of communications.

I appreciate Professor Keller joining me over here on the skunk side of the garden party. But to highlight those kinds of concerns, if I may, just briefly, in the negative two minutes you have left: we line up in interesting ways, of course, but I see things a phase shift differently on this question of GDPR, for example. It's an outgrowth of the fair information practices, which arguably arise from the privacy study committee that the Department of Health, Education, and Welfare put together in 1971, and I'm saying '71 as if I'm sure of what year it was. It was back around then. But ever since the FIPs came into existence, they have been sort of the intellectual way we figure out what privacy protections should be. And they were obviously more firmly adopted in Europe. They're here in the Privacy Act, for example, in 1974, but they were adopted in Europe because of the civil law tradition in Europe, which is an intellectual tradition: the smart, thoughtful people get together and figure out what the rules should be.

The common law and civil law aren't entirely separate from one another; the categories aren't clean. But the common law tradition of England and the United States is a little more loosey-goosey. And really, in the privacy area, you see distinctions culturally between Europe and the United States, where in Europe, privacy is really more of a dignity value, so that everybody gets the same treatment as the princes and kings and queens have gotten. In the US it's more of a liberty value, that is, keeping King George out of my house. So we have common law privacy protections.

Contract is common law. Property is common law, but so are the privacy torts here in the United States. And I think when it comes to getting data out of these companies, GDPR is a significant disincentive. Mr. Silverman talked about having to go through the lawyers because of the extreme pressure they're under on the privacy side. If your rule is do no harm, you've got a little more leeway to act than if your rule is, we've got this legislation and we've got this regulator in the FTC who's over our shoulder. So in my written testimony, I talk more about the common law. And I think it's a better way to get more innovation and to release a little bit of pressure, so that there can be more give and take that allows researchers access to data.

Senator Coons:

Thank you, Mr. Harper. Now, Professor Persily, if I might, I'm going to ask you one last question, then Professor Haidt one last question, and then go off to the joy that is legislating with my colleagues. The Platform Accountability and Transparency Act, sorry to keep coming back to it, but part of my goal here today was to take both critical and complimentary input and then try and further refine it. It requires that platforms turn over data sets to researchers upon review and approval by the NSF. Some of this data really could be sensitive in ways that I'm concerned about. And I'd be interested in your explanation, your insights, into how the FTC could best work to ensure that user privacy isn't compromised in datasets disclosed to researchers, and how it could be most effective in an overseer, policing function over how that data is used going forward, in ways most protective of individual privacy.

Nate Persily:

Thank you for that. And I think PATA does the kind of tiered disclosure that Brandon was talking about, which is that the sections that look at algorithms, content, and advertising are going to be more publicly available. And then we will have aggregated reports. The same may be true for some of what we're going to call sensitive data here; sometimes aggregated information is going to be the way that it would be given to researchers. No personally identifiable information should be given to researchers, and I think the FTC would make that clear. Now, researchers are not interested in going and looking at individual accounts, right? We want to know about groups of people and what they are sharing and engaging with. And so whether it's through technologies of differential privacy or other kinds of anonymization techniques, this is something that we do all the time.

And this is something that the FTC, which, as we know, has also fined Facebook $5 billion for privacy violations, is the right institution to oversee. So we do need to have privacy baked into this, both in the legislation and in the regulations that would come out of it. But this is a balance that can be struck, and we need to be clear, whether it's heaping more privacy protections into the bill itself or just making clear through the iterative process of the agency, that no individual's privacy should be compromised by this. But we need to make sure that someone outside, someone other than the data scientists at the firm, is able to make the kind of inferences that those on the inside are able to make.

Senator Coons:

Thank you. I think it was framed early on, well, I think it was by you, that we need to have the rest of the world have insights comparable to those who are actually designing and operating and running and making profit from these firms, rather than our current situation, where they know virtually everything and we know virtually nothing. Professor Haidt, if I might invite you to make a closing comment, if there are concerns or issues or questions that have been raised by the testimony so far, I'd welcome them. My question to you is: why is this important? Why act now? Why not simply wait and let the market work itself out? And you're muted.

Jonathan Haidt:

Okay. Thank you. Thank you, Senator Coons. First, I want to apologize to Senator Ossoff. I misunderstood his question. He was asking, when I recommended identity authentication, is this going to put whistleblowers at risk? The answer is no, because identity authentication doesn't mean you post under your real name. You just prove to a third party, ideally a nonprofit or something that would be protected, that you are who you say you are. And even if you were afraid of that, you could still just put up an anonymous blog anywhere you want on the internet and then tell a journalist, hey, here's the whistleblowing stuff on this blog. So identity authentication would reduce a lot of the garbage and the nastiness, and there are still plenty of ways to be a whistleblower and keep complete security. With regards to your question about any sort of closing statement, and especially why now: in the graphs in my submitted testimony, I show that there are hockey-stick-type graphs, that is, a long straight line, and then it goes up and up and up.

And I deliberately focused my analysis on the period up to 2019, before COVID, because I didn't want it to be confused with what COVID has done. Everything I've said is so much worse now that COVID has basically taken kids and given them less free play outside and more time on their devices. So the problem keeps getting more and more serious. It's now to the point where something on the order of a quarter of our girls seem to have a depressive disorder or severe anxiety, and it keeps going up and up and up. So we should have acted. We should have acted 10 years ago. Well, the data was only really clear about six years ago.

We should have acted then. We didn't. And I think we have to act now. And even if you're not convinced that I'm right about causation, we don't have the data. As my NYU colleague Josh Tucker put it, researchers are working with one hand tied behind our backs. We have to do secondary studies and try to look at shadows, whereas they've got the data. They know exactly what the kids are consuming, and they know how happy the kid is. They can code their content. So almost all the data is there. We can't get to it. So we're guessing. We're flying blind. This really needs to change. And I hope that Congress can help us here.

Senator Coons:

Well, I'd like to express my thanks to all of the witnesses who've appeared today. I'm also grateful to the members who attended and asked thoughtful questions, and I'm particularly appreciative to Ranking Member Sasse for being a great partner on this subcommittee and allowing us to hold this hearing. It's reinforced my view that there is much more that platforms can and must do to be more transparent about how their products and services actually affect each of us, our families, our communities, our society, our democracy. I believe that both the public and policymakers need better information if we're going to understand and act on the impact of social media and find better solutions that will allow us to take advantage of all that social media has to offer while limiting the harms that it creates, or even exacerbates.

It's clear we've got a lot more work to do. And I look forward to working with my colleagues and with each of you. And I really, again, deeply appreciate your constructive comments and input today. Members of this committee can submit questions for the record for the witnesses. They are due by 5:00 PM one week from today, thus on May 11th. And I want to thank our witnesses again for participating in this compelling and engaging hearing. And with that, today's hearing is adjourned.
