Transcript: Senate Judiciary Subcommittee Hosts Hearing on Oversight of AI: Insiders’ Perspectives

Justin Hendrix / Sep 18, 2024

Dirksen Senate Office Building Room 226.

On Tuesday, September 17, the US Senate Judiciary Subcommittee on Technology, Privacy, and the Law convened a hearing on Oversight of AI: Insiders’ Perspectives. Witnesses included:

  • Helen Toner, Director of Strategy and Foundational Research Grants, Center for Security and Emerging Technology, Georgetown University (written testimony)
  • Margaret Mitchell, Former Staff Research Scientist, Google AI (written testimony)
  • William Saunders, Former Member of Technical Staff, OpenAI (written testimony)
  • David Evan Harris, Senior Policy Advisor, California Initiative for Technology and Democracy, Chancellor’s Public Scholar, UC Berkeley (written testimony)

What follows is a lightly edited transcript of the discussion.

Sen. Richard Blumenthal (D-CT):

Welcome to the Ranking Member, as well as my colleagues, Senator Durbin, who is chair of the Judiciary Committee, and Senator Blackburn, my partner on the Kids Online Safety Act, and members of this body who have a tremendous interest in the topic that brings us here today. We are very, very grateful to this group of witnesses, who are among the main experts in the country, and not only that, but experts of conscience and conviction about the promise and the dangers of artificial intelligence. We welcome you and we thank you for being here. We've had hearings before in this subcommittee, it seems like years ago, and in fact a short time in artificial intelligence may seem like years in terms of the progress that can be made. We've heard from industry leaders responsible for innovation and progress in AI, and they shared their excitement for the future, but they also warned about serious risks. Sam Altman, for example, who sat where you are now, shared his worst fear that AI could cause significant harm to the world.

But as he sat with me in my office and described a less advanced version of his technology, he assured me that there were going to be safeguards, red teams, all kinds of guardrails that would prevent those dangers. We're here today to hear from you because every one of the witnesses that we have today is an expert who was involved in developing AI on behalf of Meta, Google, or OpenAI, and you saw firsthand how those companies dealt with safety issues and where those companies did well and where they fell short. And you can speak to the need for enforceable rules to hold these powerful companies accountable. And in fact, Senator Hawley and I have a draft framework that would impose those kinds of safeguards and guardrails and impose a measure of accountability. And we are open to hearing from you about ways it can be strengthened if necessary or improved.

But my fear is that we're already beginning to see the horse out of the barn. Mr. Harris, your testimony I think assured us that the horse was not out of the barn, but my fear is that we'll make the same mistake we did with social media, which is too little, too late, and that's why the work that Senator Blackburn and I are doing on kids' online safety is so important to accomplish with urgency. Despite those self-professed fears of Sam Altman and others, big tech companies and leading AI companies are rushing to put sophisticated AI products into the market. The pressure is enormous. Billions and billions of dollars, careers of smart, motivated people are on the line, and what seemed to be a kind of slow walk on AI has turned into literally a gold rush. We're in the wild west and there's a gold rush. The incentives for a race to the bottom are overwhelming, and companies, even as we speak, are cutting corners and pulling back on efforts to make sure that AI systems do not cause the kinds of harm that even Sam Altman thought were possible. We're already seeing the consequences.

Generative AI tools are being used by Russia, China, and Iran to interfere in our democracy. Those tools are being used to mislead voters about elections and spread falsehoods about candidates. So-called face-swapping and "nudify" apps are being used to create sexually explicit images of everyone from Taylor Swift to middle schoolers. In our educational institutions around the country, one survey found that AI tools are already being used by preteens to create fake, sexually explicit images of their classmates. I don't have to expand on this point because everyone in this room, and probably by this point everybody in America who is watching or seeing anything on the news, has become familiar with these abuses. And voice cloning software is being used in imposter schemes targeting senior citizens, impersonating family members and defrauding those seniors of their savings. This fraud and abuse is already undermining our democracy, exploiting consumers, and disrupting classrooms, but it's preventable.

It's also a preview into the world that we will see expanding and deepening without real enforceable rules. And now artificial general intelligence, or AGI, which I know our witnesses are going to address today, provides even more frightening prospects for harm. The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It's very far from science fiction. It's here and now; one to three years has been the latest prediction, in fact, before this committee. And we know that artificial intelligence that's as smart as human beings is also capable of deceiving us, manipulating us, concealing facts from us, and having a mind of its own when it comes to warfare, whether it's a cyber war or nuclear war or simply war on the ground on the battlefield.

So we have no time to lose to make sure that those horses are still in the barn. And I am going to abbreviate the remarks that I was going to make because we've been joined by a number of our colleagues and I want to get to the testimony and give Senator Hawley a chance to comment. But let me just say, for the benefit of others in this room, I know our witnesses are familiar with our legislation. The principles of this framework include licensing: establishing a licensing regime and transparency requirements for companies that are engaged in high-risk AI development. It is about oversight: creating an independent oversight body that has expertise with AI and works with other agencies to administer and enforce the law. Watermarking: rules around watermarking and disclosure when AI is being used. And enforcement: ensuring that AI companies can be held liable when their products breach privacy, violate civil rights, or cause other harm.

And I will just emphasize the last of these points: enforcement. For me as a former law enforcer (I served as attorney general in my state and as the federal prosecutor, the US Attorney, for most of my career), it is absolutely key, and I think to Senator Hawley as well, as a former attorney general, and to many others like Senator Klobuchar. I'm very hopeful that we can move forward with Senator Klobuchar's bill on election security. She's done a lot of work on it and it's an excellent piece of legislation, and I salute her for her leadership, as well as Senator Durbin's and Senator Coons' bill on deepfakes. We have a number of proposals like them that are ready to become law, and I hope that a hearing like this one will generate the sense of urgency that I feel in my colleagues as well. You can see from the membership today that it's bipartisan, not just Senator Hawley and myself, but literally bipartisan, I think, across the board, just as the vote on the Kids Online Safety Act was 91 to three in the United States Senate to approve it.

I think we can generate the same kind of overwhelming bipartisan support for these proposals. Finally, what we should learn from social media, from that experience, is: don't trust big tech. I think most of you very explicitly agreed we can't rely on them to do the job. For years they said about social media, trust us. We've learned we can't and still protect our children and others. And as one of you said, they come before us and say all the time, we're in favor of regulation, just not that regulation. Or they have other tricks that they are able to move forward with the armies of lobbyists and lawyers that they can muster. So we ask for their cooperation. I challenge big tech to come forward and be constructive here. They've indicated they want to be, but some kind of regulation to control and safeguard the people of the world, not just America, has to be adopted, and I hope that this hearing will be another step in that process. I turn to the Ranking Member.

Sen. Josh Hawley (R-MO):

Thank you very much, Mr. Chairman. Thanks for your leadership on this, this entire Congress. It's been a real pleasure to work with you. Thanks to our witnesses for being here. I don't really have much to add to Senator Blumenthal's outstanding opening statement other than just to observe that we have had many executives sit where our witnesses today are sitting, many avid proponents of this AI revolution that we're in the midst of, and we've heard a lot of promises to this subcommittee from those executives, who, I might just point out, always seem to have a very significant financial interest in what they're saying. Be that as it may, they have given us all kinds of rosy predictions. AI is going to be wonderful for this country. It's going to be fantastic for the workers of this country. It's going to be amazing for normal, workaday Americans.

Well, I think today's hearing is particularly interesting and particularly important because today we start to test those promises we have in front of us, folks who've been inside those companies who have worked on these technologies, who have seen them firsthand, and I might just observe, don't have quite the vested interest in painting that rosy picture and cheerleading in the same way that some of these other executives have. So I want to particularly thank you for being here today. Our witnesses, thank you for being willing to speak up and thank you for being willing to give the American people a window into what's actually happening with this technology. I think the testimony you're about to offer is so important, and I think it will help us realize, understand where this technology is, what the challenges are that we are facing, and also I hope help us as Senator Blumenthal alluded to, legislate in a way that will actually protect the American people, which is our charge in all of this. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hawley. I haven't asked Senator Durbin whether he would like to make an opening statement. Evidently not. So let me just introduce the witnesses. Helen Toner is a policy researcher at Georgetown University's Center for Security and Emerging Technology, where her work focuses on AI safety, US-China competition, and national security. She previously worked as a senior research analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Alongside her work at CSET, she served on OpenAI's nonprofit board of directors from 2021 to 2023. David Evan Harris is a chancellor's public scholar at the University of California, Berkeley and a faculty member at the Haas School of Business, where he teaches courses on artificial intelligence ethics, social movements, and social media and civic technology. In addition to teaching, he conducts research on AI and governance and serves as an advisor to the California Initiative for Technology and Democracy and a number of other organizations focused on technology and policy.

Mr. Harris has advised the White House, US Congress, European Union, United Nations, NATO, and the California legislature about technology policy. His writings and commentary have been featured by publications too numerous to mention here. Margaret Mitchell is a computer scientist who works on the ethical development of AI systems within the tech industry. She has held researcher positions at Microsoft and Google, and her pioneering work on model cards, now common across the tech industry, has been recognized by the then-Secretary of Defense, Ash Carter, as an outstanding innovation for public good. She currently works at Hugging Face as a researcher and chief ethics scientist, driving forward work on ML data processing, responsible AI development, and AI ethics, and she's appearing today in her personal capacity. William Saunders worked at OpenAI for three years on the Alignment team, which later became the Superalignment team, researching techniques to help humans understand and control AI systems. He resigned in February 2024, concerned that OpenAI was not adequately preparing for the advanced AI systems the company is trying to build. So, as Senator Hawley very, very aptly said, you are insiders who have the courage to come forward, and we are very grateful to you. As is our custom, I would ask you to rise and I'll administer the oath.

Do you swear that the testimony that you're about to give is the truth, the whole truth, and nothing but the truth, so help you God? Very good, thank you. We'll just begin and go down the panel beginning with Ms. Toner.

Helen Toner:

Chair Blumenthal, Ranking Member Hawley, members of the subcommittee, thank you for the opportunity to testify today. I want to start by commending you on the depth and sustained focus that this subcommittee is bringing to the many and varied challenges and opportunities that we face with AI. My work has focused on AI policy for the last eight years, with an emphasis on national security, US-China competition, and AI safety. In 2019, I moved to Washington to help found CSET, the Center for Security and Emerging Technology at Georgetown University, which has since grown to be a highly respected source of analysis on AI policy issues. While working full-time at CSET, I also spent two and a half years serving on OpenAI's nonprofit board, from 2021 until the widely covered events of last November. Today, I'll be drawing on my research at CSET, my experiences on the board, and my extensive relationships and interactions with AI researchers and executives from the years that I've worked in this space. The title of this hearing is Oversight of AI: Insiders' Perspectives, and the biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence, or AGI.

This term AGI isn't well-defined, but it's generally used to mean AI systems that are roughly as smart or capable as a human. In public and policy conversations, talk of human-level AI is often treated as either science fiction or marketing, but many top AI companies, including OpenAI, Google, and Anthropic, treat building AGI as an entirely serious goal, and a goal that many people inside those companies think they might reach in 10 or 20 years, with some believing it could be as close as one to three years away. More to the point, many of these same people believe that if they succeed in building computers that are as smart as humans, or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive and at a maximum could lead to literal human extinction. The companies in question often say that it's too early for any regulation because the science of how AI works and how to make it safe is too nascent.

I'd like to restate that in different words. They're saying: we don't have good science of how these systems work or how to tell when they'll be smarter than us, and we don't have good science for how to make sure they won't cause massive harm, but don't worry, the main factors driving our decisions are profit incentives and unrelenting market pressure to move faster than our competitors, so we promise we're being extra, extra safe. Whatever these companies say about it being too early for any regulation, the reality is that billions of dollars are being poured into building and deploying increasingly advanced AI systems, and these systems are affecting hundreds of millions of people's lives even in the absence of scientific consensus about how they work or what will be built next. So I would argue that a wait-and-see approach to policy is not an option. I want to be clear: I don't know how long we have to prepare for smarter-than-human AI, and I don't know how hard it will be to control it and ensure that it's safe.

As I'm sure the committee has heard a thousand times, AI doesn't just bring risks. It also has the potential to raise living standards, help solve global challenges, and empower people around the world. If the story were simply that this technology is bad and dangerous, then our job would be much simpler. The challenge we face is figuring out how to proactively make good policy despite immense uncertainty and expert disagreement about how quickly AI will progress and what dangers will arise along the way. The good news is that there are light touch adaptive policy measures we can adopt today that can both be helpful if we do see powerful AI systems soon and also be helpful with many other AI policy issues that I'm sure we'll be discussing today. I want to briefly highlight six policy building blocks that I describe in more detail in my written testimony.

First, we should be implementing transparency requirements for developers of high-stakes AI systems. We should be making major research investments in how to measure and evaluate AI, as well as how to make it safe. We should be supporting the development of a rigorous third-party audit ecosystem, bolstering whistleblower protections for employees of AI companies, increasing technical expertise in government, and clarifying how liability for AI harms should be allocated. These measures are really basic first steps that would in no way impede further innovation in AI. This kind of policy is about laying some minimal, common-sense groundwork to help us get a handle on AI harms we're already seeing, and also set us up to identify and respond to new developments in AI over time. This is not a technology we can manage with any single piece of legislation, but we're long overdue to implement some of these basic building blocks as a starting point. Thank you, and I look forward to your questions.

Sen. Richard Blumenthal (D-CT):

Thanks very much, Ms. Toner. Mr. Saunders.

William Saunders:

Mr. Chairman, Ranking Member Hawley and distinguished members, thank you for the opportunity to address this committee. For three years I worked as a member of technical staff at OpenAI. Companies like OpenAI are working towards building artificial general intelligence, AGI. They're raising billions of dollars towards this goal. OpenAI's charter defines AGI as highly autonomous systems that outperform humans at most economically valuable work. This means AI systems that could act on their own over long periods of time and do most jobs that humans can do. AI companies are making rapid progress towards building AGI. A few days before this hearing, OpenAI announced a new system, GPT-o1, that passed significant milestones, including one that was personally significant for me. When I was in high school, I spent years training for a prestigious international computer science competition. OpenAI's new system leaps from failing to qualify to winning a gold medal, doing better than me in an area relevant to my own job.

There are still significant gaps to close, but I believe it is plausible that an AGI system could be built in as little as three years. AGI would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause catastrophic harm via systems autonomously conducting cyberattacks or assisting in the creation of novel biological weapons. OpenAI's new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat. Without rigorous testing, developers might miss this kind of dangerous capability. While OpenAI has pioneered aspects of this testing, they've also repeatedly prioritized speed of deployment over rigor. I believe there is a real risk they will miss important dangerous capabilities in future AI systems.

AGI will also be a valuable target for theft, including by foreign adversaries of the United States. While OpenAI has publicly claimed to take security seriously, their internal security was not prioritized. When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me or hundreds of other employees at the company to bypass access controls and steal the company's most advanced AI systems, including GPT-4. No one knows how to ensure that AGI systems will be safe and controlled. Current AI systems are trained by human supervisors giving them a reward when they appear to be doing the right thing. We will need new approaches when handling systems that can find novel ways to manipulate their supervisors or hide misbehavior until deployed. The Superalignment team at OpenAI was tasked with developing these approaches, but ultimately we had to figure it out as we went along, a terrifying prospect when catastrophic harm is possible. Today, that team no longer exists. Its leaders and many key researchers resigned after struggling to get the resources they needed to be successful.

OpenAI will say that they're improving. I and other employees who resigned doubt they'll be ready in time. This is true not just of OpenAI; the incentives to prioritize rapid deployment apply to the entire industry. This is why a policy response is needed. My fellow witnesses and I may have different specific concerns with the AI industry, but I believe we can find common ground in addressing them. If you want insiders to communicate about problems within AI companies, you need to make such communications safe and easy. That means a clear point of contact and legal protections for whistleblowing employees. Regulation must also prioritize requirements for third-party testing both before and after deployment. Results from these tests must be shared. Creating an independent oversight organization and mandated transparency requirements, as in Senator Blumenthal and Senator Hawley's proposed framework, would be important steps towards these goals. I resigned from OpenAI because I lost faith that by themselves they will make responsible decisions about AGI. If any organization builds technology that imposes significant risks on everyone, the public must be involved in deciding how to avoid or minimize those risks. That was true before AI. It needs to be true today with AI. Thank you for your work on these issues, and I look forward to your questions.

Sen. Richard Blumenthal (D-CT):

Thanks very much, Mr. Saunders. By the way, I'm going to ask that all of your written testimony be made a part of the record. I know you've abbreviated your remarks, as I expect the others will do as well. Your full remarks will be part of the record without objection. Mr. Harris.

David Evan Harris:

Chairman Blumenthal, Ranking Member Hawley and members of the committee, it is an honor to appear before you today to discuss the harms and risks of artificial intelligence. It is particularly heartening to see that this committee has developed a promising bipartisan framework and proposals that I earnestly hope will become law. They are urgently needed to provide effective guardrails for AI systems. My name is David Evan Harris. From 2018 to 2023, I worked at Facebook and Meta on the civic integrity and responsible AI teams. In my role, I helped lead efforts to combat online election interference, protect public figures, and drive research to develop ethical AI systems and AI governance. Today, those two safety teams do not exist. In the past two years, there have been striking changes across the industry. Trust and safety teams have shrunk dramatically. Secrecy is on the rise and transparency is on the decline.

Since leaving Meta, I have helped craft two bills about deepfakes and elections in California that await the governor's signature. Working closely with policymakers in California, Arizona, and internationally, I am more convinced than ever that effective oversight of AI is possible. Today, there are three things that I hope you take away from my testimony. First, voluntary self-regulation does not work. Second, many of the solutions for AI safety and fairness already exist in the framework and bills proposed by the members of this committee. Third, as you said, not all the horses have left the barn. There is still time to act. Back to my first point: voluntary self-regulation is a myth. Take just one example from my time at Facebook. In 2018, the company set out to make time on their platforms into time well spent, reducing the number of viral videos and increasing more meaningful content from friends and family.

The voluntary policy opened up a vacuum that TikTok was more than happy to step into. Today, Facebook and Instagram are fighting to claw back market share from TikTok with Reels, essentially those same viral videos that they sought to diminish. When one tech company tries to be responsible, another less responsible company steps in to fill the void. While non-binding efforts such as the White House voluntary AI commitments and the AI Elections Accord are positive steps forward, the reality is that we've seen very little clear progress towards the promises made by tech companies in those commitments. When it comes to policies and laws governing AI, laws with shalls rather than mays are essential. Without the shalls, the legislation becomes voluntary, and many companies will delay or simply avoid taking meaningful actions to prioritize safety or harm reduction. To my second point: we don't need silver bullets. The framework proposed by this committee's leadership already has so many of the answers.

Two recommendations in particular in the framework are essential components for legislation: AI companies should be held liable for their products, and they should be required to embed hard-to-remove provenance data in AI-generated content. It is encouraging to see a bill on transparency in elections that would require labeling of some AI-generated material. More steps like these are needed. This brings me to my final point. The horses have not left the barn. The misconception is that it is too late to do anything. It can be dizzying to watch the fast-paced releases of AI voice and image deepfakes and the growing role of biased AI systems making decisions about our lives, but there are still so many more uses of AI technology that have not yet seen the light of day. Next: realistic video deepfakes, live audio deepfakes that can interact with millions of people at once, personalized election disinformation calls, large-scale automated sex schemes targeting children. Those are just a few of the ones that we see on the horizon. We need to move quickly with binding and enforceable oversight of AI. It is possible. If you take action now on the promising framework and bills already before you, you can rein in the Clydesdales and the centaurs waiting just behind the barn door. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you very much, Mr. Harris. Ms. Mitchell.

Margaret Mitchell:

Chairman Blumenthal, Ranking Member Hawley, and members of the subcommittee, thank you for the opportunity to testify here today. My name is Margaret Mitchell and I'm here in my capacity as a computer scientist and researcher who has worked in the tech industry for over 10 years. My PhD is in natural language generation, once a niche area but currently a topic of intense interest all over the world. As a student eating ramen at a state school in Washington, I couldn't imagine that my work would have the relevance that brings me here today. I've had the privilege of being a researcher at Microsoft and Google, and I'm currently chief ethics scientist at the AI startup Hugging Face. I'm grateful that I've had the opportunity to work with some of the world's brightest minds in pursuit of creating beneficial technology that can aid and assist people in their work and everyday lives.

I began working on ethical AI around 2015 as neural networks began to show clear signs of working well for the tasks I cared about. While they could be used to provide beneficial technology such as visual descriptions for people who are blind, they were entangled with a host of troubling issues. For example, the amount of data they required brought with it concerning questions for me about how the data was being collected and how data biases were being accounted for. These issues were difficult to address within the tech industry. It was too easy to overlook the connection between what we developed and the foreseeable harms to people once deployed, too easy to gather more and more data while ignoring issues of consent, credit and compensation. We were on a path of creating systems that we didn't fully understand enabled by data sets that we hadn't critically analyzed and motivated by evaluations that didn't critically engage with easily foreseeable contexts of use.

When I joined Google in 2016, I was highly motivated to try and change this. One of the basic ideas I had was the role of foresight: of thinking through all the different ways that technology might evolve and using this to develop technology that's as beneficial as possible while mitigating foreseeable risks before they occur. But this type of thinking is difficult to incorporate into standard development practices, where internal incentives push developers to launch new products quickly without productive collaboration across different ideas, viewpoints, and critiques. As I discussed these issues with my friends across the industry, it seemed this problem was common, present throughout large tech companies. Because of this, I realized that ethical AI practices could be more successful if I could turn those practices into launches, and so with my colleagues, we introduced model cards: launchable artifacts that incentivize thinking about a technology's impacts, including documenting the intended use of a model and evaluating its performance across subpopulations that could foreseeably be subject to unfair treatment from an AI system.

Part of my work on documentation and the role of critical thinking and foresight in development made it clear to me that there should be some amount of due diligence before a system is developed, such as research to inform predictions on what the system will be like by reviewing past literature and work from related fields. This led me to co-author a now-famous paper called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" I've since continued my work trying to understand gaps in AI development that overlook the impacts of the technology, and developing methods to address them. I've taken on new challenges in understanding when systems might be more open or more closed, and now work at the AI startup Hugging Face, a company focused on open science, where I operate in a culture of transparency and collaboration across different viewpoints, and for the first time can go deeply into how the AI world I'm most familiar with connects to AI policy.

My journey has made clear some of the ways that the government might be able to help shape AI development within the tech sector so that it may be positive for the public. Briefly, this involves filling key gaps in AI development practices and research, including: one, research on the relationships between model inputs and model outputs; two, rigorous analysis of data, models, and systems; three, implementing due diligence and foresight, including before development; and four, operationalizing transparency in the AI life cycle, including after deployment. Potential solutions primarily take two forms: federal research funding and requiring documentation and disclosure. And I echo my colleagues here in advocating for increased support for whistleblowing. I greatly appreciate the work that so many members of Congress, congressional staff, federal agencies, and my research and private sector colleagues are doing to advance the science and practice of ethical AI. I look forward to discussing these important topics with you today and welcome your questions.

Sen. Richard Blumenthal (D-CT):

Thank you very much, Ms. Mitchell. Again, I want to thank all of you for your very insightful and important testimony. We're going to have five minute rounds of questions and I'll begin. Mr. Saunders, you, I'm sorry, Mr. Harris, you said in your testimony, trust and safety teams have shrunk dramatically. Secrecy and lack of transparency have increased. Is that true across the board of all companies?

David Evan Harris:

Thank you so much, Senator Blumenthal, for the opportunity to answer that question. I have appended in the appendices to my written testimony a report by an organization called Free Press that is entitled "The Big Tech Backslide." I would encourage you and your colleagues to visit this report. It's the most comprehensive study that I have seen so far about the backslide. Part of the backslide consists of just removing the people from those teams. It has been reported that Elon Musk let go of 81% of the staff of Twitter after purchasing it. We saw large layoffs across the board in the entire sector, but part of the backslide is also the retreat from policies, specific policies that the companies had designed to keep our elections safe. That said, I personally have observed one company that has actually made strides forward and hired many people during this same period of the past two years, and you're not going to like which one it is, but it's TikTok. Honestly, TikTok has been hiring a lot of the people who were let go from many of those other companies.

Sen. Richard Blumenthal (D-CT):

But the measure of their commitment to some oversight and safeguards is their investment in, in effect, trust and safety teams, and they have been backsliding, which belies their stated commitment of: trust us, we'll do it, don't worry.

David Evan Harris:

Yes. Perhaps because my equine metaphor resonated so much with you, I could offer another story from the animal kingdom, but this time ursine in nature. There is a metaphor that is used in the tech industry, that I know of being used in at least two of the very biggest tech companies, and it's called the metaphor of the bear. You senators are the bear in this metaphor, along with other regulatory bodies, the regulators. And the tech companies in this metaphor are people running away from the bear as fast as they can. In this story, the bear eventually catches up and it eats the slowest person, the slowest tech company. Thus, the moral of the story, if you are a tech company, is just don't be the second slowest. That is the strategy. So as long as you can point to another tech company that is doing a worse job than you are on trust and safety, the idea is that that is the optimal allocation of resources in your company.

Sen. Richard Blumenthal (D-CT):

I think what's important here is that all of the tech companies, not just one, not just, for example, OpenAI, we're not singling out one or another, are engaged in that kind of strategy. Let me ask Ms. Toner: you wrote in your testimony that your experience on the board of OpenAI "taught me how fragile internal guardrails are when money is on the line" and why it's imperative that policymakers step in. I know there are limits on what you can discuss publicly, and I'm very respectful of them, but maybe you can tell us what you had in mind when you wrote that sentence.

Helen Toner:

Certainly, Senator, and there has now been enough public reporting of some of the kinds of incidents that demonstrate this dynamic. One involves a process called the Deployment Safety Board, which was set up internally, a great idea, something to try, to coordinate safety between OpenAI and Microsoft when using OpenAI products. It has been reported publicly that in the early days of that process, Microsoft, in the midst of planning a very big, very important launch, the launch of GPT-4, went ahead and launched GPT-4 to tens of thousands of users in India without getting approval from that Deployment Safety Board. Another example would be that, since I stepped off the board, there have been concerns raised from inside the company that in the lead-up to the launch of their GPT-4o model, the voice assistant with the very, very exciting launch videos, which was launched the day before a Google event that OpenAI knew might upstage them, the company was unable to fully carry through the kinds of safety commitments it had made in advance of that launch. So there are additional examples, but I think that those two illustrate the core point.

Sen. Richard Blumenthal (D-CT):

Thank you. I'm going to be hopeful we'll have another round of questions, but I'm going to be respectful about my five minute rule and turn to the Ranking Member.

Sen. Josh Hawley (R-MO):

Thank you, Mr. Chairman. Ms. Toner, I just want to stay with you and maybe pick up there. My understanding is that when you left the OpenAI board, one of the reasons that you did so is you felt you couldn't do your job properly, meaning you couldn't effectively oversee Mr. Altman and some of the safety decisions that he was making. You have said this year, and I'm just going to quote you, that Mr. Altman gave inaccurate information about the small number of formal safety processes that the company did have in place; that is, he gave incorrect information to the board. To the extent you're able, can you just elaborate on this? I'm interested in what's actually being done for safety inside this company, in no small part because of what he told us when he sat where you're sitting.

Helen Toner:

Thank you, Senator. Yes, I'm happy to elaborate to the extent that I can without breaching any confidentiality obligations. I believe that when the company has safety processes, they announce them loudly and proudly, so I believe that you and your staff would be aware of the processes they had in place at the time. One that I was thinking of, which was one of the first formal processes that I'm aware of, was this Deployment Safety Board that I just discussed, and this breach by Microsoft that took place in the early days there. Since then, they have introduced a preparedness framework, and I want to commend many of these companies for taking some good steps. I think the idea behind the preparedness framework is good, and to the extent they execute on it, that's great, but there have been concerns raised about how well they're able to comply with it.

It's also been publicly reported that the really respected expert they brought in to run that team has since been reassigned from that role, and I worry about what that means for the influence that team is able to exert on the rest of the company. I think that is illustrative as well of a larger dynamic that I'm sure all of the witnesses here today have observed, which is that there are really great people inside all of these companies trying to do really great things, and the challenge is that if everything is up to the companies themselves and to leadership teams who need to make trade-offs around getting products out, making profits, and attracting new investors, those teams may not get the resourcing, the time, the influence, the ability to actually shape what happens that they need. So I think many of the dynamics that I witnessed echo very much what I'm hearing from my fellow witnesses.

Sen. Josh Hawley (R-MO):

That's very helpful. Let me just ask you this, let me just put a finer point on it, because Mr. Altman, as I said, testified to us in this connection this past year. Here's part of what he said: "We make," meaning OpenAI, "significant efforts to ensure that safety is built into our systems at all levels." And then he went on to say, and I'm still quoting him: "Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior and implements robust safety and monitoring systems." In your experience, is that accurate?

Helen Toner:

I believe it is possible to characterize the company's activities accurately that way, yes. The question is: how much is enough? Who is making those decisions, and what incentives are driving those decisions? So in practice, if you make a commitment, you have to write that commitment down in some words, and then when you go to implement it, there are going to be a lot of detailed decisions you have to make about what information is shared with whom, at what time, and who is brought into the right room to make a certain decision. Is your safety team, whatever kind of safety team it might be, brought in from the very beginning to help with conception of the product and really think from the start about what implications this might have, or are they handed something a couple of weeks before a launch deadline and told, okay, make this as good as you can?

I'm not trying to refer to any specific incidents at OpenAI. I'm really referring, again, to examples that I've heard reported publicly and heard from across the industry. There are good efforts, and I worry that if we rely on the companies to make all of those trade-offs, all those detailed decisions about how those commitments are implemented, they're just unable to fully account for the interests of a broad public. And I think you hear this as well from people, I've heard this from people in multiple companies, sentiment along the lines of: please help us slow down, please give us guardrails that we can point to, that are external, that help us not be subject only to these market pressures.

Sen. Josh Hawley (R-MO):

Just in general, is your impression now, was OpenAI doing enough in terms of its safety procedures and protocols to adequately vet its own products and to protect the public?

Helen Toner:

I think it depends entirely on how rapidly their research progresses. If their most aggressive predictions of how quickly their systems will get more advanced are correct, then I have serious concerns. But their most aggressive predictions may well be wrong, in which case I'm somewhat less concerned.

Sen. Josh Hawley (R-MO):

Let me finally, because I want to be mindful of the time and I've got colleagues who want to ask questions, let me just end with this. In your written testimony, you make, I think, a very important and helpful point about AI development in China and why the competition with China, though real, should not be taken as an excuse for us to do nothing. Could you just amplify that? Because we've heard a lot of folks sitting where you're sitting over the last year and a half raise the China point and usually say, well, we mustn't lose the race to China, therefore it would be better if Congress did little to nothing. You think that that's wrong? Just explain to us why.

Helen Toner:

I think that the competition with China is certainly a very important consideration, and we should be keeping a very close eye on what they're doing and how US technology compares to their technology. But I think it is used as an all-purpose excuse not to regulate and an all-purpose defense against any kind of regulation. I think that's mistaken on a few fronts. It's mistaken because of what's happening in China. They are regulating their sector pretty heavily. They are scrambling to keep up with the US. They're facing some serious macro headwinds in terms of economic problems and access to semiconductors after US export controls. So China has its own set of issues. We shouldn't treat them as just absolutely raring to go and about to pass us at any moment. And I think it also totally belies the fact that regulation and innovation do not have to be in tension. This is a technology, AI, that consumers don't trust. There have been recent consumer sentiment surveys showing that if they see AI in a product description, they're less likely to use the product. So if you can implement regulation that is light touch, that increases consumer trust, and that helps the government be positioned to understand what is going on with the technology, you can regulate in really sensible ways without impacting innovation at all, and then it's irrelevant whether it's going to affect the race with China.

Sen. Richard Blumenthal (D-CT):

Thanks very much, Senator Hawley. Senator Durbin.

Senator Dick Durbin (D-IL):

Thank you. I'm going to ask some basic questions. Liberal arts lawyer, forgive me. Maybe that will lead to inspiration, I'm not sure. I'm trying to understand the mechanism of regulation in the AI venture. We look back in history to the Manhattan Project, and it was inspired by the government, funded by the government, staffed by the government, built by the government, and it succeeded, I think it generally succeeded. Then you fast forward a few years to the race to the moon, and the question is, it was inspired by the government, but now do we have more private sector involved in it in terms of the actual project and its success? Then you go to the world of quantum, and let me tell you, at this point that's as far as I can go in my expertise, other than the fact that I met a man who works for DARPA, who explained to me that his job is to prove that the quantum effort of a certain private company doesn't work.

He's supposed to prove this over and over again. His name is Altepeter, very interesting guy. But it appears that that is all private sector, and we as regulators are on the outside getting ready to prove it doesn't work, hoping that someday we won't be able to prove that. So the question is, now, where is AI, and can we as a government actually regulate the future of an industry unless we have a team and technology that matches what they have, or at least comes close to it? It seems to me it's tough to regulate an entity if you don't understand it and the decisions that they have to make. Would anybody like to straighten me out, please?

Margaret Mitchell:

Yes, thank you for the question. I think that the government has a really useful role to play here in incentivizing good work. So in my written testimony, I speak to how there are very clear gaps in current AI development processes within the tech industry, where there just isn't enough research and there isn't enough work, and it hasn't organically emerged due to market pressures and things like that. In particular, there isn't a really rigorous, mature science of data analysis in order to understand the relationship between inputs and outputs. We've seen in the past that there have been grants available through DARPA, for example, to increase the state of the art in machine translation. So now it's possible to communicate with someone speaking in a different language without knowing that language; that has been partially enabled by DARPA and the grants through there. So I think that by focusing in on these sorts of difficult points within the AI lifecycle where there really hasn't been development and there should be, and I think we cover that in some of our written testimonies, the government has a very good role to play in helping make sure that AI continues to be developed in a way that's beneficial, as opposed to overlooking a lot of the serious issues.

Senator Dick Durbin (D-IL):

The way you explained your background, at least I hope I caught it, you've been primarily private sector?

Margaret Mitchell:

Yes.

Senator Dick Durbin (D-IL):

So the question I'm asking you is if the government is going to regulate you with some federal employees or contract employees, do we have the level of expertise to really interface with what you're doing in the private sector?

Margaret Mitchell:

I think it's possible to hire for the expertise if you don't already have it. I mean, I will speak to that: as I've met with staff, I've been incredibly impressed by the intelligence of the people that I've operated with, so I do have some faith that the government is able to employ relevant people. I think part of the difficulty is the compensation, right? Big tech companies are offering million-dollar packages, so that ends up being, I think, quite a difficult tension there. But this is a situation where you might have individuals in government, under NDA, working within the company or auditing the company in some way, respecting trade secrets, IP, those kinds of things, but still being able to sort of lift the hood and see what's happening underneath and provide feedback on it.

Senator Dick Durbin (D-IL):

Anyone else want to comment on the government effort? Mr. Harris?

David Evan Harris:

Yes, thank you so much for the question, Senator Durbin. I understood two different elements from your question: one being, can we actually regulate this, and two being, do we have the right people in government to do it? To the first point, I would refer you back to the excellent Blumenthal-Hawley framework that has already been presented.

Senator Dick Durbin (D-IL):

I was hoping you would say that.

David Evan Harris:

It contains the key elements that I would recommend even if I had not read it. I would have said licensing and registration of AI systems and the companies that make them; liability, clearly holding AI companies liable for the products that they make; and provenance, giving people the ability to know what content is produced by AI and what content is produced by humans. Those are just a few pieces that are already in the framework, and I'm excited to see legislative text with the details of that. To your question about talent, this is not the first time this issue has come up in the federal government. You might recall the healthcare.gov launch, and that was...

Yes, that was, I believe, one of the first major public moments when people realized it was very hard for the federal government, under procurement and staffing procedures, to hire the right people to launch large-scale technology projects. Now, there have been advances since then. There's something called cyber pay that allows agencies to pay a little bit more, but I don't think it's gone far enough. I think there's a situation now where one of the techniques to recruit is to find people who've already made a good amount of money in the private sector and can make a little bit less for a few years and then know they're going to go back. I don't actually think that's a long-term sustainable model. I think the rates of pay still need to go up, but I would like to call your attention to the AI Safety Institute, which exists within NIST, within the Department of Commerce, and which has actually started hiring up for this task at hand. I think that there are incredible people there who need a lot more money, and I hope that you will give that to them.

Sen. Richard Blumenthal (D-CT):

Thank you. Thanks, Senator Durbin, for your support, and others', for the framework that Senator Hawley and I have developed. I'm not going to hold you to support for the bill because the text is important. We hope it will be available very, very shortly. But your support for the principles and the basic framework is very, very important to us and encouraging us to go forward. I'm going to call on Senator Kennedy, and then he'll be followed by Senator Padilla. We have a vote ongoing. I'm going to run to vote so that I will be back, and then leave Senator Hawley to chair, and we'll be tag-teaming, but we'll go forward with the questioning while I'm gone. Senator Kennedy.

Sen. John Kennedy (R-LA):

Thank you, Mr. Chairman. Mr. Harris, you mentioned as one of the tenets of regulation provenance. By that, do you mean notice, that when consumers deal with a robot, they should be told that it's a robot?

David Evan Harris:

Thank you so much for your question.

Sen. John Kennedy (R-LA):

You don't need to thank me. You can just answer it.

David Evan Harris:

There are multiple elements of provenance technologies. One is disclosure, and yes, that is saying that if a company uses AI and has AI interacting with you, that it should disclose that you are interacting with an AI system or a bot. But another element of…

Sen. John Kennedy (R-LA):

So number one is notice: if I'm interfacing with a robot or with artificial intelligence, the owner of that artificial intelligence should tell me, as a consumer, "I'm a robot."

David Evan Harris:

Yes, absolutely.

Sen. John Kennedy (R-LA):

Okay, that's number one. Tell me what number two is.

David Evan Harris:

So number two is sometimes referred to as watermarking. I know that this committee heard testimony in April about this topic of watermarking, and watermarking can happen in one of two ways. It can be a visible or direct disclosure that content, for example an AI-generated image, an AI-generated audio file, video, or even text, is AI generated. You've seen watermarks that have…

Sen. John Kennedy (R-LA):

I get it. So that's another form of notice.

David Evan Harris:

Yeah, you could call it that, but that's direct watermarking. Then there's also another technique, which is a more indirect disclosure: hiding an invisible signal within a piece of text that's generated by AI, or within an image, where it's a hidden pattern of pixels.

Sen. John Kennedy (R-LA):

What good does that do in terms of a consumer?

David Evan Harris:

Well, the good thing about that is that it can be much more difficult to remove than simply a notice that says this was produced by AI at the top or bottom of a picture, which you could just crop out and remove very easily. So watermarks have value in that sense. There's also another technology that was discussed here in April, which is called digital fingerprinting. This technology has been used to keep track of child sexual abuse material and terrorist content circulating online. It creates what's called a unique identifier for images or audio files, and it could even be done with text or videos. That identifier is stored in a database, which can then associate a piece of content with a match and identify it as AI generated.
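For readers unfamiliar with the mechanics Mr. Harris describes, the sketch below illustrates the basic fingerprint-and-lookup idea: a generator registers an identifier for each piece of AI-generated content, and a platform later checks incoming content against that registry. The class, function names, and exact-match SHA-256 hashing here are illustrative assumptions only; production fingerprinting systems, such as those used against child sexual abuse material, rely on perceptual hashes designed to survive cropping and re-encoding.

```python
# Minimal illustrative sketch of database-backed content fingerprinting.
# Exact-match hashing stands in for the perceptual hashes real systems use.
import hashlib


def fingerprint(content: bytes) -> str:
    """Return a stable identifier for a piece of content."""
    return hashlib.sha256(content).hexdigest()


class FingerprintRegistry:
    """Toy database mapping fingerprints to provenance metadata."""

    def __init__(self):
        self._db = {}  # fingerprint -> metadata dict

    def register(self, content: bytes, metadata: dict) -> str:
        """Record a fingerprint at generation time, e.g. by the AI provider."""
        fp = fingerprint(content)
        self._db[fp] = metadata
        return fp

    def lookup(self, content: bytes):
        """Return provenance metadata if this content was registered, else None."""
        return self._db.get(fingerprint(content))


# Usage: the generator registers its output; a platform checks content later.
registry = FingerprintRegistry()
image_bytes = b"...bytes of an AI-generated image..."
registry.register(image_bytes, {"generator": "example-image-model", "ai_generated": True})
print(registry.lookup(image_bytes))  # -> {'generator': 'example-image-model', 'ai_generated': True}
```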

Sen. John Kennedy (R-LA):

All these are forms of notice.

David Evan Harris:

I would say some are more direct than others. I would call them forms of…

Sen. John Kennedy (R-LA):

I'm not trying to trick you. I'm trying to understand are there any companies that are giving the world proper notice right now? Anybody?

David Evan Harris:

So I'm happy to answer that. Sure. So according to their public statements…

Sen. John Kennedy (R-LA):

Yes or no, please, sir. I got other ground to cover.

David Evan Harris:

Google DeepMind. SynthID is the name of the technology. It looks good. I haven't tested it.

Sen. John Kennedy (R-LA):

But they're trying to give notice.

David Evan Harris:

Yeah.

Sen. John Kennedy (R-LA):

Okay. Licensing. What good is licensing going to do? I mean, you go to the government and say, okay, here, give me a license. What good is that going to do?

David Evan Harris:

Anybody? I'm happy to take that one, also briefly. As is the case in many professions, the law, medicine, you need a license to practice, and if you violate the code of that profession, you can no longer practice. And with technology, I see no reason why it should be different.

Sen. John Kennedy (R-LA):

So with the license, we should have a code of behavior?

David Evan Harris:

Yes, absolutely.

Sen. John Kennedy (R-LA):

Okay. And what should be in that code of behavior?

David Evan Harris:

Again, happy to offer this to any of my co-panelists here, but I believe that ethics are critical, that artificial intelligence systems should be designed in ways that they can't harm human beings, that they don't do damage to people, that they don't discriminate, and that they don't give people advice that brings harm to them or that's incorrect. Those are a few elements of the code.

Sen. John Kennedy (R-LA):

So basically, and I'm trying to understand this because I think you're all extraordinarily bright, that's obvious, but the American people don't understand what you're talking about. Okay. And frankly, many times, neither do I. So you license with a code of behavior that would require government to set up some sort of agency to enforce that code of behavior. Is that correct?

David Evan Harris:

This is an area where there are actually multiple interesting proposals. There's a law professor at Fordham Law School named Chinmayi Sharma, who has been writing about creating a malpractice regime for engineers who develop AI…

Sen. John Kennedy (R-LA):

Well, that's what I'm getting to. Next, liability. It is possible for the private sector to enforce through liability. Why are our tort laws and our contract laws right now not adequate?

David Evan Harris:

Oh, go ahead.

Helen Toner:

I can jump in on that. Yes. At present, it's very unclear how AI should be thought of and how different actors in what gets called the value chain should take responsibility for unintended outcomes. The best comparison that people turn to is issues with software, issues with cybersecurity, which I think many tort experts, and I'm not one, consider to be an area of liability that has not been especially successful in allocating responsibility for harms. And so that's not an especially promising precedent necessarily. And also, AI is very different from the kinds of software we're used to dealing with, and the problems and harms that AI can cause are different from cybersecurity issues. So I think liability, if there can be clear allocation of liability, is a way of flexibly setting incentives that depend on the specifics of any given case, and that could potentially be quite helpful if it were possible to provide more clarity.

Sen. John Kennedy (R-LA):

Thank you.

Sen. Josh Hawley (R-MO):

Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

Thank you very much, Senator Hawley. I think I'll start, given that Senator Kennedy was just asking questions, with the bill he and I have on journalism compensation. I'd ask this of you, Dr. Mitchell. We want to make sure that this is fair for people that are producing content and that they're getting paid. And one concern is that AI developers may train their models on content from journalists and other content creators only to regurgitate that content without attribution or compensation. And that's why last week I sent a letter on this, and in your testimony you say that AI firms should conduct due diligence on the foreseeable outcomes of a technology before it is deployed. Obviously Senator Kennedy and I and others believe that we should have some kind of agreement on what compensation is here, but could you talk about the harms that new AI features pose in markets like journalism, which is key to the First Amendment, that we have functioning journalism? Go ahead.

Margaret Mitchell:

Right. Yeah, thank you. And that's such an important point. So there's a couple things here. One is that machine learning models that are trained on previous news articles are only able to speak to the past. So one of the critical needs in journalism is to continually be up to date about what's actually happening, and we lose that when we start to just rely on sort of past statements regurgitated in a way that may be relevant for a current situation. So that's one sort of key issue. Another issue is that generative AI technology seems to be somewhat replacing journalist jobs; at least I personally know lots of journalists who are now out of a job, and they have managers who are pushing use of this technology more and more. That's problematic because the biases of the system that's generating the content are going to then proliferate throughout the different articles that are being written.

Ideally, we could have lots of diverse perspectives reporting on different material. It also sort of waters down, or makes a little bit more bland, the information that's put out there and available. And there's also this additional problem of automation bias, where even if you're working with generated content, you might be more likely to accept it because it's been generated by an automated system, even where it might not be saying something that you otherwise would be saying. So there are a lot of issues here where I think journalists in particular should be protected, relevant to language generation, and minimally have compensation when their work is included in the data.

Sen. Amy Klobuchar (D-MN):

Okay, thank you. We obviously have a number of bills. People are working on this committee, including Coons and Tillis, and Blackburn and I, on people's right to their own images and the like. But I thought I'd focus just on one thing today, and I have a number of bills with Senator Thune and others to make sure we have regulations that work and guardrails in place, just because I see the potential upside of this as well. And I think the only way we're going to get there is if we don't have a bunch of people ripped off in scams and if we don't have harm done to our democracy. So two ways that we look at this. We have two bills that are major. One is with Senator Hawley and myself on banning the deepfakes where it is actually someone pretending to be the candidate when they're not.

We've seen this happen on both sides of the aisle, and we've seen other states, including Texas, actually take action on this. We have, I think, 18, 19 states that have done this, including red states, blue states, purple states, where they have at least required labeling of ads. And we understand in our bill, Senator Hawley and myself, that you're not going to be able to cover satire and that kind of thing because of the Constitution, but the labeling requirements that Senator Murkowski and I have, we'd get at, not small-ball stuff like hair color, but at least so that people know if it is the candidate themselves or not. Because even with satire, I know people are confused, because they've shown me stuff and said, is this really them? Is that really Trump? Is that really Harris? Is that really? I mean, it's an unbelievable thing to do to voters when you're trying to get them to make a decision. So I just look at this as two-pronged, and I guess I'd go with you, Mr. Harris, because you talked about this earlier. Do you agree there's a serious risk posed by the use of AI in our elections? And can you tell us how AI has the potential to turbocharge election-related disinformation?

David Evan Harris:

Thank you so much, Senator Klobuchar, for the question and for your commendable efforts on legislation, producing all of the bills that you mentioned.

Sen. Amy Klobuchar (D-MN):

We have gotten those two bills through the rules committee, but Senator McConnell is having an issue with them and I'm just really concerned that we are not going to be able to advance them on the floor before it's too late, but continue on.

David Evan Harris:

Yes, absolutely. So this is a topic that I have written about extensively, in part in my work with the Brennan Center for Justice at the New York University School of Law; I produced a guide, along with co-authors, for how election officials can prepare for AI threats. And I think that it's very, very important that we be prepared for a variety of threats. Those threats could include deepfakes of election officials, of candidates like yourselves, and also deepfakes that represent election apparatus and indicate that there was tampering with physical objects associated with the election. The two bills that I mentioned earlier that I worked on in California, which are currently awaiting signature, fingers crossed, by Governor Newsom, Assembly Bill 2655 by Marc Berman and Assembly Bill 2839 by Gail Pellerin, would actually have serious consequences, so that if someone posted those types of election deepfakes, they could be removed; platforms would be required to remove them.

Sen. Amy Klobuchar (D-MN):

Yeah. Well, our bill allows for that. Many of them are actually supporting this bill because it makes it clear that the platforms would have to take it down, but it also puts liability on the people that put those up,

Potential liability, which is a way we handle so many areas, liability for whatever malfeasant group decided to fake that it's the candidate. Anyway, I hope that's the direction. I haven't seen those bills. I just know what they've done in other states, including, as I said, Texas, Mississippi, and many others. So I am just sad and very concerned that our federal government isn't doing anything and moving these bills, because we don't have anything really that protects federal elections, which is kind of a big deal. We'll try to use existing laws, and the state laws would remain under our bills, but it would just simply give us the protection in federal elections. So I guess Mitch McConnell has decided we're going to just roll the dice and see if it all is fine, but I'm already having people come up to me and say, is this really the person? I don't think this is the person. And we could stop this right now by putting these bills in place.

Sen. Josh Hawley (R-MO):

Senator Padilla.

Sen. Alex Padilla (D-CA):

Thank you. Thank you to all of you for your testimony today. As you're I'm sure aware, the US AI Safety Institute recently announced a deal with OpenAI and Anthropic who agreed to share their models with researchers before and after deployment for research, for testing and for evaluation. My first question is for Ms. Toner and for Dr. Mitchell. What do you think about this agreement?

Helen Toner:

Thank you, Senator Padilla. I haven't seen the details of the agreement. I think in principle it sounds excellent. I think it's a great step forward. I'm excited by the work that the AI Safety Institute is setting out to do, and I echo Mr. Harris's call for them to be as well-resourced as they possibly can be. I think the success of an arrangement like this will depend on a lot of details about timing and access and what kinds of assets are allowed to be accessed in what kinds of ways. So I'm optimistic about it and very pleased to see the agreement, but we'll have to see where it goes from here.

Sen. Alex Padilla (D-CA):

Dr. Mitchell?

Margaret Mitchell:

Thank you. Yeah, I echo Ms. Toner. I think that one of the reasons why this is really critical is that it can help keep companies accountable for the statements they make and the kinds of things that they might be misleading the public about with respect to how well the technology works. When you can have independent examinations, or reproducibility of the evaluation results, or more rigorous evaluation, this kind of thing, I think we're in a better situation for developing AI in a way that's very well informed. My one concern, and again, this is not knowing a lot about the particular agreement, would be if research access was abused. So for example, saying you're a researcher and then using the technology in a malicious way. I think this happened previously with Facebook. And so there's a lot of detail there about how you decide that there's a researcher that is okay to be using it, and there's a lot of potential issues there, I think. But at a high level, I do agree that this is an incredible thing to do, and we need to have, in general, these sorts of independent analyses that are able to hold tech in check.

Sen. Alex Padilla (D-CA):

So just as a follow-up, you both began to touch on it: how would you advise policymakers to view what success looks like from these agreements and arrangements? We'll start with Dr. Mitchell, then come back.

Margaret Mitchell:

Yeah, so one thing in particular that I've been really interested in is just evaluation, rigorous evaluation, and how you really quantify how well these systems work. I think that larger tech companies are sort of incentivized to report that the systems work well without really breaking down what that means, the context of use, when it can be used, when it can't. And one thing that sort of independent research might be able to do is actually do pretty rigorous analysis of how the systems might work in specific kinds of scenarios and provide just a lot more insight than would be provided if they had more of a profit incentive.

Helen Toner:

Just to add onto that, all of which I agree with: I think it'll be difficult for us from the outside to evaluate, because I expect a lot of this testing and evaluation to happen behind closed doors, which I think is reasonable. And so I think the task for Congress will be to be interfacing and working closely with NIST and the AI Safety Institute to hear from them how well this is working. Do they have, again, the sort of contractual arrangements that they need in order to carry out the kinds of testing that they think would be most valuable? Do they have the funding that they need? Do they have the staffing that they need? And so we've talked a little bit about the need for more technical talent in government. Personally, I think salaries are only one piece of the puzzle there.

I know multiple people who are really interested in offering their immense talents to the US government, and the hiring processes are just impossible. And another element that I know comes up for people who do end up getting into government is their ability to use halfway up-to-date technologies rather than extremely old devices, extremely old systems. So I do think that the talent issues are critical, and I think that the salaries are part of that, but I worry sometimes that we see the salaries as a totally intractable problem and then give up. And I think there are other ways to increase access to technical talent as well.

Sen. Alex Padilla (D-CA):

Okay, thank you. In my time remaining, I do want to touch on one other topic, and this is for you, Dr. Mitchell. In your testimony you observed that there's currently no well developed science that analyzes how inputs affect outputs once a model is trained. In simple terms, can you help all of us better understand why this is important for developers to consider?

Margaret Mitchell:

Sure. Yeah. So an analogy to consider might be with baking or cooking. So when you make a cake, ideally you have a sense of what the ingredients are and what the different ingredients might do. So if I add more egg, it'll be more puffy, these kinds of things. We don't have a similar thing with building models. And so if you think of the data as essentially ingredients and the training as cooking and then the model is sort of the output, the thing that you've cooked, we're missing the sort of approach where we have recipes, we're missing this deep understanding of what all the pieces are that result in this output thing we might want to eat or not.

Sen. Alex Padilla (D-CA):

So we've gone from the animal kingdom to baking, but in a very effective way. Thank you all. Thank you, Mr. Chair.

Sen. Richard Blumenthal (D-CT):

Thank you very much, Senator Padilla. All of the witnesses that we have before us today have left AI companies based on concerns about commitment to safety. You're not alone, obviously. OpenAI in particular has experienced a number of high-profile departures, including the head of its superalignment team, Jan Leike, who left to join Anthropic. And upon departing, he wrote on X: "I have been disagreeing with OpenAI leadership about the company's core priorities for some time until we finally reached a breaking point." He also wrote that he believed "much more of our bandwidth should be spent getting ready for the next generation of models, on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, social impact, and related topics. These problems are quite hard to get right, and I'm concerned that we aren't on a trajectory to get there." Let me ask all of you, based on your firsthand experiences, would you agree essentially with those points? Let me begin with you, Mr. Saunders, and go to the others if you have responses.

William Saunders:

Yeah, thank you, Senator. Dr. Leike was my manager for a lot of my time at OpenAI, and I really respected his opinions and judgment. And I think what he was talking about were a number of issues where OpenAI is not ready to deal with models that have some significant catastrophic risks, such as high-risk models under the preparedness framework, things that could actually start to assist novices in creating biological weapons, or systems that could start conducting unprecedented kinds of cyberattacks. For those kinds of systems, first, we're going to need to nail security, so that we make sure that those systems aren't stolen, before we figure out what they can do, and used by people to cause harm. Then we're going to need to figure out how you actually deploy a system that under some circumstances could help someone construct a biological weapon.

But lots of people want to use it for a bunch of other things that are good. So every AI system today is vulnerable to something called jailbreaking, where people can come up with some way to convince the system to provide advice and assistance on anything they want, no matter what the companies have tried to do so far. And so we're going to need to have solutions to hard problems like these. And then we're going to need to have some way to deal with models that, again, might be smarter than the people supervising them and might start to autonomously cause certain kinds of risks. And so I think he was speaking to, again, a number of areas where the company was not being rigorous with the systems that we currently have, which, again, can amplify some kinds of problems. But once we reach the point where catastrophic risk is possible, we're really going to need to have our act together.

Sen. Richard Blumenthal (D-CT):

Do others have comments in response? Ms. Mitchell?

Margaret Mitchell:

Yeah, I agree with Jan's statement there, and I think this also echoes some of what Ms. Toner had said about responsible AI type people being disempowered in tech companies. And so while on the one hand it's helpful for tech companies to have responsible AI trust and safety teams so that they can tell senators that they have them, on the other hand, when it comes to making critical decisions about how the technology gets developed, they're usually left out of the room. So this is a serious issue.

Sen. Richard Blumenthal (D-CT):

I'm going to interrupt my second round to yield to Senator Blackburn who has rejoined us.

Sen. Marsha Blackburn (R-TN):

Thank you so much. Yes, there are so many hearings today, it has us running back and forth. Mr. Saunders, to the point you were just making, one of the things we've said repeatedly is we have to have an online privacy bill that is federally preemptive before we start down the AI path, because people want to be able to firewall their information and keep it out of the public domain. And last week, and Ms. Toner, I want to come to you on this, Meta announced it was going to use public content and things from Facebook and Instagram in the UK, and they were going to use this to train their generative AI models. And Meta said their goal was to reflect, let's see, British culture, history, and idiom, and that UK companies and institutions will be able to utilize the latest technology. And I think we all know what that means. I'd love to hear what concerns you have over that announcement and what limits we should place on companies like Meta that are using this data that is shared on their platforms to train their generative AI models.

Helen Toner:

Thank you, Senator Blackburn. My understanding of that announcement, which I should admit I have seen briefly but not dug into in depth, is that this is actually a practice that Meta was already very much going ahead with in the United States due to the lack of privacy protections here, and that the announcement last week was, I believe, that they are now moving ahead with it in the UK. Indeed, I believe, and other witnesses may know better, that they had held off on initiating that process in the UK due to privacy protections that do exist there. So to me, this is actually an example of perhaps success on the UK's part, if Meta felt the need to be a little more thoughtful, a little more deliberate, a little more selective about the ways in which they were using British users' data because of the legal protections that existed there, which, as you rightly point out, do not exist in the United States. I don't know if others want to add to that.

David Evan Harris:

I'm happy to add on that. There was actually something on this same topic that really got a lot of attention at the beginning of the summer. A lot of users of Facebook and Instagram found that they could opt out of the process of having their public data used for training of AI systems, and then they posted instructions. I saw this on both TikTok and on LinkedIn. Users had posted instructions on how to go to the part of your Facebook or Instagram settings where you can opt out of having your data used. I tried to do it in Facebook, I tried to do it in Instagram, and I couldn't find the button. And then I posted in the comments there and said, I can't find the button. And it turned out everyone with their IP address in the United States couldn't find the button, because this was a feature that I believe was only offered to people in the EU and perhaps in the UK.

I heard different stories about which parts of the world, but this idea that we as Americans are second-class citizens, that we don't even have the right that Europeans or people in the UK have to object to photos of ourselves, of our families, of our children being used to train AI systems, AI systems that we don't even have confidence in how they work. We don't know if they will accidentally release personal information about us in the future, or make images that look just like us. So I applaud you for raising this issue, and I'm excited to see bills like APRA make progress, so that we have the foundations of a legal system that can address that issue.

Sen. Marsha Blackburn (R-TN):

Well, we think having an online privacy bill, a federally preemptive privacy bill, that gets signed into law is something that is going to be necessary. And as Senator Blumenthal said in his opening remarks, and I'm paraphrasing them now, basically big tech has proven to us they are not going to take the steps that are necessary to protect the information of people. I do want to ask you something else. When we are talking about AIs that act in the world without humans, that brings up the difference between intelligence and agency, between systems that think and systems that can act. So Ms. Toner, let me come to you on this. When you look at this difference between intelligence and agency, do you see these as different concepts? Do they carry different threats? Should these be approached separately, differently? Does this play into AGI? Tell me your thoughts on that.

Helen Toner:

Thank you, Senator. It's a good question. It's a timely question. We actually have a paper coming out in a couple of weeks introducing exactly this issue for policymakers. And what I would say is I think this idea of agency, or agents that can take actions autonomously, is not at all separate. It is not the same thing as intelligence, but we are already seeing the ability of companies to take language models, chatbots like ChatGPT and others, add a little bit of additional software, add a little bit of additional development time, and convert them into systems that can go out and take autonomous action. Right now, the state of these systems is pretty basic, but certainly, talking to researchers and engineers in the space, they're very optimistic. They're very excited about the prospects of this category of system. And it's something that is very actively under development at, as far as I'm aware, all of the top AI companies, with the goal being initially perhaps something like a personal assistant that could help you book a flight or schedule a meeting.

But ultimately, Mustafa Suleyman, formerly at Google, now at Microsoft, has talked about, for example, could you have an AI that you give, I forget the number, something like a hundred thousand dollars, and it comes back to you a little while later with a million dollars that it's made because it's run some business or done something more sophisticated. At the limit, certainly, this is very related to ideas around AGI and advanced AI more generally. Sort of the founding idea, or founding excitement, of the field of AI, I think for many people, has been the idea of systems that can take very complicated actions and pursue very complicated goals in the real world. And I see a lot of, again, excitement in the field that we might be on the path in that direction.

Sen. Marsha Blackburn (R-TN):

Well, in Tennessee, with our healthcare industry, our logistics industry, our advanced manufacturing, we see great promise. With my entertainers and singers and songwriters and musicians and authors, we don't want this to become a way to steal their name, image, likeness, and voice. And that's why we have the NO FAKES Act, and I know Senator Klobuchar, who has joined me on that bill, talked with you all about that earlier. I've run way over and you've been generous, Mr. Chairman. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you very much. Senator Blackburn, a number of you have mentioned whistleblowers and the need for protecting them. Maybe anyone who would like could expand on that point. You are all insiders who have left companies or disassociated yourself with them in one way or another. And I'd be interested in your thoughts, Mr. Saunders.

William Saunders:

Yeah, thank you, Senator. So when I resigned from OpenAI, I found that they gave every departing employee a restrictive non-disparagement agreement, and you would lose all the equity you had in the company if you didn't sign this agreement, where you had to effectively not criticize the company and not tell anybody that you'd signed this agreement. And I think this really opened my eyes to the kind of legal situation that employees face if they want to talk about problems at the company. And I think there are a number of important things that employees want in this situation. So there's knowing who you can talk to; it's very unclear what parts of the government would have expertise in specific kinds of issues, and you want to know that you're going to talk to somebody who understands the issues that you have and that has some ability to act on them. And then you also want legal protections. And this is where I think it's important to define protections that don't just apply when there's a suspected violation of law, but when there's a suspected harm or risk imposed on society. And so that's why I think legislation needs to include establishing whistleblower points of contact and these protections.

Sen. Richard Blumenthal (D-CT):

Other thoughts, Ms. Mitchell?

Margaret Mitchell:

Yeah, thank you. So I think Mr. Saunders is making one of the really important points here, which is that there just isn't a lot of knowledge about when and how to whistleblow. So as part of my ethics studies, I tried to familiarize myself with the situations where you would whistleblow versus where this would be breaking your NDA, that sort of thing. This is something that I had to learn myself, and ideally I would've had some sort of resource. Ideally, there's some agency you could call and say, hey, theoretically, if I think there's an issue, now what do I do? But essentially, if you're considering whistleblowing, it's you and you alone against a company making a ton of money, with lawyers who are set up to harm you if you make the smallest move incorrectly, whatever it is. And so I think that it needs to be very clear to people working internally when and how to whistleblow, and it needs to be very clear at the highest levels of the company that this is supported. I could even imagine having orientations where you're required to provide information on whistleblowing, that kind of thing. But currently there's no information internally, and you're very much on your own in a situation where you might lose your job and then not have the money to pay for a lawyer to fight it.

Sen. Richard Blumenthal (D-CT):

Ms. Toner.

Helen Toner:

Thank you. Just to put a finer point on something that I think both Mr. Saunders and Dr. Mitchell are describing, I think core to the problem here is that the lack of regulation on tech means that many of these concerning practices are not illegal. And so with existing whistleblower protections, it's very unclear if they apply at all. And if you're a potential whistleblower sitting inside one of these companies, you don't really want to go out on a limb and take a guess at, well, is this enough of a financial issue that the SEC would cover me? Or who do I have to go talk to? If it's something that's kind of novel, related to AI development or other technology development, where you have serious concerns but there's not a clear statute on the books saying the company is breaking the law, then your options are limited. So I think the need for whistleblower protections goes hand in hand with the lack of other rules.

Sen. Richard Blumenthal (D-CT):

So I think the point that you've just made is really important that the failure to develop safety and control features in a product is not illegal perhaps, and therefore may not be covered by a strict reading of whistleblower laws, even if it is a practice which is unethical and harmful. I think that's a very important point. Let me sort of go to the other side of that question, which is the incentives, whether promotion or compensation, bonuses and other kinds of incentives that are offered by the company, do they align with safety? For example, are employees rewarded for developing better safety features or is it more rewards for racing to the market, which is the dynamic that we've discussed today, Mr. Harris or Mr. Saunders? Mr. Harris?

David Evan Harris:

Yeah, I could share an anecdote about that, Senator Blumenthal. Early in my tenure at Facebook, I was introduced to someone who was a friend of a friend, and I told them that I had joined the civic integrity team, and this person said, oh, that work sounds so interesting, but I would just never join teams like that. You can never show impact. You can never have anything to say on your performance reviews about what you did, because the only thing you achieved was hopefully to not have something happen. And I heard different versions of that throughout my time in the tech industry, not just from inside of the companies where I worked; I heard that across the board. It's very hard to make progress in your career when you work in an area that doesn't have clear ways to show progress. I also heard of people being asked whether they could demonstrate that work that they did reduced the number of public relations emergencies for the company. How can you possibly demonstrate that you reduced the number of public relations emergencies for a company? So again, these are structural problems with doing that work, and I applaud you for bringing up that question.

Sen. Richard Blumenthal (D-CT):

Thank you, Mr. Saunders.

William Saunders:

So I think there are a couple of significant, I don't know, goals that OpenAI as an organization has. One of these is called maintaining research velocity, and so on things like security, OpenAI is reluctant to do things that will require additional work from researchers or might slow them down, like using a more secure system. And then on more of the testing side, there are often release dates that are set, where the company starts talking to a bunch of customers and saying, oh yes, we're going to ship by this date, and then the amount of safety work is determined after picking how quickly you want to ship, and is subject to office politics and these kinds of things. And this is because companies just face an enormous incentive to be seen as the company that's leading in the AI space. And so this is why there needs to be some kind of regulation, because otherwise doing the right thing goes against the grain.

Sen. Richard Blumenthal (D-CT):

I'll give Ms. Toner and Ms. Mitchell a chance to respond as well if you have any responses. Yes.

Margaret Mitchell:

Yeah, thank you. I echo my co-witnesses. I've previously said that it's difficult to get promoted if you focus on safety, this kind of thing, because you're not promoted for the bad headlines that never exist, and so you're trying to prove something that never actually happened. And this also speaks to the role of foresight and why foresight isn't incentivized, because if you take these extra steps to make sure that something bad doesn't happen, you can't prove that it could have happened. So it's really quite difficult to focus on safety and ethics and these kinds of things. In general, your promotion velocity is much less compared to your peers, which also means that you are less likely to become a leader at the company and so further set norms. So focusing on things like safety and ethics is a way to sort of remain at the lower levels of a company and not be able to fundamentally shape it for the better.

Sen. Richard Blumenthal (D-CT):

We've seen examples already of our foreign adversaries using AI to meddle in our democracy. I think last month OpenAI revealed that an Iranian government-linked group had used ChatGPT to create content for social media and blogs attempting to sow division and push Iran's agenda in the United States. There have been reports that China and Russia have also used AI tools deceptively to interfere in our democracy. I mentioned them earlier; I think the threat to our elections is real, and we're unprepared. Mr. Harris, you worked on a California law that seeks to safeguard our democracy. Are those the kinds of protections that you think would be effective at the federal level? And is there more that you would add to the California law?

David Evan Harris:

Thank you so much, Senator. I believe that in California it's difficult to make a lot of the types of laws that could be made at the federal level. There are a number of reasons for that. One is simply that the state agency infrastructure is dramatically smaller than the federal agency infrastructure, and in California, in a situation of budget deficit, it's very hard right now to pass any legislation that has any significant cost. That, I believe, is one of the biggest barriers to passing the legislation that we need. And I think that you have in front of you, in your framework, the type of legislation that California would not be able to achieve; things like licensing and registration and liability would be very costly to enforce at the scale of a state. And to be honest, of all the states in the country, it would maybe cost California more than many others, simply because of the location of the technology industry there. And there might be political conditions that make it harder in California to pass that type of legislation.

Sen. Richard Blumenthal (D-CT):

Again, I'll entertain any points about the California law that any of you would like to make and if not, let me follow up. Mr. Harris, in your experience there, what was your takeaway from the tech companies? Were they supportive, helpful? How would you characterize their reaction?

David Evan Harris:

Thank you so much for the question. I believe that you need to look at two different phenomena. One is the outward presentation of the tech companies about legislation, and the other is what's happening behind closed doors. I made a reference in my opening statement to the idea of shalls and mays. I have been surprised in my work in the California legislature by the way in which tech industry lobbyists, sometimes hiding behind industry groups, sometimes from individual named companies, are able to arrive at legislators' doors with requests to remove shalls and replace them with mays, to take legislative language that was very well intentioned and, at the 11th hour, turn it into something that is meaningless. It concerns me greatly what I've seen in California. There are political realities such that if draft legislation that comes from a civil society group, like the one that I have been working for, the California Initiative for Technology and Democracy, is too bold, the organizational sponsors of that legislation will be told, this isn't going to work and you're going to have to weaken it. And sometimes that comes in many rounds of weakening, and it can be very painful to watch.

Sen. Richard Blumenthal (D-CT):

You mentioned a study, Mr. Harris, which we will put in the record along with your testimony. I want to mention two others. A Stanford Internet Observatory report this past December found that the training data sets used by AI companies are filled with thousands of known images of child sexual abuse; possessing that kind of material is a crime. And as you wrote, Dr. Mitchell, in your testimony, training AI with these kinds of abusive materials means there's a good chance that the model will generate new abuse material. The second study is a survey released last month by the anti-exploitation group Thorn. One in 10 preteens and teens reported that their friends or classmates had used AI tools to generate sexual images of other kids: one in 10 preteens and teens saying their peers had used these tools to generate sexual images of their peers. So these new AI tools, which are trained on child sexual exploitation material, are fostering sexual harassment of young people. And the apps to create that abuse material are easy and free to find on Google and Apple app stores right now, apps for possessing and creating child abuse material. It's a clear example of the problems with this race to the market, which can easily turn into a race to the bottom. There must be steps that generative AI companies can take to make sure their tools aren't being trained on illegal child exploitation material and aren't creating these kinds of abusive images of anyone, let alone kids. Would you agree? And what kinds of steps should these companies be taking?

Margaret Mitchell:

Yes, thanks for the question. So this speaks to a few different issues that I think I also hit on in my written testimony. One is just that data analysis is really not a norm within AI culture generally, and so we don't really understand, when we're training a model, what might be in the data. Part of my work has actually tried to invent methods to probe datasets in order to figure out possibly problematic content, and it falls on deaf ears; there really isn't an interest in it. So I think that work that can further incentivize this, I think NIST and the US AI Safety Institute probably have a very clear role to play here, in helping us understand what the contents of the data are and further providing mechanisms for people to actually do this in a lawful way in order to really understand the contents.

I'll also add, and this is just sort of insider knowledge that might be useful for you to know, a lot of the data that ends up being used in machine learning systems is collected via an organization called Common Crawl. I do wonder if there might be room for the government to help Common Crawl analyze the data before it's released at all, which would stop the further proliferation right where it starts. And the last point I want to make is that this is also where watermarking and provenance information is really key, including invisible watermarking, which is open research, but something that would be very, very helpful in order to trace back where this came from.

David Evan Harris:

If I could add Senator.

Sen. Richard Blumenthal (D-CT):

Absolutely.

David Evan Harris:

So yes, I agree with the statements that you made, and I think there are a few lessons that we should learn from this current situation. One is, again, that voluntary self-regulation does not work. There is another public statement about this topic that was signed by a number of companies, which signed on to what Thorn and All Tech Is Human, another tech nonprofit, called the Safety by Design principles. And amongst the signers to that were Stability AI, I believe Hugging Face, and another company called Civitai, and they signed this and made agreements about doing things to prevent the use of child sexual abuse material in AI models. Now, the problem is that that report that you referred to from the Stanford Internet Observatory was released in December, but until just a few weeks ago, there had been no moves to actually take down the offending AI models in any serious way.

A few weeks ago, when I was on the eve of publishing a paper on this topic with IEEE Spectrum, a technology publication, I reached out to those companies, and, perhaps in response to my outreach, perhaps not, one of the companies involved, Runway ML, the company that people point to as being responsible for training Stable Diffusion 1.5, the specific model trained on thousands of images of CSAM, actually removed its Hugging Face repository. Now, the problem with removing a repository, and this was an open source AI model, is that there were many other versions of that model that are still hosted on Hugging Face. And my co-panelist, Dr. Mitchell, is appearing here in her individual capacity, and I have immense respect for her research and do not expect her to feel the need to speak up on behalf of or in response to my statements here.

But Hugging Face has chosen not to take down other versions of Stable Diffusion 1.5. Another company called Civitai has a website with many, many models derived from Stable Diffusion 1.5, perhaps the original model itself or something close to it. They also choose not to take it down. And these are likely the open source models behind what the report that I shared with you from Graphika, which will also go into the record, describes: that a big reason for the rise of these undressing or nudifying apps is the advancement of open source AI image generation tools. And until we can get the companies that are hosting these tools to take just the minimum level of responsibility, I mean, I've done content moderation before, I've worked on this issue. If something is called Stable Diffusion 1.5, and Stanford says that was trained on child sexual abuse material, why let someone upload something called Stable Diffusion 1.5? Today, after it was taken down, more people uploaded dozens more versions of it.

Sen. Richard Blumenthal (D-CT):

Great. Any other comments on that question, Dr. Mitchell?

Margaret Mitchell:

Yeah, I mean, I'd love to follow up with you if that's the case; I wasn't aware of that, so that's very important information for me to know. This is a serious concern, and I think this is part of why we need the government's help to be able to even search for these things. So there are techniques you can use to try and understand if there might be an issue, and I won't detail them here because of potential malicious actors watching; I can follow up with you privately. But trying to specifically identify the content, I think, breaks several laws. So you end up in a weird sort of bind where you have to try and see if it's there, but you can't really see if it's there. And then it ends up with potentially censoring some content that really shouldn't be censored. So it is really, really difficult, and we could use the government's help on this directly.

Sen. Richard Blumenthal (D-CT):

I'm not suggesting that it's a simple issue, but there is a need for action. And I think all of you have made the point that we can't rely on the companies alone to take action because of the incentives that they have and that they communicate to their employees, which are: get to the market as soon as possible, and worry about it in hindsight afterward. Someone mentioned open source. Is that a consideration for us? What should we think about open source in this context of protecting the public? Dr. Mitchell?

Margaret Mitchell:

Thank you. So this is something that I actually work on in my professional capacity. There's this term, open source, that I think is used a lot without people fully understanding what it means. It's also used, I think, in situations where it doesn't necessarily apply. The thing to recognize is that there's a gradient of openness: you can have things that are more open or less open, depending on foreseeable risks. So for example, one of the things I've been involved with at Hugging Face is implementing gating, such that you can't access a model, you can't access a dataset, unless you've provided personal information, unless you've taken a training and can show the certificate, that kind of thing. And so there's an entire spectrum from fully closed to fully open, where I think we can do a lot of really good work depending on the specific foreseeable problematic uses of different models and datasets.
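
From the consumer side, gated access of the kind described here looks roughly like the following sketch, which assumes the huggingface_hub client library and uses a hypothetical repository id; the download only succeeds once the requester has completed the gate.

```python
from huggingface_hub import snapshot_download
from huggingface_hub.utils import GatedRepoError

def fetch_gated_model(repo_id: str, token: str) -> str:
    """Download a gated repository; succeeds only if the account behind `token`
    has completed the gate (shared the requested information, accepted the terms)."""
    try:
        return snapshot_download(repo_id=repo_id, token=token)
    except GatedRepoError as err:
        # The Hub refuses the request until the gate is completed.
        raise SystemExit(f"Access to {repo_id} is gated; request access first.") from err

# Usage (hypothetical identifiers):
# local_path = fetch_gated_model("example-org/example-gated-model", token="hf_xxx")
```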

David Evan Harris:

If I could. Sure. So I've written extensively on this topic of open source, and I think there are a number of important points. The first one that I would like to caution you about in advance is that in the European Union, in the lead-up to the signing and the finalization of the EU AI Act, a number of companies pushed very, very hard to get full exemptions for open source AI from the provisions of the EU AI Act. I would expect, if I were in your position, that you will receive these types of requests here, if you have not already. Exempting open source AI from AI regulation is not appropriate. That's because of what happens once you release an open AI system. Some people now, even including in the federal government, are using a different term, open weights, which is perhaps a more appropriate term. The executive order on AI uses the term dual-use foundation models with widely available model weights.

There are a lot of terminology issues here. I'll go with open weights, because I think that's the clearest one right now. Also, unsecured is a way to think about it. So when you release open-weights, unsecured models, you never get them back. If they're trained on thousands of images of CSAM and they can produce CSAM, child sexual abuse material, you will never retrieve that; it will live on. You can do things to make it hide out only in the hidden corners of the internet, you can make it less available, and that's the case with all of these open-weights or unsecured models. And unfortunately, we've seen a number of companies take that strategy without doing significant safety testing on models. And that's why I'm very concerned about this, and I believe there should be no exemptions to any AI laws for open-weights or unsecured AI systems.

Sen. Richard Blumenthal (D-CT):

Just a few questions in closing. One of the arguments that we hear in this realm, as well as many others, is that regulation will inhibit innovation, or that regulation will impede our competition with our adversaries, China being one. We've had some questions about China. Should we be concerned about impeding innovation or putting companies in this country at a disadvantage with efforts abroad? I think that's a question I'd like to ask all of you. Ms. Toner?

Helen Toner:

Certainly happy to begin. I think it's a question that is worth your consideration and should certainly factor heavily into how regulation is designed and how Congress makes decisions about what legislation to pass. But I think it is used as a catchall rejection of any kind of regulation, and I think that that is mistaken. I think it's mistaken for a few reasons, primary among them that there are many, many kinds of regulation, and some of them inhibit innovation quite a lot; certainly poorly designed, heavy-handed regulation can do that, but plenty of kinds of regulation do the opposite. So one example: my understanding is that in maybe the seventies or eighties, or maybe more recently, there were rule changes in the financial sector around who was responsible for credit card fraud that made the banks have to foot the bill if cards were used fraudulently.

And one significant result of that regulation was significant innovation in credit card security and different mechanisms to reduce the chances of that fraud. So that's a regulation that both protected consumers and also directly spurred innovation. A mechanism that to me seems very important for AI, that I mentioned before and can elaborate on a little, is this idea of consumer trust, or consumer willingness to use this technology and rely on this technology, and not just consumers, also businesses, enterprise use of AI. Right now we're in a state where people don't actually trust this technology. They don't have the experience that it's particularly reliable or that it necessarily has results that fall within their expectations, and that makes them less willing to use it, which means the companies are earning less revenue, which means that they're less able to reinvest that revenue in research. Food safety is another example; we're very fortunate in this country that we have to think almost not at all about the safety of our food going from day to day, because we have such good food safety standards. And I think AI is very, very far from that point. So in my mind, almost all of the policy recommendations that have been made today are the kinds of recommendations that really would not inhibit innovation at all, and in some cases might actively promote it.

Sen. Richard Blumenthal (D-CT):

Mr. Saunders.

William Saunders:

So yeah, I think this is an important question to consider in any regulation, and I think there are approaches. I think the thing that is going to really harm innovation is if there is a large-scale AI disaster; in the way that nuclear power was cut back on after Chernobyl, that would really lose trust in the industry. And I think another way to sort of lose the race for everyone, frankly, is if we develop cutting-edge AI systems and then they're stolen by a foreign government before we understand that they're dangerous, and now you end up in a situation where both America and another country are sort of in a standoff, where they're both afraid of each other using the systems. This is a way that we would all lose. So I don't think you can have a lead without being really careful about security.

And then I think the frameworks and ideas that are emerging are sort of built around testing systems to figure out when they reach levels where they have certain dangerous capabilities, and then adding requirements only when that point is reached. And if that testing is done rigorously, then you can respond before the horse leaves the barn. And companies are starting to agree to some sort of voluntary frameworks that have this character, but there aren't laws in place that would enforce that. And so it will come to some very difficult decisions where, on one hand, there is some system that has maybe billions of dollars put into it, and on the other hand, there are unresolved safety concerns or testing that is rushed or something. And so I do think we need to have standards around that, but I also do believe we can figure out how to make this technology safe.

And I think the worst problems we're going to have are if the companies don't take the time to prepare and then wait until the last minute, when it's not clear what the rules and restrictions will be. And so then, again, they end up in a situation where they really want to ship something that really isn't ready. And so it's much better if the rules and guidance are laid out in advance, and companies know that if they build something that is capable of causing catastrophe, here are the requirements they'll have to meet. That's what I think is the path for legislation to both protect the public and preserve innovation. Thank you.

David Evan Harris:

Thank you. Thank you so much, Senator, for the question. In anticipation of this line of questioning, I actually included a sampling of quotes from AI company CEOs asking for more regulation. There are so many; it appears that almost all of the CEOs of all the major AI companies have called for this. Now, I do want to caution that I see some kind of disconnect. I suspect that the disconnect is that the lobbyists that work for these tech companies are perhaps not in direct communication with the CEOs. Maybe something gets lost in communication, maybe they're actually automated themselves and they're still on an old version of their operating systems, but they come in with a goal to kill or hobble every piece of legislation about AI. It seems like it's autopilot, because I think if they thought more deeply, talked to the CEOs who said those quotes that I've given you in the written testimony, they would be more thoughtful about collaborating on these efforts to regulate.

But I would also submit to you that, in addition to that evidence that CEOs are calling for it, and if they're calling for it, that probably means they don't think it's in all cases going to impede innovation, it's also advantageous to tech companies to have clear rules of the road upon which they are racing. Otherwise they're going to end up parked in the parking lot of the courthouse, depending on regulations that stem from decades-old laws that are brought to bear on them, or tort law, which will just lead to more cost and confusion, or states with many different laws, and that itself is more costly. So I do believe that clear regulation is advantageous to innovation and to the tech companies, and we should take their CEOs' word for it.

Sen. Richard Blumenthal (D-CT):

Dr. Mitchell?

Margaret Mitchell:

Yes, thank you so much. So I think I covered this a bit in my written testimony, but I do see a way, perhaps naively, that regulation might actually be able to help innovation. So I'll give you an example: privacy. It was previously thought that you couldn't ensure privacy with mathematical guarantees, but with a goal set for ensuring privacy, the research community was able to develop ways where you could mathematically guarantee privacy. And so I think the government might have a role to play in setting goals that must be demonstrated, statistical fairness, privacy, security, safety, these kinds of things, in order to incentivize further development on how to do that in the best way. Again, this might be naive; I know there's the argument that regulation stifles innovation, but I've generally seen ways where it might be helpful. I do see how regulation might be problematic in the case where it tries to change the behaviors of development. So for example, saying you can't infer gender might mean that you can't create a model that you can certify does not discriminate by gender, that kind of thing. So by getting into the details of how the tech is developed, there might be serious issues. But I think by setting high-level goals of what must be demonstrated, regulation can be very useful.
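
One example of such a mathematical guarantee is differential privacy. The sketch below applies the standard Laplace mechanism to a simple count, so that adding or removing any one person's data changes the output distribution by at most a factor of e^epsilon; the numbers are illustrative only.

```python
import numpy as np

def private_count(values: list, epsilon: float) -> float:
    """Release a count with an epsilon-differential-privacy guarantee
    via the Laplace mechanism: noise scaled to sensitivity / epsilon."""
    true_count = float(sum(values))
    sensitivity = 1.0  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Stronger privacy (smaller epsilon) means noisier answers.
responses = [True] * 120 + [False] * 380
print(private_count(responses, epsilon=0.5))
```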

Sen. Richard Blumenthal (D-CT):

I think these points are very, very important. If there's a disaster in AI, it's equivalent to a food company not taking the issue of food safety seriously and having a major outbreak of listeria food poisoning. It has to recall products, its credibility is undermined, it's a major marketing disaster. And this kind of AI catastrophe would impact not only that company, but possibly others as well. So there's a very rational and powerful argument for regulation that makes sure that everybody is abiding by common-sense, rational standards that are good for the industry. And there's the idea of some certainty. One after another, for 20, 30 years, businesses have told me, give us the rules. We may disagree with you, but just tell us the rules and don't change them from year to year. We need some stability. We can work with whatever rules you give us, but we need to know what they are, and they need to be clear and they need to be stable.

And frankly, I take them at their word, because I think, again, in their rational interest, that ought to be their attitude, unless we do something crazy, and nobody here wants to do something crazy, because we recognize the immense good that can come from AI. And this goes back to the point that you made, Ms. Toner, that it would be easy to say, let's ban AI, but it has enormous potential for good. Take the medical area: the potential for research as well as for treatment and diagnosis is immense. It's a little bit like, and forgive me for being oversimplistic, but I've been doing some reading on COVID, the origins of COVID, the debate about whether it started in a lab. Was there a release? Was there research that tried to develop a more potent, evil virus, or did it come from the wild? At this point, we don't know; at least there are equally, or I shouldn't say equally, there are theories that have credibility that argue for all of it. But we do know that there are dangers in gain-of-function research, trying to develop more lethal viruses. Now, the argument for that research is that then we'll know how to deal with them. The argument against it is, why would you develop a virus that is more destructive and damaging?

The argument is not exactly the same. It's analogous, not the same. But with every advance in technology, there are also potential downsides. And the point here is to deal with the downsides and not rely only on the creators to oversee what they're doing, but impose some rules of the road that protect them and the industry as well as the public. And I think you have been enormously enlightening to us today. I hope we can continue this conversation, because our goal is to achieve the promise with what you have called, Ms. Toner, a light touch. We won't all agree on what a light touch is, but if we are honest with each other, I think we can develop some standards as well as enforcement mechanisms that make sure that we impose accountability as well as criteria for judging whether or not a particular form of AI is safe and effective.

Just as we would impose that standard on new drug development: safety and efficacy are the standard for the FDA. I am not going to ask you the $64 billion question: when will we have generative AI that is as smart as people? Unless you want to answer that question, I'll open the floor to you if you would like to give us your prediction, because that's a horse-out-of-the-barn issue in a sense. Once it's there and we don't have standards, it's going to be put on the market. So maybe you want to address it. You don't have to. Mr. Saunders.

William Saunders:

Yeah, I think when I thought about this, there was at least a 10% chance of something that could be catastrophically dangerous within about three years. And I think a lot of people inside of OpenAI also would talk about similar things. And then I think, without knowing the exact details, it's probably going to be longer. I think that I did not feel comfortable continuing to work for an organization that wasn't going to take that seriously and do as much work as possible to deal with that possibility. And I think we should figure out regulation to prepare for that, because I think, again, if it's not three years, it's going to be five years or 10 years. The stuff is coming down the road, and we need to have some guardrails in place.

David Evan Harris:

I'm happy to answer that one.

Sen. Richard Blumenthal (D-CT):

Mr. Harris.

David Evan Harris:

Thank you so much. I think the good news I have for you is that the answer to that question might not be that important for you. The reason I say that is that my background working on AI is in other areas, not about artificial general intelligence, but about AI and bias, AI and deepfakes and elections, and more recently AI and harm to children. That said, I'm lucky to have an excellent colleague at UC Berkeley, Stuart Russell, who's also appeared before this committee. Stuart has invited me to a couple of the conferences of his lab, where I am in a room surrounded by hundreds of people with computer science degrees, PhDs, who've dedicated their lives and their careers to stopping artificial intelligence from eliminating our species. It was a little bit difficult for me, trained as a sociologist, to decide where I came down on this issue.

But as the conversation moved to policy and what policies we need, what I found was that there was almost no disagreement between me, thinking about the problems that I've focused on in my work, and them: the same issues, liability, licensing, registration. Many of them are also actually quite concerned about deepfakes and deception, so provenance. Those key solutions are the solutions that we need to address the issues of AI bias, discrimination, disinformation, interference in elections, harm to children, and the specter of AI, artificial general intelligence or superintelligent AI, being abused by bad actors to harm us in catastrophic ways.

Sen. Richard Blumenthal (D-CT):

Dr. Mitchell.

Margaret Mitchell:

Thanks. Appreciate it. So I think it's useful for me to say, as someone who works on AI, who has done a lot of work on rigorous evaluation, this sort of thing, I don't know what it means for an AI system to have human-level intelligence. What I do understand, that I think is related, is having a system that can do well on a lot of different tasks that are also important for people, that people also do, those kinds of comparisons. I think we make a mistake when we group everything together and say, this is basically like humans. And I want to say that I think the enterprise of AGI might be inherently problematic when it's not focusing on what the specific tasks are that the systems should be solving. So while there's been an interest in AGI so that lots of different things might be done, it might be beneficial to think about what those specific things are and create task-built models for those specific tasks. And that way we have a lot more control over what's happening. We can do much more rigorous analysis of inputs and outputs. We keep things closed within the specific domains where a system is meant to be helpful.

Sen. Richard Blumenthal (D-CT):

I think, Mr. Harris, you're absolutely right. It isn't that important and it shouldn't be that important, because right now we're seeing some of the abuses and the problems. So we need to act right now. And if someone were to say, well, you don't have to worry about it for five or 10 years, I wouldn't believe that person anyway, because the consequences of getting it wrong are so disastrous that we should be doing it right now. It's like saying, well, we don't have to worry about an outbreak of the next pandemic because it won't happen for a hundred years. The flu didn't happen for a hundred years after the last pandemic. Well, but it could have happened 10 years afterward, it could have happened five years afterward. And we live in a world where the disasters seem to be happening more often, as in climate change and so forth. So I'm going to have to close the hearing. I regret that I do have to close the hearing, because I'm learning a lot. I hope my colleagues have learned a lot. I hope to continue this conversation. It is immensely important, and I thank you all for your good work, for the perspective that you brought to us as insiders who have seen this issue firsthand and have chosen to give us the benefit of your perspective. And thank you all. This hearing is adjourned. Thank you.
