Transcript: Senate Rules Committee Hearing on AI and Elections

Gabby Miller / Sep 28, 2023

Gabby Miller is staff writer at Tech Policy Press.

Sen. Amy Klobuchar (D-MN), Chair of the Senate Rules and Administration Committee, September 27, 2023.

On Wednesday, the US Senate Rules and Administration Committee gathered to discuss the emerging threat artificial intelligence may pose to elections, and how to put guardrails in place to safeguard democracy, in a hearing titled “AI and the Future of Our Elections.”

Led by Chairwoman Sen. Amy Klobuchar (D-MN) and Ranking Member Sen. Deb Fischer (R-NE), the hearing covered a range of topics, such as weighing the benefits and risks of using AI technologies in elections, empowering the Federal Election Commission (FEC) with the ability to protect US elections against fraudulent AI-generated political communications, and prioritizing First Amendment speech protections, among others.

Sen. Klobuchar expressed urgency for the Senate to “take the lead” on crafting relevant bipartisan legislation that will pass constitutional muster before the end of the year. The Rules Committee is the only committee in the Senate on which both Senate Majority Leader Charles Schumer (D-NY) and Senate Minority Leader Mitch McConnell (R-KY) serve, making it an important forum for drafting legislation with higher odds of success.

Witnesses included:

- Steve Simon, Minnesota Secretary of State
- Trevor Potter, President, Campaign Legal Center
- Maya Wiley, President and CEO, Leadership Conference on Civil and Human Rights
- Neil Chilson, Senior Research Fellow, Center for Growth and Opportunity
- Ari Cohn, Free Speech Counsel, TechFreedom

Sen. Schumer, who has been leading the charge on AI in the Senate with his series of AI ‘Insight Forums,’ the first of which took place last week, made a cameo appearance during the Rules Committee hearing. Maya Wiley was also among the twenty-two attendees invited to Sen. Schumer’s closed-door forum. (Tech Policy Press has followed and will continue to follow all nine of these promised forums with our dedicated tracker.)

A lightly edited transcript of the hearing is below:

Sen. Amy Klobuchar (D-MN):

Noon everyone. I am honored to call this hearing to order. I'm pleased to be here with my colleague, Senator Fischer, wearing her pin with the ruby red slippers, which symbolizes there's no place like home, and this on my heels. Yeah, this week in Washington, it's kind of on our minds. So thank you as well, Senator Merkley, for being here. I know we have other members attending as well, and I want to thank Ranking Member Fischer and her staff for working with us on this hearing on artificial intelligence and the future of our elections.

I want to introduce, I will introduce our witnesses shortly, but we are joined by Minnesota Secretary of State Steve Simon, who has vast experience running elections and is well respected in our state and nationally. Trevor Potter, the president of the Campaign Legal Center and former FEC Commissioner and Chair, thank you for being here.

Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights. And we're also going to hear, I know that Ranking Member Fischer will be introducing our two remaining witnesses. We thank you for being here: Neil Chilson, senior research fellow at the Center for Growth and Opportunity, and Ari Cohn, free speech counsel at TechFreedom. Like any emerging technology, AI comes with significant risks, and our laws need to keep up. Some of the risks are already clear, starting with security, which includes protecting our critical infrastructure, guarding against cyberattacks, and staying ahead of foreign adversaries. We must also protect our innovation economy, including the people who produce content, and counter the alarming rise in criminals using AI to scam people. Confronting these issues is a major bipartisan focus here in the Senate, where two weeks ago we convened the first in a series of forums organized by Leader Schumer and Senators Rounds, Young, and Heinrich to discuss this technology with experts of all backgrounds, industry, union, nonprofit, across the spectrum in their views.

Today we're here to focus, to hone in on a particular risk of AI: the risk that it poses for our elections, and how we address it. Given the stakes for our democracy, we cannot afford to wait. So the hope is we can move on some of this by year's end with some of the legislation that already has bipartisan support, to be able to get it done along with some larger legislation. As I noted, we're already seeing this technology being used to generate viral, misleading content to spread disinformation and to deceive voters. There was an AI-generated video, for instance, posted on Twitter of one of my colleagues, Senator Warren, in which a fake Senator Warren said that people from the opposing party shouldn't be able to vote. She never said that, but it looked like her. The video was seen by nearly 200,000 users in a week, and AI-generated content has already begun to appear in political ads.

There was one AI-generated image of former President Trump hugging Dr. Fauci that was actually a fake. The problem for voters is that people aren't going to be able to distinguish, whether it's the opposing candidate or their own candidate, if it's them talking or not. That is untenable in a democracy. Plus, new services like Banter AI have hit the market, which can create voice recordings that sound like, say, President Biden or other elected officials from either party. This means that anyone with a computer can put words in the mouth of a leader, which would pose a problem during an emergency situation like a natural disaster, and it is not hard to imagine it being used to confuse people. We also must remember that the risks posed by AI are not just about candidates; they're also about people being able to vote. In a Judiciary hearing, I actually just simply asked ChatGPT to write me a tweet about a polling location in Bloomington, Minnesota.

I noted that sometimes there were lines at that location, and asked what voters should do. It just quickly spit out: go to 1234 Elm Street. There is no such location in Bloomington, Minnesota, so you have that problem too, and it is more likely to occur as we get closer to an election with AI. The rampant disinformation we have seen in recent years will quickly grow in quantity and quality. We need guardrails to protect our elections. So what do we do? I hope that will be some of the subject, in addition to admiring the problem, that we can discuss today. Senator Hawley and I worked over the last two months on a bill together that we are leading together. Hold your beer. That's correct. On a bill that we're leading together to get at deepfake videos like the ones I just talked about, used against former President Trump, used against Elizabeth Warren.

Those are ads that aren't really the people. Senator Collins and Senator Coons, Senator Bennet, and Senator Ricketts have joined us already on that bill. We just introduced it, and it creates a framework that is constitutionally all right, based on past and recent precedent, that allows those to be banned, with exceptions for things like parody and satire. Another key part of transparency when it comes to this technology is disclaimers for other types of ads. That is another bill; Congresswoman Yvette Clarke is leading it in the House. It would require disclaimers on ads that include AI-generated images, so at least voters know that AI is being used in the campaign ads. And finally, I see Commissioner Dickerson out there. Finally, you happy about that, Mr. Cohn? There you go. Finally, it's important that the Federal Election Commission be doing its part in taking on these threats. While the FEC is now accepting public comments on whether it can regulate deceptive AI-generated campaign ads, after deadlocking on the issue earlier this summer, we must remain focused on taking action in time for the next election.

So whether you agree or not that the FEC currently has the power to do that, there's nothing wrong with spelling it out if that is the barrier. So we are working with Republicans on that issue as well. I kind of look at it as three prongs: the most egregious content that must be banned, with the constitutional limitations; the disclaimers; and then giving the FEC the power that it needs, as well as a host of state laws, one of which I'm sure we'll hear about from Steve Simon. With bipartisan cooperation, we will get the guardrails in place that we need. We can harness the potential of AI, the great opportunities, while controlling the threats we now see emerging, and safeguard our democracy from those who would use this technology to spread disinformation and upend our elections, whether it is abroad or domestic. I believe strongly in the power of elections. I also believe in innovation, and we have got to be able to draw that line to allow voters to vote and make good decisions while at least putting the guardrails in place. With that, I turn it over to my friend, Senator Fischer. Thank you.

Sen. Deb Fischer (R-NE):

Thank you, Chairman Klobuchar, and thank you to our witnesses for being here today. I do look forward to hearing your testimony. Congress often examines issues that affect Americans on a daily basis. Artificial intelligence has become one of those issues. AI isn't new, but significant increases in computing power have revolutionized its capabilities. It's quickly moved from the stuff of science fiction to being a part of our daily lives. There is no question that AI is transformative and is poised to evolve rapidly. This makes understanding AI all the more important. In considering whether legislation is necessary, Congress should weigh the benefits and the risks of AI. We should look at how innovative uses of AI could improve the lives of our constituents, and also the dangers that AI could pose. We should consider the possible economic advantages and pitfalls. We should thoughtfully examine existing laws and regulations and how they might apply to AI.

Lately, AI has been a hot topic here in Washington. I know many of my colleagues and committees in both chambers are exploring this issue. The Rules Committee's jurisdiction includes federal laws governing elections and campaign finance, and we're here today to talk about how AI impacts campaign politics and elections. The issues surrounding the use of AI in campaigns and elections are complicated. On one hand, there are concerns about the use of AI to create deceptive or fraudulent campaign ads. On the other hand, AI can allow campaigns to more efficiently and effectively reach voters. AI-driven technology can also be used to check images, video, and audio for authenticity. As we learn more about this technology, we must also keep in mind the important protections our Constitution provides for free speech in this country. Those protections are vital to preserving our democracy. For a long time, we didn't have many reasons to consider the sources of speech, or whether it mattered that AI was helping to craft it. Our First Amendment prohibits the government from policing protected speech, so we must carefully scrutinize any policy proposals that would restrict that speech. As Congress examines this issue, we need to strike a careful balance between protecting the public, protecting innovation, and protecting speech. Well-intentioned regulations rushed into law can stifle both innovation and our constitutional responsibilities. Again, I am grateful that we have the opportunity to discuss these issues today and to hear from our expert witnesses. Thank you.

Sen. Amy Klobuchar (D-MN):

Thank you very much, Senator Fischer. I'm going to introduce our witnesses. Our first witness is Minnesota Secretary of State Steve Simon. Secretary Simon has served as Minnesota's chief elections administrator since 2015. He previously served in the Minnesota House of Representatives and was an Assistant Attorney General. He earned his law degree from the University of Minnesota and his bachelor's degree from Tufts. Our second witness is Trevor Potter, president of the Campaign Legal Center, which he founded in 2002, and a former Republican chairman of the Federal Election Commission, following his appointment by President George H.W. Bush. He last appeared before this committee in March of 2021 and didn't screw up, so we invited him back again. Mr. Potter also served as general counsel to my friend and former colleague Senator John McCain's 2000 and 2008 presidential campaigns, and has taught campaign finance at the University of Virginia and at Oxford. He earned his law degree from the University of Virginia and his bachelor's degree from Harvard. Our third witness is Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights. Ms. Wiley is also a professor of public and urban policy at the New School. Previously she served as counsel to the mayor of New York City and was the founder and president of the Center for Social Inclusion. She earned her law degree from Columbia Law School and her bachelor's degree from Dartmouth. And with that, I will have Senator Fischer introduce our remaining two witnesses.

Sen. Deb Fischer (R-NE):

Thank you, Senator Klobuchar. Again, I thank our witnesses for all being here today. We have with us Neil Chilson, who serves as a senior research fellow at the Center for Growth and Opportunity, a nonpartisan think tank at Utah State University that focuses on technology and innovation. Mr. Chilson previously served as acting chief technologist at the Federal Trade Commission. We also have Ari Cohn, who serves as free speech counsel at TechFreedom, a nonpartisan nonprofit devoted to technology law and policy and the preservation of civil liberties. Mr. Cohn is a nationally recognized expert in First Amendment law and defamation law and has co-authored amicus briefs to state and federal courts across the country on vital First Amendment issues. Welcome to all of you.

Sec. Steve Simon, MN:

Members of the committee, thank you for this opportunity. I'm Steve Simon. I have the privilege of serving as Minnesota's Secretary of State. I'm grateful for your willingness to engage on this important topic, and I really am honored to be here. Artificial intelligence is not a threat to American democracy in and of itself, but it is an emerging and powerful amplifier of existing threats. All of us who touch the election process must be watchful and proactive, especially as the 2024 presidential contest approaches. A year ago we weren't talking so much about generative AI. The release of newly accessible tools such as ChatGPT changed all that, and in the hands of those who want to mislead, AI is a new and improved tool. Instead of stilted communications with poor grammar, generative AI can provide apparent precision and clarity. The potential threat to the administration of elections is real.

We're talking about an old problem, namely election misinformation and disinformation, that can now more easily be amplified. One possible danger could come from an innocent circumstance: AI software simply might fail to grasp the nuances of our state-by-state election system. A prominent computer scientist in Minnesota named Max Hain made this point in an article several months ago. He asked ChatGPT questions about Minnesota election law, much as Senator Klobuchar said that she did, and the program gave the wrong answers to several questions. Now, was that intentional misdirection? Probably not. Still, it is a danger to voters, who may get bad information about critical election rules. In the wrong hands, AI could be used to misdirect intentionally, and in ways that are far more advanced than ever. I remember seeing a paper leaflet from an election about 20 or more years ago, distributed in a particular neighborhood, that told residents that in the coming election, voting would occur on Tuesday for those whose last names begin with the letters A through L, while everyone else would vote on Wednesday.

Now, that was a paper leaflet from a couple or more decades ago. Now imagine a convincing-seeming email or deepfake conveying that kind of disinformation in 2024. The perpetrators could be domestic or foreign. In fact, the Department of Homeland Security has warned recently that our foreign adversaries may use AI to sharpen their attacks on our democracy. One last point on potential consequences. The Brennan Center recently identified a so-called liar's dividend from the very use of AI. Simply put, the mere existence of AI can lead to undeserved suspicion of messages that are actually true. A video, for example, that contradicts a person's preconceived ideas may now be simply dismissed as a deepfake. The bottom line is that misdirection in elections can cause disruption. So if AI misdirects, it could become an instrument of that disruption. So what can be done about it? Well, in our office, we're trying to be proactive.

First, we're leading with the truth. That means pushing out reliable and accurate information while also standing up to mis- and disinformation quickly. Second, we've been working with local and federal partners to monitor and respond to inaccuracies that could morph into conspiracy theories on election-related topics. Third, we've emphasized media literacy. The National Association of Secretaries of State has helped with its trusted sources initiative, urging Americans to seek out election information from secretaries of state and local election administrators. Fourth, our cyber defenses are strong. We've invested time and resources in guarding against intrusions that could introduce misleading information to voters. As for possible legislation, I do believe that a federal approach would be helpful. The impacts of AI will be felt at a national level, so I applaud bipartisan efforts such as the Protect Elections from Deceptive AI Act and the REAL Political Ads Act.

Recently, the Minnesota legislature enacted similar legislation with broad bipartisan support. There is a critical role for the private sector, too. Companies have a responsibility to the public to make sure their AI products are secure and trustworthy. I support the efforts already underway to encourage adherence to basic standards. But let me end on a note of some cautious optimism. AI is definitely a challenge, a big challenge, but in some ways we have confronted similar challenges before. With each technological leap, we have generally been able to manage the potential disruptions to the way we receive and respond to information. The move to computerization, the arrival of the internet, and the emergence of social media all threatened to destabilize information pathways, but in short order, the American people got smart about those things. They adapted, and Congress helped. AI may be qualitatively different from those other advances, but if we get better at identifying false information, and if we continue to rely on trusted sources for election information, and if Congress can help, we can overcome many of the threats that AI poses while harnessing its benefits to efficiency and productivity. Thank you for inviting me to testify today. I look forward to our continued partnership.

Sen. Amy Klobuchar (D-MN):

Thank you very much. Appreciate it. Mr. Potter.

Trevor Potter:

Good afternoon, and thank you for the honor of appearing before you today to testify about artificial intelligence and elections. My testimony will focus on how political communications generated through AI relate to the conduct of campaigns, and why federal regulation is urgently needed to address the impact of some aspects of this technology on our democracy. To summarize the overarching concern: AI tools can increasingly be used to design and spread fraudulent or deceptive political communications that infringe on voters' fundamental right to make informed decisions at the ballot box. Every election cycle, billions of dollars are spent to create and distribute political communications. Before voters cast their ballots, they must parse through these many messages and decide what to believe. Our campaign laws are intended to protect and assist voters by requiring transparency about who is paying to influence their election choices and who is speaking to them.

However, AI could make voters' task much more difficult because of its unprecedented ability to easily create realistic false content. Unchecked, the deceptive use of AI could make it virtually impossible to determine who is truly speaking in a political communication, whether the message being communicated is authentic, or even whether something being depicted actually happened. This could leave voters unable to meaningfully evaluate candidates, and candidates unable to convey their desired message to voters, undermining our democracy. It opens the door for malign actors, even foreign actors, to manipulate our elections with false information. Foreign adversaries may not favor specific candidates; they may just seek to create chaos and sow distrust in our elections, thereby harming both parties and the whole country. I believe there are three concurrent paths to proactively addressing these risks, three paths flagged by the Chair in her opening remarks. First, Congress could strengthen the FEC's power to protect elections against fraud.

Under existing law, the FEC can stop federal candidates and their campaigns from fraudulently misrepresenting themselves as speaking for another candidate or party on a matter which is damaging to that candidate or party. I believe the FEC should explicitly clarify through the rulemaking process that the use of AI is included in this prohibition. Then Congress should expand this provision to prohibit any person, not just a candidate, from fraudulently misrepresenting themselves as speaking for a candidate. Second, Congress should pass a new law specifically prohibiting the use of AI to engage in electoral fraud or manipulation. This would help protect voters from the most pernicious uses of AI. While any regulation of campaign speech raises First Amendment concerns that must be addressed, let me also say this: the government has a clear, compelling interest in protecting the integrity of the electoral process. In addition, voters have a well-recognized First Amendment right to meaningfully participate in elections, including being able to assess the political messages they see and know who the actual speaker is.

There is no countervailing First Amendment right to intentionally defraud voters in elections. So a narrow law prohibiting the use of AI to deceptively undermine our elections through fake speech would rest on firm constitutional footing. Third and finally, Congress should also expand existing disclosure requirements to ensure voters know when electoral content has been materially altered or falsified by AI. This would at least ensure voters can treat such content with appropriate skepticism. These proposals are not mutually exclusive or exhaustive; Congress could decide to use a combination of tools, as a single solution is unlikely to remain relevant for long. Congress should carefully consider how each policy could be most effectively enforced, with options including overhauling the often gridlocked and slow FEC enforcement process, new criminal penalties enforceable by the Justice Department, and a private right of action allowing candidates targeted by deceptive AI to seek rapid relief in federal court. Thank you for the opportunity to testify today. I look forward to your questions.

Sen. Amy Klobuchar (D-MN):

Thank you very much, Mr. Potter. The Rules Committee, as Senator Fischer knows, is the only committee on which both Senator Schumer and Senator McConnell serve. This makes our jobs very important, so we're pleased that Senator Schumer is here, and we're going to give him the opportunity to say a few words.

Sen. Charles Schumer (D-NY):

Thank you. Well, thank you, Senator Klobuchar, and whatever committee you chaired would always be important. Same with Senator Fischer. And I'd like to congratulate you, Mr. Potter. You made it as a witness without being from Minnesota.

Anyway, thank you. And I want to thank my colleagues for being here. As you all know, AI, artificial intelligence, is already reshaping life on earth in dramatic ways. It's transforming how we fight diseases, tackle hunger, manage our lives, enrich our minds, ensure peace, and very much more. But we cannot ignore AI's dangers: workforce disruptions, misinformation, bias, new weapons. And today I'm pleased to talk to you about a more immediate problem: how AI could be used to jaundice, even totally discredit, our elections as early as next year. Make no mistake, the risks of AI to our elections are not just an issue for Democrats nor just Republicans. Every one of us will be impacted. No voter will be spared; no election will be unaffected. It will spread to all corners of democracy, and thus it demands a response from all of us. That's why I firmly believe that any effort by Congress to address AI must be bipartisan, and I can think of few issues that should unite both parties faster than safeguarding our democracy.

We don't need to look very hard to see how AI can warp our democratic systems this year, and we've already seen instances of AI-generated deepfakes and misinformation reach the voters. Political ads have been released this year, right now, using AI-generated images and text-to-voice converters to depict certain candidates in a negative light. Uncensored chatbots can already be deployed at a massive scale to target millions of individual voters for political persuasion. And once damaging information is sent to a hundred million homes, it's hard, oftentimes impossible, to put that genie back in the bottle. Everyone has experienced these rampant rumors that, once they get out there, no matter how many times you refute them, still stick around. If we don't act, we could soon live in a world where political campaigns regularly deploy totally fabricated, but also totally believable, images and footage of Democratic or Republican candidates, distorting their statements and greatly harming their election chances.

And what then is to stop foreign adversaries from taking advantage of this technology to interfere with our elections? This is the problem we now face. If left unchecked, AI's use in our elections could erode our democracy from within and from abroad, and the damage, unfortunately, could be irreversible. As Americans prepare to go to the polls in 2024, we have to move quickly to establish safeguards to protect voters from AI-related misinformation. And it won't be easy. For Congress to legislate on AI is for us to engage in perhaps the most complex subject this body has ever faced. I'm proud of the Rules Committee and its leadership on this issue, and I thank Chairwoman Klobuchar for your continuing work on important legislative efforts to protect our elections from the potential harms of AI. And thank you again for organizing this hearing. Holding this hearing on AI and our elections is essential for drawing attention to the need for action, and I commend you and Ranking Member Fischer for doing just that.

In the meantime, I'll continue working with Senators Rounds, Heinrich, and Young to host AI Insight Forums that focus on issues like AI and democracy, to supplement the work of the Rules Committee and our other committees. And I look forward to working with both Senators Klobuchar and Fischer and all of the Rules Committee members, and thank you for being here too, Senators Welch and Merkley and Britt, to develop bipartisan legislation that maximizes AI's benefits and minimizes the risks. Finally, the responsibility for protecting our elections won't be Congress's alone. The administration should continue leveraging the tools we have already provided them, and private companies must do their part to issue their own safeguards for how AI systems are used in the political arena. It'll take all of us, the administration, the private sector, and Congress, working together to protect our democracy, ensure robust transparency and safeguards, and ultimately keep the vision of our founders alive in the 21st century. So thank you again to the members of this committee. Thank you to Chairwoman Klobuchar and Ranking Member Fischer for convening the hearing. I look forward to working with all of you on comprehensive AI legislation and learning from your ongoing work. Thank you.

Sen. Amy Klobuchar (D-MN):

Thank you very much, Senator Schumer. And I'll note it was this committee, with your and Senator McConnell's support, that was able to pass the electoral reform bill, the Electoral Count Act reform, with near-unanimous support, and got it over the finish line on the floor. And so we hope to do the same with some of these proposals. Thank you for your leadership and your willingness to work across the aisle to take on this important issue. With that, Ms. Wiley, you're up next. Thanks.

Maya Wiley:

Good afternoon, Chairwoman Klobuchar, Ranking Member Fischer, my own senator, our Majority Leader, Senator Schumer, Brooklyn to be specific, sorry, and all the members of this esteemed committee. It's a great honor to be before you. I do just want to correct the record, because I am no longer on the faculty at the New School, although I have joined the University of the District of Columbia School of Law as the Joseph RA professor. I'm going to be brief, because so much of what has been said I agree with, but really to elevate three primary points that I think are critical to remember and that I hope we will discuss more deeply today and in the future. One is that we know disinformation and misinformation are not new; they predate artificial intelligence. And that's exactly why we should deepen our concern and why we need government action. Because, as has already been said, and as we at the Leadership Conference have witnessed this growth already, even in the last two election cycles, artificial intelligence is already expanding the opportunity and the depth of disinformation, in the sense of elevating falsehoods about where people vote, whether they can vote, how to vote. That goes directly to the ability of voters to select candidates of their choice and exercise their franchise lawfully.

And we have seen that it disproportionately targets communities of color. Even the Senate Intelligence Committee noted, when it was looking at Russian interference in the 2016 election, that the African-American community was disproportionately targeted by that disinformation. The tools of artificial intelligence, in the generative sense, the deepfakes, are already being utilized by some political action committees and political parties. That tells us it's already in our election cycle, and that we must pay attention to whether or not people have clear information about what is and is not accurate, what a candidate did or did not say, in addition to the other things that we've talked about. But I also want to talk about the conditions in which we have to consider this conversation about generative artificial intelligence and our election integrity. We only have a democracy if we have trust in the integrity of our election systems.

And a big part of the narrative we have seen driving disinformation in the last two cycles has been the narrative that our elections in fact are not trustworthy. This is something we are continuing to see increase. We have also watched as social media platforms have turned back from policies and have gutted staffing meant to ensure that the public squares they maintain as private companies adhere to their user agreements and policies, in ways that ensure that everyone online is safe from hatred, safe from harassment, and also clear on what is and is not factual information. I say that because we cannot rely on social media companies to do that on their own. We have been spending much of our time over the past few years focused on trying to get social media companies both to improve their policies and to ensure that they're policing them fairly and equally.

And with regard to communities that are particularly targeted for mis- and disinformation, I can tell you what you've seen in many news reports: in many instances we've seen a gutting of the staffing that produced the ability to do some of that oversight. And even when they had that staffing, it was inadequate. So we as a civil rights community, as a coalition of over 240 national organizations, are very, very much in favor, obviously, of the bipartisan processes that we're able to participate in. But we also want to say that unless we start to recognize both how people are targeted, who is targeted, and the increase in violence in our election cycles, we are at risk. This is not just theoretical; it's practical, it's documented, and we're seeing an increase. FBI data shows it. But we can take action, both in regulating artificial intelligence, in ensuring the public knows what is artificially produced, and also in ensuring that we have oversight of what social media companies are doing, whether they're complying with their own policies, and whether they're helping to keep us safe. Thank you.

Sen. Amy Klobuchar (D-MN):

Very good. Thank you very much Ms. Wiley. Mr. Chilson.

Neil Chilson:

Good afternoon, Chairwoman Klobuchar, Ranking Member Fischer, esteemed committee members. Thank you for inviting me to discuss the influence of artificial intelligence on elections. Imagine a world where our most valuable resource, intelligence, is abundant to a degree we've never seen; a world where education, art, and scientific innovation are supercharged by tools that augment our cognitive abilities; where high-fidelity political speech can be created by voices that lack deep pockets; where real-time fact-checking and inexpensive voter education are the norm; where AI fortifies our democracy. That's the promise of AI's future, and it seems plausible to me. But if you take one message from my comments, it should be this: artificial intelligence in political speech is not emerging. It is here, and it has been for years. AI technologies are entangled in modern content creation. This isn't just about futuristic tech or deepfakes; it's about the foundational technologies that we use to craft our political discourse today.

Let's follow a political ad from inception to distribution. Today an ad campaign director doesn't just brainstorm ideas over coffee. She taps tools like ChatGPT to rapidly prototype variations on her core message. When her media team gathers assets, automatic computer vision tagging makes it a breeze to sift through vast image databases. Her photographer's cameras use AI: the camera sensors adjust to capture images based on the lens attached or the lighting conditions, and AI-powered facial and eye detection ensures that subjects remain in focus. Apple's newly announced iPhone 15 takes this to the next level, with dedicated neural nets powering its computational photography. With those, it's no exaggeration to say that every photo taken on an iPhone 15 will be generated in part by AI. AI also powers post-production: speech recognition tools make it easy to do text-based video edits, and sophisticated software automatically joins multiple raw video streams into a polished final product.

Blemishes disappear and backgrounds are beautified because of AI, and tools like HeyGen make it possible to adapt the audio and video of a final ad into an entirely different language seamlessly. These are just some of the AI tools that are involved in creating content. Some are new, but many others have been here and in use for years. AI is so intricately woven into the fabric of modern content creation that determining whether a particular ad contains AI content is very difficult. I suspect each senator here has used AI content in their ad campaigns, knowingly or not. Here's why this matters: because AI is so pervasive in ad creation, requiring AI content disclosures could affect all campaign ads. Check-the-box disclosures won't aid transparency. They will only clutter everyone's political messages. And to address what unique problems? AI will facilitate more political speech, but there's no reason to think that it will shift the ratio of truth to deception.

Historically, malicious actors don't use cutting-edge tech. Cheap fakes, selective editing, overseas content farms, and plain old Photoshop are inexpensive and effective enough. Distribution, not content generation, is the bottleneck for misinformation campaigns; money and time spent creating content is money and time that they can't spend spreading it. This committee should continue to investigate what new problems AI raises. It could review AI's effects on past elections, and should obviously closely monitor its use and effects on the coming election cycle. More broadly, Congress should establish a permanent central hub of technical expertise on AI to advise the many federal agencies dealing with AI-related issues. Remember, AI is here now, already affecting and improving how we communicate, persuade, and engage. Imprecise legislative approaches could burden political speech today and prevent the promise of a better-informed, more engaging political dialogue tomorrow. Thank you for your attention. I'm eager to address any questions that you have.

Sen. Amy Klobuchar (D-MN):

Thank you, Mr. Chilson. Mr. Cohn.

Ari Cohn:

Chair Klobuchar, Ranking Member Fischer, members of the committee, thank you for inviting me to testify today. It's truly an honor. The preservation of our democratic processes is paramount, and that word, processes, I think highlights a measure of agreement between all of us here. False speech that misleads people on the electoral process, the mechanics of voting, where to vote, how to register to vote: those statements are particularly damaging, and I think that the government's interest in preventing those specific process harms is where its interest is at its most compelling. But a fundamental prerequisite to our prized democratic self-governance is free and unfettered discourse, especially in political affairs. First Amendment protection is at its zenith for core political speech and has its fullest and most urgent application to speech uttered during a campaign for political office. And even false speech is protected by the First Amendment.

Indeed, the determination of truth and falsity in politics is properly the domain of the voters, and to avoid unjustified intrusion into that core civic right and duty, any restriction on political speech must satisfy the most rigorous constitutional scrutiny, which requires us to ask a few questions. First, is the restriction actually necessary to serve a compelling government interest? We are not standing here today on the precipice of calamity brought on by a seismic shift. AI presents an incremental change in the way we communicate, much of it for the better, and a corresponding incremental change in deceptive human behavior that predates the concept of elections itself. And surely deceptively edited media has played a role in political campaigns since well before the advent of modern AI technology. There is simply no evidence that AI poses a unique threat to our political discussion and conversation. Despite breathless warnings, deepfakes appear to have played little if any role in the 2020 presidential election.

And while the technology has become marginally better and more available in the intervening years, there is no indication that deepfakes pose a serious risk of materially misleading voters and changing their actual voting behavior. In fact, one study of the effect of political deepfakes found that they are not uniquely credible or more emotionally manipulative relative to non-AI-manipulated media. The few instances of AI use in the current election cycle appear to back that up: even where not labeled, AI-generated media that has been used recently has been promptly identified and subjected to immense scrutiny, even ridicule. The second question is whether the law is narrowly tailored. It would be difficult to draft a narrowly tailored regulation aimed specifically at AI. Such a law would be inherently under-inclusive, failing to regulate deceptively edited media that does not utilize AI, media which not only poses the same purported threat but also has a long and demonstrable history of use, compared to the relatively speculative fears about AI.

A law prohibiting AI-generated political speech would also sweep an enormous amount of protected and even valuable political discourse under its ambit. Much like media manually spliced to create the impression of speech that did not in fact occur, AI-generated media can serve to characterize a candidate's position or highlight differences between two candidates' beliefs. In fact, the ultimate gist of a message conveyed through technical falsity may even turn out to be true. To prohibit such expression, particularly in the political context, steps beyond what the First Amendment allows. Even more obviously, prohibiting the use of political AI-generated media broadly, by anyone, in any place, at any time, no matter how intimate the audience or how low the risk of harm, clearly is not narrowly tailored to protect against any harms the government might claim it has the right to prevent. The third question is whether there is a less restrictive alternative. When regulating speech on the basis of content, the government must choose the least restrictive means by which to do so.

Helpfully, the same study revealing that AI does not pose a unique risk also points to a less restrictive alternative: digital literacy and political knowledge were factors that uniformly increased viewers' discernment when it comes to deep fakes. Congress could focus on bolstering those traits in the polity instead of enacting broad prophylactics. Another, more fundamental alternative is also available: more speech. In over a decade as a First Amendment lawyer, I have rarely encountered a scenario where the exposition of truth could not serve as an effective countermeasure to falsity, and I do not think I find myself in such a position today. Nowhere is the importance, potential, or efficacy of counter-speech greater than in the context of political campaigns. That is the fundamental basis of our democracy, and we have already seen its effectiveness in rebutting deep fakes. We can expect more of that. Campaign-related speech is put under the most powerful microscope we have, and we should not presume that voters will be asleep at the wheel. Reflexive legislation prompted by fear of the next technological boogeyman will not safeguard us. Free and unfettered discourse has been the lifeblood of our democracy, and it has kept us free. If we sacrifice that fundamental liberty and discard the tried and true wisdom that the best remedy for false or bad speech is true or better speech, no law will save our democratic institutions; they will already have been lost. More detail on these issues can be found in my written testimony. Thank you again for the opportunity to testify today. I look forward to your questions.

Sen. Amy Klobuchar (D-MN):

Thank you, Mr. Cohn. I'm going to turn it over to Senator Merkley in the interest of our schedule here, but I wanted to just ask one question first, then I'll come back. A twofold question. I want to make sure you all agree that there is a risk posed by the use of AI to deceive voters and undermine our elections. Do you all agree with that? There's at least a risk? Okay, great. And then second, do you believe that we should work, and I know we vary on how to do this, but do you believe that we should work to ensure guardrails are in place that protect voters from this threat? Okay, great. Well, that's a good way to begin. I'm going to turn it over to Senator Merkley, and then we'll go to Senator Fischer, and then I think Senator Warner, who has so kindly joined us, has a scheduling crunch as well. So, Senator Merkley.

Sen. Jeff Merkley (D-OR):

Thank you so much, Madam Chairman. This is such an important issue. I'm struck by a conversation I had with a group of my wife's friends, who said, how do we know what's real in political discourse? Because we hear one thing from one cable television channel and another from another. And I said, well, one thing you can do is go to trusted sources and listen to the candidates themselves. But now we're talking about deep fakes, where the candidates themselves might be profoundly misrepresented. I wanted to start by turning to you, Mr. Potter, in your role as a former chair of the Federal Election Commission. Currently, it's not uncommon in ads to distort a picture of an opponent. They get warped, they get blurred, they're maybe tweaked a little bit to look evil. Is there anything about that right now that is a violation of federal election law? No, it's not? Okay, thank you. You've got your microphone on there. Okay. He said, no, it's not. And what if in an ad an individual quotes their opponent and the quote is false? Is that a violation?

Trevor Potter:

No, it's not a violation of law. Well, wait a minute. If you had a candidate misrepresenting what their opponent had said, under the current FEC rules, if the candidate did it themselves and they were misrepresenting the speaker, then it possibly could be.

Sen. Jeff Merkley (D-OR):

So an advertisement in which one candidate says, hey, my opponent took this position and said such and such, and that's not true, that is not true. Is that a violation?

Trevor Potter:

If you are characterizing what your opponent said, I think that would not be a violation; it would be perhaps a mischaracterization. If you create a quote and put it in the mouth of your opponent, and those words are inaccurate, then the FEC would look at it and ask, is that a misrepresentation of the other candidate? But it would have to be a deliberate creation of something that the opponent had not said, quoted as such, as opposed to the candidate's opinion of what they had said.

Sen. Jeff Merkley (D-OR):

So would a candidate's use of a completely falsified digital image of the opponent saying something that the person had never said, would that be illegal under current election law?

Trevor Potter:

I think it would. And that's what I have urged the FEC in my testimony to make clear that if a candidate creates a completely false image and statement by an opponent through this artificial intelligence, which is what could be done, that would violate existing law.

Sen. Jeff Merkley (D-OR):

Okay, great. Secretary Simon, you talked about a leaflet that told people that if their name ends in, I think, M through Z, they were to vote on Wednesday. And I picture now, with modern technology, having that message come from a trusted source, a community leader, in the voice or the sound of whomever, even if they weren't identified; suddenly Barack Obama is on the line telling you you're supposed to vote on Wednesday. Is such a presentation today a violation of election law?

Sec. Steve Simon, MN:

Boy, that's a tough one. Senator, thanks for the question.

Sen. Amy Klobuchar (D-MN):

You could add me.

Sec. Steve Simon, MN:

I'm hung up on a couple of details of Minnesota law, and I don't know how it would come out in the federal context; I think Mr. Potter might have the answer to that one. But I would say arguably yes, maybe not under election law, but other forms of law. I mean, it's perpetrating a fraud.

Sen. Jeff Merkley (D-OR):

Okay. I recognize there's some uncertainty about exactly where the line is, and that's part of why this hearing is so important as we think about this. And Mr. Cohn, you said that deepfakes are not credible. There was a 2020 study in which 85% of the folks who saw the deepfakes said, oh, these are credible, and the technology has much improved since then. I'm not sure why you feel that a well-done deepfake is somehow not credible when studies have shown that the vast majority of people who see them go, wow, I can't believe that person said that. They believe the fake.

Ari Cohn:

Thank you for the question, Senator. A study in 2021 that actually examined a deepfake of Senator Warren, particularly so that they could test whether or not misogyny also played a role in it, found that, in terms of whether people are misled, it's not really more likely that someone is going to be moved by a deepfake than by another piece of non-AI-generated manipulated media.

Sen. Jeff Merkley (D-OR):

Thank you. My time's up. I just want to summarize by saying my overall impression is that the use of deep fakes in campaigns, whether by a candidate or by a third party, can be powerful and can have people believe what so-and-so said or what position they took, because your eyes see the real person as if they're real. And so I'm really pleased that we're holding this hearing and wrestling with this challenge, and I appreciate your testimony.

Sen. Amy Klobuchar (D-MN):

Very good. Thank you very much. Senator Merkley. Senator Fischer.

Sen. Deb Fischer (R-NE):

Thank you Madam Chair. Mr. Chilson, you mentioned that AI tools are already common in the creation and distribution of digital ads. Can you please talk about the practical implications of a law that would ban or severely restrict the use of AI or that would require broad disclosure?

Neil Chilson:

Thank you for the question. Laws like this, requiring disclosures, for example, would sweep in a lot of advertising content. Imagine you're a lawyer advising a candidate on an ad that they want to run. If having AI-generated content in the ad means that the ad can't be run, or that it has to have a disclosure, the lawyer is going to try to figure out whether or not there is AI-generated content in it. And as I pointed out in my testimony, that is a very broad category of content. I know we all use the term deep fake, but the line between a deep fake and tweaks to make somebody look slightly younger in their ad is pretty blurry, and drawing that line in legislation is very difficult. So I think that in ad campaigns, as a lawyer advising that candidate, you will tend to be conservative, especially if the penalty is a potential private defamation lawsuit with damages where the defamation is per se. If the consequences are high, lawyers will be conservative, and it will chill a lot of speech.

Sen. Deb Fischer (R-NE):

And it could add to the increased cost of elections, couldn't it, because of the increased cost of ads where you'd have to meet all those requirements for the time you're spending there?

Neil Chilson:

Absolutely, increased cost, and also less effective ads in conveying your content; you're crowding out the message that you want to get across, and it could raise a barrier, too.

Sen. Deb Fischer (R-NE):

You also advocated an approach to preventing potential election interference that judges outcomes instead of regulating tools. What would that look like in practice?

Neil Chilson:

Well, part of the issue is that I'm hearing a lot of concern about deceptive content in ads and in campaigns overall. And the question is, if that's the concern, why are we limiting it to only AI-generated content? So when I say an outcome-neutral test, we would test based on the thing that we're worried about, not the tool that's used to create it. And so I would encourage this committee to look broader than AI. If the concern is with a certain type of outcome, let's focus on that outcome and not the tools used to create it.

Sen. Deb Fischer (R-NE):

Mr. Cohn, I understand that while all paid political advertisements already require at least one disclaimer, the Supreme Court has long recognized that compelled disclaimers could infringe on First Amendment rights. In your view, would an additional AI-specific disclaimer in political advertisements violate political speakers' First Amendment rights?

Ari Cohn:

Thank you for the question, Senator. I think there are two things to be concerned about. First, the government still has to have a constitutionally sufficient interest. When it comes to the kinds of disclaimers and disclosures that we see presently, the informational interest being protected is the identification of the speaker: who is talking to us, who is giving us this ad, which helps us determine whether we credit that ad or view it with some kind of skepticism. Now, it's one thing to further that informational interest, and certainly it can make a difference in how someone sees a message. But that ties into the second problem, which is that, as Mr. Chilson said, pretty much everything uses AI these days. If the interest is in making people a little more circumspect about what they believe, that actually creates the same liar's-dividend problem that Secretary Simon mentioned: if everything has a disclosure, nothing has a disclosure, and it gives cover for bad actors to put these advertisements out. The deceptive ones are going to be viewed just as skeptically as the non-deceptive ones, because everything has to have a disclosure on it. So I'm not sure that the proposed disclosure would actually further the government interest unless it's much more narrowly drawn.

Sen. Deb Fischer (R-NE):

Some people have proposed using a reasonable person standard to determine whether an AI generated image is deceptive. You've used that word here. Can you tell us how this type of standard has been used to regulate speech in other contexts?

Ari Cohn:

Well, that's a great question, because who knows what the reasonable person is. But generally speaking, I think that's a harder standard to impose when you're talking about something like political speech, and it ties in closely, I think, with materiality. What is material to any particular voter, or what's material to a group of voters? How does the reasonable person standard correspond with the digital literacy of a particular person? A reasonable person with a high education level may be much less likely to come away with a fundamentally different view of what a piece of edited material says versus the original version, whereas a person with a lower education level might be more susceptible to it. So it really defies a reasonable person standard, particularly with such sensitive and important speech.

Sen. Deb Fischer (R-NE):

Thank you. Thank you. Madam Chair.

Sen. Amy Klobuchar (D-MN):

We return to Senator Warner, the chair of the Intel Committee and one of the esteemed members of the Rules Committee.

Sen. Mark Warner (D-VA):

Well, thank you, Madam Chairman. I was actually just at a hearing on the PRC's use of a lot of these disinformation and misinformation tools. And candidly, I'm not going to debate with the panel; I completely disagree with them on a number of topics, and I would love for them to get some of the classified briefings we receive. I really appreciate the fact that you, Madam Chair, have taken a lead on AI regulations around elections. As I think about the exponentially greater power of AI misinformation and disinformation, the level of bot usage in Russia's 2016 interference is child's play compared to the tools that exist now. I think it would be naive to underestimate that we are dealing with a threat of a different magnitude, and I applaud what you're doing. I actually think if we look at this, where are existing AI tools right now, with very little increase in power, able to have the most immediate effect?

That could have huge negative consequences. It doesn't have to be generated by a potential adversarial nation like China, but just generally. And I would say those are areas where public trust is the key glue that keeps an institution stuck together. You have identified one in the question of public elections, and we have seen how public trust has been eroded, again using what now look like crude tools, in 2016. And while, thank goodness, the FEC finally required that a political ad on Facebook has to have some level of disclosure, and as you know, it was your legislation, we still haven't even passed law number one to equalize disclosure requirements on social media with traditional TV and broadcast. I think that's a mistake. The other area I would argue for the consideration of the panel, maybe for a later time, is the other institution that is as reliant on public faith as public elections, where we could have the same kind of devastating effect if AI tools are immediately used: our faith in our public markets.

There has been one example so far, where an AI-generated false depiction of the Pentagon burning caused a disruption in the market. That is child's play, frankly, compared to the level of what could take place, maybe not with Fortune 50 companies, but with Fortune 100 to 500 companies: the ability not just to use deepfakes, but to generate tools that would spread massive false information about products across a whole series of other areas. The imagination runs pretty wild. And again, I welcome my colleagues to come for a classified briefing on the tools that are already being deployed by our adversaries using AI. And as for this notion that, well, if it's already against the law, why do we need anything else? Well, there are plenty of examples, and I'll cite two, where, because the harm is potentially so great, we've decided on either a higher penalty level, or at certain times a lower threshold of proof, or in more extreme cases even a prohibition, if the harm is so great that we have to think twice as a society.

I mean, murder is murder, but if that murder is committed by a terrorist, society has applied a higher and differential penalty, implying a different level of heinousness. We have lots of rules for the tools of war, but we've decided that there may be some tools of war, chemical weapons, atomic weapons, that go beyond the pale. And I think it would be naive to make assumptions at this point, with the potential that AI has, that we shouldn't at least consider what happens if these tools are unleashed. I again applaud the fact that we're starting to drill down on this issue around public elections, and obviously there are First Amendment rights that would have to be respected. It might even be easier on public markets, because I could very easily see massive AI disruption tools being used to disrupt public markets, which could have hugely catastrophic effects, and we might then overreact. But I do want to make sure I get in a question, and I'll go to Ms. Wiley. One of the things we found in the 2016 elections was that Russia disproportionately targeted the black community in this country with misinformation and disinformation. We just came from the hearing I was referencing, where Freedom House indicated that the PRC's current influence operations, some using AI tools, some not, are once again targeting the black community in our country.

Don't you think, if the tools that were used in 2016 are now a hundred X, a thousand X, a million X more powerful because of the enormous power of large language models and generative AI, that we need to take some precautions in this space?

Maya Wiley:

Thank you, Senator. We absolutely must. What you are quoting is extremely important. It is also important to note, when we look at the research, that the RAND study that came out just last year showed that a minimum of 33 to 50% of all people in their subject pool of over 2,500 people took the deepfake to be accurate. And what they found is that increased exposure actually deepened the problem: seeing it over and over again, and from different sources, can actually deepen the belief in the deep fake. I'm saying that because part of what we've seen, and it's not only foreign governments, though it certainly includes them, is also domestic hate groups utilizing social media and utilizing the opportunity. And we're starting to have a lot of concerns about some of the ways that technology, particularly chatbots and text messaging, can vastly and exponentially increase the reach, while targeting communities that are more easily made afraid or given false information about where and how to vote.

But also, and I want to make this clear too, we're seeing it a lot with people who are lawfully allowed to vote but for whom English is not their first language. They have also been targeted, particularly Spanish speakers, but also the Asian community. So we know, and a lot of social science shows, that there is real targeting of communities of color. And it does go to the way that we see, even with political parties and political advertising, the attack on the integrity of our election systems, and even claims about whether voters are voting lawfully or fraudulently, in ways that have made people more vulnerable to violence.

Sen. Amy Klobuchar (D-MN):

Very good. Thank you, Senator Warner. I know Senator Britt was here early, and we thank her for being here. Senator Hagerty.

Sen. Bill Hagerty (R-TN):

Thank you, Senator Klobuchar, Ranking Member Fischer. Good to be with you both. Mr. Chilson, I'd like to start with you, and if I could, just engage in a thought experiment with you for a few minutes. Let's go back to 2020. The COVID pandemic hits, and many policymakers and experts are advocating for things like mask mandates, shutting down schools, going to mandatory remote learning, that type of thing. And many localities and many states adopted mandates of that nature at the outset. I think we know the result of those mandates: great economic damage, particularly to small businesses; children's learning set back considerably; loss of liberty. And what I'm concerned about is that Congress and the Biden administration may find themselves right at the same place again when we're looking at artificial intelligence, and I don't want to see us make the same set of mistakes. So I'd like to start with a very basic question in my mind, and that is: is artificial intelligence a term with an agreed-upon legal definition?

Neil Chilson:

It does not have even an agreed-upon technical definition. If you read one of the leading treatises that many computer scientists are trained on, the Russell and Norvig book, they describe four different categories of definitions, and underneath those there are many different types. And if you run through the list of things that have been considered AI in the past and which nobody really calls AI now, you have everything from edge detection, which is in everybody's cameras, to letter detection, to playing chess, to playing checkers: things that, once they work, we kind of stop calling AI. That's the classic phrase by the person who actually coined the term AI. So there is not an agreed-upon legal definition, and it is actually quite difficult.

Sen. Bill Hagerty (R-TN):

Got it. But using, broadly, how we think about AI and AI tools: do political candidates and others that engage in political speech use AI today for routine functions like taking and editing pictures, as you just mentioned, or for speech recognition, or for processing audio and video content?

Neil Chilson:

Absolutely. Ads, and all content, are created using many different algorithms. This little device here has many, many different AI algorithms on it that are used to create content.

Sen. Bill Hagerty (R-TN):

I'd like to use this scenario to illustrate my concern. And Madam Chair, I'd like to introduce this article for the record; it's one of many that cite this particular issue, and I ask that it be allowed in the record. I'll come back to it. One of the proposals that's under consideration now would prohibit entities from using deceptive AI-generated audio or visual media in election-related speech. This would include altering an image in a way that makes it inauthentic or inaccurate. That's a pretty vague concept. For example, age may be a very relevant factor in the upcoming 2024 elections. You may recall recent media reports, and this is one of them right here, describing how President Biden's appearance is being digitally altered in photographs to make him look younger. So my next question for you, Mr. Chilson: if the Biden campaign were to use photo-editing software that utilizes AI to make Joe Biden look younger in a picture on his website, could that use of artificial intelligence software potentially violate such a law against inaccurate or inauthentic images?

Neil Chilson:

Potentially, I believe it could. And the question should be: why does the use of those tools violate it, and not the use of makeup and lighting to make somebody look younger?

Sen. Bill Hagerty (R-TN):

Is there a risk, then, in your view, that hastily regulating a very uncertain and rapidly developing technology like AI might actually chill political speech?

Neil Chilson:

Absolutely.

Sen. Bill Hagerty (R-TN):

It's my concern too. My point is that Congress and the Biden administration should not engage in heavy-handed regulation with uncertain impacts that I believe pose a great risk of limiting political speech. We shouldn't immediately indulge the impulse for government to "just do something," as they say, before we fully understand the impacts of the emerging technology, especially when that something encroaches on political speech. It's not to say there aren't a significant number of issues with this new technology, but my concern is that the solution needs to be thoughtful and not hastily implemented. Thank you.

Sen. Amy Klobuchar (D-MN):

Thank you very much, Senator Hagerty. I will start with you, Senator Simon, I'm sorry, Secretary of State Simon, and get at some of the questions that Senator Hagerty was raising. Just first, as a side note, because all my colleagues are here and I haven't asked questions yet: which state has consistently had the highest voter turnout of all the states in America?

Sec. Steve Simon, MN:

Senator. That would be Minnesota.

Sen. Amy Klobuchar (D-MN):

Okay. Thank you very much.

Sec. Steve Simon, MN:

All right. Yes, that would be Minnesota.

Sen. Amy Klobuchar (D-MN):

Especially because Senator Bennet is here, and Colorado is always in a close race with us. So I thought I'd put that on the record. Okay. So Senator Hagerty has raised some issues, and I wanted to get at what we're doing here with a bill that Senator Hawley, certainly not a member of the Biden administration, and I have introduced with Senator Collins and Senator Ricketts; Senator Bennet, who's been such a leader on this, Senator Coons, and others will be getting on it as well. So this bill gets at not just any cosmetic changes to how someone looks. This gets at materially deceptive ads. This gets at the fake ad showing Donald Trump hugging Dr. Fauci, which was a lie. That's what it gets at. It gets at the person that looks like Elizabeth Warren, but isn't Elizabeth Warren, claiming that Republicans shouldn't be allowed to vote. It is of grave concern to people on both sides of the aisle. Can you talk about this and help us with why this kind of materially deceptive content has no place in our elections?

Sec. Steve Simon, MN:

Thank you, Senator, for the question. And I think that is the key: the materiality test. And courts, it seems, are well equipped to use that test in terms of drawing lines. I don't pretend otherwise, and I think Senator Hagerty is correct and right to point out that this is difficult and that Congress and any legislative body needs to get it right. But though the line-drawing exercise might be difficult, courts are equipped, under something like a materiality standard, to draw that line. And I think that materiality in the realm of elections is really not so different from other realms of our national life. And it's true, as Mr. Cohn and others have said, that the bar for political speech is rightly high. It is, and it should be. But in some senses it's no different than if someone were to say something false in the healthcare field. If someone said something just totally false, a false positive or negative attribute, if someone said that breath mints cure cancer, or breath mints cause cancer, or something like that, I don't think we'd have quite the same hesitation. With political speech, of course, there's a high bar, but courts, given the right language, such as a materiality test, could navigate through that.

Sen. Amy Klobuchar (D-MN):

Right. And I'm going to turn to Mr. Potter, but I note that even in a recent seven-to-two Supreme Court decision, written by Justice Barrett and joined by Justices Thomas, Alito, Kagan, Gorsuch, and Kavanaugh, the Court stated that the First Amendment does not shield fraud. So the point is that we are getting at a very specific subset, not what Mr. Cohn was talking about with the broad use of some of the technology that we have on political ads. Mr. Potter, you'd be a good person to talk to. You are a Republican-appointed former chair of the FEC. Can you expand on how prohibiting materially deceptive AI-generated content in our elections falls squarely within the framework of the Constitution?

Trevor Potter:

Thank you, Madam Chair. The Court has repeatedly said that it is constitutional to require certain disclosures so that voters have information about who is speaking. And there, I think, Justice Kennedy in Citizens United was very clear in saying that voters need to know who is speaking to put it in context; who the speaker is informs the voters' decisions as to whether to believe them or not. So in those circumstances where we're talking about disclosure, it seems to me particularly urgent to have voters know that the person who is allegedly speaking is fake, that the person who they think is speaking to them or doing an act is actually not that person. So it is the flip side of "who is paying for the ad": is the speaker actually the speaker? That would fit within the disclosure framework. In terms of the prevention of fraud,

I think that goes to the fact that the Court has always recognized that the integrity of our election system, and citizen faith in that system, is what makes this democracy work. And so to have a circumstance where we could have a deep fake, where somebody is alleged to have said something they never said or engaged in an act they never did, is highly likely to create distrust where that occurs. The comment has been made, well, the solution is just more speech. But I think we all know, and there is research showing this, but we intuitively know, that "I saw it with my own eyes" is a very strong perspective. To see somebody, to hear them, engaging in surreptitiously recorded racist and misogynist comments, and then have the candidate whose words and image have been portrayed say, "That's not me. I didn't say that. That's all fake." Are you going to believe what you saw, or are you going to believe a candidate who says that's not me? So I think that is the inherent problem. Thank you.

Sen. Amy Klobuchar (D-MN):

Thank you for doing it also in neutral terms, because I think we know it could happen on either side, and that is why we are working so hard to try to get this done. I'd also add, on the disclosure comment, it was Scalia who said in a 2010 concurrence, "For my part, I do not look forward to a society which, thanks to the Supreme Court, campaigns anonymously... hidden from public scrutiny and protected from the accountability of criticism. This does not resemble the Home of the Brave." So there's been a clear indication of why Senator Hawley, Senator Collins, Senator Bennet, and a number of the rest of us drafted a bill that looks at this in a very narrow fashion but also allows for satire and the like. And I did find Mr. Cohn's points interesting; I went over and told Senator Warren some of your points.

When we get beyond the ads that would be banned, to which ones the disclaimer applies, we may want to look at that in a careful light, so that it doesn't apply to every ad and become meaningless, as you said. So I really did appreciate those comments. So with that, I am going to... I think our order is Senator Ossoff first, because he has to leave, is that correct? And then we go to Senator Welch, who's been dutifully here for quite a while, then Senator Bennet, and then Senator Padilla, even though he does represent the largest state in our nation and is a former Secretary of State. So hopefully that order will work out. If you need to trade among each other, please do. Thank you.

Sen. Jon Ossoff (D-GA):

Thank you, Madam Chair. And I think you just got to the root of the matter very efficiently and elegantly, and Mr. Cohn, I appreciate your comments, but I think that the matter being discussed here is not subjective, complex judgments about subtle mischaracterization in public discourse. We're talking about, for example, Senator Fischer, one of your political adversaries willfully, knowingly, and with extreme realism falsely depicting you, or any of us, or a candidate challenging us, making statements that we never made, in a way that's indistinguishable to the consumer of the media from a realistic documentation of our speech. That's the most significant threat that I think we're talking about here. And Mr. Potter, in your opinion, isn't there a compelling public interest in regulating that kind of knowingly and willfully deceptive content, whose purpose, again, is not to express an opinion, not to caricature, but to deceive the public about statements made by candidates for office?

Trevor Potter:

I think absolutely there is, and the Court would recognize that compelling interest. Also, there is no argument that there's a countervailing interest in fraudulent speech, as the chair noted. So I think what you'd find here is that in a circumstance where we are talking about this sort of deep fake, as opposed to the conversations about whether you used a computer to create the text, where you are creating a completely false image, I think we would have a compelling public interest and no countervailing private interest, because the First Amendment goes to my right, our right, to say what we think, even about the government and in campaigns, without being penalized. But the whole point of this conversation is that you are falsifying the speaker. It is not my First Amendment right to say what I think; it is creating fake speech where the speaker never actually said it. So that, I think, is where the Court would come down and say creating that is not a First Amendment right.

Sen. Jon Ossoff (D-GA):

And indeed, as you point out, there's substantial jurisprudence that would support the regulation of speech in this extreme case: the knowing and willful fabrication of statements made by candidates for office or public figures.

Trevor Potter:

Yeah, I think the distinction I draw is that the Court has protected a candidate saying "I think this," even if it's false, or "my opponent supports or opposes abortion rights." That may be a mischaracterization, it may be deceptive, but if it's what I am saying, engaging in my First Amendment speech, mischaracterizing an opponent's position, that's in the political give-and-take. But I think that's completely different from what we're talking about here, where you have an image or a voice being created that is saying something the person never said. And it's not me characterizing it; it is putting it in the image of the candidate.

Sen. Jon Ossoff (D-GA):

Thank you. And Mr. Cohn, I invoked your name earlier, so let me give you the chance to respond. Is it your position that broadcast advertisements which knowingly and willfully mischaracterize a candidate for office, and I don't mean mischaracterize their position or give shaded opinions about what they believe, stand for, or may have said in the past, but depict them saying things they never said for the purpose of misleading the public about what they said, is it your position that that should be protected speech?

Ari Cohn:

Thank you for the question, Senator. I think there are two things. First of all, it's one thing to say the word "fraud," but fraud generally requires reliance and damages, and stripping those requirements out and effectively presuming them takes this well outside of the conceptualization of fraud that we know. I think there are circumstances in which I would probably agree with you that things cross the line, but take, for example, just two examples. First, in 2012, the Romney campaign cut some infamous lines out of President Obama's speech in the "you didn't build that" campaign ad, which made it seem like he was denigrating the hard work of business owners, when he was actually referring to the infrastructure that supported those businesses. And just in this last election, the Biden campaign was accused of cutting out about 19 sentences or so from a President Trump campaign rally in a way that made it sound like he was calling COVID-19 a hoax. My point is not that these are good or valuable; it is that this is already a problem. And trying to legislate it with respect to AI specifically, instead of addressing, as Mr. Chilson said, the broader effect, causes a constitutional concern that the government interest isn't actually being advanced.

Sen. Jon Ossoff (D-GA):

I see. So if I understand correctly, and don't let me put words in your mouth, you agree broadly with the premise that certain forms of deceptive advertising in the political arena are subject to regulation, on the basis that there's a compelling public interest in preventing outright willful, knowing deception, such as putting words in Senator Fischer's mouth that she never said, in a highly realistic way. Your argument is that the question is not the technology used to do so; the question is the materiality, the nature of the speech itself. Is that your position?

Ari Cohn:

Yes. And I think that drawing the statute narrowly enough is an exceedingly difficult task; I think that, in principle, it is a pie-in-the-sky concept. I think I agree with you; I just am not sure how to get from point A to point B in a manner that will satisfy strict scrutiny.

Sen. Jon Ossoff (D-GA):

And forgive me, Senator Fischer for invoking your example in that hypothetical. Thank you all for your testimony.

Sen. Amy Klobuchar (D-MN):

Okay, very good. Thank you. And I will point out that while the network TV stations have some requirements, and they take ads down when they find them highly deceptive, that's not going to happen online. And that's one of our problems here, why we feel we have to act and why we have to make clear that the FEC has the power to act as well, because otherwise we're going to have the Wild West on the platforms, where, as we know, a lot of people are getting their news and there are no rules. Senator Welch?

Sen. Peter Welch (D-VT):

Thank you. And kind of following up on Senator Ossoff and Senator Klobuchar: nobody wants to be censoring, so I get that, and where that line is, is very porous. But the example that Senator Ossoff just gave was not about political speech; it was flat-out fraud. Whether it was AI-generated or done with older technologies in broadcast, would you agree that there should be a remedy for that?

Ari Cohn:

Well, thank you, Senator. I'm not entirely sure that we can define it exclusively as...

Sen. Peter Welch (D-VT):

All right, so let me stop for a second, because what I'm hearing you say is that it's really, really difficult to define, which I think it is, but your conclusion is that we can't do anything. I mean, the issue with AI is not just AI itself; it's the amplification of the deception. And something like that happening to Senator Fischer is so toxic to trust in the political system, and that's getting out of control as it is. So I'll ask you, Mr. Potter: how do we define the line between doing something that is totally false versus the very broad definition of political speech? And then one other thing I want to ask. There has to be some expectation that platforms like, say, Google take some responsibility for what's on the platform, and they've been laying off the folks whose job it is to monitor this and make a judgment about what's a flat-out deception. So how do we deal with this? And then second, what's your observation about the platforms, Twitter, now X, Google, Facebook, essentially laying off all the folks whose job it was within those organizations to review this material that's so dangerous for democracy?

Trevor Potter:

Yeah, let me start with the first one. I think what you're hearing from all the panelists is that it is important to have a carefully crafted, narrow statute, to withstand Supreme Court scrutiny but also to work. And so the language that gets used is going to be the key question.

Sen. Peter Welch (D-VT):

So we all agree on that, but there's a real apprehension, understandably so, that this is going to be censoring speech. So I don't know who's going to draft the statute; we'll let all of you do that. But it's a real problem. And what about the platforms laying people off, so that we don't even get real-time information? The false, deceitful advertising gets out there and we don't even know it and can't verify that it's false.

Trevor Potter:

Right. If I could, one more line on your first question, and then I'll jump to your second. On the first one, the examples cited by Mr. Cohn, snippets being taken from an Obama speech or snippets from a Trump speech and then mischaracterized, that to me falls on the side of defensible, permissible political speech. That falls into the arena where we argue with each other over whether it was right or wrong, because in his examples, those people actually said those words, and you are interpreting them or misinterpreting them, but they said it. That is where I draw the line: where you are creating words they didn't say. With the technology we've heard about, my testimony today, because I've been talking enough, can be put into a computer, my voice pattern can be used, and it can create an entirely different thing, where I sat here and said, "This is ridiculous. You shouldn't be holding this hearing and you shouldn't regulate any of this." That could be created, and it would be false.

Sen. Peter Welch (D-VT):

Would there be any problem banning that? I mean, why would that be legitimate in any campaign? I'll ask you Mr. Chilson or Mr. Potter.

Neil Chilson:

Rearranging somebody's speech, words they truthfully said, even if it's a misrepresentation, I don't think you could ban that, right? So if I had your recording of this speech...

Sen. Peter Welch (D-VT):

No, we're talking about using whatever technology to have somebody, me, saying something I never said at a place I never went. So why don't... yeah, sorry. Thank you.

Neil Chilson:

So I think that it would really depend. You could have somebody saying something that they didn't say, in a place they never went, but maybe it makes you look good, right? It's not defamatory in any way, and it's positive about you. It would be hard to draw a line that would ban one of those and not the other.

Sen. Peter Welch (D-VT):

Okay. Senator Bennet.

Sen. Michael Bennet (D-CO):

Thank you, Madam Chair, thank you very much for holding this hearing, and thank you for the bill that you've allowed me to co-sponsor as well. I think it's a good start in this area, and thank you to the witnesses for being here. Everybody up here, and I think everybody on this panel, is grappling with the newness of AI. Disinformation itself, of course, is not something that's new. Ms. Wiley, this is going to be a question for you once I get through it. It was common in the 20th century for observers of journalism, or maybe journalists themselves, to say that if it bleeds, it leads. And digital platforms, which have in many cases, I think tragically, replaced traditional news media, have turned this maxim into the center of their business model, creating algorithms stoked by outrage to addict humans, children in particular but others as well, to their platforms, to sell advertising, to generate profit.

And that has then found its way into our political system, and not just our political system. In 2016, foreign autocrats exploited the platforms' algorithms to undermine Americans' trust in our institutions, our elections, and each other. I remember, as a member of the Intelligence Committee, being horrified not just by the Russian attack on our elections, but also by the fact that it took Facebook forever to even admit that it had happened, that they had sold ads to Russians that were then used to anonymously attack our elections and spread falsehoods in our democracy. In 2017, it was Facebook, now Meta, whose algorithms played what the United Nations described as a "determining role" in the Myanmar genocide. Facebook said that they "lose some sleep" over this. That was their response. Clearly not enough sleep, in my view. Thousands of Rohingya were killed, tortured, raped, and displaced as a result of what happened on their platform, with no oversight and without even an attempt to deal with it.

In 2018, false stories went viral on WhatsApp in India warning about gangs of child abductors. At least two dozen innocent people were killed, including a 65-year-old woman who was stripped naked and beaten with iron rods, wooden sticks, and bare hands and feet. And by the way, these aren't hypotheticals; this is actually happening in our world today. Just last night, the Washington Post reported how Indian political parties have built a propaganda machine on WhatsApp, with tens of thousands of activists spreading disinformation and inflammatory religious content. And last month, when the Maui wildfires hit, Chinese operatives capitalized on the death of our neighbors and the destruction of their homes, claiming that this was the result of a secret weather weapon being tested by the United States. To bolster their claims, their posts included what appeared to be AI-generated photographs. And big tech has allowed this false content to course through their platforms for almost a decade.

We've allowed it to course through these platforms. I mean, I'm hearing about it in meetings every single day; it's the subject almost every day at home. I literally did on Monday, with educators in the Cherry Creek School District, listening to them talk about the mental health effects of these algorithms. I know that's not the subject of today's hearing, but let me tell you something: our inability to deal with this is enormously costly. I'm a lawyer. I believe strongly in the First Amendment, and I think that's a critical part of our democracy and a critical part of journalism and politics. We have to find a way to protect it, but it can't be an excuse for not acting. The list of things that I read today, these are foreign actors, to begin with, that are undermining our elections, and the idea that somehow we're going to throw up the First Amendment in their defense can't be the answer.

We have to have a debate about the First Amendment, to be sure. We need to write legislation here that does not compromise or unconstitutionally impinge on the First Amendment; I totally agree with that. But we can't go through another decade like the last decade. Ms. Wiley, I'm almost out of time, but in the last seconds that I have left, could you discuss the harm disinformation has done in our elections and the need for new regulation to grapple with traditional social media platforms as well as the new AI models that we're talking about here today? And I'm sorry to leave you so little time.

Maya Wiley:

No, thank you. And just to be very brief and very explicit: we have been working as a civil rights community on these issues for a decade as well, Senator Bennet. And what we have seen, sadly, is that even when the social media platforms have policies in place prohibiting conduct, which they are constitutionally allowed to do, to say you can't come on and spew hate speech and disinformation without us either demoting it or labeling it, or, for the worst offenders, kicking you off the platform, there has not been consistent enforcement of those policies. And most recently they have actually pulled back from some of those policies that enable a safe space for people to interact. We should just acknowledge that for eight-year-olds and under, we have seen the rate of eight-year-olds on YouTube double since 2017. Double. So it really is significant what we've seen, both in terms of telling people they can't vote or sending them to the wrong place. But it's even worse, because as we saw with YouTube, a video that went viral out of Georgia gets to Arizona, and then we have elected officials who call on vigilantes to go armed to mail drop boxes, which essentially intimidates voters from dropping off their ballots.

Sen. Michael Bennet (D-CO):

My colleague from California has waited; I apologize. I think we're going to let him go. But let me make one observation about that, Ms. Wiley, because it's such an important point. In 2016, the Russian government was telling the American people that they couldn't go someplace to vote. That's the point you're making. They don't have a First Amendment right to do that, and we need to stop it.

Sen. Amy Klobuchar (D-MN):

Okay. Thank you for your patience and your great leadership on elections. Senator Padilla.

Sen. Alex Padilla (D-CA):

Thank you, Madam Chair. I want to just associate myself with a lot of the concerns that have been raised by various members of the committee today. But as the Senate as a whole has a more complete, comprehensive conversation about AI, I think Leader Schumer and others have encouraged us to consider balanced thinking: we want to minimize the risks, the negative impacts of AI, but at the same time be mindful of the potential upsides and benefits of AI, not just in elections but across the board. So while I share some of the concerns, I have a question relative to the potential benefits of AI. One example is the identification of disinformation super-spreaders, right? We're all concerned about disinformation. There are some small players and big players; I'm talking about super-spreaders, influencers, accounts, webpages, and other actors that are responsible for wide dissemination of disinformation.

AI, if appropriately implemented, can help scrape for these actors and identify them so that platforms and government entities can respond accordingly. I see some heads nodding, so I think the experts are familiar with what I'm talking about. Another example is in the enforcement of AI rules and regulations. Google just announced that it will require political ads that use synthetic content to include a disclosure to that effect. Using AI to identify synthetic content will be an important tool for enforcing this rule and others like it. A question for Mr. Chilson: can you think of one or two other examples of benefits of AI in the election space?

Neil Chilson:

Absolutely. As I said in my statement, it's already integrated deeply into how we create content, and it's made it much easier to produce content. One of the things that comes to mind immediately is a relatively recent tool that lets you upload a sample of video and then pick a language to translate it into. It translates not just the audio; it also translates the image, so that it looks like the person is speaking in that language. That type of tool, which lets you quickly reach an audience that may have been harder for a campaign to reach before, especially if you don't have deep resources, I think that is a powerful potential tool.

Sen. Alex Padilla (D-CA):

Thank you. Question for our colleague, Secretary Simon. I think that one short-term tool that could benefit both voters and election workers is the development of media literacy and disinformation toolkits that could then be branded and disseminated by state and local offices. Do you think it'd be helpful to have additional resources like this from the federal level to boost media literacy and counter disinformation?

Sec. Steve Simon, MN:

Thank you, Senator, and good to see you. We in the Secretary of State community miss you, but we're glad you're here as well. Thank you for the question. Yes, I think the answer to that is yes. And when it comes to disinformation and misinformation, I think you put your finger on it: media literacy really does matter. I know you're aware, and I alluded to earlier in my testimony, of the trusted sources initiative of the National Association of Secretaries of State. The more we can do to channel people to trusted sources, however they may define that, I'd like to think it's a secretary of state's office, but for someone else it may be a county or a city, I think that would be quite helpful.

Sen. Alex Padilla (D-CA):

Thank you. And we cannot combat disinformation, whether it is AI disinformation or any other form, without fully understanding where disinformation comes from and how it impacts our elections. We know there are numerous large nonpartisan organizations, and I want to emphasize that these are nonpartisan groups, that are dedicated to studying and tracking disinformation in order to help our democratic institutions combat it. But these organizations are now facing a calculated legal campaign from the far right, under the guise of fighting censorship, to halt their research into, and their work to highlight, disinformation. Just one example: the Election Integrity Partnership, led jointly by the Stanford Internet Observatory and the University of Washington Center for an Informed Public, tracks and analyzes disinformation in the election space and studies how bad actors can manipulate the information environment and distort outcomes. In the face of a legal campaign by the far right, this work is now being chilled and the researchers are being silenced. And this has happened even as some platforms are gutting their own trust and safety teams that previously helped guard against election hoaxes and disinformation on their platforms. Ms. Wiley, what impact does the right-wing campaign to chill disinformation researchers have on the health of our information ecosystems?

Maya Wiley:

Well, quite sadly and disturbingly, we are seeing the chilling effect take effect, meaning we are seeing research institutions change what they're researching and how. And one thing I really appreciate about this panel is our shared belief, not just in the First Amendment, but in the importance of information and learning, and the importance of making sure we're disseminating it broadly. And there's nothing more important right now than understanding disinformation, its flow, and how better to identify it, in the way I think everyone on the panel has named. So I think we have to acknowledge, and certainly there are enough indications from higher education in particular, that it has had a devastating impact on our ability to understand what we desperately have to keep researching and learning about.

Sen. Alex Padilla (D-CA):

Thank you. Thank you, Madam Chair.

Sen. Amy Klobuchar (D-MN):

Well, thank you very much, and thank you for your patience and that of your staffs. I want to thank everyone; we couldn't have had a more thorough hearing. I want to thank Senator Fischer and the members of the committee for the hearing. I also want to thank the witnesses for sharing their testimony on the range of risks with this emerging technology and going in deep with us about potential solutions and what would work. And I appreciated that every witness acknowledged that this is a risk to our democracy, and every witness acknowledged that we need to put up some guardrails. And while we know we have to be thoughtful about it, I would emphasize that the election is upon us. These things are happening now. So I would just ask people who are watching this hearing, who are part of this, who are with the different candidates or on different sides, that we simply put some guardrails in place.

I personally think giving the FEC some clear authority is going to be helpful. I think doing some kind of ban for the most extreme fraud is going to be really, really important, and I'm so glad to have a number of senators joining me on this, including conservatives on the Republican side. And then there is figuring out disclaimer provisions that work, and that has been the most eye-opening part of this hearing today: which things we should have them cover and how we should do that. So that's where I am on this, and I don't want that to replace the ability, and this is what I'm very concerned about, to actually take some of this stuff down that is just all-out fraud in the candidates' voices, pretending to be the candidate. So clearly the testimony underscored the importance of congressional action, and I look forward to working with my colleagues on this committee in a bipartisan manner, as we did in the hardest of circumstances in the last Congress and last years, including, by the way, not just the Electoral Count Act bill that we passed through this committee with leadership in this committee, but also the work that we did in investigating security changes that were needed at the Capitol, along with Senators Peters and Portman at the time over in the Homeland Security Committee.

Those two leaders, along with Senator Blunt, the ranking member at the time, came up with a list of recommendations, most of which have been implemented with bipartisan support. We just have a history of trying to do things on a bipartisan basis, and that cries out right now for the Senate to take a lead, hopefully before the end of the year. We look forward to working on this as we approach the elections, and certainly as soon as possible. The hearing record will remain open for a week, only a week, because like I said, we're trying to be speedy, and we hope the Senate is not shut down at that time. We will find a way to get your materials even if it is. But we are hopeful, given that nearly 80% of the Senate, actually 80% because a few people were gone, supported the bill last night that Senator McConnell and Senator Schumer put together to avoid a government shutdown. So we go from there in that spirit, and this committee is adjourned. Thank you.
