
Transcript: US Senate Hearing on Oversight of AI & Election Deepfakes

Prithvi Iyer / Apr 17, 2024

The Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law hearing entitled “Oversight of AI: Election Deepfakes” in the Dirksen Senate Office Building Room 226, Tuesday, April 16, 2024.

On Tuesday, April 16, 2024, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of AI: Election Deepfakes.”

Witnesses included:

- Zohaib Ahmed, CEO and founder, Resemble AI
- Rijul Gupta, founder and CEO, Deep Media
- Ben Colman, co-founder and CEO, Reality Defender
- David Scanlan, New Hampshire Secretary of State

What follows is a lightly edited transcript of the hearing.

Sen. Richard Blumenthal (D-CT):

The hearing of the Subcommittee on Privacy, Technology, and the Law will come to order. Welcome everyone. I apologize for the delay. Senator Hawley, the ranking member, will join us when he arrives, and I want to apologize to our witnesses about the delay. As you know, the full Senate today heard the articles of impeachment from the House, and so we were in our chairs to hear them. We're here today at this subcommittee meeting because a deluge of deception, disinformation, and deepfakes is about to descend on the American public. The form of their arrival will be political ads and other forms of disinformation that are made possible by artificial intelligence. There is a clear and present danger to our democracy. This world of disinformation doesn't have to be our future. We have agency; we can take action, and we are here today not only to hear about the dangers but also to look forward to action that we can take in the United States Congress.

But we should make no mistake: the threat of political deepfakes is real. It's happening now. It's not science fiction coming at some point in the future, possibly or hypothetically. Artificial intelligence is already being used to interfere with our elections, sowing lies about candidates and suppressing the vote. We already have a chilling example: this January, thousands of New Hampshire residents received a call impersonating President Biden, telling them not to vote in the state's primary. And it's important for the American people to hear exactly what was said.

Audio impersonation of President Joe Biden:

What a bunch of malarkey. We know the value of voting Democratic when our votes count. It's important that you save your vote for the November election. We'll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday. If you would like to be removed from future calls, please press two now.

Sen. Richard Blumenthal (D-CT):

It's important for the American people to hear what impersonation and deepfakes sound like. It's also important to know that that's what suppression of voter turnout looks like. The deepfake of President Biden wasn't made by a computer whiz, a computer science graduate student, or anybody with any particular skill. It was made by a street magician whose previous claim to fame was that he holds world records in spoon bending and escaping straitjackets. The voice cloning technologies used in that call were inconceivable just a few years ago. Now, they are free online, available to everyone, and it's not just voice cloning. Deepfake images and videos are disturbingly easy for anyone to create. Protecting our elections isn't about Democrats versus Republicans. Already, deepfakes have targeted candidates from across the political spectrum, and no one, literally no candidate, no voter, no one is safe from them. And if a street magician can cause this much trouble, imagine what Vladimir Putin or China can do.

In fact, they're doing it. National security officials and law enforcement have been shouting from the rooftops, as well as in our classified briefings, their fears about AI and foreign disinformation. It's happening, it's here. Earlier this month, Microsoft revealed that social media accounts linked to the Chinese Communist Party were using AI to meddle in American politics. China has been caught using deepfakes to impersonate Americans to sow division and conspiracy theories, such as deepfake images pushing the lie that the United States military caused the wildfires in Hawaii. Between the ease of use and the increasing interest from foreign adversaries and domestic political interests, our democracy is facing a perfect storm. When the American people can no longer distinguish fact from fiction, quite literally, it will be impossible to have a democracy. As we discussed in our last hearing, these deepfakes and rampant disinformation are also happening at a time when local journalism is hanging by a thread.

Deepfakes have targeted not only presidential candidates but also Senate campaigns and local elections like the recent Chicago mayoral election. Anyone can do it, even in the tiniest race. In some ways, local elections present an even bigger risk. A deepfake of President Biden will attract national attention. It will be publicized as disinformation and deception, but deepfakes in a local election, a state legislative contest, or a city council race, probably not. And when a local newspaper is closed or understaffed, there may be no one doing fact-checking, no one to issue those Pinocchio ratings, and no one to correct the record. That's a recipe for toxic and destructive politics. Congress has the power, indeed the obligation, to stop this AI nightmare. There are common-sense, bipartisan bills ready to go right now. I'm supporting them. A number of my colleagues have offered and supported them as well.

Sen. Josh Hawley (R-MO):

Thank you very much, Mr. Chairman. Thanks for holding this hearing. Thank you for being here.

All right, I'm glad we got that on the record. Thanks to the witnesses for being here. I don't really have much to add to what the Chairman said because I just agree with all of it. This issue is not just a theory anymore; the effect of AI in elections, deepfakes in elections, is not something that is any longer just a theory. We've seen it. I mean, we've seen it happen. Some of you are here today to testify about it. We've seen it with fake robocalls. We've seen it with fake images and fake videos produced and disseminated on social media having to do with candidates. It's not confined to one political party or to one primary. It's happened multiple times all across the country. And I think the dangers of this technology without guardrails and without safety features are becoming painfully, painfully apparent.

I think the question now is, are we going to have to watch some catastrophe unfold? Already, we're watching everyday people have their images stolen, their likenesses commandeered. We're watching folks having their images taken and turned into pornographic material. We're watching news anchors have their images ripped off and turned into false information, effectively dubbing in things that they didn't say. We're watching the effect on elections. Are we going to have to have a further disaster? Are we going to have to have an electoral disaster before Congress realizes, gee, we really should do something to give the public some sense of safety, some sense of certainty about whether what they're seeing and hearing is actually real or in fact manufactured? And I think that is the baseline that we're talking about here. But I want to echo and amplify something the Chairman just said, which is that there are multiple bipartisan, common-sense bills that are ready to go. I'm proud to have worked on them with everybody sitting on this dais, beginning with the Chairman, and Senator Klobuchar, who has worked very hard on this. It's time that these bills got a vote.

I mean, we can talk and talk, and nobody has done a better job of surfacing this issue and bringing facts into the public domain than the Chairman has. But now it's really time to vote. And I just call on the leadership of both parties in the Senate: the leadership needs to support an effort to get a vote, and when I say an effort to get a vote, really, they just need to schedule one. Let's put these bills on the floor, and let's vote. Let's not allow these same companies that control the social media technology, that control this country, that control the news in this country, to also now use AI to further their hammer-hold on the United States of America and our political process. So thank you, Mr. Chairman. Again, thanks to all the witnesses.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hawley, for your work on this issue. I want to turn to Senator Klobuchar, who's really been a leader on this committee, the Judiciary Committee. She also chairs the Rules Committee, which will oversee a lot of this legislation when it hopefully gets to the floor. We're certainly here because we believe there should be a vote. Thanks to Senator Klobuchar for your leadership.

Sen. Amy Klobuchar (D-MN):

Thank you. Well, thank you, Chairman, and thank you, Ranking Member Hawley, for this important hearing and this opportunity to keep this on the front burner. As Senator Hawley just said, we cannot wait. We are scheduling a markup of our bill in the Rules Committee, and we are going to have to work; it's the only committee that both leaders are on, a fun committee to chair. And so I will seek Senator Hawley's help and others' on our bill, which includes Senators Coons and Collins, Senators Bennet and Ricketts, and a whole lot more support on both sides of the aisle, to get the votes, not just so we can obviously pass it, but I'd like to get a really strong vote coming out of committee so we can immediately get this thing heard, because we really can't wait. The elections are upon us. And like any emerging technology, AI has great opportunities but also significant risks.

And this is the one right before us, as well as other issues related to scams. We have to put rules in place, and we can't let the same thing happen that happened with Section 230, when they just acted like these companies were things in a garage; every one of the four of us has been out front on this. Now they're humongous monopolies, and we are all challenged in trying to get these bills forward, whether it's on fentanyl, whether it is on child pornography, whether it is on competition policy, and we have to move these. The fake robocall: I hadn't actually heard it myself, so thank you for that. It is just impossible to tell that that's not Joe Biden, just as it was impossible to tell that one video that ran during the Republican primary involving Donald Trump wasn't accurate.

Or we had an Elizabeth Warren video in which she says Republicans shouldn't vote. That wasn't her, but you couldn't tell. We had a Minnesota example, and this is not AI, but it just shows how devastating this can be. We had a photo the day after the heroes, the two police officers in Burnsville, Minnesota, were killed after rescuing seven kids, and the paramedic who was performing CPR was killed. A photo of an actual rally from 2022, with me kind of in the background, started going around. At the same time, there's some kind of Russian photo that's been circulating for three years saying that I fund Nazis in Ukraine. This photo had a red circle around me in the background, and then they put defund-the-police signs, which were never there, in the hands of the people at the rally.

So the people who did this, and I personally think it was foreign interests, literally took a photo and put those defund-the-police signs in after these officers had been killed, and to their credit, X and Meta labeled it as altered content with a big sign. But it took us about a day to get all this down. It was going all around the internet. That is actually not AI. That's a real photo that they doctored, and people thought it was real. It looked real. And this kind of thing is just going to keep happening and keep happening unless we take immediate action. Eleven states, including my own, have enacted laws to address these threats to our elections, and that's great, but it doesn't cover federal elections, and these aren't all blue states; they're red states, purple states; people are taking action, as seen by the bipartisan nature of our legislation. And we also need disclaimers on other ads that aren't deepfakes, and that's a bill that we will also be marking up in the Rules Committee. So I want to thank my colleagues for doing this. I want to thank them for their willingness to stand up on this issue, and I look forward to hearing the testimony of the witnesses. Thank you.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Klobuchar. I will now introduce our witnesses. This panel is extraordinarily distinguished. Zohaib Ahmed is the CEO and founder of Resemble AI, a research and development lab focused on the creation of generative voice models. He and his team have spent the last five years developing and researching AI voice and detection technology. They are uniquely positioned to understand both the remarkable potential and the possible risks associated with the rapid advancement of voice synthesis and cloning capabilities.

Rijul Gupta is the visionary founder and CEO of Deep Media, a leading deepfake detection and AI security company. With a foundation in machine learning from Yale University and over 15 years of experience writing AI algorithms, he has dedicated his life to developing Deep Media's patented AI technologies and establishing the company as the gold standard in combating threats posed by unethical AI and deepfake misinformation. Ben Colman is the co-founder and CEO of Reality Defender, a cybersecurity company helping enterprises and governments detect deepfakes. He has worked in cybersecurity and data science for over 15 years, including ten years at Goldman Sachs and Google. David Scanlan (R-NH) became New Hampshire Secretary of State in January 2022 after serving 20 years as Deputy Secretary of State. Prior to that, he served eight terms in the New Hampshire House of Representatives, including a term as majority leader, before joining the Secretary of State's office. As is our custom, I'm going to ask you to stand and be sworn. Do you swear that the testimony you will give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you. Why don't we go down the panel beginning with you, Mr. Ahmed?

Zohaib Ahmed:

Absolutely. Thank you. Chairman Blumenthal, Ranking Member Hawley, and members of the committee, thank you for the opportunity to discuss the oversight of AI as it relates to understanding the impact the technology could have on the election. As I was introduced, Resemble AI is a research and development lab focused on the creation of generative voice AI. We've worked with large media companies, game studios, telecom companies, as well as content creators to produce AI voices, and we've spent the last five years developing and researching this voice technology. We've created terabytes' worth of data sets, so we are uniquely positioned to understand the remarkable potential and the possible risks. Over the last nine months, we've opened up a lot of our research regarding responsible voice cloning technology, including research on speaker identification, watermarking, and deepfake detection. In my testimony, I want to share some of the technologies that we've developed since Resemble AI was founded, especially around watermarking and deepfake detection, and share some of the recommendations I might have around transparency and disclosure, safeguards and mitigation, and integrity verification.

I'd like to pull up a couple of slides just to help the audience understand. Great. Sounds great. So, before I jump into any of these audio clips that you'll hear in a few seconds, I want to walk through how some of these AI voices are created. I think it's very important to understand how the technology works. If we take a few minutes or even seconds of audio, like Chairman Blumenthal said, it's super easy to create some of these voices, and these models have become widely accessible at this point. We've always held ourselves to exceptional standards of ethics, and we've developed many guardrails since the inception of the company to make sure that our technology is used safely. The first of these is a built-in speaker identification model, which we open-sourced about two and a half years ago. We actually use it internally to make sure that we get consent from every speaker who uses our technology, so there's no way anyone can just go in, upload a few seconds or minutes of audio, and create voices from there. We also have clear terms for what you can and cannot use the AI voices for. So, I would like to play some of these audio clips in the presentation itself. Maybe we can start with the first one on the left-hand side. One of the audio clips that you're about to hear is real. So we'll go ahead.

Voice 1:

Its lively Establishment now lay in ruins and its memories and stories buried under splintered wood and twisted metal.

Zohaib Ahmed:

The second one there.

Voice 2:

A severe storm tore down the bar and scared the animals...

Zohaib Ahmed:

Go for the third.

Voice 3:

Envision engaging your audience with dynamic real-time conversational agents effortlessly translating voices across multiple languages and effortlessly crafting thousands of personalized messages.

Zohaib Ahmed:

And the last one.

Voice 4:

The storm took everyone by surprise as it created chaos on the streets.

Zohaib Ahmed:

So hopefully, you can take a guess at which one's real. You can do it in your heads right now. We'll go to the next slide.

There we go. The second one is real, so I'm not sure how many of you got that right. We can move on to the next slide. Again, if you guessed incorrectly, you wouldn't be the only one. With voice fakes like the Biden one that Senator Blumenthal mentioned, we've heard him so much that you know what he sounds like; you can pick up on nuances. This was my colleague's voice. Three of these were generated, and one of them was real, as you saw. And as you're all aware, this is happening with much more frequency right now.

We acknowledge that consumer education and awareness is a critical piece of addressing this situation. For the last 12 months, we've been publishing detailed incident reports of every case where AI is utilized for scams. We analyzed the Joe Biden incident. We analyzed the LastPass CEO deepfake incident on WhatsApp. So you have enterprises and consumers who are all being targeted by this widespread technology, and this is all available for anyone on our blog. Let me go over to the next slide.

After creating and open-sourcing the speaker identification model, we then worked to create a neural speech watermarker as well as a deepfake detection model that has 98% accuracy. We've found that this is so critical that we've actually made the deepfake detection tool absolutely free. You can go to detect resemble.ai, and anyone can drag and drop any file or point it at a YouTube link and figure out whether it's fake or real. We've also integrated it into tools like Google Meet, making it widely accessible. I want to jump to some recommendations really quickly here. First and foremost, we support the proposed legislation that requires clear labeling of AI-generated content. To take it one step further, we propose the creation of a public database where all AI-generated election content is registered, allowing voters to easily access information about the origin and nature of the content that they encounter.

This includes the deepfakes that may be out there. To adequately safeguard against misinformation, particularly during critical events like elections, collaboration is key: getting platforms to use watermarking technology or deepfake detection will instantly tell the consumer whether something is real or fake. You won't have instances where you have to wait through a whole delay while things propagate throughout the world, and then you realize, oh, we have to create community notes to backstep. We believe that AI watermarking technology is a readily available solution that can already check the integrity of audio content. We propose that all election-related audio content, including political advertisements, campaign messages, and public statements by candidates, be watermarked with this technology. One of the key aspects of our watermarking is that it can actually persist through training, so when generative models scrape data and train on it, we can figure out from the output of the model where the data came from. That traceability aspect is significantly important. We also recommend the establishment of a certification program, much like the check marks and e-signatures you have today. Setting standards for the effectiveness and reliability of watermarking solutions ensures that only trusted and vetted technologies are used. We're always willing to help facilitate partnerships between the private and public sectors to ensure today's innovation is used responsibly. Thank you for the opportunity to provide insight into voice cloning technology and the preventative measures that can be taken now to ensure the integrity of this year's election.
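
To make the watermarking idea concrete for readers, here is a minimal sketch of the general principle only, correlation-based watermarking, not Resemble AI's neural method: a low-amplitude pseudorandom signal keyed by a secret seed is mixed into the audio and later detected by correlation.

```python
# Minimal sketch of correlation-based audio watermarking (illustrative only,
# not Resemble AI's neural watermarker). A low-amplitude pseudorandom signal
# keyed by a secret seed is mixed into the audio and later detected by
# checking how strongly the audio correlates with that same signal.
import numpy as np

def embed(audio: np.ndarray, seed: int, strength: float = 0.02) -> np.ndarray:
    mark = np.random.default_rng(seed).standard_normal(audio.shape)
    return audio + strength * mark

def detect(audio: np.ndarray, seed: int, threshold: float = 0.01) -> bool:
    mark = np.random.default_rng(seed).standard_normal(audio.shape)
    # Near `strength` for watermarked audio, near zero for unmarked audio.
    score = float(np.dot(audio, mark) / np.dot(mark, mark))
    return score > threshold

audio = np.random.default_rng(0).standard_normal(160_000)  # 10 s stand-in "recording"
print(detect(embed(audio, seed=42), seed=42))  # True: watermark present
print(detect(audio, seed=42))                  # False: no watermark
```

A production watermark must also survive lossy compression, resampling, and re-recording, which is what the neural approaches described in the testimony are designed for.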

Sen. Richard Blumenthal (D-CT):

Excellent. Thank you very much. Mr. Colman.

Ben Colman:

Thank you, Senator. There you go. Ahead of my five minutes: we had a chance to work with Senator Blumenthal's office to record a few real and fake audio clips, which we'd like to play for the group. If we can start with the first one. We're going to ask the audience and those on the dais which ones are real and which ones are fake.

AI impersonation of Sen. Blumenthal:

Hi, my name is Richard Blumenthal, United States Senator from Connecticut, and I'm a diehard Red Sox fan.

Ben Colman:

And the second one, please.

AI impersonation of Sen. Blumenthal:

Hi, my name is Richard Blumenthal, United States Senator from Connecticut, and I'm a diehard Royals fan.

Ben Colman:

And the third and final one.

AI impersonation of Sen. Blumenthal:

Hi, my name is Richard Blumenthal, United States Senator from Connecticut, and I'm a diehard Yankees fan.

Ben Colman:

And as you guys think about which ones are real and fake, we're going to share with you the surprise that they're actually all fake. And really, the challenge and opportunity is that anybody with a Google search and an internet connection can make something as entertaining or as dangerous as they can imagine.

Sen. Richard Blumenthal (D-CT):

Anybody ought to know I'm not a Royals fan.

Ben Colman:

No comment on that.

Sen. Richard Blumenthal (D-CT):

With all due respect to the Ranking Member.

Ben Colman:

Chairman Blumenthal, Ranking Member Hawley, Senator Klobuchar, and committee members, I thank you for your stated concerns about the impacts deepfakes have had on our elections and our democracy, and I thank you for holding this hearing as well as requesting my presence here. It's an honor to provide insights that may help both your committee and the American people, and I applaud the committee's efforts in surfacing this problem in front of the nation. For three years, led by unmatched innovation at American tech companies, rapid advancements in generative artificial intelligence have become a permanent fixture in society. As a longtime cybersecurity professional, I and my team at Reality Defender foresaw the harm these technologies could bring. Years before the current AI boom, we built our company because, after seeing how weaponized content and disinformation impacted our loved ones, we sought to combat the future technological drivers of advanced disinformation, which are called deepfakes.

Deepfakes are AI-manipulated media that impersonate our citizens or create synthetic identities to spread disinformation or commit fraud. They hit the heart of what makes us human, realistic enough that even those of us who have studied AI for years and hold PhDs have at times been unable to tell the difference between real and fake with our naked eyes. Not all AI technology is bad, and while these tools have their benefits, they can also hit core tenets we hold dear. We've seen foreign adversaries use deepfakes in sophisticated disinformation campaigns, with Russian media falsely depicting Ukrainian forces as the perpetrators of the devastating attack on the Moscow music hall. We saw this in America with the robocall of a fake President Biden to thousands of New Hampshire constituents asking them not to vote. I cannot list every malicious and damaging use of deepfakes of the past, present, and future.

What I can do is sound the alarm on the impacts they have, not just on democracy but also on America. Anybody with internet access can create AI-generated audio, video, images, or text to convince and persuade millions of people. This fake media can be distributed and shared instantly over social media platforms. The more incendiary the content, the faster it spreads. Trust and safety teams at these platforms once blocked misinformation and fraud from spreading, but now those teams barely exist, leaving the onus of detection and verification on the users. Ahead of our 2024 election, in a year when two-thirds of the world will be voting in similar elections, we've seen the blueprints of deepfake-fueled interference in Taiwan's and Slovakia's most recent elections. In these cases, materials appeared and instantly spread to millions. The responses pinpointing them as deepfakes took substantially longer to spread. By the time a deepfake spreads widely, any report calling it a deepfake is also too late.

Uncovering the truth will always be slower and harder than spreading a lie. The same type of deepfake-enabled operations can and have happened here. They'll continue to have more damaging results as deepfake technology catapults ahead. This is not fear-mongering, AI alarmism, or conspiracy-minded hyperbole. It is simply the logical progression of the weaponization of deepfakes. To protect our democracy and the media that drives it, legislation must mandate that content platforms are responsible for the urgent detection and removal of dangerous deepfakes and AI-generated media. I applaud members of this committee on their Protect Elections from Deceptive AI Act. Unlike measures that have more or less given the pen to the largest content and social platforms, this law has real teeth and is a great start, but we can go further by imposing real penalties on bad actors using deepfakes to morph reality and on the platforms that fail to stop their spread.

Federal laws should outline penalties specific to the severity of using deepfakes in election disinformation crimes, as the state of Minnesota has done. AI developments move fast; legislation must move faster, forecasting and anticipating the rapid improvements in the quality and application of deepfakes, all built by companies who move fast and break things. The things here are aspects of society everyone in this room holds dear: democracy, truth, trusting your eyes and your ears. It's not a stretch to say that these are at stake when anyone can instantly create a deepfake of anybody saying anything and spread it to millions of people. We must treat deepfakes with equal or greater importance than the worst kinds of content that existed before, precisely because they get to the heart of what makes us human. We must act quickly, or we'll be taken by surprise by new attacks on democracy, on elections, and on the very concept of truth.

Sen. Richard Blumenthal (D-CT):

Thanks, Mr. Colman. Mr. Gupta.

Rijul Gupta:

Senator Blumenthal, Senator Hawley, Senator Hirono, Senator Padilla, thank you for having me here. I am truly humbled to be here in front of you. My name is Rijul Gupta. I was born and raised in a small town in Oklahoma. I started building apps and websites when I was 10 years old. I'm a hacker at heart, but an entrepreneur by trade. I started building machine learning applications when I was just 15. I went to Yale, where I studied machine learning academically, and a couple of years after graduating, I started reading papers about what we now call generative AI. In 2017, I founded Deep Media because I knew deepfakes were coming, and I committed my life in that moment to solving the deepfake problem. Ever since then, we have worked tirelessly to make sure that people have technology to solve this problem. But first, before getting into that, I think it's important that we define what a deepfake is.

A deepfake is a synthetically manipulated, AI-manipulated image, audio, or video that can be used to harm or mislead. This does not cover text, right? Whether you believe that human beings evolved over time or whether we were designed this way, the human mind is hijacked by image, audio, and video, and that type of synthetic media content really has the potential to completely dismantle society. I'm not going to go into too much technical detail, but as legislators, if you're going to legislate medicine, you need to know the difference between Tylenol and Tamiflu. So I want you to keep three terms in mind when you're talking about this technology. The first is the transformer; it is a type of architecture. The second is a generative adversarial network, a GAN. And the third is a diffusion model. Those three fundamental technologies are what generative AI is about.
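
To give readers a concrete handle on the second of those three terms, here is a minimal, purely illustrative GAN training loop in PyTorch (toy dimensions, random stand-in data): a generator learns to produce samples a discriminator cannot distinguish from real ones, which is the arms race that makes fakes keep improving.

```python
# Minimal sketch of the adversarial setup behind a GAN (toy dimensions,
# random stand-in data; purely illustrative, not any witness's system).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> fake sample
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 32)  # stand-in for real media features

for step in range(200):
    # 1) Train the discriminator: label real data 1, generated data 0.
    fake = G(torch.randn(128, 16)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator call its output real.
    fake = G(torch.randn(128, 16))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```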

Those three cover about 90% of it. All of these models require massive amounts of compute resources and massive amounts of data. We're talking about millions of identities here, so just keep that in your heads when you're thinking about this technology. We've all talked about how deepfakes are coming and how they're basically here, but one thing that I think is hard for most people to understand intuitively is scale, right? These deepfakes and these AIs, they're getting really, really good, really fast, right? The quality is basically perfect now, and they're getting really cheap to produce. Right now it's about 10 cents per minute for video, and that's going down to 1 cent really, really quickly. And the percentage of AI-generated content on online platforms is approaching as much as 90% by 2030. So we all know the harms; it's important you know the scale of these harms. Now, we've already seen them impact the elections, right?

We have the deepfakes of President Biden announcing the draft, the deepfakes of President Trump getting arrested, the deepfakes of Hillary Clinton endorsing Governor DeSantis, right? All of those are about political assassination. We are also seeing deepfakes used to create groundswell support, like the deepfakes of President Trump with Black voters. So it's important to know that these deepfakes are going to be used for political assassination, but also for the opposite, to make politicians seem more relatable. But I think a bigger threat is actually not the fake content; it's what the fake content does to the real content, right? When anything could be fake, you don't know what's real anymore. And so we're going to start seeing plausible deniability come into play here, where politicians or anyone in business or anyone at all could just claim an image, audio, or video is a deepfake, and that is fundamentally dangerous.

People think that AI is going to be like the Terminator. It is much more likely to create a society like 1984. That's what we need to be worried about when we're talking about deepfakes. But in Silicon Valley, we like to take a solutions approach. So I am here to tell you today that solutions to this exist, but they need to have buy-in from government stakeholders, generative AI companies, the platforms, investigative journalists, even local journalists, and deepfake detection companies themselves. Those five groups of people need to work together to solve this problem. I am proud to say that we have helped people like Donie O’Sullivan at CNN, Geoff Fowler at the Washington Post, and Amanda Florian at Forbes detect and report on deepfakes. We are members of the WITNESS organization, an independent group that surfaces deepfake detection to reporters. We are part of the DARPA SemaFor and AI FORCE programs that bring in researchers, corporations, and government resources to solve this problem.

We are part of the Content Authenticity Initiative, alongside companies like Adobe, which tries to label real content and fake content. We also lead several of our own committees that bring in the generative AI folks to label their content, people in research to work on detection, and big tech platforms to adopt this technology to keep people safe. I am a believer in the free market. I fundamentally think AI can be used for good. I believe deepfakes represent a market failure. They represent a tragedy of the commons; fraud and misinformation are a negative externality, and if we legislate this properly, we can internalize that negative externality and make the AI ecosystem flourish. And with that in mind, I would like to take just a couple of minutes to show you how we can solve the deepfake problem. So I have a couple of slides that I'd like to show you, and I want you to get in the mindset of how an AI sees media, right?

That's kind of what we think about. We try to look through the AI's eyes in order to detect it. So again, if we can show the slides here on the screen, go to the next one, please. Here are some examples of what our system looks like. On the left there, we are mapping the proliferation of deepfakes over time, as well as the cost to society if we don't solve this problem: the cost of fraud, misinformation, and other crimes, right? However, our platform can deliver solutions at scale across image, audio, and video. Next slide, please. Here we see some examples of real content and fake content. Again, it's not just about detecting a deepfake, right? It's being able to detect a deepfake while not saying a real thing is fake, right? That's critical. So our false positive and false negative rates are very, very low. And if we have a little bit of time, I'd like to show you just how an AI sees audio. We have some images up here, but on the next slide we're going to see how an AI sees audio. And this is actually a real piece of audio here. That yellow and blue graph is what an AI sees when it's looking at a person's voice and trying to learn from it. This is an example of our detectors picking this up as real, and you can fast forward through this. I don't want to take up too much time, but if you want to go to the final slide, that's an actual real political deepfake.

Fast forward? This one, the illegal Russian. This is a real video; we picked it up as real, right? And the AI is tracking the face; it's picking up certain key points on a person's face. And if you go to the final slide here, this is a deepfake that was produced recently, and maybe we can just play the whole thing.

AI voice illustration:

Hi, I'm Kari Lake. Subscribe to the Arizona Agenda for hard-hitting real news and a preview of the terrifying artificial intelligence coming your way in the next election. Like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.

Rijul Gupta:

This is the highest-quality deepfake made today. It is using generation models that aren't released publicly; they used their own generation models that they created, hyper high quality, and we picked it up, right? So it's about staying on top. At Deep Media, we are both the cat and the mouse in the cat-and-mouse game. We have generative AI technology, but we don't give it out to people. We keep it internally and use it to train our detectors, and that is why we're here setting the gold standard. So I'm again honored to be here and happy to answer any questions. You all are the policy folks, and I'm here to provide as much information as possible about what the solutions from a technical standpoint actually are.
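
For readers: the "yellow and blue graph" Mr. Gupta describes is a spectrogram, a time-frequency image of audio. A minimal sketch of how a detector "sees" sound this way, using librosa and a synthetic test tone standing in for a real voice recording:

```python
# Minimal sketch of how a detector "sees" audio: as a mel spectrogram, a 2D
# time-frequency image that a classifier then inspects for telltale artifacts.
# A synthetic 440 Hz tone stands in for a real voice recording here.
import numpy as np
import librosa

sr = 16_000
t = np.arange(sr) / sr                               # 1 second of samples
y = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

mel = librosa.feature.melspectrogram(y=y, sr=sr)     # (n_mels, frames) power matrix
mel_db = librosa.power_to_db(mel, ref=np.max)        # log scale, as models typically ingest
print(mel_db.shape)  # this 2D "image" is the detector's actual input
```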

Sen. Richard Blumenthal (D-CT):

Thank you. Thank you, Mr. Gupta. Secretary of State Scanlan.

Secretary David Scanlan (R-NH):

Thank you, Chairman Blumenthal, Ranking Member Hawley, and Senate members, for the invitation to be here today. Actually, Senator Padilla, it's great to see a former Secretary of State here on the committee as well. On the weekend before January 23rd, when New Hampshire held its first-in-the-nation presidential primary, everything was going very smoothly. The candidates were out doing their last-minute campaigning. All of the polling places were set up and ready to go. They had plenty of ballots, and in typical New Hampshire fashion, we were ready to conduct a really good election. The weekend went fine, and all of a sudden on Sunday, I started getting some phone calls from reporters asking if I knew anything about a robocall that was taking place with President Biden. I went to bed that night wondering what was up. First thing in the morning, we conferred with the attorney general's office, and it was apparent that there was a robocall using AI with President Biden's voice on it, asking individuals not to vote in the election because, for Democrats, their vote was more important to support him in the general election.

Interestingly, the robocall was spoofed, and I understand that's a term where a call is made to appear to come from somebody else's phone number, in this case a prominent Democrat in the state of New Hampshire who was a former state party chair and a former member of the Democratic National Committee. Because her phone was associated with the robocall, she started getting calls from acquaintances asking her to clarify what was being asked in the robocall. She very quickly figured out what was happening and reported the incident to the attorney general's office, and they opened up an investigation. Fortunately, when there is a major election taking place in New Hampshire, the media, both state and national, are on top of it. They are looking for something to report, especially when things are running very, very smoothly. And so when this surfaced, they jumped all over it, which was actually an opportunity for my office, the attorney general's office, and the governor's office to inform voters of what was occurring, let them know that what was being said on the robocall was a form of voter suppression, that it was illegal, and that in that specific instance, they should ignore it and make sure that they participate in the election.

And every indication is that they did. New Hampshire had a record turnout not only in the Republican primary but also in the Democratic primary for an election in which an incumbent president is running for a second term; New Hampshire broke a record in the turnout. So, it is hard to tell how much of an impact that particular robocall using AI actually had on the voting population. The Attorney General's office estimated, through the investigation that they have done to date, that there were between 5,000 and 25,000 calls made in the state of New Hampshire to voters with that information. So clearly, it did not seem to have an impact on that election. In hindsight, looking back, the call itself was kind of primitive, and it is something that could have been done with an impressionist, a real live person who could imitate the voice of the president in this case, and could have done the same thing with a robocall.

What was concerning was the ease with which a random member of the public, who really doesn't have a lot of experience in AI and technology, was able to create the call itself. And I think that if you add video to go along with that, and we saw some great examples here, you could show candidates in compromising situations that never existed. It could be a state election official giving misinformation about elections, and worse. And to me, that is incredibly problematic. Now, I know that there are instances of parody and humor, and I've seen AI with prominent politicians doing funny things, and it is funny, but it's also quite obvious. I think we have to get a handle on when AI in elections is intentionally deceptive and malicious; we need to be able to recognize it, stop it, and prosecute it.

Sen. Richard Blumenthal (D-CT):

Thank you very much. Thanks to all of our witnesses for your really excellent testimony. Mr. Scanlan, you hypothesize, because we can't know for sure, that the Biden deepfake had minimal impact, but we can't be certain what the vote would have been but for those calls. And I understand there is an investigation ongoing; the Attorney General is conducting it under New Hampshire law. I assume it's criminal law as well as civil, but there are no federal remedies. In your view, would it be helpful to have criminal penalties under federal law specifically aimed at this kind of deception? And I think it was Mr. Colman who suggested that criminal penalties could be an effective deterrent, but they have to be really more specific and stringent than they are now.

Secretary David Scanlan (R-NH):

Mr. Chairman, I have to agree that we truly don't know what the impact was on the New Hampshire presidential primary. We only know that we had a good turnout, and the results were what they were. And we still have an active prosecution going on. The AG in New Hampshire has identified a company or companies that participated and an individual who is a suspect, and they're moving forward with that. At some point, I believe that there is a federal component to this because it's going to be a national problem, and I would like to give a shout-out to Cait Conley, who works with Jen Easterly at CISA. She was in New Hampshire on the day of the presidential primary, and she traveled around to polling places with me to try and get a handle on how big this thing actually was, even though that was difficult to determine. But yes, I think that these things in a national election are going to be generated nationally, whether it's foreign actors or some other malicious circumstance, and I think we need uniformity and the power of the federal government to help put the brakes on those. For instances that happen locally, certainly federal government assistance would be helpful, but I think that should remain the prerogative of state law enforcement and the Attorney General.

Sen. Richard Blumenthal (D-CT):

With assistance from federal authorities where it's appropriate. Let me ask you and the other witnesses: Senator Hawley and I have proposed a framework that includes an independent oversight entity, a set of standards that would be imposed by that entity, a requirement for some licensing before models are deployed, testing to assure that they are safe and effective, just as the FDA reviews drugs to make sure they are safe and effective, and potentially penalties such as we've been discussing, as well as export controls to assure that our national security is protected. Just for the sake of speed, I'm assuming that all of you would agree that some kind of framework like that one makes sense.

Rijul Gupta:

I actually have specific thoughts on that framework. I think it's a good start, but I really think it's important that whatever framework we set adopts what's called a defense-in-depth approach, right? So we need metadata, watermarking, and cryptographic hashing, which is a little complicated, but it means invisible watermarks and a hash database, kind of like NCMEC's, plus AI detection and AI poisoning. It also needs to cover both the generative AI platforms and the online platforms; we need both of those folks. We can't just license generative AI companies and leave it at that, honestly. We need government buy-in, generative AI buy-in, platform buy-in, journalist buy-in, and then detection companies.
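
A minimal sketch of the hash-database layer Mr. Gupta mentions, in the spirit of NCMEC-style matching: flagged media is fingerprinted once, and new uploads are checked against the database. Real systems use perceptual hashes that survive re-encoding; the exact SHA-256 match below is only to illustrate the flow.

```python
# Minimal sketch of a known-fakes hash database (NCMEC-style matching).
# Production systems use perceptual hashes robust to re-encoding and cropping;
# exact SHA-256 matching is shown only to illustrate the flow.
import hashlib

known_fakes: set[str] = set()

def fingerprint(media: bytes) -> str:
    return hashlib.sha256(media).hexdigest()

def register_known_fake(media: bytes) -> None:
    known_fakes.add(fingerprint(media))

def is_known_fake(media: bytes) -> bool:
    """True if this exact file was previously flagged as a deepfake."""
    return fingerprint(media) in known_fakes

register_known_fake(b"...bytes of a flagged deepfake...")
print(is_known_fake(b"...bytes of a flagged deepfake..."))  # True
print(is_known_fake(b"...a brand-new upload..."))           # False
```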

Sen. Richard Blumenthal (D-CT):

And all of those points are encompassed by our framework, particularly the watermarking.

Rijul Gupta:

Yeah, I really think watermarking is getting a lot of attention here, and it really doesn't solve that much of a problem. You need cryptographic hashing, invisible watermarking. That's really important.

Sen. Richard Blumenthal (D-CT):

Mr. Colman.

Ben Colman:

Yeah, just to add onto that, and to unpack two things here. We're talking about watermarking and cryptographic hashing, effectively what's called provenance. It's either there or it's not. The challenge with that is it presupposes that everybody's going to follow those same rules, that all the bad actors will follow the same rules, and we've seen time and time again that a lot of the applications, whether they're on your phone in the app store, or online, or open-sourced, just aren't going to follow the rules. So we can't expect everyone to say, hey, we're going to play nicely within this walled garden, when the bad actors, by definition, are not playing by the rules at all. And so, with Reality Defender, we focused on inference. We don't touch any watermarking. We don't touch any personal data. We actually assume we'll never see the ground truth; we'll never even know if it is real or not. Which means instead of saying yes or no, we're taking a more measured probabilistic approach, a probability saying maybe we're 95% confident, maybe we're 62% confident. We build that into a larger framework as just one signal among many to make a better insight, to have a platform or a team decide to block or flag a piece of media or a person or an action.
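
To illustrate the "one signal among many" idea, here is a minimal sketch; the weights, thresholds, and companion signals are invented for illustration and are not Reality Defender's actual model.

```python
# Minimal sketch of blending a detector's probabilistic output with other
# risk signals (weights, names, and thresholds are illustrative assumptions).
def risk_score(detector_confidence: float,
               sender_reputation: float,
               account_age_days: int) -> float:
    """Blend signals into a 0-1 risk score; higher means more likely fake."""
    new_account = 1.0 if account_age_days < 30 else 0.0
    score = (0.6 * detector_confidence
             + 0.25 * (1.0 - sender_reputation)
             + 0.15 * new_account)
    return min(1.0, score)

score = risk_score(detector_confidence=0.62, sender_reputation=0.40, account_age_days=12)
action = "block" if score > 0.8 else "flag" if score > 0.5 else "allow"
print(score, action)  # roughly 0.67, "flag": a measured response, not a hard yes/no
```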

Sen. Richard Blumenthal (D-CT):

We're going to adhere to five-minute rounds on the first round. I hope to come back to this line of questioning, and I apologize that others of you, Mr. Ahmed, may have comments as well, but in deference to my colleagues who have other commitments, I'm going to turn to the ranking member.

Sen. Josh Hawley (R-MO):

Thank you, Mr. Chairman, and thanks to everybody for being here. Mr. Colman, you raised in your opening statement what is, I think, my nightmare scenario: you made the comment that pretty soon anybody with an internet connection is going to be able to access and use deepfake technology. I wonder if we're there already, though. I mean, I'm looking at this article from the New York Times just a few days ago. The headline is: “Teen Girls Confront an Epidemic of Nudes in Schools.” The details of this are just unbelievable. This is a young girl, Francesca Mani, at a high school in Westfield, New Jersey, and she's a 10th grader. All of this is in the article. She's a 10th grader, and she found out that a number of boys in her class had used artificial intelligence software to fabricate sexually explicit images of her and a number of her friends, and they were then circulating them online, showing them to classmates, and putting them onto platforms.

Now, I presume that these teenage boys didn't pay a lot of money in order to do this. In fact, the article goes on to say that they used widely available nudification apps to create these fake photos. So they take the photos of their classmates from Instagram or wherever, feed them into this app, and then here you are, and it probably cost them almost nothing. So I guess my question to you is, are we at the point now with this technology where we're going to see a flood of AI-generated CSAM, a flood of other sexually explicit material created of adults or young adults? I mean, is this the point that we're at now?

Ben Colman:

Ranking Member Hawley, we were at that point six months ago, and the challenge for us right now is that while the US is leading the development of a lot of these novel technological tools, we're not leading in regulations to protect from these tools. We have Taiwan, we have Singapore, we even have China with more advanced regulations in this space. And to your point, beyond elections, thinking about different types of equally or more dangerous risks from deepfakes, there was recently a House Oversight and Accountability Committee hearing referencing various scary statistics: that 98% of all deepfakes are actually pornographic, that 99% of the people targeted by deepfakes are women, and that the 40 most popular deepfake pornography websites have hosted over 143,000 unpermissioned deepfake pornography videos just in the last year, getting over 4 billion views. Those numbers are more than the previous 10 years combined. So when I say this is already a problem, it's been a problem. We're waking up to it now, and the election is just one problem that the larger world of regulations can solve for.

Sen. Josh Hawley (R-MO):

So I guess my question is, given that, what are the most effective regulatory avenues to pursue? I mean, how are we going to empower people like Francesca and her parents and the hundreds of thousands, soon perhaps millions, of women who have had their images used, commandeered we'd say in a legal sense, and turned into this sexually explicit material? How are we going to empower them?

Ben Colman:

It's quite simple. You mentioned CSAM imagery. There's a really nice framework in both national and state-level regulations in this space. When you upload something on, for example, YouTube, it's checking for a few things. It's checking for violence. It's checking for underage imagery. It's checking for, are you uploading the latest Drake song? That's because of regulations. So scanning for generative media would be just another check within that same flow; it's nothing new, nothing novel. It just needs regulation to push it forward, to require the platforms to protect consumers. In the absence of this, we have things like community notes, which only actually do anything once things have been shared a hundred million times, or, worse than that, we have content moderation teams, which we've seen be slashed, and they don't really do anything at all. So the challenge here right now is that the technology exists. We have folks on this dais who can actually solve for it, but we need regulations to require the platforms themselves to use us, the same way they're required to scan for underage imagery.
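
A minimal sketch of what adding that generative-media check to an existing upload-screening flow could look like; every checker below is a hypothetical stand-in for a dedicated detection service, not any platform's actual pipeline.

```python
# Minimal sketch of adding a generative-media scan alongside the checks
# platforms already run on uploads; every checker here is a hypothetical
# stand-in for a dedicated detection service.
from typing import Callable

def looks_violent(data: bytes) -> bool: return False           # stand-in
def matches_csam_hashes(data: bytes) -> bool: return False     # stand-in
def matches_known_audio(data: bytes) -> bool: return False     # stand-in (copyright)
def looks_ai_generated(data: bytes) -> bool: return False      # stand-in deepfake detector

CHECKS: list[tuple[str, Callable[[bytes], bool]]] = [
    ("violence", looks_violent),
    ("underage_imagery", matches_csam_hashes),
    ("copyright", matches_known_audio),
    ("generative_media", looks_ai_generated),  # the one new check proposed here
]

def screen_upload(data: bytes) -> list[str]:
    """Return the names of every check the upload fails."""
    return [name for name, check in CHECKS if check(data)]

print(screen_upload(b"example upload bytes"))  # [] -- all stand-ins pass
```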

Rijul Gupta:

Sorry, a really quick point on that. I think it's important to understand that deep nude images are shared on WhatsApp and things like that, and those are end-to-end encrypted. You can't detect that; it doesn't make any sense to try to solve that specific deep nude case with detection. You need AI poisoning, which is part of the defense in depth. Anytime you upload an image to Instagram, you can poison it so that if someone tries to deep nude it, it turns out as garbage. So specifically for deep nudes of images posted to Instagram, AI poisoning is the solution.
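
A minimal sketch of the AI-poisoning idea: add an imperceptible adversarial perturbation to an image before posting so that models which later ingest it extract garbage features. The one-step FGSM-style attack on a toy surrogate encoder below is purely illustrative; deployed schemes are far more sophisticated.

```python
# Minimal sketch of image "poisoning" before upload: a one-step FGSM-style
# perturbation against a toy surrogate encoder (a stand-in; real tools
# attack the feature extractors that generative models actually use).
import torch
import torch.nn as nn

surrogate = nn.Sequential(                      # toy stand-in feature extractor
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 64 * 64, 16),
)

def poison(image: torch.Tensor, epsilon: float = 4 / 255) -> torch.Tensor:
    """Nudge a 3x64x64 image (values in [0, 1]) to disrupt feature extraction."""
    adv = image.clone().requires_grad_(True)
    # Toy objective: push the encoder's features toward extreme values so
    # downstream models that rely on them degrade.
    loss = surrogate(adv.unsqueeze(0)).norm()
    loss.backward()
    return (adv + epsilon * adv.grad.sign()).clamp(0, 1).detach()

protected = poison(torch.rand(3, 64, 64))   # visually near-identical, model-hostile
```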

Sen. Josh Hawley (R-MO):

Yeah, I hear what you're saying. All of that sounds good, and I hear what you're saying about the platforms' current obligations and the current rules that they have in place, for instance, to detect CSAM and so forth. But the problem is that this committee and other committees have heard mountainous evidence that these same platforms are absolutely awash in CSAM that is not digitally created. It's not synthetic. It is real; it's actual people, which is even worse. And they say that they're trying to do their best, but the internet, particularly Facebook and Instagram, is absolutely overrun. Which brings me to what I think has got to be part of this conversation: we have got to allow Americans, ordinary Americans, individual citizens, to get into court and to hold legally accountable the companies who are producing or hosting this content. If we don't do that, I don't see how we change the incentive structure. If Instagram fears that it's going to get a billion-dollar jury verdict against it, it will adopt all kinds of new technology. But if it doesn't, it won't. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hawley. Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

So true, Senator Hawley. Right now, I'm going to focus on elections, but I will say those are startling numbers, Mr. Colman, and I think it is just what we are seeing, both real people and AI-created people. It's one of the reasons that we got the SHIELD Act through here, which is not the liability fix that Senator Hawley was mentioning, which I also support, but it is about getting the information to law enforcement and the like to be able to make it easier to go after these perpetrators. We can just sit here and do nothing, we can pass resolutions, but unless we empower people to go after these cases and equally create liability, it's just going to get worse and worse. And at some point, the public will have had it. I don't know if that's this year, but it's going to happen.

And so I keep telling my colleagues this. So, let's go to a few things here. On the bill that I mentioned that Senators Hawley, Coons, Collins, Bennet, Ricketts, and others have with me: could you tell me, Mr. Colman, how AI has this potential to turbocharge election-related disinformation and why we can't just rely on disclaimers and watermarks? I think you can do that for a subset of it; I don't think you should do it for all uses of AI, and we have a labeling bill that I think differentiates that. But for the really bad stuff that Secretary of State Scanlan was referring to, tell us why it's not enough. This is a softball, but why isn't it enough to run a whole ad and have a little label underneath when people think it's the actual candidate, but it's not?

Ben Colman:

We agree on that. I think, to paint the larger picture, what we saw during the primaries was a single static deepfake, prerecorded, kind of a one-to-many attack. It didn't change. It wasn't even live. Imagine a world where that was a one-to-one attack, where instead of being prerecorded, it was actually live, and instead of being from one to many, it was one-to-one, where it's coming from your husband, your wife, your boss saying, hey, Ben, we need you in the office at 6:00 AM, I know it's a voting day; or to an election official, hey, we're moving your precinct, we need you to be across town, three hours away. And so that's where this is going to go. It's not going to be a single prerecorded, arguably medium-level deepfake. It's going to be a real-time custom deepfake in conversational language, having people do all kinds of things at all levels of the election system.

So, on our side, we see this as a massive issue, not just in the US but globally. And what's great here is that on the dais, we have different technologies all solving very much the same issue. It's all possible now. We have large companies, we work with large banks, large government groups, and large media organizations that are thinking long-term and already solving for this. We have banks scanning incoming phone calls, every single phone call. But we don't have anyone protecting average consumers. My parents, my grandparents, they just don't stand a chance. With other technologies, whether it's CSAM or, for example, a computer virus, they don't have to be experts. They don't have to spot ransomware or an APT; they just know that their email provider will actually block it for them. We're looking for the bare minimum there, which is just letting us know that maybe something might be fake and then allowing us to decide; maybe we wouldn't want to see it, maybe it won't go viral. But right now, the things that are most extreme go the most viral, and the platforms that do think about this are already solving that using technology like ours.

Sen. Amy Klobuchar (D-MN):

Right. Very good. And I do want to note that drafting this deepfake bill wasn't easy. We had to look at allowing satire and all these kinds of things within the framework of the Constitution, and we had Democratic and Republican lawyers look at this to figure out what would give us a chance. I just think if the platforms can point to something, as opposed to laws that aren't quite on point (11 states have done this for state elections, but not federal), and say, we've got to take this down, we're going to be in a much better place than we are with a little label that they may not even notice. And the labels, I think they're important for some of this, but I don't think they can be the only answer. Mr. Scanlan, birthplace of democracy, no kidding; I spent a little time in your state there. I know you cherish democracy very much. Could you talk about what other federal support would be helpful in taking this on, in addition to stronger laws?

Secretary David Scanlan (R-NH):

Secretaries of State, for a good decade now, have been dealing with misinformation and disinformation generally, and that takes on many different forms. There's no question that today, voters receive their news in different formats than they did 20 years ago. A lot of that news is electronic; it's on their cell phones. Many voters believe exactly what they see in that format, on that media, without question. So in addition to whatever might be appropriate to help states recognize and put the brakes on malicious technology in terms of deepfakes, I think we have to make a really strong effort on the fundamentals of transparency, and on educating voters about the election systems, how they run, and what the checks and balances are that protect them in the polling place.

Sen. Amy Klobuchar (D-MN):

Right. I've always found it interesting: those Baltic states on the border with Russia, where Russia is putting out misinformation, lies, and over time, because of education, people have kind of seen through some of it. It is possible. It can't be our only answer, given what everyone's being exposed to, but I think it's a good point. And we have the Election Assistance Commission, of course. But I did want to say that I appreciate, with a Republican Secretary of State here, how seriously you and the Attorney General and others in New Hampshire took this egregious breach, with the guy that did an interview afterward; maybe they should have hired a mime instead of a magician. In the end, I just think that we've got to make clear there are consequences when this happens as well. I have other questions, but I don't want to go over into my colleagues' time; I already have. I'll ask them on the record. Mr. Ahmed and Mr. Gupta, thank you so much for being witnesses today. We have to be as sophisticated as the people who are messing around with our democracy and our laws, and that's why we've got to get these bills done. Thank you.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator. Senator Hirono.

Sen. Mazie Hirono (D-HI):

Thank you, Mr. Chairman. Maybe this is something that Secretary Scanlan can talk about. We have laws that we're contemplating passing in this subcommittee as well as in the full committee, but where does educating the public come into play? Letting them know that, as we approach or are already in an election season, they should expect to encounter AI-created audio, video, fake material. Where does educating the public to the fact that they will be subjected to all of this come into play, to inoculate them against the impact of this kind of deepfake material?

Secretary David Scanlan (R-NH):

I think the states are best suited to deliver the message.

Sen. Mazie Hirono (D-HI):

Are they, though, in reaching their voters?

Secretary David Scanlan (R-NH):

I believe they are. I believe that there is a role for the federal government to assist them in that, because elections are run differently in every state. But this issue with AI, and the impact that it can have, could actually bring down an election if it's done successfully. And that's a national problem, not a state problem. The states, I think, are prepared to help deal with it, but the narrative has to be uniform.

Sen. Mazie Hirono (D-HI):

Well, what I'm getting at is that the use of AI, I think, is going to be very prevalent in this upcoming election season. The voters should know that they will be subjected to it. They may not know it; they may not even believe that it's happening to them. Something happens, and then after the fact, you have a press conference and you say, oh, there was a fake President Biden telling people not to vote. That's after the fact. How do you inform people beforehand that they should be aware? Are states doing this, and does that play a role? That's the question I have, because I don't know that this is happening in the states. It's usually after the fact that people are informed.

Secretary David Scanlan (R-NH):

In New Hampshire, we're trying to raise the level of awareness of voters so that they know what to expect during an election cycle. I don't know that we can do any more than that. Some of these deepfakes can be incredibly realistic, and I don't know how we deal with that in real time.

Sen. Mazie Hirono (D-HI):

Mr. Colman, do you have something to add?

Ben Colman:

Yeah. At Reality Defender, we don't sell directly to consumers; we sell to large companies, for example, large investment banks. And large investment banks have an internal challenge educating their employees, whether about deepfakes, ransomware, spam, or other kinds of scams. What we've seen work, and the only thing that works, is to actually try to scam the employees, then teach them what happened and let them look back and see it. As an example, one of the most standard tools to educate employees about phishing campaigns is to actually send them simulated phishing emails and then afterward ask them whether they thought the emails were real or fake, and to do that on a continuous basis over weeks and months. I can't speak to whether it's state or federal, that's your world, not mine, but I can imagine that, given we're all talking about cybersecurity education and hygiene, a very similar approach could work at a very large scale.
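[Editor's note: the continuous send-and-review loop Mr. Colman describes can be sketched in a few lines of code. The Python below is a minimal illustration of that training exercise; the names (Employee, send_lure, run_campaign), the randomized response model, and the example addresses are assumptions for illustration only, not any vendor's actual tooling.]

```python
import random
from dataclasses import dataclass, field

@dataclass
class Employee:
    email: str
    history: list = field(default_factory=list)  # (lure_id, judged_fake) pairs

def send_lure(employee: Employee, lure_id: str) -> bool:
    """Deliver a simulated phishing email and record the employee's
    after-the-fact judgment (real vs. fake). Randomized here as a
    stand-in for an actual survey response."""
    judged_fake = random.random() < 0.6  # placeholder response model
    employee.history.append((lure_id, judged_fake))
    return judged_fake

def run_campaign(employees, lures, rounds=4):
    """Repeat the exercise over several rounds, as the testimony suggests,
    and report each employee's detection rate."""
    for r in range(rounds):
        for emp in employees:
            send_lure(emp, f"round{r}:{random.choice(lures)}")
    return {e.email: sum(f for _, f in e.history) / len(e.history)
            for e in employees}

if __name__ == "__main__":
    staff = [Employee("a@bank.example"), Employee("b@bank.example")]
    print(run_campaign(staff, ["invoice", "password-reset", "ceo-request"]))
```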

Rijul Gupta:

I do think it's worth pointing out that deepfakes, whether image, audio, or video, are a new paradigm, right? It's a new reality. There is evidence from the University of Maryland School of Public Policy that when voters are informed about what policy looks like, there is wide bipartisan support for federal regulation. There just is; I believe it's 89% support across the board. And I can share that report with you if you're interested.

Sen. Mazie Hirono (D-HI):

I think one of you said that there are countries, Taiwan and even China, that already have legislation to protect against the use of AI-created deepfakes. So are any of those approaches applicable, do you think, to our country, Mr. Colman?

Ben Colman:

Absolutely. And just at a high level, the majority of use cases for AI and generative media are great. They're going to help the world do a lot of great things: help create medicines faster, solve all kinds of societal issues. This is one very small issue that has very large asymmetric penalties, as they say. And what we're seeing in other countries, whether it's Taiwan, Singapore, or Japan, even China, but also the UK, the European Commission, and Canada, is that the first step, the bare minimum, is just indicating that something may be fake. I'm not saying they're blocking it or taking it down or penalizing the user, just saying it may be fake. And full disclosure, I'm a Google alum, but Google has taken an interesting approach of now requiring uploaders to confirm whether or not their uploads are generative media. If you don't confirm it and it's later found out, you might lose your account. So certain platforms, certain organizations are thinking long-term about this and saying it's going to happen anyway. It's already working in other parts of the world; it might work here as well. It's a stepwise approach, along with education, which I think you were discussing with the Secretary of State as well. But there are certain stepwise approaches that are absolutely applicable here in a year when elections are going to be paramount.

Sen. Mazie Hirono (D-HI):

I think that these platforms will start paying a lot more attention to the content on their platforms if we start to move toward eliminating Section 230 liability protections. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hirono. Senator Whitehouse.

Sen. Sheldon Whitehouse (D-RI):

Thanks, Chairman. I just wanted to find out from each one of you if you think that there's a particularly good source, like, for instance, New Hampshire's Attorney General, or a particularly good article or analysis, on where the gaps are in the criminal law that should be plugged in order to deal with the problem of deliberate and malicious AI fakery. Do fraud laws need to be adjusted? Does this need to become a RICO predicate? Go ahead, Mr. Gupta, but I'd also like to hear from any of you: if you're not the expert but there's somebody you admire or think does good work in this space, let us know, because we're trying.

Rijul Gupta:

Yeah, definitely. Again, this report from the University of Maryland School of Public Policy by Steven Kull is really in-depth, and it talks about what the people think should be done. They actually take time to educate people about certain policies and then poll them about whether there should be an independent federal organization, what kind of laws should exist, and how strict those laws should be. I'm not the expert; legislators are the experts. But I fundamentally believe that the public's opinion should inform legislators, so I would highly recommend it.

Sen. Sheldon Whitehouse (D-RI):

That would be one good place to look. Mr. Secretary?

Secretary David Scanlan (R-NH):

Thank you, Senator. I believe New Hampshire is probably the first state where, relative to elections, the Attorney General is investigating and hopefully prosecuting the individuals responsible for a deepfake. It's probably going to take that exercise and that experience to figure out where the gaps are, and I'd be happy to report that to you when they complete it.

Sen. Sheldon Whitehouse (D-RI):

That would be great. And if you don't mind, when you get home to your Granite State, let 'em know that we were asking about the Attorney General, and if they could let us know or have their policy person check in, that would be helpful. Mr. Colman.

Ben Colman:

On the topic that Ranking Member Hawley mentioned, particularly around deepfake, nonpermissioned or nonconsensual pornography, what we've seen time and time again is that students, young men, are creating deepfake imagery of women in their classes. And what's doubly challenging here is that while it might break rules within the schools, it's not breaking any local laws. So there's the challenge of this potential patchwork of laws, which don't exist in most states: where students commit such an offense, they'll get suspended from their school, but they won't be arrested. This is an area that can follow other types of emerging regulations, and penalties, around CSAM imagery, because I would argue that any nude image of an underage person is effectively CSAM, whether it's real or fake.

Zohaib Ahmed:

I could perhaps just add a couple of points. I think these are all great points. I think we could look back to 1998, when the internet was really young; we had piracy follow a very similar path. If you try uploading an NFL video to YouTube today, it won't make it past the upload screen. That is because there is watermarking technology in the broadcast, and there are also very strict penalties. Piracy law is very strict: if you proliferate, if you upload pirated data, there are consequences; there are letters sent to your home. So I think we've solved some of these issues in the past already. The folks doing the trust and safety work at YouTube have probably been dealing with a lot of this stuff, non-AI-generated but still altered forms of the same material being uploaded: flipped images, mirrored images, et cetera. So I think we could take a lot of inspiration from there.

Rijul Gupta:

One more thing, Senator. I think it's really important that the United States military and the intelligence community are funded properly to integrate this technology, because if the US government doesn't know what's real and what's fake, then we have a really big problem.

Sen. Sheldon Whitehouse (D-RI):

Thank you very much. Thanks, Chairman. I'm going to ask more questions for the record, and thanks for being patient with our schedule today. We had a group of visitors from the House who had something to deliver.

Sen. Richard Blumenthal (D-CT):

Very well said. I have a few more questions, and I apologize for the lateness of the hour, so I'm going to try to be quick. Mr. Ahmed, I think you had a comment on my question. If you can't remember it, you are more than forgiven, and I want to prompt you with another question along the same lines. We were discussing watermarking, and the point was made that labeling and watermarking are insufficient in and of themselves. The idea of an independent entity would be not necessarily only to set a licensing regime but also to establish something more than just simplistic watermarking. As all of you have suggested, attackers actually use voice cloning software. The voice cloning software in New Hampshire, incidentally, I understand, was from a company known as ElevenLabs, or the call was created using software from ElevenLabs. Most of these voice cloning tools don't require the consent of the person being impersonated. You suggested earlier, I think, the idea of a public database, of traceability. Could you expand on that point? Because it seems to me we've been talking about essentially defensive measures of a very simplistic kind. If we can use the technology to flip the model, so to speak, if there's a technology that can be used to apprehend and trace the bad guys, that might be a strong deterrent.

Zohaib Ahmed:

Yeah, absolutely. That's exactly how we're thinking about watermarking. And to the earlier point, the watermarking that we're talking about here is imperceptible, it's inaudible, it's deeply sophisticated. It's a neural network that's embedding the watermarks, so it's very difficult to replace or remove them. I think this indirectly answers a bunch of the questions and topics raised today. We need to hold a lot of generative companies accountable. It shouldn't be the case that someone can just go in; these attacks are as unsophisticated as they are at the moment largely because the attackers aren't even writing code. They're going to websites, sometimes even to Apple's App Store, downloading an app, and effectively doing it there. So I think we could start at the level of the generative models themselves. And I think the idea of traceability is extremely important.

The world that we don't want to live in is a world where everyone just shuts down, creates their own data lake, and says, this is our data; no one else can touch it, no one else can access it. You're already seeing that with Reddit. You're seeing it with Twitter: with API access being shut down, other companies that rely on that data suffer. So the answer to your question is twofold. One, the generative models: if we can create some sort of watermark. There's tons of research we're doing in this area where, if even a subset of the training data is watermarked, the watermark persists through model training and shows up on the other side; you could see, oh, this was created with this source of audio. I think that's extremely important. The second is the idea of this public database.

A lot of this does come down to education. When I was in school, you were told how to use the internet, how to chat online, where to go. The world has significantly changed. People, kids, adults, people who work in enterprises, need a place and a source where they can look at vulnerabilities. I studied computer science, and in our forensics class we would look at incident reports that Google and Apple published and try to analyze them. A public database of incidents that is very easy to find and neatly categorized, demonstrating how attacks were created, can serve as great educational material, but also as a great resource for understanding where the gaps are in holding these generative companies accountable and making sure they're not that easy to access.
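[Editor's note: Mr. Ahmed describes neural watermarks that survive processing and support traceability. As a much simpler illustration of the underlying idea, the Python sketch below embeds a keyed, low-amplitude spread-spectrum pattern in audio and detects it by correlation. The function names, parameters, and threshold are assumptions for illustration; this classic technique is not the witness's actual neural method, which is far more robust.]

```python
import numpy as np

def embed(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Add a low-amplitude pseudorandom +/-1 pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def detect(audio: np.ndarray, key: int, strength: float = 0.005) -> bool:
    """Correlate against the keyed pattern; a watermarked signal correlates
    at roughly `strength`, an unmarked one near zero."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.dot(audio, pattern)) / audio.size
    return score > strength / 2

if __name__ == "__main__":
    clean = np.random.default_rng(0).normal(0.0, 0.1, 16000)  # ~1 s of audio-like noise
    marked = embed(clean, key=1234)
    print(detect(marked, key=1234))  # True
    print(detect(clean, key=1234))   # False, with overwhelming probability
```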

Rijul Gupta:

I would like to commend that statement and also let you know that something like that currently exists with DARPA's AI forensics work. DARPA is working on that exact idea, and I think it is a great idea.

Sen. Richard Blumenthal (D-CT):

Thank you. Mr. Colman?

Ben Colman:

I think everything they're describing is a fantastic start, but it presupposes, again, that everyone's going to follow the same rules, whether it's someone downloading open-source software or a state-level actor using software that does not follow the rules or has hacked them. The best case is you have a watermark or a cryptographic hash that's wrong, which gives a false sense of security. The worst case is you don't have anything at all, because the bad actor doesn't care about the rules and doesn't follow them. So our focus is on the probabilistic view that doesn't need any watermarks. I think this is a world where we can all work together, but I just want to share that these are two sides of the same coin.

Sen. Richard Blumenthal (D-CT):

Assuming that no one follows the rules voluntarily.

Rijul Gupta:

I do think it's really more about that Swiss cheese model, if you're familiar with that, right? You have all of these different layers in place, and stuff is going to fall through the holes; deepfake detection is one of those layers. Companies like ours are good at deepfake detection. We have really high-quality accuracy. We present heat maps and information and probability scores, and all of that's delivered to big tech companies at scale, to militaries at scale. That's what we do. But that by itself is not enough. You need everything.
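[Editor's note: the "Swiss cheese" layering Mr. Gupta describes can be made concrete: several imperfect detectors each emit a probability that a clip is fake, and the layers are combined into one score. The Python sketch below averages detectors in log-odds space; the combination rule, the layer names, and the example numbers are illustrative assumptions, not Deep Media's actual method.]

```python
import math

def combine_log_odds(scores, eps=1e-6):
    """Average several detectors' fake-probabilities in log-odds space,
    then map back to a probability; this tempers any single
    over-confident layer."""
    logits = [math.log((p + eps) / (1 - p + eps)) for p in scores]
    mean = sum(logits) / len(logits)
    return 1.0 / (1.0 + math.exp(-mean))

# Three hypothetical layers: audio artifacts, frequency analysis, provenance.
print(round(combine_log_odds([0.92, 0.70, 0.85]), 2))  # ~0.84
```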

Sen. Richard Blumenthal (D-CT):

You need some kind of punishment when people fail to follow the rules.

Rijul Gupta:

That too. And you need to know what's real and fake, right? This is counterfeit truth, just as we have counterfeit currency, right?

Sen. Richard Blumenthal (D-CT):

Well, you need to be able to know and to…

Rijul Gupta:

Prove and punish, yeah.

Sen. Richard Blumenthal (D-CT):

Just as you would with counterfeiting, just as you would if somebody's speeding. We don't assume that everyone's going to follow the speed limits.

Rijul Gupta:

You need police with radar guns, right?

Ben Colman:

But beyond all of this, these are all fantastic ideas that, at some level, we'll solve for. I think we can all agree we need to start somewhere, with baby steps. We can start adding in these digital measures, but we still haven't started yet, and we're months away from an election.

Sen. Richard Blumenthal (D-CT):

Well, you have just crystallized the anxiety that many of us feel, because we are approaching the election; as I mentioned at the very outset, we're facing a deluge of this stuff. And by the way, it's not the first time we've faced distorted electioneering or fake ads. When I first started out, our great fear was that on the Sunday before elections, someone would go around with mimeographed pieces of paper, distorted images of the candidate doing something terrible, and put them on the windshields of cars parked at church, without identifying who they were from. You can go back to the founders; they were worried about false electioneering as well. As Secretary of State in New Hampshire, you have a concern with making sure that elections are fair and honest. This problem didn't just arise, but you're absolutely right: we're facing an election where we need to take some steps right away. I'm not going to say they're baby steps, but steps right away.

Rijul Gupta:

I would like to caution us against adopting the Chinese-model approach. There's a really great book by a Columbia law professor, Dr. Bradford, where she outlines the Chinese state-driven approach, the European rights-driven approach, and the historical United States market-driven approach. Something needs to be done, but a state-driven approach poses serious and significant harms to the public, and I want to caution us against adopting the kind of regulations that China has put in place.

Sen. Richard Blumenthal (D-CT):

Let me come back to one of the key questions. I mentioned earlier that Senator Hawley agrees that Section 230 does not apply to AI, and therefore we have a legal basis to hold Big Tech accountable here. To what extent does Big Tech know, or should it know, that these kinds of artists are using their platforms?

Rijul Gupta:

I can say that we are currently engaged with the big tech companies. These companies are deploying our technology to fight deepfake misinformation, and that's already happening.

Sen. Richard Blumenthal (D-CT):

Well, they're not doing a very good job.

Rijul Gupta:

I know. They need to use it more.

Ben Colman:

Mr. Chairman, I'd say some large tech companies are thinking long-term about this. Others, without naming any specifics, have completely decimated the teams that were focused on trust and safety and have cut their budgets for actually using software from any of us. A lot of this is public. It's all in…

Sen. Richard Blumenthal (D-CT):

From what I've read publicly, I'm guessing that Meta is one of the companies that has cut its staff.

Ben Colman:

I won't comment on that, but I will say that every one of them needs to expand these teams. They need to see them as an expected requirement from the government and not as a cost center. Because we've seen it in the most recent layoffs: these are the first teams to go, and it's not just 10 or 20% of the teams we're talking about. It's whole teams, everybody in trust and safety, everybody in identity and KYC, everybody in fraud, completely gutted.

Zohaib Ahmed:

There are many, many great startups, so I feel really small sometimes, and I share the frustration, because we're building this technology, and for a while we tried to get it to the platforms. When we started, the elections were years away; now it's months. And this is one of the reasons why we kind of skipped the line. We said, okay, you know what? Consumers need a tool that they can access today. We need to give them a way to get to the tool themselves. Where we are right now is exactly 1998 GeoCities. People are building websites in GeoCities and shipping them, and there's malware and all sorts of stuff. We're not at the point where we can expect a browser to pop up a red screen; that only happened in the mid-2010s, right?

That's very recent. For a long time, we lived on the internet while it was still very early days. So our goal, what we've shifted our focus toward, is providing a tool that anyone can access, not just enterprises, not just the companies that are out there, but normal consumers, so they can go and validate YouTube videos, TikTok, Twitter, et cetera. They can go all over the place and try to validate content. And our goal is to get this technology to the consumer as quickly as we can.

Sen. Richard Blumenthal (D-CT):

I'm taking from these answers that the social media companies know or should know. Because what you have said, basically, whether it's about cutting their teams, using your software, or inventing technology that can trace the deepfakes, is that they have the capacity: they know, or they should know, when their platforms are being misused.

Rijul Gupta:

One more quick thing here: I want to highlight that issue of scale. The firing of the trust and safety teams is a tragedy, it is, but the volume of deepfakes and misinformation is not going to be solved by hiring those teams back. They would have to hire 10 or 20 times more people. AI must be fought with AI. And for every deepfake you see on a platform, there are hundreds more that were removed and never shown, because they were filtered out.

Sen. Richard Blumenthal (D-CT):

Thank you. You may recall the devastating wildfires that spread across Hawaii. As I mentioned earlier, the Chinese Communist Party decided to spread the disinformation that the disaster was the result of a United States "weather weapon" test, as it was called. This conspiracy theory showed Beijing's willingness to directly meddle in American affairs. I'm sure there have been others, but this one was supported by AI-generated images. Maybe you can talk about some of the dangers to the United States from our foreign adversaries, not just from actors within the country, like the New Hampshire primary caller.

Ben Colman:

Yeah, I'll give a very vivid example. A few months ago, there appeared online on X, on Twitter, what looked like an explosion at the Pentagon. Partly because there are no regulations requiring platforms to automatically scan for this kind of thing on upload, it took millions of shares and re-shares before it was flagged by community notes. By that time, it had contributed to a hundred-billion-dollar flash crash in the market. Now, the market did come back, but this was a really simple image, arguably a diffusion-based deepfake. We detected it by doing what's called frequency domain analysis. This is an example of how you can not only move an election but also move markets with a single photo correctly placed on the right social media platform and then just left to go viral on its own.
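[Editor's note: "frequency domain analysis" refers to a family of techniques that examine an image's spectrum for generation artifacts, since upsampling in many generative pipelines can leave periodic spikes or an unusual high-frequency tail. The Python sketch below computes a radially averaged power spectrum and a simple high-frequency energy ratio; the function names, the bin counts, and the idea of comparing ratios across image sets are illustrative assumptions, not Reality Defender's detector.]

```python
import numpy as np

def radial_power_profile(img: np.ndarray, nbins: int = 64) -> np.ndarray:
    """Radially averaged power spectrum of a grayscale image."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = (r / r.max() * (nbins - 1)).astype(int)
    power = np.bincount(bins.ravel(), weights=spec.ravel(), minlength=nbins)
    counts = np.maximum(np.bincount(bins.ravel(), minlength=nbins), 1)
    return power / counts

def high_freq_ratio(img: np.ndarray) -> float:
    """Share of spectral power in the top quarter of spatial frequencies;
    generated images often show atypical values here."""
    profile = radial_power_profile(img)
    return float(profile[-len(profile) // 4:].sum() / profile.sum())

if __name__ == "__main__":
    img = np.random.default_rng(0).normal(size=(256, 256))  # stand-in image
    print(high_freq_ratio(img))  # compare distributions for real vs. generated sets
```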

Rijul Gupta:

I would like to highlight the difference between misinformation through a telecom, like the Biden robocall, and misinformation through the platforms. Those are very different things, and they're handled very differently. I believe Reality Defender works very closely with financial services and telecoms to solve the first problem. Deep Media works very closely with the platforms to solve the second. We also work very closely with the Air Force's cyber command, the 16th Air Force, as well as NASIC and the US Army, to fight foreign interference from a military standpoint. So, in the interest of national security, I would like to not go into specifics, but we are monitoring for Chinese and other near-peer adversaries' deepfake-based misinformation and narrative redirection campaigns.

Sen. Richard Blumenthal (D-CT):

Let me conclude my questions by going to Mr. Scanlan. You mentioned the possibility that AI could be used to bring down an election. Maybe you could expand on that point.

Secretary David Scanlan (R-NH):

What's most important in the election process is that the voters have confidence in the outcome and the results of an election. As long as they are confident, they're going to participate. New Hampshire and Minnesota are consistently among the states with very high voter participation. But if voters start believing that the government is corrupt, or that election outcomes are not accurate, that they've been manipulated somehow, then participation is going to decline. It only takes one really serious event, where an election at least has the appearance of a major breakdown in its outcome, to sow doubt in the voting population. And I think that's a really, really significant concern. The most fundamentally important thing that I perceive in my role as Secretary of State is to make sure that an election is not messed up, and that the voters believe and know that it was run fairly and accurately, to the highest standards possible. If we lose that, it will be very, very hard to get it back.

Sen. Richard Blumenthal (D-CT):

Thank you. I'll open it to any final comments that anyone may have if you haven't had an opportunity to say something.

Zohaib Ahmed:

No, I think I'll echo the point: voter confidence is so key. Again, we have a list of these attacks that have occurred in other places, in politics, et cetera. One to point out actually involves the British opposition leader, Sir Keir Starmer: there's a deepfake of him berating his staff that gives voters an impression that is not correct. It's not fair to him. And overall, as my colleague said earlier, when everything in the world is fake, you don't know what's real anymore, and I think that's extremely important.

Ben Colman:

Yeah, I'll just add and reiterate that I don't think any single solution is ever a hundred percent, and the opportunity that exists in developing the world's best technology applies here as well. The collaboration among startups and big companies, but also elected officials, in solving this very important issue is really exciting.

Sen. Richard Blumenthal (D-CT):

Thank you.

Rijul Gupta:

I would like to highlight two things. I definitely agree with Ben that no single solution works; I believe a defense-in-depth solution provides a comprehensive way forward. I also want to highlight that, again, there are frameworks for legislation: the state-driven approach of China, the rights-driven approach of the European Union, and the market-driven approach of the United States. A market-driven approach is good for the generative AI companies and the platforms, and it's good for the American people. It is not dissimilar to the legislation you've proposed, which I actually think solves a lot of these problems. But to me, it's about internalizing these negative externalities so the AI ecosystem can grow safely and quickly, and we can all get rich.

Sen. Richard Blumenthal (D-CT):

Mr. Scanlan, your synopsis, your summary of the dangers here was, I thought, very eloquent and powerful, and I know you're on the firing line. Literally every election, every time people go to the polls, that issue of trust and credibility is there, and it relates not only to those of us who are candidates but to anybody who goes to the polls and wants to have confidence that the outcome is going to have integrity. Thank you for being here. Thank you all. This was an excellent, very informative, and helpful session. Again, my apologies for the lateness. The record will stay open for a week in case my colleagues have additional questions for you in writing, and the hearing is adjourned.
