Transcript: US Senate Subcommittee Hearing on "Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams"

Justin Hendrix / Nov 20, 2024

Sens. Marsha Blackburn (R-TN) and John Hickenlooper (D-CO) hosted a subcommittee hearing on "Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams" in the Russell Senate Office Building on November 19, 2024.

On Tuesday, November 19, 2024, the US Senate Committee on Commerce, Science, and Transportation's Subcommittee on Consumer Protection, Product Safety, and Data Security convened a hearing titled “Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams.” The hearing explored "how artificial intelligence (AI) has enabled harmful consumer fraud and scams, how technologies like AI labeling, detection and disclosures can protect consumers, and how to empower consumers to recognize and avoid AI-enabled fraud and scams."

Witnesses included:

  • Hany Farid, Professor, University of California Berkeley, School of Information
  • Justin Brookman, Director, Technology Policy, Consumer Reports
  • Mounir Ibrahim, Chief Communications Officer & Head of Public Affairs, Truepic
  • Dorota Mani, Mother of Deepfake Pornography Victim

A number of themes were discussed during the hearing, including the need for accountability and consequences, education and prevention, content authentication and provenance, the impact on vulnerable populations, and platform responsibility.

1. Need for Accountability and Consequences

  • Dr. Farid: "I believe in action and consequences. And I think that's whether it's an individual, a trillion-dollar company or a US Senator for that matter... Silicon Valley and trillion-dollar companies have gotten away with it for 25 years and frankly we've let them."
  • Mr. Brookman: "Deterrence is incredibly important... If you're putting a product out in the market that's overwhelmingly likely to be used for harm, you should bear some responsibility."

2. Education and Prevention

  • Ms. Mani: "I strongly believe there is a critical missing component in our approach to artificial intelligence, which is education to prevent misuse... They will learn to cross-reference and validate what they see, read and hear, therefore avoiding manipulation as it comes."
  • Mr. Brookman: "User education is sadly the most important thing. Talk to your family members. The government should be doing more."

3. Content Authentication and Provenance

  • Mr. Ibrahim: "Content provenance is cryptographically and securely attaching information, metadata, to digital content, the images, videos, and audio that we see online every day. It can help you tell if something is AI generated or if it was camera captured."
  • Dr. Farid: "The first thing to understand is as a consumer, your ability to distinguish reality from AI is becoming increasingly more difficult. I do this for a living and I'm pretty good at it, and it's getting really hard. So putting that onus on the consumer to look for the hands, look for this, look for that. That is a false sense of security that will simply not be true tomorrow, let alone three months from now."

4. Impact on Vulnerable Populations

  • Mr. Brookman: "Of those who had encountered scams, I think a third of Black consumers and a third of Hispanic consumers had lost money. And I think it was about half of that for white Americans."
  • Ms. Mani: "I do believe school districts should consider implementing AI literacy programs, to ensure our children comprehend the implications and responsibilities associated with using such powerful technologies safely, and ethically."

5. Platform Responsibility

  • Dr. Farid: "These things are being not just hosted on Facebook, on TikTok, on YouTube, on Instagram, on Twitter. They are being algorithmically amplified by those platforms because they monetize it. They have a responsibility for that and we're letting them off the hook."
  • Mr. Ibrahim: "Are they doing everything they can from a technical perspective to try and mitigate that?... There are things that some of these social media companies can do to mitigate those chances."

What follows is a lightly edited transcript of the discussion.

Sen. John Hickenlooper (D-CO):

Now this meeting is called to order. Welcome to the Subcommittee on Consumer Protection, Product Safety, and Data Security. We are now in order. Where we stand today, I think everyone agrees that we are at a crossroads where American leadership in AI is going to depend on which of many courses Congress takes going forward. This has been, can, and should remain a nonpartisan, or let's say a bipartisan, effort that focuses on certain core issues: first, promoting transparency in how developers build new models; second, adopting evidence-based standards to deliver solutions to problems that we know exist in AI.

And third, building Americans' trust in what could be, and has been on occasion, a very disruptive technology. I think our constituents, in our home states but also across the country and really around the world, are waiting for us to take the reins and strengthen America's leadership while at the same time not compromising our commitment to innovation and transparency in AI. I believe in my heart and soul, and I think most of us on this committee believe, that American leadership is essential for AI for a whole variety of reasons that I think we'll go through today. We already know that the capabilities in the field of artificial intelligence are evolving and changing rapidly. Generative AI tools allow almost anyone to create realistic synthetic video.

Well, synthetic text, image, audio, video, you name it. Whatever content you can imagine, we can now create. And while AI will have enormous benefits, and I truly believe that, benefits in our daily lives and in sectors like clean energy, medicine, workplace productivity, and workplace safety, for all those benefits we have to anticipate and mitigate the concurrent risks that this technology brings along with it. Just one example: this image behind me, there's a poster. Oh, there we are. This was created using AI tools, depicting a young girl holding a dog in the aftermath of Hurricane Helene. We added a label to clearly show that the image is AI-generated, not a real image. It appears extremely realistic at first glance, although I grilled my staff on exactly what the details were, how a trained observer could recognize this as synthetic. But I think without a clear label, it really does take experience and training, a much closer look, to see the flaws in the image.

The young girl's left hand is somewhat misshapen compared to a natural photograph. I won't go into it. I'm so proud of myself that I can follow the argument, but I think we recognize and should all own up to the fact that scammers are already using this new technology to prey on innocent consumers. There are a number of bad actors out there that see this as a vehicle for sudden and dramatic enrichment. In one example, scammers have cloned the voices of loved ones saying they've been kidnapped, they've been abducted, and this familiar voice is begging for ransom payments. Other deepfake videos show celebrities endorsing products or candidates whom they had never endorsed and really had no intention of endorsing. Many troubling examples include children, teens, and women depicted in non-consensual, intimate, and violent images or videos that in many cases cause deep emotional harm. These AI-enabled scams don't just cost consumers financially; they damage reputations and relationships.

And equally important, they cast doubt about what we see and what we hear online. As AI-generated content gets more elaborate, more realistic, literally almost anyone can fall for one of these fakes. I think we have a responsibility to raise the alarm for these scams and these frauds and begin to be a little more aggressive in what can be done to avoid them. During our hearing today, we will begin to understand and look at some of the tools and techniques companies and consumers can use to recognize malicious deepfakes, to be able to discuss which public and private efforts are needed to educate the public with the experiences and the skills necessary to avoid AI-enabled scams.

And then thirdly, to highlight enforcement authorities that we can establish to deter bad actors and prevent further harm coming to consumers. This committee is already at work on this, and has already drafted, amended, and passed several bipartisan bills focused on AI issues. I'll just run through these. The Future of AI Innovation Act fosters partnerships between government, the private sector, and academia to promote AI innovation. The Validation and Evaluation for Trustworthy AI Act creates a voluntary framework which will enable third-party audits of AI systems. The AI Research, Innovation, and Accountability Act increases R&D into content authenticity, requires consumer AI transparency, and creates a framework to hold AI developers accountable. Then the COPIED Act, C-O-P-I-E-D, has not yet been considered by the committee, but it increases federal R&D into synthetic content detection and creates enforceable rules to prevent bad actors from manipulating labels on content. Lastly, the Take It Down Act makes it a criminal offense to create or distribute non-consensual intimate images, NCII, of individuals. These are each bipartisan bills. They lay the groundwork for responsible innovation and address real problems with thoughtful solutions. They're not perfect, and I trust we'll get improvements from your sage wisdom.

We look forward to working hard together to get these across the finish line and passed into law in the coming weeks. But we know that bad actors unfortunately still continue to try to use this technology for fraudulent purposes. To combat fraud, the Federal Trade Commission, the FTC, recently adopted rules to prohibit the impersonation of government agencies and businesses, including through the use of AI. The FTC is also considering extending this protection to individuals, including through visual or audio deepfakes. This is one very good example of a federal agency taking decisive action to address a specific harm, but we need to encourage further targeted, specific efforts like this basic, common-sense rule.

States across the country have begun to enact legislation, the states being the laboratories of democracy, to try to address the distribution of deepfake media. Again, the map behind me here shows states taking action. The yellow states have enacted legislation related to AI use in election contexts. States in purple have enacted legislation related to non-consensual intimate imagery, and states in red have enacted legislation related to both of these, or instead to address other AI-generated media concerns. And as we can see, this is right now a patchwork of protections, which defies the predictability that pretty much any industry needs to prosper.

A consistent federal approach would be tremendously beneficial to fill in these gaps and make sure we're protecting all Americans. Promoting responsible AI also relies on our partners in the private sector. A number of companies, Anthropic, Google, Microsoft, OpenAI, and others, have made voluntary commitments to responsible AI practices. These include commitments to help Americans understand whether the content they see is AI-generated. This can also be done through watermarks or similar technology that identifies the origin, ownership, or permitted uses of a piece of AI-generated content.

Today we're going to hear from leading experts, all of you, in artificial intelligence and AI-generated media about what, from your perspective, is already happening. But more importantly, what else needs to be done and where we should be focusing. Hopefully, working together, we can do a better job of protecting Americans from these potential risks, and the scams and frauds, but at the same time make sure that we unleash the innovation that this country is known for, especially in this emerging technology. I don't have to go into the importance of the competition around AI; this is a competition that some of our global rivals take very seriously.

And if we're any less focused on that, it will be to our own detriment. I want to welcome each of our witnesses who are joining us today. Dr. Hany Farid, professor at the University of California, Berkeley School of Information. Justin Brookman, director of technology policy at Consumer Reports. Mr. Mounir Ibrahim, chief communications officer and head of public affairs at Truepic. And Ms. Dorota Mani. You guys are running me around; with a name like Hickenlooper, I usually get granted a certain latitude. Ms. Mani is an entrepreneur and the mother of a victim of non-consensual imagery. I'd now like to recognize Ranking Member Blackburn for her opening remarks.

Sen. Marsha Blackburn (R-TN):

Thank you, Mr. Chairman. I am absolutely delighted that we are having this hearing today and focusing on scams and fraud. Now in Tennessee, I say we've got a good, bad, and ugly relationship with AI. A lot of our manufacturers and people that are working in healthcare, predictive diagnoses, disease analysis, and logistics are utilizing AI every day to achieve efficiencies. Our songwriters and our entertainers are saying, "Hey, wait a minute, we've got to have some help with this." And you referenced the COPIED Act and the NO FAKES Act that some of my colleagues and I have done on a bipartisan basis.

And then the ugly is really what is happening to people with these scams and these frauds, especially when it comes to senior citizens and what we are seeing happen there. Now, I thought it was very interesting that the FTC, in its Consumer Sentinel Network data book, listed that scam losses increased by a billion dollars over the last 12 months, to $10 billion. This is what it's costing. When you think about that rise, you have to look at what this is doing, and of course we know AI is driving a lot of it. It is a technology that is advancing incredibly fast, and of course legislation never keeps pace with technology.

So we know that it is catching a lot of people that are really quite savvy when it comes to conducting a lot of their transactional life online. And we know that older Americans are the ones who have been hit most frequently by this emerging threat. Now, the fraudsters that are utilizing AI to deceive consumers have gotten crafty when it comes to creating these highly personalized and convincing attacks. And I think what is surprising to a lot of people is the fact that they are doing this at scale. And the replication of these attacks, and the tailored emails, the text messages, the images... As the chairman showed, the altered images. And those are used to trick people into clicking that link, and once they have clicked that link, they are downloading malware. They're divulging personal information, and the fraudsters feel like they've got them.

But these attacks have become very realistic. The spear phishing emails that use a lot of this make it appear that a message is coming from a trusted source. And adding to this, the chatbots, which make it appear that you are having an actual real-time conversation with someone, are very disarming. So we know that the use of these tools by scammers is becoming more prevalent, more precise, more widespread, and harder to combat. And when we get to, "What do you do about this, and how do you combat this?", we know that it requires an encompassing approach. It's got to be comprehensive. And of course consumers need to be armed with knowledge about what is happening here.

It also means helping consumers improve their digital hygiene and their digital literacy, so that they know more about the red flags they need to look for. And we know that it is also going to require that we move forward with an actual online privacy standard, which we've never passed. This committee, and when I was in the House, we have continued to look at this, so that individuals have the ability to actually protect their "virtual you," which is their presence in the virtual space. It is going to require that we take those actions. We're really thrilled to have you all before us today. It helps to inform not only our committee and subcommittee but also our colleagues, and it builds the record for the need to move forward on legislation that will enhance privacy and help protect our citizens in the virtual space. So welcome to each of you. We look forward to hearing your statements.

Sen. John Hickenlooper (D-CO):

Thank you, Senator Blackburn. Now we'll ask for each of your statements one after the other. First let me reintroduce Dr. Hany Farid, professor at the University of California, Berkeley School of Information. He's also a member of the Berkeley Artificial Intelligence Lab and senior faculty advisor for the Center for Long-Term Cybersecurity. I have a son who's finishing, he's a senior doing engineering and computer science at Stanford, and he tells me that Berkeley is an inferior institution?

Dr. Hany Farid:

All right.

Sen. John Hickenlooper (D-CO):

I've heard there's a lot of contention there. Dr. Farid?

Dr. Hany Farid:

I won't hold it against you, Senator. Thank you. Thanks for having me. I am by training an applied mathematician and computer scientist. For the past 25 years as an academic, I've been working to create and deploy technologies that combat the spread of child sexual abuse material, combat online extremism and dangerous disinformation campaigns, and most recently combat the ugly world of deepfake imagery. I'd like to start with a quick overview of today's world of AI, in which there are two main branches, predictive AI and generative AI. I know we'll talk a lot about generative AI, but we should look at the entire ecosystem here, because I think there are issues on both sides.

Predictive AI is tasked with predicting anything from the movies you want to watch on Netflix, to who will default on a loan, who will be a high-performing employee, who will recidivate if given bail, to what neighborhoods should be patrolled by law enforcement. Generative AI is tasked with creating content, from text to images to audio to video. For text this can mean answering simple questions, summarizing a brief, or helping a student with their math homework, but there are also troubling applications: we've seen interactive AI bots give instructions on how to create dangerous weaponry and encourage someone to harm themselves or someone else.

For images, audio, and video, while there are also many creative applications, we're seeing some deeply troubling ones, from the creation of non-consensual intimate imagery and child sexual abuse material to impostor audio and video used to perpetrate small- to large-scale fraud. Everything you enumerated in your opening remarks. Although not fundamentally new, advances in machine learning have over the past few years fueled rapid developments in both of these areas, with, not surprisingly, very exciting applications, but also some worrisome ones. So as we consider how to embrace and yet protect ourselves from this latest wave of technology, I'd like to offer a few thoughts.

When it comes to predictive AI, it's useful to consider a risk- or harms-based approach. Predictive algorithms that make movie recommendations should be considered in a different category from algorithms that make decisions regarding our civil liberties, our finances, or our employment. For the latter, we've seen troubling failures of so-called black-box algorithms that are not well understood or auditable. Most notably, predictive algorithms being used today in policing and criminal justice have been found to be biased against people of color, and algorithms being used in HR departments have been found to be biased against women.

Fundamentally, the limitation of today's predictive AI is that it is neither artificial nor intelligent. These are pattern-matching solutions that look at historical data and recreate historical patterns. So if the historical data is biased, then your AI-powered future will be equally biased. Today's predictive AI is not much more than a sophisticated parrot. When it comes to generative AI, the harms I enumerated earlier were completely predictable. Before the more polite term generative AI was coined, we referred to this content as deepfakes, which has its very roots in the moniker of a Reddit user who used the then-nascent AI technology to create the earliest examples of non-consensual intimate imagery.

The harms we're seeing today from deepfakes are neither unintended nor unexpected; they are baked into the very DNA of this technology. And as Senator Blackburn mentioned in her opening remarks, the theft of the intellectual property that is driving all of this should also worry everybody greatly. So when considering mitigation strategies, we need to consider not only the underlying AI technology but the entire technology ecosystem. With respect to NCII and CSAM, for example, there are many players that fuel the harms. There's of course the person creating the content, and there's the tool used to create the content. But then there's the service used to distribute the content, mostly social media, but also the financial institutions that enable the monetization of the content.

There are the web services, from search engines that gleefully surface this content, to cloud computing and domain name services that provide the infrastructure for bad actors. Importantly, because AI is not the first, nor will it be the last, example of technology being weaponized, we shouldn't become overly focused on just AI; we have to think more broadly about the technology when we are looking at mitigation strategies. Lastly, you'll hear many loud voices, many of them from Stanford by the way, that claim that reasonable guardrails to make products and technology safer will destroy innovation. I reject these claims as nothing more than self-serving, and blind or indifferent to the actual harms that are facing individuals, organizations, and societies. We fell for these same lies, by the way, from the same voices for the past 25 years, and failed to rein in the worst abuses of social media. We need not make those same mistakes in this latest AI revolution. Thank you.

Sen. John Hickenlooper (D-CO):

Thank you, Dr. Farid. Mr. Brookman, you're next, director of technology policy for Consumer Reports. The first subscription I ever made to a magazine was Consumer Reports, just because I thought that consumers needed someone to stick up for them. And someone told me recently that there are now 93,000 trade associations; as far as I know, I think there's only one Consumer Reports?

Justin Brookman:

Thank you very much for that interest, Senator, and thank you very much for the opportunity to testify here today. As you say, I am here from Consumer Reports, where I head up our work on tech policy advocacy. We are the world's largest independent testing organization. We use our ratings, our journalism, our surveys, and our advocacy to argue for a better, fairer marketplace for consumers. This hearing is about AI-enabled fraud, but I do want to say a couple of words about the very real consumer benefits from AI.

Certainly there have been a number of tremendous scientific advances, and we're seeing everyday tangible, real-world benefits: autonomous taxis in San Francisco, real-time language translation. These are amazing advances that do make consumers' lives better. However, the widespread availability of AI tools does lead to some very real harms. Companies can now collect, process, and share more information about us, giving them more power over us, including the power to do personalized pricing based on how much we're willing to pay. AI can rip off content creators, who see the products of their labor turned into AI slop of dubious value. And it can let people create non-consensual, intimate images of acquaintances, celebrities, or classmates.

Another harm is that AI makes it a lot easier for scammers to rip people off and take their money. It makes targeted spear phishing attacks a lot more efficient; spammers can spin up hundreds or thousands of personalized solicitations using generative AI tools. This is the scale that Senator Blackburn was talking about. It lets companies generate dozens of fake reviews with just a few keystrokes. Review fraud is already a massive, terrible problem online; AI just makes it easier to scale and flood the zone. One of the most dangerous tools is one that you talked about, Senator Hickenlooper: realistic voice impersonation. Scammers get a clip of someone's voice, take it to a company like ElevenLabs, and then get it to follow a script.

There have been numerous examples of people calling family members with some degree of urgency saying, "I've been in an accident," or calling a co-worker saying, "I need you to wire money to us." We've heard from dozens of Consumer Reports members who say they got these calls, and the voices sounded like family members, sounded incredibly convincing. Some actually lost money, even though they felt like they were savvy, smart people. These tools are really easy to find online. You can search Google for voice impersonation tools and get lots of options, including sponsored results that pay Google to appear higher up in those results. Consumer Reports is in the middle of evaluating some of these companies, and a lot just don't have any, or have very few, protections in place to stop fraudulent uses, like requiring someone to read a script so you know the person is consenting to have their voice used. A lot of them don't have that.

So what should be done? In my written testimony, I lay out five areas where I think we need to see some improvements. One, stronger enforcers. The Federal Trade Commission did adopt the impersonation rule that was talked about, and it did bring five important cases as part of Operation AI Comply in September. Those are great, but the FTC just doesn't have the capacity today to deal with this wave of AI-enabled fraud. The FTC is understaffed, and for the last several years it hasn't been able to get statutory penalties. It often can't even get scammers to give up the money they have stolen. These are problems that need to be addressed.

Two, tool and platform responsibility. Some of these AI-powered tools are designed such that they're mostly going to be used for illegitimate purposes, whether it's the creation of deepfake intimate images or voice impersonation. These companies should have heightened obligations to try to forestall harmful uses. If they can't do that, maybe they shouldn't be publicly, or so easily, available. General-purpose AI can be harmful too, but it's going to be harder for those companies to think about every possible negative use case, and more expensive, and it may forestall good uses. But it does seem like in many cases they should be doing more.

You can go to ChatGPT today and say, "Hey, create 20 fake reviews for my restaurant," and ChatGPT will say, "Sure, here are 20 fake reviews, and here are my favorite dishes, and here's what great service I had," which, again, can help companies flood the zone with fake reviews. Three, transparency. People deserve to know when they're interacting with a real person and when they're seeing fake content. I know there have been a lot of bills introduced in this Congress, and legislation passed in California, AB-982, that would require some advances there.

Four, stronger privacy and security laws. The United States in general has very weak protections, as Senator Blackburn talked about. We have seen a lot of progress at the state level in recent years, but these laws still aren't strong enough. And then five, user education and better tools. I don't want to put all the burden on consumers, but this is the reality of the world we live in. We need to educate people about what to expect.

We are part of a campaign called Take9 that tries to teach people that when you get some sort of urgent call to action, pause, take nine seconds, and think about it. That alone doesn't tell you whether it's real or a scam, so over time the tools need to improve too. I look forward to hearing from some of the other witnesses about tools that are advancing, and hopefully these can be built into the platforms that we use to access the internet. Thank you very much for your time, and I look forward to answering your questions.

Sen. John Hickenlooper (D-CO):

Great. Thank you, Mr. Brookman. Next, Mr. Mounir Ibrahim, who's the chief communications officer and head of public affairs at Truepic. He's also a former State Department official, I think, so he brings a broad wealth of international, more global experience and perspective to this.

Mounir Ibrahim:

Thank you, Chairman Hickenlooper, Ranking Member Blackburn, and members of this committee. My name is Mounir Ibrahim. I'm with Truepic, a technology company that's focused on digital content authenticity and transparency online. As you noted, Senator, I am a former Foreign Service Officer with the U.S. Department of State, and it was there I saw first-hand the importance of and critical need for transparency and authenticity in digital content from conflict zones around the world, including on the ground in Damascus, Syria. I fundamentally believe that deciphering what is human-generated from what is AI-generated is one of the most pressing challenges we have today.

I applaud this committee for its leadership on this issue. I would like to address the threat very briefly, echoing my colleagues here, to local communities and everyday individuals. This dynamic is most clearly seen in the prevalent and alarming rise of non-consensual pornography, as we heard, often targeting young women and even minors. Meanwhile, catfishing and sextortion scams are on the rise, often powered by AI. I fear we are witnessing the early stages of AI being weaponized against local communities and individuals who do not have the resilience or resources to defend themselves.

We know that businesses face the same challenges too. It is reported that deepfake phishing scams on businesses grew by 3,000% in the last year alone, and by 700% in the fintech sector. These trends harm consumers and jeopardize America's economic competitiveness in the AI age. The reality is, there is no single solution to stop this problem. However, there are effective strategies to mitigate the threat, and one key approach I'd like to talk about is digital content provenance. Content provenance is cryptographically and securely attaching information, metadata, to digital content, the images, videos, and audio that we see online every day. It can help you tell if something is AI generated or if it was camera captured, and what edits might have been made along the way.
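
To make that idea concrete, here is a minimal sketch in Python of the general technique Mr. Ibrahim describes: hash the media, bind the hash to a metadata record, and sign both, so that any later change to the pixels or the metadata breaks verification. This is an illustration only, not the C2PA specification, which defines its own manifest format, certificate chains, and trust model; the function names and metadata fields below are hypothetical.

```python
# Minimal sketch of content provenance: cryptographically binding metadata
# to a media file. Illustrative only; not the C2PA standard. Function names
# and metadata fields are invented for this example.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_credential(media: bytes, metadata: dict,
                    key: ed25519.Ed25519PrivateKey) -> dict:
    """Sign a record binding the content hash to its metadata."""
    payload = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "metadata": metadata,  # e.g. {"source": "camera"} or {"source": "genAI"}
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(blob).hex()}


def verify_credential(media: bytes, credential: dict,
                      pub: ed25519.Ed25519PublicKey) -> bool:
    """Valid only if both the content hash and the signature still match."""
    payload = credential["payload"]
    if hashlib.sha256(media).hexdigest() != payload["content_sha256"]:
        return False  # pixels were altered after signing
    blob = json.dumps(payload, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(credential["signature"]), blob)
        return True
    except InvalidSignature:
        return False  # metadata was tampered with or signed by someone else
```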

Today, content credentials are being used to protect businesses and people alike. The effort is being led by the Coalition for Content Provenance and Authenticity, or the C2PA, of which Truepic is a proud steering committee member. Very briefly, I'd like to share how Truepic is leveraging content credentials with its partners, in business and for consumer benefit. Business credentialing and re-credentialing is one of the fastest growing industries adopting content provenance. We work with partners like Equifax, TransUnion, and Dun & Bradstreet, all of which use it to securely transform their digital verification processes, backed by provenance.

This protects consumers by enabling our partners to verify the authenticity of the people buying credit reports, thereby helping safeguard data. Other industries, like insurance and product recalls, are all embracing the same approach. Moving on to more consumer-facing benefits, OpenAI's ChatGPT strengthens safeguards against misuse. We are honored that our technology helps back that, with a certificate authority that allows images that come from ChatGPT to be read and deciphered by adopting social media platforms, like LinkedIn, et cetera.

Last month we produced the world's first authenticated video with content credentials on YouTube, where you can see the content credentials showing that it was in fact captured with a camera. We're working with partners like Qualcomm to deliver these benefits directly on chipsets, so that the future smartphones we all own and operate, which will both create AI content and capture authentic content, can add these content credentials if the user so chooses. While these are all highly encouraging points, there are challenges, and I would like to be transparent about those challenges.

The first challenge is adoption. For content provenance to truly transform the internet and make it a more authentic and transparent place, we need adoption. We expect the C2PA standard to become a global standard under the ISO next year, but more must be done. The second challenge is education. Even if all our digital content has content credentials, it is imperative that consumers, businesses, and those interacting with content know what the credentials mean and what they don't mean. These are not blind stamps of trust, and that's a critical piece. Moving forward, I think Congress can help make this a reality.

First and foremost, education and funding. There are countless academic and research institutions that are looking at how content provenance can scale and can improve the health of the internet, and Congress can help support them. Then engagement: hearings like this help emphasize that content provenance is not just a safeguard but also an unlocker for innovation and efficiency across government and the private sector. Thank you very much, and I look forward to your questions.

Sen. John Hickenlooper (D-CO):

Thank you very much, Mr. Ibrahim. Next, Dorota Mani, who's an entrepreneur in her own right, and who has experienced some of the consequences of these fraudsters in a powerful way. I really appreciate you being here.

Dorota Mani:

Thank you for having me. Thank you for the opportunity to testify before this committee today. My name is Dorota Mani, and I stand before you not only as an educator and advocate, but also as a mother. My 14-year-old daughter, along with her sophomore classmates at Westfield High School, was a confirmed victim of AI deepfake misuse perpetrated by her peers last year. Boys in my daughter's grade used AI to generate sexually explicit images of her and other girls. AI technology, as complex and powerful as it is, presents both advancements and challenges to our society. While AI can greatly enhance many aspects of our lives, it also has the potential to harm your constituents deeply.

I am here not to dwell on how this incident has made us feel, but rather to shift the narrative away from the victims and towards the perpetrators, the lack of accountability, the lack of laws, and the urgency of implementing effective safeguards. Based on personal experience, I strongly believe there is a critical missing component in our approach to artificial intelligence, which is education to prevent misuse. While not an area directly related to this committee, I do believe school districts should consider implementing AI literacy programs, to ensure our children comprehend the implications and responsibilities associated with using such powerful technologies safely and ethically.

But more than that, those programs will prepare our children not only to protect themselves, but also to thrive in an increasingly digital world. Understanding AI and its capabilities will allow them to critically assess and navigate potential manipulations by AI across several aspects of life. They will learn to cross-reference and validate what they see, read, and hear, therefore avoiding manipulation as it comes. At the same time, they can harness this tool to bridge gaps between affluent and underserved communities, as well as rural and urban schools, in this manner providing equal access to resources and opportunities.

Simultaneously, we need robust regulations and legislation for the specific harms AI creates, like deepfake sexually explicit images. Currently, the absence of school policies on AI, alongside a lack of specific civil and criminal laws, leaves a significant gap in our ability to hold bad actors accountable. In the incident involving my daughter, the lack of AI-specific guidelines at the school led to minimal disciplinary action against the perpetrators, who not only remained at the school and continue to attend classes, but also represent the school on sports teams to this day.

The repeated excuse by the school was the absence of state and federal laws, even though there are many important AI bills, like the SHIELD Act, the DEFIANCE Act, and the Labeling Act, just to name a few. Senator Cruz's Take It Down Act deeply resonates with my daughter and me, as it allows victims to take control over their image by ensuring that those images can be easily taken off any sites where they can be seen by anyone, protecting the victim's reputation. It also creates a federal law that criminalizes the publication of deepfakes, giving schools a tool to respond to AI misconduct instead of hiding behind the lack of laws. I thank the members of this committee for unanimously passing this legislation in July, and I call on members of the Senate to pass it urgently, to protect my daughters and other girls in this country. We should also consider further AI regulations that protect people from deepfake images, like a labeling tool for AI-generated content to inform recipients of its source. Deepfake images circulating within the digital ecosystem can harm victims professionally, educationally, personally, emotionally, and more, potentially destroying reputations and futures.

We should not try to reinvent the wheel, but rather learn from existing models. I also would like to emphasize the urgent need to reform Section 230, especially given the rapid evolution of technology and the challenges posed by AI. Just as laws have adapted for self-driving versus traditional vehicles, it's crucial that Section 230 evolves to stay relevant and effective in today's digital landscape.

Let me not be misunderstood: (c)(2) of Section 230 should not be touched. To tackle the spread of harmful online content, we should focus on solutions, starting with a round table involving leaders from major content-holding platforms like Microsoft, Google, Apple, and Amazon. Those companies have the expertise and resources to enact significant changes immediately, driven by social and ethical responsibility, while legislators are crafting laws to hold bad actors accountable. Thank you.

Sen. John Hickenlooper (D-CO):

Thank you. Thank you each for being here. Now we'll stop there, and this is kind of a crazy day, so I think people are going to be coming in and out, but you'll be on a non-fraudulent video record, so we'll make sure that you'll have... Despite the lack of senators sitting right at the table, we'll make sure many of them see this. I know there is tremendous interest.

Mr. Brookman, why don't I start with you? We've seen AI technology evolve so rapidly the last few years. I think we have to anticipate that that trend is going to continue and probably accelerate. In previous hearings, we've highlighted how comprehensive privacy legislation is an essential building block to addressing some of the harms presented by AI, those that are anticipated and those that have already happened. As AI continues to evolve, how would you see data privacy protections helping to mitigate fraud against consumers?

Justin Brookman:

So a lot of these scams are social engineering attacks, and those are a lot more effective when they have data about you. And in the current environment, there's a lot of data out there available about us. There are hundreds of data brokers. California has a data broker registry; I think there are 600 different companies on it. And you can get the sorts of things that are valuable for this sort of attack. You can go to dozens of data brokers and find out who family members are. You can get location, sometimes precise geolocation. You can find out about interests or what people have done. And so these little facts, these little factoids, are things they can use to say, "Oh, you were at the Braves game three weeks ago. You were the one millionth fan for the year and you won a big prize, and we're going to wire you the money. Give us your routing number."

So all these sorts of things can be used by hackers, and as y'all talked about, now they can do it at scale. AI can just automate it all for you: automate the script, automate the email, automate whatever the attack is, and, like you say, who knows what they're going to be? Strong privacy legislation can help rein that in. Like I said, about 20 states have now passed legislation, Colorado being one of the first.

Those are great advances, but they still put a lot of onus on users. Users have to go out of their way to track down these 600 data brokers and opt out. That's hard to do. Even the opt-in solutions are hard: you need permission, and that's very difficult. So we would recommend something that protects by default, so you don't have to put all the onus on people to track down their data.

Sen. John Hickenlooper (D-CO):

Right, I agree completely, and I think it's been frustrating to almost everyone on this normally large committee that data privacy legislation continues to be elusive. It's hard to imagine. Dr. Farid, you discussed different tools and techniques available for detecting AI-generated media, including humans manually labeling content and automated detection tools. How effective are the different kinds of tools and techniques at detecting deepfakes in everything from voices to video? How well are they working?

Dr. Hany Farid:

Yeah, so the first thing to understand is that as a consumer, your ability to distinguish reality from AI is becoming increasingly more difficult. I do this for a living and I'm pretty good at it, and it's getting really hard. So putting that onus on the consumer to look for the hands, look for this, look for that: that is a false sense of security that will simply not be true tomorrow, let alone three months from now.

In terms of automation of tools, there are two basic approaches to automation, what we call proactive and reactive. Proactive, you've already heard about from Mounir and Truepic. This is at the point of creation: whether it's an AI-generated voice, an AI-generated image, or an image that I took with one of these devices in my hand, you can cryptographically sign the source so you know provenance.

That today is the only tool that works, and this is the important part, at scale. Your question needs one additional piece, which is: at what scale do you want to operate? If you want to operate at the scale of the internet, with billions of uploads a day, this today is the only technology that works at that scale.

The reactive techniques say, "Well, we have a bad actor who's not using credentials, or the credentials have not been deployed widely," so we wait. We wait to get the phone call, we wait for the image to show up on social media, and we respond to that. And there, the answer on efficacy is: it depends. If you give us enough time, we'll figure it out. We're pretty good at this. But the half-life of a social media post is measured in about 30 to 60 seconds, so it's not like you have a lot of time once it gets onto the social media platforms for you to respond.

So my view of these things is we need the proactive solutions, we need the reactive solutions to clean up the mess that's left behind. And for consumers, I think it's not so much about educating them on how to detect this, but it's educating them on good digital hygiene on how to be careful and how to protect themselves.
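
Dr. Farid's point about scale can be illustrated with the hypothetical provenance helpers sketched earlier (again, invented names, not a production detection API): proactive verification is a single cheap signature check at upload time, while reactive forensics only begins after the content is already circulating.

```python
# Toy usage of the earlier sketch: proactive provenance check at upload time.
# Assumes make_credential / verify_credential and the ed25519 import from
# the previous example, plus a local image file to stand in for a photo.
key = ed25519.Ed25519PrivateKey.generate()
photo = open("photo.jpg", "rb").read()  # any local image file
cred = make_credential(photo, {"source": "camera"}, key)

# A platform can verify provenance with one cheap signature check at upload...
assert verify_credential(photo, cred, key.public_key())

# ...and any post-signing alteration of the content breaks the binding.
assert not verify_credential(photo + b"\x00", cred, key.public_key())
```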

Sen. John Hickenlooper (D-CO):

Great. I agree completely. Ms. Mani, again, thank you for sharing your family story. I know that's not an easy thing, and thanks to you for all your advocacy to address some of the harms that are being perpetrated through AI technology. From your perspective, why is it so important to consider the perspectives of the people impacted by technology when we're looking at policy responses that address some of the harms to individuals?

Dorota Mani:

I feel we have to deal with perspectives, and many times with misinformation. So education equals prevention, I feel, should be at the forefront on any type of platform: regulations in schools, legislation in government, as well as conversations with our students and children. I think it's equally important to send a message to our girls that they are worthy and they should demand certain respect, but at the same time, we need to hold the bad actors accountable and educate them as well.

We all know that the biggest protection is just the fear of going to jail. At this point, the FBI released statements this year saying that sexually explicit deepfakes are illegal, and it's just surprising that our education department didn't follow with any regulations. We all know what has happened right now in Pennsylvania to 46 girls, and it's really not a unique situation. It has been happening, and it will keep happening, unless we start educating our youth, because they are our future.

Sen. John Hickenlooper (D-CO):

Absolutely. I have to go vote. So I'm going to excuse myself, turn it to Senator Luján from New Mexico, but I will be back. I have more questions.

Sen. Ben Ray Luján (D-NM):

Thank you, Mr. Chairman. I want to recognize our chair and our ranking member for this important hearing to each of our panelists who are with us today. According to the latest US Census data, over 43.4 million people speak Spanish at home, making it the second most spoken language in America. This holds very true in my home state of New Mexico. My question is for Dr. Farid, how accurate are current deepfake detection tools for audio and video deepfakes in Spanish compared to English?

Dr. Hany Farid:

Good. That's a good question. There are two different categories of detection schemes. First of all, the proactive techniques are agnostic as to the language, right? They're there at the creation; they don't care what language you're speaking. So for the kind of stuff that you'll hear about from Truepic, it doesn't matter. For the reactive techniques, there are two categories: there are ones that are language-dependent, and there are ones that are language-agnostic.

We like the language-agnostic ones. It's a big world, there are a lot of languages out there, and English is a relative minority. Many of the techniques that we have developed don't care about the language, and the accuracy stays roughly the same. The accuracy that you should expect, however, in the wild, when you're dealing with things that are coming through your phone or what you are absorbing on social media, is, on a good day, 90 to 95%. So you're going to misfire, and that's true whether it's in Spanish or English. There are ones that are language-specific; they perform at around the same accuracy, that 90 to 95%.

Sen. Ben Ray Luján (D-NM):

What's driving these differences in performance, and what's needed to reach parity? Understanding there are non-agnostic tools out there as well, is there more that can be done, not just here in the United States, but as we engage with global partners in this space?

Dr. Hany Farid:

Yeah, the bias comes mostly from the training data. Most of the content online is in English, and certainly what most academics look at is English-language content, because most of this research is happening here in the US. I think the next most common language that we see is Spanish, and so it is a close second, for obvious reasons. But I would say the gap is not as bad as you think it is, because most of the techniques that we and others have developed are language-agnostic. I'm worried about all kinds of bias in the system, by the way, but this is not on my top 10 list.

Sen. Ben Ray Luján (D-NM):

I appreciate that, sir. Mr. Ibrahim, do the C2PA standards outline how to display content origin information in Spanish, so that Spanish-speaking consumers can understand whether the content they are viewing is AI-generated?

Mounir Ibrahim:

Thank you, Senator. So the mechanism of provenance, as Dr. Farid mentioned, is largely mathematical, right? You are going to see the content credential pinned on the digital content. I don't know if you saw some of the sample images we had earlier, but that aspect of it would be fairly standard across languages. However, the spec itself, and the information about our spec, is something we want to move toward internationalizing.

We've spent a lot of time this year within the C2PA raising awareness internationally, because this is not just an American or English-language standard. This is, in fact, a global standard. So one of the things we did in October is we worked with the government of France at the French Embassy here in Washington to raise awareness among the G20 community and other countries that are interested in looking at digital content provenance. So internationalizing it, not only into the Spanish language but in parts of Asia, Africa, et cetera, is something we are certainly focused on.

Sen. Ben Ray Luján (D-NM):

It's my understanding that the widely held position is that displaying this information should remain voluntary. Do you agree with this approach? Why or why not?

Mounir Ibrahim:

Yes, Senator, for authentic capture. If I have my smartphone and I am taking an authentic picture or video, absolutely, it is my choice whether I want to add content credentials. Now for gen-AI, the platforms themselves have the provenance worked in, so all the outputs, think of it almost as an assembly line, are getting cryptographically hashed with this standard. So I do believe it's a slightly different approach for gen-AI, but for authenticity and authentic capture, 100%, sir.

Sen. Ben Ray Luján (D-NM):

One year ago, at a hearing before the Senate Aging Committee, Gary Schildhorn testified about a call he received that appeared to be from his son Brett, pleading for help from jail. Mr. Schildhorn was so upset by the authentic-seeming call that he just about sent the $9,000 in bail money that was being demanded by a fake public defender.

A 2024 FTC report found that older consumers have experienced an estimated $61 billion in financial losses due to scams over the last year. Older consumers lose more money to scams on average compared to younger consumers. Nearly 1 in 10 adults over the age of 65 have dementia, and approximately 25% of older adults live with mental health conditions. Older consumers are a clear target of phone scams, personalized phishing emails, and other deepfake tactics. Mr. Brookman, what is the most important thing that policymakers must do to protect older consumers from being scammed through these AI-enabled tactics?

Justin Brookman:

Yeah, thank you for the question. There has been a lot of academic research showing that older Americans, even setting aside dementia, are more likely to fall prey to these schemes. I think anecdotally we probably all recognize that. My own grandfather called me and said, "Hey, I got an email from frenchlottery at yahoo.com. I'm a billionaire now." I said, "No. No, papa, you're not." But I think user education is sadly the most important thing. Talk to your family members.

The government should be doing more. Senator Blackburn mentioned the FTC says it's a $10 billion problem; that probably massively understates it, and I'm sure it's much greater than that. So we need to expend the resources to teach people about it, and it's going to be a constantly changing battle, because, again, these techniques are going to get more and more sophisticated. So each of us individually should be talking to our family members, but the government should be investing a lot more in education.

Sen. Ben Ray Luján (D-NM):

I appreciate that very much. [inaudible 01:19:25] the subcommittee for questions. Senator Blackburn.

Sen. Marsha Blackburn (R-TN):

Thank you, and thank you all for the discussion. I know that Senator Hickenlooper asked about data privacy and the importance of getting that done so that people have the ability to protect their presence online. And Mr. Farid, I'd like to just ask you, when you look at this issue, what kind of added protection could having a federal privacy standard provide before people get into so much of this AI-driven content?

Dr. Hany Farid:

Yeah, I wish I had good news for you, but I don't. So first of all, there are about 350 million people in this country, and our data is out there, and there's no bringing that data back in. So maybe we can protect the next generation and the one after that, but for the 350 million of us that have been on the internet for a few years, that game is over.

Here's the other problem: if you look at gen-AI today, to clone your voice, I need about 20 seconds of audio. That is not hours of data being mopped up; it's 20 seconds, and soon it will be 10 seconds, and soon after that it will be five, which means somebody can be sitting next to you in a cafe and record your voice, and they have it. To put you into deepfake imagery, one image. You've got a LinkedIn profile? Done. So how do you protect somebody when it's not about vacuuming up tons and tons of data? Don't get me wrong, it should be criminal that we don't have a data privacy law in this country, but I don't think-

Sen. Marsha Blackburn (R-TN):

Well, it would've given us those protections-

Dr. Hany Farid:

It would've helped.

Sen. Marsha Blackburn (R-TN):

... years ago. We've tried for 12 years-

Dr. Hany Farid:

I-

Sen. Marsha Blackburn (R-TN):

... to get this done and big tech has fought it, so-

Dr. Hany Farid:

Yes, they have.

Sen. Marsha Blackburn (R-TN):

... it is truly frustrating. Mr. Brookman, with Consumer Reports, you all have quite a reputation with consumers coming to you for information. So talk just a little bit on how you educate people.

Justin Brookman:

Yes. So we have a lot of tools. We have a tool called Security Planner that we try to franchise out to folks. We try to make a lot of our guidance available in the Spanish language as well, to make folks aware of: what are you worried about? What's your threat model? Are you worried about an ex-boyfriend? Are you worried about a scammer? And we try to walk them through what they should do. Look for the warning signs, false urgency, right? Like, "We need the money right now." Anyone asking to be paid through dodgy means: if your boss is asking for $500 of gift cards from CVS, that's probably not actually a legitimate thing.

And so we try to give practical guidance. Again, we can't make it too overwhelming for folks; this can't be a full-time job. We try to give people the basic information that they need to get through their day, basic cybersecurity practices like having different passwords, because if you do get hacked on one site, a different passphrase on another site keeps you protected there.

One thing that could potentially help with some of these grandparent scam attacks is having a family safe word, so you know it really is an accident only if they say "rutabaga," or whatever your family's safe word happens to be. Again, this does put a lot of onus on individuals, but unfortunately, that's the world we live in, and so we all have an obligation to be a little bit more savvy.

Sen. Marsha Blackburn (R-TN):

Yes. And I know some of the senior-specific organizations are beginning to do education, because so many people are now using at-home health and other conveniences that much of their data, including sensitive data, things that are HIPAA-protected, is being transmitted. Mr. Farid, I want to come back to you. As you're looking at this, say 5 years from now, 10 years from now, because we look at the fact that, as I just said, we've never gotten that federal privacy standard, so treasure troves of people's information are out there. When you look at advancements and changes in AI 5 years down the road, what do you expect? 10 years down the road, what do you expect?

Dr. Hany Farid:

Everything's going to get worse. And by the way, I don't think it's 5 years. Here's something to think about: ChatGPT went from zero to 1 billion users in one year. 5 years is an eternity in this space. We need to be thinking about tomorrow and next year. But here's what we know: hundreds and hundreds of billions of dollars are being poured into predictive AI and generative AI.

The technology's going to get better, it's going to get cheaper, and it's going to become more ubiquitous. And that means the bad guys are going to continue to weaponize it unless we figure out how to make that unbearable for them. I think the threats are continuing, and I'm not extrapolating, right? We have seen this from the very early days of this AI revolution. From the very first days we have seen the weaponization of this technology, and it's only getting worse. You cited the numbers in your opening statement; those numbers will continue. The threats will continue.

Sen. Marsha Blackburn (R-TN):

Okay, then you talk about making it a heavy price to bear. So that means penalties and enforcements and involves law enforcement. So Mr. Ibrahim, when we look at this as a global issue, what do you see as cooperation with some of our allies that are there? I know many times when we're at the EU working on privacy, data security, things of that nature, they talk about our hesitancy to put these regulations in place. So what kind of international cooperation do we need?

Mounir Ibrahim:

Thank you, Senator. Indeed, I do believe in working across like-minded countries to establish, at the very least, best practices that we can all agree on, and in encouraging private industry, technology companies, and consumers to align on them. We can educate together. We can work to establish, at the very least, a framework in which transparent information might be marked differently and elevated, because as Dr. Farid and my colleagues here on the panel noted, we're not going to be able to stop every bad actor. That's just not going to happen. You have open source models that are ungovernable, that are forked and used by bad actors to do whatever they please.

But what we can do is empower those that want to be transparent, and that can change the dynamics of the digital content we see and hear online. Social media companies, for example, can algorithmically elevate transparent content, so at the very least you're seeing transparent things, without suppressing anything else. So aligning with our partners, I think the G7 and the G20 are great places to start on, at the very least, best practices, even if we don't agree on full regulations like, for example, the EU's AI Act. That might go too far for the American audience, but there are areas of overlap.

Sen. Marsha Blackburn (R-TN):

Excellent. Thank you. And I yield back.

Sen. John Hickenlooper (D-CO):

Thank you. And thank you for filling in, Senator Luján. Senator Baldwin.

Sen. Tammy Baldwin (D-WI):

I want to thank all of our witnesses for appearing here today to share your expertise and your personal experiences. I believe it greatly benefits this committee and the American public to hear how technology is being used to increase the sophistication and prevalence of crimes and scams. I wanted to start with Ms. Mani. I especially want to thank you for sharing a deeply personal story with the public and with the committee about your family's experiences with artificial intelligence deepfakes. Would you be willing to share with the committee the various steps that your family took after finding out your daughter's likeness had been used in this way? What sort of steps did you take, and how did you know what to do and what to do next?

Dorota Mani:

Well, thank you, Senator. I'm going to start by saying that our situation is really not unique. It has been happening, and it is happening right now. I mean, we just heard about the Pennsylvania story. So last year, when we were informed by the school about what had happened to us, the first thing we did, obviously, was call a lawyer.

On the school side, we were informed that nothing could truly be done because there are no school policies and no legislation, and the lawyers repeated exactly the same thing. So my daughter heard from the administration that, in parentheses, she should be wearing a victim's badge and just go for counseling. And when she came home and told me, "I want to bring laws to my school so that my sister, my younger sister, will have a safer digital future," I said, "Oh, okay." And that's how we started.

We've been advocating on multiple fronts, because as complex as this technology is, the contexts of misuse are very complex as well. So we've been looking, obviously, at legislation as our first road for change, and surprisingly, it was the easiest one. There are so many bills right now that will hopefully be passing soon and helping victims of deepfake misuse.

We were looking into the route of education as prevention, as setting an example of how victims should be treated. The old tale of consent, I feel, needs to be reinforced even in 2025. But most importantly, we started by acknowledging that it's time for us to change the rhetoric. No matter where we went or to whom we spoke, it was always about: how do you feel as a victim? And even though how victims feel is an important part of the story, I feel it's time for us to start talking about the perpetrators, the lack of accountability, and the need for stronger protections.

For example, as I mentioned, there are so many AI bills and so many AI laws out there, and they're all important. Some of them resonate deeper with us than others. The Take It Down Act is very close to our heart for one important reason, which I feel the public should be educated about. Civil laws can be very expensive, and not many can utilize them. Criminal laws, not everybody would like to go that route, for multiple reasons: cultural, personal, religious, you name it. The Take It Down Act allows the victims to take control over their own image. And I think that is so important: it gives anybody affected the freedom to just move on with their life, which sometimes is all they want.

Sen. Tammy Baldwin (D-WI):

Thank you. Mr. Brookman, in Wisconsin, we're seeing a prevalence of technology being used to scam families, including seniors. This past March, the Dane County Sheriff's Office put out a warning about phone scams using artificial intelligence tools to mimic the voices and faces of loved ones to trick senior citizens into believing their loved ones are in danger and need help.

I know other law enforcement agencies in Wisconsin and across the country are hearing from victims of similar scams. I also hear directly from constituents on this matter. In 2023, Wisconsinites lost approximately $92 million to fraudsters and scammers. So once people realize they have become a victim of a financial fraud or scam, can you speak to what steps they must take to report the crime and recover damages?

Justin Brookman:

Yeah. So first they're going to try to get the money back, and it really depends on how they paid. If you paid for something with a credit card, there are actually pretty good protections in this country. If you used a peer-to-peer payment app, the protections are actually a lot weaker. I know there's some legislation before this committee to try to... Because again, it's just a different button on your phone, and suddenly the protections are a lot weaker. If you paid with crypto or a gift card or a wire transfer, it's going to be tough.

And so again, educating people just about those differences is really important. Freezing your credit can often be a good idea, again, depending on what got compromised. It's gotten a lot easier in some ways since the Equifax data breach, but it's still three different bureaus. If there are family members you need to get to as well, that's still labor you need to do. And it is a good idea to report: report to local law enforcement, report to the FTC, or report to your state attorney general.

If you get locked out of your accounts, that can be just an absolute battle to get back in if the scammer takes them over. I think Mat Honan famously wrote a really detailed story about how he lost access to Gmail and the journey to get back in. So it can be absolutely overwhelming and dispiriting, and it puts a tremendous burden on people who've already been ripped off.

Sen. John Hickenlooper (D-CO):

Great. And as the chair of the subcommittee, I do have the discretion to ask more questions, even though I had to race off to vote, so hopefully you'll indulge me. I won't go on too long. I realize everybody's busy and really do appreciate you making the time. Dr. Farid, a lot of people talk about watermarks and other ways of designating, some way of telling authentic from synthetic content, or at least what the origin is. But some kinds of watermarks can be manipulated, removed, impersonated. What are examples of methods to make these types of designations more secure, and how resistant can they be?

Dr. Hany Farid:

Yeah, so when we talk about content credentials, we have to go beyond watermarks. There are three legs to the stool here, all right? So the first leg is what's called metadata. This is where you simply attach text to the file associated with the image, audio, or video that says, "I created this at this date and time, and it is AI or it is natural." And that text can be added or removed, so it's not the most resilient, but 10% of the people don't know what metadata is, and you'll catch them.

The watermark is the second technique where instead of adding it to the file, you literally embed it into the underlying content. So what's nice about watermarks and metadata is you are tagging content and then putting that content into the wild where it can be identified from the piece of content and only the piece of content. But that upside is also the downside because if the tag is there, somebody, a sophisticated actor, can reach in and rip out the metadata and yank out that watermark. And different watermarks have different resilience.

So the third leg of the stool is to extract a digital signature from your content. So if, for example, OpenAI creates an AI-generated image, they insert the watermark, insert the metadata, and then pull out a signature that they store server-side. And what that allows them to do is, if somebody attacks the watermark and the metadata, they can reattach the credential.

So those three things: each one in isolation doesn't get you where you want to be, but the three together, now to your last question, are good. Are they perfect? No, but they lop off a nice chunk of the problem. And then those reactive techniques that I talked about, plus the public education, plus the regulatory piece, plus all those other things start to fill in the gaps. So I'm all for it, but we should be very clear that this is not bulletproof; this can and will be attacked, right? But we've raised the bar. Everything in cybersecurity is a mitigation strategy. We don't eliminate threats. We simply mitigate them.
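[To make Dr. Farid's three-legged stool concrete, here is a minimal illustrative sketch in Python. It is a toy, not any vendor's actual pipeline: the metadata is a plain dict, the watermark is a naive least-significant-bit tag, and an exact SHA-256 fingerprint stands in for the robust perceptual signatures production systems store server-side. Every name and value in it is hypothetical.]

```python
import hashlib

# Server-side ledger kept by the content creator: fingerprint -> credential.
SERVER_SIDE_LEDGER: dict[str, dict] = {}

def make_metadata(generator: str, ai_generated: bool) -> dict:
    """Leg 1: metadata -- text attached alongside the file. Easy to strip."""
    return {"generator": generator, "ai_generated": ai_generated}

def embed_watermark(content: bytearray, bits: str) -> bytearray:
    """Leg 2: watermark -- a tag embedded in the content itself.
    Toy version: write one bit into the low bit of each leading byte."""
    for i, bit in enumerate(bits):
        content[i] = (content[i] & 0xFE) | int(bit)
    return content

def register_signature(content: bytes, credential: dict) -> str:
    """Leg 3: signature -- a fingerprint stored server-side, so the
    credential can be reattached even if legs 1 and 2 are ripped out."""
    fingerprint = hashlib.sha256(content).hexdigest()
    SERVER_SIDE_LEDGER[fingerprint] = credential
    return fingerprint

def recover_credential(content: bytes) -> dict | None:
    """Look up content whose metadata and watermark were stripped."""
    return SERVER_SIDE_LEDGER.get(hashlib.sha256(content).hexdigest())

# Usage: tag a hypothetical AI-generated "image", then recover its label.
image = bytearray(b"\x89fake-image-bytes-for-illustration\x00\x01\x02\x03")
credential = make_metadata("hypothetical-model-v1", ai_generated=True)
image = embed_watermark(image, "1011")
register_signature(bytes(image), credential)
print(recover_credential(bytes(image)))  # -> {'generator': ..., 'ai_generated': True}
```

[Note that the exact-hash ledger in this sketch only resolves byte-identical copies; real systems pair it with robust, perceptual fingerprints so re-encoded or cropped copies still map back to their credential, which is part of why Dr. Farid calls the combination a mitigation rather than an elimination.]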

Sen. John Hickenlooper (D-CO):

Don't say that. Don't say that.

Dr. Hany Farid:

We don't eliminate threats. We simply mitigate them.

Sen. John Hickenlooper (D-CO):

Don't say that. Don't say that, however true it might be. And we've talked a lot about this notion of how we better inform consumers. Mr. Ibrahim, Truepic is obviously a leading voice, along with other industry partners, in developing content provenance technology to better inform consumers about the content they see, which is what we were just talking about. But what are the incentives to encourage more industry adoption of labeling standards and investments in related research?

Mounir Ibrahim:

Thank you, Senator. So from our perch, the largest incentive has been the scale and proliferation of generative AI material. We have seen entire industries that were digitizing for efficiency and cost savings well before the advent of AI become unable to compete in today's economy if they don't have authenticity, transparency, or provenance in the digital content they're operating on. So we've seen that as the greatest incentive, and that's why entire industries like insurance, business credentialing, warranty, and supply chain monitoring, et cetera, are adopting this technology.

However, on the consumer-facing side, think peer-to-peer commerce, home rental sites, online dating sites, although the incentives are there, there have not been the financial incentives, at least from our perch, or the consequences for these platforms to better protect, or at least give more transparency to, their consumers. And that has been lagging. In the past year, we have seen four of the largest social media platforms, LinkedIn, all the Meta sites, YouTube, and TikTok, I believe, all begin implementing, at various levels and with various commitments, the C2PA open specification. And that's great, but much more could be done in this regard.
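[As a rough illustration of what a C2PA-style credential involves under the hood, the sketch below signs a provenance claim bound to a content hash with an Ed25519 key. This is a simplified stand-in, not the real C2PA manifest format: actual implementations use certificate chains and trust lists, and all field names and keys here are hypothetical. It assumes the `cryptography` Python package.]

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical creator key; a real system uses certificates and a trust list.
creator_key = ed25519.Ed25519PrivateKey.generate()

def sign_manifest(content: bytes, claim: dict) -> dict:
    """Bind a claim (e.g. 'AI generated') to the content's hash, then sign both."""
    manifest = {"content_sha256": hashlib.sha256(content).hexdigest(), "claim": claim}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": creator_key.sign(payload).hex()}

def verify_manifest(content: bytes, signed: dict, public_key) -> bool:
    """Check that the content matches the hash and the signature is valid."""
    manifest = signed["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except Exception:
        return False  # claim was altered, or signed by someone else

image = b"fake-image-bytes"
signed = sign_manifest(image, {"ai_generated": True, "generator": "hypothetical-model"})
print(verify_manifest(image, signed, creator_key.public_key()))         # True
print(verify_manifest(image + b"x", signed, creator_key.public_key()))  # False
```

[Because the signature covers both the hash and the claim, altering either the pixels or the "ai_generated" label invalidates verification; that is what lets a viewer trust a label without trusting the channel it traveled through.]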

Sen. John Hickenlooper (D-CO):

Right. Interesting. Obviously, that's something this committee's got to pay attention to. Ms. Mani, you've discussed the need for increased AI literacy in our schools at all ages, from elementary school all the way through high school. How do you calibrate what the top benefits are as you look at it? When you look at benefits and risks, what do you present to our students? What do you think they should learn? What do they have to have top of mind? Let's put it that way.

Dorota Mani:

Thank you, Senator. I think that's such an important question, because I feel personally that many of the misunderstandings come from mis-education, misrepresentation, and just lack of understanding. So I think when we talk about AI, especially to our youth, we need to talk simultaneously about the pros and cons. It goes hand in hand. It's an amazing tool that has been advancing our society for years in research, in education, in art, in medicine, et cetera. At the same time, it needs to be regulated, because it has repercussions if misused. And I think it's equally important to educate about the repercussions for both the victims and the bad actors, because at this point, since it is illegal, and in many states we already have legislation, it can affect the victims professionally, personally, educationally if they apply for schools, emotionally, and socially.

But it can also affect the perpetrators, because they can end up in jail. And since we have no control over how certain children are raised at home, or they are simply too young to understand or comprehend, we must put things in perspective so they understand the consequences. And I also think, just to add to it, context is extremely important. Because we have K-12 children who are right now misusing AI. We have college, which is a very different context. And we have perpetrators and pedophiles, who should be looked at from a singular perspective.

Sen. John Hickenlooper (D-CO):

Yeah, I couldn't agree more. And consequences, we need those. As we do that informing and proselytizing, there have got to be consequences. Because you're right, people want to do the right thing, but people also frequently act properly because they face some consequence.

Dorota Mani:

Correct.

Sen. John Hickenlooper (D-CO):

Mr. Brookman, in your testimony, you advocate for stronger enforcement and clearer rules of accountability to mitigate consumer harms and damage from these various scams and frauds. Love that word, fraudster. Somehow it sounds softer. It's probably the wrong word. We should actually use a word that has a much harsher edge. What action do you think the FTC, the Federal Trade Commission, should take? If you were to have the opportunity, what additional powers or authorities do you see as needed to protect consumers from all these various harms we've talked about today?

Justin Brookman:

Yeah, absolutely. Thank you, Senator. So, again, I want to see more aggressive enforcement. There were the five cases they brought in September; those were very welcome. I want to continue to see cases against the tools when they're being used for harm. You talk about the fraudsters: the scammers are a dime a dozen, and they're all over the world, but if they're all using the same tool, and the tool could really only logically be used for harm, and that was one of the cases that the FTC brought, that can be very powerful. I think we need to see more staffing. Again, they're considerably smaller than they were in 1980. They need more technologists. I was actually part of the first technology group that was created at the FTC, the Office of Technology Research and Investigation, but it was only five people. I think it's expanded somewhat under Chief Technologist Nguyen, but they still need more resources.

Two more things. Penalties: when someone breaks the law, they need to pay a fine. That seems very basic. I worked for an Attorney General. We had that authority. All Attorneys General have it. The FTC doesn't have it most of the time. And then restitution. Since the AMG Supreme Court case, if they catch a fraudster who stole $20,000 or $200,000, they often lack the legal authority to make them even just give that back. Not even penalties, just give the money back. I know there have been hearings, I think in this committee and in the House, on restoring the FTC's 13(b) authority. I think there had been bipartisan consensus around it, but after the initial upset that it was taken away, I think it has fallen behind as a priority, which is too bad. There's actually a news story that came out just yesterday in which the FTC said, "We got a lot less money back for consumers this year because we can't get money back for consumers." So I think that needs to be fixed.

Sen. John Hickenlooper (D-CO):

That's so great. Thank you. Senator Sullivan from Alaska.

Sen. Dan Sullivan (R-AK):

Thank you, Mr. Chairman. And I'm going to just follow up on this line of questioning, because I think it's a really important one. I want to start with you, Ms. Mani. And I'm really sorry about what happened to your daughter. I have three daughters, and it's horrendous. I can only imagine what you guys are going through. So I want to start with you, but I'm going to pose the question to everybody once you've answered. And it just goes to what Senator Hickenlooper was asking about. I think it's what we're all struggling with, and it's responsibility and consequences. And when the responsibility and consequences are too diffuse, then nobody is deterred from taking this kind of action. So in the 2022 Violence Against Women Act, Congress passed legislation that created a civil cause of action for victims to sue individuals responsible for publishing non-consensual intimate imagery.

So that's different from the deepfakes, but it's along the same lines. So my question, as I'm thinking through this: the consequences could fall on a number of different entities. The AI platform that generates a deepfake of someone's daughter, something that's fake but looks real. The social media platform that posts it. And I would welcome hearing about your experience trying to get them to take it down, because just from having constituents in Alaska who deal with this, trying to get these companies to take something down that's harmful can be very challenging. And then third, of course, the person who was behind it. In this case, I think it was a classmate of your daughter's who only received a brief suspension. So all of these are different potential parties to be responsible. So I would ask for your view first in your specific situation.

And then I'd ask all of you, I'm a big believer in deterrence. You lay it out, hey, if you do this, you're going to get hammered, hammered. You're going to go to jail. You're going to spend five years behind bars thinking about it. That creates an incentive for people to not do these kinds of things. But right now, I think the system is set up in a way that's so diffuse in terms of who's going to pay consequences that it enables everybody to just ignore them. So if you'd like to try to address a few of those in your own situation with what happened to your daughter and then the rest of the witnesses, however you want to address that question.

Dorota Mani:

Thank you, Senator. And I think that's such a broad question, and it depends on the context. Each incident has a completely different context and should be looked at from its own perspective. In our case, and talking about taking it down, I think that's why we are so fiercely advocating for the Take It Down Act: because it allows the victims to take ownership.

Sen. Dan Sullivan (R-AK):

And did that happen in your case or did it take a long time?

Dorota Mani:

So our situation was slightly different because we've never had the image. We've never seen the image.

Sen. Dan Sullivan (R-AK):

Oh.

Dorota Mani:

But I can tell you that I'm in close contact with Allison from Texas and her mom. Ali and I testified together, and our daughters did as well. And in their case, I know they have the image, maybe a few images actually, and they were contacting Snapchat and asking them to take it down for eight months. And until Senator Cruz reached out personally, it had not been taken down. After that, it took them literally 24 hours to take it down. So I think accountability-

Sen. Dan Sullivan (R-AK):

But it shouldn't take calling. I mean, Senator Cruz is a great friend of mine, and he's soon going to chair this committee, but it shouldn't take a phone call from a United States Senator to get one of these companies to act. Trust me, I've made those phone calls.

Dorota Mani:

It shouldn't take a Senator or a Congressman and it shouldn't take a law. It should be an ethical responsibility of every platform.

Sen. Dan Sullivan (R-AK):

Yeah, I think, with some of these companies, they need a law.

Dorota Mani:

Some of them do.

Sen. Dan Sullivan (R-AK):

You can't rely on their good moral standing.

Dorota Mani:

But I think that's a clear example of how easily this can be mitigated, because if there's a will, there's a way.

Sen. Dan Sullivan (R-AK):

Did the perpetrator, and if you don't want to talk about it, I understand, because I'm sure it's difficult for your family, but did the perpetrator in the instance with your daughter receive any punishment? I heard he was suspended for a day or something pretty minor.

Dorota Mani:

That is correct, and I feel that's where a big responsibility lies, too, on our educational system: to educate our boys and girls, and our community in general, on what is acceptable and what is not. And they're hiding behind the lack of laws. But every school has a code of conduct that students fall under, and it's simply each administration's decision how they're going to handle an incident.

Sen. Dan Sullivan (R-AK):

Great. Any other thoughts from the witnesses? I know I'm already out of time, and it's a general question, but I think it's a really important one. Who should we make pay the consequences? And I think that helps with regard to the whole issue.

Justin Brookman:

I have thoughts. I mean, I strongly agree with you. Deterrence is incredibly important. Everyone in the stack probably has some responsibility, certainly the perpetrator. But also the people who make these tools: some of these tools have very, very limited positive societal use cases, like creating images based on a real person. Voice cloning, I mean, maybe some very edge cases, but it's overwhelmingly likely to be used for harm. If you're putting a product out in the market that's overwhelmingly likely to be used for harm, you should bear some responsibility. That was the FTC's Rytr case.

Sen. Dan Sullivan (R-AK):

When you say there are very few of these products that have a societal good, give me just a quick list of the ones you're thinking about that don't.

Justin Brookman:

I think generating images based on a real person, I can imagine some very theoretical cases, but 99 times out of 100 it will be used for evil. Voice cloning, maybe one in 30. But I mean, again, maybe a voice actor wants to franchise their voice, maybe a person who's losing their voice wants to maintain it for longevity, but it's overwhelmingly likely to be used for scams. At the very least, make them read a script consenting to this. Image generation maybe should not be available; maybe it should be, per se, harmful and illegal to offer that tool. Finally, the social media sites. This is an issue I've been working on for 20 years.

Back when Facebook was a baby company, I was at an Attorney General's office. We would send in complaints like, "Hey, I'm an underage user being sexually harassed. Do something about it." It was crickets. And then we reached a settlement with them to make them promise to do things within 24 hours. Over time, they have dedicated more resources to it. Have they dedicated enough resources? I would say no. And so maybe clarifying that they have some more obligations to respond to these things seems absolutely worthwhile.

Sen. Dan Sullivan (R-AK):

Okay. Anyone else on the witness panel?

Mounir Ibrahim:

I would just add, Senator, one question that comes to my mind, particularly when it comes to the distribution channels, the social media channels where a lot of these bad actions are taking place: are they doing everything they can from a technical perspective to try and mitigate them? While you're maybe always going to have bad actors who use open source models and create deepfakes, there's a rising problem of sextortion, particularly on social media. There are things that some of these social media companies can do to mitigate those chances. It will never eliminate it completely. Are they taking those actions? And asking the question of why not would, I think, by itself be critical.

Sen. Dan Sullivan (R-AK):

Okay.

Dr. Hany Farid:

I'll add a few things here. First of all, I believe in action and consequences. And I think that's true whether it's an individual, a trillion-dollar company, or a U.S. Senator, for that matter. And I think the fact is that Silicon Valley and trillion-dollar companies have gotten away with it for 25 years, and frankly, we've let them. Now, on the specifics, I think it depends on the content. If it's child sexual abuse material, the perpetrator, the AI company, the tech company that's hosting it, everybody goes to jail. This is easy. When it comes to non-consensual imagery, I think it's more complicated. If you're a 12-to-14-year-old and somebody has handed you a tool to create porn, are we surprised that this is exactly what they do? Do we think that 12-year-olds should go to jail for this? No. I think there should be consequences, but I don't think it should be jail.

But I think the person who created the tool and advertised it saying, "Take women's clothes off," and the financial institutions whose tags say, "Pay for it with Visa and MasterCard," and the Googles of the world that surface it in a web search all have responsibility for putting that power into that 12-year-old's hands. And then, of course, with frauds, it's the same thing. It's not just the person creating it. It's not just the tool. These things are not just being hosted on Facebook, on TikTok, on YouTube, on Instagram, on Twitter. They are being algorithmically amplified by those platforms because they monetize it. They have a responsibility for that, and we're letting them off the hook. So up and down the chain, I think people have to be held accountable. And until we change the calculus, the fact is that not reining in abuses is good for business, and we have to make it bad for business.

Sen. Dan Sullivan (R-AK):

Good. Great. Good answers. Thanks, panelists. Thank you, Mr. Chairman.

Sen. John Hickenlooper (D-CO):

Thank you, Senator. Now, we have on Zoom, Senator Klobuchar from Minnesota.

Sen. Amy Klobuchar (D-MN):

All right, well, thank you so much, Chairman Hickenlooper, for having this hearing, as well as Senator Blackburn, and thank you to Chair Cantwell as well as our Ranking Member Cruz. I care a lot about this subject. I guess I'd start with you, Ms. Mani. As you know, Senator Cruz and I lead the Take It Down Act, and we've actually passed the bill through the Commerce Committee. And I'm feeling very good. We had some issues we had to work out on the floor, and I think we have resolved those, so we can actually pass this bill. And I think you know the reason for this more than anyone, just looking out from your personal situation. In 2016, one in 25 Americans reported being threatened with or being a victim of revenge porn. Now, eight years later, that number is one in eight.

Meanwhile, the proliferation of AI-generated deepfakes is making this problem worse. Approximately 96% of deepfake videos circulating online are non-consensual porn. So I know, and you testified about your daughter, and I am so sorry about what happened to her. And as you know, our bill would ban the non-consensual publication of intimate images, real or deepfake, and require the platforms to take the images down within 48 hours of notice. In your testimony, you mentioned that schools have cited the absence of state and federal laws as a reason for not taking action when kids are victimized. How would a federal law on this have changed the horrific experience that your daughter and your family went through?

Dorota Mani:

Well, thank you, Senator. I think I'm going to just start with saying that, in our high school, it would have allowed the school a platform to act and to do anything at all. In schools in general, I feel, as I mentioned before, laws are a form of education for our society, especially right now that we're dealing with schools and the problem of deepfakes there. They're a form of education for our society about what is acceptable and what is not, and what has not been delivered at home or in school can be delivered through laws. And through fear of being criminally or civilly affected by their actions, it will prevent, at least in some instances, deepfakes from being created.

At the same time, as I mentioned before, criminal laws and civil laws, even though they are so important, will not always be the route victims choose in advocating for their image. But the 48-hour take-down component of your bill gives victims an immediate way to take ownership of their image. And for some of them, that's what they want.

Sen. Amy Klobuchar (D-MN):

Thank you, I appreciate that. Another difficult story involves one of my own employees. Her son is in the Marines, and her husband got a call. Someone had scraped the son's voice off the internet and left this very believable message, and talked to the dad and said, "Dad, I need help. I need help. I need money." And the dad thought it was suspicious, because where his son was stationed, he wasn't allowed to call. Anyway, I've since looked into this a lot, and it was, of course, a fake call, and we are starting to see families of people in the military preyed upon. And it only takes a few seconds of audio to clone a voice using AI.

Criminals can pull the sample from public sources, as we know, like social media. As a result, AI-enabled scams are becoming far too common. Dr. Farid, while there is technology to detect synthetic images and videos, I'm concerned we're behind on finding ways to detect a synthetic voice when it's heard over the phone. How can the federal government best leverage available technology to verify the authenticity of audio, particularly in cases where a consumer does not have access to metadata?

Dr. Hany Farid:

Yeah, Senator, you're right to be concerned. The problem with the telephone is that the quality of the audio that comes over is quite degraded, as opposed to, for example, a YouTube video or a TikTok video. And that inherently makes detection very difficult. There are also serious privacy concerns: are we going to listen to everybody's phone calls and monitor them for deepfakes or not? So I think the burden here has to shift to the producers. If you're an AI company, and you heard this from my colleague here, and you're allowing anybody to clone anybody's voice by simply clicking a box that says, "I have permission to use their voice," you're the one who's on the hook for this.

So going after the telecoms, look, we haven't been able to get them to stop the spam calls, why do we think we're going to get them to stop the AI fraud? So I think we have to go after the source of the creation of these deep fake audios.

Sen. Amy Klobuchar (D-MN):

All right, well, thank you. I see Senator Markey's face on the screen, so I will forego any additional questions. So thank you. And thank you, Mr. Chairman.

Sen. John Hickenlooper (D-CO):

Thank you, Senator Klobuchar. Senator Markey. You've got to hit... You're on mute. Senator Markey, you're on mute. You're still on mute. Okay. You can't hear anything. Maybe you'd better talk to Senator Klobuchar.

Sen. Ed Markey (D-MA):

Can you hear me now?

Sen. John Hickenlooper (D-CO):

Oh, there you are. You were there for a second. Try again.

Sen. Ed Markey (D-MA):

Can you hear me now?

Sen. John Hickenlooper (D-CO):

Yes.

Sen. Ed Markey (D-MA):

Hello?

Sen. John Hickenlooper (D-CO):

Yes, we hear you. You're loud and clear now.

Sen. Ed Markey (D-MA):

Can you hear me now?

Sen. John Hickenlooper (D-CO):

Yes.

Sen. Ed Markey (D-MA):

Oh, beautiful. Thank you. Our witnesses today have done a very effective job of demonstrating that artificial intelligence poses serious new risks of fraud and scams across different sectors. From AI voice cloning tools to deepfake images and videos, AI is threatening to undermine our ability to tell truth from falsity and to sow distrust in our understanding of reality itself. These scams and frauds will affect all Americans, but if history is any guide, marginalized communities are likely to be the greatest targets. Mr. Brookman, in your written testimony, you discussed a recent report by Consumer Reports that found Black and Hispanic Americans were more impacted by digital attacks and scams. Can you elaborate on your research?

Justin Brookman:

Yeah, so this is a report that we do every year to track how people are adopting cybersecurity techniques. Are they using sophisticated passwords? This is the first year we asked about scams: have you encountered scams? Has someone attempted a scam? At least half the people had, and I think overall about 10% of the respondents to our nationally representative sample said they had lost money in a scam. And the percentages, I can get them for you, but I believe it was twice as much for Black and Hispanic people. Of those who had encountered scams, I think a third of Black consumers and a third of Hispanic consumers had lost money, and I think it was about half of that for white Americans.

This is consistent with research that the Federal Trade Commission has done; they've published a couple of reports looking at similar issues and also found that. So I know we've talked a lot in this hearing about the need to educate senior citizens, and maybe military families as well, and I strongly agree with that. But we also probably need to reach out to marginalized communities, because that's where the money is, that's where people are losing money, to make sure they're educated and prepared for this new wave of AI-powered scams.

Sen. Ed Markey (D-MA):

Thank you, sir. Thank you for your great work, because these issues now are well documented. In fact, in 2021, both AARP and the Federal Trade Commission published reports that determined that Black and Latino adults were more likely to be targeted by scammers and by fraudsters. As artificial intelligence gives these bad actors new tools to target the public, communities of color will inevitably be the most impacted. So, Mr. Brookman, do you agree that AI-enabled frauds and scams will have a disproportionate impact on marginalized communities?

Justin Brookman:

Yeah, I think the evidence so far indicates that for traditional scams, and so for AI-empowered scams as well, it seems perfectly logical. So, again, yes.

Sen. Ed Markey (D-MA):

Yeah, so I agree, because any effort to address AI-enabled fraud and scams must give special attention to the unique harms facing those populations. And as we discuss AI's impact on marginalized communities, we must also remember that AI-powered algorithms can supercharge pre-existing bias and discrimination. For example, a 2019 report found that, due to biased mortgage approval algorithms, lenders were 80% more likely to reject Black applicants than similar white applicants. On another occasion, a major tech company found that its AI resume screening tools penalized resumes that included the word "women's" and recommended male applicants for jobs at much higher rates than similar female applicants. That's unacceptable. We cannot let AI stand for accelerating inequality.

And it's why in September, I introduced the AI Civil Rights Act, which would ensure that companies review and eliminate bias in their algorithms and put Americans back in control of key decisions in their lives. Mr. Brookman, do you agree that Congress should pass my AI Civil Rights Act?

Justin Brookman:

Yes, I definitely agree that algorithmic harm from bias and existing inequalities is something that needs to be addressed. Dr. Farid testified about that earlier. We also supported legislation in Colorado, the first law in the country to address that. But yes, the federal government should act, and we have endorsed your bill on this issue specifically.

Sen. Ed Markey (D-MA):

And I appreciate that. We shouldn't just leave it to the individuals to figure out how to protect themselves. We have to take action against the companies that set up these algorithms. So as Congress considers AI legislation, we just can't ignore how those algorithms will impact marginalized communities, and that's why we have to pass my AI Civil Rights Act. And all of this requires that consumers be able to authenticate the digital content and we need to put those protections in place. So I thank all of you for everything that you're doing to help to inform the public about these very important issues.

Sen. John Hickenlooper (D-CO):

Thank you, Senator Markey. And before closing, I think we're out of Senators for now, I would like to ask unanimous consent that statements from Microsoft on NCII and from the Center for AI and Digital Policy on consumer protection be entered into the record without objection. I see no objection, so done. Let me just say thank you. I appreciate it so much. I know you are every bit as busy as we are, and you took time out of your lives and schedules to come and share your experiences and your perspectives with us. I think these are important discussions, and it can be very frustrating to be in a body that moves at such a careful, slow pace when the things you are describing are so blatantly problematic and screaming for consequences, as I think several Senators said.

I do think, in the same sense that every great journey starts with a first step, that great legislation starts with a first hearing, or maybe this might be the second or third hearing for some of you. But I remain very optimistic, and I do feel a great sense of urgency for all the reasons that you have each made clear. So Senators will have until Tuesday, December 3rd to submit questions for the record. Witnesses will have until Tuesday, December 17th, our traditional two weeks, to respond to written questions. And with that, this hearing is adjourned.
