
Transcript: House Hearing on Safeguarding Data and Innovation

Gabby Miller / Oct 27, 2023

Gabby Miller is Staff Writer for Tech Policy Press.

US House hearing on "Safeguarding Data and Innovation: Setting the Foundation for the Use of AI," October 18, 2023, Washington DC.

Last week, on Oct. 18, 2023, the US House Energy and Commerce Committee kicked off its artificial intelligence series with a hearing titled “Safeguarding Data and Innovation: Setting the Foundation for the Use of Artificial Intelligence.” Led by Committee Chair Rep. Cathy McMorris Rodgers (R-WA) and Innovation, Data, and Commerce Subcommittee Chair Rep. Gus Bilirakis (R-FL), the series’ stated goal is to create a national privacy standard. It’s also meant to “explore the role of Artificial Intelligence across every sector of the economy, including healthcare, telecommunications, emerging technologies, and energy,” according to a press release.

In their opening statements, both Representatives Rodgers and Bilirakis emphasized the need for data privacy protections for the Americans whose data powers new AI tools, an issue the House Energy and Commerce Committee took up last year as well when it passed the American Data Privacy and Protection Act (ADPPA) almost unanimously out of Committee. “In fact, our legislation was touted as having the strongest online protections for children to date, and overall would provide stronger protections than any state law,” said Rep. Bilirakis in his opening remarks. For the purpose of this hearing, the subcommittee chair said he hoped to hear from witnesses about the role that data privacy law can play to set the US up for success as a global leader in AI. Members hinted at the possible re-introduction of the ADPPA this term, while the witnesses at the hearing all endorsed it.

Witnesses included:

- Raffi Krikorian, Chief Technology Officer, Emerson Collective
- Amba Kak, Executive Director, AI Now Institute
- Clark Gregg, actor, screenwriter, and SAG-AFTRA member
- Victoria Espinel, President and CEO, BSA | The Software Alliance
- Jon Leibowitz, former FTC Chairman and Commissioner

The hearing was the second appearance that Victoria Espinel has made on Capitol Hill in less than two months, with the other being a Senate hearing on “The Need for Transparency in Artificial Intelligence.” As the President and CEO of BSA, whose members are enterprise software companies, Espinel expressed the organization’s support for a comprehensive federal consumer privacy law because it “recognize[s] that protecting consumers’ personal data is a key part of building customer trust.” This would include things like consumers’ right to access, correct, and delete their own personal data. Witness Clark Gregg, an actor and SAG-AFTRA union member, similarly called for comprehensive data privacy protections. He stressed the need for actors and others to maintain control of their digital identities, especially relating to biometric data and likeness.

Emerson Collective Chief Technology Officer Raffi Krikorian discussed specific ways in which the Committee could build upon last year’s version of ADPPA. This might include expanding digital literacy, offering mechanisms for users to engage with applications in ways that limit data collection, and, as Espinel also argued, allowing users full access to their data, including visibility into the sales and swaps of user data between companies.

Amba Kak, executive director of the AI Now Institute, advised the Committee to stay the course in prioritizing data privacy, consumer protection, and competition frameworks to govern and regulate AI effectively. In her opening testimony, she said that starting from scratch and creating entirely new frameworks benefits large industry players. “It serves to delay, and to provide current actors with significant influence on the scope and direction of such policymaking,” Kak said.

On the other hand, former Federal Trade Commission (FTC) Commissioner (2004-2009) and Chairman (2009-2013) Jon Leibowitz worried that existing laws and regulatory authorities are not “an adequate match” for the problems caused by the misuse of AI. While it’s likely that things like AI-created deepfakes, for instance, could meet the FTC’s bar for a “deceptive practice,” it’s unclear whether such conduct could clear the other hurdles needed for the FTC to act on an alleged violation.

What follows is a lightly edited transcript of the hearing.

Rep. Gus Bilirakis (R-FL):

I'll recognize myself for an opening statement. Good morning everyone, and thank you to all the witnesses for being here for this very important hearing. For years, Congress, and especially this committee, has been examining one form or another of artificial intelligence, whether it was exploring how social media companies develop their algorithms or looking at next-generation vehicles and how they will transform safety and mobility. Central to these discussions has always been the need for America to lead in the development of standards and deployment and what AI means for our data. We kicked off our subcommittee hearings this year with a focus on our competitiveness with China, where we learned why it is critical for America to lead the world in key emerging technologies and why it is imperative for Congress, as a first step, to enact a data privacy and security law. AI has so many different applications, from auto-filling text messages or Excel spreadsheets all the way to generating unique images and speeches, but at the base of these applications is a need to collect and properly permission information to train and grow these AI models.

Without a data privacy and security standard that dictates the rules for how companies can collect, process, store, and transfer information, bad actors may have unfettered access to use and exploit our most sensitive information. We have seen true innovators in this space using information they collect to help provide goods and services or improve what they are offering to users. For example, I've heard of internet service providers using information they collect from their customers to build AI processes to better understand when an outage may occur and how to prevent it. Additionally, in the healthcare sector, AI can unlock greater speed in processing information on diagnostic imaging and screenings or discovering new possibilities within the drug development pipeline. That being said, there are entities that don't have our best interests in mind when they collect, purchase, or disseminate our information. In several instances, this is done under the radar, without the consumer ever knowing it happened.

We've seen how data collection practices have allowed data brokers to build profiles on Americans and sell them to any bidder, or even give them to foreign adversaries, unfortunately, or how we've seen Chinese companies like TikTok collect everything they need to build out their algorithms, which are blocked from leaving China's own borders but used to push harmful content to our children here in the United States. Earlier this year, we saw the horrible content TikTok has pushed to children, like self-harm and suicide encouragement, and now war crimes and terrorist content are being touted on the platform. This committee examined issues like this last year when we passed comprehensive data privacy legislation out of committee almost unanimously, which included requirements for companies to conduct impact assessments on how their algorithms could be harmful to children. In fact, our legislation was touted as having the strongest online protections for children to date and overall would provide stronger protections than any state law.

Unfortunately, the current reality is millions of Americans have no protections or control when it comes to their sensitive information, and there are bad actors, as we know, and companies who will abuse this gap in protections to their own benefit. Americans deserve more transparency around what companies do with their information, more control over how their information can be used, and better data security practices from the entities that use it. I look forward to hearing from our witnesses on the importance of enacting a data privacy law and how that can set the United States up for success to lead the world in AI. I thank all of you for being here today and I look forward to your testimony. I yield back, and now I recognize the gentle lady, my friend from Illinois, Ms. Schakowsky, for five minutes for her opening statement.

Rep. Jan Schakowsky (D-IL):

Thank you. Thank you, Mr. Chairman, and thank you to our witnesses for being here today. I want to say that artificial intelligence, or AI, has transformed our lives from healthcare to Hollywood in so many ways. The fuel for AI is consumer data, and in a way I feel like this is deja vu all over again. Here we are talking about how consumers feel afraid online, and when we talk about AI today, most people really don't even know what we're talking about, and yet they may feel the impact in all kinds of negative ways because of AI and algorithms. There may be people who are discriminated against; people of color may not be able to get the healthcare that others are able to get. We are aware of all kinds of experiences that people have had, and I wanted to give you another example. There used to be, well, there still is, a scam where you get a phone call that says your son is in deep trouble and you better send us some money in order to make sure that we take care of him.

And people, particularly older people, have fallen for that. Now they can have your son's voice. All it would take is maybe three seconds to develop that voice, and you absolutely think that you better act immediately or your child is in danger. And so AI presents all kinds of challenges to us. On the other hand, of course, there are advantages too, but we passed legislation. We passed a comprehensive consumer safety bill to protect people's data. Our bill passed, not the House unfortunately, but it passed this full committee almost unanimously to make sure that we protect consumer data, and we need to get back to that right away. And we need then to include, excuse me, to include AI in that. As we know, states are moving ahead without us. There are now 13 states in the last year that have adopted data privacy legislation. One of the things that we wanted to do was have something nationwide that would protect consumers, and I feel like we made such great progress in a bipartisan way in doing that, and we could move ahead now in adding AI to that as well, and we ought to get on it right now.

So I am really calling on all of us to be able to get back to being a Congress that can act. And when we do that, data privacy is among the very first things that we do. Data protection is something that we can do. People can be protected from scams, we can protect workers, we can protect businesses that also can become victims of AI. And so my call today is, let's get going. Let's get a Congress that can function, and when we do, let's move ahead on protecting consumers and their most precious and private data. And in our bill, Congressman Bilirakis, when we worked together, we did a lot for children as well, protecting children and the most vulnerable. So I say let's get back to business and let's finally get it done across the finish line. And with that, I yield back.

Rep. Gus Bilirakis (R-FL):

Gentle lady yields back. We appreciate those comments. I now recognize the chair of the full committee, my good friend Mrs. Rodgers, for her five minutes for her opening statement.

Rep. Cathy McMorris Rodgers (R-WA):

Good morning everyone. Yesterday, Congresswoman Debbie Lesko announced that she would not be seeking reelection, and I just would like to start by honoring her service. Is she still here? Yeah. Oh yeah. There she is. Yes. I wanted to begin by honoring her service to our nation as well as her leadership on the House Energy and Commerce Committee. I know that she's going to finish strong and we are going to have many more times where we can honor her, but I just wanted to recognize her and this decision and just let her know that we look forward to her finishing strong in the days ahead, but we're going to miss her in the next Congress.

Rep. Debbie Lesko (R-AZ):

Well, thank you chairwoman. And you guys still have 15 months to put up with me and I don't have to worry about reelection, so I might get wild. Who knows?

Rep. Cathy McMorris Rodgers (R-WA):

The best is yet to come. Well, welcome everyone to our series of AI hearings and the seventh data privacy related hearing that we've held this year. The promises of artificial intelligence are extensive, from more affordable energy and better healthcare to a more productive workforce and a better standard of living. Unlocking this technology's potential could radically strengthen American economic and technological leadership across the board. In addition, the power of AI can also be abused and raises serious concerns and challenges that must be addressed. It is critical that America, not China, is the one addressing those challenges and leading in AI's development and deployment. The best way to start is by laying the groundwork to protect people's information with a national data privacy standard. This is foundational, and it must be the first step towards a safe and prosperous AI future. If used correctly, AI can be a source for good. It could help us unlock life-changing technologies like self-driving vehicles and enhanced health diagnostic systems, enhance protections against national security threats and data breaches, and assist companies and law enforcement to better scan internet platforms for illegal activity like child sexual abuse material and fentanyl distribution.

To unlock these benefits, though, we need to first establish foundational protections for the data that powers many of these new AI tools, and it's vital that it be led by the US. Data is the lifeblood of artificial intelligence. These systems learn from processing vast amounts of data, and as we think about how to protect people's data privacy, we need to be considering first and foremost how the data is collected and how it's meant to be used, and ensure that it is secured. It's time that we provide people with greater transparency and put them back in control of the collection and the use of their personal information. Key to this is ensuring the safety of algorithms used by online platforms, which serve as the instruction manuals for artificial intelligence. By making sure algorithms are developed, operated, and trained responsibly, we can provide Americans with greater transparency for how their data is analyzed, how these systems identify patterns, how they make predictions, and how their interactions with online platforms are used to determine what content they see. Put simply, trustworthy algorithms are essential components in a responsible deployment of AI.

Failing to enact a national data privacy standard, or allowing China to lead the way, heightens the risk over the collection and misuse of data, unauthorized access and transfers, and greater harms for Americans and our families. We need to prioritize strengthening data security protections to safeguard people's information against threats. The theft and exploitation of sensitive information, especially biometric data, pose severe risks to individuals and organizations. If we establish stronger data privacy protections for Americans without equally robust data security requirements alongside those rules on collection and use, the number of data breaches and abuses will continue to rise and compromise people's information. Building those laws early would ensure greater public trust in AI, which will ensure future innovations are made in the US. To ensure American leadership, we must strike the right balance with AI, one that gives businesses the flexibility to remain agile as they develop these cutting-edge technologies while also ensuring the responsible use of this technology. A national standard for collection and handling of data will provide businesses, creators, and every American with clear and understandable protections wherever they are. I look forward to discussing the path forward today, and I yield back.

Rep. Gus Bilirakis (R-FL):

I thank the chair. I now recognize my friend from New Jersey, the ranking member of the full committee for his five minutes of opening statements.

Rep. Frank Pallone (D-NJ):

Thank you, Chairman Bilirakis. And I have to say that I also regret Debbie Lesko leaving. She's always smiling, pleasant, and tries to work on a bipartisan basis, so maybe we can convince you to change your mind, but probably not. To the issue of the hearing today, let me just say that, despite what the chairwoman says, I'm very concerned about what we can actually accomplish if this paralysis with the speakership continues. It's now 16 days that the House has been paralyzed without a speaker. We're 30 days away from another potential government shutdown. This hearing comes at a time when House Republicans' dysfunction is hurting the American people, weakening our economy, and undermining our national security. In my opinion, all year House Republicans have caved to the extreme elements in their party who have no interest in governing. They've forced cuts to critical federal programs in spite of a funding agreement between the former speaker and President Biden. And they came close to a government shutdown that would've cost our national economy upwards of $13 billion a week and forced our troops to work without pay.

And I just think the American people deserve better. Democrats have repeatedly tried to stop this dysfunction from hurting everyday Americans, but it's long past time for House Republicans to reject the extremists in their party. We should be working together to lower costs for American families and to grow our economy and the middle class, and it's time for the chaos to end. Now, last year, Chair Rodgers and I were able to work across the aisle and pass the American Data Privacy and Protection Act out of the committee by a vote of 53 to two. That legislation included many important provisions, including provisions focused on data minimization and algorithmic accountability. Clearly defined rules are critical to protect consumers from existing harmful data collection practices and to safeguard them from the growing privacy threats that AI models pose. And I strongly believe that the bedrock of any AI regulation must be privacy legislation that includes data minimization and algorithmic accountability principles.

Simply continuing to provide consumers with only notice and consent rights is wholly insufficient in today's modern digital age. Artificial intelligence is certainly not new. However, the speed at which we're witnessing the deployment of generative AI is staggering, and the effects that it'll have on our everyday lives are tremendous. There's been an explosion of AI systems and tools that answer consumers' questions, draft documents, make hiring decisions, influence the way patients are diagnosed, and make employment and housing decisions. Many of these systems are trained on massive amounts of data Big Tech has collected on all of us, and that's why the lack of nationwide protections around what data companies can collect, sell, and use to train these AI systems should concern every American. Since sufficient guardrails do not exist for America's data and AI systems, we're unfortunately hearing a growing number of reports of harmful impacts from the use of AI systems.

And this has included the creation of deepfakes, leaking of personal data, and algorithmic-driven discrimination. There have been instances where AI has been used to mimic real people's voices to convince consumers to send money to someone they think is a friend or relative. Chatbots have leaked medical records and personal information, and AI systems have discriminated against female candidates for jobs and people of color in the housing market. This is all extremely concerning. We cannot continue to allow companies to develop and deploy systems that misuse and leak personal data and exacerbate discrimination. And that's why we must make sure developers are running every test they can to mitigate risk before their AI models are deployed. Congress must also continue to encourage agencies like the FTC to enforce the laws they already have on the books. I commend the FTC for their work to fight scammers who have turned to new AI tools, like the ones that mimic the voice of a friend or loved one, in order to trick consumers out of their life savings. We must continue to fully fund these agencies as technology continues to advance and the threats to consumers continue to grow. We'll also continue to push for a comprehensive national privacy standard, which is the only way we can limit the aggressive and abusive data collection practices of Big Tech and data brokers, ensure that our children's sensitive information is protected online, protect against algorithmic bias, and put consumers back in control of their data. So I look forward to the discussion today. Mr. Chairman, I yield back the remainder of my time.

Rep. Gus Bilirakis (R-FL):

I thank the gentleman. And now we're going to, first of all, I want to thank the witnesses for being here, and we're going to try to stick to that five minute rule. We are going to stick to the five minute rule, for obvious reasons: we'll have a vote on the floor at approximately 11. That may change, but we're going to anticipate a vote at 11 and we'll recess and come back. So I want to thank you in advance for your patience. Our first witness is Raffi Krikorian, good Armenian name, Chief Technology Officer at the Emerson Collective. You're recognized for five minutes. Thank you.

Raffi Krikorian:

Thank you. Subcommittee Chair Bilirakis, Subcommittee Ranking Member Schakowsky, Chair Rodgers and Ranking Member Pallone, and members of the subcommittee. My name is Raffi Krikorian. I'm the chief technology officer at Emerson Collective, and I appreciate the subcommittee's ongoing interest in protecting the digital privacy rights of Americans. Personally, I've been fortunate to work in the tech industry for over 20 years: at Twitter as a vice president of engineering, and at Uber, where I was the director in charge of the self-driving car efforts. I now have the pleasure of working at Emerson Collective, where we recognize complex societal problems require innovative solutions. We use a unique combination of tools, philanthropy, venture investing, even arts and others, to spur measurable and lasting change in a number of disciplines, including technology. So I'd like to start with a very simple fact: we live in an age of rapidly increasing digital surveillance, and very few users understand the tradeoffs they make when they're using their phones or the web.

Not only are applications doing more with users' data than users expect, that usage is accelerating and evolving at an unprecedented speed. And within this regime, notice and consent are failing us. By now, we're so used to seeing advisory popups requesting our consent to accept cookies that we're more annoyed by them than informed by them. So in order to move forward, I propose we need to step back and look at the heart of the problem first. The data economy is becoming incredibly complicated. It's increasingly difficult to explain to everyday consumers how their data is being collected and being used. Amazon knows every product a user has ever viewed, how long they've dwelled on a specific page on their Kindle, as well as searches across all of Amazon's retail partners. And that's just Amazon. Users are generating lots and lots of data, and that data is being found in lots and lots of different places.

And don't get me wrong, users are generally delighted by these personalized experiences, but again, I contend that users don't understand the trade-offs that they're making for these experiences. A problem, though: the notion of data minimization comes into direct conflict with data-hungry artificial intelligence algorithms. Retailers and advertisers are gathering our personal data so they can make better predictions on how to sell us things. But in the case of AI and deep learning models, more data is essential to make AI function at all. AI developers pride themselves on models that detect patterns that humans themselves would not be able to see, so it therefore behooves them to feed the machine as much data as they possibly can. Data collection is no longer just a sales tactic; it is an existential necessity. And adding to the problem, we're seeing technological trends that go beyond just capturing data via applications.

One trend I might call out is this notion of voluntary data. Certainly users are willingly sharing data about themselves all the time, unaware or unconcerned about whose hands it might fall into. I'm speaking about social media, of course, which, along with the prevalence of cameras on smartphones, has caused an explosion of data to be put online. And one can argue that there might be no expectation of privacy in a public space, but I would contend that we're seeing 21st century technology collide with 20th century norms and laws. AI tools are being trained on these vast data lakes found in these public spaces, and we're training them to do things like identify people from an image on any camera anywhere. And these tools can do more than simply identify people; they can mimic them as well. Today's hearing alone will generate enough samples of my voice that anyone will be able to make a convincing synthetic replica of me, and I don't mean to be alarmist.

That's only one trend; I can obviously name more. So notice and consent won't be able to mitigate any of this. So what do we do? Well, first off, I believe we need increased efforts to promote and expand digital literacy, especially around the ideas of data and privacy. Users should better understand the data economy in which their personal information is being used and traded, and we need to incentivize application developers to do a better job of explaining to users upfront what they're consenting to and how their data will be used after the initial consent process. Users should have agency over how their information is flowing through software applications. Companies and end users need access to, and visibility into, the full lifecycle of their data. They should be given clear ways to understand the trade-offs between what they've given away and what benefits or harms might come to them or their community, and users should be able to both revoke consent and delete their data from the application if they so choose.

And these are just things we can do in a user-centric way, giving power and agency back to users. There's an entire other class of solutions that I'm happy to talk about around companies and application developers. I sincerely praise the bipartisan work this committee has done in its advancement of the American Data Privacy and Protection Act, and I believe that this should be treated as a foundation for more work going forward. The problems that we can identify today are just that: the problems of today. There will almost certainly be new issues to tackle as these technologies continue to evolve, and setting up legislative frameworks so that we can adapt quickly as these new issues appear is vitally important. So I thank you for the opportunity to share my perspective here.

Rep. Gus Bilirakis (R-FL):

Thank you very much, I appreciate it. Our next witness is Amba Kak, executive director of the AI Now Institute. You're recognized for your five minutes. Thank you again.

Amba Kak:

Chair Bilirakis, Ranking Member Schakowsky, Chair Rodgers and Ranking Member Pallone, as well as members of this committee, thank you for inviting me to appear before you. My name is Amba Kak, I'm the executive director of the AI Now Institute, and I have over a decade of experience in global technology policy. I want to make one overarching point in today's testimony, and that is that we already have many of the regulatory tools we need to govern AI systems. Now is the time to extend what we have in pursuit of ensuring that our legal regime meets the moment. Specifically, I encourage this committee to prioritize the passage of a data privacy law like the ADPPA, and in particular its strong data minimization mandates, which have already received the resounding support of this committee. In fact, this notion that we need to create new frameworks from scratch largely serves large industry players more than it does the rest of us.

It serves to delay and to provide current actors with significant influence on both the scope and the direction of policymaking. Data privacy law is a core mechanism that can help mitigate both the privacy and the competition implications of large-scale AI, and I'll build on this argument by making three specific points. First, data privacy regulation is AI regulation. Soon after the public release of ChatGPT, there were questions from the public on what data these models had been trained on, followed by panic when people began to realize that ChatGPT was sometimes leaking personal data accidentally. This example was not a one-off. There are ongoing privacy and security challenges introduced by large language models, which both routinely and unpredictably produce highly sensitive and inaccurate outputs, including personal information. Regulators in many parts of the world with strong data privacy laws moved very quickly.

Italy even issued a temporary ban on ChatGPT based on concerns that it was out of compliance, and this ban was lifted only after OpenAI provided an opt-out for users to prevent their conversations from being used for training data. Here in the US, while enforcement agencies have done and continue to do all they can with existing authorities, the lack of a federal privacy law undoubtedly held us back from demanding accountability, particularly as panic began to spread.

And taking the ADPPA as an example, here are a few of the tools we would have had, and would have, to regulate AI. First, we'd have data minimization, which would mitigate the supercharged incentives to excessively hoover up data about users. Second, we'd have data rights, which could compel transparency into these largely opaque AI systems. And finally, we'd have its civil rights provisions to boost what federal agencies are already doing under existing laws to curb algorithmic discrimination.

My second point is that when regulating AI, privacy and competition goals must proceed in concert; they're two sides of the same coin. As it stands today, there is no large-scale AI without Big Tech. Companies like Google, Microsoft, and Amazon dominate access to computational resources, and other companies as a rule depend on them for this infrastructure. This is closely related to their data advantage, which enables them to collect and store very large amounts of good quality data about millions of people through their vast market penetration. This data advantage can give models that are developed by Big Tech an edge over those developed without the benefit of such data. Now, this push to build AI at larger and larger scale only increases the demand for the very same resources that these firms have accumulated and are best placed to consolidate. Any regulatory effort must also address this market

reality. Privacy and competition law are too often siloed from one another, leading to interventions that could easily compromise the objective of one issue over the other, which is why, to conclude, of all of these provisions, we must strongly recommend legally binding data rules that draw clear lines around collection, use, and retention. Tech firms already have very strong incentives for irresponsible data surveillance, but AI pours gasoline on them, fueling a race to the bottom. Data minimization acts as a systemic antidote that addresses both first-party data surveillance as well as the consolidation of the existing data advantage in Big Tech. The FTC recently penalized Amazon for storing children's voice data, and Amazon justified this by saying that they would be using it to improve their Alexa algorithm. We can't let these practices continue. In conclusion, it's worth underscoring that there is nothing about the current trajectory of AI that is inevitable, and as a democracy, the US has the opportunity to take global leadership here in setting a trajectory for innovation that respects privacy and upholds competition. Data minimization would be a major step forward on both counts. Thank you.

Rep. Gus Bilirakis (R-FL):

We appreciate it very much, and thanks for sticking to the five minutes. Now I recognize Clark Gregg, who is an actor, by the way, I'm a fan, and a screenwriter, of the Screen Actors Guild-American Federation of Television and Radio Artists. You're recognized, sir, for your five minutes.

Clark Gregg:

Thank you very much. Thank you, Chairman Bilirakis, Ranking Member Schakowsky, Chair Rodgers, and Ranking Member Pallone. For me, it's a great honor to appear before this important committee. My name is Clark Gregg. As you said, I'm an actor, I'm a screenwriter, I'm a proud member of SAG-AFTRA and of the Writers Guild. Some of you might remember me as Agent Phil Coulson in the Marvel Cinematic Universe. In that role, my character had access to advanced and even alien technology that worked through biometrics, but that futuristic comic book tech has already become a reality. Data privacy issues affect everyone. Given that more and more of our data is protected by biometric technology, it's critical that we protect data such as voice prints, facial mapping, even personally identifying physical movements. We strongly support the committee's work to construct national data privacy and security protections so that our personal information cannot be used without our consent.

I'm here because this issue has been top of mind this year for my fellow writers and SAG-AFTRA members, actors, broadcasters, recording artists. We are currently in a fight, excuse me, to protect personal information such as voice, likeness, and audiovisual material online. Actors, like anyone else, deserve to have their biometric information protected from unauthorized access and use. Our voices, images, and performances are presently available on the internet, both legally and illegally, because we do not have data privacy and security protections across the nation. AI models can ingest, reproduce, and modify them at will without consideration, compensation, or consent. For the artist, that's a violation of privacy rights, but it's also a violation of our ability to compete fairly in the employment marketplace. And these fakes are deceptively presented to viewers as if those performances are real.

Like any performer, like any performer in a Marvel or any visual effects driven film, I've been scanned, I've been scanned many times. You step into a tiny dome where there's literally hundreds of cameras, they record every detail and angle of you, and they create something called a digital double, which scared me 10 years ago. It really scares me now. This can be used with your voice, either real or synthesized, to recreate your character, to create a new character, or, in the wrong hands, ironically, as you said, Chairman Bilirakis, a bad actor, it can create a new you that can roam the internet wreaking havoc in perpetuity. Now, it's hard enough for me to keep this me out of trouble. I don't have time to wrangle another one. Tom Hanks' stolen likeness was recently used to sell a dental insurance plan. Drake and The Weeknd released a new single that was streamed by millions.

This came as quite a surprise to both Drake and The Weeknd, because they had not released a new single. Even in my starving artist days, which went on for quite a while, I chose never to work in the adult end of my business, although a few of the cinema projects I read for came uncomfortably close. But I was recently sent very lifelike images of myself engaged in acrobatic pornography with, I will admit, abs that I would kill for. It's funny, but it's also terrifying. Deepfake porn is already a thing, and it's not a thing that I or my fellow performers signed up for, especially if we're not getting paid. I'm kidding. People, and indeed humanity, are more than just bits of digital information to be fed into a computer. And as AI grows exponentially by the minute, it's not just the film and television studios we have to worry about.

This issue impacts every single American. Biometric information, even something as routine as a voice print or a facial map, can be exploited in ways that pose a danger not just to the broader public but to national security. As you well know, as more companies use biometric information to verify identity, these risks expand exponentially. We must be vigilant and protect our data. We ask that key questions be answered: How and why is our biometric information being collected? How is it being used? Are there limitations on its use? What control do we have over the data? In our SAG-AFTRA AI guidelines, we demand the following answers: Are our voice and likeness assets being safely stored? Who has access to them? What happens to data when the contractual relationship ends? What happens if there's a data breach? Privacy laws in over two dozen other countries and many US states address these essential standards for biometric data, but overall, the US is behind the curve.

There are no comprehensive federal privacy laws, so individuals must depend on inconsistent state laws. SAG-AFTRA will fight to protect our members' voices and likenesses from unauthorized use, but all individuals deserve safeguards against unauthorized access to their biometric data. In addition to the protections in this bill, we believe Congress must put guardrails in place now to prevent future misappropriation of creators' digital identities and performances. Our sector is under assault today. It may be your sector tomorrow. In closing, I want to say that being an actor can be a strange way of life. What you spend your life learning to create with is yourself, your face, your body, your memories, your life itself. When it works, that very uniqueness creates a character, a story that is universal, ineffable, something that brings people together. For artists and creators, this is an existential threat. If we don't protect our words, our likenesses, they will be harvested, mimicked, essentially stolen by AI systems and those that use or own the technology. We've arrived at a moment that's eerily reminiscent of the moment when indigenous peoples first saw cameras and expressed a prescient fear that the machines might steal their very souls. As the dystopian sci-fi classics tell us, the computers may be coming for us, but we don't have to make it easy for them. I thank you for your time.

Rep. Gus Bilirakis (R-FL):

Alright, next we have Victoria Espinel, president and chief executive officer of BSA | The Software Alliance. You're recognized for your five minutes.

Victoria Espinel:

Thank you. Good morning, Chair Bilirakis, Ranking Member Schakowsky, and members of the subcommittee. My name is Victoria Espinel, and I'm the CEO of BSA | The Software Alliance. BSA is the advocate for the global business-to-business software industry. BSA members are at the forefront of developing cutting-edge services, including artificial intelligence, and their products are used by businesses of all sizes across every sector of the economy. I commend the subcommittee for convening today's hearing and thank you for the opportunity to testify. Safeguarding consumers' personal data and responsibly regulating artificial intelligence are among the foremost technology issues today. Constituents in your districts rely on a wide range of data services to support their local communities and economies, but to fully realize their potential requires trust that technology is developed and deployed responsibly. The United States needs both a comprehensive federal privacy law and a federal law that creates new rules for companies that are developing and using high-risk AI systems.

Actions on both priorities will help promote the responsible use of digital tools and protect how consumers' data is used. We appreciate this committee's strong bipartisan work to pass the American Data Privacy and Protection Act last year and your decision to address both privacy and artificial intelligence in that bill. Your effort proves that bipartisan consensus on privacy and AI can be achieved, and we look forward to continuing to work with you as you refine your work on these issues.

For too long, consumers and businesses in the United States have lived in an increasingly data-driven and connected world without a clear set of national rules. We need a federal privacy law that does three things: requires businesses to only collect, use, and share data in ways that respect consumers' privacy; gives consumers new rights in their data, including the right to access, correct, and delete their data; and ensures that companies that violate their obligations are subject to strong enforcement.

The tremendous growth of AI has underscored the importance of these issues. As this committee has recognized, a federal privacy law will create important new requirements for companies that collect and use consumers' information, including in connection with AI. Thoughtful AI legislation is needed too. It can further protect consumers by ensuring that developers and deployers of artificial intelligence take required steps to mitigate risks, including conducting impact assessments to reduce the risk of bias and discrimination. Privacy and AI legislation will help support the digital transformation of our economy and spread benefits broadly that lead to growth in new jobs across industries. Farmers can use AI to analyze vast amounts of weather information to use less water and maximize their harvest. Manufacturers can revolutionize how their goods are designed and made. Suppliers and distributors can retool how goods are ordered and delivered. And construction companies can build AI-generated digital twins of real-life cities to better understand the impacts of a proposed design.

Thoughtful federal legislation is the best way to promote trust and technological adoption. I want to emphasize that in order for legislation on these issues to be effective and workable, it has to reflect that different companies have different roles. In privacy, there is widespread recognition that laws must distinguish between companies that decide how and why they process consumers' data and the service providers that handle that data on behalf of other businesses. In artificial intelligence, there is a similar dynamic. At BSA, some of our companies develop AI, some of our companies deploy AI, many of our companies do both, and both need to have obligations. This committee recognized the importance of these distinctions as you advanced privacy legislation last year, and we look forward to continuing to work with you. I want to conclude by emphasizing the importance of US leadership on both privacy and artificial intelligence.

There is widespread consensus from industry, from civil society, and from consumers that the United States needs federal privacy legislation. We also need legislation that sets thoughtful rules for high-risk uses of AI. The bill this committee passed last year almost unanimously already reflects key aspects of those rules. Other countries are addressing these issues, adopting privacy legislation and moving quickly on AI regulations. The US is a leader in technological innovation, and we should be a leading voice in shaping the global approach to responsible AI. The time to do so is now. Thank you for the opportunity to testify, and I look forward to your questions.

Rep. Gus Bilirakis (R-FL):

Thank you so very much. And our final witness is Jon Leibowitz, who's the former chair and commissioner of the FTC. You're recognized, sir, for your five minutes.

Jon Leibowitz:

Chair Bilirakis, Ranking Member Schakowsky, members of the subcommittee, thank you for inviting me to speak today on two important and related issues: the need for a statutory framework governing artificial intelligence, and why federal privacy legislation is a critical, critical first step towards responsible development and deployment of AI. As you have heard from my fellow panelists, the rapid growth of AI technologies is bringing extraordinary benefits to every American, but it can also be used to create very real harms. Your committee deserves credit for tackling this issue with a series of hearings. But as we engage in that important debate, let's not forget the essential need for federal privacy legislation, which, as you also heard from my fellow panelists, addresses many of these very issues, including the use of personal data through AI. Now, we live in an era in which data is incessantly collected, shared, used, and monetized in ways never contemplated by consumers themselves.

AI has amplified these disturbing trends. It is because consumers have so little control over their personal data, and it is shared at will by companies, that AI can be deployed so perniciously. Some large companies have developed ethical approaches to the use of AI, but most businesses are looking for direction, and unfortunately they're not going to get too much direction from existing laws and regulatory authorities, which are not an adequate match for the problems created by misuse of AI. For example, the FTC has authority to prohibit unfair or deceptive acts or practices in or affecting commerce. And some commercial behavior, like using AI for identity theft or fraud, clearly violates the FTC Act and companion state laws. That's good. The FTC could likely enjoin the company that recently created an AI-generated version of Tom Hanks without his permission and used that image to peddle a bogus dental plan.

And by the way, I couldn't have made that up. It's not clear, though, that an AI-driven deepfake, even if it's deceptive, always comes within the definition of commerce. In other words, legislation is by far the best way to clarify in advance what responsibilities deployers of AI must consider and what risks they must disclose to others. What's the best approach? I doubt we know that yet. The European Union, through its AI Act, would classify systems according to the risks they pose to users. Some states are starting to look at regulating AI, and states can be laboratories of democracy, but no matter how well intentioned and thoughtful state laws may be, federal legislation around AI is far preferable to a patchwork of state statutes. And at the same time, Americans deserve a muscular federal law that will give us greater control over our own information wherever we live, work, or travel, and require more transparency and accountability from corporations.

I heard all the members say that today. Last year you wrote that bill, one that would create a foundation upon which AI rules could develop. Its provisions are stronger than any single state law and smarter in many ways than the GDPR that governs Europe. It shows that members on both sides of the aisle could work together on a quintessentially interstate issue to create a privacy regime benefiting all Americans. The ADPPA was not a perfect piece of legislation, and collectively you may decide to make some modest changes to it when you reintroduce this year's version, but you reported it out of committee by an overwhelming and bipartisan 53 to two margin. 53 to two. In contrast, Congress will need to do a lot of collective thinking before it decides where it wants to end up on AI.

Now, the unprecedented interest by lawmakers in AI-related issues is a welcome development. Indeed, Congress should work on crafting a framework for AI at the same time it protects consumer privacy, but a comprehensive AI law may be several years away. Comprehensive privacy law, though, should not take that long, and it is entirely within this committee's jurisdiction and Congress's reach. In fact, your privacy proposal included many of the same components upon which responsible AI development will be built: requirements for data security, restrictions on collecting information without consumer permission, mandatory risk assessments, obligations for companies to minimize data, prohibitions against the use of discriminatory algorithms, fining authority for the FTC, and protections against targeted advertising to minors 17 and under. So as you begin to consider the regulatory needs and bounds for AI, let me urge you to keep in mind your groundbreaking work on privacy legislation last Congress. Even if enacting such a law requires some complicated negotiations and a few difficult votes, which it will, you will have done something meaningful for American consumers. If you succeed, you will enhance American competitiveness, and you will have laid the groundwork for legislation making AI safe and effective. Thank you.

Rep. Gus Bilirakis (R-FL):

Thank you. I appreciate that. I agree. And I'll begin with the questioning now. We're going to try to get as many members as possible to ask questions before we recess. So I'll start again with Mr. Leibowitz. You have an extensive amount of experience, from chairing the FTC to serving in civil society groups and even staffing senators. There's a lot of interest right now about what to do, as you know, obviously, what to do about AI. But despite years of trying, we have no foundation for how consumer data is collected, used, and properly secured. We need to get the fundamentals in place. You mentioned in your testimony legislation this committee passed nearly unanimously that is stronger than any state law, and I appreciate you emphasizing that today, sir. Can you expand on that and speak to how important it is for us to have a preemptive national standard to ensure data privacy and security for our constituents and for American leadership on AI? I know you did talk about it, but please, if you have anything more to say, I want to give you the time.

Jon Leibowitz:

Thank you, Mr. Chairman. And I guess I would make this point: data travels in interstate commerce. It's not contained in a particular state, and consumers deserve a very high level of privacy protection wherever they live, wherever they work, wherever they travel. This committee knows that better than anyone else. And that's what a comprehensive privacy law would do. And look, we should give Californians and the California legislature a lot of credit, because they were the first state legislature to show us that lawmakers can pass a law that protects consumer privacy. But your data minimization isn't in the California law. Your limits on sensitive data by default, not in the California law. The prohibitions on discriminatory use of algorithms, not in the California law. The prohibition on targeted advertising to children, not in the California law. So it's almost like we are comparing apples and oranges. We need a federal law. Your federal law, or your federal proposal, is stronger than any state law. And I would just urge you to move forward with it, as I know you want to.

Rep. Gus Bilirakis (R-FL):

Thank you very much, sir. I appreciate it very much. I couldn't agree more. Mr. Gregg, I appreciate you traveling across the country to be here. Your testimony on the collection and use of what is fundamentally your data is insightful, not just as a Hollywood creator but also as an everyday American. I'd like to discuss another important element of your efforts, which is the need for better security of our data. We know these large AI systems harvest and scrape the internet for data, and that includes personal information, due to data breaches and hacking amongst other causes. This can be used for deepfakes and other scams that I will call digitized fraud. And the question is, how have you and the general public been harmed by this data being exploited and used for purposes that it wasn't intended for because there wasn't enough security around it? I know you gave some examples, but if you could elaborate on that, sir, I'd appreciate it.

Clark Gregg:

Like so many Americans, I'm a consumer, I'm a human, I have a family, in addition to being a performer. I shared some of the ways that your image will show up in ways that you never agreed to, some that are quite offensive, and I never signed off on that. And it's only a matter of time till those start to show up in video form that become, as AI expands exponentially, more and more lifelike. As I said, that is a violation of the ultimate freedom: my right to free speech, my right to privacy, my right to exist as an entity that makes my own decisions. But I think what's most disturbing about this, as I've studied it and been very interested in it, because, as I said, the ramifications both for writers and filmmakers and actors are huge, and they're being fought right now.

One of the honors about being on the picket lines for us is that we feel that we're at an inflection point that's coming all around the nation. We just happen to be in a visual, visible union. But what I'm struck by is the way that the CEOs themselves who run these corporations have all signed a letter saying that the threat is equal to or greater than thermonuclear war. The experts on AI, they can't even really quite tell you what it's going to become. And so, to answer your question, we don't quite know what this is going to be. And, in my experience and probably yours, when there's technology that can generate a profit, very often the profit is what drives the pace, not what's best for humans. And machines, when we first picked up a stick, the concept was that they would work for us. And what it feels like to me, and I admit I watch too much, it feels like we're on a fast track to be working for them.

Rep. Gus Bilirakis (R-FL):

Thank you very much. I have other questions, but I'll submit them for the record. Now I recognize the ranking member of the subcommittee, Ms. Schakowsky, for her five minutes of questioning.

Rep. Jan Schakowsky (D-IL):

First of all, let me just say how much I appreciate that, pretty much to a person, all of our witnesses have now said that we need a comprehensive privacy bill, and we were well on our way, so we need to continue that. Ms. Kak, I wanted to ask you: you mentioned in your testimony, right at the front, that many of the tools that are needed to protect consumers from AI are already in place. Are you saying that we could move ahead right now because there are mechanisms that we have? And what are they?

Amba Kak:

Thank you, Ranking Member Schakowsky. I think the recent joint statement of our nation's enforcers said it eloquently, which is: there is no AI-shaped exemption to the laws on the books. And so the moment we're at right now, I think, is to first and foremost clarify and strengthen the laws on the books to apply to AI. The FTC has already opened an investigation against OpenAI based on its deception authorities. The EEOC and the CFPB have also issued guidance in their particular domains. And the chair of the SEC, Gary Gensler, recently said that he is worried and looking into the fact that the lack of diversity and competitiveness in AI models is really a financial stability risk. So that's one: I think we really need to adequately resource these agencies commensurate to the growing scale of the problem. But second, and we've all said this, there are laws in waiting. We have the ADPPA. We have done the hard work of distilling a globally leading privacy standard. So I guess my simple point today is that we shouldn't be reinventing the wheel. We have the tools. It's time to act.

Rep. Jan Schakowsky (D-IL):

Thank you so much. Mr. Gregg, first of all, I know that your industry right now, we all love our actors and the opportunity to see them, has been on strike now for six months. I'm sure some of those issues, as a supporter of labor, are more traditional labor issues. But I know that AI is one of those, and you've been talking about that. What would you say is the biggest concern now of workers, actors, that really threatens your business and their livelihood? Thank you so much.

Clark Gregg:

Every time I think that I've done everything that could possibly make a human nervous, they come up with something else. Thank you so much for your question. Our concerns, as I expressed, can be far-reaching and existential, but they're also very simple, in that we have an example: 15 years ago, as a member of the Writers Guild, we were on strike about compensation, the things that we have to survive in a business where you're essentially an independent contractor. Through strikes, we got health insurance; that way, we got residuals so that you have some chance to make some monetization of the work you do. When that happened, 15 years ago, the thinking was, well, listen, we can't really give you any real protections in streaming, the internet, that's not a thing. The day after the strike was resolved, Hulu was announced. And so when things moved to streaming, our compensation models, without really our consent, suddenly changed. And all of a sudden, perhaps it's a coincidence, all of the content moved, all the stories all moved to streaming, and our compensation drastically trailed off. And so while we have the existential concerns about committing our lives to telling stories and bringing the human soul to a collective medium, we also have survival issues. And that's that we need to have, I'm, I'm going to get the three C's from my colleague.

Yeah, okay, good. Thanks. I should know these; I know what they mean. Consent, compensation, and credit. Just for our image, the work we've done. I heard a really amazing AI researcher speaking about this last night, saying that what AI does is take human cognitive labor, which is something different than minerals, and essentially harvest it. So our work is something that's harvestable, and we need to be credited, compensated, and we need to have control over the way our selves are used.

Rep. Jan Schakowsky (D-IL):

Thank you. Thank you so much. I know I'm out of time. Let me just say that I am going to want to talk to all of you as we move forward, working on legislation that we can do on privacy and on AI. So thank you very much for your testimony.

Rep. Gus Bilirakis (R-FL):

I welcome that as well. The gentlelady yields back. Now I'll recognize the chair of the full committee, Ms. Rodgers, for her five minutes.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you, Mr. Chairman. As I mentioned, this is the first in a series of AI hearings for our committee; it's the seventh in terms of data privacy. And just let me add that the importance of protecting our kids runs throughout. Chairman Bilirakis highlighted data security. We've worked on data minimization, and we've had multiple layers that provide even more protections within a national standard for data privacy. The legislation isn't just about one provision; it's about all the provisions working together to achieve the strongest protections possible for everyone, including kids. I'd like to start with Ms. Espinel. Can you tell me how conducting impact assessments and calculating risk will serve us well in data privacy legislation and prevent harms in AI?

Victoria Espinel:

Yes, thank you very much. So impact assessments are a very important tool in terms of assuring that there's accountability, in terms of ensuring that companies are acting responsibly. I want to start off by saying we believe it's important, in privacy and in AI, that impact assessments apply both to what are often called controllers and processors in privacy law, and also to developers and deployers in AI. Those that are creating the AI systems and the companies that are using AI systems both should have obligations to conduct impact assessments. Those impact assessments will be slightly different, because those companies are doing different things. They have access to different data, and they can take different steps in order to mitigate risks. And so the obligations and the impact assessments should mirror that; they should reflect that. You can think of an impact assessment, in a way, like a report card for a developer, someone creating an AI system. They need to have an impact assessment that looks at what is the purpose of that AI system and what are the limits on that AI system. In other words, what should that AI system not be used for?

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. That's great. I know there's a lot more, and I have a couple more questions, but thank you. Mr. Krikorian, I appreciate the way you distilled potential AI harms, and it dovetails with Mr. Gregg's points on how sophisticated this technology can be. It underscores our concerns about how adversarial nations, including China with TikTok, could access and train AI models using our data to cause harm and manipulate us. So just very briefly, would a comprehensive data privacy and security bill similar to ours from last year, one that restricts data brokers and big tech, counter this threat to our security?

Raffi Krikorian:

Thank you for that question. I believe it would, for portions of it. I think that giving people transparency into where their data is going, and giving them the ability to consent to and control how it is used, is extremely important. I do highlight that I think there is some concern around these digital public spaces that people are surrendering their data into, unknowingly and at large volumes. And it's unclear whether or not those could be fully covered, because those are simply scrapable and downloadable by different parties. But from the explicit data collection standpoint, yes, I do believe that some form of this Act would prevent those harms.

Rep. Cathy McMorris Rodgers (R-WA):

Okay. Thank you. Mr. Gregg, my kids would be disappointed if I let you get away without a question, not to mention co-opting one of your lines. So we've focused a lot on the importance of foundational protections in data privacy, which are also important for trustworthy AI. It's a massive undertaking, with concessions from all parties and stakeholders. So would you agree that this is never going to work if we don't have something to unite behind, and that enacting data privacy legislation this Congress is important?

Clark Gregg:

First of all, thank you, and my greetings to your kids; I'm really impressed that they remembered that line from an episode I loved. Yeah, absolutely. I think what was just said about voluntary: voluntary means what you opt into. I see all these things popping up in California now, asking which part of your data do you agree to share? I don't know. I'm busy. I don't like that I think of something and two days later I get an ad for it. It's creepy. Pardon me, I don't know if I can say that, but I guess I can say creepy. I think that we're depending on guardrails to come from you. As was said, the borders don't exist here; this is all happening around the planet in microseconds. We depend on the guardrails that can be put in place nationally. This committee is so important, to me, to protect us as we figure out what this even is and what the ramifications are, because clearly most of us...

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. Thank you, thank you. Really appreciate everyone being here. This has been a great panel. We do have more questions, but I have to yield back right now. I'm out of time. Thank you Mr. Chairman.

Rep. Gus Bilirakis (R-FL):

Thank you. I thank the gentlelady, and now I recognize my friend from New Jersey, ranking member of the full committee, Mr. Pallone, for his five minutes of questioning.

Rep. Frank Pallone (D-NJ):

Thank you, Chairman Bilirakis. Last Congress, I was proud to chair this committee as we advanced the strong, comprehensive, and bipartisan American Data Privacy and Protection Act. This bill put consumers back in control of their data, stopped aggressive and abusive data collection by big tech, and rejected the failed notice-and-consent regime. It also required data minimization and algorithmic accountability in order to ensure companies collect only the data they need to serve their customers. So my questions are for Ms. Kak. Do you believe that notice and consent is a sufficient mechanism to protect consumers and their data from the harmful and abusive practices of tech companies?

Amba Kak:

Thank you, Mr. Pallone. The short answer is no. Notice-and-consent mechanisms are necessary, but they're far from sufficient. Can they be a powerful lever? Yes. And I think the best example we have of this is Illinois' BIPA law, where consent has actually been leveraged to shut down some of the most concerning uses of AI, for example Clearview AI. But even there, it is buttressed by a bright-line prohibition that prevents companies from profiting off the sale of our biometric information. Now, the core weakness of consent, of course, is that it completely breaks down anytime there is stark power asymmetry. It breaks down in the workplace; it breaks down in schools. But as Mr. Gregg just pointed out, arguably that power asymmetry affects all of us and is all-pervasive, right? And that is precisely why the ADPPA is so strong: it has the whole suite. It has consent, it has data rights, but crucially it sets the rules of the road so that those rules apply regardless of what the consumer so-called chooses.

Rep. Frank Pallone (D-NJ):

Alright, thank you. Now, how does the rapid growth of AI impact the urgency of implementing strong, comprehensive federal data privacy legislation that sets clear rules around data minimization and algorithmic accountability?

Amba Kak:

Thank you, Mr. Pallone. I would argue that data privacy has been an urgent priority for the last decade in the United States, but the events of the last few years, and maybe particularly of the last few months, only reinvigorate calls for urgency. There are three main reasons that AI makes the passage of a data minimization mandate crucial. The first is the obvious one, which is privacy. We're seeing new privacy threats emerge from AI systems. We talked about the future threats and how we don't know where AI is going to go, but we absolutely do know what harms these systems are already causing. They're leaking personal information; they could potentially be leaking patient data in healthcare contexts. These privacy risks are not abstract, even if the technologies are portrayed as abstract, magical systems. The harms are very, very real. The second is competition. As I mentioned in my testimony, this is very, very crucial at a moment where unchecked commercial surveillance is being incentivized by AI systems, and unless we have rules of the road, we're going to end up in a situation where the current state of play against consumers is entrenched.

And thirdly, data privacy law is crucial for national security as well. We need security norms in place. The way we like to say it is that data that is never collected is not at risk, and data that is deleted after a reasonable period of time will no longer be at risk. So we need to really minimize the attack surface. And I think at the current moment, in the absence of a federal privacy law, we risk not only competitive and privacy threats, but I would argue national security threats as well.

Rep. Frank Pallone (D-NJ):

Well, that gets to my last question, Ms. Kak. In addition to protecting consumer privacy, what other harms can be addressed with data minimization principles? For example, doesn't it help address data security challenges and national security concerns?

Amba Kak:

Absolutely. The way we like to say it, Mr. Pallone, is that we're essentially creating goldmines, honeypots really, for cybercriminals of all varieties. And as Mr. Gregg and others pointed out, we are actually incentivizing the creation of databases, including databases of kids' information, kids' images and videos. We have a recent example where it was reported that a large children's database was created and was being sold by a company called MegaFace. These kinds of practices are going to become the norm, and AI is only supercharging existing incentives for unchecked and invasive data surveillance. Alright, thank you so much.

Rep. Frank Pallone (D-NJ):

Thank you, Mr. Chairman. I yield back.

Rep. Gus Bilirakis (R-FL):

I thank the ranking member. The House just gaveled in, but out of respect to the witnesses and the audience, we're going to keep going as long as we possibly can. So I'll recognize Dr. Bucshon for his five minutes of questioning.

Rep. Larry Bucshon (R-IN):

Thank you, Mr. Chairman, for calling today's important hearing on something that will play an even larger role in Americans' lives: AI. With AI already being deployed in a multitude of economic sectors, from healthcare to manufacturing to defense, it is critical that Congress create an environment that fosters innovative uses of such technology while also protecting Americans from possible harms. Enacting a national data privacy framework, such as we passed through this committee last year with the ADPPA, to establish clear rules of the road for the US is a key factor in deploying effective AI that will help US innovators keep their edge against competitors abroad. One of the goals of implementing a national data framework will be to provide some certainty to consumers. Mr. Leibowitz, as you mentioned in your testimony with the case of the deepfake Tom Cruise ad and the use of AI in generating such content, do you think that regulation of AI content should or could include the use of something like a watermark or other indicator to consumers that content is AI-generated?

Jon Leibowitz:

Yes, I absolutely do. I think a watermark is something you should strongly consider as you think through putting the right guardrails on AI and making sure that people are compensated for their work, and also that consumers know when something is generated by AI and when it's generated directly by a human brain. But I'd also say one more thing, which is that, as you heard from my fellow panelists, so much of what you want to do to regulate and to put into place appropriate standards for AI is in the privacy bill that you reported out last year. It's the requirements for data security. It's the restrictions on collecting data without consumer permission. It's the mandatory risk assessments that Ms. Espinel pointed out. And so I would just encourage you, and I know you want to do this, not to kick the privacy can down the road.

You're the committee that has shown leadership in the last Congress and you can take it to the next level by enacting bipartisan legislation.
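
To make the watermarking and disclosure idea concrete: one minimal, illustrative approach, not any specific proposal discussed at the hearing, is for a generator to attach an "AI-generated" provenance label to image metadata. The sketch below uses Python's Pillow library; the tag names are hypothetical, and real disclosure schemes rely on cryptographically signed provenance or watermarks embedded in the pixels themselves, since plain metadata is trivial to strip.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
        """Attach a simple AI-generated disclosure to a PNG's text metadata."""
        img = Image.open(src_path)
        info = PngInfo()
        info.add_text("ai-generated", "true")   # hypothetical tag name
        info.add_text("generator", generator)   # e.g., the model that produced the image
        img.save(dst_path, pnginfo=info)

    def read_disclosure(path: str) -> dict:
        """Return any disclosure tags found in the PNG's text metadata."""
        img = Image.open(path)
        return {k: v for k, v in getattr(img, "text", {}).items()
                if k in ("ai-generated", "generator")}

A consumer-facing tool could surface these tags, but because they survive only cooperative handling, robust watermarking remains an open research and standards problem.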

Rep. Larry Bucshon (R-IN):

Thank you for that answer. And I mean, you were a regulator. What are some of the challenges in enforcing such requirements, and what might you suggest for how we would address them?

Jon Leibowitz:

Well, I would say this about one challenge. The FTC, and some of my fellow panelists know this because they were there, has terrific lawyers, and they want to represent the public interest, and they have some expertise in privacy and AI issues, and they're building it. I would say they'll need more resources, because this is a comprehensive and important piece of legislation, and you'll want to give those resources to them. When I was at the FTC, we started the Division of Privacy and Identity Protection because we thought privacy was that important an issue.

We brought in our first chief technologist in 2009 because we thought that was important. One of the things I like in your legislation is that it would create a bureau, bigger than a division, on top of the Division of Privacy. Privacy is so important in America today that I'm a little surprised the FTC hasn't done it unilaterally itself, but I think that's an important way to enhance and validate the importance of protecting consumer privacy.

Rep. Larry Bucshon (R-IN):

Great. Thank you. I was a doctor before I was in Congress, so one specific sector I'm excited to see AI make strides in is healthcare. I really believe technology is going to move healthcare down the field and also decrease costs. And I know we're already deploying many apps in healthcare systems. Mr. Krikorian, what role do you think enacting a national data privacy standard will have on the security and privacy of health information that's not explicitly covered by current HIPAA law?

Raffi Krikorian:

Thank you for that question. I would contend that, especially with these machine learning algorithms that capture ever more data in order to make better predictions, we need to expand those types of protections beyond just traditional healthcare information, to information that's collected by consumer applications, whether it be Apple Health or others, that might not already be covered. I think HIPAA is a really good framework for us to be thinking about what this could look like. SOC 2 might be another good framework for how those protections could be thought about, extended to the consumer data realm. Since that data can also be used as input into these deep learning models, I think that would be quite important.

Rep. Larry Bucshon (R-IN):

Yeah, it'd also be important for researchers to continue to have access to de-identified data if we're going to continue medical research, and that's one of the challenges of balancing this. So my time has expired. Mr. Chairman, thank you very much. I yield back.
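
As a rough illustration of the de-identification Dr. Bucshon refers to, the hypothetical sketch below drops direct identifiers and replaces a patient ID with a salted hash, so research tables stay linkable without exposing identity. The field names and salt handling are assumptions for illustration only; real de-identification follows defined standards such as HIPAA's Safe Harbor or expert determination methods.

    import hashlib

    DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "dob"}  # illustrative list
    SALT = b"per-project-secret-salt"  # in practice, a managed secret, not a literal

    def pseudonymize_id(patient_id: str) -> str:
        """Replace a patient ID with a salted hash so records stay linkable."""
        return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

    def deidentify(record: dict) -> dict:
        """Drop direct identifiers and pseudonymize the ID field."""
        cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        cleaned["patient_id"] = pseudonymize_id(str(record["patient_id"]))
        return cleaned

    record = {"patient_id": "12345", "name": "Jane Doe", "dob": "1980-01-01", "a1c": 6.1}
    print(deidentify(record))  # identifiers removed, ID replaced by a hash

Even pseudonymized records can sometimes be re-identified when combined with other data, which is why de-identification is usually treated as one safeguard among several rather than sufficient on its own.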

Rep. Gus Bilirakis (R-FL):

Thank you, Doctor, I appreciate it. I went as long as I could, but we do have a vote on the floor, so we'll recess for floor activity. The subcommittee stands in recess, subject to the call of the chair.

So thank you very much for your cooperation and your patience, and we'll be back. Okay, we're going to reconvene. I know members are starting to come in, but we want to get started as soon as possible. I want to thank all of the witnesses for your patience; we really appreciate it so much. So with that, Ms. Castor, you're recognized for five minutes of questioning.

Rep. Kathy Castor (D-FL):

Well, thank you very much, Chairman Bilirakis, and thank you to our witnesses for bearing with us. Your opening statements were very persuasive. It's really refreshing to hear advocates like you pressing Congress to act; it's long overdue for the United States of America to adopt a basic, fundamental online privacy law. So as you raise your voices and encourage us to do that, it is very well received here in the committee. Thank you for recognizing our bipartisan work on the ADPPA. For many years I've had particular concern for what big tech platforms do to exploit our children, to target them with advertising, to surveil them. Even though we have a COPPA law, it still is not followed, and it must be updated. So when you think about AI and kids, it seems like all of the online harms directed towards children would just be exacerbated, would be even more severe. And Ms. Kak, in your testimony you raised a couple of examples. Could you dive deeper into AI and the special considerations when it comes to young people?

Amba Kak:

Absolutely. Thank you, Ms. Castor, for that question and for raising this crucial issue. Let's start with the ADPPA, which has been the subject of today's discussion across the board. The ADPPA is, we feel, one of the strongest ways of not just protecting everybody's privacy, but specifically protecting the privacy of children. Kids' data is sensitive data under the act, which affords it the highest and strictest levels of protection. We have specific prohibitions against targeted advertising to children. And I'll stop there to say that these specific prohibitions on targeted advertising are essentially getting at the root of the business incentives for unfettered collection of data, right? They're fixing the business incentives. And what we've tried to emphasize today, what I've tried to emphasize, is that if you're tackling the business incentives, you're really future-proofing the law. So when people ask what we do about AI, we can point to the ADPPA and say it is structurally changing the business incentives so that companies, whether it's AI or the next big thing five years from now, are structurally not incentivized toward the irresponsible collection of data, particularly when that data is children's.

Moving away from this particular example of the latest AI fad, which is generative AI, we can look at examples of where facial recognition systems are absolutely proliferating across schools today. We're really happy to hear that New York State has actually banned the use of facial recognition systems in schools, because, speaking of notice and consent breaking down, the clearest example of that would be a school where you have kids who are essentially in no position to consent, to choose, or otherwise, to these invasive face surveillance tools. And I think the need of the hour is not just to put in place regulation, but specifically to put in place bright-line rules against the kinds of data collection that we never think should be permissible. It's kind of like the example that Mr. Gregg gave when it comes to an actor's personal privacy and the ability to control their own image.

Rep. Kathy Castor (D-FL):

So is what you're saying: gosh, think about children, who do not have the ability to consent when it comes to facial recognition, and that could be exploited by others when it comes to artificial intelligence?

Amba Kak:

Absolutely, Ms. Castor. And I don't think that these are abstract or theoretical concerns. We already know that there are databases of children's images and videos that are being used in real time by AI companies to generate further material that is extremely sensitive and implicates these children's faces, among other things. So again, we're not living in a moment where we need to hypothesize about these risks. These risks are ever present, and a data privacy law in particular, one that protects everybody's privacy, would be the best way to protect children's privacy as well.

Jon Leibowitz:

Yeah. And if I may add something.

Rep. Kathy Castor (D-FL):

Yes, go ahead.

Jon Leibowitz:

And you've been a leader on protecting kids. But as we know, kids act impulsively; they are a vulnerable population. That's why Congress passed COPPA in the first place, and in the ADPPA, the bill you'll reintroduce this year, there are going to be protections for minors 17 and under that we don't see anywhere else, not in a single state law. So just coming back to one of the topics of this hearing, I think it's critically important that you move that legislation, and you will be taking a major step forward if you can enact the ADPPA, or the ADPPA 2.0. It'll be very protective of children from abusive AI.

Rep. Kathy Castor (D-FL):

I agree. I don't think we have time to waste. I look forward to the bipartisan bill coming back. I yield back. Thank you.

Rep. Gus Bilirakis (R-FL):

Thank you. The gentlelady yields back. Now I recognize the vice chairman of the subcommittee, my good friend Mr. Walberg, from the great state of Michigan.

Rep. Tim Walberg (R-MI):

Thank you, Mr. Chairman, and thanks to the panel for being here and staying here, waiting around for us as well. Many of my colleagues know that protecting kids has been a top priority of mine while serving on this committee. Although protections currently exist for some children, the scope is limited and doesn't cover the terrible harms we are hearing about in the news. We know the stories, like the one in Marquette, Michigan, way up in the Upper Peninsula, a very rural area with very few people, where scammers pretended to be another student and extorted a 17-year-old football player, a good guy, who sadly took his own life after being blackmailed. And another where innocent photos and personal information of 10 middle school girls were turned into explicit images.

These kids are met with harassment and extortion by vile actors scraping their data and threatening to give digital forgeries, explicit pictures, to their friends and family. This is abhorrent, and Congress must work to protect children from these evil actions. Mr. Chairman, I ask for unanimous consent to enter this article into the record: how AI makes it even harder to protect your teen from sextortion online.

Rep. Gus Bilirakis (R-FL):

Without objection. So ordered.

Rep. Tim Walberg (R-MI):

Thank you. Mr. Leibowitz, how can the work of this committee on enacting a comprehensive data privacy and security law prevent these types of harms to our young people?

Jon Leibowitz:

Well, you have a number of provisions in the privacy legislation that this committee reported out 53 to 2 last year that really help protect young people, a vulnerable population, from abusive AI. It's not the end of all the protections, so you should keep working on AI issues, but it's important. So: requirements for data security, restrictions on collecting data without consumer permission, mandatory risk assessments, obligations for companies to minimize data, fining authority for the FTC and state attorneys general, the protections we just talked about for minors 17 and under, and then also…

Rep. Tim Walberg (R-MI):

And these will have the teeth to really do the job well?

Jon Leibowitz:

There's an absence of teeth now, right? And current law is inadequate. So I think this will take a really critical first step, an important step, in not letting all of that information out of the barn, particularly as it hurts consumers. And I just want to make one other point, which is that COPPA was brilliant. COPPA is 20 years old now and it needs to be updated, but it was really well written by Congress. So for example, one thing you did was allow state AGs to also enforce the law. Another thing is you gave the FTC rulemaking authority. And so when I was at the FTC, we updated the COPPA rule to prohibit the collection of precise geolocation information. Now, when COPPA was written, nobody knew what geolocation information was. When I got to the FTC, I didn't know what geolocation information was. But we realized it was a big gap in COPPA, and we were able to fix it. So I think you need to give the FTC some rulemaking authority, possibly within guardrails, and I think you did that last year, because they have expertise in this area.

Rep. Tim Walberg (R-MI):

Okay, thank you. Mr. Krikorian, I'd be remiss if I didn't bring up your testimony on self-driving vehicles. Michigan's the auto capital of the world, and the self-driving space is an area of great priority to Republicans on this committee. I've had reservations in the past, especially on how to make sure other road users, like motorcyclists, remain safe. I do believe these vehicles can be made safe, but there are certainly limitations on where and under what parameters they can be tested in order to become better and have a greater chance to deploy broadly. Self-driving vehicles need to collect data, lots of data, to improve. I think this is analogous to other areas that Chair Bilirakis discussed in his opening statement. Good actors can use the data they collect to improve products and services, increase cybersecurity, or make other improvements that they think are reasonable. Can you explain the harms that could arise if we limit this type of improvement and innovation?

Raffi Krikorian:

Thank you for that question. If the question is what happens if we don't do enough data collection in order to make these things safe: then we won't have enough data to power our algorithms, we won't have enough data to power our simulations, and therefore we won't understand all the different complexities of what we might encounter on the road. We used to collect terabytes of information at Uber for every hour that we drove on the road, so we could properly analyze it after the fact and make sure that we were accounting for every single possible edge case that we might see on the road. A toddler running across the road happens maybe once in a million miles, but that one in a million miles is incredibly important. So if we don't capture all that data, we have major issues on the simulation and data validation side. I will say, though, there are ways to capture this data and preserve privacy at the same time. You could be capturing data and scrubbing faces before it hits disk. You could be doing all these things that allow us to still get what we need from the information without revealing, say, where a person is going, or becoming a mass surveillance mechanism roving the roads.
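
Krikorian's face-scrubbing example can be sketched concretely. The illustrative snippet below, not Uber's actual pipeline, uses OpenCV's bundled Haar cascade face detector to blur detected faces in each frame before the frame is ever written to storage; a production system would use a far more accurate detector and handle missed detections conservatively.

    import cv2

    # Haar cascade face detector that ships with OpenCV
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def scrub_faces(frame):
        """Blur detected faces in a BGR frame before it is persisted."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame

    # In a hypothetical capture loop, scrub before writing:
    #   writer.write(scrub_faces(frame))

The design point is where the scrubbing happens: applying it before the data "hits disk" means raw faces never enter the stored training set at all.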

Rep. Tim Walberg (R-MI):

Thank you. My time has expired. I yield back.

Rep. Gus Bilirakis (R-FL):

Thank you. I recognize Representative Kelly for her five minutes of questioning.

Rep. Robin Kelly (D-IL):

Before I start, I wanted to yield a minute to Representative Cárdenas.

Rep. Tony Cárdenas (D-CA):

Thank you for yielding, and thank you, Mr. Chairman and ranking member, for having this important hearing. The advancement of artificial intelligence brings massive opportunity for the United States and the world. Through the development of AI, we will see unprecedented progress in research and innovation, improvements in existing industries, and the creation of entirely new ones. However, there are negative aspects as well that we could actually make better with legislation through Congress. For example, artists deserve to be compensated when their work is used. And importantly, they should not be competing in marketplaces with products that are artificially generated without those AI-generated products being clearly labeled. This can and will drastically change the workplace, not only throughout America but throughout the world. And I think that we can do a good job of legislating to make sure that we mitigate as much as we can. And with that, thank you so much for yielding time, Ms. Kelly.

Rep. Robin Kelly (D-IL):

Thank you, Chair Bilirakis and Ranking Member Schakowsky, for holding this important hearing. As many of you know, a few years ago I had the pleasure of working with former Congressman Will Hurd of Texas, in partnership with the Bipartisan Policy Center, to produce four white papers related to a national artificial intelligence, or AI, strategy. As such, I've spent the past few years following the development of these technologies and systems. And I agree that AI has great potential to create new opportunities and greatly improve the lives of Illinoisans and all Americans. But we have already seen how ethical challenges that previously existed, like bias, privacy, and power asymmetries, evolve and can be greatly exacerbated by the emerging use of AI technologies and systems. For these reasons, I strongly believe the issue of civil rights and liberties must be front and center in discussions about the development, deployment, and oversight of AI technologies. We must protect against the potential for these AI technologies and systems to harm Americans and reduce the ability of all communities to participate in the digital transformation of the economy.

Ms. Kak, the ADPPA has strong provisions regarding the need for data minimization, which I strongly support. Some critics have suggested that the principle of data minimization would hamper US companies' ability to develop and deploy AI systems, leaving them at a competitive disadvantage to companies based in other countries. Do you agree with these concerns, or do you believe American companies can deploy AI while also having strong federal data privacy regulations?

Amba Kak:

Thank you, Ms. Kelly. I disagree with those statements, because they're based on a misleading caricature of data minimization itself. This caricature presents data minimization as somehow stopping access to data wholesale, whereas the truth is very far from that. We're simply setting guardrails on permissible purposes, and with data minimization we're providing an antidote to the incentive, supercharged by AI, to hoover up as much data about users as possible and store it for as long as possible. Now, another way to put this is that in America we want to be incentivizing the right kind of innovation. The whole premise of the AI race against China is that democratic AI needs to beat out authoritarian AI, and the way in which we do this is by heightening the contradiction there. The way in which we establish US global leadership on AI is by setting the precedent for what democratic AI looks like. And I think a strong privacy law is a very strong step in that direction.

Rep. Robin Kelly (D-IL):

In your written testimony, you talk about regulating the collection of certain kinds of sensitive material. How do the Illinois Biometric Information Privacy Act and other similar state laws limiting the use of biometric data protect Americans' privacy?

Amba Kak:

So the Illinois BIPA is one of our favorite examples of a very strong data minimization mandate, for two reasons. Although it has the lever of consent, and that's very important, and it sets a high consent bar, it is at the same time drawing a bright line and saying no company should be able to profit from our biometric data. And it is because that line is drawn so very clearly in the sand that it has led to such successful litigation, most recently the fact that Clearview AI has now been permanently banned from selling its database of millions of our faces to private entities for profit.

Rep. Robin Kelly (D-IL):

And Ms. Espinel, would you please elaborate on why robust data protection is so critical?

Victoria Espinel:

Well, I think as AI becomes more powerful, privacy protection becomes more important. So we've already had a conversation about the risks: deepfakes, hackers' access to consumer data. These concerns are real; they are happening now. That makes the need for federal privacy legislation even more urgent. But I would also say we need legislation on high-risk uses of AI as well, and I commend the committee for the bill that you developed last year that addresses both of those issues.

Rep. Robin Kelly (D-IL):

Thank you so much. I yield back.

Rep. Gus Bilirakis (R-FL):

Okay, the gentlelady yields back, and now I'll recognize the representative from South Carolina, Mr. Duncan, for his five minutes of questioning.

Rep. Jeff Duncan (R-SC):

Thank you, Mr. Chairman, and a very timely hearing. We may have to tap into AI to figure out how to elect a Speaker of the House. We're going to have a hearing tomorrow on AI and the energy sector, as chair of the energy subcommittee, so I look forward to that. On July 13th, 2023, the People's Republic of China issued final measures aimed at regulating generative AI services, such as ChatGPT, offered to the Chinese public. Under these provisions, content providers must uphold the integrity of state power and ensure the development of products aligning with the socialist agenda of the Chinese Communist Party. These actions, along with censorship and the CCP's growing control of the private sector, have stifled domestic growth and innovation. While this committee recognizes the need for a framework to address AI's potential risks and to ensure responsible development and deployment, how do we also ensure that AI systems align with our own democratic values while promoting a fair and open regulatory environment? So, for Ms. Espinel: you've talked about the different roles of companies in privacy and AI legislation. Tell me more about those roles and why legislation should distinguish between them.

Victoria Espinel:

So first I'll say I think it's very possible to have regulation that leads to responsible AI and takes a different approach than other countries', and I think the United States being a leader on a regulatory approach to AI that reflects our values is critically important. One of the things that legislation also needs to do is recognize the different roles that you refer to. There are companies that develop AI, and there are companies that use AI; our companies do both, and there should be obligations on both. But those obligations should reflect the fact that whether you're training a system, creating a system, or using a system, you're going to have access to different information. And importantly, you're going to be able to take different kinds of steps to identify risks and then fix those risks. So having obligations for both, and having impact assessments that will give companies a tool to identify what the risks are and then, importantly, go out and fix those risks, is critically important.

Rep. Jeff Duncan (R-SC):

Yeah, thank you for that. Many privacy laws around the world, and in the US the statutes governing privacy and security of financial and health information, distinguish between a controller and a processor in order to recognize the different roles they play in the ecosystem and tailor responsibilities accordingly. Why should Congress support maintaining this distinction in federal privacy legislation? That question's for you, too.

Victoria Espinel:

For the same reason. If you are a grocery store and you are collecting data about your consumers, you should be responsible for making decisions about how to limit use of that data. If you're a service provider that is processing that data, you often will not have access to it, and we don't want service providers to have access to it; that would undermine privacy. So that's a practical example of why those distinctions, which this committee has recognized and which have been recognized, I think, in 126 privacy laws around the world, are important. And so I commend you for the work on that.

Jon Leibowitz:

And if I could add just one small point to that: I agree with everything Ms. Espinel said, but she works for an association of the best companies, and they are. Sometimes, if you don't have a standard or a floor, then the companies that aren't the best companies, and sometimes even the companies that are almost the best companies, go down to the bottom because they're at a competitive disadvantage. And so at the FTC, we went after a lot of scammers and companies that engaged in deception, and others would come into my office, and they weren't bad companies, but they would say: yeah, I would like to protect consumer data this much, but if I did that, we couldn't earn money, because everybody else is at a lower level. And that's why you need a privacy baseline; that's one reason.

Rep. Jeff Duncan (R-SC):

I've heard concerns about AI-generated content taking and using an individual's image and voice without acquiring consent. While this committee has been focused on privacy and data security for some time, we've also focused on NIL and how it impacts people such as college athletes. To that end, I'm curious if you see issues beyond privacy, like property rights protection, image protection, whatever, as an essential part of what Congress should be addressing when tackling AI policy. And I don't care who answers it; we've got 45 seconds.

Victoria Espinel:

Well, I think there are important issues that are raised here. I think one of the things that Congress could think about is whether or not a right of publicity at a federal level would be helpful here. There are rights of publicity in state law, but not all states have those laws, and there's not one federal standard for that. So a suggestion would be that Congress consider creating a federal right of publicity to address some of those concerns.

Rep. Jeff Duncan (R-SC):

I mean, Mr. Chairman, I've seen some videos recently where someone's image was taken and AI generated content of that person saying something that they never said, and how that could be used in the political realm, or against adversaries for blackmail, among other things, even putting college athletes and students in bad positions where they've got to say, that wasn't me. It looked real, and it's scary. And with that, we're always concerned about the privacy of American citizens.

Rep. Gus Bilirakis (R-FL):

Thanks for bringing that up. Appreciate it. Mr. Gregg, you want to respond to that at all?

Clark Gregg:

Oh good. I did.

Rep. Gus Bilirakis (R-FL):

Briefly. Yeah, please, very briefly. Yeah? No? Okay. Alright. Now I'll recognize my good friend from the state of Florida, Mr. Soto, for his five minutes of questioning. Thank you.

Rep. Darren Soto (D-FL):

Thank you, Mr. Chairman, and thank you to our witnesses for being here and having patience as we had to interrupt the hearing for a little while. We know, whether it's deepfakes, or ChatGPT, or looking at fraud by impersonation, or advanced manufacturing, that AI is awe-inspiring, but it also can be disruptive. We see it being particularly disruptive in professional services and entertainment, in various different parts of our economy. Mr. Gregg, first I want to unequivocally announce my support for those SAG-AFTRA members who are striking for better wages, benefits, and workplace rights. We have many of your members in Florida and really appreciate it. I represent the Orlando area; we have many artists working in our theme parks, as actors, as musicians, in production. And I know that you've already mentioned AI is being used increasingly to replicate well-known actors and things of that nature. It'd be great to hear how you think it affects those who are in production, all those workers who are helping to produce movies, and minor characters, and extras, the vast majority of folks that are involved in the SAG-AFTRA union.

Clark Gregg:

Thank you so much for your support for our union's activities. The question, if I understand it, is how AI and this issue affect the broader rank and file, the crews and the, I don't know, the not-movie-stars, et cetera, who work in our business. For us, as we were just talking about moments ago, this is not a future concern; this is already happening now. As I said, the move to streaming felt like a test case for us, because it was monetized in a different way that was more advantageous to the corporations and disadvantaged the rank-and-file actors and production people. We already saw a huge shift down for the workforce. The median income of writers went down 14% during a time when, obviously, the cost of living has gone up, and the same is true of the entire rank and file of SAG-AFTRA, and that's before this strike. A lot of people thought it was just, oh, George Clooney or whoever's got a lot of money.

But what's been able to be communicated, in my opinion, is that the vast majority of people, whether they're crew members or actors or writers, are in the middle. There's a reason it's called the middle: because everyone's there. And 26% of actors, at any given time in any given year, make the minimum they need to just get basic health insurance. So when you start to take our image, our work, our likeness, and turn that into product that's being made by counterfeits, by bad actors in terms of foreign entities and counterfeits, you're taking away what's left of a pie that's already shrinking in the economy as it stands.

Rep. Darren Soto (D-FL):

Thank you so much. Ms. Espinel, University of Central Florida has a world-class digital twin program. We work in everything from helping doctors with surgery prep to training firefighters and cops to improving efficiency of factories and distribution centers. How does the lack of AI legislation right now limit innovation in digital twin technology and other simulation and training types of technology?

Victoria Espinel:

There are a lot of exciting things that are happening in digital twins. You mentioned some of them: surgeons using them, urban planners using them so they can figure out what the impact of a design is going to be, reduce costs, increase sustainability. But I agree with what I think is your premise, that passing AI legislation will help increase adoption. I think it will help increase trust in the technology and increase adoption of AI in ways that would be beneficial to our society, if it is done responsibly.

Rep. Darren Soto (D-FL):

And why do you think that?

Victoria Espinel:

Because I think having clarity and predictability about what the rules are, either in terms of privacy protections or in terms of what companies should be doing when there are high-risk uses of artificial intelligence, will give companies the clarity and predictability that they need to make investments. It will give consumers trust that the technology is being used in a way that is responsible. All of that, I think, will lead to greater adoption and then greater benefits from the positive tools that we're seeing.

Rep. Darren Soto (D-FL):

Thank you so much, Ms. Espinel. Mr. Chairman, I've seen them scan people's bodies and practice surgery before they even start. I've seen logistics centers improved by digital twins. We've seen firefighters and cops trained on how to approach a major disaster, like a block-long fire, all because of the work that is being done in our area. So we want to continue to make it fertile ground for this innovation to continue. Thanks, and I yield back.

Rep. Gus Bilirakis (R-FL):

Thank you very much. The gentleman yields back, and now Dr. Dunn from Florida, a good friend of mine; I know he has some great questions. You are recognized, sir, for five minutes.

Rep. Neal Dunn (R-FL):

Thank you very much, Mr. Chairman. I think there have been some great points made here by my colleagues and, of course, the panel. Clearly we want responsible data privacy protection as AI moves forward, but it's also critical that we maintain a competitive edge, if you will, in acquiring and developing this technology in America. AI has obvious major applications in defense and national security systems. In the global economy, we're presented with the challenge of preserving free markets while simultaneously protecting trade secrets, individual privacy, and of course critical technology. Recently the Commerce Department issued rules that require US chip makers to obtain a license in order to export AI chips. I believe this is on the right track. Protecting the United States from global adversaries is a bipartisan, non-negotiable issue. China's been using AI in its national strategy for years.

As early as 2017, China announced a national AI strategy that described AI as a new focus of international competition, and that was 2017, six years ago. This begs the question of what incentives we should be considering to ensure that our companies won't sell products to China in a manner that deteriorates not only our competitive advantage but our national security. I have a July article from CSIS, which reads: the reality of Chinese military purchases is not up for debate; it was openly published in unclassified Chinese military procurement contracts. Just between April and November of 2020, it followed over 21,000 such contracts, all of which specified that American chips had to be used; not one single contract specified purchasing Chinese chips. So corporate profits and shareholder earnings in America are a clear motivation for US-based AI chip manufacturers to export AI technology, even to communist China. But we need corporate America's cooperation on this, and it's an extremely dangerous area. Mr. Leibowitz, I know you have a lot of experience at the FTC. Do you think these export controls, as they currently exist, go far enough, or should they be tweaked?

Jon Leibowitz:

Well, I'm no expert on export controls, but I agree with everything you've said, and it makes sense to keep exploring this approach. I would say also that, when it comes to leadership on protecting consumer privacy and ensuring the proper guardrails on AI, it would be much better if our companies didn't go looking to Brussels for rules but came to the United States, and we set our own rules that are helpful to American corporations.

Rep. Neal Dunn (R-FL):

Okay, we look to you, and all of our panel members, for ideas on these things. On another note, Bank of America predicted that by 2030, AI will be driving 16 trillion, with a T, dollars' worth of economic activity annually. That just impacts every sector; that's a vast opportunity for our domestic markets. But we do want to proceed intelligently. The sheer size of the big tech companies often gives them an advantage; you alluded to that earlier, Mr. Leibowitz. With all your experience there, do you think large US corporations have an undue advantage in the market?

Jon Leibowitz:

I think large corporations sometimes do have an undue advantage in the market. We looked at this at the FTC, and Congress has looked at this: they do create barriers to entry for small and medium-sized companies that could be their competitors. I do think when you have a well-intentioned but not well-executed law, like, for example, the GDPR, you exacerbate those problems. And so one of the things that I think is very positive about the legislation that came out of this committee last year is that you strike a proper balance between making sure that consumers are protected and making sure that companies can comply.

Rep. Neal Dunn (R-FL):

I thank you very much for those comments, and I do hope that you'll help us keep our balance. Mr. Chairman, thank you.

Rep. Gus Bilirakis (R-FL):

Thank you. The gentleman yields back. Now I recognize Ms. Clarke, excuse me, from the great state of New York.

Rep. Yvette Clarke (D-NY):

Thank you very much, Mr. Chairman, and I thank our Ranking Member Schakowsky for holding this hearing today. I'd also like to thank our witnesses, first of all for your indulgence, for being here to testify on such a very important topic. I'd also like to take a moment to recognize the context in which this hearing is being held. Right now, the House is essentially paralyzed. Even if this committee were to finally approve bipartisan privacy and AI regulations, the House couldn't even vote on it. Republicans continue to demonstrate their inability not only to govern, but to even figure out who will lead them, all while we are marching closer to another Republican shutdown. Democrats remain committed to passing bipartisan legislation that meets the moment, whether that's a comprehensive privacy law or addressing the war in Ukraine and the escalating Israel-Hamas conflict. I believe, as many of my colleagues do, that one of the best ways to protect consumers and promote responsible use of AI is by passing comprehensive data privacy legislation. Having said that, Ms. Espinel, in your testimony you reference how impact assessments are already being used across the industry. What are AI companies currently doing to evaluate risks created by their systems before and after the systems are deployed?

Victoria Espinel:

Thank you very much, and thank you for your long leadership on these issues. So our companies, I represent the enterprise-facing part of the tech industry, are taking quite a few steps. Sometimes that is testing their models, including doing red teaming. Often that is assessing the quality of the data that's going into them. But the specific steps that they're taking often fall into this framework of impact assessments that you refer to. And having developers of AI and deployers of AI do impact assessments at every step of the process is very important to make sure that companies are acting responsibly and are not cutting corners. So I like to describe the impact assessment as a report card that measures the intent of the system, whether or not it's being used correctly, and whether or not the data that has gone into it is of the quality it needs to be. And then I think what is important is that an impact assessment be part of a larger risk management program, so that if the report card comes back and it shows that there are problems, the companies have steps that they can take to address them. And the last thing I would say is that companies should publicly certify that they have done so.
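
One way to picture the "report card" structure Espinel describes is as a structured record that a developer or deployer fills out and certifies. The fields below are hypothetical, invented for illustration, and are not drawn from the ADPPA or any specific framework discussed at the hearing.

    from dataclasses import dataclass, field

    @dataclass
    class ImpactAssessment:
        """Hypothetical report card for an AI system (illustrative fields only)."""
        system_name: str
        role: str                       # "developer" or "deployer"
        intended_purpose: str
        prohibited_uses: list[str]      # what the system should not be used for
        data_quality_notes: str
        identified_risks: list[str] = field(default_factory=list)
        mitigations: list[str] = field(default_factory=list)
        publicly_certified: bool = False

        def certify(self) -> None:
            """Refuse to certify while identified risks outnumber mitigations."""
            if len(self.mitigations) < len(self.identified_risks):
                raise ValueError("unmitigated risks remain; cannot certify")
            self.publicly_certified = True

The shape mirrors the testimony: developers and deployers each fill out their own assessment, and certification only follows once the risk management step has actually addressed what the report card surfaced.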

Rep. Yvette Clarke (D-NY):

Can we ensure that this is already current industry practice, or are we not at that stage?

Victoria Espinel:

It's a practice that many of the companies that I represent are taking. But thank you for raising that, because I think what is really important is that we make it a law, not just a practice: require that companies do impact assessments and publicly certify that they have done so.

Rep. Yvette Clarke (D-NY):

Very well. In cases of high-risk AI decisions, where life-altering opportunities are at stake, whether in healthcare, education, employment, or housing, or where untested and biased systems can do the most harm, it is essential for companies to evaluate and mitigate the potential risks of a system before it is released. We need strong rules that prevent companies from releasing systems that replicate and amplify the harmful biases in our society. I was so glad to see so many of my provisions from the Algorithmic Accountability Act included in the ADPPA last Congress. Ms. Kak, based on your testimony, I believe you agree that impact assessments are essential to ensure algorithms and AI systems do not perpetuate and amplify bias. What are the most critical parts of such an impact assessment?

Amba Kak:

Thank you so much, Ms. Clarke. Speaking of report cards: while we are absolutely in favor of impact assessments, and companies should of course be evaluating risks, we're worried about a situation where companies are essentially grading their own homework. So for these impact assessments to have teeth, and for them not to devolve into a superficial checkbox exercise, we need to make sure that there is independent, third-party scrutiny of these evaluations. The question of when these impact evaluations happen is crucial: they need to happen before these systems are publicly released, not just during and after, and the events of the last few months only emphasize that. And finally, we need to have consequences associated with any harms that are uncovered through these impact assessments, including, crucially, no path dependency toward going ahead. One of the options on the table with an impact assessment needs to be to abandon the system wholesale.

Rep. Yvette Clarke (D-NY):

So should performance metrics be part of the algorithmic impact assessment? And if so, what would those look like, and who should have input on and access to those performance metrics?

Amba Kak:

Absolutely. Ms. Clarke, I don't think I can answer what the performance metrics should be in the next 15 seconds, but what I will say is that these performance metrics cannot be set by industry themselves. And in fact, there is a very high risk of industry capture when it comes to auditing standards. We need to make sure that the terms of the debate are set by the public and not the industry.

Rep. Yvette Clarke (D-NY):

Thank you very much, Mr. Chairman. I yield back.

Rep. Gus Bilirakis (R-FL):

Thank you. Thanks for yielding back. And now we'll have Mr. Allen from the great state of Georgia, you're recognized for five minutes of questioning.

Rep. Rick Allen (R-GA):

Thank you, Chair Bilirakis, for convening this hearing, and I want to thank the witnesses for your input today. It's been very informative, and we've got a lot to do. Over the past year, we've witnessed a remarkable surge in the popularity of AI as it has transitioned into the mainstream of America. This transformation owes much of its success to the widespread adoption of large language models. This evolution is not only attributable to the organic expansion of the user base; it is also strongly influenced by publicly traded companies leveraging buzzwords to enhance their stock value. We must develop the ability to discern between marketing exaggerations and actual advancements in development. More than anything, we should use this opportunity to reinforce how important it is for the United States to have a national privacy standard. Mr. Krikorian, what data are LLMs usually trained on?

Raffi Krikorian:

Thank you for that question. LLMs generally are trained on large corpora of data. Now, the question is where they get that data from. Organizations such as OpenAI have used public crawl information, training across the entirety of the internet, or a decent subset of the internet that's been publicly available to them. Other organizations, such as Anthropic and others, do a little bit better work when it comes to curating the data coming in, under the notion of garbage in, garbage out: making sure that good data goes in so that you get good data out. But the training set is usually defined purely by the developers themselves, of their own choosing.
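
The curation step Krikorian contrasts with raw web crawls can be sketched in miniature. The toy filter below, an assumption-laden illustration rather than any lab's actual pipeline, applies exact-duplicate removal and a crude length heuristic; real pre-training pipelines add near-duplicate detection, language identification, and learned quality classifiers on top.

    def curate_corpus(documents: list[str]) -> list[str]:
        """Toy pre-training filter: drop very short docs and exact duplicates."""
        seen: set[str] = set()
        kept: list[str] = []
        for doc in documents:
            text = doc.strip()
            if len(text.split()) < 20:   # crude quality heuristic: too short to be useful
                continue
            if text in seen:             # exact-duplicate removal
                continue
            seen.add(text)
            kept.append(text)
        return kept

Even this trivial version shows why curation is a policy lever: every rule in the filter is a choice by the developer about what the model will and will not learn from.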

Rep. Rick Allen (R-GA):

Do LLM companies pay publishers for using their data to train their models currently?

Raffi Krikorian:

I'm not a complete expert on this, but there is currently a lot of debate. Having advised a bunch of publishers, such as The Atlantic and others, I can say there is no compensation currently going back to those publishers. Publishers have taken recourse in blocking how scrapers can access their data. But currently, as I understand it, no.

Rep. Rick Allen (R-GA):

Mr. Leibowitz, should the owners of LLMs be required to compensate publishers for data their models are trained with?

Jon Leibowitz:

Well, I certainly think as you move forward on AI legislation, that should be something that you explore. The perspective of the FTC is always protecting consumers, and certainly consumers need to be protected from abuses in AI. But I also worked for the film industry for a brief period of time. And of course, whether they're authors, or journalists, and my wife is a journalist, or whether they are creators or artists, people deserve some degree of compensation. So I agree with the premise of the question, for sure.

Rep. Rick Allen (R-GA):

Does the FTC already have authority to require these companies to compensate the publishers for using their data?

Jon Leibowitz:

Well, no, the FTC doesn't. I would say it has about 80% of the authority it needs to protect consumers from fraud, deception, and unfairness. But the Supreme Court took away its equitable relief authority, so it no longer has the ability, for the most part, to get equitable relief to injured victims or to disgorge profits from corporations that violate the FTC Act. Most companies try to be on the right side of that, but that would be one thing. And I know your committee is working on it: giving the FTC equitable relief authority is an important element of it being an effective law enforcement agency.

Rep. Rick Allen (R-GA):

Thank you. Ms. Kak, should transparency about publishers' content and training datasets be part of the conversation about ethical AI?

Amba Kak:

Absolutely. The landscape we're operating within today is one where we don't have basic information about what datasets these models were trained on, and what practices were undertaken to prevent risks. And so I think the start of any conversation, and basic consumer AI literacy, demands that we have answers to these questions before we can move further. And I think a data privacy law would be a strong step in that direction.

Rep. Rick Allen (R-GA):

Okay, I've got about 30 seconds left. A key issue associated with generative AI systems is the risk they pose of proliferating harmful content, including fake news, misinformation, disinformation, and entrenching bias against conservatives. Since I'm out of time, if you would submit this for the record, and this is for all of our panelists today: do you think that one guardrail that can be attached to generative AI tools is requiring that outputs contain clear, prominent sources, so that consumers can evaluate an output's trustworthiness? If you would respond to that in writing. I'm out of time, and Chairman, I yield back.

Jon Leibowitz:

But I would say yes.

Rep. Gus Bilirakis (R-FL):

Okay. Thank you very much, appreciate it. And the gentleman yields back. Now I'll recognize Mr. Fulcher for his five minutes of questioning.

Rep. Russ Fulcher (R-ID):

Thank you, Mr. Chairman. And to the panelists, thank you for your time and your expertise in sharing with us today. You probably are aware that some of us are bouncing in and out with multiple committees, so some of these could be repeat questions. If that's the case, please be forgiving, and know that we try not to cover ground twice, but sometimes it just happens. Mr. Gregg, I'm not familiar with your industry that well. In fact, I rarely get a chance to see movies anymore, but occasionally I do. Not long ago, I saw the most recent installment of the Indiana Jones series, and at the front end of that movie, there's a very young Harrison Ford. I am told that that's an AI-generated deepfake recreation. Whether it was or whether it wasn't, I'm not too worried about Harrison; my feeling is he probably got paid, and he probably got paid pretty well. But that may not be the case with other people in your industry, where there might be some concerns. I'd like to ask you a two-part question. One would be, has that issue reached high priority within your industry? And two, what is the proper role of the federal government when it comes to regulating that activity, that regeneration or recreation activity?

Clark Gregg:

Thank you very much. That's an excellent question. On the first part: I did a movie called Captain Marvel where they wanted to use a younger version of me. I would like to use a younger version of me, but they actually had the capability to do so, and they did it. Sam Jackson and I were both back in the nineties, in a Blockbuster anyway, which was a throwback. And that's weird, but they're able to do it. If you ask me, at times it looked very realistic, at times it didn't. But these are the beginning moments of this; they've been working on it for a while. And the ramifications, as you point out, are terrifying. They're terrifying to us professionally, as I outlined a bit earlier. For example, one of the issues that came up very quickly in the labor action that we're involved in was a request on the part of the corporations that make our content concerning the people called background performers. A background performer, when you see a bad one, you notice; most of the time you don't notice them.

They're excellent. They're professionals, they work long hours, and they bring a whole world to life. They make the least money, and they work the hardest. And what the companies wanted to do was scan these people once and then use them in perpetuity, in whatever movie they wanted to use them in. I think that's a fair jumping-off point: I heard someone say that if you want to know where this is going, look at what gets incentivized. And it's a way that you can remove the human element of this, because it doesn't demand a new contract, it doesn't need insurance benefits. And unfortunately, this also goes to another question, which was how this allows us to still be competitive. What it feels like is that the companies we work for are focused on satisfying growth models and Wall Street. And this technology, especially with AI involved, is advancing so quickly. My dad, who I lost recently, said a great thing. He said, if you don't get the first moment of truth right, you probably won't see the second one. By the time we realize that something terribly artificial has crept into this, and that the quality that made our film and television business the best in the world is gone, it'll be too late to do anything about it.

Rep. Russ Fulcher (R-ID):

That's a very good response. Thank you for that. And just because time is running so short, the second part of that question is, do you have any counsel for us? What is the proper role of the federal government in trying to regulate some of that activity?

Clark Gregg:

Thank you. Because of every example I've heard, whether it's about protecting our children or not: I have a 21-year-old daughter, and I've watched her grow up trying to navigate social media and the algorithm, and I think AI is just the algorithm on steroids. Left to the devices of commerce, I don't think we can trust the safety of our children. I don't think we can trust the safety of our performers. We need guardrails, and we need them at the federal level. As we said earlier this morning, this takes place across all boundaries instantaneously.

Rep. Russ Fulcher (R-ID):

Thank you. And very quickly, I'm just about out of time. Mr. Krikorian, you may be the best person to ask this question. The industry Mr. Gregg is in is one thing, and that's terrifying in and of itself to me. Another, more terrifying component is: what if a person, the President of the United States or whoever, is shown or depicted as making statements or even declarations that are not accurate? In the industry, do you have the technology to readily recognize a deepfake or recreation?

Raffi Krikorian:

Thank you for that question. It's an arms race right now. I can literally create a recreation on my laptop in a couple of hours, and someone can maybe detect it if their tools are better. But without things such as watermarking, without ways to understand data provenance and the like, it's an arms race that we're in right now.

Rep. Russ Fulcher (R-ID):

Thank you. And Mr. Chairman, I yield back.

Clark Gregg:

Can I add one last thing, sir?

Rep. Gus Bilirakis (R-FL):

Yes, please.

Clark Gregg:

What I failed to answer a second ago: what our union is asking for is consideration, compensation, and consent, the rights you have as an individual and a performer. Thank you.

Rep. Gus Bilirakis (R-FL):

In that regard, Mr. Gregg, when you talked about the 1990s deepfake, or whatever you want to call it, in that particular movie: did you give them permission to use the 1990s image as opposed to the current image?

Clark Gregg:

Thank you, sir. That's a great question. Yes. In that case, the question that was put to me, and it was a fair question, but I think it's an interesting one to bring up, was: if you want to be in this movie, then we'd like to de-age you. We're going to put some spots on you, we're going to give you, thank God, a little bit more hair, and we're going to put you back in the nineties. Or we can cast someone else. So I think a lot of times performers will be in the position of: if you want to work, you're going to have to go along with these things. And sometimes what we're afraid of is that you'll sign off on something that leaves you unprotected as the scenarios evolve.

Rep. Gus Bilirakis (R-FL):

Yeah, as I said, it was nice. But you want to be compensated for it.

Clark Gregg:

Correct. Exactly right.

Rep. Gus Bilirakis (R-FL):

Thank you. The gentleman yields back, and now I recognize my friend from Florida. We're also very strong University of Florida football fans; she's from Gainesville, Florida. Go Gators. I recognize you for five minutes of questioning.

Rep. Kat Cammack (R-FL):

I appreciate it, Mr. Chairman. As the representative of the Gator Nation, and home to the nation's supercomputer at the University of Florida, this is an interesting topic for us, because we are investing millions and millions and millions of dollars into R&D. And I think that we are a little bit behind the eight ball in terms of how we manage some of the issues we're coming across. I know we've talked at length today about digital twins, so I want to talk a little bit more about the dark side of digital twins. We see the tremendous opportunities for growth, for expediting supply chains, different mechanisms, but can we talk about some of the malicious and possibly deceptive digital twins, and how that might be mitigated, detected, et cetera? I'm going to open this up to you; I'm going to say Ms. Espinel. All right, perfect. Yes.

Victoria Espinel:

So I think digital twins, like other forms of AI, can create significant risks, including some of the ones that you've just highlighted, in high-risk scenarios: scenarios where AI is either being developed or used in a way that is having a consequential impact on someone's health, on their education, on their employment, on their civil rights. In those cases, we believe that you should pass a law that requires companies to do impact assessments, identify those risks, and then mitigate those risks. The bill that you passed out of committee last year does have provisions on impact assessments, and so I commend you for having thought that through. But I think an important element of trying to identify and then eliminate and reduce the risks that you are referring to is requiring all companies to do those types of impact assessments, and then certify that they have done the impact assessment, identified the risks, and mitigated those risks.

Rep. Kat Cammack (R-FL):

You don't think that it should be a third party?

Victoria Espinel:

So I think there's a lot of discussion about that. I do think we need to ensure that this is not a check-the-box exercise (government's very good at that); the whole point is to make sure these assessments work responsibly and effectively. There are some pieces that are missing right now in order to have a system of third-party audits that would work effectively at the moment. For example, there's no accredited body to do third-party audits, and there are no commonly agreed standards. But there are groups that are working on those, and we're looking forward to working with many members of the community as those discussions continue.

Amba Kak:

Can I?

Rep. Kat Cammack (R-FL):

Go ahead.

Amba Kak:

I just had one very quick point, which is that I think we have industry support in general for impact assessments, but some of that support falls away when we say that these impact assessments need to happen before these products are publicly released, and that if companies are unable to mitigate the risks, the products shouldn't be put on the market. I think that's where the rubber hits the road, and why it's very, very important to structure audits, or any assessment tools, so that they really have teeth and are able to introduce reflexivity before the harm has already happened.

Victoria Espinel:

Absolutely. We believe impact assessments should be happening at all stages, including before products are released. Just to confirm, for the companies that I represent.

Jon Leibowitz:

And I do too, and I do think third parties are a good idea, but they have to have teeth, right? They can't be third parties that are just sort of reinforcing the views of the corporation. We put Facebook under order when I was at the Commission, and I think Ms. Hun was at the Commission too, and we required a third-party auditor. And what did that lead to? Cambridge Analytica. So you have to be very, very careful, and you have to make sure it has teeth. The other point I would make is that Ms. Espinel's companies are large, important companies that can do mandatory risk assessments, and that's a good thing. But if you don't have a law in place, you might well have a race to the bottom, and a race to the bottom starts with bad actors and brings good actors down.

Victoria Espinel:

Absolutely.

Rep. Kat Cammack (R-FL):

And staying with you, Mr. Leibowitz: we see the benefits of what a digital twin can do in the medical space, supply chains, manufacturing, et cetera. Can you talk about limitations, though?

Jon Leibowitz:

Mitigation?

Rep. Kat Cammack (R-FL):

Limitations.

Jon Leibowitz:

Oh, limitations. Well, I mean, look, current law and current regulatory and enforcement agencies are just not a good match for the problems created by AI. And so you really need to craft a law. I think that the bill that came out of your committee 53 to 2 last Congress is a really good first step. Not the end, but very much the end of the beginning, or the beginning of the end.

Rep. Kat Cammack (R-FL):

Well, I appreciate that. And Mr. Gregg, I've been trying to think of a way to incorporate Agents of SHIELD references into my testimony today; I'm just not that quick on my feet today. That being said, I was recently in India, and as I was catching television in and out of the hotel, there was a 24/7 anchor that was AI-generated. We are heading for some very scary times. As Mr. Altman has said, AI is not designed to be a human experience, and yet we're getting very close to that. And so I appreciate your efforts to be here and speak to the issue of creativity and talent, and some of the issues that we're going to be facing in the entertainment and news media worlds and beyond. So thank you.

Victoria Espinel:

Can I just make one point? I represent the enterprise-facing part of the tech industry. I represent some companies that are big; I represent a number of companies that are quite small. We believe that companies should be able to do impact assessments, whether they're big or small, and should be doing them in high-risk cases of AI. But I also want to say we think there should be a law passed, so that it's not just our companies doing that voluntarily, but all companies in high-risk situations being required to do it.

Rep. Kat Cammack (R-FL):

Not to presume that anyone has bad intentions here, but it does seem a bit like the fox in the henhouse. I see government agencies that have to do their own impact assessments on regulations they're trying to force down people's throats, and those don't ever match the reality. So I think that a third-party system is probably in order.

Jon Leibowitz:

And going back to your earlier point about limitations and mitigation, and what you witnessed in India: that's not commercial, so the FTC has no authority there. And that's another reason why you want to look really closely at creating obligations for companies here. The burden should be on the companies; it shouldn't be on the consumers. That's the problem with notice and consent historically. So I can tell this committee is not kicking the can down the road.

Rep. Gus Bilirakis (R-FL):

Ms. Cammack, I know I can't show any favoritism, even though you're a Florida Gator. I can't. I've got to give the Georgia Bulldog an opportunity. So with that, I'll recognize the gentleman, Mr. Carter from Georgia.

Rep. Earl “Buddy” Carter (R-GA):

Well, thank you, Mr. Chairman. If I may take just a personal moment and compliment my colleague to my left on her choice of red and black today; she looks very attractive in that, and I find it to be very attractive. Thank you. Mistake, mistake, mistake. Thank y'all for being here. Obviously this is extremely important. I will tell you, next to who's going to be the next Speaker, the most prolific subject matter on Capitol Hill right now is AI. That's all anybody's talking about: AI, AI. It is the flavor of the month; it is really the topic of the month. So, very important. Thank you, Mr. Chairman, for allowing me to waive on, and thank you for having this hearing, because we need to get this right. I'm real concerned. With the internet, we've still got a law on the books that was created in 1997, and think of everything that has happened between that time and now on the internet, and yet we're still going by Section 230. We've got to get this right, and we've got to get it right as it evolves. So your help on this is extremely important. Ms. Espinel, I want to ask you: professionally, I'm a pharmacist, so healthcare is extremely important to me, and I've been especially interested in the promise of AI in healthcare. However, there have been questions that have come about as a result of health data. For example, there are reports of chatbots giving medical diagnoses. I'm real concerned about this, and I just want to ask you: what kind of privacy gaps are there as it relates to health data?

Victoria Espinel:

Well, I would just say, as the daughter and the sister of doctors, I share that concern. I think that's a great example of a high-risk use of artificial intelligence: it's impacting someone's health. There are other high-risk uses, but there could not be a better example. And I think when AI like a chatbot is being developed or used in a way that's going to have an impact in a high-risk situation like someone's health, then there need to be limitations on that. There need to be obligations to do impact assessments. And if it's going to create a risk, such as offering a diagnosis inappropriately, then that can't happen, and the companies need to have processes in place where they're identifying that that could happen and then addressing it. And by addressing it, I mean trying to ensure that it does not happen.

Rep. Earl “Buddy” Carter (R-GA):

Mr. Leibowitz, you want to ...

Jon Leibowitz:

Yeah, I was just going to add a couple of points. One is that the benefits of artificial intelligence in healthcare could be enormous in a variety of areas. But there's also, as you point out, a big gap. We have HIPAA, but there's a lot of sensitive information that's outside of HIPAA, right? It's what you're looking at on the internet if you're trying to find a medical diagnosis, and companies shouldn't be collecting that information and marketing it and selling it and transferring it without your permission. So that is a sensitive category of information for which your privacy legislation would require affirmative express consent. In other words, it can't be taken from consumers without them clearly authorizing it.

Rep. Earl “Buddy” Carter (R-GA):

Well, I can see where it can be extremely beneficial, but I can also see where it can be extremely dangerous.

Amba Kak:

Mr. Carter?

Rep. Earl “Buddy” Carter (R-GA):

Yes, please.

Amba Kak:

I have a small point, because I think privacy and competition are actually two sides of the same coin. Another practice we're seeing in the healthcare space is Big Tech companies shoring up medical databases, particularly those that are rich in patient data. So one of the things that we are really concerned about, and why we think there needs to be stricter review of mergers in this space, is that Big Tech is really at a perch where they can shore these up.

Rep. Earl “Buddy” Carter (R-GA):

Bless you. I have been on mergers in this space ever since I've been here. Thank you.

Amba Kak:

No, this is absolutely true.

Rep. Earl “Buddy” Carter (R-GA):

That is something we've got to be concerned about.

Amba Kak:

Absolutely. And FTC Commissioners Slaughter and Bedoya have also sounded the alarm, in the Amazon-One Medical case, that this is just going to lead to Big Tech entrenching their data advantage. And we think data minimization tools are a good antidote there as well.

Rep. Earl “Buddy” Carter (R-GA):

Good, good. Anyone else? This is my area and where I'm really interested. Any other comments, Mr. Gregg?

Clark Gregg:

I haven't even played a doctor on TV really.

Rep. Earl “Buddy” Carter (R-GA):

Well, let me ask you this. What are the effects of misleading and deceptive content on consumer protection and the entertainment industry? I know that's kind of what you're involved in, but what kind of misleading and deceptive content do we need to be aware of and concerned with when it comes to consumer protection?

Clark Gregg:

I think I understand the question: content in terms of fakes?

Rep. Earl “Buddy” Carter (R-GA):

Yes, yes. Because I'm such a trusting person, I don't know what the difference is and whether it's real or it's not.

Clark Gregg:

Yes. Well, fortunately for you, you weren't here earlier when I was talking about the terrifying, inappropriate images that were sent to me, of me doing things that, as far as I know, I've never done and would never do. And that's disturbing to have out there with a daughter who's online. But it's just an example of where my business transcends out into your business: if they can make me appear to be doing something that I would never do, it's very dangerous to think that they could make you, the Speaker, if we ever get one, or the President, say things, especially in really tense moments like we're going through right now with what's going on in the Middle East, where things turn around so quickly. There's so much mistrust that if we tell people something wasn't real, they won't believe that either. What's being eroded is truth. So as I've said, the other thing I think about is that it's taking the soul out of the art form that I perform in, but I also think the fingers of it reach way, way more broadly.

Rep. Earl “Buddy” Carter (R-GA):

And again, Mr. Chairman, thank you for indulging me, but this is why this is so important. We need to get this right, and I think the role that we play in Congress is going to be extremely, extremely important. But the role that the private sector has is going to be even more important. So thank you.

Rep. Gus Bilirakis (R-FL):

Thank you all very much. I just want to thank you; I think we're going to make a good bill better due to your testimony and your input today. Please don't hesitate to come to our offices and offer suggestions. Again, we had a limited amount of time, and I appreciate your patience today. But again, this is a very important issue. As my fellow SEC member knows, I went to the University of Florida. It's sad, but we've got a real chance against you. I don't know, Buddy, I think we've got a shot. We'll see. The way we played last week was very encouraging. But in any case, I want y'all to know our doors are open to you. And so with that, I'm going to ask unanimous consent to insert into the record the documents included on the staff hearing documents list today. Without objection, so ordered. And I remind members that they have 10 business days to submit questions for the record, and I ask the witnesses to respond to the questions promptly. Members should submit their questions by the close of business on November 1st. So, without objection, the subcommittee is adjourned. Extremely informative and very productive. Thank you.
