Transcript: Senate Hearing on Protecting Americans’ Privacy and the AI Accelerant

Gabby Miller / Jul 12, 2024

Senate Committee on Commerce, Science, and Transportation Ranking Member Sen. Ted Cruz and Chairwoman Sen. Maria Cantwell.

On Thursday, July 11, the US Senate Committee on Commerce, Science, and Transportation held a full committee hearing on “The Need to Protect Americans’ Privacy and the AI Accelerant.”

The hearing featured four witnesses:

- Dr. Ryan Calo, professor, University of Washington School of Law, and co-director of the UW Tech Policy Lab
- Amba Kak, co-executive director, AI Now Institute
- Udbhav Tiwari, global product policy, Mozilla
- Morgan Reed, president, ACT | The App Association

Thursday's session touched on growing concerns from both industry and experts about the patchwork of state laws for AI and data privacy that continues to expand in the absence of national standards. These concerns led to a discussion about the American Privacy Rights Act, which was seemingly squashed earlier this month after key civil rights provisions were removed from the data privacy bill, among other changes. There was also a focus on the best approach to establish AI standards and rules, including whether the US should establish an independent AI agency and licensing regime, as opposed to a more risk-based, structural approach to regulation.

Below is a lightly edited transcript of the hearing. Please refer to the official video of the hearing when quoting speakers.


Sen. Maria Cantwell (D-WA):

Good morning. The Senate Committee on Commerce, Science, and Transportation will come to order. I want to thank the witnesses for being here today for testimony on the need to protect Americans' privacy, and AI as an accelerant to the urgency of passing such legislation. I want to welcome Dr. Ryan Calo, University of Washington School of Law and co-director of the University of Washington Tech Policy Lab; Ms. Amba Kak, co-executive director of the AI Now Institute in New York; Mr. Udbhav Tiwari, global product policy for Mozilla, San Francisco; and Mr. Morgan Reed, president of the App Association of Washington, DC. Thank you all for this important hearing. We are here today to talk about the need to protect Americans' privacy and why AI is an accelerant that heightens the need for us to pass legislation soon. Americans' privacy is under attack. We are being surveilled and tracked online, in the real world, and through connected devices, and now, when you add AI, it's like putting fuel on a campfire in the middle of a windstorm.

For example, a Seattle man's car insurance increased by 21% because his Chevy Bolt was collecting detailed information about his driving habits and sharing it with data brokers, who then shared it with his insurance company. The man never knew the car was collecting the data. Data about our military members, including contact information and health conditions, is already available for sale by data brokers for as little as 12 cents. Researchers at Duke University were able to buy such data sets for thousands of our active military personnel. Every year, Americans make millions of calls, texts, and chats to crisis lines seeking help when they are in mental distress. You would expect this information would be kept confidential, but a nonprofit suicide crisis line was sharing data from those conversations with its for-profit affiliate, which was using it to train its AI product.

Just this year, the FTC sued a mobile app developer for tracking consumers' precise location through software it embedded in a grocery list and shopping rewards app. The company used this data to sort consumers into precise audience segments. Consumers whose use of this app helped them remember when to buy peanut butter didn't expect to be profiled and categorized into a precise audience segment like 'parents of preschoolers.' These privacy abuses and millions of others that are happening every day are bad enough, but now AI is an accelerant, and that is the reason why we need to speed up our privacy law. AI is built on data. Lots of it. Tech companies can't get enough to train their AI models: your shopping habits, your favorite videos, who your kids' friends are, all of that. And we're going to hear testimony today from Professor Calo about this issue, about how AI gives the capacity to derive sensitive insights about individuals.

So it is not just the data that is being collected, it is the ability to have sensitive insights about individuals fed into the system. This, as some people have said, and I'm referring to your testimony now, is creating an inference economy that becomes very challenging. That is why I think you also point out, Dr. Calo, that a privacy law helps offset the power of these corporations and why we need to act. I also want to thank Ms. Kak for her testimony, because she is clearly talking about that same corporate power and the unfair and deceptive practices, which we've already given to the FTC as their main authority. But the lack of transparency about what is going on with prompts, and the AI synergy, is that people are no longer just taking personal data and sending us cookie ads. They are taking that and putting it actually into prompt information.

This is a very challenging situation, and I think your question, are we going to allow our personal data to train AI models, is a very important question for our hearing today. We know that they want this data to feed their AI models to make the most amount of money. These incentives are really a race to the bottom, where the most privacy-protective companies are at a competitive disadvantage. Researchers project that if current trends continue, companies training large language models may run out of new publicly available high-quality data to train AI systems as early as 2026. So without a strong privacy law, when the public data runs out, nothing is stopping them from using our private data. I'm very concerned that the ability of AI to collect vast amounts of personal data about individuals and create inferences about them quickly, at very low cost, can be used in harmful ways, like charging consumers different prices for the same product.

I talked to a young developer in my state, and I said, what is going on? And he said, well, I know one country is using AI to basically give it to their businesses. I said, well, why would they do that? Well, because they want to know, when a person calls up for a reservation at a restaurant, how much income they really have. If they don't really have enough money to buy a bottle of wine, they're giving the reservation to someone else. So the notion is that discriminatory practices can already exist with just a little amount of data about consumers. AI in the wrong hands is also, though, a weapon. Deepfake phone scams are already plaguing my state. Scammers used AI to clone voices to defraud consumers by posing as a loved one in need of money. These systems can recreate a person's voice in just minutes, taking the familiar grandparent scam and putting it on steroids.

More alarming, earlier this month the Director of National Intelligence reported that Russian influence actors are planning to covertly use social media to subvert our elections. The ODNI called AI a malign influence accelerant, saying that it was being used to more convincingly tailor video and other content ahead of the November election. Just two days ago, the Department of Justice reported that it dismantled a Russian bot farm intended to sow discord in the United States using AI. Russia created scores of fictitious user profiles on X and generated posts, then used those bots to repost, like, and comment on the posts, further amplifying the original fake posts. So this was possible at tremendous scale given AI. I'm not saying that misinformation might not have existed before and may not have been placed in a chat group, but now, with the use of bots and the AI accelerant, that information can be more broadly disseminated very, very quickly.

So privacy is not a partisan issue. According to the Pew Research Center, the majority of Americans across the political spectrum support more regulation. I believe our most important private data should not be bought or sold without our approval, and tech companies should make sure that they implement these laws and help stop this kind of interference. The legislation that Representative McMorris Rodgers and I have worked on, I think, does just that. And I want to say I very much appreciate the legislation that Senator Blackburn and I will be introducing this morning, called the COPIED Act, which provides much-needed transparency around AI-generated content. The COPIED Act will also put creators, including local journalists, artists, and musicians, back in control of their content with a watermark process that I think is very much needed. I'll now turn to the ranking member, Senator Cruz, for his opening statement.

Sen. Ted Cruz (R-TX):

Thank you, Madam Chair. American prosperity depends upon entrepreneurs. These are ambitious and optimistic men and women who are willing to take risks, pursue their dreams, and try to change the world. They mortgage their own homes; they put everything on the line to build a business that fills an unmet need or does something better than what's offered today. But throughout history, prosperity and human flourishing have been stymied or delayed by governments that impose regulatory policies to address supposed harms but in actuality overstate risk in order to protect incumbent operators, often large and powerful companies that didn't want to compete and that just happened to give big campaign checks to the politicians in power. The United States has mostly chosen a different path.

One where we're a free enterprise system governed by the rule of law that allows Americans to freely pursue their ideas, grow their own businesses, and compete without having to obtain permission from all-knowing bureaucrats. Today's hearing on data privacy and artificial intelligence is a debate about which regulatory path we will take. Do we embrace our proven history, one with entrepreneurial freedom and technological innovation, or will we adopt the European model, where government technocrats get to second-guess and manage perceived risks with economic activity, ultimately creating an environment where only big tech, with its armies of lawyers and lobbyists, exists? Consider this: in 1993, at the dawn of the tech age, the economies of the United States and the European Union were roughly equal in size. Today the American economy is nearly 50% larger than the EU's. The tech boom happened in America in part because Congress and the Clinton administration deliberately took a hands-off approach to the nascent internet.

The result was millions of jobs and a much higher standard of living for Americans. Unfortunately, the Biden administration and many of my colleagues are suggesting the European model for AI, based heavily on hysterical doomsday prophecies, to justify a command-and-control federal regulatory scheme that will cause the United States to lose our technological edge over China. The Biden administration's AI executive actions, as well as many of the AI legislative proposals, call for a new regulatory order that protects giant incumbent operators and discourages innovation with supposedly optional best practices or guidance written by all-knowing bureaucrats, some of whom were recently employed by the same big tech firms they seek to regulate, and some of whom hope to be employed again by those same big tech firms right after they write the rules that benefit them. We already see federal AI regulators and Biden allies talking about the need to stop bias, misinformation, and, quote, discrimination in AI systems and algorithms.

That's code for speech police. If they don't like what you say, they want to silence it. Now, AI can certainly be used for nefarious purposes, just like any other technology, but to address specific harms or issues, we should craft appropriate and targeted responses. For example, Senator Klobuchar and I have introduced the bipartisan TAKE IT DOWN Act, which targets bad actors who use AI to create and publish fake, lifelike, explicit images of real people. Our bill, which is sponsored by many Republican and Democrat members of this committee, would also require big tech to follow a notice-and-takedown process so ordinary Americans who are victimized by these disturbing images can get them offline immediately. The bipartisan TAKE IT DOWN Act is a tailored solution to a real problem on behalf of the teenage girls and others who've been victimized by deepfake explicit imagery. I hope that this committee will soon take up the TAKE IT DOWN Act, pass it, move it to the floor, and get it signed into law.

As I conclude, I'd like to address a related matter, the American Privacy Rights Act, or APRA. I support Congress, not the FTC or any federal agency, but Congress, setting a nationwide data privacy standard. Not only is it good for Americans to be empowered with privacy protections, but it's good for American businesses that desperately need legal certainty given the increasingly complex patchwork of state laws. But our goal shouldn't be to pass any uniform privacy standard, but rather the right standard, one that protects privacy without preventing US technological innovation. I've discussed APRA with Chairwoman McMorris Rodgers and will continue my offer to work with her, but right now this bill is not the solution. It delegates far too much power to unelected commissioners at the FTC. It focuses on algorithmic regulations under the guise of civil rights, which would directly empower the DEI speech police efforts underway at the Biden White House, harming the free speech rights of all Americans. As currently constructed, APRA is more about federal regulatory control of the internet than personal privacy. In the end, it's the giant companies with vast resources that ultimately benefit from bills like APRA, at the expense of small businesses. The path that Congress needs to take is to put individuals in control of the privacy of their own data and give them transparency to make decisions in the marketplace, and I look forward to working with my colleagues to do exactly that.

Sen. Maria Cantwell (D-WA):

Thank you, Senator Cruz. We'll now turn to our panel, starting with Dr. Calo. Thank you so much. We're really proud of the work that the University of Washington has done. Senator Cruz mentioned the innovation economy; I think we have that down in the Northwest, but we also want to make sure we have the important protections that go along with it. So thank you, Dr. Calo, for your presence.

Ryan Calo:

Chair Cantwell, Ranking Member Cruz, and members of the committee, thank you for the opportunity to share my research and views on this important topic. I'm a law professor and information scientist at the University of Washington, where I co-founded the Tech Policy Lab and the Center for an Informed Public. The views I express today are entirely my own. Americans are not receiving the privacy protections they demand or deserve. Not when Cambridge Analytica tricked them into revealing the personal details of 87 million people through a poorly vetted Facebook app. Not when car companies shared their driving habits with insurance companies without their consent, sometimes leading to higher premiums. As the senator mentioned, privacy rules are long overdue, but the acceleration of artificial intelligence in recent years threatens to turn a bad situation into a dire one. AI exacerbates consumer privacy concerns in at least three ways. First, AI fuels an insatiable demand for consumer data. Sources of data include what is available online, which incentivizes companies to scour and scrape every corner of the internet, as well as companies' own internal data, which incentivizes them to collect as much data as possible and store it indefinitely. AI's insatiable appetite for data alone deeply exacerbates the American consumer privacy crisis.

Second, AI is increasingly able to derive the intimate from the available. Many AI techniques boil down to recognizing patterns in large data sets. Even so-called generative AI works by guessing the next word, pixel, or sound in order to produce new text, art, or music. Companies increasingly leverage this capability of AI to derive sensitive insights about individual consumers from seemingly innocuous information. The famous detective Sherlock Holmes, with the power to deduce who did it by observing a string of facts most people would overlook as irrelevant, is the stuff of literary fiction. But companies really can determine who is pregnant based on subtle changes to their shopping habits, as Target reportedly did in 2012. And finally, AI deepens the asymmetries of information and power between consumers and companies that consumer protection law exists to arrest. The American consumer is a mediated consumer. We increasingly work, play, and shop through digital technology, and a mediated consumer is a vulnerable one.

Our market choices, what we see, choose, and click, are increasingly scripted and arranged in advance. Companies have an incentive to use what they know about individual and collective psychology, plus the power of design, to extract as much money and attention as they can from everyone else. The question is not whether America should have rules governing privacy. The question is why we still do not. Few believe that the internet, social media, or AI are ideal as configured. A recent survey by the Pew Research Center suggests that an astonishing 81% of Americans assume that companies will use AI in ways with which they are not comfortable. 81%. Just for context, something between 30 and 40% of Americans identify as Taylor Swift fans. Meanwhile, the EU, among our largest trading partners, refuses to certify America as adequate on privacy and does not allow consumer data to flow freely between our economies.

What is the point of American innovation if no one trusts our inventions? More and more individual states, from California to Colorado, Texas to Washington, are passing privacy or AI laws to address their residents' concerns. Congress can and should look to such laws as a model, yet it would be unwise to leave privacy legislation entirely to the states. The internet, social media, and AI are global phenomena. They do not respect state boundaries, and the prospect that some states will pass privacy rules is small comfort to the millions of Americans who reside in states that have not. Congress should pass comprehensive privacy legislation that protects American consumers, reassures our trading partners, and gives clear, achievable guidelines to industry. Data minimization rules, which obligate companies to limit the data they collect and maintain about consumers, could help address AI's insatiable appetites. Broader definitions of covered data could clarify that inferring sensitive information about consumers carries the same obligations as collecting it, and rules against data misuse could help address consumer vulnerability in the face of a growing asymmetry. Thank you for the opportunity to testify before the committee. I look forward to a robust discussion.

Sen. Maria Cantwell (D-WA):

Thank you, Dr. Calo. Ms. Kak, thank you so much. Welcome. We look forward to your testimony.

Amba Kak:

Thank you, Chair, Ranking Member Cruz, and esteemed members. In a world with a data privacy mandate, AI developers would need to make data choices that deliberately prevent discriminatory outcomes. So we shouldn't be surprised when we see that women are seeing far fewer ads for high-paying jobs on Google ads. That is a hundred percent a feature of data decisions that have already been made upstream. I mean, the good news here is that these are avoidable problems, and it's not just in scope for a data privacy law, it's integral to protecting people from the most serious abuses of our data. And where specific AI practices have inherent, well-established harms, so-called "emotion recognition" systems that lack any scientific validity, or pernicious forms of targeted ads, the law could hold them entirely off limits.

Finally, and here’s the thing about large-scale AI: it is not only computationally, ecologically, and data intensive, it is also very, very expensive to develop and run these systems. These eye-watering costs will need a path to profit. By all accounts, though, a viable business model still remains elusive. It is precisely in this kind of environment, with a few incumbent firms feeling the pressure to turn a profit, that predatory business models tend to emerge. 

Meanwhile, new research suggests LLMs are capable of hyper personalized inferences about us even from the most general prompts. You don’t need to be clairvoyant to see that all roads might well be leading us right back to the surveillance advertising business model, even for generative AI, with all its attendant pathologies.

To conclude, there is nothing about the current trajectory of AI that is inevitable. As a democracy, the US has the opportunity to take global leadership in shaping this next era of tech so that it reflects public interest, not just the bottom lines of a handful of companies. This is a moment for action.

Udbhav Tiwari:

[break in official stream] –AI amplifying privacy violations. We know that online manipulation, targeted scams, and online surveillance are not new risks in our digital lives. However, AI technologies can supercharge such harms, creating risks like profiling and manipulation, bias and discrimination, and deepfakes and identity misuse. To mitigate these risks, we need comprehensive federal privacy legislation. This should be accompanied by strong regulatory oversight and continued investments in privacy-enhancing technologies. We must also ensure that AI systems are transparent and accountable, with mechanisms in place to address privacy violations and provide recourse for affected individuals, underpinned by disclosure and accountability. When it comes to AI's potential to impact civil liberties, the risk cannot be overstated. The same technologies that drive innovation can also be used to infringe upon fundamental rights and be used by big tech companies to trample on individual privacy. It is therefore imperative that AI development and deployment are guided by principles that protect civil liberties. This includes safeguarding freedom of expression, preventing unlawful surveillance, and ensuring that AI systems do not perpetuate discrimination or bias. In conclusion, protecting Americans' privacy in the age of AI is a critical challenge that requires comprehensive legislation. As we navigate the complexities of AI and privacy, it is crucial to strike a balance between innovation and protection. Thank you for the opportunity to testify today. I look forward to your questions and to working with you to protect Americans' privacy in the AI era.

Sen. Maria Cantwell (D-WA):

Thank you Mr. Tiwari. Very much appreciate you being here. And Mr. Reed, thank you so much for being here. I'm not trying to ask you a question in advance, but I'm pretty sure you have thousands of members of your organization and I'm pretty sure you have quite a few in the Pacific Northwest, so thank you for being here.

Morgan Reed:

We do, many in the great state of Washington. Chair Cantwell, Ranking Member Cruz, and members of the committee, my name is Morgan Reed, president of ACT | The App Association, a trade association representing small and medium-sized app developers and connected device manufacturers. Thank you for the opportunity to testify today on two significant and linked subjects: privacy and AI. Let me say very clearly that the US absolutely needs a federal privacy law. For years, we have supported the creation of a balanced, bipartisan framework that gives consumers certain protections and businesses clear rules of the road. Instead, what we have now is a global array of mismatched and occasionally conflicting laws, including, here in the US, either 19 or 20 state-level comprehensive privacy laws, depending on who you ask, with more coming every year. To prevent this morass of confusion and paperwork, preemption must be strong and without vague exceptions, and it must include small businesses so that customers can trust their data is being protected when they do business with a company of any size.

Unfortunately, the American Privacy Rights Act, or APRA, falls short of both of these objectives. Carving out small businesses from the definition of covered entities, as APRA does, is a non-starter because it would deny us the benefits of the bill's preemption provisions. Instead, small businesses would be required to comply separately with 19 state laws and, more importantly, the not-yet-passed laws in 31 states, exposing us to costly state-by-state compliance and unreasonably high litigation costs. And it isn't just small tech impacted by this. In today's economy, everyone uses customers' data, even bricks-and-mortar businesses. A local bike shop in Nevada likely has customers coming from Utah, Arizona, California, Colorado, and Idaho. An alert reminding these customers about a tire sale with free shipping, or that it's time to get their bike in for a tuneup, requires at least a passing familiarity with each state's small business carve-out.

In the ultimate irony, APRA may even incentivize small businesses to sell customers' data in order to gain the benefit of preemption. Congress must instead move forward with a framework that incorporates small businesses and creates a path to compliance for them. And this acceptance of small businesses' role in the tech ecosystem becomes even more pronounced when we turn to AI. Mainstream media is all abuzz about big companies like Amazon, Microsoft, Google, and Apple moving into AI. But the reality is small business has been the faster adopter. More than 90% of my members use generative AI tools today, with an average 80% increase in productivity. And our members who develop those AI solutions are more nimble than larger rivals, developing, deploying, and adapting AI tools in weeks rather than months. Their experiences should play a major role in informing policymakers on how any new laws should apply to AI's development and use.

Here are two examples of how our members are using AI in innovative ways. First up is our Iowa-based member SwineTech, which is reshaping the management of hog farms. Farmers use their product, PigFlow, that's really the name, to manage the entire process, using sensors to track the health of thousands of piglets and then market analytics and public information to build AI-powered holistic solutions. But as Paul Simon said, big and fat, pig's supposed to look like that. For our member Metric Mate in Atlanta, the goal is the opposite. This Atlanta-based startup uses a combination of off-the-shelf and custom fitness trackers to help individuals and physical therapists track and refine fitness goals. AI helped Metric Mate respond to the progress being made over time and instantaneously, while their patented tap sensor gives the user the ability to track their workouts and seamlessly transmit data to the Metric Mate app.

These examples show how AI is useful for innovative yet normal activities. So rather than limit AI as a whole, policymakers must target regulation to situations where a substantial risk of concrete harm exists. The risk of AI use on a pig farm should not be treated the same as risks in sectors like healthcare or wellness, and yet both of them might be covered by future laws. Finally, I want to stress the importance of standards to the future of AI. Standards are a valuable way for new innovators to make interoperable products that compete with the biggest market players. As the committee considers bills on the subject, we need NIST to remain a supporter rather than an arbiter of voluntary, industry-led standards. The committee should also be aware of the threat to small business through standard-essential patent abuse, with non-US-based companies holding the crown for the most US patents every year. Federal policy must combat abusive patent licensing in standards by ensuring that licenses are available to any willing licensee, including small businesses, on fair, reasonable, and non-discriminatory terms. If we're not capable of doing that, the next generation of AI standards will not be owned or run through US companies, but through those outside of the US with different perspectives on human rights and our expectations. So thank you for your time, and I look forward to answering your questions.

Sen. Maria Cantwell (D-WA):

Thank you, Mr. Reed. And on that last point, we'll have you expand today, or more generally for the record, on what we need to do about it. I definitely think in the past we have sided too much with the big companies on the patent side and not enough with empowering the inventors, the smaller companies. So I want to make sure we get that part right, in addition to your recommendations; you've made a couple of very good recommendations. Thank you. I'd like to go to a couple of charts here if we could. One: when we first started working on the privacy law, and I'm going to direct this to you, Professor Calo, what got me going was the transfer of wealth to online advertising. I don't think people really quite understood how much of the television, newsprint, radio, and magazine advertising revenue, the entire advertising market, shifted online. Now we're just talking about the internet.

Could you bring that a little closer? Please get that up a little closer. So we are now at 68%. I don't know if people can see that, but we're now at 68% of all spending; two-thirds of all advertising spending has now moved online, with data and information. So that trend's just going to keep continuing. Now, you and I have had many conversations about the effect of that on the news media, on having community voices. In our community in Seattle, KING 5 or the Seattle Times couldn't exist if they had misinformation; they just wouldn't exist. But in the online world, you can have misinformation. There's no corrective force for that, but all the revenue has now gone to the online world. And the second chart describes, I think, a little bit about your testimony that I want to ask a question about, and that is the amount of information that is now being derived about you, that AI has this capacity to derive sensitive insights.

So that trend that I just described, where two-thirds of all advertising revenue, I mean, somebody said data is like the new oil, right? It's just where everybody's going to go and make the money. So that's a lot of money already in that shift over those years that I mentioned on the chart. But now you're saying they're going to take that information and they're going to derive sensitive information about us. Ms. Kak said it's the way your voice sounds. You've described it as having various features. So could you tell me how to protect us against that in the AI model, and why that's so important? And I just want to point out, we're very proud of what the Allen Institute is doing on AI. We think we're the leaders in AI applications, and we're very proud of that, in healthcare, farming, energy. We have an agreement in principle today between the United States and Canada on the Columbia River Treaty. I think water and AI will be a big issue of the future. How do you manage your natural resources for the most effective possible use? So we're all big on the AI implications in the Pacific Northwest, but we're very worried about the capacity to derive sensitive insights, and then, as you mentioned, an insurance company or somebody using that information against you. Could you expound on that, please?

Ryan Calo:

Absolutely. I was talking to my sister, who sits on the board of the Breast Cancer Alliance, about my testimony, and she said, Ryan, just make sure that people know how important it is for AI to be able to spot patterns in medical records to ensure that people get better treatment, for example, for breast cancer. And I agree with that, and I'm also proud of all the work we're doing at the University of Washington and Allen. The problem is that the ability to derive sensitive insights is being used in ways that disadvantage consumers, and they're not able to figure out what's going on and fight back. So for example--

Sen. Maria Cantwell (D-WA):

Thereby driving up costs.

Ryan Calo:

For example. I mean, we know why everything costs $9.99, right? It is because your brain thinks of it as being a little bit further away from $10 than it really is. But the future we're headed to, and even the present, is a situation where you're charged exactly as much as you're willing to pay in the moment, right? Say I'm trying to make dinner for my kids and I'm just desperately trying to find a movie for them to stream that they both can agree on. If Amazon can figure that out, or Apple can figure that out, they can charge me more in the moment when I'm flustered and frustrated, because they can tell, right? If that sounds farfetched, Senator, Uber once experimented with whether or not people would be more willing to pay surge pricing when the batteries were really low on their phones, because they'd be desperate to catch a ride. Amazon itself has gotten into trouble for beginning to charge returning customers more, because they know that they have you in their ecosystem. This is the world of using AI to extract consumer surplus, and it's not a good world, and it's one that data minimization could help address.

Sen. Maria Cantwell (D-WA):

Senator Wicker.

Sen. Roger Wicker (R-MS):

Well, thank you very much, Madam Chair. Mr. Reed, let me go to you. You've testified before on this topic. Where are most of the jobs created in the United States economy? What size business?

Morgan Reed:

Right? So small businesses are the single largest source of new jobs in the United States.

Sen. Roger Wicker (R-MS):

Okay, and so let's talk about how the current situation affects small and medium-sized businesses and how legislation would affect them, positively or negatively. Let's pretend I am a small business owner in the state of Washington or the state of Mississippi, and I use the internet to sell products in multiple states. That would be a very typical situation, would it not?

Morgan Reed:

Yes. In fact, one of the things that I think has been transformative, yet is hard for legislators to understand, is that our smallest members are global actors. My smallest developers are selling products around the world and are developing a customer base around the world. And in many ways, you can think of it this way: maybe there's 5,000 people in your hometown who want to buy your product, but there's 500,000 people who want to buy your product everywhere. So as a small business, the internet allows you to reach all of those customers. The problem with APRA carving out small business is that all of a sudden, a small business that wants to grow from 10,000 customers in the great state of Mississippi to 500,000 customers throughout the United States has to comply.

Sen. Roger Wicker (R-MS):

The law needs to apply evenly to all actors. Let's talk about preemption. If I'm this small business person in the state of Washington and Congress passes a new data privacy law, but the preemption clause has enumerated exceptions for various reasons, how's that going to affect me?

Morgan Reed:

Once again, the people who are best equipped to deal with complex compliance regimes are very large companies that hire large numbers of lawyers. So we really need a compliance regime and a preemption regime that's easy to understand and applicable. My businesses don't have a compliance team; heck, they don't even have a compliance officer. It's probably that the chief dishwasher is also the chief compliance officer. And I think that's an important thing to understand about small businesses: they don't have teams of lawyers.

Sen. Roger Wicker (R-MS):

How about a broad private right of action? How is that going to affect me and my business?

Morgan Reed:

Well, I think it's interesting that you bring it up. When I testified before you on this same topic, we had great discussions about the fact that there may occasionally be a need for a private right of action, but it should be limited in scope. And I think that's more true now than ever before. If we have a privacy regime that exposes small businesses and everyone else to individual private rights of action from multiple states, it sets up small businesses to be the best victim for 'sue and settle.' Nobody wants to go to war with Microsoft, but I can send a small business a 'pay me now' note for $50k. I'm going to go to my local small-town lawyer. I'm going to say, can you fight this? He's going to say it's going to be $150,000 to fight it. So you're going to stroke a check, and you're going to not pay an employee or hire someone with that $50k. So we want to avoid a private right of action that leads to the ability for unscrupulous lawyers to make a killing off of 'sue and settle,' particularly against small businesses, where the cost of fighting is greater than the cost of the check they're asking for.

Sen. Roger Wicker (R-MS):

Okay, we're going to run out of time real quick, but Mr. Calo, on page six of your testimony, you say expecting tech companies, and that would be small and large tech companies, to comply with a patchwork of laws depending on what state a consumer happens to access their services in is unrealistic and wasteful. I hear people say this: let's pass a nationwide data privacy act. But if a state like California wants to really, really protect even better, let's let them do so. Let's give states like that an opt-out ability. Doesn't that start the patchwork all over again?

Ryan Calo:

Senator, it's an excellent question. My view is that in a perfect world, the federal government would set a floor and then the individual states, if they wanted to be more protective of their own residents, would be able to raise that floor for their own citizens. In this particular context, it is very difficult to deploy these large global systems in ways that differentiate depending on what state you're in.

Sen. Roger Wicker (R-MS):

Okay, and I'm sorry, I have to go. Mr. Reed, that's getting back to the patchwork, isn't it? So now you have to comply with two, and then who's going to decide which is more protective?

Morgan Reed:

Absolutely. That's it in a nutshell. Try to figure out what those are.

Sen. Maria Cantwell (D-WA):

I think Mr. Calo just said it's not realistic to have the patchwork. I think his testimony is quite clear. He says you have to have a federal preemption. Yes, I think it does. And I think Mr. Reed, I think--

Sen. Roger Wicker (R-MS):

Okay. Well, Madam Chair, since there are not dozens of us--

Sen. Maria Cantwell (D-WA):

We have one of our colleagues who's waiting, Senator Rosen, who actually has coded before, so I feel like we need to defer to her. So Senator Rosen.

Sen. Jacky Rosen (D-NV):

Well, thank you. Writing that computer code, it was a great experience, a great living, and I loved every minute of it. Now it prepares me for a lot of things I do here today. So thank you for holding this hearing. These issues are so important, and I really appreciate the witnesses. I'm going to talk first a little bit about AI scams, because Americans are generating more data online than ever before. And we know that with advances in AI, data can be used in many ways, depending on the algorithms that are written. And of course, bad actors are going to use AI to generate more believable scams using deepfake cloning and all the pictures. You've seen it. Everyone has seen it everywhere. And these particularly target our seniors, and they target our veterans. And so I'd like to ask Ms. Kak and Professor Calo, both of you: how can enacting a federal data privacy law better protect individuals from these AI-enabled cyber attacks and scams that we know are happening every single day in every community?

Ms. Kak, we'll start with you and then we'll go to the professor.

Amba Kak:

Thank you, Senator. It's interesting, because when ChatGPT was released, there was all of this excitement about whether we were one step, who knows if it's tomorrow or in 50 years, but one step away from these fantastical Terminator-like scenarios. What we were really more concerned about was: had we just seen the creation of the most effective spam generator that history had ever seen? And so I think what is at the heart of these issues is that we are creating these technologies, they're moving to market clearly before they're ready for commercial release, and they're sort of unleashing these diffuse harms, including, as you mentioned, the risk of exacerbating concerns about deceptive and spam content. To your question on how a federal data privacy law would sort of nip these kinds of problems in the bud, I think it would do so in a very structural sense.

It would say that there need to be certain rules of the road. There need to be limits on what companies are able to collect and on the ways in which they're training their models, so that these companies are not creating inaccurate, misleading AI tools which are then being integrated into our most sensitive social domains, whether that's banking or healthcare or hiring. So I think the final thing I'll say, on the kind of spam generation point, is that we have known for the last decade, we have evidence: garbage in, garbage out. So when we see these failures, we should see them not as output failures, but as failures that go back to very crucial data decisions that are made right from the training and development stage of these models all the way through to the output stage. And so the privacy risks--

Sen. Jacky Rosen (D-NV):

Thank you. Thank you. I only have a few minutes, so Professor Calo, if you could be really brief, because I want to get to a question about data ownership. Who owns your data and who has the right to your data? So if you'd be really brief so I could get that question in, I'd surely appreciate it.

Ryan Calo:

I think that we need to empower federal regulators, such as the Federal Trade Commission, to go after all kinds of abuses that involve artificial intelligence, including such scams. Investing in the FTC and its expertise is the correct call, in my opinion.

Sen. Jacky Rosen (D-NV):

Thank you. I appreciate that, because I want to talk about something that everyone talks to me about: data ownership and AI transparency. Nevadans today, people across this country, do not have the right to access, correct, or delete their data. Who owns your data? What do they do with it? How do they use it? It matters to people. And so it's impossible for many to even understand, like I said, who holds their data and what they're going to do with it. The supply chain of consumer data is full of loopholes, where third-party resellers can just sell your data to the highest bidder. So Mr. Tiwari, can transparent AI systems exist without strong federal privacy regulations, including consumer control over your own personal data?

Udbhav Tiwari:

Thank you, Senator. And in short, the answer is no. It is impossible for users to effectively exercise the rights that they have, not only over their own data but also over their social experiences, without knowing what companies are collecting, how that is being used, and, more importantly, what rights they have if harm is occurring in the real world. We've already discussed in this hearing various examples of harms that we have seen occur in the real world, yet the actions that regulators and governments have been able to take to limit some of those harms have been constrained by the lack of effective and comprehensive federal privacy legislation.

Sen. Jacky Rosen (D-NV):

Thank you. I appreciate it. And, Madam Chair, I'm going to yield back with eight seconds to go.

Sen. Maria Cantwell (D-WA):

Thank you. Senator Blackburn.

Sen. Marsha Blackburn (R-TN):

Thank you so much, Madam Chairman. I am so delighted we're doing this hearing today. As we have talked about so many times, when I was in the House and our Senate colleague Peter Welch was in the House, we introduced the first legislation to make businesses take steps to protect the security of our data, to require data breach notifications, and to allow the FTC and the state attorneys general to hold companies accountable for violations. And as Senator Rosen just said, it is so vitally important to know who owns the virtual you, who has that, and what are they doing with it. And now, as we're looking at AI, I think federal privacy legislation is more important than ever, because you've got to put that foundational legislation in place in order to be able to legislate to a privacy standard. And that is why we're working so carefully on two pieces of legislation: the NO FAKES Act that Senators Coons, Klobuchar, Tillis, and I are working on to protect the voice and visual likeness of individuals from unauthorized use by generative AI.

And then, Madam Chairman, you've mentioned the COPIED Act that you and I are working on, which would require consent to use material with content provenance to train AI systems. And I want to come to each of you, and let's just go down the dais. As we're looking at this, I'd like to hear from you all, very quickly, the ways that Congress is going to be limited in legislating if we do not have a privacy standard, and then how we find the balance between that privacy and data security component so that people know they have the firewalls to protect their information and keep it from being used by open source and large language models. So let's just run down the dais on this.

Ryan Calo:

Thank you, Senator. I was really struck in my research for this hearing by the Pew Research Center's survey, which suggests that overwhelming percentages of Americans are worried that their data is out there and is going to be used in ways that are concerning and surprising to them. I think that without passing laws that, per Senator Cantwell's remarks, define sensitive information to cover not merely already-sensitive information like health status, but also the inferences that can be made on top of that data, Americans are not going to feel comfortable and safe. I think that security and privacy go hand in hand. We could sit here and talk about the myriad horrible data breaches that have been occurring across our country. We can talk about ransomware and the way that hospital systems are being shut down. But ultimately, it all boils down to the fact that the American consumer is vulnerable and needs the government to step in and set some clear rules.

Sen. Marsha Blackburn (R-TN):

Thank you.

Amba Kak:

Thank you, Senator. I'll say that the incentives for irresponsible data surveillance have existed for the last decade. What AI does is pour gasoline on these incentives. So if anything, we have a situation where all of our known privacy and security harms and risks are getting exacerbated. To the second part of your question, which is what is the connection between privacy and security, I would say those are two sides of the same coin. Data never collected is data that is never at risk, and data that is deleted after it is no longer needed is also data that is no longer at risk. So I think having a strong data minimization mandate that puts what we would consider very basic data hygiene in place is absolutely essential, especially as you're seeing more concerning bad actors use this information in nefarious ways, some of which the legislation you mentioned is going to clamp down on.

Sen. Marsha Blackburn (R-TN):

Thank you.

Udbhav Tiwari:

Thank you, Senator. Mozilla is an organization whose products are used by hundreds of millions of people because of their privacy properties. Without providing a consistent standard that allows American companies to compete globally, just as they do on innovation, but also on privacy, Congress will be unable to ensure not only that Americans get the privacy rights they deserve, but also that American companies have a high baseline standard with which they can compete with organizations around the world. Thank you.

Sen. Marsha Blackburn (R-TN):

Thank you.

Morgan Reed:

And I understand I'm over time here, but very quickly: privacy laws should have a data security provision. It's one of our four Ps of privacy. I think data hygiene is absolutely critical, but it's different from a prohibition on processing. So let's be careful that when we use the term data hygiene, we don't mean no collection at all. Let's be smart about how we do that. Thank you.

Sen. Maria Cantwell (D-WA):

Thank you, Madam Chairman. Thank you. And thank you for your leadership on the COPIED Act. I think your understanding of creators, artists, and musicians, and being able to stick up for them, has been really critical. So thank you. Senator Hickenlooper.

Sen. John Hickenlooper (D-CO):

Thank you, Madam Chair. State privacy laws and federal proposals agree that consumers should have more control over their personal data, including the right to have their data deleted. As has been pointed out, however, consumers really don't have the time or the expertise to effectively manage all their data online and fill out the forms; the cookie notices and lengthy privacy agreements become more of a distraction. So Mr. Calo, let's start with you. The American Privacy Rights Act proposes, as has been discussed, minimizing personal data collection, but in addition, offering consumer-facing controls like data deletion requests. How do these two approaches work in tandem to protect consumers, rather than either one alone?

Ryan Calo:

That's a great question. In other contexts where we're trying to protect consumers, we do give them information and choices, and we know that privacy preferences are not completely homogeneous. However, in other contexts, not only do we give people choice, for example, how much fat content they want in their milk, but we also place substantive limits, like there can't be more than so much arsenic. And so I think there needs to be a combination of both. People should have control over their data, and they should be asked before that data is used in a separate context, like to set their insurance premiums. But there also have to be baseline rules, because, as you point out, consumers do not have the time or the wherewithal to police the market and protect themselves on their own.

Sen. John Hickenlooper (D-CO):

Right. Good answer. And Ms. Kak, you can opine on this as well, but I've got another question, so let me start with the question. You guys have discussed a little bit the advances in generative and traditional AI, and how those advances really are fueled by data. We recognize that, but training AI systems can't be at the expense of people's privacy, and reducing the amount of personal, sensitive data, as you have all discussed, the notion of minimization, does really reduce the likelihood that data privacy and data security harms could happen. And Senator Blackburn listed all the other bills; these issues are covered extensively in the various hearings we've had on the subcommittee that she and I chair, Consumer Protection, Product Safety, and Data Security. So Ms. Kak, how would you quantify, or what types of quantification do you use, when you say how often the data of consumers is unnecessarily exposed within the AI model? And do you believe that strict data minimization requirements can significantly help control this, let's call it, data leakage?

Amba Kak:

Senator, there are two parts, two ways in which I'll answer that question. The first is to say we don't know, and that's part of why we're here today: we don't have basic transparency about whether our data is being used or how it's being protected. And what we do know is that companies like Meta and Google are, at will, changing their terms of service to say, heads up, we're now using your data for AI, at the same time as we're seeing these chatbots routinely leak the personal information they were trained on. But even in the absence of clear data from these companies, and of a regulatory mandate that allows us to ask them for it, I think what we can already see just from the most obvious lapses is that this irresponsible data collection and use is happening everywhere.

It's happening all the time. And I think one of the ways in which the US might benefit from being somewhat of a laggard when it comes to data privacy law is to look at what hasn't worked elsewhere. What hasn't worked is a consent-only regime. That's why we need accountability in addition to consent. What has worked: the Brazilian data protection regulator recently banned Meta from using user data to train its AI, because they found that there were children's images in the training data and they were being leaked on the other side, right? So there's a lot to learn, and there's, I think, a foundation from which to act.

Sen. John Hickenlooper (D-CO):

Great. And Mr. Tiwari, just quickly, the General Data Protection Regulation, the GDPR of the European Union, has been in effect since 2018, I guess, or 2019. 2018. Since then it's been amended; they're still sorting it out, I think, in terms of governance structure. Without a US data privacy law, how can we resume some leadership on the global stage on these issues?

Udbhav Tiwari:

Thank you, Senator. By most accounts, there are currently at least 140 countries around the world that have a national privacy law. By not having one, the United States of America is not allowing its companies to compete effectively with companies from those countries. And that's because privacy is now a competitive differentiator. Like I mentioned earlier, people use the Firefox product because they believe in its privacy properties, and without a baseline of such standards, the small and medium companies that we've been talking about will be unable to compete with other small and medium companies around the world.

Sen. John Hickenlooper (D-CO):

Great. Thank you very much. I yield back to the chair.

Sen. Maria Cantwell (D-WA):

Thank you. Senator Moran.

Sen. Jerry Moran (R-KS):

Chairwoman, thank you very much, and thanks to our panelists. Very important hearing; thank you for holding it. Passage of federal data privacy legislation is long overdue. I chaired the subcommittee on data privacy that Senator Hickenlooper now chairs and Senator Blackburn is the ranking member of. Senator Blumenthal was the ranking member. We have come so close so many times, but never quite across the finish line, and the problems and challenges with our lack of success continue to mount. I don't know that the issues get more difficult to resolve, but we still haven't found the will to overcome the differences on the things that each of us thinks are pretty important. I reintroduced my comprehensive Consumer Data Privacy and Security Act again in this Congress. It gives Americans control over their own data, establishes a single clear federal standard for data privacy, and provides for robust enforcement of data privacy protections that does not lead to frivolous legal actions that would harm small business.

I think these are common sense policies, and again, I think there's a path forward utilizing them. And I would again say in this hearing, as I've said numerous times over the years, I'm willing to work with any and all to try to find that path forward. I have a few questions for the panel, and I'll start with you, Mr. Reed. It's important, in my view, that data privacy requirements established by federal law are shared by consumer-facing entities, service providers, and third parties, all of which may collect or process consumer data. Exempting segments of this value chain from requirements or enforcement under the data privacy law, I think, places an unfair burden on consumer-facing entities, particularly small businesses. Mr. Reed, is that something you agree with when it comes to data privacy? Should regulatory burden be shared across each entity that collects or processes data?

Morgan Reed:

I do, but I want to be careful about one thing. I know this sounds weird coming from the business side, so to speak, but I want to be careful that we don't say that shared responsibility becomes everybody touching their nose and saying, I'm not it. So I think the point at which you give your data over, the point at which you have that first contact, whether it's with my members through an application or through a store that you visit, is the most logical point for the consumer to begin to say, hey, I want my data to be retrieved, or, I don't want my data used in this way. So yes, there's shared responsibility across the chain, but I don't want a situation where the front person who interacts with the customer can then say, hey, that's down the food chain, three third parties from there. I think avoiding that is important.

Sen. Jerry Moran (R-KS):

You avoid that by?

Morgan Reed:

Well, you avoid that by actually having a clear and concrete conversation, so to speak, with the customer when they provide the data: here's what you're getting in exchange, here's what the rules of the road are. And that's why a federal comprehensive privacy bill, and we appreciate the bill that you've already worked on for data, moves us in a direction of consumers nationally having an understanding of what our responsibilities are with their data. Absolutely. But it has to start with an understanding at that first point, and then shared responsibility throughout the chain.

Sen. Jerry Moran (R-KS):

One more for you, Mr. Reed. Nineteen states, I think, is the number that have passed their own data privacy laws. It's a patchwork of state data privacy laws that is increasing compliance costs; one estimate projects that compliance costs for business could reach $239 billion annually if Congress does not implement its own data privacy law. Incidentally, I came to Congress believing, and still believe, that government closest to home is better than government far away. But it seems like I spend a bunch of my time trying to figure out how we take care of a complex situation with 50 different states and 50 different standards. Kansas doesn't have a data privacy law, but borders two states that do. Would you describe the challenges for small businesses associated with working in states with different data privacy standards?

Morgan Reed:

Well, your state is one of the ones we find most interesting, because you literally have businesses in your state where a large share of their customers actually come from the bordering state, because of that crossroads that exists. So for Kansas and Oklahoma, you have customers crossing the border literally every day. So having a mixture of these privacy bills -- now to be clear, a lot of states have some sort of small business carve-out, but it's different and the definitions are different in every state. So any business that participates is going to have to figure out which law tells them what to do. And if that customer walks in and says, my zip code is this, oh, sorry, I have to treat your data this way; if it's that, then I have to do it another way. It is soul-crushing for a small business to have to interview each purchaser and ask 'em what county they live in or what state they live in. So it's a huge burden on small businesses and unfortunately, it's one that's really a lot easier for a multinational company to handle.

Sen. Jerry Moran (R-KS):

Let me quickly ask Mr. Tiwari and Ms. Kak. It's important for Americans to understand when their data is collected and processed by companies. This belief is reflected in our data privacy legislation, which requires covered entities to publish their privacy policies in easy-to-understand language and to provide easy-to-use means to exercise their right to control their data. How can a federal policy ensure consumers are aware that their data is being used, even as AI potentially increases the complexity of how consumer data is processed?

Udbhav Tiwari:

Thank you, Senator. It is essential for us to recognize that purely relying on consent has proven to be ineffective to protect the privacy of the average consumer. Technology has now become such a complex endeavor that to expect an average consumer to understand everything that can happen when their data is collected is a burden that they will never be able to meaningfully fulfill. And therefore, any effective federal privacy legislation must include accountability measures that also place obligations upon these entities for what they cannot do, regardless of whether the consumer has consented to that behavior or not. Only then can consumers be sure that the government is working effectively to protect their privacy rather than hoping that they understand everything that may happen to that data.

Sen. Jerry Moran (R-KS):

Just a sad state of affairs, actually, that we can't understand it well enough to protect ourselves. Anything to add?

Amba Kak:

Yeah, the only thing I would add, Senator, is: what do we do as consumers with that information? That's really why transparency isn't enough, and why proposals like the bipartisan ADPPA and the APRA are so important: they're putting down rules of the road that apply regardless of consent and what consumers choose.

Sen. Jerry Moran (R-KS):

Thank you both.

Sen. Maria Cantwell (D-WA):

Yes, I think Senator Budd.

Sen. Ted Budd (R-NC):

Thank you, chairwoman, and thank you all for being here. Technological leadership is foundational to American dominance in the 21st century. In this country, we innovate, creating products and services that consumers want, which increases productivity and sharpens competition. This spurs a positive cycle of further innovation. The Internet's a perfect example of that, and one of the factors that differentiates America is a regulatory environment that protects public interest and safety while at the same time giving talented entrepreneurs and specialists the space to try new things. I think that AI should follow this tried and true path. Mr. Reed, thanks again to all of you for being here. Mr. Reed, thank you for your example a few moments back in your opening comments about hog farms; being from North Carolina, I really appreciate that, and it only makes me wish I'd had a little more bacon for breakfast. Always a good call. Well, in your testimony, you talked about the different ways that App Association members are creating and deploying AI tools. In fact, you said that 75% of surveyed members report using generative AI. Do you believe that there is currently a healthy amount of competition in this space?

Morgan Reed:

Well, I think what's been most amazing is that the level of competition against bigger players is profound. The news is always covering Microsoft's moves and Meta's moves and other players' moves. But I'm telling you right now that we're seeing more moves by small and medium-sized companies to use AI in important ways. One quick and critical example: if you've got a small construction business in a town right now, to bid on an RFP that gets let, it's a lot of nights of coffee and desperate typing on your computer to try to get one RFP out. But if I can look at all of your RFPs with AI, look at what you do best, what you claim you do best, and help you write that RFP so that instead of filing one, you file 10, maybe you win two bids, and now you hire three people. I used your own data. A lot of my fellow panelists are talking about using consumer data and sucking in consumer data. What my members are doing with a lot of this is actually using your private data stores to look at what you are doing well and help you improve upon it. And I think when we talk about AI and privacy, we have to remember that the ability to do something like that, simply helping you write 10 RFPs, is a huge advantage that AI provides. And it's something, frankly, small businesses are doing better than the large businesses right now.

Sen. Ted Budd (R-NC):

I appreciate that example, especially the advantages for small businesses. I read about four newsletters a day on this very topic, so I'm fascinated with it. The question, Mr. Reed, is about the FTC's antitrust policy that seems to be focused against vertical integration, which we think might have a chilling effect on your members' ability to develop and roll out new and better AI services. We see it even with some of the bigs, like with OpenAI, of course people read news about that every day, with Microsoft and Apple not even having a board presence there. I don't know if that's fear of antitrust legislation or what, but we see that there's potentially a chilling effect. Do you have any thoughts on that?

Morgan Reed:

Yes, unfortunately the Federal Trade Commission's proposed rule on Hart-Scott-Rodino is terrible. I probably should use better words here in the committee, but it really puts small businesses at a huge disadvantage because it essentially establishes a floor for the potential for an acquisition. And the way small AI companies that are trying to figure out how to make their way in the world do it is to seek venture capital, or even your parents'. They have to believe in your dream, and part of that dream is that you can have a huge success. When venture capitalists are putting money in, they're looking at a portfolio and they know nine out of 10 are going to fail. But if the FTC is essentially capping anybody's success level at $119 million, that tells the venture capitalists to change the nine companies they're investing in, because they can't dream bigger than $119 million. We also think that the Hart-Scott-Rodino proposal violated the Regulatory Flexibility Act, because they didn't take into consideration the impact on small and medium-sized businesses. So yes, it has a huge impact on competition for small new AI startups, and we are incredibly disappointed, and we look forward to seeing if we can convince the FTC to do the right thing and change their ways.

Sen. Ted Budd (R-NC):

I appreciate your thoughts on that. Back to your comments on small business. I don't know if that weighs into your answer on the next question, Mr. Reed, but how should this committee weigh the need for firms to be able to use responsibly collected consumer data against the serious concerns that consumers have about the fact that their sensitive data may be breached or improperly used? How should we look at that as a committee?

Morgan Reed:

Well, as a committee, the first thing to do, which you've heard from everyone about two dozen times, is pass comprehensive privacy reform in a bipartisan way, because that gives businesses the rules of the road on how to behave with consumer data. Figuring out how we balance data hygiene, data minimization, and all those questions is going to be part of the hard work that the Senate has to do on this topic. But overall, I would anchor it in the concept of harms. What harm is being done? What's the demonstrable harm, and how do you use the existing law enforcement mechanisms to go after those specific harms? I think it's a lot easier to empower existing law enforcement than it is to try to create a new mechanism out of whole cloth and hope it works.

Sen. Ted Budd (R-NC):

I appreciate your thoughts. Thank the panel for being here. Chairwoman, I yield back.

Sen. Maria Cantwell (D-WA):

Thank you so much, Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

Thank you very much. It's appropriate that we're doing this remotely for a tech hearing. I wanted to first of all talk about AI. I've said this many times, quoting David Brooks: he says he has a hard time writing about it because he doesn't know if it's going to take us to heaven or hell. And I think there are so many potential incredible benefits, coming from the state of the Mayo Clinic, that we're going to see here, but we have to put some guardrails in place if we're going to realize those benefits and not have them overwhelmed by potential harm. And I know a lot of the companies involved in this agree. We do have to remember, when it comes to privacy, that these are no longer just little companies in a garage; they resisted any rules for too long. And whether it's competition policy or children's privacy, we need to put some guardrails in place, and it will be better for everyone. So I want to start with you, Professor Calo. Your testimony discussed how companies can collect or buy vast amounts of a person's non-sensitive information, from what's in their shopping cart to their posts on social media, and use AI to process all that data and make sophisticated inferences about their private health information, such as pregnancy status, with alarming accuracy. Can you speak to how data minimization can ease the burden on consumers trying to protect their privacy?

Ryan Calo:

Thank you, Senator. Yes, it's a critical issue. Many privacy laws, almost all of them, differentiate between sensitive categories of information, such as health status, and less sensitive information, even public information. But the problem with these AI systems is that they're extremely good at recognizing patterns in large data sets, and so they can infer sensitive things from non-sensitive data. So how would data minimization help? Well, data minimization would of course restrict the overall amount of information and the categories for which it could be used. I think the most effective tool is to define categories of sensitive information to include not just sensitive information itself that's collected from the consumer or observed somewhere, but also those sensitive inferences that are derived by AI. I think that's the clearest way; that way you know as a corporation--

Sen. Amy Klobuchar (D-MN):

Very good. Good answer. Thanks. Mr. Tiwari, any comments on that? You know that demand for data is at an all-time high, and comprehensive privacy legislation could perhaps shape what developers and deployers do to adopt more privacy-preserving systems as they develop things, which they're already doing. Just quickly?

Udbhav Tiwari:

Thank you, Senator. Very quickly: Mozilla very recently acquired an organization called Anonym, and what Anonym does is take privacy-preserving technologies and, within trusted execution environments, perform the operations that would take place in large distributed systems in a way that nobody can see the data: not the entity that's providing the data, not Anonym, and not the entity that's using the data, in order to carry out, in this case, attribution for advertising. And we believe that these technologies are showing a path forward that not only minimizes data collection but also showcases, even when data is being processed, similar to on-device processing, the ways in which it can be done that minimize the risk for the average consumer and reduce risk and liability for companies.

Sen. Amy Klobuchar (D-MN):

Thank you. Ms. Kak, Senator Thune and I have a bill, I think you may be familiar with it, the AI Research, Innovation and Accountability Act, to increase transparency and accountability for the riskiest non-defense applications of AI. It directs the Commerce Department to set minimum testing and evaluation standards for AI systems that pose the highest risk, such as systems used to manage our electric grid and other critical infrastructure. It also requires AI deployers to submit regular risk assessments and transparency reports to the Commerce Department that, among other things, document the source of the data used to train their AI. Do you agree that transparency on the data sets used to train commercially available models can do more to protect consumer privacy but also ensure more reliable systems?

Amba Kak:

Absolutely. And Senator Klobuchar, we're actually seeing some of these big tech companies argue in their policy briefs that the training stage is irrelevant and that we should only be focusing on the output stage. But that couldn't be further from the truth, and it's very clearly in service of their interests alone. So I think we should really double down on transparency, but also on testing and evaluation throughout the lifecycle of an AI product. I will also say that companies should not get to grade their own homework. So the actual metrics that we use for this evaluation should be set by regulators, by public bodies, and not the companies themselves.

Sen. Amy Klobuchar (D-MN):

Okay, very good. Thank you, all of you. I have a question on voice cloning scams and one on kids' privacy that I'll submit for the record to all of you. So thank you very much.

Sen. Maria Cantwell (D-WA):

Senator Vance.

Sen. J.D. Vance (R-OH):

Thanks, Madam Chair. Thanks to our four witnesses for being here. I want to direct my questions to Mr. Reed and appreciate, in particular, your testimony. There's this concern in this building and across Washington that AI poses a number of safety concerns and I fully admit that there are a number of issues I worry about as AI continues to develop. In particular, you could imagine a scenario where AI makes these chatbots much more efficient, much more believable, allows predators to prey on children more easily online. That is a real concern and something that I think that we in this committee should be very concerned about. And I know that's a bipartisan worry. On the flip side of it, I also worry that that legitimate concern is justifying some overregulation or some preemptive overregulation attempts that would frankly entrench the tech incumbents that we already have and make it actually harder for new entrants to create the innovation that's going to sort of power the next generation of American growth and American job creation.

And I kind of want to just get your reaction to that thought, Mr. Reed, and what you're seeing in your work. And the one additional observation I'll make is that very often CEOs, especially of larger technology companies that I think already have advantageous positions in AI, will come and talk about the terrible safety dangers of this new technology and how Congress needs to jump up and regulate as quickly as possible. And I can't help but worry that if we do something under duress from the current incumbents, it's going to be to the advantage of those incumbents and not to the advantage of the American consumer.

Morgan Reed:

Well, thank you for the question. And of course the normal behavior of any company that has significant compliance resources is to look for ways to make sure that they are prepared to comply. Obviously they face billion- and trillion-dollar risks, so for them it's viewed through that lens of risk versus opportunity. So I completely agree, and that is what is, to a certain degree, trying to shape this environment. What I have noticed is that AI, outside of the context of the large headline news, is really empowering small businesses to do activities that they are currently not very good at and do them better. For example, if you're a small business owner, one of the things that you run into all the time is: am I buying my inventory at the right time? Am I on a 30-day net, 60-day net, or 90-day net?

Every small business owner in your state knows those terms. What I'd like to do is use AI to look at my last three years of bookkeeping and figure out: am I buying the wrong thing? Am I buying at the wrong time? Am I missing my 30-day net, not paying, and losing money? Because that costs me my next customer and, more importantly, costs me the next person I'm going to hire. So those are the kinds of areas we need it for. I heard one of the other witnesses talk about how we need no inferences on sensitive data. I'm a little concerned about that as well, in the context of what you ask. I've had the privilege of serving on a federal advisory committee for HHS, for both this and the previous administration. And one of the things we talk about a lot is something called social determinants of health.

And in the state of Ohio, it's really important to figure out: do people have access to nutrition? Do they have access to the internet? Do they have access to telemedicine, all the services that are in there? And you need AI to help build inferences about the health and wellbeing of the citizens of your state. So I think we have to be very careful with these blanket no-inferences rules, and rather talk about the real harms. So I agree with you completely on what you said at the beginning about the real structural and functional harms that can happen from AI, but I think we should look through the lens of harms to make the choices that Congress has to make.

Sen. J.D. Vance (R-OH):

So with my brief time, just one question on who has done data privacy well. One of the benefits of our federal system, to back up a little bit, is that sometimes the states do something that's really smart and frankly sometimes they do something that's very stupid, but we can sort of learn from it before projecting a national regulatory regime without any experience. Who has done this well? Who's done data privacy well?

Morgan Reed:

I think the structural model that has worked best so far from the states is one sometimes called the Virginia model, though Virginia, Colorado, Connecticut, Delaware, Montana, and Oregon have all kind of taken that structural model. If I had any suggestion, I think starting with that model as a path forward for the federal government is a good idea. It has a lot of broad support. But what we can't have is everybody with their own flavor, right? I don't want 31 flavors of privacy bills. I want a model that applies to everyone everywhere.

Sen. J.D. Vance (R-OH):

Yeah. Okay. Thank you. And thanks to the rest of the witnesses too, I yield.

Sen. Maria Cantwell (D-WA):

Thank you. Senator Budd -- who else? I think we're waiting for Senator Schmitt. Okay. Nobody else online? Nobody else. I got it. Senator Schmitt.

Sen. Eric Schmitt (R-MO):

Thank you, madam chair, and I want to thank the witnesses for being here today too. In my duties as a member of both the Senate Commerce Committee and the Armed Services Committee, I've come to realize the significant implications that AI has for the future, not only for our government, but for the economy and the citizens of this great country. AI has the potential to transform many types of commercial applications and is already playing a pivotal role in various aspects of our national security apparatus. St. Louis, Missouri, where I'm from, is helping lead that charge through Scale AI and its innovative data annotation and curation systems, as well as its advanced model testing and evaluation approach. Scale AI is partnering with important government entities such as the NGA in St. Louis, as well as notable commercial applications like OpenAI, to improve AI modeling.

Axios reported this week that US private sector investments in AI outpace every other leading nation's, more than doubling the investments of our pacing threat, China. Members of this committee will speak to the need, potentially, for overreaching guardrails related to AI. While I believe targeted measures may be warranted where current gaps in law may exist, it's critical that we're not overreactive. Any new laws must not hinder the investments and innovation being made by our private sector. Unfortunately, the approach by the Biden administration and others in this body threatens investments. Monolithic regulatory regimes pushed by this administration pick winners and losers. AI needs innovators, both large and small. Only the largest of the companies can comply with sweeping regulations, which is precisely why I believe big tech supports these proposals. There's an unholy alliance right now between the current administration and big tech to crowd out future competition.

Additionally, I fear that the Biden administration's efforts on AI are a backdoor attempt to use regulators to silence their political opponents. I recently led a letter with Senator McConnell, Senator Thune, and Ranking Member Cruz calling out the efforts by the FCC to police the content of political advertising through regulation of AI leading up to the 2024 elections. The goals of the administration hide in plain sight: they will leave no stone unturned to use big tech and regulations to squash those they disagree with. While I think this hearing provides an important opportunity for us to hear from our witnesses and gain valuable insight into the greater AI and privacy ecosystem, I feel it is incumbent upon us to also ask which existing laws can address the fears expressed by many of our colleagues on this committee related to AI. And as you guys know, AI has dominated a lot of these discussions, and one thing that's become increasingly clear is that we have many existing laws in place already that address conflicts involving AI.

For example, it is currently illegal to commit mail fraud. It should not matter whether the person uses AI or a pen to write the letter; it's still illegal. We've seen other countries prematurely attempt to overregulate this technology, and it's clear we can't follow their lead. So my first question here is to Mr. Reed. As Congress debates the need for regulation of AI, it's clear we need to focus on the gaps in the system and not create a whole new regime. With that said, what purely new regulations do you think are needed, based on the gaps in our existing regulatory system?

Morgan Reed:

Well, thank you very much for the question, and I'm going to sound like a broken record, but it is a comprehensive privacy bill that clearly defines what the responsibilities of business are: how we communicate to you what we're going to do with your data. Because to take a step back, almost all the questions that arise on this are about customers having an unexpected outcome. You heard earlier from the chair; she talked about how polling numbers show that most Americans are convinced that something is going to happen with their data that they don't expect or that is not in their interest. That's our fault in business. And that's a problem with the regulation, because we have different ways in which we are supposed to communicate with customers on what we're supposed to do. So I think step one, tell customers what we're going to do. Step two, meet those expectations. And to your point, step three, have existing laws that are already on the books go after us when we don't. And I think that's a big part of the problem.

Sen. Eric Schmitt (R-MO):

Mr. Tiwari, do you agree with that?

Udbhav Tiwari:

Absolutely. It's very clear that the benefits of having privacy legislation are something America is already quite familiar with, thanks to laws like HIPAA for health and COPPA for children, which have been on the books for a very long time and have been remarkably effective at preventing some very serious harms. Comprehensive federal privacy legislation will go a very long way in ensuring that protection is available to every American.

Sen. Eric Schmitt (R-MO):

Mr. Reed, given the rapid pace of AI innovation, especially among startups and individual developers, can you explain how such regulation, if we were to follow it where some people potentially want to go, might disproportionately impact smaller entities that lack the resources of bigger tech companies?

Morgan Reed:

Absolutely. I think the issue is that we've seen some calls for things like a federal agency that would license large language models. It would not be surprising if the companies able to obtain those licenses just happen to be companies that state their value in trillions of dollars, versus small and medium-sized businesses. So what we see is a privacy law that essentially sets the bar for meeting the requirements so high that no small business can ever meet it. If a company is highly vertically integrated and already has all of that customer data, then they are at no risk; they've already gotten permission to use all that data. And so going forward, their answer is, fine by us, because we've already got it.

Sen. Eric Schmitt (R-MO):

Right. And I guess to follow up on that: how would that concentration of power, which could be even more exacerbated than what we have right now under that kind of regime, affect privacy, competition, and consumer choice in the AI industry?

Morgan Reed:

Well, I think it absolutely affects consumer choice. And to be clear, I don't want to sound too negative; we actually need platforms. Platforms are really critical for us to build on top of. So I want to see successful large language models, and I don't want legislation to say big companies can't build large language models. That's terrible too. So the right answer is what you came in with at the beginning: look at the existing law enforcement infrastructure, whether it's the Office for Civil Rights at HHS, as Mr. Tiwari discussed, or the Federal Trade Commission's ability to enforce COPPA. We already have those tools on the books. If we go the other direction and regulate at this high level, then it will be almost impossible for small companies to knock off those incumbent positions.

Sen. Eric Schmitt (R-MO):

Well, and that leads to sort of my final question. If we go down this road, could that potentially give an advantage to our adversaries, who might develop or adopt a more lenient approach, accelerating their own advancements to the detriment of what we're trying to do?

Morgan Reed:

Of course. The chairwoman and everyone here has recognized that the AI ecosystem is global, and so our competitiveness is global. I'm very proud of the fact that our members sell products around the world to customers everywhere. I heard mention of Brazil; we actually have great developers in Brazil who are doing amazing things. And what's amazing about that is they depend on the innovation that often comes from the United States to build the next product, and the next product serves a customer base that speaks Portuguese and doesn't speak English. And yet a lot of the innovation came from the platforms here that they build on top of. So our global competitiveness is absolutely impacted if we restrict access to the technology that drives us forward.

Sen. Eric Schmitt (R-MO):

Thank you. Thank you, Madam Chairwoman.

Sen. Maria Cantwell (D-WA):

Thank you. Senator Welch.

Sen. Peter Welch (D-VT):

Thank you very much, Madam Chair, and I thank the witnesses. One of the big concerns that all of us have is really meaningful consumer notification and consent in how individual information is being used. As I understand it, large language models are training on massive data sets scraped from all across the internet, which can include private and personally identifiable information. And concerningly to me, and I think to a lot of folks, researchers found that ChatGPT could be tricked into divulging training data, including user IP addresses, emails, and passwords, and software patches are not the solution we need, in my view. So that's really the reason that Senator Lujan and I introduced the AI Consent Act, which would require online platforms to obtain consumers' express informed consent before using their personal data for any purpose to train AI. So I want to ask Mr. Tiwari: do you believe it is more effective to give consumers the ability to opt in to allow companies to use their data, or to opt out once the data is already being used?

Udbhav Tiwari:

Thank you, Senator. Absolutely, yes. The Mozilla Foundation has run campaigns over the last four to five months that have explicitly focused on this specific question: both requiring companies to be transparent about whether they are using personal data to train their models, and, after that, ensuring that users have complete control over this processing, meaning users should be able to consent to such processing but also be able to withdraw this consent whenever they like and opt out. We believe that the risks that exist from the leakage of such private information will drastically reduce if users are given the ability both to understand what the data is being used for and then to make a choice about whether they would like it to be used in that way.

Sen. Peter Welch (D-VT):

Okay, thank you. Another issue here is small business. They obviously don't have the resources and the infrastructure the big businesses have, yet consumer data can be lost or appropriated through that source. We don't want to impose huge expenses on small businesses that they just can't meet, the local retail florist, let's say, but we want to protect people's privacy. So let me just ask Mr. Calo: what would be a good way to deal with this? I'm thinking about either a pilot program or the capacity of the federal government to provide sort of a punch list of things that can be done to assist small businesses, so they don't have to do something that is not within their capacity to do. They want to sell flowers; they don't want to become IT experts. So perhaps you could tell us how we could protect people's privacy in a way that doesn't burden small business.

Ryan Calo:

What a great question. I mean, it is true that small businesses don't have the capacity to comply with the same sorts of requirements at the same level as large businesses. A few years ago at the Tech Policy Lab at the University of Washington, we hosted the 'Start with Security' series by the Federal Trade Commission, where the FTC was saying, look, it is going to be unfair and deceptive if you don't have adequate security that's proportionate to how much data you have. But rather than just sort of throw that out there, they went on a tour around all the innovation centers. They came, of course, to Seattle, but also to San Francisco and elsewhere, and they invited smaller companies to talk about the FTC's own expectations. The more we can do to help scaffold government expectations for smaller businesses, the better off we'll be. But I agree that you do need to have a tiered system, because Google and Meta and Amazon and others have the capacity to comply with far more rigorous, detailed rules than do small businesses.

Sen. Peter Welch (D-VT):

Thank you. I want to ask Ms. Kak this question. Senator Bennet and I have introduced the Digital Platform Commission Act, which would establish an independent federal commission to regulate digital platforms, including on key issues like AI and privacy concerns. And our theory essentially is that Congress simply can't keep up with everything that's happening, and we can't address everything in one-off pieces of legislation. There has to be a governmental agency that's properly staffed and resourced to keep up with this. We've done this in the past, when we started the Securities and Exchange Commission, the Federal Aviation Administration, and the Interstate Commerce Commission. Give me your thoughts on just the concept of a digital platform commission having that responsibility on an ongoing basis.

Amba Kak:

Thank you, Senator, for your leadership on this issue. In general, we do think that independent regulation and sort of resourcing our enforcers should be a top priority. What I will say though is that I do think that existing enforcement agencies like the FTC have been working on these issues for decades now. They have the capacity, they have the kind of technical expertise, what the moment needs is for these agencies to be much better resourced so that they can meet the moment and for laws like the APRA and the bipartisan ADPPA, which empower them to then enforce clearer privacy standards.

Sen. Peter Welch (D-VT):

Alright, thank you very much. I yield back.

Sen. Maria Cantwell (D-WA):

Senator Thune.

Sen. John Thune (R-SD):

Thank you, Madam Chair, for holding today's hearing. It's my view that this committee, which has expansive jurisdiction over technology issues, should play a significant role in understanding how the most recent developments in AI will impact society and existing law. And ultimately, it's my belief that this will include developing a thoughtful, risk-based legislative framework that seeks to address the impacts of AI, which is why I introduced the AI Research, Innovation and Accountability Act last year with Senators Klobuchar, Wicker, Hickenlooper, Capito, Lujan, Lummis, I should say, and Baldwin. The bill establishes a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI. Through this legislation, basic safety and security guardrails are established for the highest-risk applications of AI without requiring a burdensome audit or stamp of approval from the government. And it is my hope the committee will advance the legislation soon. Mr. Reed, there are several proposals out there calling for a new AI oversight agency and a licensing regime that would require the government to approve certain AI systems before they could be deployed. The framework that I just mentioned, and that we've been working on with members of this committee, would take a more pragmatic approach, in my view, arguing instead that a risk-based compliance and assessment process would provide the necessary oversight while also allowing AI developers and researchers to innovate more quickly. In your view, how would a licensing regime impact the innovation ecosystem in the US?

Morgan Reed:

As I touched on earlier, I think a licensing regime always leads to a situation where those with the power and the time and the number of lawyers to go through the process of meeting that licensing requirement are the ones who win at the table, rather than those with the best technology or the best ideas. So I'm always concerned about a licensing regime that is the only door into success. We know how that works: you end up with three, four, maybe five competitors, and they settle into a nice multi-year program. Beyond that, I want to say that your work on the risk-based framework is a great idea, but I'm always hesitant, because I know that we need comprehensive privacy legislation. It's been very hard: people come forth with these great ideas that offer really good alternatives, but they still leave out a piece without comprehensive privacy legislation.

What ends up happening is we have a patchwork, and that patchwork costs us. You made one other critical point that I think is vital, and I've heard it from my fellow panelists: there is in fact already existing government expertise. I've had the honor to work with the Food and Drug Administration, the FDA, and its Center of Excellence. And I would say that they already have incredible leadership on understanding AI frameworks, with their internal development on how do I get approval for software as a medical device that includes AI, how do I deal with black-box AI, and how do I deal with transparency. The FDA has been working on this for years. So I think my worry is about standing up a new federal agency when we already have expertise on these topics within existing agencies, and I'm going to agree with Ms. Kak and say that we need to make sure that we have plenty of resources for them to enforce existing laws.

If you've got a problem in health, there's OCR; if you've got housing, we've got DOJ. You already have mechanisms; you already have harm-based, risk-based assessments that can be done by existing government agencies. That's a lot better than standing up a single new agency. And my last point on that is very simple. Anyone who's ever run a business knows that growing a business while hiring is one of the hardest things. You'd be growing and hiring a business inside of the government, called a whole new agency, at the same time that everything is changing. Let's let the experts in each area do their work rather than trying to build out while the plane's already in the air.

Sen. John Thune (R-SD):

And to be clear, we do not call for any kind of a bureaucracy. In fact, the framework in our bill doesn't allow for one. Thank you for that. That's why we want it to be a light-touch approach. On the privacy issue, is it your view that we're better served by having one national standard?

Morgan Reed:

Absolutely.

Sen. John Thune (R-SD):

Okay. And how about the federal privacy law, including a private right of action? What's the practical effect of a provision like that on small businesses and startups?

Morgan Reed:

The reality is that in order to achieve bipartisan legislation, it is likely that we're going to have to give some consideration to a private right of action. I think what has to happen is that if Congress considers putting a private right of action in place, it needs to ensure that it has numerous backstops to make sure that it doesn't become an opportunity for ludicrous sue-and-settle. Right? We've seen that, where small businesses will get a letter saying, you're using a patent in a fax machine, you owe me $50,000, pay me. And the cost of fighting that ludicrous suit is greater than the $50,000 you send. I don't want our privacy laws, with a private right of action, to lead to one state becoming kind of the breeding ground for hilarious yet economically untenable actions against small and medium-sized businesses. We're the best victims in this: we have just enough money to pay you, but not enough money to fight you.

Sen. John Thune (R-SD):

Alright, madam chair, my time has expired. I have a question I'd directed to Dr. Calo, but I'll submit it for the record. Well, let me just ask it very quickly. I see transparency, as I mentioned, as the key to ensuring that both developers and deployers of AI systems are accountable to the consumers and businesses they serve. But could you expound just briefly on what constitutes a deployer versus a developer, as well as explain how obligations for developers and deployers differ when it comes to transparency and accountability?

Ryan Calo:

Yeah, I mean technology generally has a problem of many hands. And so you have these foundational models and then you have APIs upon which things are built. And so effective legislation makes it clear what the respective responsibilities of each of these parties are. For me personally, speaking on my own behalf, my concern enters into it when whoever it is in the ecosystem is leveraging the tool in a way that harms consumers and tries to extract from them. So maybe that's the platform, but maybe it's somebody using the tools of the platform.

Sen. John Thune (R-SD):

Alright, Madam Chair, thank you.

Sen. Maria Cantwell (D-WA):

Thank you. Thank you. I'd actually like to follow up on that; I was going to anyway, so it was a good lead-in. I want to thank Senator Thune for his leadership, not just on data security but on privacy, and now on your proposal as it relates to AI. Very good concepts there that we should continue to work on, and hopefully we'll get to a markup soon. But there's this news that came out from the Department of Justice, which had dismantled a Russian bot farm intended to sow discord in the United States; with AI, the Russians created scores of fictitious profiles and then generated these posts. And so I'm trying to understand, you were talking about this ecosystem that's created, right? And now here we have bots that are just exploding with the information because we've given them so much data. You can say it came from the social media platform that collected it, and then that information got scraped, and then the information got to the bots, and then the bots put it on this accelerant, and a bad actor can use it against us. So why is it so important to now have a tool to fight against this? Because the bot system is out of control, but the AI accelerant on the bot system makes it an imperative.

Ryan Calo:

I can only agree with you, Senator. Obviously misinformation and propaganda are not new, but the ability of adversaries of all kinds, domestic and foreign, to create plausible-looking and very damaging misinformation campaigns has become quite acute. I'll just use one example. As you know, the Center for an Informed Public studies misinformation and disinformation, and one example is that someone created a deepfake, not real, fictitious, that gave the appearance that a bomb had gone off at the Pentagon, which was so concerning to people that it actually caused a dip in the stock market until people figured out that it wasn't real. The ability to create a seemingly real catastrophic event is very, very dangerous. But you're talking about something even more basic, which is that AI makes it possible to generate much, much more disinformation and have it appear different from one another: different media, different phrasing, and everything else. It's deeply, deeply concerning. I think there are ways in which a privacy law could actually help, but the problem of disinformation and misinformation is probably broader still.

Sen. Maria Cantwell (D-WA):

So what do we do about the bots from a regulatory perspective?

Ryan Calo:

Yeah, that's hard. States like California have a bot disclosure act that requires that if you're operating a bot in certain ways, commercial and electioneering, you have to identify yourself as fake. The problem, of course, is that Russian disinformers are not going to comply with our laws. And so I think part of the response has to be political and economic, right? It is one of the main reasons that the federal government needs to get involved, because it's not something the states can address. States can't find a global consensus around sanctioning bad acting around information. But I think that placing responsibility on the platforms to do as much as possible, since they have control over their own platforms, to identify and disincentivize this kind of automated misinformation is also really key.

Sen. Maria Cantwell (D-WA):

Thank you. Well, I think that concludes our hearing. I know it's a very busy morning for everybody. The record will remain open for two weeks, and we ask members to submit their questions for the record by July 18th. I want to thank the witnesses, all of you. Very informative panel. We appreciate you answering these questions and helping us move forward on important privacy and AI legislation. We're adjourned.
