Transcript: US Senate Judiciary Hearing on Oversight of A.I.

Gabby Miller / Sep 13, 2023

Gabby Miller is Staff Writer at Tech Policy Press.

September 12, 2023: (l-r) William Dally, Chief Scientist and Senior Vice President of Research, NVIDIA Corporation; Brad Smith, Vice Chair and President, Microsoft Corporation; Woodrow Hartzog, Professor of Law, Boston University School of Law and Fellow, Cordell Institute for Policy in Medicine & Law, Washington University in St. Louis. Dirksen Senate Office Building Room 226, Washington DC.

Artificial Intelligence (AI) is in the spotlight only a week into the U.S. Congress’ return from recess. On Tuesday, the Senate held two AI-focused Subcommittee hearings just a day before the first AI Insight Forum hosted by Senate Majority Leader Charles Schumer (D-NY).

Tuesday’s hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law was led by Chairman Sen. Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), the latest in a series of hearings in the committee on how best to govern artificial intelligence. It also coincided with their formal introduction of a bipartisan bill that would deny AI companies Section 230 immunity.

The witnesses were:

  • Woodrow Hartzog, Professor of Law, Boston University School of Law, and Fellow, Cordell Institute for Policy in Medicine & Law, Washington University in St. Louis (written testimony)
  • William Dally, Chief Scientist and Senior Vice President of Research, NVIDIA Corporation (written testimony)
  • Brad Smith, Vice Chair and President, Microsoft Corporation (written testimony)

(Microsoft’s Smith will also be in attendance at Sen. Schumer’s first AI Insight Forum on Wednesday, and NVIDIA's CEO, Jensen Huang, will be joining him.)

Sen. Blumenthal, in his opening remarks, stated the Subcommittee's desire to craft a legislative framework that builds basic safeguards into AI products for businesses and consumers while maximizing the technology’s future potential benefits, especially from an American entrepreneurial standpoint.

While there was much overlap between witnesses in their responses to questions from Senators, the opening statements made their vantage points and accompanying interests quite stark.

NVIDIA’s Dally emphasized the need for thoughtful deployment of AI systems without suppressing innovation. Microsoft’s Smith focused more on legislative proposals. He opened his testimony asking the Subcommittee to keep three goals in mind while crafting AI legislation: prioritize safety and security, require licenses for advanced AI models, and create an independent agency for effective oversight.

Offering a counterpoint to both the panel and the broader industry-led approach to AI policymaking was Professor Hartzog. “I’d like to make one simple point in my testimony today,” he explained. Approaches that merely encourage transparency, promote principles of ethics, and mitigate biases, he believes, are “vital, but they are only half measures”; they will not fully protect us or bring AI within the rule of law.

Broad topics covered in the hearing included:

  • The merits of establishing a licensing regime for companies engaged in high-risk AI development
  • Protecting consumers’ and kids’ privacy
  • Mandatory disclosures for AI-generated content, including the use of watermarks, especially for political ads
  • AI proliferation that could exacerbate foreign influence campaigns from US adversaries such as China and Russia
  • AI’s effect on US labor and the economy
  • Balancing national security concerns while remaining globally competitive, especially with regard to AI hardware (e.g. chips) and software (e.g. cloud infrastructure)

What follows is a lightly edited transcript of the discussion.

Sen. Richard Blumenthal (D-CT):

The hearing of our Subcommittee on Privacy, Technology, and the Law will come to order. I want to welcome our witnesses, all of the audience who are here, and say a particular thanks to Senator Schumer, who has been very supportive and interested in what we're doing here, and also to Chairman Durbin, whose support has been invaluable in encouraging us to go forward here. I have been grateful, especially, to my partner in this effort, Senator Hawley, the ranking member. He and I, as you know, have produced a framework, basically a blueprint for a path forward to achieve legislation. Our interest is in legislation, and this hearing, along with the two previous ones, has to be seen as a means to that end. We're very result-oriented, as I know you are from your testimony, and I've been enormously encouraged and emboldened by the response so far just in the past few days. From my conversations with leaders in the industry like Mr. Smith, there is a deep appetite, indeed a hunger, for rules and guardrails, basic safeguards for businesses and consumers, for people in general, from the panoply of potential perils.

But there's also a desire to make use of the tremendous potential benefits, and our effort is to provide for regulation in the best sense of the word, regulation that permits and encourages innovation and new businesses and technology and entrepreneurship, but at the same time provides those guardrails, enforceable safeguards that can encourage trust and confidence in this growing technology. It's not an entirely new technology. It's been around for decades, but artificial intelligence is regarded as entering a new era. And make no mistake, there will be regulation. The only question is how soon and what, and it should be regulation that encourages the best in American free enterprise, but at the same time provides the kind of protections that we do in other areas of our economic activity.

To my colleagues who say there's no need for new rules, that we have enough laws protecting the public: yes, we have laws that prohibit unfair and deceptive competition. We have laws that regulate airline safety and drug safety, but nobody would argue that simply because we have those rules, we don't need specific protections for medical device safety or car safety. Just because we have rules that prohibit discrimination in the workplace doesn't mean that we don't need rules that prohibit discrimination in voting, and we need to make sure that these protections are framed and targeted in a way that applies to the risks involved. Risk-based rules, managing the risks, is what we need to do here. So our principles are pretty straightforward. I think we have no pride of authorship. We have circulated this framework to encourage comment. We won't be offended by criticism from any quarter. That's the way we can make this framework better and eventually achieve legislation.

We hope, I hope, at least by the end of this year. And the framework is basically establishing a licensing regime for companies that are engaged in high-risk AI development; creating an independent oversight body that has expertise with AI and works with other agencies to administer and enforce the law; protecting our national and economic security to make sure we aren't enabling China or Russia and other adversaries to interfere in our democracy or violate human rights; requiring transparency about the limits and use of AI models, which at this point includes rules like watermarking, digital disclosure when AI is being used, and data access for researchers; and ensuring that AI companies can be held liable when their products breach privacy, violate civil rights, or endanger the public. Deepfakes, impersonation, hallucination. We've all heard those terms. We need to prevent those harms. And Senator Hawley and I, as former attorneys general of our states, have a deep and abiding affection for the potential enforcement powers of those officials, state officials.

But the point is there must be effective enforcement. Private rights of action as well as federal enforcement are very, very important. So let me just close by saying, before I turn it over to my colleague, we're going to have more hearings. The way to build a coalition in support of these measures is to disseminate as widely as possible the information that's needed for our colleagues to understand what's at stake here. We need to listen to the kinds of industry leaders and experts that we have before us today, and we need to act with dispatch, more than just deliberate speed. We need to learn from our experience with social media: if we let this horse get out of the barn, it will be even more difficult to contain than social media, and we are seeking to act on social media, the harms that it portends, right now as we speak. We're literally at the cusp of a new era. I asked Sam Altman when he sat where you are what his greatest fear was. I said mine, my nightmare, is the massive unemployment that could be created. That is an issue that we don't deal with directly here, but it shows how wide the ramifications may be, and we do need to deal with potential worker displacement and training. And this new era is one that portends enormous promise but also perils. We need to deal with both. I'll now go to Ranking Member Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you, Mr. Chairman. Thank you for organizing this hearing. This is now, as the chairman said, the third of these hearings that we've done. I've learned a lot in the previous couple. I think some of what we're learning about the potential of AI is exhilarating. Some of it is horrifying, and I think what I hear the chairman saying, and what I certainly agree with, is we have a responsibility here now to do our part, to make sure that this new technology, which holds a lot of promise but also peril, actually works for the American people. That it's good for working people, that it's good for families, that we don't make the same mistakes that Congress made with social media. Thirty years ago now, Congress basically outsourced social media to the biggest corporations in the world, and that has been, I would submit to you, nearly an unmitigated disaster, where we've had the biggest, most powerful corporations, not just in America but on the globe and in the history of the globe, doing whatever they want with social media, running experiments basically every day on America's kids, inflicting mental health harms the likes of which we've never seen, messing around in our elections in a way that is deeply, deeply corrosive to our way of life.

We cannot make those mistakes again. So we are here, as Senator Blumenthal said, to try to find answers and to try to make sure that this technology is something that actually benefits the people of this country. I have no doubt, with all due respect to the corporatists who are in front of us, the heads of these corporations, I have no doubt it's going to benefit your companies. What I want to make sure of is that it actually benefits the American people, and I think that's the task that we're engaged in. I look forward to this today. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thank you. I want to introduce our witnesses and then, as is our custom, I will swear them in and ask them to submit their testimony. Welcome to all of you. William Dally is NVIDIA's chief scientist. He joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. He has published over 250 papers, he holds 120 issued patents, and he's the author of four textbooks.

Brad Smith is vice chair and president of Microsoft. As Microsoft's vice chair and president, he is responsible for spearheading the company's work and representing it publicly in a wide variety of critical issues involving the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, human rights, digital safety, immigration, philanthropy and products and business for nonprofit customers. And we appreciate your being here.

Professor Woodrow Hartzog is professor of law and Class of 1960 Scholar at Boston University School of Law. He's also a non-resident fellow at the Cordell Institute for Policy in Medicine & Law at Washington University, a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, and an affiliate scholar at the Center for Internet and Society at Stanford Law School.

I could go on about each of you at much greater length with all your credentials, but suffice it to say, very impressive, and if you now stand, I'll administer the oath. Do you solemnly swear that the testimony you're about to give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you. Why don't we begin with you, Mr. Dally?

William Dally:

Chairman Blumenthal, Ranking Member Hawley, esteemed Judiciary Committee members, thank you for the privilege to testify today. I'm NVIDIA's chief scientist and head of research, and I'm delighted to discuss our artificial intelligence journey and future. NVIDIA is at the forefront of accelerated computing and generative AI, technologies that have the potential to transform industries, address global challenges, and profoundly benefit society. Since our founding in 1993, we've been committed to developing technology to empower people and improve the quality of life worldwide. Today, over 40,000 companies use NVIDIA platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive, and manufacturing to solve the world's most difficult challenges and bring new products and services to consumers worldwide. At our founding in 1993, we were a 3D graphics startup, one of dozens of startups competing to create an entirely new market for accelerators to enhance computer graphics for games.

In 1999, we invented the graphics processing unit, or GPU, which can perform a massive number of calculations in parallel. While we launched the GPU for gaming, we recognized that the GPU could theoretically accelerate any application that could benefit from massively parallel processing, and this bet paid off. Today, researchers worldwide innovate on NVIDIA GPUs. Through our collective efforts, we have made advances in AI that will revolutionize and provide tremendous benefits to society across sectors such as healthcare, medical research, education, business, cybersecurity, climate, and beyond. However, we also recognize that, like any new product or service, AI products and services have risks, and those who make, use, or sell AI-enabled products and services are responsible for their conduct. Fortunately, many applications of AI are subject to existing laws and regulations that govern the sectors in which they operate. AI-enabled services in high-risk sectors could be subject to enhanced licensing and certification requirements when necessary.

Other applications with less risk of harm may need less stringent licensing and/or regulation. With clear, stable, and thoughtful regulation, AI developers will work to benefit society while making products and services as safe as possible. For our part, NVIDIA is committed to the safe and trustworthy development and deployment of AI. For example, NeMo Guardrails is open-source software that empowers developers to guide generative AI applications to produce accurate, appropriate, and secure text responses. NVIDIA has implemented model risk management guidance, ensuring a comprehensive assessment and management of risks associated with NVIDIA-developed models. Today, NVIDIA announced that it's endorsing the White House's voluntary commitments on AI. As we deploy AI more broadly, we can and will continue to identify and address risks. No discussion of AI would be complete without addressing what is often described as frontier AI models. Some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm.

Fortunately, uncontrollable artificial general intelligence is science fiction, not reality. At its core, AI is a software program that is limited by its training, the inputs provided to it, and the nature of its output. In other words, humans will always decide how much decision-making power to cede to AI models. So long as we are thoughtful and measured, we can ensure safe, trustworthy, and ethical deployment of AI systems without suppressing innovation. We can spur innovation by ensuring that AI tools are widely available to everyone, not concentrated in the hands of a few powerful firms. I'll close with two observations. First, the AI genie is already out of the bottle. AI algorithms are widely published and available to all. AI software can be transmitted anywhere in the world at the press of a button, and many AI development tools, frameworks, and foundational models are open sourced. Second, no nation, and certainly no company, controls a choke point to AI development.

Leading US computing platforms are competing with companies from around the world. While US companies may currently be the most energy efficient, cost efficient, and easiest to use, they're not the only viable alternatives for developers abroad. Other nations are developing AI systems with or without US components, and they will offer those applications in the worldwide market. Safe and trustworthy AI will require multilateral and multi-stakeholder cooperation, or it will not be effective. The United States is in a remarkable position today, and with your help, we can continue to lead on policy and innovation well into the future. NVIDIA stands ready to work with you to ensure that the development and deployment of generative AI and accelerated computing serve the best interest of all. Thank you for the opportunity to testify before this committee.

Sen. Richard Blumenthal (D-CT):

Thank you very much. Mr. Smith.

Brad Smith:

Chairman Blumenthal, Ranking Member Hawley, members of the subcommittee, my name is Brad Smith. I'm the vice chair and president of Microsoft, and thank you for the opportunity to be here today, and I think more importantly, thank you for the work that you have done to create the framework you've shared. Chairman Blumenthal, I think you put it very well: first, we need to learn and act with dispatch. And Ranking Member Hawley, I think you offered real words of wisdom: let's learn from the experience the whole world had with social media, and let's be clear-eyed about the promise and the peril in equal measure as we look to the future of AI. I would first say, I think your framework does that. It doesn't attempt to answer every question by design, but it's a very strong and positive step in the right direction and puts the US government on the path to be a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place.

As we all think about this more, I think it's worth keeping three goals in mind. First, let's prioritize safety and security, which your framework does. Let's require licenses for advanced AI models and uses in high-risk scenarios. Let's have an agency that is independent and can exercise real and effective oversight over this category. And then let's couple that with the right kinds of controls that will ensure safety of the sort that we've already seen, I think, start to emerge in the eight White House commitments that were launched on July 21st. Second, let's prioritize, as you do, the protection of our citizens and consumers. Let's prioritize national security, always in a sense, in some ways, the first priority of the federal government, but let's think as well, as you have, about protecting the privacy, the civil rights, and the needs of kids, among many other ways of working, and ensure we get this right.

Let's take the approach that you are recommending, namely, focus not only on those companies that develop AI, like Microsoft, but also on companies that deploy AI, like Microsoft. In different categories, we're going to need different levels of obligations, and as we go forward, let's think about the connection between, say, the role of a central agency that will be on point for certain things as well as the obligations that frankly will be part of the work of many agencies and indeed our courts as well. And let's do one other thing as well. Maybe it's one of the most important things we need to do, so that we ensure that the threats that many people worry about remain part of science fiction and don't become a new reality: let's keep AI under the control of people. It needs to be safe, and to do that, as we've encouraged, there need to be safety brakes, especially for any AI application or system that controls critical infrastructure.

If a company wants to use AI to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply, we need to learn from so many other technologies that do great things but also can go wrong. We need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that's needed. Then I would say let's keep one third goal in mind as well. This is the one where I would suggest you maybe consider doing a bit more to add to the framework: let's remember the promise that this offers right now. If you go to state capitals, you go to other countries, I think there's a lot of energy being put on that. When I see what Governor Newsom is doing in California or Governor Burgum in North Dakota or Governor Youngkin in Virginia, I see them at the forefront of figuring out how to use AI to, say, improve the delivery of healthcare, advance medicine, improve education for our kids, and maybe most importantly, make government services more accessible and more efficient. Let's see if we can find a way to not only make government better by using this technology but cheaper, or use the savings to provide more and better services to our people. That would be a good problem to have the opportunity to consider. In sum, Professor Hartzog has said this is not a time for half measures. It is not. He is right. Let's go forward as you have recommended. Let's be ambitious and get this right. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you, thank you very much. And Mr. Hartzog, I read your testimony and you are very much against half measures, so we look forward to hearing what the full measures that you recommend are.

Woodrow Hartzog:

That's correct, Senator. Chair Blumenthal, Ranking Member Hawley, and members of the committee, thank you for inviting me to appear before you today. My name is Woodrow Hartzog and I'm a professor of law at Boston University. My comments today are based on a decade of researching law and technology issues, and I'm drawing from research on artificial intelligence policy I conducted as a fellow with colleagues at the Cordell Institute at Washington University in St. Louis. Committee members, up to this point, AI policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias, and promoting principles of ethics. I'd like to make one simple point in my testimony today: these approaches are vital, but they are only half measures. They will not fully protect us. To bring AI within the rule of law, lawmakers must go beyond these half measures to ensure that AI systems and the actors that deploy them are worthy of our trust.

Half measures like audits, assessments, and certifications are necessary for data governance, but industry leverages procedural checks like these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance-based business models. A checklist is no match for the staggering fortune available to those who exploit our data, our labor, and our precarity to develop and deploy AI systems, and it's no substitute for meaningful liability when AI systems harm the public. Today I'd like to focus on three popular half measures and why lawmakers must do more. First, transparency is a popular proposed solution for opaque systems, but it does not produce accountability on its own. Even if we truly understand the various parts of AI systems, lawmakers must intervene when these tools are harmful and abusive. A second laudable but insufficient approach is when companies work to mitigate bias. AI systems are notoriously biased along lines of race, class, gender, and ability.

While mitigating bias in AI systems is critical, self-regulatory efforts to make AI fair are half measures doomed to fail. It's easy to say that AI systems should not be biased. It's very difficult to find consensus on what that means and how to get there. Additionally, it's a mistake to assume that if a system is fair, then it's safe for all people. Even if we ensure that AI systems work equally well for all communities, all we will have done is create a more effective tool that the powerful can use to dominate, manipulate, and discriminate. A third AI half measure is committing to ethical principles. Ethics are important, and these principles sound impressive, but they are a poor substitute for laws. It's easy to commit to ethics, but industry doesn't have the incentive to leave money on the table for the good of society. I have three recommendations for the committee to move beyond AI half measures.

First, lawmakers must accept that AI systems are not neutral and regulate how they are designed. People often argue that lawmakers should avoid design rules for technologies because there are no bad AI systems, only bad AI users. This view of technologies is wrong. There is no such thing as a neutral technology, including AI systems. Facial recognition technologies empower the watcher; generative AI systems replace labor. Lawmakers should embrace established theories of accountability, like product liability's theory of defective design or consumer protection's theory of providing the means and instrumentalities of unfair and deceptive conduct. My second recommendation is to focus on substantive laws that limit abuses of power. AI systems are so complex and powerful that regulating them can seem like trying to regulate magic, but the broader risks and benefits of AI systems are not so new. AI systems bestow power. This power is used to benefit some and harm others.

Lawmakers should borrow from established legal approaches to remedying power imbalances to require broad, non-negotiable duties of loyalty, care, and confidentiality, and implement robust bright-line rules that limit harmful secondary uses and disclosures of personal data in AI systems. My final recommendation is to encourage lawmakers to resist the idea that AI is inevitable. When lawmakers go straight to putting up guardrails, they fail to ask questions about whether particular AI systems should exist at all. This dooms us to half measures. Strong rules would include prohibitions on unacceptable AI practices like emotion recognition, biometric surveillance in public spaces, predictive policing, and social scoring. In conclusion, to avoid the mistakes of the past, lawmakers must make the hard calls. Trust and accountability can only exist where the law provides meaningful protections for humans, and AI half measures will certainly not be enough. Thank you, and I welcome your questions.

Sen. Richard Blumenthal (D-CT):

Thank you, Professor Hartzog, and I take very much to heart your imploring us against half measures. I think, listening to both Senator Hawley and myself, you have a sense of our boldness and initiative, and we welcome all of the specific ideas, most especially, Mr. Smith, your suggestion that we can be more engaged and proactive at the state level or federal level in making use of AI in the public sector. But taking the thought that Professor Hartzog has so importantly introduced, that AI technology in general is not neutral: how do we safeguard against the downsides of AI, whether it's discrimination or surveillance? Will this licensing regime and oversight entity be sufficient, and what kind of powers do we need to give it?

Brad Smith:

Well, I would say first of all, I think that a licensing regime is indispensable in certain high-risk scenarios. It won't be sufficient to address every issue, but it's a critical start, because I think what it really ensures, especially say for the frontier models, the most advanced, as well as certain applications that are highest risk, is that frankly, you do need a license from the government before you go forward, and that is real accountability. You can't drive a car until you get a license. You can't make the model or the application available until you pass through that gate. I do think that it would be a mistake to think that one single agency or one single licensing regime would be the right recipe to address everything, especially when we think about the harms that we need to address. And that's why I think it's equally critical that every agency in the government that is responsible for the enforcement of the law and the protection of people's rights master the capability to assess AI. I don't think we want to move the approval of every new drug from the FDA to this agency, so by definition, the FDA is going to need, for example, to have the capability to assess AI. That would be just one of several additional specifics that I think one can think about.

Sen. Richard Blumenthal (D-CT):

I think that's a really important point, because AI is going to be used in making automobiles, making airplanes, making toys for kids. So the FAA, the FDA, the Federal Trade Commission, the Consumer Product Safety Commission, they all have presently existing rules and regulations, but there needs to be an oversight entity that uses some of those rules and adapts them and adopts new rules so that those harms can be prevented. And there are a lot of different names we could call that entity. Connecticut now has an office of artificial intelligence. You could use different terms, but I think the idea is that we want to make sure that the harms are prevented through a licensing regime focused on risk. Mr. Dally, you said that autonomous AI, AI beyond human control, is science fiction, but science fiction has a way of coming true, and I wonder whether that is a potential fear. Certainly it is one that's widely shared at the moment, whether it's fact-based or not. It is in the reality of human perception. And as you well know, trust and confidence are very, very important. So I wonder how we counter the perception and prevent the science fiction from becoming reality.

William Dally:

So what I said is that artificial intelligence that gets out of control is science fiction, not autonomous AI. We use artificial intelligence, for example, in autonomous vehicles all the time. I think the way we make sure that we have control over AI of all sorts is, for any really critical application, keeping a human in the loop. AI is a computer program. It takes an input, it produces an output, and if you don't connect up something that can cause harm to that output, it can't cause that harm. And so anytime that there is some grievous harm that could happen, you want a human being between the output of that AI model and the causing of harm. And so I think as long as we're careful about how we deploy AI to keep humans in the critical loops, I think we can assure that the AIs won't take over and shut down our power grid or cause airplanes to fall out of the sky. We can keep control over them.

Sen. Richard Blumenthal (D-CT):

Thank you. I have a lot more questions, but we're going to adhere to five minute rounds. We have a very busy day, as you know, with votes as a matter of fact. And I'll turn to Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you, Mr. Chairman. Thanks again to the witnesses for being here. I want to particularly thank you, Mr. Smith. I know that there's a group of your colleagues, your counterparts in industry, who are gathering I think tomorrow, and that is what it is, but I appreciate you being willing to be here in public and answer questions in front of the press here. And this is open to anybody who wants to see it, and I think that's the way that this ought to be done. I appreciate you being willing to do that. You mentioned protecting kids. I just want to start with that, if I could. I want to ask you a little bit about what Microsoft has done and is doing. Kids use your Bing chatbot. Is that fair to say?

Brad Smith:

Yes. We have certain age controls, so we don't let a child of just any age use it. But yes, in general, it is possible for children to register if they're of a certain age.

Sen. Josh Hawley (R-MO):

And the age is?

Brad Smith:

I'm trying to remember. As I sit here, I'll get...

Sen. Josh Hawley (R-MO):

I think it's 13, does that sound right?

Brad Smith:

Maybe, I was going to say 12 or 13. I'll take the 13.

Sen. Josh Hawley (R-MO):

Do you have some sort of age verification? I mean, how do we know what age? I mean, obviously the kid can put in whatever age he or she wants to. Is there some form of age verification for Bing?

Brad Smith:

We do have age verification systems that then involve typically getting permission from a parent, and we use this across our services, including for gaming. I don't remember off the top of my head exactly how it works, but I'd be happy to get you the details.

Sen. Josh Hawley (R-MO):

Great. My impression is that Bing Chat doesn't really have an enforceable age verification. I mean, there's no way really to know, but again, you correct me if that's wrong. Let me ask you this. What happens to all of the information that our hypothetical 13 year old is putting into the tool as it's having this chat? I mean, they could be chatting about anything and going back and forth on any number of subjects. What happens to that info that the kid puts in?

Brad Smith:

Well, the most important thing I would say first is that it all is done in a manner that protects the privacy of children.

Sen. Josh Hawley (R-MO):

And how is that?

Brad Smith:

Well, we follow the rules in COPPA, which, as you know, exist to protect children's online privacy. It forbids using it for tracking, it forbids its use for advertising or for other things, and it seeks to put very tight controls around the use and the retention of that information. The second thing I would just add to that is, in addition to protecting privacy, we are hyper-focused on ensuring that in most cases, people of any age, but especially children, are not able to use something like Bing chat in ways that would cause harm to themselves or to others.

Sen. Josh Hawley (R-MO):

And how do you do that?

Brad Smith:

We basically have a safety architecture that we use across the board. Think about it like this: there are two things around a model. The first is called a classifier, so that if somebody asks, how can I commit suicide tonight, how can I blow up my school tomorrow, that hits a classifier that identifies a class of questions or prompts or issues. And then second, there's what we call meta prompts, where we intervene so that the question is not answered. If someone asks how to commit suicide, we typically would provide a response that encourages someone to get mental health assistance and counseling and tells them how. If somebody wants to know how to build a bomb, it says, no, you cannot use this to do that. And that fundamental safety architecture is going to evolve. It's going to get better. But in a sense, it's at the heart, if you will, of both what we do and I think the best practices in the industry. And I think part of what this is all about, what we're talking about here, is how we take that architectural element and continue to strengthen it.

Sen. Josh Hawley (R-MO):

Very good. That's helpful. Let me ask you about the information, back to the kids' information for a second. Is it stored in the United States? Is it stored overseas?

Brad Smith:

If the child is in the United States, the data is stored in the United States. That's true not only for children, it's for adults as well.

Sen. Josh Hawley (R-MO):

And who has access to that data?

Brad Smith:

The child has access. The parents may or may not have access. Typically we give…

Sen. Josh Hawley (R-MO):

In what circumstances would the parents have access?

Brad Smith:

I would have to go get you the specifics on that. Our general principle is this, and this is something we've implemented in the United States even though it's not legally required in the United States; it is legally required, as you may know, in Europe. People, we think, have a right to find out what information we have about them. They have the right to see it, they have the right to ask us to correct it if wrong, and they have the right to ask us to delete it if that's what they want us to do.

Sen. Josh Hawley (R-MO):

And you do that? If they ask you to delete it, you delete it?

Brad Smith:

We better. Yes. That's our promise. And we do a lot to comply with that.

Sen. Josh Hawley (R-MO):

Let me just ask about, and I have a lot more questions, I'm going to try to adhere to the time limit. Is it five minutes, Mr. Chairman? We'll have a second round. Alright, that's great news for us. Not such great news for the witnesses. Sorry, our witnesses. Let me just, before I leave this subject, ask one last thing about the kids' personal data and where it's stored. I'm asking you this, as I'm sure you can intuit, because we've seen other technology companies in the social media space who have major issues about where data is stored and major access issues. And I'm thinking of, it shouldn't be hard to guess, I'm thinking in particular of China, where we've seen other social media companies who say, oh, well, America's data is stored in America, but guess what? Lots of people in other countries can access that data. So is that true for you, Mr. Smith? Is a child's data that they've entered into the Bing chat, that's stored in the United States, you just said, if they're an American citizen, can that be accessed in, let's say, China by a Microsoft China-based engineer?

Brad Smith:

I don't believe so. I'd love to go back and just confirm that, but I don't believe so.

Sen. Josh Hawley (R-MO):

Would you be able to get that for me for the record? Thank you. Sure. Okay. I'll have lots more questions later. Thanks, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hawley. Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

Thank you very much. Thank you, all of you. I think I will lead with some elections questions since I'm the chair of the Rules Committee. Mr. Smith, in your written testimony, you talked about how watermarks could be helpful, disclosure of AI material. As you know, and we've talked about this, I have a bill that I lead, that Representative Clarke leads in the House, to require a disclaimer and some kind of mark on AI-generated ads. I think we have to go further; we'll get to that in a minute, Professor. But could you talk about what you mean by this in your written testimony: "The health of democracy and meaningful civic discourse will undoubtedly benefit from initiatives that help protect the public against deception or fraud facilitated by AI-generated content"?

Brad Smith:

Absolutely. And here I do think things are moving quickly, both in a positive and a worrisome direction, in terms of what we're seeing. On the positive side, I think you're seeing the industry come together. You're seeing a company like Adobe, I think, exercise real leadership, and there's a recipe that I see emerging. I think it starts with a first principle: people should have the right to know if they're getting a phone call from a computer, from AI, if there's content coming from an AI system rather than a human being. We then need to make that real with legal rights that back it up. We need to create what's called a provenance system, watermarking for legitimate content, so that it can't be altered easily without our detection to create a deepfake. We need to create an effort that brings the industry and, I think, governments together so we know what to do and there's a consensus when we do spot deepfakes, especially, say, even deepfakes that have altered legitimate content.

Sen. Amy Klobuchar (D-MN):

Thank you.

Brad Smith:

So that would be the first.

Sen. Amy Klobuchar (D-MN):

And let's get to that, hot off the press. Senator Hawley and I have introduced our bill today with Senator Collins, who led the Electoral Reform Act, as we know, and Senator Coons, to ban the use of deceptive AI-generated content in elections. So this would work in concert with some watermark system, but when you get into the deception, where it is fraudulent AI-generated content pretending to be the elected official or the candidate when it is not, and we've seen this used against people on both sides of the aisle, which is why it was so important that we be bipartisan in this work. And I want to thank him for his leadership, not only on the framework, but also on the work that we're doing.

And I guess I'll go to you, Senator Hartzog. Mr. Hartzog, I just promoted you, maybe. I mean, it's very debatable. In your testimony, you advocate for some outright prohibitions, which we're talking about here. Now we do have, of course, a constitutional exception for satire and humor, because we love satire so much, the senators do. Just kidding. But could you talk about why you believe there has to be some outright ban of misleading AI content related to federal candidates and political ads? Talk about that.

Woodrow Hartzog:

Sure, absolutely. Thank you for the question. Of course, keeping in mind free expression, constitutional protections that would apply to any sort of legislation, I do think that bright-line rules and prohibitions around such deceptive ads are critical, because we know that procedural walkthroughs, as I said in my testimony, often give the veneer of protection without actually protecting us. And so to outright prohibit these practices, I think, is really important. And I would even go potentially a step further and think about ways in which we can prohibit not just those that we would consider to be deceptive, but practices that we would consider even abusive, that leverage our internal limitations and our desire to believe or want to believe things against us. And there's a body of law that sort of runs alongside unfair and deceptive trade practices around abusive trade

Sen. Amy Klobuchar (D-MN):

Practices. Okay. Alright. Mr. Dally, thinking of that, and I've talked to Mr. Smith about this as well: AI used as a scam. I had someone that I know well, who has a kid in the Marines who's deployed somewhere, they don't even know where it is, and a fake voice calls them, asks for money to be sent somewhere in Texas, I believe. Could you talk about what companies do, and I appreciate the work you've done, to ensure that AI platforms are designed so they can't be used for criminal purposes? That's got to be part of the work that we do, because it's not just scams against elected officials.

William Dally:

I think the best measures against deepfakes, and Mr. Smith mentioned it in his testimony, is the use of provenance and authentication systems where you can have authentic images, authentic voice recordings signed by the device, whether it's a camera or an audio recorder that has recorded that voice. And then when it's presented, it can be authenticated as being genuine and not a deep fake. That's sort of the flip side of watermarks, which would require that anything that is synthetically generated be identified as such. And those two technologies in combination can really help people sort out along with a certain amount of public education and make sure people understand what the technology is capable of and are on guard for that. But it can help them sort out what is real from what is fake.

Sen. Amy Klobuchar (D-MN):

Okay. Last, Mr. Smith, back to where I started here. Some AI platforms use local news content without compensating journalists and papers, including by using their content to train AI algorithms. The Journalism Competition and Preservation Act, a bill I have with Senator Kennedy, would allow local news organizations to negotiate with online platforms, including generative AI platforms that use their content without compensation. Could you talk about the impacts that AI could have on local journalism? You talked in your testimony about the importance of investment in quality journalism, but what we're getting at is, we've got to find a way to make sure that the people who are actually doing the work are compensated, in many ways, but also in journalism. Mr. Smith.

Brad Smith:

I would just say three quick things. Number one, look, we need to recognize that local journalism is fundamental to the health of the country and the electoral system and it's ailing. So we need to find ways to preserve and promote it. Number two, generally I think we should let local journalists and publications make decisions about whether they want their content to be available for training or grounding and the like. And that's a big topic and it's worthy of more discussion. And we should certainly let them, in my view, negotiate collectively because that's the only way local journalism is really going to negotiate effectively.

Sen. Amy Klobuchar (D-MN):

I appreciate your words. You want to add one thing? I'm going to get in trouble from Senator Blumenthal here. Go ahead.

Brad Smith:

No, but then I will just say, and there are ways that we can use AI to help local journalists, and we're interested in that too. So let's add that to the list.

Sen. Amy Klobuchar (D-MN):

Okay, very good. And thank you again, both of you. I talked about Senator Hawley's work, but thank you, Senator Blumenthal, for your leadership.

Sen. Richard Blumenthal (D-CT):

Thank you very much. Thank you, Senator Klobuchar. Senator Hirono.

Sen. Mazie Hirono (D-HI):

Thank you, Mr. Chairman. Mr. Smith, it's good to see you again. So every time we have one of these hearings, we learn something new. But the conclusion I've drawn is that AI is ubiquitous. Anybody can use AI; it can be used in any endeavor. So when I hear you folks testifying about how we shouldn't be taking half measures, I'm not sure what that means. What does it mean not to take half measures on something as ubiquitous as AI, where there are other regulatory schemes that can touch upon those endeavors that use AI? So there's always a question I have, when we address something as complex as AI, of whether there are unintended consequences that we should care about. Would you agree? Anybody? Mr. Smith?

Brad Smith:

I would absolutely agree. I think we have to define what's a full measure and what's a half measure. But I bet we can all agree that half measures are not good enough.

Sen. Mazie Hirono (D-HI):

Well, that is the thing, how to recognize, going forward, what is actually going to help us with this powerful tool. So I have a question for you, Mr. Smith. It is a powerful tool that can be used for good, or it can also be used to spread a lot of disinformation and misinformation. And that happened during the disaster on Maui, and Maui residents were subject to disinformation, some of it coming from foreign governments, i.e., Russia, looking to sow confusion and distrust, including: don't sign up for FEMA because they cannot be trusted. And I worry that with AI, such information will only become more rampant with future disasters. Do you share my concern about misinformation in the disaster context and the role AI could play? And what can we do to prevent these foreign entities from pushing out AI disinformation to people who are very vulnerable?

Brad Smith:

I absolutely share your concern, and I think there are two things we need to think about doing. First, let's use the power of AI, as we are, to detect these kinds of activities when they're taking place, because it can enable us to go faster, as it did in that instance, where Microsoft, among others, used AI and other data technologies to identify what people were doing. Number two, I just think we need to stand up as a country and with other governments and with the public and say there need to be some clear red lines in the world today, regardless of how much else or what else we disagree about. When you think about what happens typically in the wake of an earthquake or a hurricane or a tsunami or a flood, the world comes together, people are generous, they help provide relief. And then let's look at what happened after the fire in Maui. It was the opposite of that. We had some people, not necessarily directed by the Kremlin, but people who regularly spread Russian propaganda, trying to discourage the people of Lahaina from going to the agencies that could help them. That's inexcusable. And we saw what we believe is Chinese-directed activity trying to persuade the world, in multiple languages, that the fire was caused by the United States government itself using a meteorological weapon. Those are the things that we should all try to bring the international community together and agree are off limits.

Sen. Mazie Hirono (D-HI):

Well, how do we identify that this is even occurring, that there is China- or Russia-directed misinformation going on? How do we, I didn't know this was happening, by the way. And even in the Energy Committee, on which I sit, we had people testify, and regarding the Maui disaster, I asked one of the testifiers whether he was aware that there had been disinformation put out by a foreign government in that example. He said yes, but I don't know that the people of Maui recognized that that was going on. So how do we, one, even identify that that's going on, and then, two, come forward and say this is happening and name names, identify which country it is that's spreading this kind of disinformation and misinformation?

Brad Smith:

I think we have to think about two things. First, I think we at a company like Microsoft have to lean in, and we are, with data, with infrastructure, with experts, and real-time capability to spot these threats, find the patterns, and reach well-founded conclusions. And then the second thing, this is the harder thing, this is where it's going to need all of your help: what do we do if we find that a foreign government is deliberately trying to spread false information next year in a Senate or presidential campaign about a candidate? How do we create the room so that information can be shared and people will consider it? You all, with this, the most important word in your framework is bipartisan. How do we create the bipartisan framework so that when we find this, we create a climate where people can listen? I think we have to look at both of those parts of the problem together.

Sen. Mazie Hirono (D-HI):

Well, I hope we can do that. And Mr. Chairman, if you don't mind, one of the concerns about AI from the worker standpoint is that their jobs will be gone. And Professor Hartzog, you mentioned that generative AI can result in job losses. And for both you and Mr. Smith, what are the kinds of jobs that will be lost to AI?

Woodrow Hartzog:

That's an excellent question. It's difficult to project that into the future, but I would start by saying it's not necessarily the jobs that can be automated effectively, but the jobs that those who control the purse strings think could be automated effectively. And if it gets to the point where it appears as though they could be, I imagine you'll see industry move in that direction.

Sen. Mazie Hirono (D-HI):

Mr. Smith, I think you mentioned in your book, which I am listening to, that things like ordering something at a drive-through, that those jobs could be gone through AI.

Brad Smith:

Yeah. Four years ago, we published our book, with my co-author behind me, and we asked, what's the first job that we think might be eliminated by AI? We don't have a crystal ball, but I bet it's taking an order in the drive-through of a fast food restaurant. You're not really establishing a rapport with the human being. All the person does is listen and type into a computer what you're saying. So when AI can hear as well as a person, it can enter that in. And indeed, I was struck a few months ago, I think it was announced that they were starting to consider whether they would automate, with AI, the drive-through. I think there's a lesson though in that, and it should give us both pause, but I think a little bit of optimism. There's no creativity involved in a drive-through, at least listening and entering an order. There are so many jobs that do involve creativity. So the real hope, I think, is to use AI to automate the routine, maybe even the work that's boring, to free people up so they can be more creative, so they can focus more on paying attention to other people and helping them. And if we just apply that recipe more broadly, I think we might put ourselves on a path that's more promising.

Sen. Mazie Hirono (D-HI):

Thank you. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thank you, Senator Hirono. Senator Kennedy.

Sen. John Kennedy (R-LA):

Thank you, Mr. Chairman. And thank you for calling this hearing. Mr. Dally. Am I saying your name correctly, sir? That's correct, yes. Mr. Dally, if I am a recipient of content created by generative AI, do you think I should have a right to know that that content was generated by a robot?

William Dally:

Yes, I think you do. I think the details would depend on the context, but in most cases, I think, for me or anybody else, if I received something, I'd like to know: is this real or was this generated?

Sen. John Kennedy (R-LA):

Mr. Smith?

Brad Smith:

Generally, yes. What I would say is, if you're listening to an audio, if you're watching a video, if you're seeing an image and it was generated by AI, I think people have a right to know. The one area where I think there's a nuance is if you're using AI to, say, help you write something. Maybe it's helping you write the first draft. Just as, I don't think any of us would say that when our staff helps us write something, we are obliged to give the speech and say, now I'm going to read the paragraph that my staff wrote. You make it your own. And I think the written word is a little more complex, so we need to think that through. But as a broad principle, I agree with that principle.

Sen. John Kennedy (R-LA):

Professor.

Woodrow Hartzog:

There are situations where you probably wouldn't expect to be dealing with the product of generative AI, and in those instances...

Sen. John Kennedy (R-LA):

Well, that's the problem.

Woodrow Hartzog:

Right? But as times change, it's possible that our expectations change.

Sen. John Kennedy (R-LA):

But as a principle, do you think that people should have a right to know when they're being fed content from generative AI?

Woodrow Hartzog:

If they, well, it's, as I tell my students, it depends on the context. Generally speaking, if you're vulnerable to generative AI, then the answer's absolutely yes.

Sen. John Kennedy (R-LA):

What do you mean? If you're vulnerable?

Woodrow Hartzog:

So there may be situations...

Sen. John Kennedy (R-LA):

I'm just looking for a straight answer. No disrespect.

Woodrow Hartzog:

No, not at all.

Sen. John Kennedy (R-LA):

I kind of like two things: breakfast food and straight answers. I love them. If a robot is feeding me information, and I don't know it's a robot, am I entitled to know it's a robot as a consumer? Pretty straight up.

Woodrow Hartzog:

I think the answer is yes in a lot of contexts.

Sen. John Kennedy (R-LA):

Alright, let's start back with Mr. Dally. Am I entitled to know who owns that robot and where that content came from? I know it came from a robot, but somebody had to goose the robot to make it give me that content. Am I entitled as a consumer to know who owns the robot?

William Dally:

I think that's a harder question. That depends on the particular context. I think if somebody is feeding me a video and it's been identified as being generated by AI, I now know that it's generated, it's not real. If it's being used, for example, in a political campaign, then I would want to know who.

Sen. John Kennedy (R-LA):

But let me stop you there. Let's suppose I'm looking at a video and it was generated by a robot. Would it make any difference to you whether that robot was owned by, let's say, President Biden or President Trump? Don't you want to know, in evaluating the content, who owns the robot and who prompted it to give you this information?

William Dally:

I would probably want to know that. I don't know that I would feel it would be required for me to know that.

Sen. John Kennedy (R-LA):

How about you, Mr. Smith?

Brad Smith:

I'm generally a believer in letting people know not only that it's generated by a computer, but who owns the program that's doing it. The only qualification I would offer, and it's something you all should think about and would know better than me, there are certain areas in political speech where one has to decide whether you want people to act with anonymity. The Federalist Papers were first published under a pseudonym, and I think in the world today, I'd rather have everybody know who's speaking.

Sen. John Kennedy (R-LA):

Professor.

Woodrow Hartzog:

I'm afraid I'm going to disappoint you again, Senator, with a not straight answer, but I agree.

Sen. John Kennedy (R-LA):

How do you feel about breakfast food?

Woodrow Hartzog:

Right. I am pro breakfast food. So we agree on that. I agree with Mr. Smith. I think that there are circumstances where you'd want to preserve anonymous speech, and there are some where you absolutely would want to know who.

Sen. John Kennedy (R-LA):

Okay, well, I don't want to go over too much. Obviously this is an important subject, and the extent to which I think, let me rephrase that. The extent of most senators' knowledge in terms of the nuances of AI is a general impression that AI has extraordinary potential to make our lives better, if it doesn't make our lives worse first, and that's about the extent of it. In my judgment, we are not nearly ready to be able to craft a bill that looks like somebody designed it on purpose. I think we're more likely to take baby steps, and I asked you those questions predictably, because Senator Schatz and I have a bill. It's very simple. It says, if you own a robot that's going to spit out artificial content to consumers, consumers have the right to know that it was generated by a robot and who owns the robot. And I think that's a good place to start. But again, I want to thank my colleagues here, my chair and my ranking member. They know a lot about this subject and I want to hear their questions too. Thank you all for coming.

Sen. Josh Hawley (R-MO):

Thank you. Senator Kennedy, on behalf of the chairman, we're going to start a second round, and I guess I'll go first since I'm the only one sitting here. It's bad news for the witnesses.

Sen. John Kennedy (R-LA):

Well, I came to hear you look.

Sen. Josh Hawley (R-MO):

I'm sure. Yeah yeah.

Sen. John Kennedy (R-LA):

I mean it.

Sen. Josh Hawley (R-MO):

Let me come back to this. We were talking about kids and kids' privacy and safety. Thanks for the information you're going to get me. Let me give you an opportunity, though, to maybe make a little news today in the best possible way. Thirteen, the age limit for Bing Chat. That's such a young age. I mean, listen, I've got three kids at home: 10, 8, and 2. I don't want my kids to be interacting with chatbots anytime soon at all. But 13 is so incredibly young. Would you commit today to raising that age? And would you commit to a verifiable age verification procedure, such that parents can know, can have some sense of confidence, that their 12-year-old is not just saying to Bing, yeah, yeah, yeah, I'm 13. Yeah, I'm 15. Sure. Go right on ahead. Now let's get into a back and forth with this robot. As Senator Kennedy said, would you commit to those things on behalf of child safety today?

Brad Smith:

Look, as you can imagine, the teams that work at Microsoft let me go out and speak, but they probably have one principle they want me to remember: don't go out and make news without talking to them first.

Sen. Josh Hawley (R-MO):

But you're the boss.

Brad Smith:

Yeah. Let's just say wisdom is important and most mistakes you make when you make them by yourself. I'm happy to go back and talk more about what the right age should be.

Sen. Josh Hawley (R-MO):

Don't you think 13 is awfully low though?

Brad Smith:

It depends for what, actually.

Sen. Josh Hawley (R-MO):

To interact with a robot who could be telling you to do any number of things. Don't you think that's awfully young?

Brad Smith:

Not necessarily. Let me describe...

Sen. Josh Hawley (R-MO):

Really.

Brad Smith:

...this scenario. When I was in Seoul, Korea, a couple of months ago, we met with the Deputy Prime Minister, who's also the Minister of Education, and they're trying to create, for three topics that are very objective, math, coding, and learning English, a digital textbook with an AI tutor. So that if you're doing math and you don't understand a concept, you can ask the AI tutor to help you solve the problem. And by the way, I think it's useful not only for the kids, I think it's useful for the parents, and I think it's good. Let's just say a 14-year-old, or whatever the age is for eighth-grade algebra. Most parents, I found when my kids were in eighth-grade algebra and I tried to help them with their homework, they didn't believe I ever made it through the class. I think we want kids, in a controlled way with safeguards, to use something that way.

Sen. Josh Hawley (R-MO):

But we're not talking here about tutors. I'm talking about your AI chat, Bing Chat. I mean, famously, earlier this year, your chatbot, and you had a technology writer for the New York Times who wrote about this, I'm looking at the article right now, your chatbot was urging this person to break up his marriage.

Brad Smith:

I'm not sure.

Sen. Josh Hawley (R-MO):

Do we want 13 year olds to be having those conversations?

Brad Smith:

No, of course not.

Sen. Josh Hawley (R-MO):

Okay. Well, will you commit to raising the age?

Brad Smith:

I actually don't want Bing Chat to break up anybody's marriage.

Sen. Josh Hawley (R-MO):

I don't either.

Brad Smith:

There may be some exceptions, but we're not going to make the decision based on the exception. No, but it goes to the fact that we have multiple tools. Age is a very red line.

Sen. Josh Hawley (R-MO):

It is a very red line. That's why I like it.

Brad Smith:

And my point is, there is a safety architecture that we can apply to Bing.

Sen. Josh Hawley (R-MO):

But your safety architecture didn't stop the chatbot from having this discussion with an adult, in which it said: you don't really love your wife, your wife isn't good for you, she doesn't really love you. Now this was with an adult. Can you imagine the kind of things that your chatbot would say to a 13-year-old? I mean, I'm serious about this. Do you really think this is a good idea?

Brad Smith:

Yeah, but look, wait, wait a second. Let's put that in context. At a point where the technology had been rolled out for only 20,000 people, a journalist for the New York Times spent two hours on the evening of Valentine's Day ignoring his wife and interacting with a computer trying to break the system, which he managed to do. We didn't envision that use and by the next day we had fixed it.

Sen. Josh Hawley (R-MO):

Are you telling me that you've envisioned all the questions that 13 year olds might ask and that I as a parent should be absolutely fine with that? Are you telling me that I should trust you in the same way that the New York Times writer did?

Brad Smith:

What I am saying is I think as we go forward, we have an increasing capability to learn from the experience of real people and put the guardrails and safety.

Sen. Josh Hawley (R-MO):

Right, that's what worries me. That's exactly what worries me: what you're saying is we have to have some failures. I don't want 13-year-olds to be your guinea pig. I don't want 14-year-olds to be your guinea pig. I don't want any kids to be your guinea pig. I don't want you to learn from their failures. You want to learn from the failures of your scientists? Go right ahead. Let's not learn from the failures of America's kids. This is what happened with social media. We had social media companies that made billions of dollars giving us a mental health crisis in this country. They got rich, the kids got depressed and committed suicide. Why would we want to run that experiment again with AI? Why not raise the age? You can do it.

Brad Smith:

First of all, we shouldn't want anybody to be a guinea pig, regardless of age or anything else.

Sen. Josh Hawley (R-MO):

Good. Well, let's rule kids out right here, right today, right now.

Brad Smith:

But let's also recognize that technology does require real users. What's different about this technology, and what is so fundamentally different in my view from the social media experience, is that we not only have the capacity, but we have the will, and we are applying that will to fix things in hours and days.

Sen. Josh Hawley (R-MO):

Well, yeah. To fix things after the fact. I mean, I'm sorry, it just sounds to me like you're boiling it down to saying, trust us, we're going to do well with this. I'm just asking you why we should trust you with our children.

Brad Smith:

I'm not asking for trust, although I hope we will work every day to earn it. That's why you have a licensing obligation.

Sen. Josh Hawley (R-MO):

There isn't a licensing obligation right now.

Brad Smith:

That's why in your framework, in my view...

Sen. Josh Hawley (R-MO):

Well, sure. But I'm asking you, as the president of this company, to make a commitment now for child safety and protection. To say, you know what, you could tell every parent in America now: Microsoft is going to protect your kids. We will never use your kids as a science experiment, ever. Never. And therefore, we're not going to target your kids, and we're not going to allow your kids to be used by our chatbots as a source of information if they're younger than 18.

Brad Smith:

But with all due respect, I think there's two things that you're talking about, and I think we're...

Sen. Josh Hawley (R-MO):

I'm just talking about protecting kids. This is very simple.

Brad Smith:

Yeah, no, we don't want to use kids as a source of information and monetize it, et cetera. But I'm equally of the view that I don't want to cut off an eighth grader today from the right or ability to use this tool that will help them learn algebra or math in a way that they couldn't a year ago.

Sen. Josh Hawley (R-MO):

Yeah. Well, with all due respect, it wasn't algebra or math that your chatbot was recommending or talking about when it was trying to break up some reporter's marriage.

Brad Smith:

Of course now, but now we're mixing things in.

Sen. Josh Hawley (R-MO):

No, we're not. We're talking about your chatbot. We're talking about Bing Chat.

Brad Smith:

Of course. We're talking about Bing Chat, and I'm talking about the protection of children and how we make technology better. And yes, there was that episode back in February on Valentine's Day. Six months later, if that journalist tries to do the same thing again, it will not happen. Okay.

Sen. Josh Hawley (R-MO):

You want me to be done, Senator?

Sen. Amy Klobuchar (D-MN):

I just don't want to miss my vote.

Brad Smith:

There's other witnesses.

Sen. Amy Klobuchar (D-MN):

I don't want to miss my vote.

Sen. Josh Hawley (R-MO):

Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

Oh, you are very kind. Thank you. Some of us haven't voted yet. So I wanted to turn to you, Mr. Dally. In March, NVIDIA announced a partnership with Getty Images to develop models that generate new images using Getty's image library. Importantly, this partnership provides royalties to content creators. Why was it important to the company to partner with and pay for the use of Getty's image library in developing generative AI models?

William Dally:

Well, at NVIDIA we believe in respecting people's intellectual property rights and the rights of the photographers who produce the images that our models are trained on and who are expecting income from those images; we didn't want to infringe on that. So we did not just scrape a bunch of images off the web to train our model. We partnered with Getty, and we trained our model, Picasso, and when people use Picasso to generate images, the people who provided the original content get remunerated. And we see this as a way of going forward in general, where people who are providing the IP that trains these models should benefit from the use of that IP.

Sen. Amy Klobuchar (D-MN):

Okay. And today the White House announced eight more companies that are committing to take steps to move towards safe, secure, and transparent development of AI, and NVIDIA is one of those companies. Could you talk about the steps that you've taken and what steps you plan to take to foster ethical and responsible development of AI?

William Dally:

So we've done a lot already. We have implemented our NeMo Guardrails, so we can basically put guardrails around our own large language model, NeMo, so that inappropriate prompts to the model don't get a response. If the model inadvertently were to generate something that might be considered offensive, that is detected and intercepted before it can reach the user of the model. We have a set of guidance that we provide for all of our internally generated models and how they should be appropriately used. We provide cards that say where the model came from and what data set it is trained on, and then we test these models very thoroughly, and the testing depends upon the use. So for certain models, we test them for bias, right? We want to make sure that when you refer to a doctor, it doesn't automatically assume it's a him. We test them in certain cases for safety. We have a variant of our NeMo model called BioNeMo that's used in the medical profession. We want to make sure that the advice that it gives is safe. And there are a number of other measures; I could give you a full list if you wanted.
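
The guardrail pattern Dally describes, in which inappropriate prompts are refused before they reach the model and flagged output is intercepted before it reaches the user, can be illustrated with a minimal, self-contained sketch. The blocked terms and the stand-in model below are hypothetical examples, not NVIDIA's configuration; NVIDIA's open-source NeMo Guardrails library implements the same idea with configurable policies.

```python
# Minimal sketch of input/output guardrails around a text model.
# The blocked terms and the stand-in model are hypothetical examples.
BLOCKED_PROMPT_TERMS = {"build a weapon", "bypass safety"}
BLOCKED_OUTPUT_TERMS = {"offensive-term"}

def guarded_generate(prompt: str, model) -> str:
    lowered = prompt.lower()
    # Input rail: refuse inappropriate prompts before they reach the model.
    if any(term in lowered for term in BLOCKED_PROMPT_TERMS):
        return "Sorry, I can't help with that request."
    response = model(prompt)
    # Output rail: intercept flagged content before it reaches the user.
    if any(term in response.lower() for term in BLOCKED_OUTPUT_TERMS):
        return "The generated response was withheld by a safety filter."
    return response

# Example with a trivial stand-in model:
echo_model = lambda p: f"Here is a helpful answer to: {p}"
print(guarded_generate("How do I build a weapon?", echo_model))
print(guarded_generate("Explain photosynthesis.", echo_model))
```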

Sen. Amy Klobuchar (D-MN):

Very good. Thank you. Professor Hartzog, do you think Congress should be more focused on regulating the inputs and design of generative AI or focus more on outputs and capabilities?

Woodrow Hartzog:

Oh, can't the answer, Senator, be both?

Sen. Amy Klobuchar (D-MN):

Of course it can.

Woodrow Hartzog:

Certainly. I think the area that has been ignored up to this point has been the design of and inputs to a lot of these tools. And so, to the extent that that area could use some revitalization, I would encourage inputs and outputs, design and uses.

Sen. Amy Klobuchar (D-MN):

Okay. And I suggest you look at these election bills, because as we've all been talking about, I think we have to move quickly on those, and the fact that it's bipartisan has been a very positive thing. So, absolutely. I wanted to thank Mr. Smith for wearing a purple Vikings tie. I know that maybe was an AI-generated message you got, to know that this would be a smart move with me after their loss on Sunday. I will remind you they're playing Thursday night.

Brad Smith:

As a native of Wisconsin, I can assure you it was an accident.

Sen. Amy Klobuchar (D-MN):

Very good. Alright, thank you, all of you. We have a lot of work to do. Thanks, Senator Blackburn.

Senator Marsha Blackburn (R-TN):

Thank you, Mr. Chairman. Mr. Smith, I want to come to you first and talk about China and the Chinese Communist Party, and the way they have gone about this. We've seen a lot of it on TikTok: they have these influence campaigns that they are running to influence certain thought processes among the American people. I know you all just did a report on China. You covered some of the disinformation, some of the campaigns. So talk to me a little bit about how Microsoft, but then the industry as a whole, can combat some of these campaigns.

Brad Smith:

I think there's a couple of things that we can think more about and do more about. The first is, we all should want to ensure that our own products and systems and services are not used, say, by foreign governments in this manner. And I think that there's room for the evolution of export controls, and next-generation export controls, to help prevent that. I think there's also room for a concept that's worked since the 1990s in the world of banking and financial services: these know-your-customer requirements, and we've been advocates for those, so that if there is abuse of systems, the company that is offering the service knows who is doing it and is in a better position to stop it from happening. I think the other side of the coin is using AI and advancing our defensive technologies, which really start with our ability to detect what is going on. And we've been investing heavily in that space. That is what enabled us to produce the report that we published. It is what enables us to see the patterns in communications around the world, and we're seeking to be a voice, with many others, that really calls on governments to, I'll say, lift themselves to a higher standard, so that they're not using this kind of technology to interfere in other countries, and especially in other countries' elections.

Senator Marsha Blackburn (R-TN):

In the report that you all did, when you were looking at China, did you look at what I call the other members of the axis of evil: Russia, Iran, North Korea?

Brad Smith:

We did, and that specific report that you're referring to was focused on East Asia. We see especially prolific activities, some from China, some from Iran, and really the most global actor in this space is Russia. And we've seen that grow during the war, but we've seen it really spiral in recent years, going back to the middle of the last decade. We estimate that the Russian government is spending more than a billion dollars a year on a global, what we call, cyber influence operation. Part of it targets the United States. I think their fundamental goal is to undermine public confidence in everything that the public cares about in the United States, but it's not unique to the United States. We see it in the South Pacific, we see it across Africa, and I do think it's a problem we need to do more to counter.

Senator Marsha Blackburn (R-TN):

So, summing it up, you would see something like a know-your-customer or a SWIFT system, things that apply in banking, as there to help weed it out. You think that companies should increase their due diligence to make certain that their systems are appropriate, and then be careful about doing business with countries that may misuse a certain technology?

Brad Smith:

Generally, yes. I think one can look at the specific scenarios and what's more high-risk, but a know-your-customer requirement, yes. We've also set out a know-your-cloud requirement, in effect, so that these systems are deployed in secure data centers.

Senator Marsha Blackburn (R-TN):

Okay. Mr. Hartzog, let me come to you. I think one of the things, as we look at AI's detrimental impacts, and we don't always want to look at the doomsday scenarios, but we are looking at some of the reports on surveillance, with the CCP surveilling the Uyghurs, with Iran surveilling women, and I think there are other countries that are doing the same type of surveillance. So what can you do to prevent that? How do we prevent that?

Woodrow Hartzog:

Senator, I've argued in the past that facial recognition technologies and certain sorts of biometric surveillance are fundamentally dangerous, that there's no world in which they would be safe for any of us, and that we should prohibit them outright, or at the very least prohibit biometric surveillance in public spaces and prohibit emotion recognition. This is what I refer to as strong bright-line measures that draw absolute lines in the sand, rather than procedural ones that ultimately, I think, end up entrenching this kind of harmful surveillance.

Senator Marsha Blackburn (R-TN):

Okay. Mr. Chairman, can I take another 30 seconds? Because Mr. Dally was shaking his head in agreement on some things; I was catching that. Do you want to weigh in before I close my questioning on either of these topics?

William Dally:

I was in general agreement, I guess, when I was shaking my head. I think we need to be very careful about who we sell our technology to, and at NVIDIA we try to sell to people who are using this for good commercial purposes and not to suppress others, and we will continue to do that. We don't want to see this technology misused to oppress anybody.

Senator Marsha Blackburn (R-TN):

Got it. Thank you. Thanks.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Blackburn. My colleague, Senator Hawley, mentioned that we have a forum tomorrow, which I welcome. I think anything to aid in our education and enlightenment as senators is a good thing, and I just want to express the hope that some of the folks who are appearing in that venue will also cooperate and appear before this subcommittee. We will certainly be inviting more than a few of them, and I want to express my thanks to all of you for being here, but especially to Mr. Smith, who has to be here tomorrow to talk to my colleagues privately. And our effort is complementary, not contradictory, to what Senator Schumer is doing. As you know, I'm very focused on election interference, because elections are upon us, and I want to thank my colleagues, Senators Klobuchar, Hawley, Coons, and Collins, for taking a first step toward addressing the harms that may result from deepfakes and impersonation, all of the potential perils that we've identified here.

And it seems to me that authenticating the truth, or ads that embody true images and voices, is one approach. And then banning the deepfakes and impersonations is another approach. And obviously, banning anything in the public realm, in public discourse, risks running afoul of the First Amendment, which is why disclosure is often the remedy that we seek, especially in campaign finance. So maybe I should ask all of you whether you see that banning certain kinds of election interference, and Mr. Smith, you raised the specter of foreign interference and the frauds and scams that could be perpetrated as they were in 2016, and I think it is one of those nightmares that should keep us up at night, because we are an open society, we welcome free expression, and AI is a form of expression, whether we regard it as free or not, and whether it's generated and high-risk or simply touching up some of the background in a TV ad. Maybe each of you can talk a little bit about what you see as the potential remedies there. Mr. Dally.

William Dally:

So I think it is a grave concern, with the election season coming up, that the American public may be misled by deepfakes of various kinds. I think, as you mentioned, the use of provenance to authenticate a true image or voice at its source, and then tracking that to its deployment, will let us know what are real images, and if we insist on AI-generated content being identified as such, people are at least tipped off that this is generated and not the real thing. I think that we need to avoid having some entity, especially a foreign entity, interfere in our elections, but at the same time, AI-generated content is speech, and I think it would be a dangerous precedent to try to ban something. I think it's much better to have disclosure, as you suggested, than to ban something outright.
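
Provenance of the kind Dally points to means attaching a verifiable record to content at its source, so that anyone downstream can check whether the content has been altered, who vouched for it, and whether it was AI-generated. The sketch below is illustrative only: it uses an HMAC with a made-up key so it stays self-contained, whereas real provenance standards such as C2PA use public-key signatures embedded in the media itself, and the "Campaign X" ad is hypothetical.

```python
# Illustrative sketch of content provenance: sign a manifest at the source,
# verify it downstream. Not a production scheme; see the C2PA standard.
import hashlib, hmac, json

SECRET = b"publisher-signing-key"  # hypothetical key held by the content's source

def sign_content(media_bytes: bytes, creator: str, ai_generated: bool) -> dict:
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(media_bytes: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(media_bytes).hexdigest() == claimed["sha256"])

ad = b"...video bytes..."                        # hypothetical campaign ad
record = sign_content(ad, creator="Campaign X", ai_generated=True)
print(verify_content(ad, record))                # True: intact, provenance known
print(verify_content(ad + b"tampered", record))  # False: content was altered
```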

Sen. Richard Blumenthal (D-CT):

Mr. Smith.

Brad Smith:

Three thoughts. Number one: 2024 is a critical year for elections, not only for the United States but for the United Kingdom, for India, across the European Union. More than 2 billion people will vote for who is going to represent them. So this is a global issue for the world's democracies. Number two: I think you're right to focus in particular on the First Amendment, because it's such a critical cornerstone for American political life and the rights that we all enjoy. And yet I will also be quick to add, I don't think the Russian government qualifies for protection under the First Amendment, and if they're seeking to interfere in our elections, then I think the country needs to take a strong stand, and a lot of thought needs to be given as to how to do that effectively. But then number three, and this I think goes to the heart of your question and why it's such a good one: I think it's going to require some real thought, discussion, and an ultimate consensus to emerge.

Let me focus on one specific scenario. Let's imagine for a moment that there is a video that involves a presidential candidate who originally was giving a speech. And then let's imagine that someone uses AI to put different words into the mouth of that candidate, and uses AI technology to perfect it to a level that it is difficult for people to recognize as fraudulent. Then you get to this question: what should we do? And at least as I've been trying, and we've been trying, to think this through, I think we have two broad alternatives. One is we take it down, and the other is we relabel it. If we do the first, then we're acting as censors. And I do think that makes me nervous. I don't think that's really our role, to act as censors, and the government really cannot, I think, under the First Amendment. But relabeling to ensure accuracy, I think that is probably a reasonable path. But really, what this highlights is the discussion still to be had, and I think the urgency for that conversation to take place.

Sen. Richard Blumenthal (D-CT):

And I will just say, and then I want to come to you, Professor Hartzog, that I agree emphatically with your point about the Russian government, or the Chinese government, or the Saudi government, as potential interferers. They're not entitled to the protection of our Bill of Rights when they are seeking to destroy those rights and purposefully trying to take advantage of a free and open society to, in effect, decimate our freedoms. So I think there is a distinction to be made there in terms of national security, and I think that rubric of national security, which is part of our framework, applies with great force in this area, and that is different from a presidential candidate putting up an ad that in effect puts words in the mouth of another candidate. And as you may know, we began these hearings with introductory remarks from me that were an impersonation taken from my comments on the floor, taking my voice from speeches that I made on the floor of the United States Senate, with content generated by ChatGPT that sounded exactly like something I would say, in a voice that was indistinguishable from mine. And obviously I disclosed that fact at the hearing, but in real time, as Mark Twain famously said, a lie travels halfway around the world before the truth gets out of bed, and we need to make sure that there is action in real time if you're going to do the kind of identification that you suggested. Real time meaning real time in a campaign, which is measured in minutes and hours, not in days and months. Professor Hartzog.

Woodrow Hartzog:

Thank you, Senator. Like you, I am nervous about just coming out and saying we're going to ban all forms of speech, particularly when you're talking about something as important as political speech. And like you, I also worry about disclosure alone as a half measure. Earlier in this hearing it was asked, what is a half measure? And I think that goes towards answering your question today. I think the best way to think about half measures is as an approach that is necessary but not sufficient, that risks giving us the illusion that we've done enough, but ultimately, and I think this is the pivotal point, doesn't really disrupt the business model and the financial incentives that have gotten us here in the first place. And so, to help answer your question, one thing that I would recommend, and I applaud your bipartisan framework for doing this, is bringing lots of different tools to bear on this problem. Thinking about the role that surveillance advertising plays in powering a lot of these harmful technologies and ecosystems, which allows the lie not just to be created but to flourish and to be amplified. And so I would think about rules and safeguards that could help limit those financial incentives, borrowing from standard principles of accountability: we use disclosures where they're effective; where they're not effective, you have to make it safe; and if you can't make it safe, it shouldn't exist.

Sen. Richard Blumenthal (D-CT):

Yeah, I think I'm going to turn to Senator Hawley for more questions, but I think this is a real conundrum. We need to do something about it. We need more than half measures. We can't delude ourselves, with a false sense of comfort, into thinking that we've solved the problem if we don't provide effective enforcement. And to be very blunt, the Federal Election Commission often has been less than fully effective, a lot less than fully effective, in enforcing rules relating to campaigns. And so there, again, an oversight entity with strong enforcement authority, sufficient resources, and the will to act is going to be very important if we're going to address this problem in real time. Senator Hawley.

Sen. Josh Hawley (R-MO):

Mr. Smith, let me just come back to something you said, thinking now about workers. You talked about Wendy's, I think it was, automating the drive-through, and talked about this as a good thing. I just want to press on that a little bit. Is it a good thing that workers lose their jobs to AI, whether it's at Wendy's or whether it's at Walmart or whether it's at the local hardware store? I mean, your comment was that there's really no creativity involved in taking orders through the drive-through, but that is a job, oftentimes a first job, for younger Americans. And hey, in this economy, where the wages of blue-collar workers have been flat for 30, 40 years and running, what worries me is that oftentimes what we hear from the tech sector, to be honest with you, is that jobs that don't have creativity, as tech defines it, don't have value. I'm frankly scared to death that AI will replace lots of jobs that tech types think aren't creative and will leave even more blue-collar workers without any place to turn. So my question to you is, can we expect more of this, and is it really progress for folks to lose those kinds of jobs? I suspect that's not the best-paying job in the world, but at least it's a job. Do we really want to see more of these jobs lost?

Brad Smith:

Well, to be clear, first I didn't say whether it was a good or bad thing. I was asked to predict what jobs would be impacted and identified that job as one that likely would be. But let's, I think step back, because I think your question is critically important. Let's first reflect on the fact that we've had about 200 years of automation that have impacted jobs sometimes for the better, sometimes for the worse. In Wisconsin where I grew up or in Missouri where my father grew up, if you go back 150 years, it took 20 people to harvest an acre of wheat or corn and now it takes one. So 19 people don't work on that acre anymore, and that's been an ongoing part of technology. The real question is this, how do we ensure that technology advances so that we help people get better jobs, get the skills they need for those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it?

I think the thing we should be the most concerned by is that since the 1990s, and I think this is the point you're making, if you look at the flow of digital technology, fundamentally we've lived in a world that has widened the economic divide. Those people with a college or graduate education have seen their incomes rise in real terms. Those people with, say, a high school diploma or less have seen their income level actually drop compared to where it was in the 1990s. So what do we do now? Well, I'll at least say what I think our goal should be. Can we use this technology to help advance productivity for a much broader range of people, including people who didn't have the good fortune to go to, say, where you or I went to college or law school? And can we do it in a way that not only makes them more productive, but actually reaps some of the dividends of that productivity for themselves in a growing income level? I think it's that conversation that we need to have.

Sen. Josh Hawley (R-MO):

Yeah, I agree with you, and I hope that that's what AI can do. You talked about the farm that used to take 20 people to do what one person could do. It used to take thousands of people to produce textiles or furniture or other things in this country, where now it's zero, so we can tell the tale in different ways. I'm not sure that seeing working-class jobs go overseas or be replaced entirely is a success story. In fact, I'd argue it's not at all, it's not a success story, and I'd argue more broadly that our economic policy of the last 30 years has been downright disastrous for working people. And tech companies and financial institutions and certainly banks and Wall Street, they have reaped huge profits, but blue-collar workers can barely find a good-paying job. I don't want AI to be the latest accelerant of that trend, and so I don't really want every service station in America to be manned by some computer such that nobody can get a job anymore, get their foot in the door, start there, climb up the ladder.

That worries me. Let me ask you about something else here in my expiring time. You mentioned national security. Critically important. Of course, there's no national security threat that is more significant for the United States than China. Let me just ask you, is Microsoft too entwined with China? You have Microsoft Research Asia, which was set up in Beijing way back in the 1990s. You've got centers now in Shanghai and elsewhere. You've got all kinds of cooperation with Chinese state-owned businesses. I'm looking at an article here from Protocol magazine where one of their contributors said that Microsoft had been the alma mater of Chinese big tech. Are you concerned about your degree of entwinement with the Chinese government? Do you need to be decoupling in order to make sure that our national security interests aren't fatally compromised?

Brad Smith:

I think it's something that we need to be, and are, focused on. To some degree, in some technology fields, Microsoft is the alma mater of the technology leaders in every country in the world, because of the role that we've played over the last 40 years. But when it comes to China today, we have and need to have very specific controls on who uses our technology, and for what, and how. That's why we don't, for example, do work on quantum computing, or provide facial recognition services, or focus on synthetic media, or a whole variety of things. While at the same time, when Starbucks has stores in China, I think it's good that they can run their services in our data center rather than a Chinese company's data center.

Sen. Josh Hawley (R-MO):

Well, just on facial recognition. I mean, back in 2016, your company released this database, MS-Celeb, 10 million faces, without the consent of the folks who were in the database. You eventually took it down, although it took three years. China used that database to train much of its facial recognition software and technology. I mean, isn't that a problem? You said that Microsoft might be the alma mater of many companies' AI, but China's unique, no? I mean, China is running concentration camps using digital technology like we've never seen before. I mean, isn't that a problem for your company to be in any way involved in that?

Brad Smith:

We don't want to be involved in that in any way, and I don't believe we are. I think that…

Sen. Josh Hawley (R-MO):

Are you going to close your centers in China, your Microsoft Research Asia in Beijing, or your center in Shanghai?

Brad Smith:

I don't think that will accomplish what you are asking us.

Sen. Josh Hawley (R-MO):

You're running thousands of people through your centers out into the Chinese government and Chinese state owned enterprises. Isn't that a problem?

Brad Smith:

First of all, there's a big premise and I don't embrace the premise that that is in fact what we're doing.

Sen. Josh Hawley (R-MO):

Which part is wrong?

Brad Smith:

The notion that we're running thousands of people through and then they're going into the Chinese government.

Sen. Josh Hawley (R-MO):

Is that not right? I thought you had 10,000 employees in China whom you've recruited from Chinese state owned agencies, Chinese state owned businesses, they come work for you, and then they go back to these state owned entities.

Brad Smith:

We have employees in China. In fact, we have that number. To my knowledge, that is not where they're coming from. That is not where they're going. We are not running that kind of revolving door, and it's all about what we do and who we do it with that I think is of paramount importance and that's what we're focused on.

Sen. Josh Hawley (R-MO):

You'd condemn what the Chinese government's doing to the Uyghurs in the Xinjiang province and all of that.

Brad Smith:

We do everything we can to ensure that our technology is not used in any way for that kind of activity in China and around the world, by the way.

Sen. Josh Hawley (R-MO):

But you condemn it to be clear.

Brad Smith:

Yes.

Sen. Josh Hawley (R-MO):

What are your safeguards that you have in place such that your technology is not further enabling the Chinese government given the number of people you employ there and the technology you develop there?

Brad Smith:

Well, you take something like facial recognition, which is at the heart of your question. We have very tight controls that limit the use of facial recognition in China, including controls that in effect make it very difficult, if not impossible, to use it for any kind of real-time surveillance at all. And by the way, the thing we should remember: the US is a leader in many AI fields. China's the leader in facial recognition technology and the AI for it.

Sen. Josh Hawley (R-MO):

Well, in part because of the information that you helped them acquire.

Brad Smith:

No. It's because they have the world's most data.

Sen. Josh Hawley (R-MO):

Well, yeah, but you gave them 2 million.

Brad Smith:

No, I don't think that's a fair characterization.

Sen. Josh Hawley (R-MO):

You don't think that had anything to do with it?

Brad Smith:

I don't think so. When you have a country of 1.4 billion people and you decide to have facial recognition used in so many places, it gives that country a massive amount of data.

Sen. Josh Hawley (R-MO):

But are you saying that the database that Microsoft released in 2016, MS-Celeb, you're saying that that wasn't used by the Chinese government to train their facial recognition?

Brad Smith:

I am not familiar with that, and I'll add it to the list. I'd be happy to provide you with information. But my goodness, the advance in that facial recognition technology: if you go to another country where they're using facial recognition technology, it's highly unlikely that it's American technology. It's highly likely that it's Chinese technology, because they are such leaders in that field, which I think is fine. I mean, if you want to pick a field where the United States doesn't want to be a technology leader, I'd put facial recognition technology on that list. But let's recognize it's homegrown.

Sen. Josh Hawley (R-MO):

How much money has Microsoft invested in AI development in China?

Brad Smith:

I don't know. But I will tell you this: the revenue that we make in China, a country which accounts for, what, about one out of every six humans on this planet, is 1.5% of our global revenue. It's not the market for us that it is for other industries or even some other tech companies.

Sen. Josh Hawley (R-MO):

It sounds then like you can afford to decouple.

Brad Smith:

But is that the right thing to do?

Sen. Josh Hawley (R-MO):

Yes. In a regime that is fundamentally evil, that is inflicting the kind of atrocities on its own citizens that you just alluded to, that is doing to the Uyghurs what it's doing, that is running modern-day concentration camps? Yeah, I think it is.

Brad Smith:

But there's two questions that I think are at least worthy of thought. Number one, do you want General Motors to sell or manufacture cars, let's just say sell cars, in China? Do you want to create jobs for people in Michigan or Missouri so that those cars can be sold in China? If the answer to that is yes, then think about the second question. How do you want General Motors in China to run its operations? And where would you like it to store its data? Would you like it to be in a secure data center run by an American company, or would you like it to be run by a Chinese company? Which will better protect General Motors' trade secrets? I'll argue we should be there so that we can protect the data of American companies, European companies, Japanese companies. Even if you disagree on everything else, I believe that serves this country well.

Sen. Josh Hawley (R-MO):

Yeah, but I think you're doing a lot more than just protecting data in China. You have major research centers, thousands, tens of thousands of employees. And to your question, do I want General Motors to be building cars in China? No, I don't. I want them to be making cars here in the United States with American workers. And do I want American companies to be aiding in any way the Chinese government in their oppressive tactics? I don't. Senator Ossoff, would you like me to yield now? Are you ready?

Sen. Richard Blumenthal (D-CT):

I'm going to. I have been very hesitant to interrupt; you've been very, very patient. The discussion, the conversation here, has been very interesting, and I'm going to call on Senator Ossoff, and then I have a couple of follow-up questions.

Sen. Jon Ossoff (D-GA):

Thank you, Mr. Chairman, and thank you all for your testimony. Just getting down to the fundamentals, Mr. Smith, if we're going to move forward with a legislative framework, a regulatory framework, we have to define clearly in legislative text precisely what it is that we're regulating. What is the scope of regulated activities, technologies, and products? So how should we consider that question and how do we define the scope of technologies, the scope of services, the scope of products that should be subject to a regime of regulation that is focused on artificial intelligence?

Brad Smith:

I think there's three layers of technology on which we need to focus in defining the scope of legislation and regulation. First is the area that has been the central focus of 2023 in the executive branch and here on Capitol Hill: the so-called frontier or foundation models that are the most powerful, say for something like generative AI. In addition, there are the applications that use AI, or, as Senators Blumenthal and Hawley have said, the deployers of AI. If there is an application that calls on that model in what we consider to be a high-risk scenario, meaning it could make a decision that would have an impact on, say, the privacy rights, the civil liberties, or the rights or needs of children, then I think we need to think hard and have law and regulation that is effective to protect Americans. And then the third layer is the data center infrastructure where these models and these applications are actually deployed. We should ensure that those data centers are secure, that there are cybersecurity requirements that the companies, including ours, need to meet. We should ensure that there are safety systems at one, two, or all three levels if there is an AI system that is going to automate and control, say, something like critical infrastructure such as the electrical grid. So those are the areas where we would say: start there with some clear thinking and a lot of effort to learn and apply the details, but focus there.

Sen. Jon Ossoff (D-GA):

As more and more models are trained and developed to higher levels of power and capability, there will be a proliferation, there may be a proliferation, of models, perhaps not the frontier models, perhaps not those at the bleeding edge that use the most compute of all, but powerful enough to have serious implications. So is the question which models are the most powerful at a moment in time, or is there a threshold of capability or power that should define the scope of regulated technology?

Brad Smith:

I think you've just posed one of the critical questions that, frankly, a lot of people inside the tech sector and across the government and in academia are really working to answer. And I think the technology is evolving and the conversation needs to evolve with it. Let's just posit this: something like GPT-4 from OpenAI. Let's just posit it can do 10,000 things really well. It's expensive to create, and it's relatively easy to regulate in the scheme of things, because there's one or two or ten. But now let's go to where you are going, which I think is right. What does the future bring in terms of proliferation? Imagine that there's an academic at Professor Hartzog's university who says, I want to create an open source model. It's not going to do 10,000 things well, but it's going to do four things well.

It won't require as many NVIDIA GPUs. It won't require as much data. But let's imagine that it could be used to create the next virus that could spread around the planet. Then you'd say, well, we really need to ensure that there's safety architecture and controls around that as well. And that's the conundrum. That's why this is a hard problem to solve. It's why we're trying to build safety architecture in our data centers, so that open source models can, say, run in them and still be used in ways that will prohibit that kind of harm from taking place. But as you think about a licensing regime, this is one of the hard questions. Who needs a license? You don't want it to be so hard that only a small number of big companies can get it, but then you also need to make sure that you're not requiring people to get it when they really, we would say, don't need a license for what they're doing. And the beauty of the framework, in my view, is that it starts to frame the issue. It starts to define the question.

Sen. Jon Ossoff (D-GA):

Let me ask this question, then, on how that would work: is it a license to train a model to a certain level of capability? Is it a license to sell or license access to that model? Or is it a license to purchase or deploy that model? Who is the licensed entity?

Brad Smith:

That's another question that is key, and it may have different answers in different scenarios, but mostly I would say it should be a license to deploy. I think that there may well be obligations to disclose to, say, an independent authority when a training run begins, depending on what the goal is, and when the training run ends, so that an oversight body can follow it, just the way that might happen when a company's building a new commercial airplane. And then, the good news is, there's emerging a foundation of, call it, best practices for how the model should be trained, what kind of testing there should be, what harms should be addressed. That's a big topic that needs discussion.

Sen. Jon Ossoff (D-GA):

When you say, forgive me, Mr. Smith, when you say a license to deploy, do you mean, for example, that if a Microsoft Office product wishes to use a GPT model for some user-serving purpose within your suite, you would need a license to deploy GPT in that way? Or do you mean that GPT would require a license to be offered to Microsoft? And putting aside whether or not this is a plausible commercial scenario, the question is, what's the structure of the licensing arrangement in this case?

Brad Smith:

It's more the latter. Imagine, look, think about it like Boeing. Boeing builds a new plane. Before it can sell it to United Airlines, and United Airlines can start to fly it, the FAA is going to certify that it's safe. Now imagine we're at, call it, GPT-12, whatever you want to name it. Before that gets released for use, I think you can imagine a licensing regime that would say that it needs to be licensed after it's been, in effect, certified as safe. And then you have to ask yourself, well, how do you make that work so that we don't have the government slow everything down? And what I would say is, you bring together three things. First, you need industry standards, so that you have a common foundation and a well-understood way as to how training should take place. Second, you need national regulation. And third, if we're going to have a global economy, at least in the countries where we want these things to work, you probably need a level of international coordination. And I'd say, look at the world of civil aviation. That's fundamentally how it has worked since the 1940s. Let's try to learn from it and see how we might apply something like that, or other models, here.

Sen. Jon Ossoff (D-GA):

Mr. Dally, how would you respond to that question? In a field where the technical capabilities are accelerating at a rapid rate, future rate unknown, where, and according to what standard or metric or definition of power, do we draw the line for what requires a license for deployment and what can be freely deployed without oversight by the government?

William Dally:

I think it's a tough question because I think you have to balance two important considerations. The first is the risks presented by a model of whatever power. And on the other side is the fact that we would like to ensure that the US stays ahead in this field. And to do that, we want to make sure that individual academics and entrepreneurs with a good idea can move forward and innovate and deploy models without huge barriers.

Sen. Jon Ossoff (D-GA):

So it's the capability of the model, it's the risk presented by its deployment without oversight. Is that... because the thing is, we're going to have to write legislation, and the legislation is going to have to, in words, define the scope of regulated products. And so we're going to have to bound that which is subject to a licensing arrangement, or wherever we land, and that which is not.

William Dally:

I think it is...

Sen. Jon Ossoff (D-GA):

What is the very, and so how do you, I mean, and

William Dally:

It is dependent on the application. Because if you have a model which is basically determining a medical procedure, there's a high risk with that, depending on the patient outcome. If you have another model which is controlling the temperature in your building, if it gets it a little bit wrong, you may consume a little bit too much power, or maybe you're not as comfortable as you would be, but it's not a life-threatening situation. So I think you need to regulate the things that have high consequences if the model goes awry.

Sen. Jon Ossoff (D-GA):

And I'm on the Chairman's borrowed time, so just tap the gavel when you want me to stop. You had to wait. That's true. So we'll give you a couple. Okay, good. Okay, Professor, and I'd be curious to hear from others, concisely, with respect for the Chairman's time. How does any of this work without international law? I mean, isn't it correct that a model, potentially a very powerful and dangerous model, for example one whose purpose is to unlock CBRN or mass-destructive virological capabilities for a relatively unsophisticated actor, once trained, is relatively lightweight to transport? And without an international legal system, and a level of surveillance into the flow of data across the internet that seems inconceivable, how can that be controlled and policed?

Woodrow Hartzog:

It's a great question, Senator. And with respect to being efficient in my answer, I'll simply say that there are going to be limits, even assuming that we do need international cooperation, which I would agree with you on. I mean, we've already started thinking about ways in which, for example, with the EU, which has already deployed some significant AI regulation, we might design frameworks that are compatible with that. That requires some sort of interaction. But ultimately, what I worry about is actually deploying a level of surveillance that we've never before seen in an attempt to perfectly capture the entire chain of AI. And that's simply not possible.

Sen. Jon Ossoff (D-GA):

I share that concern about privacy, which is in part why I raised the point. How can we know when folks are loading a lightweight model, once trained, onto perhaps a device that's not even online anymore?

Woodrow Hartzog:

Right? There are limits, I think, to what we'll ever be able to know.

Sen. Jon Ossoff (D-GA):

Either of you want to take a stab before I get gaveled out here?

Brad Smith:

I would just say you're right, there's going to be a need for international coordination. I think it's more likely to come from like-minded governments than perhaps global governance, at least in the initial years. I do think there's a lot we can learn. We were talking with Senator Blackburn about the SWIFT system for financial transactions, and somehow we've managed globally, and especially in the United States, for 30 years to have know-your-customer requirements and obligations for banks. Money has moved around the world. Nothing is perfect; I mean, that's why we have laws. But it's worked to do a lot of good to protect against, say, terrorist or criminal uses of money that would cause concern.

William Dally:

Well, I think you're right in that these models are very portable. You could put the parameters of most models, even the very large ones, on a large USB drive and carry it with you somewhere. You could also train them in a data center anywhere in the world. So I think it's really the use of the model and the deployment that you can effectively regulate. It's going to be hard to regulate the creation of it, because if people can't create them here, they'll create them somewhere else. And I think we have to be very careful, if we want the US to stay ahead, that we keep the best people creating these models here in the US and don't have them go somewhere else where the regulatory climate has driven them.
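
Dally's point about portability is easy to check with back-of-the-envelope arithmetic. The parameter counts below are illustrative examples, not figures cited in the hearing: a model's weights alone, stored at 16-bit precision, take roughly two bytes per parameter.

```python
# Illustrative check: even very large models' parameters fit on a consumer drive.
def model_size_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Storage for the parameters alone, in gigabytes, assuming 16-bit weights."""
    return num_params * bytes_per_param / 1e9

for params in (7e9, 70e9, 175e9):   # hypothetical 7B, 70B, and 175B parameter models
    print(f"{params / 1e9:>5.0f}B params -> {model_size_gb(params):,.0f} GB")
# Roughly 14 GB, 140 GB, and 350 GB: all of which fit on an ordinary 512 GB or 1 TB drive.
```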

Sen. Jon Ossoff (D-GA):

Thank you. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thank you. Senator Ossoff. I hope you are okay with a few more questions. We've been at it for a while. You've been very patient.

Brad Smith:

Do we have a choice?

Sen. Richard Blumenthal (D-CT):

No, but thank you very much. It's been very useful. I want to follow up on a number of the questions that I've asked. First of all, on the international issue, there are examples and models for international cooperation. Mr. Smith, you mentioned civil aviation. The 737 MAX, I think I have it right, when it crashed, it was a plane that had to be redone in many respects, and companies, airlines around the world, looked to the United States for that redesign and then approval. Civil aviation, atomic energy: not always completely effective, but it has worked in many respects. And so I think there are international models here where, frankly, the United States is a leader by example, and best practices are adopted by other countries when we support them. And frankly, in this instance, the EU has been ahead of us in many respects regarding social media, and we are following their leadership by example. I want to come to this issue of having centers, whether they're in China or, for that matter, elsewhere in the world, requiring safeguards so that we are not allowing our technology to be misused in China against the Uyghurs, and preventing that technology from being stolen or people we train there from serving bad purposes. Are you satisfied, Mr. Smith, that it is possible, and in fact that what you are doing in China is preventing the evils that could result from doing business there in that way?

Brad Smith:

I would say two things. First, I feel good about our track record and our vigilance, and the constant need for us to be vigilant about what services we offer, to whom, and how they're used. It's really those three things. And I would take from that what I think is probably the conversation we'll need to have as a country about export controls more broadly. There's three fundamental areas of technology where the United States is today, I would argue, the global leader: first, the GPU chips from a company like NVIDIA; second, the cloud infrastructure from a company like, say, Microsoft; and third, the foundation model from a firm such as OpenAI, and of course Google and AWS and other companies are global leaders as well. And I think if we want to feel good that we're creating jobs in the United States by inventing and manufacturing here, as you said, Senator Hawley, which I completely endorse, and good that the technology is being used properly, we probably need an export control regime that weaves those three things together.

For example, there might be a country in the world, let's just set aside China for a moment, leave that out. Let's just say there's another country where you all in the executive branch would say, we have some qualms, but we want US technology to be present and we want US technology to be used properly, the way that would make you feel good. You might say, then we'll let NVIDIA export chips to that country to be used in, say, a data center of a company that we trust, that is licensed even here for that use, with the model being used in a secure way in that data center, with a know-your-customer requirement, and with guardrails that put certain kinds of use off limits. That may well be where government policy needs to go and how the tech sector needs to support the government and work with the government to make it a reality.

Sen. Richard Blumenthal (D-CT):

I think that that answer is very insightful and raises other questions. I would analogize this situation to nuclear proliferation. We cooperate over safety in some respects with other countries, some of them adversaries, but we still do everything in our power to prevent American companies from helping China or Russia in their nuclear programs. Part of that non-proliferation effort is through export controls. We impose sanctions. We have limits and rules around selling and sharing certain choke-point technologies related to nuclear enrichment, as well as biological warfare, surveillance, and other national security risks. And our framework, in fact, envisions sanctions and safeguards precisely in those areas for exactly the reasons we've been discussing here. Last October, the Biden administration used existing legal authorities as a first step in blocking the sale to China of some high-performance chips and the equipment to make those chips. And our framework calls for export controls and sanctions and legal restrictions. So I guess it's a question that we will be discussing. We're not going to resolve it today, regrettably, but we would appreciate your input going forward, and I'm inviting any of the listening audience here in the room or elsewhere to participate in this conversation on this issue and others: how should we draw a line on the hardware and technology that American companies are allowed to provide to anyone else in the world, adversaries or friends? Because, as you've observed, Mr. Dally, and I think all of us accept, it's easily proliferated.

William Dally:

Yeah, if I could comment on this. I think you drew an analogy to nuclear regulation and mentioned the word choke point. And I think the difference here is that there really isn't a choke point. And I think there's a careful balance to be made between limiting where our chips go and what they're used for, and disadvantaging American companies and the whole food chain that feeds them. Because we're not the only people who make chips that can do AI. I wish we were, but we're not. There are companies around the world that can do it: there are other American companies, there are companies in Asia, there are companies in Europe, and if people can't get the chips they need to do AI from us, they will get them somewhere else. And what will happen then, it turns out, is that chips aren't really the things that make them useful. It's the software. And if all of a sudden the standard chips for people to do AI become something from, pick a country, Singapore, all of a sudden all of the software engineers will start writing all the software for those chips. They'll become the dominant chips, and the leadership of that technology area will have shifted from the US to Singapore, or whatever other country becomes dominant. We have to be very careful to balance the national security considerations and the abuse-of-technology considerations against preserving the US lead in this technology area.

Brad Smith:

Yeah, it's a really important point, and what you have is the argument and the counterargument. Let me for a moment channel what Senator Hawley often voices, which I think is also important. Sometimes you can approach this and say, look, if we don't provide this to somebody, somebody else will, so let's not worry about it. I get it. But at the end of the day, whether you're a company or a country, I think you do have to have clarity about how you want your technology to be used. And I fully recognize that there may be a day in the future, after I retire from Microsoft, when I look back, and I don't want to say, oh, we did something bad because if we didn't, somebody else would. I want to say no, we had clear values and we had principles, and we had in place guardrails and protections, and we turned down sales so that somebody couldn't use our technology to abuse other people's rights. And if we lost some business, that's the best reason in the world to lose some business. And what's true of a company is true of a country. So I'm not trying to say that your view shouldn't be considered. It should. That's why this issue is complicated: how to strike that balance.

Sen. Richard Blumenthal (D-CT):

Professor Hartzog, do you have any comment?

Woodrow Hartzog:

I think that was well said, and I would only add that, in this discussion about how we safeguard these incredibly dangerous technologies and the risks if they were, for example, to proliferate, it's also worth considering that if it's so dangerous, then we need to revisit the existential question again. And I just bring it back to thinking not only about how we put guardrails on, but how we lead by example, which I think you brought up and which is really important. We don't win the race to violate human rights, and that's not one that we want to be running.

Sen. Richard Blumenthal (D-CT):

And it isn't simply Chinese companies importing chips from the United States and building their own data centers. Most AI companies rent capabilities from cloud providers. We need to make sure that the cloud providers are not used to circumvent our export controls or sanctions. Mr. Smith, you raised that: know-your-customer rules. Knowing your customers would require AI cloud providers whose models are deployed to know what companies are using those models. If you're leasing out a supercomputer, you need to make sure that your customer isn't the People's Liberation Army, that it isn't being used to subjugate Uyghurs, that it isn't used to do facial recognition on dissidents or opponents in Iran, for example. But I do think that you've made a critical point, which is there is a moral imperative here, and I think there is a lesson in the history of this great country, the greatest nation in the history of the world, that when we lose our moral compass, we lose our way. And when we simply pursue economic or political interests, sometimes it's very shortsighted, and we wander into a geopolitical swamp and quicksand.

So I think these kinds of issues are very important to keep in mind when we lead by example. I want to just make a final point, and then if Senator Hawley has questions, we're going to let him ask. But on this issue of worker displacement, I mentioned at the very outset, I think we are on the cusp of a new industrial revolution. We've seen this movie before, as they say, and it didn't turn out that well in the industrial revolution, where workers were displaced en masse. Those textile factories and the mills in this country and all around the world essentially went out of business or replaced the workers with automation and mechanization. And I would respond by saying, we need to train those workers. We need to provide education. You've alluded to it, and it needn't be a four-year college. In my state of Connecticut, Electric Boat, Pratt & Whitney, Sikorsky, defense contractors, are going to need thousands of welders, electricians, tradespeople of all kinds, who will have not just jobs, they'll have careers that require skills that, frankly, I wouldn't begin to know how to do and I haven't the aptitude to do, and that's no false modesty.

So I think there are tremendous opportunities here, not just in the creative spheres that you have mentioned, where we may think higher human talents come into play, but in all kinds of jobs that are being created daily, already, in this country. And as I go around the state of Connecticut, the most common comment I hear from businesses is: we can't find enough people to do the jobs we have right now. We can't find people to fill the openings that we have. And that is, in my view, maybe the biggest challenge for the American economy today.

Brad Smith:

I think that is such an important point, and it's really worth putting everything we think about jobs in context, because I wholeheartedly endorse, Senator Hawley, what you were saying before about wanting people to have jobs, wanting them to earn a good living, et cetera. First, let's consider the demographic context in which jobs are created. The world has just entered a shift of the kind that it literally hasn't seen since the 1400s, namely populations that are leveling off or, in much of the world now, declining. One of the things we look at, for every country, measured over five-year periods, is whether the working-age population is increasing or decreasing, and by how much. From 2020 to 2025, the working-age population in this country, people aged 20 to 64, is only going to grow by 1 million people. The last time it grew by that small a number, the President of the United States was John Adams.

That's how far back you have to go. And if you look at a country like Italy, take that group of people over the next 20 years: it's going to decline by 41%. And what's true of Italy is true almost to the same degree in Germany. It's already happening in Japan and Korea. So we live in a world where many countries suddenly encounter what I suspect you actually find when you go to Hartford or St. Louis or Kansas City: people can't find enough police officers, enough nurses, enough teachers, and that is a problem we desperately need to focus on solving. So how do we do that? I do think AI is something that can help, even in something like a call center. One of the things that's fascinating to me: we have more than 3,000 customers around the world running proofs of concept.

One fascinating one is a bank in the Netherlands. They said, you go into a call center today, and the desks of the workers look like a trading floor on Wall Street. They have six different terminals. Somebody calls, and they're desperately trying to find the answer to a question. With something like GPT-4 and our services, six terminals can become one. Somebody who's working there can ask a question, and the answer comes up. And what they're finding is that the person who's answering the phone, talking to a customer, can now spend more time concentrating on the customer and what they need. And I appreciate all the challenges. There's so much uncertainty. We desperately need to focus on skilling, but I really do hope that this is an era where we can use this to, frankly, help people fill jobs, get training, and focus more. Let's just put it this way: I'm excited about artificial intelligence. I'm even more excited about human intelligence. And if we can use artificial intelligence to help people exercise more human intelligence and earn more money doing so, that would be way more exciting to pursue than everything we've had to grapple with over the last decade around, say, social media and the like.

Sen. Richard Blumenthal (D-CT):

Well, our framework very much focuses on treatment of workers, on providing more training. It may not be something that this entity will do, but it is definitely something that it has to address. And it's not only displacement, but also working conditions and opportunities within the workplace for promotion, to prevent discrimination, to protect civil rights. We haven't talked about it in detail, but we deal with it in our framework in terms of transparency around decision-making. And China may try to steal our technology, but it can't steal our people, and China has its own population challenges, with the need for more people and skilled labor. But as I say about Connecticut, we don't have gold mines or oil wells. What we have is a really able workforce, and that's going to be the key, I think, to America's economy in the future. And AI can help promote the development of that workforce.

Senator Hawley, anything? Nothing further. You all have been really patient, and so has our staff. I want to thank our staff for this hearing. But most important, we're going to continue these hearings. It is so helpful to us. I can go down our framework and tie the proposals to specific comments made by Sam Altman or others who have testified before, and we will enrich and expand our framework with the insights that you have given us. So I want to thank all of our witnesses and, again, look forward to continuing our bipartisan approach here. You made that point, Mr. Smith: we have to be bipartisan and adopt full measures, not half measures. Thank you all. This hearing is adjourned.
