
Transcript: Joint Hearing on “Artificial Intelligence and Its Potential to Fuel Economic Growth and Improve Governance”

Gabby Miller / Jun 6, 2024

June 4, 2024: Sen. Martin Heinrich (D-NM) chairs a Joint Economic Committee hearing alongside Sen. Amy Klobuchar (D-MN).

On Tuesday, June 4, 2024, the Joint Economic Committee hosted a hearing on “Artificial Intelligence and Its Potential to Fuel Economic Growth and Improve Governance.” Chaired by Sen. Martin Heinrich (D-NM), with Rep. David Schweikert (R-AZ) serving as Vice Chair, the hearing covered topics ranging from using AI to automate mundane administrative tasks in healthcare to training large language models on the Department of Energy’s trove of science data.

The hearing featured four expert witnesses:

  • Dr. Brian Miller - Nonresident fellow, American Enterprise Institute (written testimony)
  • Adam Thierer - Senior Fellow of Technology and Innovation, R Street Institute (written testimony)
  • Dr. Ayanna Howard - Dean of Engineering, The Ohio State University (written testimony)
  • Dr. Jennifer Gaudioso - Director, Center for Computing Research, Sandia National Laboratories (written testimony)

Below is a lightly edited transcript of the hearing. Please refer to the official video of the hearing when quoting speakers.

Sen. Martin Heinrich (D-NM):

Thank you for pulling this hearing together. It should be really interesting. A number of folks know that I've been heavily involved in these conversations and we have been able to really put together a surprising amount of sort of bipartisan interest and where we think we need to, where we really think that the benefits are going to accrue from artificial intelligence and where are the places where we have to be careful and minimize some of the risks. So I'm very much looking forward to continuing that conversation today, and I'm going to introduce our other two distinguished witnesses. Dr. Ayanna Howard is the Dean of Engineering at Ohio State University. Previously she was chair of the Georgia Institute of Technology School of Interactive Computing in the College of Computing, as well as the founder and director of the Human Automation Systems Lab. Her career spans higher education, NASA's Jet Propulsion Laboratory and the private sector.

Dr. Howard is the founder and president of the Board of Directors of Zyrobotics, a Georgia Tech spinoff company that develops mobile therapy and educational products for children with special needs. She's also a fellow of the American Association for the Advancement of Science and the National Academy of Inventors and was appointed to the National Artificial Intelligence Advisory Committee. Dr. Jennifer Gaudioso is director of the Center for Computing Research at Sandia National Laboratories, where she stewards the center's portfolio of research from fundamental science to state-of-the-art applications. She's also the program executive for the National Nuclear Security Administration's Advanced Simulation and Computing program there at Sandia. Previously, she served as the director of the Center for Computation and Analysis for national security where she oversaw the use of systems analysis, cybersecurity and data science capabilities to tackle complex national security challenges.

Rep. David Schweikert (R-AZ):

Thank you, Senator Heinrich. Let's go ahead and hear from our witnesses and Dr. Miller. Everyone gets five minutes and then hopefully we can follow up with questions. Dr. Miller?

Dr. Brian Miller:

Thank you Chairman Heinrich, Vice Chairman Schweikert and distinguished members of the committee for allowing me to share my views on AI and its potential to fuel economic growth and governance. I'm a pragmatist, so I'm going to focus on pragmatic applications and policy questions for the fifth of the economy that comprises healthcare. As mentioned, I'm a practicing hospitalist at Hopkins and a non-resident fellow at AEI. I actually work for four regulatory agencies, including the FDA, CMS, the FTC, and the FCC, and I also serve on MedPAC. I should note that today I'm here in my personal capacity and my views are my own and don't represent those of Johns Hopkins, AEI or MedPAC. So I just actually finished a week working in the hospital on the night shift. It's an interesting experience. It's seven days in a row of flying a 747 with analog controls with no autopilot.

It's not a good thing for us to have systems focused this way across the country. And I'd say actually since I first rounded in the hospital as a medical student 15 years ago, things haven't really changed. I don't really see a lot of change in clinical operations and what we do, and the broader economic data support this assertion. The Bureau of Labor Statistics tells us that for around 25 years, the hospital industry has had flat or declining labor productivity most years, and demand's going up, right? People are getting sicker. We have more elderly patients, and we have a labor shortage as a consequence. So we're missing 78,000 registered nurses, 68,000 primary care physicians, amongst others, and also the spending's breaking the budget, right? So Medicare and Medicaid are $1.7 trillion or more annually, and that crowds out other sort of transformative investments that we want to make in things like transportation, education, my personal favorite space exploration.

And so we got to think differently, and so AI and automation actually can help solve our productivity problem in my industry and let us clinicians do what clinicians do best, which is focus on the patients instead of paperwork. Patients today face delays in diagnosis, clinical errors, and tired and fatigued clinical staff. We're focused on admin tasks, so AI is not really a Terminator 3, it's also not really Star Trek. It's an inherently practical and technical issue for implementing it in healthcare. We can use it to automate mundane administrative tasks like physician charting with ambient AI, coding and billing. Imagine if AI were summarizing your clinic visit as you were actually talking with the physician instead of them staring at the computer. And imagine if that physician could save time from the six hours a day that's spent on charting. This is actually being tested today, and my colleagues at other hospitals are part of these pilots. It can also augment clinical labor.

It could assist with mammography interpretation, melanoma diagnosis, improving efficiency and accuracy, and identifying areas of concern ahead of physician review. It can automate other elements of clinical practice: reading pathology slides, looking at EEGs to check for seizures and other neurologic problems. And then a lot of folks are really worried about the labor impact, and I have to say that with the average day for a primary care physician estimated at 26.7 hours if they complete all the tasks they're supposed to, there's plenty of room for us to have software and automation pick this up. For consumers, the win is huge. So if you're a consumer and you have a chronic disease, the burden is significant. Being a diabetic, you have to check your sugars, you have to give yourself a bunch of shots, you have to count your carbs, watch what you eat. It's not easy. Imagine if we could create integrated systems with glucometers to check glucose and insulin pumps, and we could take that burden away from the patients so they could just focus on going about their life.

From a policy perspective, we have to be careful not to overregulate. Right now, and I'm a car guy, this is like putting airbags in cars in 1920. We shouldn't go too far. We should be practical and use existing authorities that we have at agencies like the FDA and the Office of the National Coordinator for Health IT. And we want to facilitate permissive, bottom-up innovation from clinicians, nurses, engineers and others, and we want that to come from the bedside. We should also aim to pay for and drive competition amongst new and old care models, between humans and technology, and we want rapid-cycle, stacked, incremental innovation to transform healthcare. We can't tax and spend our way out of this, so we must innovate and instead remember why America is great. Thank you.

Adam Thierer:

Chairman Heinrich, Vice Chairman Schweikert, members of the committee, thank you for the invitation to participate in this important hearing on artificial intelligence and its potential to fuel economic growth and improve governance. My name is Adam Thierer and I'm a senior fellow at the R Street Institute, where I focus on emerging technology issues. I also recently served as a commissioner on the US Chamber of Commerce Commission on Artificial Intelligence, Competitiveness, Inclusion and Innovation. Today I will discuss three points relevant to this hearing. First, AI and advanced computational technologies can help fuel broad-based economic growth and sectoral productivity, while also improving consumer health and welfare in important ways. Second, to unlock these benefits, the United States needs to pursue a pro-innovation AI policy vision that can help bolster our global competitive advantage and geopolitical security. Third, we can advance these goals through an AI opportunity agenda that includes a learning period moratorium on burdensome new forms of AI regulations.

I'll address each point briefly, but I've included three appendixes to my testimony for more details. AI is set to become the most important general purpose technology of our era, and AI could revolutionize every segment of the economy in some fashion. The potential exists for AI to drive explosive economic growth and productivity enhancements. While predictions vary, analysts forecast that AI could deliver trillions in additional global economic activity and significantly boost annual GDP growth. This would be over and above the $4 trillion of gross output that the US Bureau of Economic Analysis says the digital economy already accounted for in 2022. But what really matters is what AI means to every American personally. AI is poised to revolutionize health outcomes. In particular, AI is already helping with early detection and treatment of cancers, strokes, heart disease, brain disease, sepsis, and other ailments. AI is also helping address organ failure, paralysis, vision impairments, and much more.

The age of personalized medicine will be driven by AI advancements. AI can help make government more efficient as well. Ohio Lieutenant Governor Jon Husted recently used an AI tool to help sift through the state's code of regulations and eliminate 2.2 million words of unnecessary and outdated regulations. California Governor Gavin Newsom just announced an effort to use generative AI tools to improve public services and cut 8% from the state's government operations budget, and regulators are already using AI to facilitate compliance with existing policies such as post-market medical device surveillance. AI also holds the potential to achieve administrative savings for federal health insurance programs, or better yet, reduce the number of people dependent on them by identifying and treating ailments earlier. There's an important connection as well between AI and broader national objectives. A strong technology base is a key source of strength and prosperity, so it's essential we do not undermine innovation and investment as the next great technology race gets underway with China and the rest of the world.

Luckily, US innovators are still in the lead. Had a Chinese operator launched a major generative AI model first, it would've been a veritable Sputnik moment for America. Still, China has made its imperial ambitions clear with the goal to become a global leader in advanced computation by 2030, and it has considerable talent, data, and resources to power those ambitions. Experts argue that China's whole-of-society approach is challenging America's traditional advantages in advanced technology. We therefore need an innovation policy for AI that will not only strengthen our economy and provide better products and jobs, but also bolster national security and allow our values of pluralism, personal liberty, individual rights, and free speech to shape global information markets and platforms. If, by contrast, fear-based policies impede America's AI development, then China wins. To achieve the benefits that AI offers and meet the rising global competition, America needs what I call an 'AI opportunity agenda.'

An AI opportunity agenda begins with reiterating that the freedom to innovate is the cornerstone of American technology policy and the key to unlocking the enormous potential of our nation's entrepreneurs and workers. As part of this agenda, Congress should craft a learning period moratorium on new AI proposals such as AI-specific bureaucracies, licensing systems, or liability schemes, all of which would be counterproductive and undermine our nation's computational capabilities. In addition, this moratorium should consider preempting burdensome state and local regulatory enactments that conflict with our national AI policy framework. Next, Congress should require the government's existing 439 federal departments and sub-departments to evaluate their current policies towards AI systems with two purposes in mind. First, to ensure that they are not overburdening algorithmic systems with outdated policies. And second, to determine how existing rules and regulations are capable of addressing the concerns that some have raised about AI.

Taking inventory of existing rules and regulations can then allow policymakers to identify any gaps that Congress should address using targeted remedies. Finally, an AI opportunity agenda requires openness to new talent and competition. Experts find that with a talent war brewing between the US and China, China is moving ahead in some important ways and we must take steps to attract and retain the world's best and brightest. In sum, America's AI policy should be rooted in patience and humility instead of a rush to over-regulate based on hypothetical worst case thinking. We're still very early in the AI life cycle. There's still no consensus on even how to define the term, let alone legislate beyond establishing definitions. I thank you for holding this hearing and for your consideration of my views. I look forward to any questions you may have.

Dr. Ayanna Howard:

Chairman Heinrich, Vice Chairman Schweikert and members of the Joint Economic Committee, thank you for this opportunity to participate in today's hearing on artificial intelligence and its potential for job growth and governance; it's an honor to be with you today. My comments in this testimony are focused on the national importance of AI literacy and its role in augmenting the current and future workforce talent pool, as well as the government's role in enabling this to happen. While the demographics of the US are changing, these changes are not reflected in the diversity of students pursuing degrees related to AI, engineering, and computer science. According to the 2023 World Economic Forum Future of Jobs report, AI continues to shift the skills that are needed within the workforce, in some cases creating new jobs, augmenting old jobs, and eliminating other jobs. The AI talent shortage is thus not just a US problem. Buying outside talent is thus no longer a viable option to solve this issue.

Too often, though, we disregard our untapped talent pools. Organizations tend to over-index on hiring new talent with needed skills versus upskilling their current workforce. As an educator, I have witnessed bright students who, because of gaps in the high school curricula, leave the engineering major because they struggle when they take their first discipline-specific engineering course. Yet when we have instituted enrichment programs such as Preface and Accelerate in the College of Engineering at Ohio State, we have seen quantifiable growth in student retention and graduation rates in engineering. There is thus no reason beyond intentionality and resources why organizations, government agencies and educational institutions can't institute similar AI training and literacy programs within their own organizational borders. There has been some movement in Congress to expand the Digital Equity Act into an AI literacy act, but there needs to be more. As a technology researcher and college dean, I also dabble a bit in policy with respect to AI and regulations.

I think policy will be critical to building trust. Policies and regulations allow for equal footing by establishing expectations and ramifications if companies or other governments violate them. Now, some companies will disregard the policies and just pay the fines, but there is still some concept of a consequence. Right now there's a lot of activity around AI regulations. There's the European Union AI Act, which parliament just adopted in March 2024. There are draft AI guidelines that were released by the Japanese government, and slightly different proposals in the US, including President Biden's AI executive order. There's state-specific activity too. Over the past five years, it's been documented that 17 states have enacted 29 bills that focus on some aspect of AI regulations. In fact, on June 11th of this month, I'll be participating in an AI symposium at the Ohio Statehouse, which brings together academic leaders, policymakers, and industry experts to talk about the challenges and opportunities that AI poses for Ohio's universities. But this practice of each state coming up with its own rules for regulating AI will continue if AI bills are not being passed at a federal level, and that is a problem.

I believe we have a lot of room for improvement in making sure that people not only understand technology and the opportunities it provides, but also the risk that it creates. With new federal regulations, more accurate systems, and increased AI literacy training and upskilling for the untapped labor market, this can happen. The intersection of the country's growing dependence on advanced AI technologies, coupled with a clear shortage of AI talent, is fast becoming a national security issue that must be addressed urgently. In 2021, Secretary of Defense Lloyd Austin emphasized in a speech that sophisticated information technologies, including artificial intelligence, will be key differentiators in future conflicts. The US, though, has its risks: we don't have enough talent trained with the sufficient AI literacy that is needed for advancing emerging technologies critical to maintaining American leadership. If we are not careful, we might be living another 1957 Sputnik moment today. With nearly every aspect of life evolving to being coupled to AI, the US cannot afford to sit back and wait for an AI-based crisis to hit. We are at a crossroads. The US must make an equivalently bold investment in growing the AI talent pool to help protect democracy, citizens' quality of life, and the overall health of the nation. I want to thank you for this opportunity to participate in this important hearing, and I appreciate the committee's attention to this topic and look forward to answering your questions. Thank you.

Dr. Jennifer Gaudioso:

Chairman Heinrich, Vice Chairman Schweikert and distinguished members of the committee, thank you for the opportunity to testify today on the crucial role of the national labs in driving AI innovations. Doing AI at the frontier and at scale is crucial for maintaining competitiveness and solving complex global challenges. Today I want to emphasize two key points about how the national labs can and should contribute to frontier AI at scale. First, the role of the national labs in accelerating computing innovations through partnerships. And second, the role of the national labs in critical AI advances aligned with our national interests, to date and going forward. But first, let me provide a brief overview of Sandia National Labs to provide context for the rest of my testimony. Sandia is one of three research and development labs of the US Department of Energy's National Nuclear Security Administration. Our roots go back to World War II and the Manhattan Project.

Throughout its 75-year history as a multidisciplinary national security engineering laboratory, Sandia's primary mission has been to ensure the US nuclear arsenal is safe, secure, and reliable, and can fully support our nuclear deterrence policy. Importantly, there is strategic synergy and interdependence between Sandia's core mission and its capabilities-based science and engineering foundations, because breakthroughs in one area beget discoveries in others in a cycle that pushes breakthroughs and fuels advancements. For decades, the Department of Energy national labs have been pioneering breakthroughs in high performance computing through strong public-private partnerships. This collaborative approach has greatly enhanced America's overall competitiveness. As Mike Schulte from AMD Research said, "one of the key takeaways is how impactful the Forward programs were on our overall high performance computing plus AI competitiveness. We not only created great systems for the Department of Energy, but in general it greatly enhanced our overall competitiveness in high performance computing, AI, and energy efficient computing."

Another powerful example is our recent tri-lab partnership with Cerebras Systems that I discussed in my written testimony. Let me expand upon the impact of that partnership by sharing the latest results. Funded by NNSA, the team achieved a major breakthrough using the Cerebras Wafer-Scale Engine to run molecular dynamics simulations 179 times faster than the world's leading supercomputer. This required innovations in both hardware and software. This remarkable advancement has the potential to revolutionize materials science and drive scientific discoveries across various domains. For example, renewable energy experts will now be able to optimize catalytic reactions and design more efficient energy storage systems by simulating atomic-scale processes over extended durations. This partnership exemplifies how to open up new frontiers in scientific research, potentially transform industries, and address critical global challenges while pushing the boundaries of AI and computing technologies. The DOE national labs have also researched AI for decades, with a focus on addressing critical challenges for the nation.

Recently, 10 of these laboratories, including Sandia, showcased their work at the AI Expo for National Competitiveness in Washington, DC. At the expo, the labs highlighted their contributions to AI research and their ability to contribute to the frontiers of science and solve national energy and security challenges. The labs are developing reliable and trustworthy AI-based solutions for critical areas such as nuclear deterrence, engineering, national security programs, non-proliferation, energy and homeland security needs, and advanced science and technology, pushing AI to the frontier and scaling it. Through the Department of Energy's Frontiers of AI for Science, Security and Technology initiative, known as FAST, we'll maintain US competitiveness and solve global challenges. The national labs' long history of driving computing innovations, coupled with our strategic AI research focused on key applications, makes DOE and the labs invaluable partners for realizing AI's full potential through secure, trustworthy and high performance systems. In New Mexico, we are working with our premier institutions and industrial partners in the state to finalize the New Mexico AI Consortium. This consortium seeks to transform the landscape of AI research, cultivate a skilled workforce, and build a robust infrastructure to support cutting edge AI research, education and commercialization in the state. By harnessing the labs' capabilities through academic and industry partnerships, we can lead the world in AI while safeguarding our national interests. I welcome discussions on how we can work together on this critical imperative. Thank you for convening the hearing, and I look forward to your questions.

Sen. Martin Heinrich (D-NM):

Thank you, Vice Chairman Schweikert. Dr. Gaudioso, as you talk about in your testimony, national labs like Sandia have historically played an important role in innovation and technology development. How has that prepared them to steward AI development?

Dr. Jennifer Gaudioso:

The national labs, when it comes to AI development -- one, we have a history of working in AI and in the algorithms. Our work in advancing computing technologies has been focused on supporting the simulation missions and the science the labs have, but we have also been using that computing power to start pushing large scale AI. We in the national labs also actually have the free world's largest scientific workforce and the unique data sets that science has. So for instance, ChatGPT and other types of large language models are built on the corpus of knowledge that's in the internet. We know that we can build much more exquisite and impactful models if we train them on the exquisite science data that we have in the Department of Energy, and we look forward to using that data to build models that can transform how we do science to solve our challenges.

Sen. Martin Heinrich (D-NM):

Can you explain a little bit of that? Because there is a tendency among some of our colleagues to think of AI now just as a really elegant chatbot, something that can respond back with language such that you would be hard pressed to know whether it was a human or not on the other side. But when you take a large language model and you put it on top of some of these foundational science models, so that you can use language as the basis to coax new science, new alloys, new molecules, new pharmaceuticals out of these foundational models, you get really powerful combinations. Can you talk about the opportunities there a little bit?

Dr. Jennifer Gaudioso:

I'd be happy to discuss those opportunities, because I think we have the large language models that are trained on language, visual arts, and other popular media. We now need to train physics models. We need to train them on chemistry data, and these models will help us be able to make connections in the science data that today -- I'm a chemist by training. I was trained to read the scientific literature, comb through the data, spend years trying to make sense of the world around me, make a hypothesis, design experiments to test my hypothesis, and iterate. Well, if we can train a chemistry AI model, I have my own student intern right there with all of the world's chemistry knowledge, or at least the trusted chemistry knowledge, encoded in it, and we can use that to make science go much faster and to make connections that no human is ever going to make. And so we're already seeing this in materials discovery.

Sen. Martin Heinrich (D-NM):

Yeah, material science in particular is just an incredibly slow, painful, long-term endeavor in the normal course of how we do science, and I think this is really going to change that dramatically. We heard a little bit about the importance of labor and workforce in maintaining our advantages in AI, but you mentioned something else, which is data. Talk a little bit about the unique data sets that we have at places like our national labs, and for that matter data curation -- the importance of data curation -- and how that gives us a leg up over some of our competitors as well.

Dr. Jennifer Gaudioso:

Yeah, the data is really at the heart of AI, right? And we have both open science data -- the Office of Science laboratories, the national labs broadly, do science to advance the public interest, and so most of the science data we have is public, but we as the scientists that discover and produce that data know how to interpret it and how to curate it to make it AI-ready and to be able to use it to build these models. But we also have access, as federally funded research and development centers -- we have trusted partnerships with the US government, and we have access to national security science data that we use, as Sandia does, in designing hypersonic reentry bodies or nuclear weapons. And that data, which of course we don't want to make public, can be used to train closed foundation models that will help us change the design life cycles and respond to the speed of the national security threats we're facing today.

Sen. Martin Heinrich (D-NM):

Great. I'm going to yield back the rest of my time, vice chairman.

Rep. David Schweikert (R-AZ):

Thank you, Chairman Heinrich. Dr. Miller, first, you already know I'm a bit of a fan of what you do and the way you think. Can you play a game with me, instead of my just reading a written question here? I come to you, you get to use the full power of what you believe exists today and is going to exist over the next year. How could you revolutionize medicine? How could you revolutionize the cost? How could you revolutionize making people well, and the morality of ending and providing cures?

Dr. Brian Miller:

A couple answers. One, if you have high blood pressure, we have software that could titrate the medications for you. You could do that at home. You could send me a message, I could talk with you about exercise. And in fact, software in theory could titrate lots of medications for lots of common conditions. You wouldn't even have to necessarily leave your house to see me. In fact, a lot of the time you might not even need to see me, and then see me for acute concerns. You could automatically have your clinical preventive services ordered. You could have your colonoscopy ordered, and if relevant, a PSA to check for prostate cancer. So a lot of care could occur not just outside the walls of the clinic, but also even outside needing to see a physician. And then let's say you had a condition and you had to do a prior authorization, which my colleagues and I don't particularly enjoy doing. Imagine if the first layer of review and then approval were automated and in near real time.

Rep. David Schweikert (R-AZ):

We have that piece of legislation. So, doctor, within that scope, you have the data off my wearables, my breath biopsy, whatever it may be. Do you see a world where, at least at the basic level, the AI and the algorithm that's attached to it could write the script?

Dr. Brian Miller:

Absolutely.

Rep. David Schweikert (R-AZ):

Okay. That was clean, without a whole lot of struggle. Dr. Howard, this is a little bit different, and you need to correct me. I was listening to your discussion about how we need more people of variety who are writing AI and code, but in some ways, maybe I have the utopian vision that it provides access for more people to be able to use technology. Most people have no idea how to write an app, but they can use the app to do technical jobs. So yes, there may be this hierarchy of, over here, my people writing code, doing those things, but over here, isn't this an empowerment for almost every American to do things that are much more complex?

Dr. Ayanna Howard:

Yeah, it is. So when I define AI literacy, it's not about creating computer scientists or coders. It's about making every citizen understand how to interact with AI to do their jobs better. So it's allowing doctors to basically talk on their phone and it transcribes it into the actual records that can then be shared with other doctors. So that's really about it.

Rep. David Schweikert (R-AZ):

That's a much more elegant way to phrase it. Mr. Thierer, what's my GDP growth? I have a personal fixation on where we are demographically as a country; we're getting old very fast. We often don't want to talk about it, but we have to be brutally honest: 100% of the calculated future debt for the next 30 years is interest and healthcare costs. And if a decade from now we backfill Social Security, it's demographics. What is your vision of AI? The growth, the labor substitution? Does it save us?

Adam Thierer:

Yeah. Well, nothing can save us, but it can certainly make a major contribution towards the betterment of our government processes and potentially our debt. There's been various estimates, congressman, on exactly how much AI could contribute to overall gross domestic product, the low end being something like at least 1.2% annually, but it goes up from there with one forecast for $15.7 trillion.

Rep. David Schweikert (R-AZ):

You need to be slightly louder.

Adam Thierer:

...1.2% annually GDP boost and a $15.7 trillion potential contribution to the global economy by 2030, according to another report. I have all this data in a supplement to my testimony, and again, the estimates vary widely, but the bottom line is that almost all economists, scientists, and consultancies realize that this is a great opportunity to once again build the biggest digital technology companies in the world by market cap with American technology.

Rep. David Schweikert (R-AZ):

Alright, to our true AI expert, Mr. Beyer.

Rep. Don Beyer (D-VA):

First of all, Mr. Vice Chairman, thank you very much for convening this. And Dr. Miller, in fact, I just got off a Zoom a couple of nights ago with Dr. George Church at Harvard, who was explaining to me that he and his colleagues have built new microorganisms with DNA completely unlike anything else on the planet. And because of that, the viruses don't work; they're completely immune to viruses [break in video]. But we have seen in the past that the introduction of new technologies to medicine hasn't necessarily improved things. You specifically talk about the absence of labor productivity growth in healthcare. The best example I can think of is EHRs, electronic health records. Veterans Affairs and DOD have been fighting for years about how to bring them together. How do we acknowledge the 17 to 19% of GDP we spend on healthcare, roughly double any other place in the world, and use AI to bring down those costs and bring up labor productivity?

Dr. Brian Miller:

One thing that gets in the way of us actually using it in a productive and proactive fashion is state and federal regulation. There's a role for state and federal regulation, but we don't want it to go to town and prevent people from innovating at the bedside, getting it into practice, and writing it up as a service. Why not let them bill, if they can provide that service and compete? And if you have that competition within a population-based payment system like Medicare Advantage or Medicaid managed care, you can potentially drive service delivery and innovation for consumers to then have choice. Human service, maybe with a Bluetooth exam. They could have remote service, like audio or video only. They could have automated service, right, from software, or they could even [break in video]. If we drive policy to give consumers that choice, that will improve labor productivity, because the consumers will choose.

Rep. Don Beyer (D-VA):

Great. Thank you, Dr. Miller, very much. Mr. Thierer, your 10 principles to guide AI policy. You said it's equally important that lawmakers not demand that all AI systems be perfectly explainable in how they operate. We had Secretary Xavier Becerra in here recently over at Ways [and Means Committee] and he said [indistinguishable] prior authorization being made by AI. What are the limits of explainability? What could we as lawmakers really demand in terms of explainability?

Adam Thierer:

Well, transparency is a good principle, but the question of how to mandate it by law is always tricky. And when you get specifically into algorithmic explainability, the question is exactly how do you explain all the inner workings of a model before it gets to market, right? That's very difficult. And what I've articulated in the 10 principles that I sent up to the AI task force was basically to look at outputs on the back end rather than try to micromanage models and figure out how explainable they are, because I think that's a fool's errand. I don't think that can be done efficiently without actually looking at the output [indistinguishable].

Rep. Don Beyer (D-VA):

Right, balance. But thank you for the principles and Mr. Chairman, I yield back now.

Sen. Eric Schmitt (R-MO):

America is poised to enter the next decades of the 21st century hand in hand with the technology that could possibly define it: artificial intelligence. Decades of innovation and entrepreneurship have led to this point, from industry titans like Nvidia to innovation centers like St. Louis's own geospatial hub. America is ahead in the AI race and has the resources to double down on its unique advantages. Yet America's position in AI is under constant pressure. China is investing billions and billions into its own AI industry. Some of this investment is for AI surveillance technology, to export their malignant surveillance state abroad. There is no telling what could happen if China became the dominant player in the 21st century. I'm sure China is watching us. Europe is too, hoping that we bury our burgeoning AI industry in unnecessary regulation and lose sight of what got us in this position in the first place.

The worst thing we could do in this race toward AI is stifle innovation by unleashing the bureaucrats and putting crippling regulations onto innovators. The EU has done this, and now Europe will most likely be watching this race from the sidelines. Yet there have been rumblings here on Capitol Hill and at fancy summits all over the world that the US should over-regulate this industry. This would only serve to hamstring our innovation and give China the keys to this amazing technology. I want to zero in on this. I think this is a common theme that we hear about, overregulating, and I think Americans here are concerned about this, but I want to drill down on that a little bit, and maybe, Mr. Thierer, I'll start with you. What do we mean by that? How would you define it? Colorado has passed some regulations that even their governor has questioned. Just using that as one example, what is it that we should be concerned about in this framework?

Adam Thierer:

Certainly, thank you for the question. So first of all, as of noon today, there are 754 AI bills pending across the United States of America. 642 of those bills are at the state level. That does not include all the city-based bills. Probably the most important AI bill that's passed so far is New York City's, not New York State's, New York City's. And then there's a patchwork, right? So the cumbersome nature of all those compliance rules added on top of each other, even if well-intentioned, can be enormously burdensome to AI innovators and entrepreneurs. So that's just one thing to note. The other thing to note is that there have been discussions about the idea of overarching new bureaucracies or certain types of licensing schemes. I have no problem with existing licensing schemes as applied in the narrow, focused areas where AI might be applied, whether it's medicine, drones, driverless cars, but an overarching new licensing regime for all things AI is going to be incredibly burdensome.

That's a European approach. We don't want that. And Senator, let me just say something about your China point. This is really important. We're here on June 4th; this is the 35th anniversary of the Tiananmen Square Massacre. And when we talk about the importance of getting this right for America and our global competitiveness, it's important for exactly the reason you pointed out, because if we don't and China succeeds, then they're exporting their values, their surveillance systems, their censorship. The very fact that I just uttered the term Tiananmen Square at this hearing means this hearing won't be seen in China. I apologize for that to everyone else here. But the bottom line is that what's at stake is geopolitical competitiveness and security and our values as a nation. And so this is why we have to get it right.

Sen. Eric Schmitt (R-MO):

So it's interesting, because when I was going to school, the idea was that the more literate a society became, the more educated they became, the more open they became, the more likely they were to become a democracy. And China was always kind of the example: maybe if there are fewer poor people there and they're more literate, ultimately they'll demand more. But interestingly, AI, even very low-tech AI as it relates to surveillance, has uniquely powered communist regimes, right? It empowers a totalitarian level of control that 30 years ago I'm not sure anybody could really foresee. And that's certainly what they've capitalized on, to your point. And if people think that that is a way to maintain power, which has been the way of the world in many places, you're right, they become the dominant player in this. I do want to just shift with a little bit of the time that I have left, and anybody please chime in on this point, but I'll start with you, Mr. Thierer, again. Big tech versus little tech here. I think there's a concern, at least one that I have, that a regulatory scheme, or something we do, sort of protects the big players but ultimately leaves out the innovation, again, that got us to this point. How do you view this, and what can we do to guard against that? Because I do think there are some folks who want to have a more sort of protectionist view of the big players here, that they have all the answers, and they're very important players, but not the only players. How do you guard against shutting out little tech in this process?

Adam Thierer:

Amen to that. So let's take a look at Europe. One of the things I always ask my students, or crowds that I talk to about AI policy or technology policy, is to name me a leading digital technology innovator headquartered in the European Union today. Silence. That has everything to do with getting policy wrong. And what the European Union is doing right now, the only thing they're exporting on the digital technology front, is regulation. And basically that's all they've got left. And they're trying to regulate mostly large American tech companies. And so what's ironic about their regulatory regime is that it was meant to keep things more in check and competitive, but there's only a handful of large technology companies that can comply with those rules and regulations. We don't want that to happen in the United States. We have thousands upon thousands of small entrepreneurial companies starting up in the AI space right now. And this is the hope for the future, especially open source technology right here in America, that's happening on the ground. We have got to preserve that entrepreneurial freedom-to-innovate kind of model for the United States so we don't become the innovation backwater that is the European Union. Thank you, Senator.

Sen. Amy Klobuchar (D-MN):

Thank you very much. Thanks for doing this important hearing, and thank you to our witnesses. I come from a state that believes in innovation. We brought the world everything from the pacemaker to the Post-it note. And I also think that we have to get ahead of this in a good way. We have to put guardrails in place. That's something that we really didn't do with tech policy, and now there are all kinds of issues with privacy, not to go into everything that we need to do, that I hope we can do differently with AI. And I think David Brooks, the columnist, put it best when he said the people in AI seem to be experiencing radically different brain states all at once, and that he found it incredibly hard to write about because it is literally unknowable whether this technology is leading us to heaven or hell.

We need guardrails that acknowledge that both are possible. So I'll start out: Senator Thune and I serve on the Commerce Committee, and we've introduced legislation that has gotten some positive feedback, the AI Research, Innovation, and Accountability Act, to increase transparency and accountability for non-defense applications, differentiating between some of the riskier applications, like electric grids, and others, and directing NIST, within the Commerce Department, to issue standards for critical-impact systems. So I guess I'll start with you, Mr. Thierer. The bill that I just mentioned takes a risk-based approach that recognizes different levels of regulation are appropriate for different uses of AI. Do you agree that a risk-based approach to regulation is a good way to put in place some guardrails?

Adam Thierer:

Yeah, absolutely. I wrote a paper about your bill senator--

Sen. Amy Klobuchar (D-MN):

Maybe I know that, it's kind of a softball beginning. Yes.

Adam Thierer:

Well, I love building on the NIST framework, right? Because that exists, and it was a multi-stakeholder, widely agreed-to set of principles for AI risk management. And so it's really good to utilize the existing regulatory infrastructure we already have and build on that first.

Sen. Amy Klobuchar (D-MN):

Very good. Do you want to add something, Dr. Howard? I also noticed that your testimony emphasized the importance of AI literacy training. And in that bill we actually direct the Commerce Department to develop ways of educating consumers, because this has got to be part of anything, including the work that Senator Heinrich, our leader here, as well as Senator Schumer and Senators Rounds and Young have done for the bigger base bill that we hope to be part of. Do you want to talk about literacy a bit?

Dr. Ayanna Howard:

Yeah. I think even if you think about doing policy right, you have to have individuals understand that definition of right. And if you don't understand AI, both the opportunities and the risks, there's no way that you can make great policy. And so when I think about this, it's not just computer scientists and engineers; it's everyone touching any type of technology who should understand how to define it, understand data, understand parameters, understand outcomes, understand what the impacts are on different markets, different populations. And so that's really important.

Sen. Amy Klobuchar (D-MN):

Do you want to add anything, Dr. Gaudioso?

Dr. Jennifer Gaudioso:

I think there is the importance of the risk framework, but there is also research that needs to be done to give us the technical underpinning, right? Trust is something that a human conveys, but we are still in the early stages of doing the research to understand what makes a model trustworthy: when does it respond within the bounds of our data? Where is it reliable, and where is it not? And so I think policy just needs to keep in mind where we are heading and what the technical basis is at any given point in time, because the technology to understand trustworthiness, the mathematical underpinnings, is something the national labs have researched for a long time, and it is moving quickly.

Sen. Amy Klobuchar (D-MN):

Very good. One of the things that is a hair-on-fire moment for me, just because I chair the Rules Committee, is the democracy piece of this. And I guess I'll ask you, this isn't really the subject, we're talking about innovation, but if our democracy is unstable because people don't know if it's the candidate they love or the candidate they don't like that is speaking, because you can't tell, it is just something that we have to think about in terms of going forward as a nation. Something like over 15 states now have required bans or disclosures on deepfake ads. Senator Hawley and I, as well as Senators Collins and Coons and many others, have put together a bill actually banning deepfakes, with exceptions for satire and the like. Senator Murkowski and I lead the bill on disclaimers. And I'm just really worried, with federal elections, that while states are doing things, which is good, and we don't preempt them on state ads, we have to guardrail our democracy here so people know who they are hearing from. And I often get worried that some little disclaimer at the end, no one's going to really notice. Do you want to answer that?

Dr. Ayanna Howard:

That is true. It's just like with consent forms: nobody actually reads them. And so one of the things is how do we provide individuals some transparency and trust in the information they're hearing, because we know it's very easy to manipulate individuals with advertisements and media. And so if those advertisements and media are very, very real, or associated with a candidate that people resonate with or don't, that will influence them, guaranteed, a hundred percent.

Sen. Amy Klobuchar (D-MN):

And Dr. Miller, I think I'm out of time, but I'll put a question in writing too, on tech hubs. I know that you know a lot about this; part of your testimony is on the importance of policies that promote the development of new science and new innovation. We have a lot of medical devices in Minnesota, and it's served our country well. And I just want to talk a little bit about that and tech hubs. And you can do it in writing, unless you want to add something and the chair will let me ask you that. Is that okay? Do you want to add anything to that?

Dr. Brian Miller:

Yeah, I guess one thought, I think, with tech hubs and also just tech innovation, is we often don't realize that the current status of purely human-driven care is actually frequently low quality and often highly unsafe. And so promoting innovation at universities and at small companies that changes that and automates components of care delivery, or assists nurses, doctors, pharmacists, whomever, in making decisions, will actually massively raise the quality and safety and efficiency of care. And I would say my greatest fear is actually that we don't take advantage of this opportunity, because the care delivery system is a mess.

Sen. Amy Klobuchar (D-MN):

Yeah. That's where you go to heaven or hell; we've got to make sure we get it right. Alright, thank you very much, Dr. Miller. Thank you all.

Rep. David Schweikert (R-AZ):

And Senator, that was a terrific question. It's sort of the... we sometimes are emotionally tied, and sometimes the disruption of the technology makes us nervous, but the math is the math. We've seen a number of papers that talk about the ability of AI to read the data coming off my watch or the wearable or the glucose meter or the thing you blow into, and being able to analyze that data, it is actually remarkably good, and statistically much more accurate than someone who went to postgraduate school for, what, nine years. And I feel crappy saying that because I can't imagine what your student debt is. On that note, thank you, Senator. Congressman Beyer and I were actually sort of channeling each other. What I'm trying to get at is a model where AI makes traffic better, where AI helps me attach an air quality monitor and we crowdsource our environmental data, and I accept some of that becomes technically an algorithm underneath; it's actually not crawling through a stack. But even where Congressman Beyer was, the ability to revolutionize the cost and delivery and efficacy of healthcare: what was it, about three weeks ago, a month ago, we had one of the first drugs solely designed by AI, a new molecule that looks like it has remarkable efficacy.

How do I get this to move fast? Because I believe cures are moral. And it's interesting, as you and I think about policy, is the solution an environment of taking a look at the outcomes and making sure those outcomes are effective, in some ways morally efficient? Because if we don't do something fairly dramatic on the cost of delivering services, I mean, yesterday we borrowed $101,000 per second over the last 366 days; it's a leap year. If I had come to you a few years ago and said we're going to be over a hundred thousand dollars a second in borrowing, and almost all the growth of borrowing is interest, interest will now be number two in our spending stack, along with the growth of healthcare. Am I channeling you appropriately?

Rep. Don Beyer (D-VA):

Wholly, very much so. It's terrifying to think that interest on the debt is greater than Medicare, greater than Medicaid, greater than the defense budget. It only has to catch up with non-defense discretionary and Social Security.

Rep. David Schweikert (R-AZ):

So as I come to all of you: you have the ginormous computers and lots of technical data that is not public, you have the next generation of students, you have the policy, and you have the case for how we could revolutionize healthcare. How do I deal with the fact that, when he and I have actually had conversations about telehealth and digital health, the fact of the matter is, in many ways, and you sat there as we talked about it, if the pandemic hadn't happened, I don't know if I would've ever gotten our telehealth bill a single hearing. It only moved forward because apparently grandma wouldn't know how to work FaceTime; turns out she's really good at it. I don't believe the next generation is talking to someone on the phone. I think it's reading the data off my body. How do I sell this story, Dr. Miller? How do we sell the morality of doing it better, faster, cheaper, and much more accurately?

Dr. Brian Miller:

I think it's immoral not to do that. If we don't give patients the choice of having cheaper, more efficient, more accessible, more personalized care, I think that we would be making a massive moral error. You mentioned telehealth. Twenty years ago, if we talked about telehealth, people would say that we were cuckoo for Cocoa Puffs, because no one's going to call their doctor or do Skype or FaceTime. Now it's the standard, and it took a global pandemic where a million Americans died for us to have telehealth. So I think the answer is, one, hopefully we don't have another global pandemic, but we don't want to wait until there's some catastrophic event to offer automated or autonomous care, right? If you're a poor American with chronic disease, autonomous and automated care or AI-assisted care is basically the best thing ever, because you'll get more access, you'll get higher quality, and it's going to be cheaper. So I personally think that we have to do it. It's not a choice.

Rep. David Schweikert (R-AZ):

Mr. Thierer, and I know you're going to respond to that. Does it make a difference in our world that, what was it, three weeks ago, Apple finally got its next generation of watch for cardiac arrhythmias? Those things are substantially certified as a medical device. Is that what you were talking about when you said the next generation of disruption is coming?

Adam Thierer:

Yeah, absolutely. And to answer your question, Congressman, about how we essentially sell these benefits: we talk about it in terms of opportunity costs. What would we be losing? What kind of foregone innovation would we lose if we don't get this right? Well, we can put numbers on this. Let's talk about some of the biggest killers in America today. 800,000 people lose their lives to heart disease, 600,000 people lose their lives to cancers every year. How about cars? Let's talk about public health and vehicles. Every single day there are 6,500 people injured on the roads in America; 100 of them die. 94% of those deaths are attributable to human error behind the wheel. I have to believe that if we had more autonomy in the automobile sector, we could actually make a dent, excuse the pun, in that death toll. And so this is where we can talk to the public about the real-world trade-offs at work if we get this wrong. We've had a 50-year war on cancer that goes back to the time when Richard Nixon was in office, and we've made some strides, but we could make a lot more if we had serious, robust technological change to bring to bear on this in the form of computational, algorithmic learning. This is where we can make the most difference.

Rep. Don Beyer (D-VA):

Mr. Vice Chairman, if I can wander for just 30 seconds. I was--

Rep. David Schweikert (R-AZ):

Waiting for you to step in. You were a little slow on the uptake.

Rep. Don Beyer (D-VA):

Well, I just wanted to help you stay on message, but if I can go off message for a minute, I wanted to respond to one of the things that Senator Schmitt said about licensing. My dear friend Tom Wheeler, who chaired the FCC, a Democrat, and clearly a left-of-center Democrat, called to tell me how important it was not to use licensing in AI. That when we did that, all we were doing was essentially embracing anti-competitiveness and locking in the advantage of the incumbents. And we need to be very careful about that. Senator Schmitt also started with two minutes on China. I also want to quote Martin Wolf, who was the editor-in-chief of the Financial Times, saying, please don't give up, that 20 years after liberalization is too soon to tell. That sooner or later, the state motto of Virginia, sic semper tyrannis, sooner or later the Chinese people are going to rise up.

Now, we need to be worried about the Chinese Communist Party, not the Chinese people, who will be demanding freedom sooner, hopefully, rather than later. And Dr. Howard, I have two Brunonian children, so it's wonderful to have you here, and I really appreciate your service on the National AI Advisory Committee. You really set the stage for the big executive order and all that, but I'm specifically interested in your emphasis on digital literacy. We've been looking at what Finland has done with multi-hour training in digital literacy as we struggle with deepfakes, which are now coming more and more: you start with the notion that we need to be teaching people what to be suspicious of and let their own instincts kick in. But how can we develop digital literacy in a much more robust way than what we've done so far?

Dr. Ayanna Howard:

So I think this is an area where you have to bring in academics, industry organizations, nonprofits, and government. I think about it very much like cybersecurity nowadays. People actually check to make sure: is this really spam? I'm not going to click the link. But I will tell you, five years ago everyone was clicking. And so how do you get people to be aware that this is an issue? Half of Americans have no clue that there might be a fake, that there might be manipulation, that an advertisement might be by a chatbot. And so it really is ensuring that we have this conglomeration of everyone thinking about how we train, within the organization and outside the organization, from K-12 to gray.

Rep. Don Beyer (D-VA):

Yeah. Thank you very much. But David, also, before yielding back to you, because I didn't turn my--

Rep. David Schweikert (R-AZ):

I was going to say, this is a conversation. We're doing almost a colloquy question model.

Rep. Don Beyer (D-VA):

Well, in a colloquy thing, I want to thank you for bringing together--

Rep. David Schweikert (R-AZ):

I'm also stalling because I have another member coming. Okay, so keep going.

Rep. Don Beyer (D-VA):

I want to thank you for getting the Joint Economic Committee to focus on the challenges of diabetes and end-stage renal disease. We had the same type of hearing a few months ago, and we've both been worried about the cost of dialysis. It took Mitch Daniels, former OMB director, et cetera, to do the math while we were sitting here and say 31% of our Medicare budget right now is just dialysis.

Rep. David Schweikert (R-AZ):

Think about what he just said: 31% of Medicare, 33% of all healthcare, is functionally diabetes.

Rep. Don Beyer (D-VA):

$260 billion a year. And now we have GLP-1 antagonists. We have solutions, not inexpensive, but so far–

Rep. David Schweikert (R-AZ):

Can you help me do some things on the Farm Bill?

Rep. Don Beyer (D-VA):

Oh, absolutely, everything we can. But this one, we look at how to deal with the hundred thousand dollars a second and how we trim down the 18 or 19% of GDP on healthcare, and not just GLP-1, but many other ways that we think about using technology and AI and better management to manage healthcare in America.

Rep. David Schweikert (R-AZ):

Yeah. Dr. Howard, just to stick in the back of your head, and it's a slight non sequitur: you're talking about teaching people technology literacy. What's our only success in the last decade at getting Americans to actually exercise? We've spent hundreds of billions. This is somewhat of a trick question, and he already knows the answer. It was gamification. It was Pokemon Go. I know that sounds absurd, but if you actually look at the data, Pokemon Go did more to get people out chasing the little--and we've often had this running discussion. What would happen if that type of technology said, here's how I train you to understand how to work ChatGPT? The gamification of everything, even down to healthcare and maintenance. If drug adherence is 16% of all US healthcare, when I forget to take my statin, when I don't do those things, how do I make it so my pill bottle cap beeps at me, or those sorts of things?

There are solutions that are genuinely ahead of us, and we're actually struggling to say, is there a unified theory of the ability to use this technology disruption? When I call the IRS, the person I'm talking to is actually a ChatGPT, but it stays on the phone with me and it helps me fill out my forms, and then maybe texts me the form I need, instead of someone who's been dealing with crazy for seven hours and doesn't really want to be on the phone with me. And that's actually going on right now, and so far the early data from the IRS experiment with using a chatbot has apparently been really good. That's human. So whether it be the cures, the education, or the miracles of producing new materials, we're trying to build the argument: many of us aren't that bright, but we get to sit here and read things that smart people write for us. But how do we create a unified theory of letting the technology run? Because God forbid, none of us truly know what it's going to look like a few years from now. It might be fair.

Rep. Don Beyer (D-VA):

Mr. Chairman, will you yield for a question? I thought you were going to tell us it was pickleball rather than--

Rep. David Schweikert (R-AZ):

You know, I don't like you anymore. I tried pickleball once and my 8-year-old beat me.

Adam Thierer:

Could I just wholeheartedly endorse what Dr. Howard had to say about digital literacy and AI literacy? Because this is really important. First of all, Representative Rochester has a really nice bill on digital and AI literacy that I think we should take a look at. That's really good stuff. And when we talk about AI literacy, digital literacy, we're talking about learning for life. No matter what kind of punches come at us, if we can roll with those punches and figure out how to adapt when we know more about the technology, it's about building resiliency, societal and individual resiliency. And people sometimes laugh at this. I was a co-chair of an Obama administration online safety and security task force where the only thing anybody in the room could agree on was the importance of digital etiquette and literacy. So there's a lot of agreement on this. This is a good place to start.

It's a good foundation for building that resiliency. And some people will say, well, that's not enough. Okay, fine. We'll find other remedies. But it can go a long way. I'm old enough to remember the problems we had in this country with littering and forest fires back in the sixties and seventies. And I remember, and I'm sure some of you up there do too, "Give a hoot, don't pollute." We addressed that, right? We went after littering with Woodsy the Owl and forest fires with Smokey the Bear, and things like that. We made a huge difference just with societal education about the problems of littering and forest fires, right? That wasn't a law that passed; that was actual societal learning that it was wrong to throw things out the window of your car. So you apply that mentality to the world of digital and AI policy, and we talk about, again, AI etiquette, if you will: proper behavior using algorithmic services and technologies, using LLMs, using these systems.

Rep. David Schweikert (R-AZ):

I want to go, and actually I also want Mr. Beyer to comment on this. You teach students; you already have to deal with lots of freaky smart people. Most of 'em bathe, I assume. That's actually really funny if you know some of her scientists. How do I deal with my brothers and sisters here who aren't Don Beyer, who are almost fearful of the technology? What do we do to take away--I swear they instantly think of a Terminator movie. What do you do in healthcare? I can't tell you the--you're going to forgive my inelegance and my language--the crap I take when I basically say the same things you have at forums of mine: here are my healthcare costs, here are things we could do to disrupt it using technology. And I will get administrators and this and that who come and say, well, we can't do that. It might be against our state law.

Dr. Brian Miller:

So technology allows us to operate at a higher level. I have a terrible sense of direction. So I use Google Maps and Uber and Lyft to get places. I don't pick up a rotary phone and call my friends to ask for directions and write 'em down on a notepad, right?

Rep. David Schweikert (R-AZ):

After you look it up in the phone book?

Dr. Brian Miller:

Yeah. I don't even have a phone book in the house anymore. And my iPhone organizes my calendar and email and tells me where to go and what to do, because I'm a little absent-minded. And that's the standard. That's the standard of my day. And I think if we make that analogy over to healthcare, where right now we have the rotary phone and we actually single-handedly keep the fax machine lobby employed, we have an opportunity to totally transform that. So the clinical example is: if your blood pressure is really low and you have septic shock and you're going to the ICU and you're getting pressors, they have to stick a big IV in your neck. 30 years ago if they did that, they'd just look at anatomical landmarks, put the IV in, and hope that they didn't hit your carotid artery, which would be bad. Now you use ultrasound; you do it ultrasound-guided. You have a little probe and you take a look. And if you tried to do it the other way, the nurse would run screaming into the room telling you that you're about to be negligent and do something bad. And the answer here is that technology will allow us to do a safer, more effective job. It will become the standard, and at some point to actually not use technology will be negligent.

Rep. David Schweikert (R-AZ):

You get the last word.

Rep. Don Beyer (D-VA):

Well, first of all on your comment on gamification, I wanted to show you, David, that I'm on day 643 on Duolingo.

Rep. David Schweikert (R-AZ):

I'm so proud of you.

Rep. Don Beyer (D-VA):

And that's only because of gamification.

Rep. David Schweikert (R-AZ):

But it makes my point of trying–

Rep. Don Beyer (D-VA):

And it'll ring at 11:30 at night if I forgot to do it.

Rep. David Schweikert (R-AZ):

That's what I want from pill bottle caps when you don't take your statin.

Rep. Don Beyer (D-VA):

And Dr. Gaudioso, I was very impressed with all your testimony, but especially the notion of scientific machine learning -- Sandia is fusing machine learning with scientific principles to solve scientific and engineering problems. For me, that is maybe the most exciting part of AI: not ChatGPT-4 or five or six or seven, but the notion that everything from fusion energy to how our biology works, et cetera, et cetera -- that you can use machine learning, the predictive parts of AI, to figure things out. Can you expand on that as a scientist?

Dr. Jennifer Gaudioso:

I would love to. Thank you for the question. I think to me, this is the really exciting potential, right? I mean, ChatGPT has shown us how it can change our daily interactions. And I was able to put my written testimony into our internal chat engine and ask it to help me make it a little less technical and more general. And it was great for providing me with a first draft and editing, but that's just been trained on the corpus of knowledge that's on the internet. I think what I get really excited about is the transformative potential of training models on science data, so that I have my chemist intern with me that can help me discover new science properties, that can then help me think through the physics and thermal and mechanical stresses to design a part that can be manufactured today.

We can just go from a new material to something that can be in our hands and usable, and transform not just how we do medicine and how we interact with patients, but how we make things in this country. And so AI has the potential, if we do it and we constrain it with science -- so that these concepts of hallucination and statistically guessing what the next answer should be based on what was learned are constrained by physics and chemistry and science data -- to let us do new manufacturing. We can make digital twins of the human body to take drug discovery from decades down to months, maybe a hundred days for the next vaccine.

Rep. David Schweikert (R-AZ):

Mr. Beyer, anything to follow?

Rep. Don Beyer (D-VA):

No, but I'm so glad that you're doing that. And one of the things we don't talk about much is as somebody who ran a small business for many, many years, the notion that one of the most important technologies is management. We don't tend to think of it that way, but the way we can explore the use of artificial intelligence to make management much better and management decisions much better once again, to the issue of making our world much more efficient, dealing with the hundred thousand dollars per second that we borrow.

Rep. David Schweikert (R-AZ):

And if we're lucky, we'll replace members of Congress with something intelligent. Never mind. And they've called votes for us on the House side, so--

Rep. Don Beyer (D-VA):

Or raise our pay. Can I ask one more question then?

Rep. David Schweikert (R-AZ):

Will it be short?

Rep. Don Beyer (D-VA):

Yeah, yeah. I'm positive.

Rep. David Schweikert (R-AZ):

Okay.

Rep. Don Beyer (D-VA):

Dr. Howard, you started Zyrobotics, and you also made, what's it say, STEM tools and learning games for children with diverse learning needs. Yes. I love that the chair of our AI task force, Jay Obernolte, has a machine learning master's from Caltech. So, sort of a smart guy, and he made his fortune in video games. I'd love to get your insight into how we use gaming to help educate people on not just artificial intelligence, but on everything else in the science world.

Dr. Ayanna Howard:

Well, with our robots, I could get five-year-olds to learn how to code through gamification. And it really is about how you provide small nuggets based on someone's knowledge, engaging with them and bringing them along, scaffolding them along. At the end, they're like, oh, I'm actually putting code together to do simple things, for a five-year-old. I think that could be done with adults as well.

Rep. Don Beyer (D-VA):

I'd love to work with you. I have a couple of ideas which we should go offline with, but David, thank you so much, Mr. Chairman.

Rep. David Schweikert (R-AZ):

And he knows that's actually one of my fixations. So there's a reason I like you. Thank you for engaging in this hearing with us. Be prepared: for three days, we may ask you questions. I am also going to ask you to do something a little bit different for the public record. If you have articles that you think would be appropriate for us to try to absorb -- in reality, we're going to make our staff read them and then give us the highlighted copy -- please send them our direction. And with that, we're off to votes. This hearing is adjourned.
