A Conversation with White House Office of Science and Technology Policy Director Arati Prabhakar

Justin Hendrix / Jun 16, 2024

Audio of this conversation is available via your favorite podcast service.

Dr. Arati Prabhakar is the Director of the White House Office of Science and Technology Policy (OSTP) and Science and Technology Advisor to President Joe Biden. This week, she hosted an event in Washington DC called "AI Aspirations: R&D for Public Missions." Speakers included executive branch officials and agency leaders, from the Secretary of Education to the Food and Drug Administration Commissioner, as well as lawmakers such as Senators Amy Klobuchar (D-MN) and Mark Warner (D-VA), and Representative Don Beyer (D-VA). Prior to the event, I spoke to Dr. Prabhakar about OSTP's priorities.

What follows is a lightly edited transcript of the episode.

Justin Hendrix:

Good morning. I'm Justin Hendrix, Editor of Tech Policy Press, a nonprofit media venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. This week, the White House Office of Science and Technology Policy hosted an event at the Johns Hopkins University Bloomberg Center in Washington, DC called AI Aspirations: R&D for Public Missions. The event was hosted by Dr. Arati Prabhakar, who serves as the Director of the White House Office of Science and Technology Policy. In her opening remarks, Dr. Prabhakar said, "AI is a priority for the administration and the goal is to figure out how to manage AI's risks so that we can seize its benefits."

Dr. Arati Prabhakar:

President Biden loves to say that America is the only nation in the world that can be defined by one word: possibilities. And that is exactly why we all got together here today. We're going to explore the possibilities ahead, if we put artificial intelligence to work in powerful and responsible ways to achieve big things for our country. We're here today to look into the future with purpose. We're going to imagine the future so we can build it.

Justin Hendrix:

The event featured dozens of speakers from across the government, such as Secretary of Education Miguel Cardona, Food and Drug Administration Commissioner Robert Califf, and Sethuraman Panchanathan, Director of the National Science Foundation. They spoke about promising applications of AI to solve a range of problems, from treating cancer to weather forecasting to clean energy. For instance, ARPA-E Director Evelyn Wang spoke about the potential for AI to help improve the nation's electricity grid.

Dr. Evelyn Wang:

Our AI aspiration for energy is simple: clean electricity for all.

Justin Hendrix:

And there was the hope that AI will improve the delivery of government services. Office of Management and Budget Director Shalanda Young emphasized the significant potential of AI to improve benefits delivery, particularly in enhancing customer service and making interactions with the government more efficient and user-friendly. Young cited examples such as Medicaid, Medicare, student loans and disaster relief as areas where AI can play a critical role in improving service delivery and accessibility. She noted that President Joe Biden's executive order on improving federal customer experience and service delivery directs every agency to prioritize customer needs. She said AI can help.

Shalanda Young:

What President Biden has tasked us with doing is improving customer service. The government actually cares that people like their experience with the government, and we have a long road to improve on a longstanding reputation that people don't like dealing with the government. And so we have a customer service experience team at the Office of Management and Budget that is doing exactly this and AI ...

Justin Hendrix:

A number of the speakers pointed to the need to focus on safety, often referencing the failure to institute policy on social media as a cautionary tale. For instance, Secretary Cardona said AI will play an increasing role in the education of the nation's youth, which means guardrails are necessary.

Secretary Miguel Cardona:

We're at a moment where our choices about AI really, really matter. Imagine if in the early years of the Industrial Revolution people came together and said, "We want all the progress the revolution can bring, but we also want to make sure that progress comes with protections for workers and consideration of environmental impact." Imagine if early in the history of the internet we thought this is going to have vast implications for people's privacy or how people protect their own work. Let's go ahead and lay down some markers now. I can't help but think if we did that during the increase of social media for our youth, I wonder what impact that would have on what we consider now a youth mental health crisis.

June 13, 2024: US Secretary of Education Miguel Cardona speaks at an event hosted by the White House Office of Science and Technology Policy (OSTP) at the Johns Hopkins University Bloomberg Center in Washington DC. Justin Hendrix/Tech Policy Press.

Justin Hendrix:

Senator Amy Klobuchar (D-MN) was one of multiple Democratic lawmakers to speak at the event. She mentioned the Senate AI roadmap recently released by Senate Majority Leader Chuck Schumer (D-NY) and the working group he led with Senators Todd Young (R-IN), Martin Heinrich (D-NM), and Mike Rounds (R-SD).

Sen. Amy Klobuchar:

And we've had all of these forums and startups and civil rights leaders and everyone involved so at least we got a good beginning to these discussions and then they put out this roadmap. Some people love it, some people want to see some changes. That's how policy is, and that's how you make laws.

Justin Hendrix:

Senator Klobuchar spoke of multiple pieces of legislation she hopes to advance. She indicated that the Artificial Intelligence Research, Innovation, and Accountability Act, which has garnered support from several committee colleagues, is set for markup in the next week or two. She said it would enhance transparency around AI deployment by requiring companies using high-impact AI systems in areas like healthcare, housing, and employment to provide information to the National Institute of Standards and Technology, and that it would bolster research and development and consumer education. Senator Klobuchar also pointed to the bill she proposed with Senator Josh Hawley (R-MO) and others, which is aimed at addressing potential harms of deepfakes, the Protect Elections from Deceptive AI Act.

Sen. Amy Klobuchar:

And hopefully we can get this moving as soon as possible so we don't have one of those situations where we are behind the rest of the world, because we want to lead the world. And the fact that so much of this technology was designed in America by our brilliant minds for good intentions ...

June 13, 2024. Sen. Amy Klobuchar (D-MN), Chair of the Senate Rules Committee & Judiciary Antitrust Subcommittee, speaks at an event hosted by the White House Office of Science and Technology Policy (OSTP) at the Johns Hopkins University Bloomberg Center in Washington DC. Justin Hendrix/Tech Policy Press.

Justin Hendrix:

Senator Klobuchar was followed by Senator Mark Warner (D-VA), Chairman of the Senate Intelligence Committee. He stressed the importance of addressing AI's impact on national security, market manipulation, and elections. Like others, he emphasized the necessity of learning from past mistakes around the failure to institute social media regulation, and advocated for proactive measures and the involvement of researchers and experts to ensure thoughtful AI governance. Senator Warner also indicated that important AI governance questions are at hand in defense contexts. For instance, he said there have been proposals for developing AI-driven drones for Ukraine. These drones would operate in swarms and, once deployed, would not require communication with humans, in order to avoid being jammed. Warner pointed out that this could potentially be the first major weapon without a human interface. While it could help Ukrainians on the battlefield, it prompts the need for careful consideration.

Sen. Mark Warner:

And this is not classified, so I can talk about this. I've seen some proposals recently in terms of how AI relates to national security, that there are some individuals developing for Ukraine the ability for the Ukrainians to bring a swarm of drones that, once you set them off, there would be no communication between the human and the swarm of drones, because where a lot of the drones have been taken down recently is through the jamming. So if you don't have that jamming back to a controller, and if you then ... The lead drone was taken down, it would be software interrelated, so the kind of brain drone would go to the second drone to the third. Very interesting, could be a breakthrough in terms of the war, but what I've just described in many ways could be the first major AI weapon without human interface. So how we think through this, even though we've got enormous, enormous needs, is very, very important.

June 13, 2024. Sen. Mark Warner (D-VA), Chairman of the Senate Intelligence Committee, speaks at an event hosted by the White House Office of Science and Technology Policy (OSTP) at the Johns Hopkins University Bloomberg Center in Washington DC. Justin Hendrix/Tech Policy Press.

Justin Hendrix:

Later in the afternoon, Representative Don Beyer (D-VA), a member of the House's bipartisan AI task force, expressed his enthusiasm for AI's potential to address significant challenges and improve lives. He emphasized the importance of developing meaningful legislation to manage AI's risks and, like Senators Klobuchar and Warner, warned against repeating the regulatory failure seen with social media. Representative Beyer outlined his five key aspirations for AI, including its use in advancing fusion technology, applications in healthcare, enhancements to management practices and supply chains, its potential impact on the scientific process, and the need for public investment in AI to ensure its benefits are broadly shared, not reserved for a wealthy few.

Rep. Don Beyer:

I had an interview last week with some podcasts on AI, and they wanted to know what my advice was for the many, many people studying AI at MIT and Harvard and Stanford and the like. And my instantaneous thought was, "Please don't just think about it as a way to invent a new technology and become a unicorn and a trillionaire. Please don't think about it as a way to get rich and famous. But rather do like the people in this room. Think about how it can change the lives of every person on this planet in a really good, productive way."

Justin Hendrix:

A day ahead of the event, I had the chance to interview Dr. Prabhakar for the podcast. We talked about her perspective on AI, how she believes the species will learn to use it, and how it will cope with its risks.

Dr. Arati Prabhakar:

I'm Arati Prabhakar. I'm President Biden's science and technology advisor, and I am the Director of the White House Office of Science and Technology Policy.

Justin Hendrix:

Dr. Prabhakar, I'm so glad that you can join me today, and I'm speaking to you just a day in advance of an event that you are hosting called AI Aspirations: R&D for Public Missions. What's going on in DC this week? What are you hoping to accomplish at this event?

Dr. Arati Prabhakar:

Well, it's great to be with you, Justin. Yeah, we've got a fun conference tomorrow. It's focused on how we use AI for public missions, but the context for this is of course that last year AI took the world by storm. It was already in our lives, but with chatbots and image generators, it was right smack in front of us and it became a very significant priority for President Biden and for Vice President Harris. They did a lot of work last year to get on the right path to managing AI's risks. They put out a very broad and comprehensive executive order back in October, and their approach was to say, "Look, promise and peril are going to come together with this technology, and we have got to manage the risks so that we can seize the benefits."

And we've made a tremendously important start on managing a number of the different kinds of risks that AI brings. There's still a lot more work to be done there, but tomorrow we are holding this conference to talk about the other side of the equation, which is how do we seize the benefits that AI can provide and not just for things that the market's going to make happen but for public missions? And so we're going to tell seven stories about huge aspirations that we have to use AI to make the world better for everyone in this country and around the world.

June 13, 2024. Dr. Arati Prabhakar, Director of the White House Office of Science and Technology Policy and Advisor to the President on Science and Technology, speaks at an event hosted by the White House Office of Science and Technology Policy (OSTP) at the Johns Hopkins University Bloomberg Center in Washington DC. Justin Hendrix/Tech Policy Press.

Justin Hendrix:

From prior remarks you've made, and from looking at the agenda for this event, I get the sense that you're trying to make sure that AI is not treated as synonymous with generative AI.

Dr. Arati Prabhakar:

That is actually part of what ... When you actually get into how we can use AI to do big things, you quickly find that, of course, the big surge in capability with large language models, with image generators, these multimodal models that are happening now, all of that is super exciting and will have important implications for public missions. But there are a lot of public missions, from health to weather to more sustainable materials, where what they really need is to be able to build models by training on very deep technical data, from weather sensors or biological data and clinical data, whatever their domain is. And so I think these different fields of AI build on each other, and there's a lot to be learned from the big surge in capability from LLMs, but certainly there's a much bigger world of applications than just that.

Justin Hendrix:

You said before that this period in AI technology is like medicines before clinical trials: anyone could market a potion as a cure, but no one knew what would happen if you took it. I know you're a student of the introduction of new technologies and how societies react across history. When it comes to AI, based on that framing, where do you think we are in this cycle?

Dr. Arati Prabhakar:

Well, the reason I really like the analogy to drugs and clinical trials is that it's actually not that clinical trials are perfect. It's just that they let us reap the benefit of pharmaceuticals by figuring out when they were safe enough and effective enough to give to people who had different diseases. And so with AI, similarly, I think what we all want is to know that a particular AI model is safe and effective and trustworthy. But in reality, what that translates to is: is it safe, effective, and trustworthy enough for the specific way that it will be used in the particular system that it is in?

That's a different question if you want to use a large language model to provide medical advice versus ideate about new forms of poetry. I mean, completely different applications. So I think we have to recognize that it's very application-specific, and that we are still in the early days of figuring out how to really assess how safe, effective, and trustworthy a model is, and then to design models to be safe, effective, and trustworthy. We're making a start with red teaming and sharing best practices. That has to happen, but there's so much more, including deep research, to learn how to do this well.

Justin Hendrix:

You've also said we are in choppy waters with this rapidly changing technology, and that means it's more important than ever to steer for the light of these fundamental values. You call out safety, individual privacy, freedom from discrimination, transparency. I get the sense that others want to set the North Star somewhere else, where innovation, exploitation, and market incentives are primary. As a society, if you were to step back, stand on the moon, and look down at the United States at the moment, do you think we have our priorities in order?

Dr. Arati Prabhakar:

You know, what I'll tell you is, first of all, I think it is a deep American value to seek progress through innovation. But I'll also tell you, having spent 40 years as a scientist and engineer, the reason we want to innovate is to make people's lives better, to make tomorrow better than today. It's not just about cool new technologies. And if you really care about building a better tomorrow, that actually brings you right back to values, because it's not just how marvelous the technology is. It is a question of how it actually comes into the world, into people's lives. Is it something that's going to level the playing field and give everyone opportunity, or is it going to be biased against certain communities? Is it going to provide marvelous things, but at the expense of your privacy, which is a pretty important American value in itself?

So there's a long list of values that have to be integral, and I don't approach this from the point of view of we either get to keep our values or we get to innovate. I want us to be going aggressively after managing AI's risk so that we tamp those down, and equally aggressively going after the big opportunities to use it while doing that responsibly.

Justin Hendrix:

I was, a couple of weeks ago, on a panel in DC at the Dirksen Senate Office Building with experts and advocates who were concerned with different aspects of AI technology. One of the folks whose remarks stuck out to me was Hannah Bauman, a legislative advocate for National Nurses United, and she said, "Nurses are really concerned about the rhetoric of abundance, how AI is going to cure cancer and then solve all these problems, when our direct experience in the hospital shows otherwise." She painted this picture of AI being used to essentially turn nurses almost into gig workers, having to check boxes on an iPad screen and essentially enfeebling their own decision-making. What do you think about, as far as a scientific priority, in terms of looking at the relationship between AI and labor?

Dr. Arati Prabhakar:

This is so important, and to me it's a societal imperative. And the example you gave, I think, really gets to the heart of it. Because, number one, the thing that has happened with the recent surge in AI is that a whole host of professions that we used to think of as creative work that could never be replaced by a computer are now subject to at least change, and possibly even really significant impact on jobs, because of generative AI. And so I think we just have to recognize that it's an important moment. Now, if you look historically, we've had decades if not eons of technology changing the nature of work, and what we know from history is that it is possible for technology to allow people to do more so that they can earn more, but we also know that it can hollow out jobs and eliminate jobs. And you are talking about something that's happening right now that I actually think is one of the darker sides of this, which is about using tech ...

There are companies and organizations that are using this technology essentially to surveil their workers in ways that really, I would tell you in some cases, even dehumanize jobs. So we've got some choices to make. I think it's very possible that we can achieve a future in which we're using AI wisely and amplifying what people can do. But that's not going to happen automatically. It's going to happen because we're really thoughtful about figuring out the different ways that technology can be developed, that we're thoughtful and constructive about how it gets integrated into work. We keep an eye on it. It's going to require workers, and this is why I think the labor community has such an important role to play. They have got to be part of this process of figuring out how AI can be used in ways that really do make us all achieve a better future, but in ways that lift workers up.

And so I think this is, again, this is an area where if we have an oversimplified model that you just throw technology at the problem, it's not going to come out well. If we can do it thoughtfully and if we can stay focused on these public purposes, I think it's possible we can navigate our way to a much better place.

Justin Hendrix:

You've been in and out of government and the private sector across your career, but I want to go back all the way to 1993, when you were Director of the National Institute of Standards and Technology, the first woman to lead that agency. Do you think NIST right now has the resources it needs to take on this challenge of setting standards for AI? We read headlines about even the offices at NIST being under-invested in. Do you think that there's enough muscle in NIST right now?

Dr. Arati Prabhakar:

Yeah, let's talk about NIST, and I want to pull back and talk about federal R&D a little bit more broadly. So absolutely. First of all, NIST has some big homework assignments from the President. In his executive order last October, among many other things, he instructed NIST to start an AI Safety Institute, which gives it some important responsibilities for getting us on a path so that we have benchmarks and standards and methodologies to be able to assess how safe and effective and trustworthy advanced AI technologies are. That turns out to be an extremely challenging question. It's fundamental to getting AI right, but it's a pretty hard question, and so they've got a lot of work ahead of them. Elizabeth Kelly, who's leading that effort at NIST, I think is off to a great start, but there's a lot more to be done.

And to your point about resourcing, they've gotten started with a budget, but that is an area where much more resource is going to be needed, and I want to put that in the broader context of what's happening with R&D. What's happening with federal R&D reflects a moment that we are in here in Washington. The President has very consistently pushed for significant increases in federal R&D. He made progress in his first couple of budgets, but unfortunately in this last budget cycle, due to the Republicans in Congress and their extreme budget caps, federal R&D took a very significant cut. And I'll underscore how important that is and how concerning that is by saying that the same week that that was happening, the People's Republic of China came out with an announcement that they were going to increase R&D spending by 10%.

So yeah, this is a time when it's AI jump ball around the world. Some of the big advances have come from US companies or US-based research activities, but everyone gets to use this technology. And how it turns out, actually this comes right back to values because every country is racing to use AI in ways that really express their values. And if we're going to make sure that it comes out in a way that really reflects American values, safety and security, privacy, not exacerbating discrimination, working through issues and making sure it's good for workers, then we've got to get on it. And I think, again, part of the reason we're doing this conference tomorrow is to remind people about how big the opportunities are and how much work there is ahead to get there.

Justin Hendrix:

I want to spend a minute on the global issues. You just mentioned competition with China, which is of course one framework through which lots of folks are looking at this question around AI, and certainly US competitiveness. Are there other parts of the global conversation that you're particularly tuned into? Are you working with your counterparts abroad? Are you engaged with the European Commission in particular on what it's doing around its AI Act?

Dr. Arati Prabhakar:

Yes. All of those conversations are going on. And let me step back and say that last year, when the President and the Vice President really turned their focus to AI, which happened in the spring of 2023, they were very clear that we needed to move out. I've talked a little bit about the executive order that came out of that work, but the work also included getting voluntary commitments, the first-ever voluntary commitments from AI companies. It definitely included bipartisan conversations with Congress as they are in their process of getting to what we hope will be effective AI legislation. But this work with our allies and partners around the world was also integral to what was going on all of last year. So as we moved out and got our house in order with the President's executive order, I would tell you, I think that was very effective in changing the nature of our conversation with our colleagues in the EU.

That's a region that shares a lot of our values and where we want to try to have a sufficient degree of harmonization to really support global use of AI, but we always approach these issues from somewhat different perspectives. And I think the fact that we did the work on the executive order here helped us do the work with the EU so that their EU AI Act is somewhat more harmonized, I think, than it initially was going to be.

Another example of what's been happening on the global stage is that the United Nations General Assembly just adopted the first-ever resolution on AI. That was a resolution that we initiated and that got I think 122 other nations to co-sponsor. So just the beginning but I think such an important example of American leadership on the global stage.

Justin Hendrix:

You've already mentioned that the executive order gave federal agencies a lot of homework, with OSTP right at the center, working with so many aspects of those agencies. I'd love to know what is next for you in terms of the calendar of deliverables, but in particular to ask about this question around the federal workforce. You're given a responsibility to drive a "national surge in AI talent in the federal government." What does that look like, and how long do you think that process will take to reflect where your goals are?

Dr. Arati Prabhakar:

Well, that's a great place to start. Among all the homework assignments, that was one that I was particularly passionate about. I think it's clear to all of us that government, like any other organization right now, is scrambling to get AI talent so that we can get this right. And so an AI talent surge was part of that executive order. We put out a call at the time of the executive order on AI.gov to ask people with all kinds of backgrounds and expertise to come serve. I mean, this is such an amazing moment to make an impact on how AI turns out for the world, and public service is a place to stand to move the world right now. And to my delight, we got a huge number of really exciting resumes, and we are working with a few folks who have come into government with all kinds of different private sector backgrounds. It's just so fun to see how much impact they can have when they translate from what they've done inside of a tech company or a startup.

And they bring it here and they translate it for everything from shaping policy to shaping how the government does its own implementation of AI. So I think we've got a very good start on that. This is going to be a project without end, because of course it needs to become integral to the hiring process of agencies and departments across government. But people are stepping up, and that's been a lot of fun to see. For me, this is personal, because I've spent half of my professional life in the private sector and half in government, and I can't tell you how valuable it has been for me to be able to have those different perspectives. And so I really think it's going to be important to get our agencies able to have the benefit of those different views as well.

Justin Hendrix:

And what's next in terms of delivering on the executive order? What is the immediate thing that's on your agenda this week and next?

Dr. Arati Prabhakar:

Yeah, there's a lot of work going on. Obviously this focus on AI aspirations and public purposes. I'll just mention some of the other things that are happening right now that I think are really important, back on AI risks and harms. I tell you, a year ago there was a lot of speculation and wondering about what would be the first real manifestation of harms at a massive scale from generative AI, because we knew that there was a huge range of risks, but the question was, which were really going to materialize as substantial harms in the near term? And unfortunately, I think we have the answer, and I think it is online degradation, especially of women and girls, with non-consensual intimate imagery. That was a problem before, but image generators now make it so trivially easy, and so many companies offer these capabilities to make deepfake nude images, that there's some evidence the problem has just really ticked up at an alarming rate.

So that's an area where we are looking at the problem, but then calling on industry. This is a problem that's moving fast, and the fastest response possible is really asking industry to step up to its responsibilities. We put out a call to action working with our colleagues at the Gender Policy Council here at the White House. And to my delight, I think we've actually gotten a pretty interesting response. But it's simple examples, like payment processors stating in their terms of use whether or not they will support these kinds of transactions. There are cases where payment processors actually have in their terms of use that they won't support these kinds of transactions, and yet they're not enforcing that. So there are things that could be done right now by companies in the whole tech ecosystem that could get after this problem, while we continue to pound the table and really ask Congress to step up with legislation on privacy and many other important issues for protecting people against AI's harms.

Justin Hendrix:

One other issue from the executive order I want to come to is around the environment and the resource-intensive nature of the development of foundation models and, more generally, the application of artificial intelligence. What can you tell us about that priority and any specific actions that you're taking in the near term on it?

Dr. Arati Prabhakar:

Yeah, this is an area that's gotten a lot of attention recently, which I think is good and important, and that's the increase in electricity consumption as data centers are built around the country. That was a growing part of our electrical load even before this big surge in AI interest, with extremely computationally intensive large language models and others. And so I think it's an important issue to take seriously. As we have dug in, what we have found is that data center power consumption is growing, but it's still a fairly small part of total electricity consumption in the United States. And in the projections going out into the future, it's very easy to be confident that energy use will continue to increase. It's very hard to be confident about the ...

And the error bars are huge on those projections, and I think so much depends on how markets develop and how much inference versus training happens and how efficient the chips are and how efficient the processes are and the algorithms are. So there are way too many variables. I think we know that the problem is real and it's growing and therefore it still needs to be dealt with.

And then I'll add one more piece of context, which is that if you look at electricity demand in the United States, for an extended period of time demand really hasn't grown that much, partly because of energy efficiency measures and partly because we've outsourced a lot of manufacturing. And as we come to the end of the road for those, as we start electrifying transportation and building energy use in order to meet our climate goals, then the pressure on electrical generation is going to grow from a lot of things even beyond AI data centers. So it's part of a much bigger landscape of dealing with our clean energy challenges. And so to flip it around from problem to opportunity, I think there really is an opportunity to use data center growth to help us drive this clean energy revolution that the President has gotten going with the bipartisan infrastructure law and the Inflation Reduction Act. We're now finally deploying clean energy at a scale that the climate actually notices. And my hope would be that we can use data center growth to further accelerate that, rather than lapsing back.

Justin Hendrix:

For my final question, I'll come back to a theme that we get into on this podcast fairly regularly. When you think about policy generally, you're always thinking about the future, what kind of future you want to have, and what societal interests you want to balance with other interests. It feels like that's especially true in the realm of tech policy, where you're also dealing with the American technological imagination. I want to get a sense of how you think about that. But I also want to put a slightly pointed question to you, which is: do you believe, at least at present, when you're thinking about the public interest and the public mission as you frame this event tomorrow, that the Silicon Valley interest is more salient than it should be with regard to how we think about the future?

Dr. Arati Prabhakar:

My home is Palo Alto, and so I've lived in that world, and I've lived here in DC, in R&D and now R&D policy. And what I'll tell you is I think that we've had ... The pendulum has swung. When I was here in the Obama administration leading DARPA, it was a time when the mood in the country was that tech could do no harm. And I would tell you now, I think that the mood is almost the opposite of that. And it really reflects the power of the technologies that have come into our lives. And the thing that has really been my guiding light for a very long time now: I've had the great privilege of working on very powerful technologies. That's DARPA's middle name. And in that life, and in my Silicon Valley life, I've seen how powerful technologies really shape our world.

And the North Star for me is to recognize that we talk about the technologies, but it's actually about human choices, about how we develop them and how we choose to use them. That's what's going to make this story turn out the way we need it to turn out. And so, despite all the conversation about training data and compute and flops and LLMs, all of that is important. But ultimately what really matters is who the people in the corporations are, and what choices they are making to make sure that this turns out in a way that benefits society and humanity. And I think that, to me, is the fundamental approach for policy and everything else that we're doing here.

Justin Hendrix:

I appreciate you taking the time to speak to me today, and I look forward to this event, seeing what comes out of it.

Dr. Arati Prabhakar:

Thanks so much, Justin. Great to talk to you.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
