A Conversation with Dr. Alondra Nelson on AI and Democracy
Justin Hendrix / Mar 16, 2025

Audio of this conversation is available via your favorite podcast service.
Dr. Alondra Nelson holds the Harold F. Linder Chair and leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study, where she has served on the faculty since 2019. From 2021 to 2023, she was deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy. She was deeply involved in the Biden administration’s approach to artificial intelligence. She led the development of the White House “Blueprint for an AI Bill of Rights,” which informed President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
To say the Trump administration has taken a different approach to AI and how to think about its role in government and in society would be an understatement. President Trump rescinded President Biden’s executive order and is at work developing a new approach to AI policy. At the Paris AI Action Summit in February, Vice President JD Vance promoted a vision of American dominance and challenged other nations that would seek to regulate American AI firms. And then there is DOGE, which is at work gutting federal agencies with the stated intent of replacing key government functions with AI systems and using AI to root out supposed fraud and waste.
This week, I had the chance to speak with Dr. Nelson about how she’s thinking about these phenomena and the work to be done in the years ahead to secure a more just, democratic, and sustainable future.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:

Dr. Nelson, I'm very pleased to have the chance to speak with you. And I have to thank you, because I have been using something that you presented at a Columbia University event maybe two years ago. On multiple occasions I've used what I think was essentially your idea, which is that you put up a picture of a World Economic Forum map of the quote-unquote polycrisis: all the different problems the world faces, from environmental degradation to economic inequality, to conflict and famine and all those things. And you pointed out that there were a bunch of tech issues on there, but they're off to the side, up in the upper left-hand quadrant of the polycrisis. These things include disinformation and the threats of emerging technologies, etc. And you said that seemed odd to you, that it seemed like the wrong way to visualize the relationship between tech and these broader problems we face. Why is that?
Alondra Nelson:
Yes. I remember seeing you there, and I actually had been thinking about that meeting because I'm going to another Knight Institute conference at Columbia, probably in a couple of weeks, so I had been thinking very much about that day. Part of it goes to, I think, my general philosophy and how I increasingly think about issues of science and technology policy, which traditionally had been treated much like that World Economic Forum graph, which was still a cool object to look at: it places technology or science or innovation issues off to the side, in their own vertical. And we live in a world where science and technology policy issues are certainly a vertical, things you need to know that are specific to them. But they are also a horizontal, a through line for literally every other issue. And so this is something that I've felt for a long time and have been trying to operationalize in the policy work I did in the White House, and I think also in my own academic work as a scholar. But the World Economic Forum graphic that you described just represented so clearly what was wrong and why it was wrong.
So if you were concerned about climate issues, if you were concerned about healthcare, if you were concerned about education, these are also really important issues about research and about technology and often about innovation. And the fact that we still live in a world in which we imagine the technology pieces are off to the side, rather than central to them, I think is one of the challenges that we're facing. And I would say, to bring it to the present from two years ago when we were first having this conversation, why I think the polycrisis is, as I said that day, a digital polycrisis: we're seeing part of that with what's happening with DOGE. I think that for some people it's been quite a surprise that if you can purloin, or however you want to describe it, get access to, beg, borrow, or steal access to technology infrastructure, information, and data about people, you can actually turn the US federal government upside down, turn trade relationships upside down, tumble how we think about labor, how we do healthcare research funding, and the like. And for me, that is an unfortunate proof point of that larger point. We really, and I have said this before in print, have got to get better at understanding and creating policy that appreciates that science and technology really sit at the center of all domestic and international policy issues.
Justin Hendrix:
So that does lead me right into my next question, which is how you're thinking about the near term. Clearly, there's a different sheriff in town, a different way of thinking about the primacy of innovation and technology in American leadership in the world. You've got four years ahead of a Trump administration. We'll see what happens after those four years are out. But how are you thinking about the second half of this decade? About this relationship between tech and democracy? What are we going to see play out?
Alondra Nelson:
I would note first, Justin, that you used the phrase "new sheriff in town," which is a phrase that's been used by members of the second Trump administration, which I think is really regrettable. That we think about the demos, that we think about democracy, that we think about government as being punitive, as policing, as a sheriff, is just, I think, very unfortunate, and I think it also very much highlights the different perspectives that people bring to the relationship between technology and democracy. How are we thinking about it? I think we're just in the middle of a pretty radical transformation. And so for me, how we don't want to think about it, and I'll answer your question in negation, is that figuring out all the technological pieces and what the technology is doing is how we figure out what we're doing. So certainly, we need to pay attention to what legislators, policymakers in this administration, and bodies like DOGE are doing and the ways that they are deploying technology to do their work. But the work that they are doing impinges on, infringes on, violates, supports, or doesn't support fundamental democratic pillars.
I don't want us to be distracted. As much as that's important, and at the risk of sounding self-contradictory, I don't want us to lose focus on the things that really matter, which is that to the extent technology matters, it matters because it is supposed to be delivering things for people, not taking away things that are meaningful and helpful and effective and that encourage wellbeing. And so I think I'm very worried that we are sometimes so focused on what AI might bring, or on a very narrow form of efficiency that is about throwing a silver bullet of technology at something and hoping it's going to fix a vast, complex system or a very complex social problem. I much more want to get us back to that space of: what's a fair shake for the American public? What is true in the world and not true? What helps people feed their families? What helps provide more education for more people? How do we grow more food for more people, and do it in a way that allows us to live in a world that we're not quickly burning down, whether with vast data centers heating up the climate or with social erosion in which more and more people feel excluded from the broader public?
Justin Hendrix:
It feels to me very true that right now, we're in a period where the leadership in the US, at least, is willing to make the bet: let's burn the coal, let's burn the oil, let's burn the gas, let's build the data centers. Let's do everything we can to advance artificial intelligence technology, through mostly large US tech firms, as quickly as possible so that we can get to the other side, whatever that other side might be, when we're promised abundance. That appears to be the general thesis that we're operating from here. Knowing that that's the case, and knowing that we're going to live through that policy playing out for a bit, what are your best-case scenarios? What are your worst-case scenarios over the next five to 10 years?
Alondra Nelson:
My best-case scenario really centers around the broader public. Companies, American and otherwise, though the leading AI companies are US-based in some regard, have imagined a future for us that many of us may not want, or don't agree with, or would have some revisions to. They've imagined a future that requires the adoption of these technologies by companies, by workers, by students, by families, and by communities. And right now, many of these products have not earned our trust. And so I think that the people who have to adopt these technologies, whether you're sitting in a corporation or a schoolroom or your kitchen, still hold a say and need to understand that they have a say. And I think the companies understand very well that people are not going to use these tools if they know that they're not trustworthy. If they know that they put their jobs in peril. If they know that they put access to adequate healthcare in peril. And if they know that they increase the possibility of geopolitical tensions around critical minerals and around other kinds of national security issues.
So for me, the best-case scenario, and I think DOGE has helped with this, because people now understand how powerful technology is in ways in their lives that they didn't anticipate, is a public that's awakened to these issues and that really refuses a narrative that says that if you don't have a PhD in AI or computer science from MIT, you can't have a say about how these technologies are going to be built and used.
Another best-case scenario, I think, is the Trump administration has talked about being concerned about ideological bias, for example, in AI tools and systems. And for me, that I think can be potentially a bridge for those of us who have said for a very long time that there's bias in these systems. There's bias in a very narrow technological sense, and there's bias in the way that they are used in the world and the way that the use cases have real implications for consequential things in people's lives. So I would hope, best-case scenario, that there could be some mutual understanding built around that issue of bias in part because it has some of the same technological origins of the problem and in part because nobody wants to have an experience with these tools and systems that feels exclusionary or discriminatory or biased.
The worst-case scenario, I think, is closer to us right now, and I hope it will lead people to become more awake to the issues and engaged. Certainly in the federal government, we have processes that are casting people out of work, particularly people providing important services that the government offers, everything from FEMA to NOAA and climate services to services around Medicaid and Medicare. We are really just one crisis away from seeing how badly we need government, and throwing AI at it is not going to fix it. One of my concerns: in the last 24 to 48 hours, there's been reporting about layoffs at the Social Security Administration, which I think have just been reversed, including all of the people who operate the telephones, who help people actually get to the services they need. And if that's the thing that we care about, you actually need people to help steward people to the services that they need. So the worst-case scenario is that you have no people, no one working in government to help steward these services at these really important periods.
And one of the things that we worked on when I was working in government was thinking about what are the five to seven critical moments in a person's life in which they really need government. You have a hurricane or a tornado, and you really need FEMA. You are in a military family, and a loved one dies, and you need veterans' services. These other kinds of critical passage points in people's lives. And how do we make the government better at doing that? I worry that we are a hurricane season away from people not being able to get the help that they need, and from not being able to rebuild that system better. That's not to say that there are not things about bureaucracies that can be made radically better. And moreover, it will be done in a way that is deeply surveillant, that has cast aside any of the privacy protections that we had in the federal government, lanes of privacy between agencies that were put in place purposely, in some instances to allow energy agencies to share data, but really to preserve the privacy of the American public and to never allow government, of which we should have a healthy skepticism, to have too much information about our lives. And so I worry about deep surveillance, and I worry about an underserved American public in the context of economic insecurity and heightening global geopolitical tension.
Justin Hendrix:
So let me just briefly query you a little bit on where you think things might get to with AI and government if there is a moment after DOGE. Imagine five years from now, there's a change in administration or a change in our general approach to politics in this country. It seems clear that more AI will be at work in various federal agencies at that point than is today. I don't know how popular the idea that we need to hire a lot more people back into federal agencies will necessarily be. Clearly, the answer won't be 'get AI out of government entirely.'
Alondra Nelson:
That's never been the answer. So let's be very clear. You've got a political campaign, and then you've got policy. And I think when those things get mushed together, the historical record gets confused. The Biden-Harris administration said, "Let's have more AI in more agencies." It didn't say don't use AI. It said, "Let's have a talent search. How do we get more people with AI expertise into these agencies so we can use these tools better, to be more efficient, to serve people better, etc.?" And to do so in a way that gains trust, that's responsible and ethical, and that potentially also uses the fact that many of these companies want to do business with the United States government as a way to say, okay, then how can the United States federal government model what it looks like to help industry create pathways for uses of AI, and products for uses of AI in government, that are best in class? Best in class, not just in the sense of capability and power, but best in class as far as embodying the sets of values that a free democratic society thinks are important. How do you use the power of the purse of the federal government to get there?
So I just wanted to say: more AI in agencies, I think, was a policy and a perspective that I can get behind. The worry, my concern five years from now, is whether, in bringing more AI into agencies, we have built an infrastructure that builds in any accountability or transparency to the American public about these tools. So for example, we're already seeing with DOGE a little bit of playing fast and loose with some of the data. Is it true? Is it not true? Has this program or grant been wound down, or has it not? And how do we know? One of the things that AI does is create more black boxes, or bigger, denser black boxes, around processes, which makes it harder to have accountability and transparency. I think that's a terrible thing for government. It's a particularly bad thing for government at a moment in which there's already low public trust in government and in some institutions. And so to have created, five years from now, an ecosystem in which you have more AI and less information and accountability, which I think might be where we're trending with this administration, that's the terrible thing. And Justin, it's not about the AI, it's about the values.
Justin Hendrix:
And however tortured my question was, it actually got precisely the types of ideas and responses out of you that I was looking for, because I was trying to really imagine, if and when you're in that situation where you do have a chance to maybe revert to a different set of values, how do you walk back all of the commercial relationships? How do you walk back all the data practices? How do you walk back all of the relationships between federal workers and the systems that they're using? All of that seems like it may take some time to unwind, and yet that will literally be the act of restoring democratic deliberation and discourse to the federal government in some way.
Alondra Nelson:
Right. And putting my social science hat on, we talk about path dependency. So there are things that will be built in that will carry forward. I think that's true, but I also think about when you reset all of those relationships. We are living in a moment right now where one way of thinking about what's happening is crypto or memecoin rug pulls brought to the level of government. Lots of commitments and norms and standards are just being rug-pulled every day, all around. And so five years down the road, or whenever there's a new moment to assert and implement, and I think we can be asserting today, every day, a different way, an alternative vision of how technology should be in society, but when there's a moment to implement this new vision, I think that one can take away from this moment, in part, that resets can happen, that we can have different expectations and do so quickly. And we know that administrations change, and all the institutions that make up the broader ecosystem learn to pivot. So I don't think that there will be challenges in doing that.
I think if the reset is to pretty fundamental things around civil liberties, civil rights, free expression, disability rights, then I think whether or not the transformation is hard, it must be demanded because these are the fundamental things that we say we stand for and that need to be enacted and modeled certainly principally by government.
Justin Hendrix:
One of the things, of course, that you've done in your career is represent the US in international conversations and contexts around tech policy issues. You just gave a speech, for instance, at the Paris AI Action Summit, where you talked about various fallacies of AI, including the fallacy that AI will necessarily lead to improvements in the public interest. I don't know. How are you thinking right now about the US's international posture?
Alondra Nelson:
There are lots of interesting things about the AI Action Summit. But you really did see real recognition that what we call AI, the companies, the tools, the models, the systems, the research networks, is, among other things, the real center of geopolitical contestation and competition. All the heads of state and a lot of the conversations there, including President Macron's dinner, I think made that very clear. It also made something else very clear. The reporting, certainly by some colleagues in the AI safety community, was that all of a sudden they changed the name and it was AI Action. But from the very beginning, it was always going to be the AI Action Summit. As it was announced, it was imagined as expanding international multilateral discourse about AI and pivoting slightly away from, I think, a hard safety frame in that narrow AI safety way. So, no surprises there, and I think people shouldn't have been surprised by that.
And obviously there's the new administration, the Trump administration, in the US, with the United States both empirically playing a huge role in this ecosystem and politically, also geopolitically, wanting to play a very big role, a leading role. In fact, in Vice President Vance's speech, he said, "You choose us or them." That's how the map is being laid out for AI policy. That said, and to go back to this word path dependency, there has been just under a decade of work building out multilateral cooperation, from the OECD, to GPAI, to the work of the G7 and the Hiroshima principles, to the work in the G20. And all of that is not going to go away. So it's not clear how all of these things will be harmonized with a new approach in the United States, some new contours in the UK, and maybe some re-imagination in Brussels of the implementation of that legislation.
But I think that all of these workstreams are not just going to disappear. And moreover, it was the first Trump administration that brought the United States to the Global Partnership on AI and some of these other workstreams, so it remains to be seen. But I think AI has really come into its moment as the centerpiece of a lot of international geopolitical conversation, even as the United States wants to think of it very much as a domestic issue that should be led and owned by the United States in a particular way. And so we will be living with how those tensions play out.
Justin Hendrix:
My last question for you is probably, again, about keeping your social science hat on. There are a lot of researchers listening to this, I'm sure. There are a lot of folks from civil society and the tech policy press community, a lot of folks who do various work on policy, etc. What would you tell them to study right now? What would you hope that they're paying very close attention to as we go through this period you've described and as we imagine perhaps coming out the other end of it at the end of the decade? What would you hope that people are documenting, exploring, researching, otherwise scrutinizing?
Alondra Nelson:
Oh, wonderful question. I love that question. I worry, and I continue to worry, about research, which we talk about as innovation, a type of innovation, being captured by trying to fix what the companies do. So I think it's really important that we have red teaming. I have a paper out today with 30 others about how to create an independent ecosystem for detecting flaws in general-purpose, high-capability AI systems. So we need to be keeping a check on the systems and the tools in the companies, and on power more generally. But I also think I want to just hold open the lane of: what should we be thinking about? So I think that there's a lot of independent research that needs to be happening, certainly in the space of computer science and machine learning and AI.
And just as a researcher, not as an American researcher but just as a researcher, one of the things that's been interesting about the DeepSeek example is the research innovation in it: it's like, well, why don't we try that, or we're going to patch together these three different ways of how we have typically built and thought about building AI systems. So I think that's a tremendously interesting example of the innovation that needs to be happening that's not always just reactive to what companies are doing. I think we need to go back to where we began, to research that is looking at technology, power, markets around technology, and every imaginable research topic there might be. And so whether you're talking about publishing and the humanities, or how you do social science research, or how we are going to train a generation of independent-thinking researchers across all fields, and what the political economy of that higher education system looks like in a moment when we're having a shifting ecosystem for how the federal government helps support research and science.
That whole range of questions. There are obviously questions about the human: what does it mean to be a human in this moment when we have ... We've long had machines that talk, but we now have machines that talk to us that are not just playing back cassette tapes or DVDs but are talking to us in different ways. What does that mean for humanity? We need lots of research. There are still, I think, open questions on the impacts of social media and the impacts of spending a lot of time in front of the screen, variously. And we need research that continues to focus on huge issues around how we have a sustainable planet, how we are able to feed people, to keep people healthy, and to have, I think, a generally more just society, and how we use technology to help advance that rather than allowing technology to be an obstruction to it because it's burning up resources and time and energy and the like. So, there is lots of research to do if we think about science and technology as a horizontal. There's not a single issue, I think, that someone might want to study that doesn't have that vein in it that's worth teasing out, that we need to just tease and pluck out, including in the arts and humanities.
Justin Hendrix:
That zooms us right back out to your tweak on the polycrisis and the way we should think about technology's role in all of those various issues that we face, and of course the concerns we have about democracy itself. Dr. Nelson, thank you very much.
Alondra Nelson:
Justin, thank you for all that you do, and thank you for having me.