DOGE and the United States of AI

Justin Hendrix / Apr 6, 2025

Audio of this conversation is available via your favorite podcast service.

Across the United States and in some cities abroad yesterday, protestors took to the streets to resist the policies of US President Donald Trump. Dubbed the "Hands Off" protests, over 1,400 events took place, including in New York City, where protestors called for billionaire Elon Musk to be ousted from his role in government and for an end to the Department of Government Efficiency (DOGE), which has gutted government agencies and programs and sought to install artificial intelligence systems to purportedly identify wasteful spending and reduce the federal workforce.

In this conversation, I'm joined by four individuals who are following DOGE closely. The conversation touches on the broader context and history of attempts to use technology to streamline and improve government services, the apparent ideology behind DOGE and its conception of AI, and what the future may look like after DOGE. Guests include:

  • Eryk Salvaggio, a visiting professor at the Rochester Institute of Technology and a fellow at Tech Policy Press;
  • Rebecca Williams, a senior strategist in the Privacy and Data Governance Unit at ACLU;
  • Emily Tavoulareas, who teaches and conducts research at Georgetown's McCourt School of Public Policy and is leading a project to document the founding of the US Digital Service; and
  • Matthew Kirschenbaum, Distinguished University Professor in the Department of English at the University of Maryland.

What follows is a lightly edited transcript of the discussion.

NEW YORK, NEW YORK - APRIL 5, 2025: Demonstrators participate in the "Hands Off!" national day of action to protest the Trump administration's policies. The New York event was one of 1,400 global events organized by a coalition of grassroots and labor organizations. (Photo by Bryan Bedder/Getty Images for Community Change Action)

Justin Hendrix:

I'm pleased to have all of you here today to discuss the Department of Government Efficiency in particular and, in general, the approach to putting more artificial intelligence in government, which is something, of course, that's not just happening in the United States but around the world. There are so many different trends and themes underlying this broader move to involve more technology in government. Elsewhere, they're calling it digital public infrastructure and, in some countries, spending years or decades thinking through how to implement these systems. Here, we appear to be in an experiment right now to see if we can do some of these things in a matter of weeks or months, in some cases relying on speculative technologies to potentially provide solutions to problems that might otherwise have seemed intractable, or very complicated or expensive to solve.

And I think the jury is out on the impact both on services and on democracy. And that's what we're going to talk a little bit about today. But Emily, I want to start with you, because you had a piece for Tech Policy Press not terribly long ago, in February, with the headline "DOGE Understands Something the US Policy Establishment Does Not: Technology is the Spinal Cord of Government." I just wanted to ask you briefly, for the listener, to state your thesis.

Emily Tavoulareas:

It's been really something over the last several weeks to see this current administration both understand and put into practice something that I and many of my colleagues have spent the better part of the last 10 years trying to convince people of. And that is that technology and technical infrastructure are not an extra add-on to government and public policy; they are the entire backbone of government. Technical infrastructure is the infrastructure of everything, and it can either accelerate or hinder policy goals. That's something that my colleagues and I at the US Digital Service, and colleagues across the federal government and state governments as well, have been working on for the better part of a decade now. And candidly, it's been a real uphill battle. So I think what we're seeing right now is a group of people who sort of intrinsically understand the role that technology plays in government and are really leaning into it in a high-priority manner, one that is now coming to the surface in ways that I think are really hard to ignore.

Justin Hendrix:

And Rebecca, I want to ask you as well, as another person who, like Emily, has worked in government on projects in the same domain of thinking through how to improve government services and efficiency with technology. I want to ask you a little bit about your piece for Tech Policy Press, also from February, "How Congress Can Delete DOGE." I think we're past the deadline you had imagined for that to potentially happen, although I suppose another deadline might come up eventually in the future. My question to you is, when you think about the history of the US Digital Service and the way that DOGE has essentially been implanted into it, what do you think the listener needs to understand about the authorities that were present for the Digital Service, how those have been adopted by DOGE, how they've been changed, and what distinguishes what's happening now from what came before?

Rebecca Williams:

I had the distinct, I guess, displeasure of working for the Office of Management and Budget under Trump 1.0 and Russ Vought 1.0. And I'm very familiar with what Russ knows about the budget and various legal authorities. But I think one of the core takeaways that I have about what's happening right now, in terms of DOGE's takeover of the government, is where we have vulnerabilities in oversight of technical systems. My current role at ACLU is privacy-based, so I'm more familiar with privacy laws now, but I think the public's reaction to a lot of what is happening with DOGE is, "This can't be legal," and much of it isn't. You can't just fire whole staffs, and court orders are bringing those folks back. But there is a way to navigate some of our very technical bureaucracy and install people with clearances and cybersecurity mandates that are authorized by CIOs, who are not political folks. They just put a DOGE person in at SSA, which we might talk about later.

But you have a series of actually very expert bureaucratic decisions, where you're putting people in and removing people so that everyone in an agency is a Trump administration loyalist and will check off the boxes that are needed to make some of the data access or system overhauls technically legal. There are still things in place to prevent some of that: the Administrative Procedure Act (APA), the Privacy Act, the Computer Fraud and Abuse Act (CFAA). We can talk about what the protections are, but at some point it comes down to the people and the authorities. And if you replace all those people and authorities, and you're actually doing the paperwork, some of what's happening is legal. I think it's worth thinking about how we can move forward and make sure that we have proper oversight and proper technical support and funding at the programmatic level, rather than very high up with less oversight. There are other things to adjust, too, but yeah, I had occasion to observe where hiring and spending were streamlined, and I think Russ saw that, too, and is facilitating some of the DOGE activity now.

Justin Hendrix:

So I'm going to come to Matthew and Eryk in just a moment, because I think they'll bring the perspective of the ideological motivation and its impact, some of the ideas about what DOGE is up to and what to expect next that come, I think, both from the MAGA movement and from Silicon Valley. But briefly, I just want to put you and Emily in conversation with one another about the history here and the disparities between what we're seeing now versus what came before, what the US Digital Service was all about. Some of the same language: efficiency, moving fast, trying to improve government services and save the government money, avoiding duplicative contracts, et cetera. So some of this language, on the face of it, is not terribly dissimilar from what the DOGE folks want to offer, and yet we're seeing obviously a very different type of implementation. So I'm just putting that out there to the two of you.

Emily Tavoulareas:

I think that's exactly right. A lot of the language is quite similar, and I think we start from the same place, which is that things are broken; they are not working as they could or as they should. We are spending a lot of money that does not need to be spent, in places that, candidly, could be debated. In my humble opinion, a lot of that is on IT, but that's a different conversation. But we start from the same premise on the surface: things are not working, and they can and should work better. A lot of the language is similar. We talk about the efficiency and effectiveness of government and improving the technology at the heart of these institutions. But I think when you see what's happening in practice, that's where things really break down.

I think there's also arguably a difference in the executive order, which I'll get back to in a second, but we have not yet, to date, seen work that is actually about the technology. It's so far been largely about cost-cutting and reducing the footprint of government. Assuming that eventually the modernization of technology and services does come into play, I think what we have here are two very distinct views of what technology is capable of and the role that it can play in government. On the side of USDS, at least, and perhaps there are people who will disagree with me, but largely speaking, the US Digital Service ultimately viewed technology as a vehicle for improving the outcomes of public policy. Technology was the tip of a very long spear. It was a vehicle for fixing outdated policies and procedures and dysfunctional services. This might feel to people like a pedantic difference, but it's really not. In practice, either you are centering the improvement of an outcome for people, or you are centering the technology itself or some other goal.

But for USDS, my experience, at least, was always that the driving force was improving outcomes for the public, and technology was secondary. So the question was almost invariably, what problem are we trying to solve here? We would spend a tremendous amount of time trying to understand the full scope of that problem, speaking to people who were on the front lines, who were interacting with the technology and with the public. On many occasions that involved a shift in our assumptions: the problem was not necessarily a technical one, but might be an unclear portion of a form, or an outdated policy, or a procurement that had gone awry. That's all to say, for USDS, the ultimate goal was always improving outcomes for the public. At this stage, it's clear that is not the goal, and I'm not sure what it is, but that seems to me like a really large shift, from my perspective at least.

Rebecca Williams:

I will agree that USDS supported programming. I would also just zoom out. Maybe it's worth talking about the USDS origin story, which folks are relatively familiar with, but it was always a band-aid solution. There are very challenging issues with government program delivery, primarily that it's underfunded, but also that it needs more design attention. And that design attention should also consider how much means testing we do for benefits in the United States, which makes some of our apps complicated. That happens at a higher level, and it's a root cause of some of our apps being difficult. HealthCare.gov had issues staying up, but other countries have public healthcare, so that's a vast distinction. But one of the clever things the Obama administration did, in terms of bringing in technologists to fix HealthCare.gov, was this: you're allowed to hire people much more quickly in government if there's a security issue, and cybersecurity counts as a security issue. So you could leverage a special hiring authority to say, "It's an emergency. It's security for the United States. We have to hire somebody."

And that's how early USDS people came in and that's how a lot of the hiring authorities were built out for these folks. And it was very clever and helpful to fix applications at the time, but some of those hiring authorities haven't changed very much. And it also makes it easier for what's happening now with DOGE to happen.

But I think the rhetoric portion is as important as the legal authorities. I could talk about the legal authorities all day, but on the rhetoric: not just the federal government but local governments, too, often suffer from not enough appropriate expertise or funding for their tools and programming. I've described it as a preexisting condition. Government officials just feel like they don't have enough; they have a scarcity mindset. And I see technologists, whether they're nefarious vendors who just want to sell a local government AI to solve its problems, or something more middle-of-the-road, like smart city technology. Many government officials are promised that technology is going to solve their political problems and the lack of support for their programming. And I think Elon and DOGE, to some extent, are selling the same snake oil, maybe the most nefarious version, but they're saying, "Oh, your government doesn't work. I have the solution. We're going to overhaul SSA and AI is going to fix it." And this conversation happens at all levels of government, over and over again.

Emily Tavoulareas:

I want to push back on two things. One is the origin story of the US Digital Service. I am currently wrapping up an effort to document an oral history of the origin of USDS, so this is very top of mind, but I think it's fair to say that USDS was almost antithetical to a band-aid solution. The founding of USDS itself was an effort to not stop at the surface level of responding to a crisis. HealthCare.gov was famously a major crisis that was an all-hands-on-deck moment for the Obama administration.

After that was stabilized, people at the highest levels of the US federal government recognized both the degree to which they were reliant on technology to accomplish policy goals and the degree to which this was a ubiquitous problem across the federal government. They wanted to build some sort of muscle within the federal government to respond to future crises, believing inherently that this was one of many, just the most visible one. Building that muscle, and bringing into these institutions people with the skills and experience that were missing from government, largely to work in partnership with people in industry, was really central to the founding of USDS.

And as I was saying earlier, not once that I can remember did USDS come in with a specific solution that it was peddling. Every single USDS project started with a discovery sprint, and one that was almost the antithesis of technosolutionism. It's saying, "I don't know what the solution is. I have the skills to understand and diagnose what's going on here, so I'm going to jump into this and figure out what the problem is, what the potential solutions are, and then work with people within the institution to identify the right path forward." And often these projects would close with some kind of handoff back to the institution, building capacity within the institution to be able to move the ball forward.

Rebecca Williams:

What I meant by a band-aid, for the clarity of listeners, is that folks who work in this program are temporary hires, and they can't work in every single tech shop in every agency at all times. The alternative to that, the more permanent, non-band-aid solution for me, would be permanent hires with a permanent budget. And then, in terms of technosolutionism, I agree that USDS did discovery sprints, but the rhetoric of efficiency, of "we will do things for you quicker with technology," is the commonality that I saw.

Justin Hendrix:

Maybe stepping even one level up in terms of the conversation here about DOGE, Matthew, I want to bring you in. In particular, you had a set of remarks called "The US of AI," which you delivered at Princeton at the end of February. And I think these are some ideas that you're still working out, but you talk about DOGE's deployment of AI not primarily as a technological initiative, perhaps not even primarily as a technological solution, but as a kind of ideological maneuver. So can you explain where you're coming from there?

Matthew Kirschenbaum:

This is building on the work of some of the other folks on the podcast with us, particularly Emily and Eryk, but I think one way to think about this moment is that there are multiple frameworks we can put in place around what's going on. There is absolutely a technological framework. AI is a real technology, a material technology. We understand that really well from, among other things, its environmental footprint, so this is not a fictitious or imaginary technology. There are also legal frameworks, as Emily and Rebecca were discussing, and policy frameworks. But there's also, yes, a kind of ideological and what I refer to as a discursive framework. What I mean by that: one, I think, critical observation is that both large language models and the vocabulary, the speech that we use to talk about them, are products of language. There's a kind of flattening happening here, where both the technology and our means of speaking about the technology are all what I've taken to calling language games, after Wittgenstein, among others.

And just as one example of this, which I talked about at that Princeton lecture you mentioned, there's what I refer to as the McCormick moment. Rich McCormick, a GOP representative from Georgia, is holding a town hall, one of the ones we've heard about through the media, lots of angry constituents, people yelling at him. Folks, in particular, are concerned about CDC employees in the state who have been fired. And he replies, and let me actually, I think the actual quotation is worth reading directly. His reply to that is, "I'm in close contact with the CDC. They have about 13,000 employees. In the last couple of years, those probationary people, which is about 10% of their employee base, a lot of the work they do is duplicitous with AI."

Now, there's a wonderful malapropism here; we could talk about "duplicitous with AI." Of course, he means to say duplicated with AI. But I think the real insight here is that we don't know what comes after that sentence. In this case, we're not talking about benchmarking some actual AI system that might be deployed at the CDC to see if it could or couldn't do the work of those fired researchers. As Eryk has pointed out, what we're seeing is a kind of discursive maneuver, where the language of AI, the word itself, is inserted into the conversation as a kind of return of serve. It's McCormick's way of answering, or better, deflecting the question. And all he has to do is pronounce the magic signifier AI, and that provides the semblance, the simulacrum, of an answer.

And I think that McCormick moment has been reverberating through public events over recent weeks, most recently this morning, with the news that we're going to rewrite the entire code base of the Social Security Administration in a couple of months. And how are we going to do that? Well, of course, we're going to do that with AI. This becomes a way of providing cover, of deflecting from the actual ideological agendas at work here, which I think has everything to do, ultimately, with the kind of far-right remake of government that you see articulated in the writings of people like Curtis Yarvin, Nick Land, and the other neo-reactionary influencers from whom people like JD Vance, Russell Vought, and others high up in the administration are directly taking their cues.

Justin Hendrix:

And Eryk, I think that's a place to bring you in, just to ask: you're always doing your best to do sense-making around all of this and to describe the various forces at play. One thing that you've just written about for us is the idea of AI hype and the extent to which it has infected the conversation here. You point to a loss of faith in democratic politics generally, which I think we can safely say is reflected in the outcome of this last election, in many of the public opinion polls, et cetera. There is a loss of faith in the government's ability, and for that matter the ability of many, many other types of institutions, to deliver in the public interest. And it seems almost manufactured; it's perfect conditions for the introduction of this elixir, artificial intelligence, to solve all our problems.

Eryk Salvaggio:

I think it's appropriate that Nick Land was evoked in this conversation, because of this concept of 'hyperstition.' Right now, what we see is that hype has this irrefutable quality. When we talk about hype, it's important to clarify that what I mean is not just the market logic, the sort of stuff put out for investors, but the kind of imagination that type of hype requires us to buy into. The powerful quality of hype is that it is irrefutable, because it's a compression of a future orientation into the present moment. Hype is always about the promise; it's always about what is going to happen. And one of the things that weirdly happens with artificial intelligence, and the discourse around artificial intelligence, is that this future promise is increasingly taken for existing capability. It's mistaken for existing capability.

And so when we keep saying, let's prepare by reshaping our infrastructure to accommodate AI, we're asking to reshape this infrastructure for a hypothetical future, and that hastens the conditions for that future regardless of whether it ever arrives. This is something that Nick Land has talked about as hyperstition: believing something into existence. And there is a path to the current moment in this hype that we have been seeing over and over again, about what AI is capable of, about the type of work that AI can automate, that is actually never quite right. It is always "it's just around the corner." We've seen a lot of conversations about AGI being just around the corner, but it wasn't always AGI. It used to be AI that was right around the corner: AI taking jobs was right around the corner, AI being able to do such and such a thing was right around the corner.

And then we started using it for those things regardless of whether it could actually do them. We started using them discursively; we started putting them in place as a kind of thought-terminating cliche, as a way of saying, "You want to have a conversation about this? I don't. So I'm going to say the AI will do it. The AI will write the code, the AI will answer the customer service requests." And part of this orientation toward that future infrastructure is the exact reversal of what Emily was talking about before, which is not how do we serve the public, but how do we use AI? How do we make AI part of government infrastructure, part of social infrastructure, part of the economic infrastructure? The goal of that, ultimately, is to put in place a system that can be controlled and designed to accommodate essentially anything those in charge want, because of all these myths of unbiased computation and rationality in a machine, when really these machines are designed to just follow orders, which is not something I think we should be leaning into at this particular moment in time.

So I think this hype, this promise that has been dangled for the sake of getting investments, has actually grown into areas of policy, has grown into areas of academia; it has seeped into all these corners of our lives. But it is fundamentally a set of promises designed to raise capital, designed to raise investment, and we have believed it when we shouldn't. We should be pushing back on it. We should be asking for elaboration on the thought-terminating cliche that the AI will do it.

Justin Hendrix:

A big part of this appears to be about: let's get the data, let's get access to the data, alter the information. Matthew, in your talk, you discuss the idea of God mode, the centralization of government data. That appears to be a goal of DOGE, to put datasets that previously were unconnected in conversation with one another, to be able to query data in new ways that, to Rebecca's point, may meet the letter of the law. But I want to put this as a question to each of you. What does it mean to you when I ask about the centralization of government data, the breaking down of barriers between different agencies and the data they hold? What are you thinking when it comes to this question?

Matthew Kirschenbaum:

I have a kind of higher-level response to that; I suspect Emily and Rebecca might get us a little bit more down into the weeds. But for me, what I think we're seeing here is in part an ideology about data itself. In some ways it is the dark enlightenment version of the old "information wants to be free" mantra, but I think of it as: information wants to be fungible. In other words, everything is meant to be interoperable, everything is meant to flow from one system to another seamlessly. Crucially, everything is consolidated and condensed into formats and systems that are amenable to data mining and to training LLMs. So the ends here are not the old cyber-libertarian democratic ends of free information for all. They are about precisely the issues of power, control, and capital that Eryk was evoking. And obviously, we understand Elon Musk's interest in this.

I think of it as data fungibility. I think of it also as a certain kind of flow, which is a term that the writer Anna Kornbluh uses in an analysis of a contemporary cultural moment that's really all about disintermediation. Everything is meant to be instantly accessible, to flow, to not resist a kind of data pour from one container to another. Streaming is the archetype of this model, but I see it as a much more generalized circumstance. And that, for me, is what's at stake in these moves toward data consolidation: ostensibly for purposes of finding fraud, but, as I think we can see, really for those capitalistic interests that are at the root of it.

Rebecca Williams:

Yeah, this question about data access, Justin, I don't know where to start, and it's all I think about. But I think there are two straightforward risks in terms of DOGE having access to a bunch of sensitive personal information about people in this country. The first is just a straightforward corporate agenda. There's a lot of power in having information about individuals. Fun fact: FOIA is often held up as a journalistic tool, but the number one group filing FOIA requests is always businesses; businesses are FOIAing information to leverage it. There's an underlying thread where Elon Musk and his corporations are benefiting directly from him being at the helm of DOGE, and certainly friends of Elon and Trump and whoever else could benefit from all of that access.

But then on the flip side, the other power dynamic happening right now with the Trump administration, Russ Vought, and others is this very clear, extreme right-wing, hyper-fascist posture of: if you say something wrong on social media, we will try to deport you out of the country. There's a lot of risk in terms of additional surveillance. And I feel like one of the overarching themes of the last 10 years in Silicon Valley and the privacy debate has been big tech being very good at saying, "We're not doing this historic practice; we're doing a new thing that needs new rules," which is no rules at all. And we're seeing that ethos play out in the government. Folks are on Signal when they shouldn't be on Signal.

There are just a lot of things in terms of information and power, and the risks that come with DOGE access feel infinite to me. The Privacy Act and other laws provide us some protections, even to the extent that you're supposed to have consent before your personal information goes into some of these AI models. Will that be honored? I don't know, but I think these folks certainly know the power of information and how to manipulate it.

Emily Tavoulareas:

For me, I think immediately about efficiency and efficiency brain. So much of what we've been hearing about over the last few months is about efficiency, with the assumption baked in that efficiency is a good thing. So everything that Matthew just said, yes, and when we bring it to the practical level, what Matthew is describing surfaces, in my view, in the language of efficiency and the actions that surround it. This assumption that efficiency is always a good thing is, as we have seen over and over again in the last few months, incorrect. In a democracy, friction exists for very particular reasons, and those are, one, protection, and two, stability. Centralized data, and centralized control of data or of anything, is inherently less stable than a decentralized structure. The notion that centralizing everything and making it very easy to share information will be an improvement is not necessarily true, because efficiency is always optimizing for something.

What is it optimizing for is the question. Efficiency for what? Efficiency for whom? Take, as an example, transportation. If I want to go from place A to place B, I want to get there efficiently. But that could mean I want to get there as fast as possible, it could mean I want to get there using as little energy as possible, it could mean I want to spend as little money as possible, and based on what I'm optimizing for, what my priority is, efficiency will mean something different.

So to my mind, when I think about this, the question is: what are we optimizing for? Efficiency for what? What's the goal? And this brings me back to the question we started with, Justin, about the distinction between DOGE and what predated it. The goal for USDS was not efficiency. That was very often a secondary, or sometimes even tertiary, outcome. The goal was effectiveness, and efficiency and effectiveness are different. The goal is for something to work as it was intended and to have the outcome that was intended, not necessarily to be faster or cheaper. One of my favorite analogies for cost-cutting and efficiency is from someone I think Rebecca probably also knows, Waldo Jaquith. He compared it to wanting to lose weight by cutting off your legs. And I think that's a really apt analogy.

Eryk Salvaggio:

I think it's also important to look at the way that we've been talking about AI up until now, particularly around data, the use of data, and the way that data has been erased as part of these systems, or at least erased as a way of talking about these systems. We've been hearing, "Oh, they're just learning and there's no storage. They look at the images and then generate these images. Large language models read the texts." And ultimately that has shaped a false idea about that relationship to data: a consensus has emerged that data inside a large language model has distinct properties from data in other forms of storage. But it is there. If these things are said to be useful, then that data has to be in them.

And so if that data is in them, then we are consolidating data across departments where it was supposed to be separated. We are creating a breakdown of the firewall, and we are creating tools for exactly what the Privacy Act, and the thinking behind the Privacy Act, was guarding against, which is fishing expeditions. And we are starting to see these fishing expeditions: let's see what we can find out about such and such a person. If that data is centralized into a large language model, it's centralized. And I think that's an important point for people to understand, because there's been this argument for a long time that the data disappears once the model "learns" it, and that's not the case. It's there; it is waiting to be activated. And I think that this is a level of understanding that links the long history of AI and hype to the current political situation that we find ourselves in.
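Editor's note: to make the point above concrete, here is a minimal sketch of the kind of memorization probe used in training-data extraction research: prompt a model with the prefix of a record and check whether it completes the rest verbatim. Every specific here, the model, the record, the expected continuation, is a hypothetical stand-in for illustration, not anything drawn from DOGE's systems.

```python
# A minimal sketch of a memorization probe, assuming a model trained on
# text that included a sensitive record. The model name, prefix, and
# target continuation below are hypothetical stand-ins.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; extraction studies typically probe larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "Case file 41-207: claimant Jane Q. Public, date of birth"  # hypothetical record
target = " March 4, 1971"  # the continuation we suspect was memorized

inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,  # greedy decoding surfaces the model's strongest association
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, not the prompt itself.
continuation = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])

# A verbatim match would show the record was not "forgotten" in training;
# it sits in the weights, waiting to be activated by the right prompt.
print("model says:", continuation)
print("verbatim match:", continuation.startswith(target))
```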

Justin Hendrix:

Matthew's idea of this sort of permanent despotism is a bleak vision: a government that eventually conditions us algorithmically and governs us through databases and algorithms. I think a lot of folks share that bleak vision of where we might be headed. But I might ask a different question and put this to the group.

I'm imagining the possibility that the political winds change. Perhaps eventually the folks who are at DOGE who are tossing federal government employees out of their offices get tossed out themselves on some level. If that happens, what will we need to do? It doesn't seem like winding the clock back is quite the right way to think about it or to necessarily put everything back in its place. I have a really hard time imagining the federal government or some politician coming along with a winning argument that the thing to do is to rehire tens of thousands of government employees necessarily. There'll be all kinds of contracts in place, there'll be all kinds of systems that have been changed. There'll be all types of other stuff that will have to be maintained even if you don't like where it came from. I don't know. What do we do next?

Emily Tavoulareas:

Yeah, I love that question, Justin. For me, the answer is largely rooted in people and people interacting with each other and interacting with each other on a local level between communities and the governments that serve them at the local, state, and federal level. I am someone who deeply believes that the government is just made of people. Institutions are just made of people. These products that we use are just made by people. They are reflections of the people that make them and operate them. And I think the more that we internalize that, the more we reclaim our own agency in all of this. And I think it's worth saying that this last election is deeply rooted in the absence of those things and people feeling like the government and politicians do not understand, do not care about, and do not prioritize their needs.

And that is something that, when I was working at USDS, and I know USDS over the last 10 years has really tried to put at the center of the work, is centering the understanding and prioritization of the needs of people over the needs of institutions and the people who run them. And this is in direct conflict, I think, with emerging technology, because AI and blockchain and all of these new technologies, whatever their interesting use cases, and they are probably transformative in some cases, what they indisputably do is create more space between government and people, more space between individuals and communities, which is literally the opposite of what we need right now.

Eryk Salvaggio:

I have some optimism in me, and maybe it's just because somebody has to, but I do think that people will tire of that excuse. AI is not going to deliver. AI is going to decimate Social Security. We know that AI is not popular. It feels popular because people keep telling us it is, which is another function of hype, but actually people are afraid of AI. People don't want AI. They think of it in a similar way to offshoring jobs, which is another unpopular position. And just to put it out there, Elon Musk is not popular either. So the conditions are there, if there were someone who bothered to argue them, which at the moment we do not have showing up in any strong numbers. But there is a case to be made that automation that results in a diminishment of services, the cutting off of services, could be seen by voters, could be tied by people to Elon Musk, which is rightful. That would further make the case that this is damaging.

But in terms of an opposition party right now, we do see the opposition party still clinging to AI as part of this sort of technocratic-solution approach that has existed in that party since the '90s. That needs to change, and I think we are seeing people grasp the fact that it needs to change.

But the other thing that I think is really important, and I think is a longer-term project, is that we got here because democracy has become very frustrating for people. And I think it needs to be sold not as frustrating but as this: government is the product of the work we put into it. The case needs to be made for the work of government. We have politicians who increasingly look at polls for guidance. They are not leading. There are so many situations, and I don't mean to just be on a soapbox, but I've spoken to people; this is what is happening right now, and I think that people need leadership. What happened in the last election was a kind of argument over vision, a vision that many of us do not agree with, but it was a vision.

Right now, AI is a vision of the future, and it is one of the only visions of the future that is being stated coherently. Even if it is dismal in application, it is a vision. It is an optimistic vision and I think that we need to undercut that optimism rather than buy into it and make the case that actually the future is work. And that's not a sexy message, but I do think that people would follow that message. I do think that so many people understand that inherently, but no one is asking for it.

Instead, we are seeing promises that we don't have to sacrifice, we don't have to raise taxes for government services because AI is cheap, and you, as a voter, are actually an entrepreneur. You are an economic engine for yourself, and AI is now your workforce. And if that's how we see ourselves instead of as a community, and if we see the social safety net as something that we automate and we come to rely on the output of a large language model, then we keep going in this direction. That may not sound like optimism, but to me, I think there are windows into optimism for it.

Emily Tavoulareas:

I think that's so well said. And I also think it's worth highlighting that, yes, it's about articulating the vision, but it's also about implementing it. And this is what's been, I think, really frustrating. Democrats have been in office for how long over the last couple of decades? So much of the critique of the last several Democratic administrations has been around messaging. But candidly, what did people feel in their actual everyday lives? You passed all of these massive, earth-shattering legislative wins, and you still haven't spent the money. So part of what we're looking at is just a hunger for something to happen, anything to happen, just somebody do something. And right now, I don't think anyone can argue: the current administration is definitely doing a lot. You can argue with what's being done, but they are doing exactly what they promised.

Eryk Salvaggio:

Something is happening.

Emily Tavoulareas:

Exactly. Exactly.

Rebecca Williams:

I think the DOGE project, which is not just about technology but is heavily informed by weaponizing technology, is another example of how power is built. When I think about what's happening in the country, this push toward privatization and toward the right, I would say some of the root causes are campaign finance and various things happening with media and filter bubbles. These are all concentrations of power happening at the same time. And if we want to counter that, we have to build power on the opposite end. And I think that type of power-building will require a lot of organization. Until that happens, or as that's happening, I personally don't see the threat as AI itself, but more that AI is the excuse. Maybe the quintessential example is the genocide happening in Gaza: using AI to say you're targeting one person, but then blowing up the whole building.

It's not the AI that is actually doing the harm; really, you want to carry out this larger-scale effort. And I think that is what DOGE is doing with federal programming. We're "doing AI" with Social Security, but really Elon wants to shut down Social Security. You don't need the AI to do that. You could just shut it down. But you're using the rhetoric and language of these tools as the solution while, really, this other, larger political thing is happening. And I do think technology has really concentrated power in one direction and put us where we are now, and we have to think about how to counter that within the legal landscape we currently have.

Matthew Kirschenbaum:

Yeah, and for me, I really appreciate everything that's been said. I don't ultimately know if I'm an optimist or a pessimist. But this is, again, about pulling back the frame and recognizing that there's not going to be a strictly technological fix for where we're at. There's not just going to be a congressional or judicial fix, or even a democratic fix in the old-school sense of democracy. I do believe at heart that it's not just the technological machine that's broken; it's the language machine that's broken. The word that I used in "The US of AI" piece is free fall: discursive free fall, the sense that nothing means anything anymore. These are not attack plans. Elon Musk is not the head of DOGE. It's Orwellian newspeak, but our environment is suffused with it to a level that I think even Orwell didn't anticipate.

I think social media is a particular culprit here, and I don't think we fix this without also fixing social media. I don't think we can fix this without, in some sense, and this is the English professor in me, revitalizing our understanding of what language actually is: that language is not just a token or commodity, that language does mean things. Language matters not just in the sense of being consequential; language is a material act. And right now we have this kind of environment, particularly social media, where speech proliferates through algorithmic means that, even before AI, were creating the conditions for what I've referred to as the textpocalypse: the complete unmooring and detachment of language from everyday meaning, in a way that was very much anticipated by the continental and French theorists I trained on in graduate school, people like Roland Barthes and Jacques Derrida.

I think the right in particular has read those folks. Nick Land, who keeps coming up, has read those people, and they've weaponized them in ways that the left never did. And now we're living in that world. This very much gets back to what Emily was saying about people and human contact and relationships. There's a rebuilding of a certain kind of commons that has to take place, in lieu of our social media feeds and in lieu of the AI technologies, which themselves run precisely on language. Language is both the input and the output, and at the level of code, it's also the innards and guts of these things. So there's a kind of totalization of the linguistic situation, as English professors might say, that seems to me fundamentally sick, broken, and impoverished, and that needs some fixing.

Justin Hendrix:

Well, as an undergrad English and philosophy major, the idea that we all need to go back to our Roland Barthes and Jacques Derrida and Ludwig Wittgenstein, I appreciate that instruction. That might be the first step towards solving some of these problems.

Matthew Kirschenbaum:

There's a platform to run on.

Justin Hendrix:

Matthew, Emily, Eryk, Rebecca, thank you very much.

Matthew Kirschenbaum:

Yeah, thank you all.

Emily Tavoulareas:

Thank you.

Eryk Salvaggio:

Thank you.

Rebecca Williams:

Thanks.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
