
Following DOGE, US States Pursue 'Efficiency' Initiatives

Justin Hendrix / Sep 28, 2025

Audio of this conversation is available via your favorite podcast service.

After the frenzy of the first months of the second Trump administration and following billionaire far-right political activist Elon Musk's departure from government, the Department of Government Efficiency (DOGE) has fallen out of the headlines.

But across the United States, dozens of state governments have attempted to establish their own efficiency initiatives, some molded in the image of DOGE. A common theme across many of these initiatives is the "stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence (AI)" and promoting "expanded access to existing state data systems," according to a recent analysis by Maddy Dwyer, a policy analyst at the Center for Democracy and Technology.

To learn more about what these efforts look like and to consider the broader question of AI’s use in government, I spoke to Dwyer and Ben Green, an assistant professor in the University of Michigan School of Information and in the Gerald R. Ford School of Public Policy, who has written about DOGE and the use of AI in government for Tech Policy Press.

What follows is a lightly edited transcript of the discussion.

RIVERSIDE, CALIFORNIA—APRIL 5, 2025: A demonstrator expresses concerns over the sharing of private personal data by DOGE at a "Hands Off!" protest against the Trump administration. (Photo by David McNew/Getty Images)

Maddy Dwyer:

My name is Maddy Dwyer. I'm a policy analyst at the Center for Democracy and Technology on their Equity in Civic Technology Project. And broadly, our project focuses on ensuring that public agencies at the state, local and federal levels are using data and technology responsibly.

Ben Green:

I am Ben Green, I'm an assistant professor at the University of Michigan in the School of Information and also have a courtesy appointment in the Ford School of Public Policy.

Justin Hendrix:

I'm grateful to the two of you for taking the time to speak to me today. I got in touch with you both after I read a post from Maddy on the CDT website called "DOGE-ifying Government with Data & Tech: What States Can Learn From the Federal DOGE Fallout." DOGE occupied so many of the headlines in the early part of the Trump administration, and on Tech Policy Press, we had many posts looking at both how DOGE was being implemented, but also in particular how it signaled a kind of interest in using artificial intelligence tools both to root out purported fraud or inefficiency in government and potentially to replace government labor. And Ben, of course, was one of the people who helped us to consider that phenomenon through the spring and into the summer.

So I was interested in this post in particular because I think there's a sort of narrative these days that DOGE is in the rear-view mirror at this point, that given Elon Musk has departed, we hear a little bit less about it. There are fewer headlines about DOGE. But Maddy, your work suggests that something altogether different is happening. That state governments are kind of picking up the general idea, the general motivation behind the Department of Government Efficiency. Can you tell us just a little bit about what you've observed and what you've seen happening across these 29 state governments?

Maddy Dwyer:

Yeah, absolutely. I think as you mentioned, the beginning half of this year was sort of like a DOGE frenzy. News outlets were trying to figure out what DOGE was doing and who was on the task force doing all the work at the federal level. The thing that we saw as DOGE was sort of winding down is state governments picking up those efforts. So as you mentioned, 29 state governments across the political spectrum, both red and blue states, were attempting to establish their own government efficiency initiatives, and some actually successfully codified those efforts. It was mixed success: 16 states successfully enacted state-level government efficiency efforts, and it came via a few different mechanisms. That's legislation, executive orders, statewide programs, or commissions established by governors or legislatures. But there were also 13 states that proposed legislation or other mechanisms but haven't yet enacted state-level government efficiency initiatives. And across all those 29 state-level efforts, we saw that 11 states specifically addressed data and technology, including incorporating artificial intelligence into making government more efficient.

Justin Hendrix:

And can you tell me a little bit more about what you've observed with regard to AI in particular, how these various efforts intend to use artificial intelligence? What's the type of language that you're seeing that perhaps is common across these 11?

Maddy Dwyer:

Yeah, absolutely. So one of the common aspects of the 11 states that addressed data and technology was this really specific stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence, which I think we saw federal DOGE doing, and states sort of saw this as an effort that they could replicate. So 5 of those 11 states explicitly called for the use of AI to eliminate government inefficiencies with approaches like streamlining state rulemaking processes, downsizing agencies, and assessing funding to consolidate or cut. An example of this that's pulled out in the work that I did was Wisconsin's Committee on Government Operations, Accountability and Transparency, otherwise known as GOAT. They specifically, in their mandate, were going to explore how to leverage AI and other technologies to improve government processes. So that's administrative functioning, and again, whether there are any places to consolidate or cut funding. So yeah, that's sort of the language that's involved around some of these states using AI for those purposes.

Justin Hendrix:

So move over DOGE. GOAT is on the scene.

Maddy Dwyer:

Right.

Justin Hendrix:

Ben, earlier this year as DOGE got underway in the federal government, as Elon Musk set up this operation and the various activities that it got engaged in—and continues to engage in, I should say—you wrote, "DOGE Plan to Push AI Across the US Federal Government is Wildly Dangerous" for us at Tech Policy Press. And then more recently a piece on the idea that "Using AI to Reform Government is Much Harder Than it Looks." I suppose as you look out across this landscape of state governments that are now attempting to mimic or expand or otherwise pursue some of the same approach that DOGE did in D.C., what are you thinking about? How are you evaluating these types of initiatives?

Ben Green:

Yeah, I mean honestly, it's really quite distressing. And I guess I do wonder, sort of a question for Maddy, especially about those 11 states, about their range across the political spectrum. But looking at DOGE, in many ways the idea of AI and the idea of efficiency were a pretense for just an incredibly aggressive austerity program. The goal was to make cuts; the goal was to make headlines and statements about fraud that were not actually true.

And increasingly as more reporting has been done about what happened, especially in those early months, it's clear that some of the statements they were making were knowingly false. In my first article, I talked about a tweet from Elon Musk saying, "Millions of people are over 100 years old and are getting Social Security payments. There's all of this fraud and waste going to dead people." And I wrote, "Well, actually that's a known phenomenon. Those people are not getting checks. They're just still showing up." And if he had bothered to ask someone who works in this agency, he would have known that this was not the case.

Now, some of the more recent reporting has gone back to that story and suggested that the DOGE engineer who was working in the Social Security Administration at the time did know that that claim was false and did try to tell Musk that this is not actually what's happening, and he just didn't care. And I think that's sort of a broader pattern that's going on across the board here, where they're really not doing what any good faith effort to enhance efficiency would look like if you care about the quality of government and actually doing government right. And certainly they were not taking an effective approach to thinking about how you would integrate technology, whether it's AI or other types of systems, into government.

So the thought that states now are trying to replicate this is incredibly concerning. I'm honestly curious to know more about how similar the state efforts are. Are they taking the general idea and maybe trying to make a more good faith approach to what it could be, which I would still have concerns about, but which would be better? Or are they also adopting not just the high-level language, but also the tactics of running in, not really caring about the truth, and just slashing and burning?

Justin Hendrix:

In both your pieces, Maddy in your post on CDT's website, and Ben in your most recent for Tech Policy Press, one thing that I think is great about the analytical writing is that you give us some bullet points, things that we can use as an analytical framework. Maddy, I want to come to you first. You point out red flags to look out for when it comes to assessing these DOGE efforts. Can you just walk us through those? What are the red flags that you think listeners should be paying attention to as they try to evaluate what they're seeing out of these state governments?

Maddy Dwyer:

Yeah, absolutely. And I share Ben's concern about what we saw at the federal level. The four red flags that I pointed out are things I think we want to try to avoid in state governments, because on its face, combating fraud, waste and abuse and saving taxpayer dollars is a worthwhile effort. But if we don't have these guardrails in place, it makes it difficult to do that.

So the first red flag that we pointed out was federal DOGE's lack of transparency. One of the major issues we saw about its structure is that we didn't know who was staffing DOGE at first. What was its role within the executive branch and within agencies? What was it legally allowed to do? And what access, and what type of access, did it have to government data? All of those things led to the chaotic nature of its rollout, and I think ultimately contributed to an unfavorable opinion of the work of DOGE, which undermined trust between the government and constituents, because no one really knew what was happening with their personal data.

And we only found out sort of later, after the fact. When we're building government programs that are supposed to enhance efficiency and save taxpayer dollars, I think there's a lesson to be learned here: states should really ensure that details about their government efficiency efforts, including whether and how they will use data and technology, are clearly and promptly communicated. And this is really important, particularly given the public's stake in government processes and services, both as taxpayers and beneficiaries of those programs.

The second red flag that we pointed out was violations of privacy protections. As more reporting came out of the federal DOGE, we saw that its unprecedented access to some of the most sensitive information about tens of millions of people across the country had, at the time of my writing of the analysis, resulted in at least 16 lawsuits alleging violations of six privacy protections, the most common being the Privacy Act of 1974, and spanning eight different federal agencies. So again, we saw a lot of allegations that they were violating longstanding privacy protections. I think this is a lesson for state governments that they also have an obligation to ensure that their government efficiency initiatives follow state privacy and cybersecurity laws. And I think constituents expect that their governments will handle their data legally and with care. So that's another big piece to pull out there.

The third was security breaches. We saw that DOGE experienced numerous security incidents due to lax practices and its move fast and break things approach. This included a lack of access controls, which created an increased risk of fraud and identity theft. A good example of this was a DOGE staffer who actually violated U.S. Treasury policy by emailing a spreadsheet containing unencrypted PII to two GSA officials without approval. And those missteps, I think, also undermined the work of DOGE.

Justin Hendrix:

That unencrypted PII, if I remember, that was pretty much everybody's Social Security number.

Maddy Dwyer:

This one I think was just internal staff.

Justin Hendrix:

I see. Okay. Different matter. Sorry.

Maddy Dwyer:

Yeah, I know, again, making the point that there were numerous security incidents that we saw happen, but again, I think here states can really learn from these mistakes that can be easily avoided if we're ensuring that proper security measures are taken in the handling of data for government efficiency efforts.

The fourth red flag we pulled out was weaponizing government data. A lot of the reporting that we saw revealed that DOGE was actually going beyond its originally stated mandate of eliminating fraud, waste and abuse by breaking down longstanding data silos between federal agencies and maybe enabling immigration enforcement by contributing to tech-driven efforts that consolidate personal data across various different agencies. So I think again here, going beyond the stated mandate of eliminating fraud, waste and abuse is another thing that state governments can learn from, because on its face, cutting down on fraud, waste and abuse is a worthwhile effort. But if you're using people's data for purposes for which it was not originally intended, I think it undermines that trust again between government and their constituents.

And then lastly, a red flag was using AI in actually unproven ways and wasting taxpayer dollars, which sort of goes against the originally stated mandate of a lot of these programs. So we saw the federal DOGE was using AI tools to make high-risk decisions about eliminating government programs and federal employees. And so I think that the lesson learned here is prior to deploying AI, governments at the state level should ensure that the tools they're using are actually well-suited for the tasks at hand. So those are the main red flags that we pulled out and I think serve as great lessons for state government efficiency initiatives.

Justin Hendrix:

Ben, that last red flag, I think, almost translates right over to the types of challenges that you lay out in your last piece for Tech Policy Press on why it's difficult to reform government with AI. Can you lay out those challenges as you see them?

Ben Green:

Yeah, and in a way, listening to Maddy, I was also just reflecting on how it feels to me like DOGE is a story of every way that a government effort, an effort to implement technology and efficiency in government, can go wrong. It's a bunch of examples and recipes for what not to do more than there are actually positive stories to be taken away from it.

So to me, the recognition that this is happening now, that more states are adopting this, is concerning. Even with a sense that there are red flags and there are issues, just by adopting the general language of DOGE, even under slightly different names, there seems to be at least the view that there's something worthwhile here, something worth directly emulating in a specific way. And that speaks to the challenges that I describe in my piece, which overall I would say is oriented towards someone who is openly curious and optimistic about the power of AI to improve government, and tries to go through a sequence of arguments for why AI isn't actually often that useful. And all of these are trying to dispel the gap between what I would call the hype and the reality, where there are these broad claims about what AI can do and how helpful it would be. But the more that you look at AI use and implementation in practice, the more those promises fall through.

So the first issue is just that there's a huge difference between the benchmark tests and evaluations that engineers often run on AI tools and what the AI tool is actually going to be used for in practice. With a lot of AI tools, you'll get bold headlines about AI being able to replace lawyers because they do well on the bar exam, or replace coders because they do well on a test for entry-level software engineers. There are questions about whether those evaluations even measure the AI correctly, but even if they do, there's a huge difference between an AI passing the bar exam and actually acting as a lawyer.

Similarly, an AI's ability to do an entry-level coding exercise does not mean that that AI is able to actually replace human software engineers. Often what's being measured by the test is very different from what an actual software engineer would need to be doing a lot of the time ... And so those types of tests are fodder for headlines about how AI is going to replace everyone, or justification for bosses and managers and policymakers to say, "Oh, well, we're going to throw AI into all of this." They sort of lend credence to those projects. But the AIs actually can't do these types of things.

So just to take one example of coding, a lot of the job of a software engineer is not just writing the first draft of code. It is integrating code within a complex software system. It's following proper security protocols. It's having software that's able to be maintained over time. These are all things that AI coding tools struggle with greatly. And so the idea that just because it can pass these sorts of artificial tasks you could replace people with AI is, I think, really quite foolish. But of course there are many people in the industry who want you to be confused or not be aware of those distinctions.

So the first challenge is really about the fact that you can't just replace people with AI. The second challenge then starts to get into what happens when you bring in AI as a tool for human use. And one of the big challenges is that just because an AI tool has capabilities or seems useful doesn't mean it's actually helpful for workers in a very specific domain. An AI tool has to be integrated into their workflows with contextual knowledge about what types of information they are looking for, what they are trying to accomplish, what constraints they have. And just throwing a chatbot at federal agencies is not very helpful for them.

There was a quote in one article from someone at the General Services Administration after an AI chatbot had been pushed on them by DOGE, and this person said, "It's about as good as an intern, generic and guessable answers." And you can think about AI especially in the context of someone working in an agency with lots of rules and regulations and processes and other teams that they're coordinating with; an AI tool that doesn't know how to integrate with all of that is just not going to be helpful for them.

And the third challenge is in some ways, I think, the hardest for people to wrap their heads around, which is going further into human and AI collaboration, and specifically the challenge of human use of the tools and human oversight of the tools. One of the most common things that you'll hear as a defense of using AI in government is that there's always a human in the loop, that no decisions are made without human oversight. And there's tons of empirical evidence that people are not good at doing this. People almost always defer to AI tools, or at least have a tendency to defer to AI tools; there's a phenomenon known as automation bias. And even when they do override a tool or disagree with the tool, they're very bad at determining when they should do that and figuring out where the AI has been wrong, where it has made a biased decision, or something like that.

So this idea that, oh, there's always a human in the loop provides cover to agencies or to government officials who are pushing AI, but I think it's not actually a reliable form of quality control. And so it can lead to this false sense of security among both policymakers and the public that, "Oh, well, these AI integrations are okay because they're just supporting people." But the reality is that actually getting people and AI to work together well is incredibly difficult.

Justin Hendrix:

One of the things that you make me think of, Ben, in that last comment is something that one of our fellows, Eryk Salvaggio, also wrote about DOGE: the idea that AI is kind of an excuse in many ways, that we're seeing it used as an excuse by politicians to do certain things in certain programs, to change the way that government works. And in many ways it almost doesn't matter what the AI itself does; it really is just an excuse to kind of meddle with the status quo. I don't know. Would either of you respond to that? Do you see that as being true here in some of these cases, that AI is ... whether it functions or performs on some level kind of doesn't matter?

Ben Green:

I would definitely agree with that. I mean, in a way it feels like what the AI is doing is making the implementation of austerity more efficient. It is not making government more efficient. One case that really stands out to me was an article about some of the uses of AI at the Department of Veterans Affairs and how it was used to cut contracts, and two things jumped out. One is that the code that was meant to determine which contracts were acceptable or not was written in a single day. And I have read a lot of stories about bad government. I'm pretty cynical about stuff, and I gasped when I read that line, that someone started a job, wrote code on the first or second day of the job, and then it was implemented. I mean, that is just mind-blowing to me. This is not a team that actually cares about getting anything right. They're just quickly having AI write some code for them and then they're just pushing it out.

But then if you look at what this software was actually doing, they essentially decided that they had a few different categories of contracts and they said, "Some contracts are not ready to be cut, but here are a few categories of contracts that we should be cutting; decide what's in there or what's not." And of course, it did a terrible job. Even if I disagree with some of what they're trying to do, they're not even following practices of trying to get things right, just based on the speed that they're operating with, and there are increasing numbers of stories that show that DOGE staff, and especially higher-ups, are just kind of pushing, "Here's what we want you to do. We don't actually really care. We're not trying to ensure that the system works or anything like that."

So yeah, I think in many ways AI is just a convenient way to speed the process along of identifying contracts while giving a little bit of cover, both justification for why we're finding efficiency, and then also an excuse for when something goes wrong to be able to point the finger.

Maddy Dwyer:

Yeah. And I think also building off of what Ben said, a lot of my work at CDT is also working with state administrators in different state governments and trying to help them navigate sort of this AI moment. And I think it is true that we've heard across many different domains, we're in an AI hype cycle, and I think that there is this pressure actually on many state governments to just use AI. And I think there's a lot of issues with navigating that because I think people are being pressured to use AI, whether it's for just administrative purposes like writing an email or if it's more high stakes like using AI to parse through benefits applications and determine who's going to get access to it and who's not.

I also think it comes down to the point of, we're talking about sort of outsourcing this idea of having a human in the loop with AI. There's also a question of, do state governments have the capacity to actually train their employees to do human in the loop well? And I think a lot of the time when we see these issues come up, it comes down to just not having the infrastructure and the funding to be able to support employees in making sure that AI tools can actually be used well for government efficiency purposes.

Justin Hendrix:

I want to pick up on one thread of what you were just saying, or both of you were just saying. This idea that we're in a cycle, we're in a boom cycle, we're in a hype cycle, and we're in a political cycle. And to some extent, this is the first political cycle, I suppose, in the post-generative AI age, if you will, certainly since the launch of ChatGPT. And we're seeing these companies now stand up their government sales divisions; they want to sell enterprise solutions through to governments, et cetera.

But I'm also struck by the idea that at some point the cycle will change; maybe legislatures and governments will change hands between parties. The technology might change as well. The enthusiasm for the technology might change. If there is that type of cycle change, what do you imagine happens next? I mean, another set of government administrators or politicians come along and say, "Well, we want to use AI but not that way," or "We want to reverse some of the things that were done under this last government." I don't know. I have this feeling that putting more people back into the mix won't necessarily be politically popular. That won't necessarily be the way to solve things. I don't know. What do you imagine? Once you try to implement AI and perhaps do it poorly, what do you do next? Does that question make sense, Ben?

Ben Green:

I think it makes sense. It's hard to answer in some ways because it so varies by who's in power as I try to think this through. I mean, I think the Trump administration doesn't care so much about the things that you were just saying, like, "Oh, we've used AI and it didn't go well." I'm not sure it's not going well for them, if you know what I mean. But if you take a more good faith view, if you think about an administration, whether it's at the federal level or at a state or local level, I do think they will start to see that it's just not delivering for them that much, if they actually care about that. They'll see mistakes, they'll see that their workers don't find it that useful. And this is playing out across the private sector too, where many companies that have invested in AI initiatives and projects have found that it hasn't really benefited them that much.

So I think from there you could see at least a return to, at minimum, the Biden-era types of regulations and guardrails, which I actually still thought were overly pushing AI onto government agencies, but at least had some sense that you should be justifying the use case and ensuring there was some semblance of accountability and efforts to think about the social impacts of the tools. But I think it's hard because the overall tenor of AI adoption in government, and tech adoption generally (this has happened with prior movements like smart cities), is very much, "We have to use this thing. It is this amazing tool." So that makes it kind of hard for these groups to then pivot or update their thinking once they realize that it can't do everything or there's more backlash. You're sort of left in this limbo state of being stuck between a very solutionistic approach to the technology while also recognizing that it can't do much, which I think is hard to square.

And I saw this quite a bit with the smart cities movement, which I previously worked in, where there was lots of adoption of technology, then there was a more critical backlash, and some projects got shut down, especially big ones like in Toronto. And then I felt like the teams that were working on this just kind of didn't know what to do. And I think the fundamental issue here stems from this idea that AI is this thing that we have to use, this very pro-technology framing around use, which was present even in some of the Biden administration documentation and discourse. As opposed to thinking less about the use and more about the capacity to determine whether it's useful, and being able to ask, well, what can and cannot this technology do? What are we actually trying to accomplish?

It feels like for so many people who work on technology and government, the technology becomes the purpose, and there's a kind of losing sight of what we are actually trying to accomplish. Or you get sucked into these very narrow ideas of efficiency, and then that's kind of your whole frame of what making government better looks like. So I think you have to ... what I would like to see at least is a more agnostic attitude towards technology and government. That's kind of the language I like to use around this: it's less about we have to use technology, or even we have to not use technology, but a sense of, what are we actually trying to accomplish? What does a better government look like to us? And then, can technology help? In some cases, maybe, if you do it thoughtfully, and in a lot of cases, no; it's never going to be an immediate or easy solution.

But yeah, I think navigating that divide between the very pro-tech agendas and the critique and backlash leads to a lot of stasis once the first wave of the cycle goes around.

Maddy Dwyer:

Yeah. Ben, I'm going to pick up on a really important thread that I just want to continue, because I think it's really important in this conversation: regardless of the administration at the federal level, and I think even during the Biden administration, we saw this uptick of adoption, of using AI just for AI's sake. Like, "Oh, this is a really new thing. We're really excited about it. How can we use it?" And AI is not always the logical solution to the issue that you're trying to fix. A lot of our work at CDT has been focused on encouraging governments to start from a problem statement and really assess: are there also non-AI alternatives that could be a better fit for solving the issue that we're trying to fix in the first place?

So I think that's a really important place to start in the AI conversation, and I hope that we move away from that. I also hope that we move towards building an evidence base at whatever level of government, because right now, whether it's government administration or the education sector, where a lot of our work also sits, we don't really know if AI is working to improve learning outcomes for kids, or we don't really know if it's making workers more efficient at their jobs. And I think right now is a moment where companies, researchers, and governments should be building that empirical evidence base so that governments across all jurisdictions can actually use AI in ways that improve people's lives, improve the functioning of government, and improve whatever other domains it's deployed in.

Justin Hendrix:

Maybe, Maddy, picking up on what you just said: there's a lot of change very quickly, a lot of deployment of technology, a lot of political change around the deployment of these technologies. What are the two of you watching over the next months and year? What are your research priorities when it comes to watching all of these phenomena unfold? Maddy, you've just pointed out how much work there is to be done to observe and to advise and to criticize, I'm sure, how this evolution is taking place, and whether these various technology deployments are achieving anything that's in the public interest or working against the public interest. I don't know. What are you paying attention to? What do you both think the field needs to pay attention to over the next weeks and months?

Maddy Dwyer:

Yeah, I think I can pick up on that. In this current moment we're seeing an administration that is largely in favor of having no guardrails on AI and of its rapid adoption, and everyone's just sort of picking that up and trying to get government to use AI for whatever reason. I think a big piece of our work moving forward is going to be on the transparency piece, and making sure that companies and governments alike that are actually using AI are being transparent about the things that they're using AI for.

Because, like I mentioned earlier, it is being deployed in really high stakes ways at all levels of government, for benefits determinations and other really essential things, and even the use of chatbots in government. Some people could say that's maybe not high risk, but we've already seen examples of state governments using chatbots that just spit out incorrect information. So I think as we move forward, if adoption is going to increase under this current administration, the transparency piece really needs to be kept up across states. And I think we're going to also be following all of the state laws that are cropping up; especially in the next legislative session, I imagine that AI will still be a big priority for state governments to regulate.

Ben Green:

Yeah. For me, I'd say two things are on my mind. One is the question of AI hype and trying to make more sense of where it comes from, how it spreads, and how it influences particular decision makers, whether policymakers in the states that Maddy's talking about or officials in local governments and so on. And hype feels super interesting to me because it is actually not specific to AI. There's this phenomenon of technology hype generally that spans across multiple waves of whatever the tech of the day is. I was talking earlier about smart cities, and in many ways this moment feels similar to a lot of the experiences I had and conversations I had 8 or 10 years ago about smart cities.

And so it feels like to me, understanding the broader ... You can fight a specific manifestation of technology, whether it's smart cities or big data or AI, but to actually push back and change how government thinks about technology, there needs to be a broader shift, not just about critiquing, "Oh, here's why smart cities tech is flawed," or "Here's why AI tech is flawed," but here's why the way that you're thinking about technology is flawed, and the way that information about technology gets pushed and shared onto these individuals who maybe just want to improve government and do believe that all of these technologies are effective ways to do it.

And the second thing that I would say is related: it's about how Democrats and folks on the left present a vision of government that isn't just tied to efficiency. Because I think the language of making government efficient is obviously connected to austerity, but it also starts from a deep place of distrust. You don't trust human bureaucrats to make decisions for you. You don't trust fellow citizens of a state or a country not to commit fraud. There's this deep distrust that I think ties into the language and the idea of efficiency, especially the way that an agency like DOGE is pushing it.

So I think for the left, there needs to be a path that's not just getting sucked into playing the efficiency game, saying we can do efficiency better, or yes, efficiency is what we care about and here's our way of doing it. Efficiency is one thing that matters, but presenting a vision of government that's based on human dignity and human welfare and the broader public good is going to be important in order to ultimately push back and make these tech adoptions seem less attractive, because once you're in the efficiency game, it's really hard to push for anything other than putting guardrails on AI use.

Justin Hendrix:

I want to encourage my listeners to go check out Maddy Dwyer's post, "DOGE-ifying Government with Data & Tech: What States Can Learn From the Federal DOGE Fallout." That's on the CDT website; it'll be linked in the show notes. And go check out Ben Green's writing. All of his public writing is on his website, benzevgreen.com, but also, of course, you can find his byline at Tech Policy Press. Ben and Maddy, thank you very much.

Ben Green:

Yeah, thanks. Really fun conversation.

Maddy Dwyer:

Thanks, Justin.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...

Related

Perspective
Using AI to Reform Government is Much Harder Than it Looks (June 3, 2025)
Anatomy of an AI Coup (February 9, 2025)
