
Considering Trump’s AI Plan and the Future It Portends

Justin Hendrix / Jul 24, 2025

Audio of this conversation is available via your favorite podcast service.

Yesterday, United States President Donald Trump took to the stage at the Winning the AI Race Summit to promote the administration's AI action plan. As Tech Policy Press associate editor Cristiano Lima-Strong reported, "the 28-page roadmap recommends a raft of policies spanning federal procurement, research and development, infrastructure, energy, and more." The document says it is a path for the country to "achieve global dominance in artificial intelligence."

There's a lot to consider in the plan, including details like its instruction to the Federal Trade Commission to revisit and potentially unwind any investigations or legal orders that could "unduly burden AI innovation." Another notable provision says that "the federal government should not allow AI-related federal funding to be directed towards states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation."

Trump also used the event as a signing ceremony for three executive orders on AI. The first, titled "Accelerating Federal Permitting of Data Center Infrastructure," seeks to steamroll any obstacles to the rapid construction of data centers and the energy infrastructure to power them.

President Trump:

To ensure America maintains the world-class infrastructure we need to win, today I will sign a sweeping executive order to fast-track federal permitting, streamline reviews, and do everything possible to expedite construction of all major AI infrastructure projects. And this will be done. You will get so good at service in so many ways, not only from [Environmental Protection Agency Administrator] Lee [Zeldin] with the environment, but you need many other types of permits. And you're going to go so fast. You're going to say, "Well wait a minute, this is too fast. I didn't expect to go this quickly. This is a problem." I may cause a problem in opposite, but including factories, data centers and power plants of all kinds, the United States will have total industrial and technological supremacy.

July 23, 2025—US President Donald Trump announces his AI Action Plan and signs executive orders at an event hosted by the All‑In Podcast and the Hill & Valley Forum. Source: White House

Justin Hendrix:

The second order, titled "Promoting the Export of the American AI Technology Stack," is aimed at getting the rest of the world hooked on American AI.

President Trump:

So today I'm signing another major executive order that will turn America into an AI export powerhouse. Under this order, Secretary Lutnick and Secretary Marco Rubio, Marco Rubio is doing a great job as Secretary of State, will work to rapidly expand American AI exports of all kinds, from chips to software to data storage of all kinds. And that's very important. That's going to give you the freedom to do what you want to do.

Justin Hendrix:

And the third order, titled "Preventing Woke AI in the Federal Government," rails against diversity, equity and inclusion and says the federal government, "has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas."

President Trump:

And in just a moment, I will be signing an order banning the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas such as critical race theory, which is ridiculous. And from now on, the US government will deal only with AI that pursues truth, fairness, and strict impartiality. We're not going to go through the craziness that we've gone through for the last four years and then we skip four and then you go back and it started then, but it hung around a little while. Now it's not hanging around at all. Now it's actually very uncool. As somebody told me the other day, it's so uncool to be woke.

Justin Hendrix:

To help make sense of the AI action plan, I invited three experts to my office in Brooklyn to discuss the implications.

Sarah Myers West:

My name is Sarah Myers West, and I am the co-director of the AI Now Institute.

Maia Woluchem:

My name is Maia Woluchem. I'm the program director of the Trustworthy Infrastructures team at Data and Society.

Ryan Gerety:

And my name is Ryan Gerety. I'm the director of the Athena Coalition.

Justin Hendrix:

I'm excited to have you here in my office in Brooklyn. I don't ever do this where I record the podcast in person, so this is an experiment for me. It's exciting, but since you are all New York-based, we felt it might be good to get together on what we knew would be an important day to talk about artificial intelligence policy.

Today is Wednesday, July 23rd and we have, hot off the presses, a new AI action plan just published by the White House. We're expecting, I believe later today, additional executive orders from the president, which will also relate to artificial intelligence policy.

So we're going to talk a little bit about the plan, a little bit about the context for the plan, and try to get your first reactions to it. This is not unexpected. This has been in process for some time; that OSTP process ran from February. I think there were many thousands, maybe even over 10,000 comments submitted by individuals, by civil society organizations, and of course by corporations.

But Sarah, I might start with you, about the policy moment we're in and what you anticipated leading up to the announcement of this plan, based on what we've seen from the Trump administration over the last few months.

Sarah Myers West:

So if you take a step back and look at what brought us to this moment, the Trump administration from day one has taken stances that bring the tech industry right into the heart of the White House. The authors of this AI action plan include David Sacks, who's a well-known VC; Russell Vought, who's responsible for Project 2025; and Michael Kratsios, who used to be high up in the ranks at Scale AI, which is an AI firm that's focused on defense applications. And I think that that's really what we see reflected in the material that's just been handed out.

What's unique about this moment though is that this is going to be the first time that President Trump himself is weighing in on AI. He'll be speaking at a podcast taping at the Hill and Valley Forum in front of a room that's going to be filled with tech executives, venture capitalists, and probably some big oil folks as well. So I think this is a plan that's very much reflective of the interests of the tech industry and is speaking directly to it.

Justin Hendrix:

And tell me, from AI Now's perspective, how do you plan to engage with these executive orders over the next couple of days?

Sarah Myers West:

We and a whole group of other organizations took a look at what the tea leaves were telling us about what to expect here and co-convened a group around an alternative plan because honestly, this is not the only path available to us on AI. And there are lots of ideas that are coming from groups like Ryan's and Maia's and many others on alternative paths that are going to serve the needs of everyday people better. So we convened this group of now over 100 organizations around laying out a People's AI Action Plan that puts the needs of everyday people ahead of corporate profits.

Justin Hendrix:

Maia, I want to come to you next. You've been doing a lot of work on infrastructure in particular, thinking about data centers. You were paying close attention when the president was in Pennsylvania a couple of weeks ago, to the goings-on at Carnegie Mellon and to the speech, if we can call it that, that the president gave that afternoon during the event. What are you reading this report for?

Maia Woluchem:

Yeah, I'm reading this report for a few things. One of the concerns that we have as a team, and we're really focused on Pennsylvania, is what the impacts are going to be broadly for the overall AI environment as it lands on people in the places that they live. And we're really concerned about this broader path dependency that's being created by the deep deregulatory environment that was spoken about on the stage in Pittsburgh, and that certainly is part of the AI action plan and will be part of the EOs as well. What we're seeing in the signals right now is a real push for speed, and what speed often does is clear away a lot of the regulatory teeth that we might otherwise have: to provide extra scrutiny of the companies that are coming to towns, to ask questions about why labor is developing in certain ways and not in others. I think we're broadly concerned about what this overall environment is enabling to fall on the heads of city council people, counties, and regions in the states that are really going to be impacted by this work.

Justin Hendrix:

So we're talking about streamlining permitting, we're talking about waiving or modifying environmental regulations. We're talking about rejecting the radical climate dogma in order to roll out artificial intelligence. I mean, Ryan, you're spending a lot of time with folks on the ground in communities across the country who are engaged in the local politics of the AI infrastructure project. What does this mean for the people you work with?

Ryan Gerety:

I mean, for the last several years, as Big Tech was growing their infrastructure footprint, and especially in the last two or three years, states have had to contend with these gigantic multinational corporations coming down, buying huge swaths of land, demanding as much electricity as an entire city would use, as much water as a town would use, providing very few jobs, and then asking for tax breaks. And this has been happening all over the country, Pennsylvania, Virginia, Indiana, Ohio, Oregon, Nevada, on and on.

And with this order, Trump is basically clearing the way for these couple of mega corporations to come and do whatever they'd like with no public input, with no state oversight. And I mean as a result, it's not just that it's undemocratic and people won't have a say. And it's not just that it's a really poor use of public resources and opportunities for states to be using all their excess electricity on an industry that's creating very few jobs.

But if states create proper guardrails, you can see more innovation. We can see in this order that because they're trying to throw out regulation, we're going to get a less innovative tech sector in the US, because it's the guardrails that allow companies to innovate in a positive direction. And so for people impacted by data centers, it's not only going to raise their energy prices and burden their grid; we're also just going to see a worse tech sector.

Justin Hendrix:

Maia, do you have a similar kind of reaction on the data center front and the infrastructure front?

Maia Woluchem:

Yeah, I have real concerns about… I think the framing is correct, but the way that the conversation is being had, and was had in Pittsburgh, is that these are spoils that are going to fall on everybody. Isn't it a great thing that we have? We're going to be bringing this opportunity for innovation, and everybody's going to have a job in that space, whether it's in electricity or in the HVAC that we need for the data centers. But that is a very faulty framing, one that still positions everybody in that space as a worker, as a contributor, but not as somebody who's actually receiving the benefits of this broader innovation sector, not as the head of a multinational tech company that's going to be reaping the benefits in terms of millions of dollars, billions of dollars, over time.

So we are really concerned about the narratives that are really taking root in a lot of these places that are enabling some of these really pernicious tax deals, enabling some of this stuff to happen out of the public eye and how this is sweeping up so many of these localities into this broader idea that okay, it's an exciting idea to be part of this innovation sector knowing that these are just really thin-on-their-face ideas.

Sarah Myers West:

I really want to look under the hood at these projections about economic growth. And I know that there are organizations that have been doing this within their communities, right, Ryan, that are saying like, okay, what do we actually get for the tax breaks that local governments are offering? Because just on its face it flies in the face of common sense, because data centers are not places that are teeming with human activity. So the assertion that it's going to bring a ton of jobs and a whole lot of economic benefits to communities doesn't really feel like it's going to check out.

And when I have looked, for example, OpenAI has been shopping around this white paper called Infrastructure is Destiny, and if you try to check the footnotes in that paper, all of the links tend to break, or they lead you down a rabbit hole to a paper that's not available anywhere on the public internet, one that says our consultants came up with these numbers. So I think it's worth digging deeper into whether these claims about benefits to communities are going to hold. And I think all of the evidence suggests likely not.

Justin Hendrix:

Are any of you able to speak to any of the specifics right now about the environmental rollback particularly? I mean I know we've got some questions around NEPA and things of that nature. Did you have a chance to review any of that, Ryan?

Ryan Gerety:

Until the executive order comes out, I think we don't have very many details, but there are a few things that are concerning. One is that any mention of climate is to be removed from NIST guidelines around AI, and this is in the period of the next few years that is most critical for our planet. Those considerations will be absent. Then you're talking about agencies that are now specifically precluded from regulating, and agencies that have been dismantled. When the executive order says that states' regulatory environment will be considered, it means that if states put any guardrails around water use, electricity use, or protecting ratepayers from data center use, they'll be less likely to get federal monies. Now, I don't think we know much about the shape of what the federal money is and isn't going to look like, but that sets a pretty scary precedent for states that are absolutely desperate for investment, especially right now as federal funds are being cut off. It's basically telling them: you better do what we want or you won't get these funds. It's profoundly undemocratic.

Justin Hendrix:

And this follows on the proposed moratorium, which didn't end up in the budget bill, but it's a kind of softer moratorium, I suppose, using the federal government's heft and procurement weight to try to fiddle with the regulatory ecosystem in the states. Is that something you looked at particularly, Sarah?

Sarah Myers West:

Yes. So it's using the powers that the executive branch is able to utilize here, which is the federal agencies' ability to disburse funds. And the indication in the plan is that federal agencies are guided to take into consideration states' regulatory environment. So what kinds of laws do they have on the books, what are the positions of the state attorneys general, are they enforcing a lot of cases on AI? And that will be taken into account in making determinations about the disbursal of federal funding. And it doesn't specify that this is necessarily limited to funding related to AI. It could be other pots of federal funding; it's not quite clear. But if there's a restrictive environment toward AI, you might not be getting federal funds. And that's a significant stick to wave at states that are already facing cuts to their budgets left and right.

So I think it's important to couch this in a moment where the moratorium was voted down 99 to 1. It was widely a bipartisan move to say that this is absolutely nonsensical; it makes no sense to give an entire industry full impunity for a decade. Even Sen. Ted Cruz (R-TX), who was the proponent of this position, ultimately voted against it. And at that moment in time, when the moratorium was being considered, the Trump administration didn't have a clear stance. Now this action plan indicates a doubling down on that posture, and it's a signal that Congress could now pick up and run with. So my fear is that without significant pushback against this position, we could see a revival of this ludicrous proposition in Congress again in the fall.

Justin Hendrix:

Maia, I want to bring you in a little bit more on the energy environment aspect of it. Are there other things that you looked at in the document today that stood out to you?

Maia Woluchem:

I do have concern broadly about the rush towards unleashing, particularly in the state of Pennsylvania, a whole range of energy sources that we already know are quite noxious and pernicious in terms of keeping us from avoiding the worst of our climate impacts. So for example, we've spent quite a bit of time on our team around Three Mile Island, famously the site of the worst nuclear accident in the US. And, neither here nor there, I think nuclear is often proposed as one of these potentially less carbon-heavy solutions than coal or fracking, particularly in the state of Pennsylvania. But we also know that the harms of a disaster like that can fall really specifically on the communities that surround the nuclear reactor. And as part of the broader plan to unleash energy across the US, there is a plan to unleash the power of the coal industry, bring us back to fracking, particularly in the state of Pennsylvania, as well as invest in small nuclear reactors that can be co-located alongside data centers.

Now, while I don't think I'm the foremost expert on our rush towards energy independence, I do think we have real concerns about the focus on industries that we already know have incredibly high health, environmental, and water impacts, especially in an environment where we are already deregulating away our ability to hold on to our EPA standards and our public health infrastructure. And certainly as it relates to jobs, I think it's creating such a catch-22 for towns and localities. They have to decide: do we just not accept this data center, and not accept this idea that we could possibly have jobs and an energy future in the near term? And they have to hold that in concert with the possibility that they might also be losing access to their water, facing rising energy prices, and inviting a world in which we have rolling blackouts and can't see the sky because it's full of smog and all sorts of other externalities of these broader energy ideals.

Justin Hendrix:

It reminds me of a conversation I had last week. I was talking to someone in Virginia who was saying that they have had a lot of heat waves recently, and some of the data centers have actually brought their diesel generators online in order to supply energy back to the grid, to try to help load balance, I suppose. But it was the first time that a significant number of them had turned on their diesel generators, and it was the first time people had really smelled the exhaust and the fumes. People were concerned; they were calling their local officials saying, "Oh my god, what is this? What is this?" And it made me think about a potential future where we're dealing with those types of localized effects of either diesel or other generation in real time. But I don't know.

I mean another question for me though around the economic opportunity, and I feel like if you are in some of the places, Ryan, where you're working, whether it's Indiana or Colorado or some other place, maybe you've had a community that's been focused on manufacturing, there aren't a lot of opportunities coming your way these days. Big data center comes to town promising tax revenues, huge investment, and potentially it seems like part of the proposition is often we're going to bring you into the 21st century. We're going to bring you into the tech economy, essentially. What's the calculus these local officials are making?

Ryan Gerety:

Yeah, I mean I think it's important to realize it's not like these officials are selling out the public. Often, they really do need the tax revenue and they're desperate for something. Our economy doesn't need to work like that. We could have industry all over the country; it doesn't have to be concentrated in California and New York, our profit-making spheres. We certainly could do it better. But right now they're desperate, and even if they're giving away public money, and giving these data centers tax breaks sometimes far into the future, it's still revenue, it's construction jobs.

In the end, the data centers might employ 20 people, but it's construction jobs to build them. If you know you're going to build 10, it's more construction jobs. And so unfortunately there's a real calculus. The trouble is, in a state, if you have enough public control to sit down with these companies and say, "Look, you need to do X, Y, and Z," it can be a better deal for the public, but right now the public has very little strength and the executive order takes some of their power away. So they can't really come and say, "Okay, you want to use all this electricity, we're going to need all these grid upgrades, we're going to need all this renewable energy, you're going to pay for it, but then your home will be here." But they mostly lack that leverage that they should have.

Maia Woluchem:

I think this point about the trade-offs is really salient for folks on the ground. We talked to some folks in Eastern Pennsylvania who said it's either the data center, or it's the local jail, or a regional jail, or an ICE facility, or some other kind of thing that we already know is not necessarily a local benefit. At least on its face, it's true, a data center has this idea that okay, we have construction; it might be short-lived, but at least it's better than the alternatives. And I do feel it's really unfortunate, particularly for these post-industrial cities that have this really deep... There once was a time when these were places that had such incredible pride.

They made things; they made all the steel that's part of all the bridges in New York. They were really fomenting a real sense of industry and really strong labor unions. And in the absence of those things, I think there really is often a kind of traumatic gesturing towards that past, and a real desire to put... I mean, anything there that can gesture at what else is possible. And I do think it's a little bit unfair that localities are having to struggle between these decisions that, in the long run, I think are setting them up for something really difficult.

Sarah Myers West:

Especially when the cuts that are causing these budget gaps are coming from the administration too. It's not as though they've come from nowhere. It's that local officials are being handed this raw deal, where on the one hand there are significant cuts to educational budgets and to healthcare, and on many fronts companies are not paying their fair share in the form of taxes. All of these things, if they were remedied, would put local officials in a position where they would not have these significant budget deficits that they have to figure out how to balance by making very difficult trade-offs that, in either instance, are going to cause harm to the communities they represent.

Ryan Gerety:

I think the irony here is that the Trump executive order says many times that deregulation is the key. Deregulation is the thing that for the last 40 years has hollowed out most of America. And the second irony is that as the tech sector grew, money flowed to a few companies largely in Silicon Valley and out of the middle of the country. So the very conditions that people are living under in most of the country, many of which created so much anger that they voted for Donald Trump, were underpinned by deregulation. And that is exactly what he's proposing to do right now: further deregulate. We know where that model leads. It leads to the hollowing out of places, where communities end up desperate for investment. So I think there's a deep irony buried in here.

Justin Hendrix:

AI is the ultimate concentrator of power, so we can anticipate perhaps that problem will just get worse. Let's talk a little bit about the labor and employment picture.

Sarah Myers West:

I think that this stance on labor that's articulated in this plan is a feint at economic populism by offering reskilling. It's like: we're going to take away jobs, we're going to devalue work, but we'll just offer you these shallow things in exchange, in the form of reskilling, that do nothing to perturb that underlying trajectory, when what we really need are policies that put workers at the center and in the deciding position on whether and under what conditions this technology gets rolled out. I'm sure you have a lot more to say.

Ryan Gerety:

Well, the thing I'd say is it's very thin. It's interesting that the Republican Party now feels like it's important to mention workers, when it is the party that has ensured that unions would wither in this country. So it's hard to take seriously, but when you look at the details, it's also clear there's very little there. They hand-wave about retraining and reskilling, which is kind of the default thing you would say. They talk about agencies doing that work that they've completely gutted; I don't know who in the Department of Education, which they mention, is going to hold down these things. There's nothing mentioned about unionization, and we understand that when a lot of workers are unionized, we can shift industry, we can shift technology in a positive direction. In this country our unionization rates are too low, so we can't take any of that seriously.

Workers have no ability to shape their technological future. And then the second piece is that the bar against any state regulation isn't just around data centers; it's any kind of AI regulation. So anything around worker surveillance, automated management, very basic guardrails, that would also potentially be prohibited. So American workers are going to be subject to the whims of these corporations, and we're talking about union busters like Amazon and Tesla, who are trying to say that the NLRB is actually unconstitutional. This is a handout to those very same companies, so it's pretty difficult to take it seriously.

Justin Hendrix:

Yeah, we don't see a uniform or monolithic reaction on AI from the labor side of things. We are seeing some of the unions, for instance the AFT, and I believe even the AFL-CIO, set up a kind of AI lab or partnership. There are various other unions that are excited about AI for other reasons. The International Brotherhood of Electrical Workers and some of the other construction trades are very excited about building data centers, for instance. I don't know, how do you think about labor as a kind of counterbalance here?

Maia Woluchem:

Yeah, I feel like Pennsylvania is such an interesting test case for this, for all of these reasons, right? Pennsylvania is a very purple-y state; people are always talking about the white working class in this particular place. And I feel like we saw so many of the narratives laid out very cleanly on that stage at the Energy and Innovation Summit that happened last week, co-sponsored by Senator McCormick in Pennsylvania, which Trump attended and made a big show of. And in that room there was so much conversation about labor that didn't include labor, for all of the reasons that are noted, right? Pennsylvania is highly over-indexed in terms of the conversation around the trades, around electricians and HVAC folks and tradesmen who are going to be building this future. And at the same time we have folks representing universities, really incredible public universities that are necessary across the state of Pennsylvania, like Penn State, like Pitt, where I'm a graduate from.

They're also talking about reanimating this vision of education so that you don't need an education, so that we don't necessarily need to be investing in other parts of the economy that could lead to a different kind of labor, other than manufacturing, other than these very short-term things that we already discussed are really thin. So really, I don't buy the argument. Looking at it on its face, just following the words they're naming, the arguments are very seductive, particularly in a place that is important for election season. I think it's really an incredibly pernicious way to consider what labor means to this country, and broadly what labor means as kind of an arm of democratic functioning. I just find it very fatalist and really strange.

Ryan Gerety:

A couple of other things: I mentioned that Amazon and Tesla are union busters. Google recently had engineers stand up and say, "Google AI shouldn't be used for war crimes," which seems like it would be an uncontroversial position to take. Google fired 50 of those young computer engineers. These are companies that have not been good to their own workers. It's also an industry that concentrated a lot of wealth into the hands of a few people. Look at San Francisco, look at Silicon Valley: huge amounts of poverty and then some very, very rich people. It concentrated all the money in America into those pots. So this is an industry that was allowed to create rampant inequality in this country.

It didn't have to be that way. We could have made a different decision if we had different politics, if there wasn't so much corporate money in politics, we could have made a different decision. But that's the kind of industry that we're talking about right now. That's the tech industry that we've allowed to develop. So for workers all over America, even if you're not connected to the tech industry, the deregulation that even existed going back to Clinton has been bad for American workers. There's a lot we could say about that, but a lot to be worried about.

Justin Hendrix:

I want to ask a few questions about what this plan says about the United States' role in the global AI competition. Of course, the word that comes back to me from JD Vance at the Paris AI Summit is dominance; that's what we're after. This feels to me like sort of a my-way-or-the-highway kind of plan. Pillar three is to lead in international AI diplomacy and security. It feels like the idea is to use every aspect of the American government's leverage to ensure that the rest of the world adopts American AI systems first and foremost. Is that accurate, do you think?

Sarah Myers West:

So that frame of dominance, I think you're right to pull it out as really the animating feature driving through the entire plan. It's what motivates the whole section on infrastructure expansion; it's what motivates the deregulatory push. The idea is that the US, in order to achieve its prowess, must take a completely no-holds-barred approach to this sector and go all-in on a few firms. And if you look back in history, this kind of stance, really doubling down on building out monopolies, doesn't work. One, from an industrial policy standpoint. And two, it's exactly what the industry has been pushing for nearly a decade now. The idea that there's an arms race that US AI firms are engaged in with China specifically has been used time and again by corporate leaders from the tech sector anytime Congress is close to passing laws that would regulate the sector. It revived itself when Congress was considering a package of antitrust bills that would strengthen scrutiny of the tech sector several years ago.

It revived again when Congress was considering the Algorithmic Accountability Act. And now, in the last year or so, we've seen a new version of this set of arguments that couples a deregulatory stance with a doubling down on investment in the tech sector, specifically in ways that are going to de-risk the portfolios of the companies that have over-leveraged themselves. And conveniently, this came about at the very moment that the business sector took a big step back from AI. It revived at the very moment that Sequoia and Goldman Sachs released reports saying that there might be an AI bubble and it might be a risky place to be placing your investments. That's the same moment that the tech sector really stepped up its lobbying of governments around the world. It's when Jensen Huang launched his marketing blitz around sovereign AI and began petitioning heads of state all over the place. So as for this notion of an arms race, there's sufficient evidence that it's a self-serving PR tactic, one that has been quite successful for these companies and has let them push through the regulatory stances they've wanted to see for quite some time.

Ryan Gerety:

I thought this undercurrent was interesting. It's like the federal government is going to use its full power to force other countries, our schools, healthcare, the Department of Defense, to buy as much AI as possible. If you want to know when investors might be overinvested, it's when they tell the President of the United States, "You have to get everyone to buy." Otherwise, I think they could sell their own product themselves. Clearly in foreign policy, he's continuing this position of antagonism against all countries, allies and adversaries alike. It's just a continuation of his existing foreign policy. Clearly on AI, you could take a different position, which many people have articulated: we should lead the world, we should be partners with the world to make sure this huge technological revolution benefits everyone. Obviously that's a position Donald Trump isn't going to take, so it's maybe not surprising that the key animating thing is dominance. But this undercurrent of "we're going to force you to buy stuff because we're worried our investors are over-leveraged" is interesting.

Sarah Myers West:

You had told me a story about how this was showing up in one of the local data center fights, where local officials were feeling pressured to take particular stances because of national security.

Ryan Gerety:

Yeah, I mean it's shown up in a couple of ways. We've heard some public officials in states say, this is our national duty, this is a thing that America needs to do and our state should serve its part. Which is kind of like, that's nice, that's something we should all feel, let's all do our part for each other, but they're doing their part for a couple of huge multinational tech monopolies that have more than enough money. And so it's kind of sad. Also in Virginia, the Department of Defense weighed in on state bills that were asking for very straightforward things like transparency around energy use, and the Department of Defense stepped in and said, "Whoa, whoa, national security." And people in Virginia were like, "What? We just want to know the electricity use of a big sector in our state and you're telling us no."

Justin Hendrix:

So ask not what AI could do for you, ask what you can do for AI. Is that where we appear to be headed?

Sarah Myers West:

It seems to be, yeah.

Maia Woluchem:

Just really briefly, I'm thinking back to the Paris AI Summit. It's hard to look at this language around dominance, in particular, outside the context of what many other researchers have named: a kind of colonialist, imperialist version of what AI dominance looks like. And I'm thinking of a couple of colleagues who are in global majority countries. I mean, it's just very clear. You see very clearly how AI use, as it's been determined by the US government, is a very bubbly sector, with that broad application of, oh, we need this for international agriculture and we need this for education.

That forgets or forgoes entirely the idea that there are many, many different epistemologies and ways of being across the globe that do not adhere to American standards for being together, and the ways in which we already know we approach bias, and in many ways relish some of those things, in our systems. I just don't see a way in which this version of dominance doesn't meet incredible complication from partners around the globe. So I don't know. I'm very skeptical of this broader vision of global dominance as articulated in the plan, and as it has been articulated by many other Western entities throughout this AI race.

Justin Hendrix:

Of course, a lot of the conversation around this AI action plan, maybe the most press-friendly aspect of the plan, has been around the wokeness of AI, or "woke AI." There's this question around ideological bias, and the Trump administration is trying to get the tech firms to essentially root out what it regards as social engineering intended to avoid various types of outputs that some might regard as dangerous or hateful or bigoted in other ways. I don't know, what do we make of this plan and where it takes us in that regard?

Ryan Gerety:

I think the way I read it is: your AI is going to say what we want it to say, and that is the only acceptable version of the truth. It's couched under the heading of prestige, but then it mentions a set of things that it shouldn't be biased on, and we know what that means. They mention terms like climate change and diversity and race, and so I read it to be a dramatic expansion of... or I guess I would say, what did I call it in my notes? It's really the establishment of a censorship regime in the technological space. Like, this will be allowed and this will not be allowed.

Justin Hendrix:

So go a little deeper on that, because some people might hear what you're saying and think, "Well, it's the opposite. We're trying to avoid these systems being censored by creating rules that essentially control their outputs." How would that be censorship in your mind?

Ryan Gerety:

I mean, we could go back to the text of the order, and of course you could read it in a generic way: yes, we all agree AI should not spread or create misinformation, a term which the president said would be removed from the standards. We think AI should provide fact-based materials. There are problems with all these things, that's true. But when it's deployed under the Trump administration, we see him going after universities, telling them to fire professors, telling them not to let people into the country who have particular political beliefs. We see the degradation of free speech in this country. When someone is refused at the border because they posted something about Palestine, that's limiting free speech. When a professor is fired, when Harvard is told to do X and Y, when Columbia does what the Trump administration wants, we see that as a closing of free speech. So this guidance, taken in light of that, I think makes it clear what's going to happen. It still remains to be seen, I suppose, for some people.

Sarah Myers West:

I mean, I think it's also just revelatory of how this action plan reflects the wish list of the right wing from within Silicon Valley, because the obsession with woke AI is something that David Sacks, for example, came into the White House all worked up about, and now it's made its way into the line and letter of the executive branch's stance on AI.

Maia Woluchem:

I think given all of the other threats to other sectors of our institutions, I might add DOE, healthcare, I don't know, environment, defense, I certainly have real concerns about even the fact that this woke AI was named, knowing the impulse and the excitement about speeding whatever this version of woke AI is through all of our government systems. I think we absolutely have real reasons to be incredibly concerned about this vision of the truth being applied to healthcare benefits, being applied to decisions about college, decisions about funding for states and localities. There are so many ways in which we can see points of friction in the US and abroad, especially in this environment, given the fact that we don't have as many hooks and teeth as we might have otherwise had to be able to track those incidents, to be able to legislate against them, to be able to sue if we need to.

I totally agree it's one of the more worrisome parts of this actual plan, absolutely. But I do think it's also a signal: if we don't have the federal government, if we are losing the ability of states and localities to legislate around some of this, and we certainly don't have the trust of the companies, or at least the goodwill of the companies, then I suppose there's a fear but also an opening for what else is possible, and I think that's worthwhile for us to be exploring in this new version of this administration.

Sarah Myers West:

I think, to your point, the Pentagon just announced late last week that it was giving four multi-million dollar grants to the creators of foundation models, and I think the list included OpenAI, Anthropic, Google, and xAI. So we're seeing the use of these commercial AI models by the Pentagon in national security use cases, the use cases where we're going to have the least amount of scrutiny and transparency and external accountability, in combination with this woke AI provision that's going to, I think if anything, incentivize the production of AI that is wired toward racist outputs. That's really deeply worrisome.

Justin Hendrix:

When I read a document like this, I'm attuned to a couple of things. One is: what is this document saying about the relationship between government and corporations? I'd be interested in each of your views on that, on the primary balance of power between the federal government of the United States of America and its technology firms. And then the second, maybe last question, which each of you can think about a little bit, is: what does this future look like that this document is describing? What images does it put in your mind if, 10 years from now, the basics of this document have informed policy decisions, have informed corporate investments, have informed government investment? What does the world look like when the basics of this document begin to take form in the real world?

Sarah Myers West:

Personnel is policy. From day one, the Trump administration brought venture capitalists, the CEOs of tech companies, and defense tech firms into the fold to populate the policymaking apparatus around AI. And I think that this document is a reflection of that stance, where the corporate interest is being rubber-stamped as the national interest under this administration. This plan promises us that the development of AI is going to foment a renaissance, that this is going to deliver innovation for Americans. But if you look at the interests of the folks that it puts at the pinnacle, their vision doesn't go very far.

And where it goes is the development of technology that ratchets up mechanisms of surveillance, of control, of coercion; that scales inequality; that degrades our environment and harms community health; that disenfranchises the public. So I think what we've seen is an AI action plan that, if anything, undermines the interests of the people in this country. And I think we cannot accept that, and we need to be pushing toward an alternative path, one that really focuses on shared prosperity, on a sustainable future, and on uplifting our collective power rather than undermining it.

Justin Hendrix:

Yeah, I guess one of the reasons I'm asking that question is that there's a way you could read this in which the federal government is recognizing that it is, in a way, in service to the tech firms. Its purpose is to enable the tech firms to achieve their goals, and somehow by doing that, the government will achieve its goals, which seems to me to be a kind of inversion of where we typically think about starting in making industrial policy.

Sarah Myers West:

I think you're absolutely right, and that inversion took place under the last administration too when Jake Sullivan said the US government needs to be a better customer to this industry.

Justin Hendrix:

Yeah, I think that inversion was also apparent in things like the national security memo.

Sarah Myers West:

Exactly.

Justin Hendrix:

So in some ways, this is a continuation of that.

Maia Woluchem:

I think we have a lot to learn from the folks who've been thinking about procurement for a long time in the background of many of these decisions, because so many folks in that community would say, exactly to this point, that there have been, increasingly over time, tighter handshakes, deeper hugs, very, very tight relationships between the tech sector and the government. And I do think something is particular to this time. When we are seeing the rise of authoritarianism, it seems very aligned that there is the ability to really accelerate the aims of institutions that greatly desire to see many people not have access to their general well-being and health and a dignified life.

And so I certainly worry about it. The foundations have been around for a long time, but I am less surprised and maybe more worried that this is a really salient opportunity for the aims of both of these institutions, one being some of the nefarious actors in the tech sector, and the other being the nefarious actors in our federal government, to really take hold of this opportunity and accelerate their wildest dreams.

Ryan Gerety:

I mean, what I thought about, really, even as I just read the opening quote, was that this is marking a very particular moment where you have a president who for a long time was antagonistic toward Silicon Valley and tech. Even if you look back eight years, that relationship was more antagonistic. He campaigned against them, which is popular, because people do not trust these companies and these companies have not been great to us. He campaigned on that. They were also antagonistic toward him. They were, in some ways, real or not, trying to uphold some liberal democratic values. We can argue about whether or not that was real, but there was real antagonism.

And you have a sector which, and now it's been a long time, 30 years, was initially built on more liberal, open, free ideas. It's really hard to remember any of that now, but I remember that period. Now we are at a time when Trump has realized, oh, these guys are just like any other billionaires. They're monopolists, they need me, and they're willing to bow down very quickly and make a deal, make me look good. I can make them some extra money. We can travel around the world together and make deals. So he realizes they're perfectly happy to do that. And I think the shift with the tech sector is that, because they've become such huge monopolies, they now actually need an anti-democratic government to hold up their power, because people no longer support them being completely unaccountable and unregulated.

Justin Hendrix:

You need too big to fail.

Ryan Gerety:

And they need an authoritarian government to hold up their power, because a democratic one absolutely will not. You look at their response to Biden's very modest steps to rein in Big Tech, and they are so mad about it, and these were important, but small, steps. I think they saw the writing on the wall: we can get this guy who's absolutely illiberal, anti-democratic, but he will have our back and he will do our work. And I think in some ways it's all in this document.

Justin Hendrix:

I appreciate you all joining me here today. I am grateful to you for poring through this document very quickly this morning, right after it came out, so that we could have this informed conversation. I look forward to talking to you about all these things again. Thank you very much.

Ryan Gerety:

Thanks so much.

Sarah Myers West:

Thanks.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...
