Podcast

A Critical Look at Trump's AI Executive Order

Justin Hendrix / Dec 14, 2025

Audio of this conversation is available via your favorite podcast service.

On Thursday, US President Donald Trump invited reporters into the Oval Office to watch him sign an executive order intended to limit state regulation of artificial intelligence. Trump said AI is a strategic priority for the United States, and that there must be a central source of approval for the companies that develop it.

Today's guest is Olivier Sylvain, a professor of law at Fordham Law School and a senior policy research fellow at the Knight First Amendment Institute at Columbia University. He's the author of "Why Trump’s AI EO Will be DOA in Court," a perspective published on Tech Policy Press.

What follows is a lightly edited transcript of the discussion, including audio from the White House signing ceremony.

President Donald Trump:

Well, thank you very much. We have a big signing right now and we have a tremendous industry where we're leading by a lot. It's the AI artificial intelligence. I always thought it should be SI, supreme intelligence, but I guess somewhere along the line they decided in the word artificial and that's okay with me. That's up to them. It's a massive industry.

Justin Hendrix:

On Thursday, President Donald Trump invited reporters into the Oval Office to watch him sign an executive order intended to limit state regulation of artificial intelligence. Surrounded by some of his key advisors, including White House AI and crypto czar David Sacks, Commerce Secretary Howard Lutnick, and Treasury Secretary Scott Bessent, Trump said AI is a strategic priority for the United States and that there must be a central source of approval for the companies that develop it.

President Donald Trump:

I think most people agree, but there's only going to be one winner here and that's probably going to be the US or China. And right now we're winning by a lot. China has a central source of approval. I don't think they have any approval. They're just going rogue. But people want to be in the United States and they want to do it here. And we have the big investment coming. But if they had to get 50 different approvals from 50 different states, you could forget it because it's not possible to do. Especially if you have some hostile ... All you need is one hostile actor and you wouldn't be able to do it. So it doesn't make sense. I didn't have to be briefed on this, by the way. This is real easy business.

Justin Hendrix:

The executive order Trump signed Thursday was similar to a draft that was circulated last month. The document gives various responsibilities to agencies, including the Department of Justice, the Department of Commerce, the Federal Communications Commission, the Federal Trade Commission, and the White House AI and crypto czar, working in concert with various other parts of the executive branch that are concerned with AI policy. The order calls on the Attorney General to create an AI litigation task force to challenge state laws, including "on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful in the Attorney General's judgment." It calls on the Secretary of Commerce to work with other White House officials to develop a hit list of onerous state laws that conflict with the pursuit of AI supremacy. Of particular concern to the administration are laws that "require AI models to alter their truthful outputs or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution." Trump indicated he had spoken to the leaders of big tech firms about the order.

Silicon Valley has lobbied intensely for a moratorium on the enforcement of state laws on AI, but the idea has faced stiff opposition, including from key Republicans.

President Donald Trump:

So this centralizes it, and it's something which the people behind me, the very distinguished people, all... But Tim Cook of Apple just left and spoke to all of the big companies. Great companies. And they won't be able to do this. This will not be successful unless they have one source of approval or disapproval, frankly. You could have disapproval too, but it's got to be one source. They can't go to 50 different sources.

Justin Hendrix:

David Sacks, the White House AI and crypto czar who, as the New York Times reported, is simultaneously working in Silicon Valley as an investor, pointed out one significant change in the final version of the order from the draft: a carve-out for laws focused on kids' safety.

David Sacks:

This EO gives your administration tools to push back on the most onerous and excessive state regulations. We're not going to push back on all of them. For example, kid safety, we're going to protect. We're not pushing back on that, but we're going to push back on the most onerous examples of state regulations, sir.

Justin Hendrix:

The final signed version of the order says that the legislative recommendations shall not propose preempting otherwise lawful state AI laws that relate to child safety protections, AI compute and data center infrastructure, state government procurement and use of AI, and other topics as shall be determined. Senator Ted Cruz, a Republican from Texas and a champion of the failed moratorium effort, claimed the executive order was crucial for American competitiveness and to unlock the economic potential of AI.

Sen. Ted Cruz (R-TX):

You look back to the 1990s, there was a similar inflection point with the beginning of the internet, the dawn of the internet. And Bill Clinton was president at the time. He signed an executive order, just like you're doing, that put into law a light-touch regulatory approach to the internet. And the result was incredible economic growth and jobs in the United States. At the same time, the European Union took a very heavy-handed regulatory approach. Here's an amazing statistic, Mr. President. In 1993, the US economy and Europe's economy were virtually identical in size. Today, America's economy is more than 50% larger than Europe's, and the two drivers of that are tech and the shale revolution. It transformed this country, and AI is the same thing. It's a race. And if China wins the race, or whoever wins, the values of that country will affect all of AI.

Justin Hendrix:

Some analysts say US economic growth would be negligible without the AI investment boom, pointing to headwinds ranging from high tariffs to rising debt and sticky inflation. The IMF's chief economist recently noted the investment surge in AI has helped the US avoid what might otherwise have been a sharp slowdown. Trump spoke specifically about the role of the AI investment boom in driving growth, suggesting just how reliant the economy is on data center investments in particular.

President Donald Trump:

And we also know that a big part of our economy, it could be 50, 60% of our economy going forward for a period of time, at least, especially during this startup, is AI and AI-based. We have trillions of dollars of construction going on, and that construction would stop, or certainly a lot of it would be halted.

Justin Hendrix:

With so much hinging on whether AI delivers, will this executive order succeed in giving the president the singular power over AI regulation that he believes is necessary? Today's guest says no. He's a legal scholar who focuses on information and communications law and policy.

Olivier Sylvain:

My name is Olivier Sylvain, professor of law at Fordham Law School and a senior policy research fellow at the Knight First Amendment Institute at Columbia University.

Justin Hendrix:

Olivier, I appreciate you joining me from an airport lounge, it should be said. So should a listener hear a little bit of noise, they'll just understand that you are on the move and have taken the time to talk to us a little bit today about Trump's AI executive order and some of the ideas that you have shared already on Tech Policy Press under the title "Why Trump's AI EO Will be DOA in Court." Let's just maybe step back for a second and, for the purposes of setting the stage, from your understanding, what types of state laws does the Trump administration appear to want to target with this executive order?

Olivier Sylvain:

Well, the president being the shrewd politician that he is, particularly with the constituencies that support him, his front foot has been to talk about DEI and woke AI, that is, concerns about coastal elites suppressing the truth in service of their own woke agenda. So there are a variety of states that have sought to protect against bias in AI. And this is not a controversial development in AI's recent history. Much of the AI-powered services that we rely on are not inevitably free of biases that are otherwise extant in the world. Sometimes they exacerbate them. So people have been concerned about bias in AI. The president has been, as you know, wanting to get rid of any DEI speak or any DEI-related things in government, and this is one of the targets.

Others, we can talk a little bit about others if you'd like, but I do think that he has allies that are worried about aggressive regulation of AI-powered pricing algorithms. These, too, by the way, are potentially harmful for consumers, but not just on the basis of protected categories like race or gender. This is just circumstances like your Uber app or even RealPage's price-setting algorithm for landlords. There's been a lot of research about how it's harmed consumers. So I can't be in the mind, and I don't really want to be in the mind of the Trump administration, but I have to assume that these are the kinds of things that worry them.

Justin Hendrix:

From your perspective, you lay out pretty clearly that you think that the president's not as powerful as he imagines in this case, that there may be reasons that this executive order will fail in court. What, at a high level, is your argument? What do you see as missing in this executive order, or erroneous about its construction or the basis for it, that makes it likely to fail in court?

Olivier Sylvain:

Executive orders are not legally binding on anybody other than the agencies to which they are addressed. You can go back 100 years, look at an executive order, and see that it's addressed to federal officials. So that's an important thing. In spite of the pomp that this president likes to surround himself with in the White House when he does these Oval Office signings, he hasn't done anything other than direct his officials to do something. Now, the piece I wrote focuses on the charges to the Department of Justice and the Federal Trade Commission, but there are also those involving the Department of Commerce and other agencies. The reason I focus on those is because those, I think, are the ones that are most directly addressed to the states, or that most directly target state efforts. My argument is that historically, in the absence of a clear delegation from Congress to an agency, the ones that are actually responsible for implementing law, courts have pushed back absent a clear statement.

The great, interesting, and important fact in this circumstance is that Congress a couple of times already this year has declined, has been unable to muster majorities for AI preemption laws. This is the only reason the president is now turning to this executive order strategy, because Congress hasn't been able to do it. And this is an important point, right? This is to say Congress has failed to do the kind of delegation or to impose the kind of obligations that the agencies need in order to go after the states. That's the bottom line argument. We can be a little more specific about it, but I think that the key problem for this administration is the lack of clear congressional delegation, particularly in the wake of failures to amend or include any preemption language on the federal side.

Justin Hendrix:

So you write that federal agencies like the DOJ and FTC cannot encroach on lawful state regulations without a clear delegation from Congress. And you point to some interesting precedent. What does Gonzales v. Oregon have to do with all this?

Olivier Sylvain:

Yeah. Gonzales v. Oregon is not a tech case as such, but it's useful for thinking about the power of a federal agency to intrude on a state prerogative or regulation in the absence of congressional delegation. And the mechanism for it is very much the kind of thing that is at work in the executive order. And let me just say a word about what the executive order does. The executive order tells the Department of Justice to build up the task force, but really to sue the states for violating the interstate commerce provision in the Constitution. The Federal Trade Commission is charged with publishing a policy statement that says that the states can't regulate or intrude on truthful AI. Okay. Just to set the stage, the Gonzales v. Oregon case, as I said, is not a tech case. It's a case about physician-assisted suicide. Twenty-five years ago, more than that, three decades ago, the citizens of Oregon passed a ballot initiative, by a small margin, that would allow physicians to prescribe and actually give a patient a fatal dose of medication if that patient is of sane mind and has an incurable disease.

Ashcroft, the Attorney General at the time, was a staunch opponent of such things. He promulgated an interpretive rule, which, by the way, is very much like a policy statement. It's not binding on anybody. It just came out of the Attorney General's office. And the command was that any physicians that use these kinds of medicines would be in violation and could be sanctioned somehow or denied a license. Physicians, pharmacists, and patients brought a case saying that the action was unlawful, that there was no authority for it, and that Congress had failed to delegate that authority. That's exactly what the Supreme Court said. Now, there are a couple of other twists and turns in this case that are relevant, actually, but that I don't talk about. But the bottom line is that in the absence of a statute that said the AG may, under the Controlled Substances Act, regulate the use of physician-assisted suicide, the interpretive rule must fail; it was unlawful.

Justin Hendrix:

You say that there is something to the argument that the local benefits of state AI laws don't outweigh the burdens on interstate commerce. We're seeing various industry groups make this interstate commerce argument. There was a policy paper put out by policy folks at Andreessen Horowitz on this subject recently. What is relevant about the argument, or what gives you pause to say that there is something here?

Olivier Sylvain:

There's no question that AI is a phenomenon that is interstate, at least because it is appended to the internet. But that gets to why I'm not as worried, because so many of the things we do in public life today are contingent on the internet. AI is just one recent manifestation or application of it. And the reason I have pause is because the strongest argument is that, say, California's regulation of AI and transparency requirements would intrude on other states to the extent the AI-powered applications are available in those other states. That's as far as it goes. I don't think it's a winning argument, however, because in today's world, everything is functionally tied to this transnational communications infrastructure. But more than that, geofencing technologies have enabled companies to specify and target their applications based on what the state laws provide. So I'm not as worried about it, but it is something that will draw more and more attention.

And can I say one more thing about this? The great irony is that the architects of this federalist concept, which pushes against broad application of the interstate commerce clause, are the Federalist Society and, presumably, people we associate with this administration.

Justin Hendrix:

One of the things you talk about a little bit in this is the idea of the FTC's deception authority and the idea that AI outputs don't fit cleanly within it all the time. What kind of federal authority would be appropriate to regulate AI harms if in fact Congress goes back to the drawing board on this?

Olivier Sylvain:

For me, the most interesting kinds of regulation are addressed to the development of AI, and the kinds of things you see in the EU, and I want to be careful with that. This is in opposition to what's happening in the US at the Office of Tech. We can at least look to California, and that is imposing on developers of AI services and applications obligations to attend to the risks that their services pose while they're developing the application and even after it is out in the market. And this isn't necessarily a firm legal obligation, for what it's worth. This is just imposing the burden to attend to potential risks. You can see that in the long run, failure to do so would amount to a violation of law. One of the things you wouldn't necessarily focus on is outputs, just whether the practice, the design, the development of the services were alert to potential risks.

And if a company does that, by the way, no matter what the potential harms or risks are, they're on safer ground. That's the kind of thing that would be far more interesting. When we see this happening in some of the states, California in particular, it's a risk-based obligation imposed on companies. Just to underscore, focusing on outputs is not a bad thing. So in the context of discrimination, we might be worried about the disparate impact of the use of certain kinds of AI-powered services and products, facial recognition technology, for example. And that is output oriented. It's based on an impact analysis. But that, by the way, is what historically civil rights law has allowed, even if it was contested under this administration. I'm not saying we don't look at outputs or we shouldn't, but that there are a variety of other things that are probably far more important to attend to: the structural harms that AI or AI-powered services may cause.

Justin Hendrix:

I want to ask you about a couple of things that I've wondered about having read this order that you don't directly address in your post on Tech Policy Press. One is the idea that this order, let's just say it were carried out: this task force gets created, the DOJ receives from the task force a basket of state laws that it finds objectionable, and it goes and starts to pursue the states. Even in the announcement, Trump made a jab at Illinois Governor Pritzker. This just seems like a roadmap to political enforcement against states that the administration doesn't like, or specific laws that have rubbed it or its allies the wrong way. That seems to me to be one of the key political problems with this order.

Olivier Sylvain:

It's a great observation, Justin. You're going to be much more of an expert on the political consequences or the potential here. I wasn't lucky enough to watch the Oval Office event, so I'm glad that you gave me an update on what happened there. I think you're right. People talk of the weaponization of law in ways that, at this point, I think we're all numb to that word. But the president has made no secret, and this is the bizarre thing, this is a president that has made no secret that he wants to go after his political foes. I laugh because it's absolutely outrageous that this is what a president could comfortably say. It's a laughter out of nervousness.

Now, the interesting thing, though, in spite of what your anticipation is and mine is as well, that is, that this could be politicized, is that the states that are worried about harmful AI are red and blue. Just a couple of days ago, Ron DeSantis wanted to join the bandwagon and has talked about a Florida-based AI Bill of Rights. This is kind of incredible, given that this administration, the Trump administration, within a month of taking office repealed the Biden-era AI Bill of Rights. So this is a red and blue problem. This is not just a problem that is occurring in Illinois and other blue states. But I think you're right that this is just an opening for the president to politically harass his political enemies.

Justin Hendrix:

There are a lot of calls, even today, following the issuing of this order, that Congress needs to step up. Congress needs to go ahead and take the baton here. Ted Cruz stood by the president last night at the signing ceremony and talked about the importance of essentially taking action. And the idea here, of course, is that there'll be legislative proposals that are put forward under the order. I don't know. If Congress can't get its act together, what do you see as the most serious risk of preventing the states from regulating AI?

Olivier Sylvain:

Well, I do think that these legal actions are going to occupy states in ways that are going to detract and create political fodder. I worry a lot about those. The other things that appear in the executive order that don't appear in the piece, but that could matter, are threats to funding. One of the sections of the executive order asks the Department of Commerce to evaluate which states are the most troubling, and all agencies are charged with thinking about using the budgeting power that they have to go after the states. I don't know how this is going to shake out, but you have seen over the past year the way in which the Supreme Court has equivocated, not yet plainly said, how much power the president has to limit or cut back on funding that Congress has already approved. I just assumed that Congress has the power of the purse and we have to honor that, but it's hard to know. So that's another potential threat, apart from the direct legal assaults that DOJ or the FTC may launch.

Justin Hendrix:

In many ways, it feels like with AI, we're just seeing the same cycle again that we saw perhaps with the internet. There were early efforts that Ted Cruz mentioned even at the signing ceremony yesterday. He mentioned the tax exemption on internet services. Others have pointed to Section 230 as being a shield from liability for tech firms. So generally there's this idea that we have to clear the obstacles for new technology. We have to create an open playing field to see where the innovation will take us and where it will take the economy. I don't know. What do you make of that kind of general idea here?

Olivier Sylvain:

I think it's very troubling and worrisome, and it has traction. There's a long history in this country of devotion to the mantra of innovation. People assume that the US's global positioning is largely because of a lack of restraint on its industry. And there may be something to that. There may be something to the fact that we don't have as many barriers to corporate and commercial entrepreneurship, so that maybe companies can discover interesting opportunities for consumers. The thing is, you just have to open a social media account, or know anything about what our social media environment or much of this information environment looks like now, to know what it looks like when there is no formidable regulation of tech.

In the 1990s, the tax law that Cruz mentioned, and Section 230, which was passed in 1996, these are laws that Congress enacted for the purposes of ensuring innovation. And literally there's language in 230's precatory provisions, let's say, about preserving the unfettered market, and the idea that any federal or state regulation would be unhealthy for free speech and innovation. These are the very same arguments we're seeing here. The added leg on which people today stand, the Andreessens and the Thiels and the big tech folks, is global competitiveness; Sam Altman worried about competition with China. This is the additional argument that's coming in. I think one question to ask, apart from the way in which an unrestrained market operates and what we see happening in the social media context, is who are the beneficiaries of this argument about China? Clearly these companies are the most immediate beneficiaries. And to me, this underscores the cratocracy at work. I didn't use that term lightly, but it's hard to describe it as anything else. To use another term to describe what's going on here, these are companies that have really supported the president and he's returning the favor.

Justin Hendrix:

Olivier, I'm looking forward to having you on the podcast again in 2026. I understand you have a book out. Can you tell the listener just what to expect and what to look out for, and can they pre-order it now?

Olivier Sylvain:

You are very generous to ask, Justin. Reclaim the Internet is a book that I've been working on for the past year. Columbia Global Reports is publishing it, and I'm really proud of it, actually. It's a synthesis of things I've argued. It's coming out in March. And the argument is much like the thing we've been discussing. It's a concern about the laissez-faire approach to these companies. I focus in particular on First Amendment doctrine and Section 230, but I do talk about these debates as they relate to emerging AI applications, and that we should be as vigilant as ever given what we know has happened to the information environment in the past 30 years.

Justin Hendrix:

I promise to have you back on the podcast when that book is out. And hopefully I'll catch you at a time when you are not literally about to board a plane, but I will say to you, thank you very much for taking the time amidst your travel to talk to us and to share these ideas with our listeners.

Olivier Sylvain:

Justin, you're really kind to have me join you, and I'm also grateful for the great editing that Cristiano helped me with. Happy to do this again at any point.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...

Related

Perspective
Why Trump’s AI EO Will be DOA in Court (December 12, 2025)
News
Trump Signs Executive Order To Combat State AI Regulation (December 12, 2025)
