The Trump AI Action Plan is Deregulation Framed as Innovation
J.B. Branch, Ilana Beller, Tyson Slocum / Jul 30, 2025
US President Donald Trump arrives to the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House photo by Joyce N. Boghosian)
On July 23, the Trump administration released a sweeping set of AI directives that will shape the federal government’s approach to artificial intelligence for years to come. Framed as a plan to “accelerate American leadership” in AI, the new AI Action Plan and accompanying executive orders signal a decisive federal pivot away from oversight and toward aggressive deregulation. The plan includes provisions that may undermine state-level laws, weaken agency enforcement powers, politicize technical standards, and fast-track the construction of data centers on public lands—all in the name of innovation.
At a time when generative AI systems are fueling election disinformation, voice cloning scams, and the creation of synthetic child sexual abuse material, the Trump administration’s AI strategy appears to prioritize industry interests over public protection. While the Action Plan gestures toward workforce training, infrastructure investment, and open-source development, its proposed mechanisms are vague and vulnerable to political manipulation. Many provisions reflect a broader effort to weaken independent regulatory authority and infuse partisan ideology into what should be neutral technical standards—setting a troubling precedent.
Although some industry stakeholders may view these moves as a victory, that perspective is short-sighted. By seeking to rapidly dismantle existing safeguards from the Biden administration and replace them with ideologically driven directives, the plan invites legal challenges, regulatory instability, and policy whiplash. Simply put, industries require stable and predictable regulation. If a future administration reverses course, the result could be years of litigation, wasted taxpayer resources, and inconsistent governance that fails innovators and the public alike, and that ironically works against America “winning the AI arms race,” as the AI Action Plan puts it.
Moratorium lite
The administration’s AI Action Plan establishes a series of directives that collectively tilt federal power toward industry self-regulation while diminishing the role of states, independent agencies, and public safeguards. The Action Plan (page 3) proposes that “the Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations.” The term “burdensome” is vague and undefined, leaving states that regulate AI vulnerable to the whims of the federal government.
This comes on the heels of the proposed federal moratorium on state AI laws that was put forth in the recent tax budget reconciliation bill. State lawmakers from both sides of the aisle, including numerous governors, attorneys general, and state legislators from across the country, came out in strong opposition to that moratorium. They raised concerns about states’ rights, the removal of thoughtful protections that state lawmakers had put in place for their constituents, and the federal government’s failure to put protections in place thus far. Ultimately, the moratorium was removed from the budget bill via an amendment sponsored by Sen. Marsha Blackburn (R-TN) and supported overwhelmingly in the Senate by a 99 to 1 vote.
Interestingly, the Action Plan does note that the federal government should “not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” Again, the term “unduly restrictive” is vague and undefined. However, it is worth noting the acknowledgement of the significant concerns raised by state lawmakers around any federal preemption of state AI regulations.
The Action Plan also empowers the Federal Communications Commission to review whether state AI laws interfere with its authority under the Communications Act of 1934. The idea is that the FCC may be able to regulate the AI space and thereby effectively preempt state law. That said, Republicans have long opposed the idea of the FCC having authority over the internet, and Supreme Court decisions have historically weighed heavily against that claim as well.
In his speech launching the Action Plan, Trump said “We need one commonsense federal standard that supersedes all states.” However, Congress has failed to pass almost anything on AI thus far. Meanwhile, state lawmakers have done thoughtful work to put common sense protections in place to address the real and present harms experienced by their constituents.
Scorch the Earth for AI
A key focus of Trump’s deregulatory push is reflected in Pillar II of his AI plan, which describes a litany of evasions of federal environmental laws aimed at expediting the permitting of AI data centers and associated energy infrastructure: categorical exclusions under the National Environmental Policy Act (NEPA), expanded eligibility for expedited review under the Fixing America’s Surface Transportation Act, a uniform Clean Water Act Section 404 permit review process, and pledged end-runs around the Clean Air Act and other public health and safety laws.
The plan calls for using federal land to site data centers and energy infrastructure, with Energy Secretary Wright publicly naming four sites (Idaho National Lab, Oak Ridge, Paducah, and Savannah River) despite little to no evidence that those sites already feature the necessary energy infrastructure. Accessing land is not the primary challenge for data center siting; whether energy infrastructure exists is. The plan’s pronouncements are light on details of how these “streamlined” permitting procedures will actually be carried out. That is because the primary means by which the President will likely promote data centers and associated fossil fuel infrastructure is the abuse of emergency authorities, which we have previously detailed.

Trump intends to designate AI data centers as national security assets, allowing the federal government to override federal, state, and local law, including zoning, to force data centers and fossil fuel infrastructure onto communities. Trump may not use emergency authorities to barge into, say, a safely Republican-controlled state like Alabama; he is more likely to reserve such aggressive authority for Democratic-led states. While advocacy organizations and watchdog groups such as Public Citizen will be able to mount legal challenges to these emergency declarations (as we have for his abuse of emergency authorities for coal and natural gas power plants), construction and operation will be allowed to proceed while the legal challenges drag on. And while Trump will continue to press Congress for “permitting reform” and other legislative tools to promote corporate interests, he is not dependent on additional congressional authority as long as he can continue to abuse emergency powers unfettered.
The Executive Order on “Accelerating Federal Permitting of Data Center Infrastructure” also relies on an emergency declaration to execute, but it contains notable additions. In Section 2, the definitions of facilities that qualify for Trump’s assistance implicitly exclude wind and solar power: a data center proposed to be powered by wind and solar will not qualify for expedited treatment, whereas a fossil fuel-powered facility would. Section 3 authorizes direct federal financial support for qualifying AI data center projects, including loan guarantees and offtake agreements. Trump has already received a $1 billion fossil fuel slush fund in appropriations under the Defense Production Act, which authorizes the President to dole out the public’s cash to address any national security or national defense priority; data centers with fossil fuel infrastructure would qualify. In addition, Trump’s Department of Defense has an Office of Strategic Capital that acts like DOE’s Loan Programs Office, with an additional $200 billion in funding authority that can be used to subsidize AI data center projects.
The AI Action Plan (at printed page 16) also suggests a politicized role for the Federal Energy Regulatory Commission (FERC): to move primarily gas-fired generation to the front of the queue, and to reorder market pricing to advantage gas and coal generation. FERC is historically a bipartisan, independent energy market regulator that would rebuff such crude and unlawful efforts, but Trump has declared that independent agencies cannot exist, and he has two pending nominees who would likely convert FERC into a political arm of the White House. Controlling FERC with compliant loyalists would be necessary for Trump to twist market rules to prioritize data center loads powered by fossil fuels.
The less scrutiny on industry the better
The Federal Trade Commission (FTC), which under the Biden administration was active in its efforts to police AI firms and to rein in anticompetitive practices, is instructed to stand down efforts that may impede the AI industry. The plan (page 3) calls for a review of existing FTC investigations to ensure they do not “advance theories of liability that unduly burden AI innovation.” This provision, while subtle, represents a potential erosion of the FTC’s ability to protect consumers and enforce fair business practices—especially as AI is increasingly embedded in advertising, finance, and e-commerce.
Another ideological shift: a directive that AI “objectively reflect truth rather than social engineering agendas.” In the next bullet, the plan (page 4) instructs the National Institute of Standards and Technology (NIST) to revise its AI Risk Management Framework to remove references to “misinformation, DEI, and climate change.” By excluding these domains from federal risk assessments, the administration appears to be codifying political preferences over scientifically grounded evaluations—at a time when synthetic media and algorithmic bias are pressing policy concerns.
While the plan’s endorsement of open-source AI is notable, it stops short of defining what a representative stakeholder convening would look like. Without a clear commitment to including civil society voices—particularly those concerned with algorithmic discrimination, safety, and accountability—there is little assurance that such convenings will offer meaningful checks on industry influence.
The Action Plan also outlines a push to deploy AI across critical sectors, notably health care. The plan (page 5) states that the health care industry has been “slow to adopt due to a variety of factors,” including “distrust and lack of understanding.” This framing overlooks legitimate regulatory constraints, such as HIPAA compliance and the risks of incorporating AI systems that may mishandle sensitive health data or train on protected medical information without consent.
Elsewhere, the plan (page 5) encourages the use of “regulatory sandboxes” that allow for rapid testing and deployment of AI tools, overseen by agencies like the FDA and SEC. But without accompanying requirements for public disclosure, harm mitigation, or civil liability, these environments may function more as exemptions than safeguards.
The administration also touts a “worker-first AI agenda” that includes tax-free training reimbursements and workforce development programs (page 6). But these initiatives echo past efforts, particularly in post-industrial communities, that failed to deliver stable employment after promising retraining. The current plan provides little detail about job placement outcomes or protections for workers displaced by AI-driven automation. Moreover, it is difficult to believe that AI training will provide secure employment for those who lose jobs to automation when the tech industry is carrying out massive layoffs of people already trained in computer science and AI.
Finally, the Action Plan includes a welcome acknowledgment of malicious deepfakes and synthetic media as legal and societal threats (page 12). But the concern rings hollow: mere days before the AI Action Plan was released, President Trump shared on social media an AI-generated deepfake of President Obama being arrested.
Ideological control in the name of neutrality
In tandem with the Action Plan, the Trump administration issued an Executive Order called “Preventing Woke AI in the Federal Government” that directs federal agencies to procure only AI models that align with two “Unbiased AI Principles”: “truth-seeking” and “ideological neutrality.” But the accompanying language reveals a broader agenda to reshape public-sector AI procurement around partisan definitions of accuracy and objectivity.
The order explicitly identifies “diversity, equity, and inclusion” (DEI) as an “existential threat to reliable AI,” listing concepts such as critical race theory, intersectionality, and systemic racism as distortive ideologies. This framing conflates efforts to ensure fairness and representation in AI with political indoctrination—while ignoring well-documented harms such as algorithmic bias in hiring, criminal justice, and content moderation systems.
Rather than acknowledge the complex tradeoffs inherent in building safe and inclusive AI systems, the Executive Order demands that agencies reject models that incorporate DEI-related training data or outputs. It directs the Office of Management and Budget (OMB) to issue guidance requiring contractors to disclose “ideological judgments,” but stops short of requiring transparency around model weights or training data—raising critical questions about how objectivity will be defined or enforced.
The irony is difficult to ignore: in the name of ideological neutrality, the federal government is asserting control over the political content of AI outputs—an approach that risks chilling free expression while undermining scientific and technical rigor. This contradiction is made especially stark by the administration’s continued support for models like Grok, which recently referred to itself as “Mecha Hitler” and disseminated antisemitic hate speech. Despite these alarming outputs, Grok reportedly secured a substantial federal defense contract.
This reveals a selective application of the administration’s ideological concerns: models that perpetuate white supremacist narratives would appear to pass muster, while those trained to recognize systemic racism or gender inequality are deemed unfit for public use. The result is not neutrality, but a dangerous redefinition of it—one that privileges certain ideologies while disqualifying others under the guise of objectivity.
Conclusion
The Trump administration’s AI plan is not a visionary roadmap for the future. Instead it relies on the same deregulatory strategies that have worked to advance the corporate interest over the public interest for years. Its attacks on “woke AI” are ideological red meat that have no grounding in any AI or LLM literature. Rather than improving accuracy or objectivity, erasing DEI considerations could allow for biases and blind spots in the very products this plan aims to increasingly weave into our lives.
At the same time, the administration appears to be working hard to find backdoor means to undercut state-led AI regulation despite historic opposition. By specifically targeting state regulations that may “unduly burden” the AI industry, the administration is showing clear favoritism for the Big Tech lobby over the average American. It doubles down on this attack on local and state interests with giveaways for data center development and fast-tracked permitting meant to usurp local governments.
Disturbingly, for all the rhetoric about “winning the AI arms race” against China, this plan sells out the American people to Big Tech. It will reward a handful of large corporations in the fossil fuel and tech industries with subsidies, legal shields, and market dominance while offering no real regulatory oversight, worker protection, or democratic accountability to balance the scales. This is not a vision for all Americans. It is a vision for a select few billionaires. The American people deserve to be the rightful beneficiaries of AI progress. If we are serious about shaping an AI future that strengthens democracy, fairness, and resilience, as the AI Action Plan claims to be, then this plan is a step in the wrong direction.