Four Pages That Could Reshape American AI Policy
Neil Chilson / Apr 8, 2026
Neil Chilson is the head of AI policy at the Abundance Institute.

President Donald Trump delivers remarks at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House photo by Joyce N. Boghosian)
Late last month the White House released its National Policy Framework for Artificial Intelligence. It’s four pages long. Critics immediately dismissed it as empty. They’re wrong, and missing the point.
In the early stages of complex business deals, parties often share their “term sheets,” outlining what they want. This framework is President Donald Trump’s term sheet. It weighs in on the most consequential AI negotiations Congress has ever had.
The Trump administration has been building toward this moment. The President set a pro-innovation AI tone in his first term. At the start of his second, he rejected the Biden administration’s fear-driven approach, replacing it with an AI Action Plan and launching the Genesis Mission. The result: nearly $3 trillion in AI and related tech investment and continued American dominance in AI model development. The framework is the next logical step—translating executive vision into legislation.
The framework is substantive because it is specific. Each of its seven sections maps onto a live policy fight in Congress, where dozens of bills are in play. When the President takes a position on child safety, he’s effectively weighing in on KOSA, COPPA 2.0, and the KIDS Act. When he addresses intellectual property, he’s speaking to the NO FAKES Act and the TRAIN Act. When he calls for preemption of state AI laws, he’s staking out ground in a debate that will define whether America has one AI market or fifty.
The President is charting a path through an active legislative landscape. If you haven’t tracked committee markups and staff negotiations, it’s easy to miss. But nearly every line is doing real work.
The framework’s critics have fallen into several traps. Some claim the framework is blanket preemption, stripping states of power while offering nothing in return. That ignores six of the document’s seven sections. The framework endorses strong protections for children and empowering parents. It calls for safeguarding communities from AI-enabled fraud and protecting ratepayers from data center energy costs. It addresses intellectual property, free speech, workforce development, and small-business access to AI tools.
Far from leaving Americans unprotected, Trump has laid out a more comprehensive AI policy agenda than any previous president. And on preemption itself, the framework carves out significant room for states—preserving their authority to enforce generally applicable laws, control zoning for AI infrastructure, and govern their own use of AI for public services. That’s far more federalism-friendly than the Constitution requires.
Another common objection is that this is a partisan exercise. But both parties care about deepfakes of children, seniors targeted by AI-powered scams, and small-business owners who need help adopting AI tools. Protecting kids, workers, consumers, and ratepayers isn’t partisan. Both parties have active bills on these issues.
Perhaps the most confused critics call the framework “immunity” for AI companies and compare preemption to Section 230. This is exactly backward. Section 230 was tort reform. It limited indirect liability under existing legal theories because the courts were flooded with cases that threatened to kill the early internet.
The framework’s preemption provision addresses something quite different: state legislatures creating new causes of action and new regulatory duties that target AI developers for harms caused by third parties. The framework does not recommend immunizing AI companies from general tort law—in fact, it would specifically preserve the enforcement of generally applicable laws. The real concern is that a patchwork of novel state laws will redirect enforcement away from bad actors and toward deep pockets, while making it impossible for any company—large or small—to build AI products that comply with fifty different legal regimes simultaneously.
The point of preemption isn’t immunity; it is consistency. AI model development is inherently interstate: a model trained in one state is deployed in all of them. If California, Colorado, and North Carolina each impose different obligations on frontier developers, the most restrictive state’s rules could become the de facto national standard. That is regulatory overreach by a handful of state legislatures, imposed without any democratic mandate from the rest of the country.
A coalition of more than thirty organizations—spanning consumer groups, small-business advocates, taxpayer organizations, and technology policy centers—sent a letter to Congress endorsing the framework. Their message was simple: without a consistent national standard, the US risks ceding AI leadership to global competitors, while leaving most Americans on the sidelines of the AI economy.
The critics who want this effort to fail should ask themselves what the alternative looks like. More state-by-state fragmentation. More regulatory uncertainty. More advantage handed to China. And more Americans who are unable to access the tools that millions are already using to answer health and health insurance questions, learn faster, and make new scientific discoveries.
Congress has done an enormous amount of prep work on AI policy. The White House has now given members a plan to organize that work into a federal law. These four pages won’t generate that law themselves. But the framework could set the direction of federal AI policy for a generation—if Congress musters the will to follow the President’s lead.