America’s AI Governance Crisis Is a Democracy Crisis
Laura MacCleery / Mar 24, 2026
President Donald Trump and members of Congress at the State of the Union address on February 24, 2026, on the House floor of the US Capitol in Washington, D.C. (Official White House photo by Andrea Hanks)
On Friday, the White House released its national framework for artificial intelligence, which urges Congress to preempt state laws, avoid any new regulatory body, and shield developers from liability. It arrives amidst a nearly 300-page draft bill from Sen. Marsha Blackburn (R-Tenn.) and expected legislation from Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, who has said he is working on a bill on preemption.
The conventional reading has been that AI regulation is hard, technology moves too fast, lawmakers lack technical expertise, and encouraging innovation requires doing nothing to get in the way. That reading is wrong. What we are witnessing is not just a failure to govern AI. It is the predictable outcome of a decades-long project to dismantle the democratic infrastructure that would make governance possible.
The industry and its allies have been preparing this ground for decades, but it’s now inescapable. Colorado’s landmark AI Act, the first comprehensive state law of its kind, was just stripped down to the studs. Gone are the duty of care—a common standard in product liability—the ban on algorithmic discrimination, and the impact assessments. More than 150 industry lobbyists apparently worked to gut the new law. State Sen. Julie Gonzales (D-34) said on the Senate floor that “[a]ll 35 of us in this building know that we too have witnessed the stunning brunt of AI leverage.”
The White House federal policy follows a preemption playbook from a similar attempt in Congress last year to bar states from regulating AI. But it puts little meat on the bones while attempting to preempt all state laws on AI, for now and forever. Given the widespread use of algorithms in virtually every industry, it would also blow an AI-sized hole in state laws that protect basic consumer rights on everything from insurance to housing and financial services.
Although it includes limited language on deepfakes and scams, the framework lacks a privacy standard or limits on surveillance—in fact, agencies would be required to make data available for AI. It tells Congress not to create a new regulator, and to rely instead on sectoral rules and “industry-led standards.” On copyright, it declares that training AI on the work of others is legal and that Congress should stay away. It includes unconstitutionally vague bromides warning against “partisan or ideological agendas.” On child safety, it calls for parental controls while warning against “open-ended liability” for companies.
In short, this is preemption as a weapon. Just as the tobacco industry and gun lobby did, the play is usually to argue that a patchwork of state laws creates uncertainty, while ensuring a substitute federal law does little or nothing. Notably, there’s not even an attempt at meaningful legal standards here. More absence than presence, it’s just a point-blank power grab: mostly empty space so the tech industry can keep doing whatever it wants.
What we are witnessing, then, is not merely an attempt to lock in our failure to govern an extremely powerful new technology, although it certainly is that. In full view of the frightening prospect of totally unregulated AI, we must grapple with the underlying condition that our democratic capacity has been eviscerated.
It is true that our captured politics has so far failed to regulate tech. But it’s also the case that the accumulating power of the industry gave rise to a billionaire class that wields global power and can now easily overwhelm our political system. We have never required these companies, not really, to even be in dialogue with the U.S. government. After profiting obscenely from this neglect, they apparently now have enough power to make the ultimate flex: outright displacement of government.
I have spent 25 years working to develop sensible regulatory safeguards in areas like auto and food safety, campaign finance, data privacy, and AI. At its best, the work was about making sure ordinary people had a say, rather than allowing powerful industries to make choices for them. At Consumer Reports, I testified before Congress on the limits of self-driving cars at a time when major tech players were making false promises about full autonomy. A decade later, self-driving technology still can't reliably navigate a city street, but the playbook worked: promises of imminent transformation bought years of deregulation. Now the same cycle repeats in the so-called race to AGI.
In these efforts, I was a reluctant witness as concentrated economic interests got their way, much of the time. So this story is not new. It is, however, the latest, most brazen expression yet of a winner-take-all attempt to replace democratic accountability with the implacable indifference associated with the sheer exercise of power.
Collapse of the rules on money in politics is an underappreciated but important factor in how we got here. Few today likely recall that right-wing provocateur Steve Bannon made several movies for Citizens United, the organization behind the Supreme Court case that unleashed unaccountable spending on elections. As E.J. Dionne wrote, “A recent New York Times analysis found that 300 billionaires and their immediate family members (out of our population of roughly 340 million) gave 19 percent of all contributions—more than $3 billion—in the 2024 federal elections, either directly or through PACs.”
That decision was an essential step in removing constraints on political spending and bringing about our current age of dark money, in which politicians must always look over their shoulder and billionaires can run ads whenever they like. It also decentered political parties, letting dark horse candidates like Trump gain steam.
Once the constraints evaporated, our entire political landscape shifted. Public financing became inviable. Both parties became structurally dependent on the same donor class. And the range of policies that either party could pursue narrowed to what that class would tolerate. AI rules fell out of favor before the conversation even started. The first generation of tech platforms was also shielded from liability by Section 230, creating, over time, a nearly untouchable technology sector.
The Biden Administration understood AI governance was necessary but chose executive action over legislation given Congressional gridlock. While its policy was a decent start, it was written in sand. And it consumed political capital and time that could have been spent on legislation, giving Congress cover to do nothing but write bland reports.
The bugaboo of competition with Chinese AI is another perpetual excuse. Privacy and surveillance rules, even basic accountability checks, are all reframed as a disadvantage, and guardrails continue to erode. Now that the Administration has declared an AI company that insisted on Constitutional constraints a supply chain risk, any alleged democratic advantage is up in smoke, along with our alleged principles.
The AI industry has also poured millions into positioning itself with this Administration. Yet consumers of this tech have no comparable seat at the table, much less a peek at what is known about us. Information about how people interact with these technologies could transform public understanding if it ever saw the light of day.
Social media and AI companies know exactly how algorithms extract attention, exploit emotional needs, affect teens’ self-image, and impact our decisions, yet have never before been required to divulge it. The Meta trial is surfacing the kind of evidence tobacco litigation exposed a generation ago. One internal memo acknowledged that “our product exploits weaknesses in the human psychology to promote product engagement and time spent.” When a study showed users who paused Facebook for a week felt less depressed and anxious, Meta killed it. An employee asked rhetorically if it could look like “tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves.”
After the Ford-Firestone tire crisis, I worked with Congress to create an early warning database that required companies to report deaths and injuries so regulators and the public could see patterns before they became catastrophes. We’ve allowed the tech industry to conceal essential information about our lives and the technologies that infuse them, rather than using it to prevent harm to all of us.
Accountable governance would start where any serious regulatory framework does: by countering the specific vulnerabilities of people. In the 1960s, an engineer named William Haddon upended how we think about car crashes. Before Haddon, safety was treated by the auto industry as a driver behavior problem: if people crashed, they were just the “nut behind the wheel.”
Haddon reframed the problem as physics. A crash transfers substantial energy to a human body, but the body is fragile. The task is not to prevent every crash, as some are inevitable, but to manage the forces so people survive. That understanding gave us seatbelts, crumple zones, airbags, and guardrails. It has saved millions of lives and prevented millions more serious injuries. It also made cars better, creating the American way of life on the road and powerful incentives for beneficial innovations.
AI governance needs a similar approach. It must start with understanding human vulnerabilities, and users should inform how it works. We already know the most pressing concerns: developmental effects on children whose sense of the world is being formed on devices. Risks from disinformation, surveillance and weaponization of data by government and data brokers. Threats to human creativity from data stolen without consent or compensation. Biased outcomes in employment, housing, medical care, and lending. Concentration of power in a few gigantic companies. Infrastructure that accelerates environmental cataclysm.
The point is to ensure we assign those risks and costs to industry, which can most easily address them, and open space for beneficial competition to create a race to the top. This means building governance as a process, rather than a set of static rules, and learning from the equivalent of crash tests by creating performance-based standards that drive improvements over time.
This would be far preferable to what we have today: an industry that understands us, and the harm it can do to us, in exquisite detail but has no obligation to tell anyone, and a shameless White House proposal to write the most powerful companies in the world a blank check to keep doing whatever they want.
This is not, fundamentally, a technology problem but a democratic one. Democracies are not—and cannot be—incompetent to understand how technology works and what it does to people. In the end, tech is just a very powerful product.
For too long, we have let the people who profit from it decide what it can do to the rest of us. The hard slog of creating accountability is the work ahead. But first, we must acknowledge the real issue, which is that, as our Founders well knew, absolute power corrupts absolutely.