Perspective

How Trump’s AI Executive Order Gets It Wrong on Civil Rights

Leah Frazier / Dec 19, 2025

US President Donald Trump speaks at the White House on Dec. 17. (White House)

The Trump administration’s executive order seeking to preempt state and local artificial intelligence regulation targets disparate impact liability, the civil rights doctrine that guards against seemingly neutral policies that disproportionately harm certain groups. The order’s interpretation rests on a distortion of how AI models work, of AI regulation, and of federal laws like the Federal Trade Commission Act, which, per the order, may conflict with state protections against algorithmic bias.

The convoluted and forced line of reasoning the order uses to call for preemption of such provisions suggests that its attack on civil rights protections is less about AI regulation than a vehicle for the president to expand his assault on disparate impact liability.

The order declares that action must be taken “to check the most onerous and excessive laws,” despite acknowledging the absence of a “national framework” that could conflict with state law. To manufacture such a conflict, it asserts that state AI regulation may mandate alteration of truthful AI outputs.

It specifically calls out the Colorado Artificial Intelligence Act, which it describes as “banning ‘algorithmic discrimination,’” warning that it “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” It calls upon the secretary of Commerce to publish an evaluation of state AI laws that conflict with the administration’s approach to AI, ordering the secretary to “identify state laws that require AI models to alter their truthful outputs.” And it directs the FTC to issue a policy statement explaining the circumstances under which “State laws that require alterations to the truthful outputs of AI models are preempted by the [FTC] Act’s prohibition on engaging in deceptive acts or practices affecting commerce.”

The order’s disingenuous and sweeping conception of AI model outputs as true or false ignores how AI has been used as a predictive tool in high-stakes decision-making.

While some AI outputs may be true or false, such as false positive matches from a facial recognition system, outputs from predictive models cannot be categorized that way. Many such models, used in life-changing contexts like criminal justice, healthcare, housing, finance and employment, make predictions about people: how likely someone is to miss a court appearance, reoffend, engage in other undesired conduct, default on a lease or loan, contract an illness or disease, or engage in self-harm.

Describing AI-generated risk scores or predictions as true or false in those settings buys into the misconception that AI accurately predicts the future, when what it actually does is draw on correlations. And, ironically for the administration, one of the clearest instances of algorithmic bias arose in a context where AI outputs could be categorized as true or false: facial recognition technology. A groundbreaking study examining three commercial gender classification systems showed those systems to have error rates of up to 35% for darker-skinned females, compared with no more than 0.8% for lighter-skinned males.
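To make the distinction concrete, the sketch below is a minimal, hypothetical illustration (not data from the study) of how per-subgroup error rates of that kind are computed: when labeled ground truth exists, as in gender classification, each output can be checked and errors tallied by group.

```python
from collections import defaultdict

# Hypothetical (subgroup, predicted_label, true_label) records -- illustration only.
results = [
    ("darker_skinned_female", "male", "female"),
    ("darker_skinned_female", "female", "female"),
    ("lighter_skinned_male", "male", "male"),
    ("lighter_skinned_male", "male", "male"),
]

errors, counts = defaultdict(int), defaultdict(int)
for subgroup, predicted, actual in results:
    counts[subgroup] += 1
    errors[subgroup] += int(predicted != actual)  # each output here is verifiably right or wrong

# Error rate per subgroup; disparities of the kind the study reported show up as gaps between groups.
error_rates = {g: errors[g] / counts[g] for g in counts}
print(error_rates)  # e.g., {'darker_skinned_female': 0.5, 'lighter_skinned_male': 0.0}
```

The same check is impossible for a predictive risk score at the moment it is generated, because the future event it predicts has not yet happened.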

Setting aside the order’s inaccurate generalization of AI model outputs as true or false, its framing that regulation requires changing or doctoring so-called truthful outputs erroneously assumes both that outputs are provably true or false when generated and that predictive models are accurate in the first place.

Research has shown that predictive models in hiring, lending and criminal justice often fail to accurately predict outcomes. Rather, they recognize patterns in data while failing to account for phenomena or events outside their inputs that can drastically affect the outcome they are tasked with predicting. And the notion that anti-bias laws require modification of truthful AI outputs assumes that the outputs could be proven true or false when generated, which in the case of predictive models often isn’t possible.

For example, an output categorizing someone as a high risk for defaulting on a lease can’t be proven true until that person defaults; an output categorizing someone as a low default risk can’t be proven true until after that person has paid their rent on time over a specified period of time. One thing that can be determined without predicting the future, however, is whether AI makes different risk predictions about similarly situated people belonging to different demographic groups.
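As a rough illustration of that last point, the sketch below (hypothetical names and data, not drawn from the order or any statute) compares how often a model flags similarly situated applicants from two demographic groups as high risk; no knowledge of who ultimately defaults is needed to see whether the rates diverge.

```python
from collections import defaultdict

# Hypothetical (group, flagged_high_risk) records for similarly situated applicants.
scored_applicants = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

flagged, total = defaultdict(int), defaultdict(int)
for group, high_risk in scored_applicants:
    total[group] += 1
    flagged[group] += int(high_risk)

# Share of each group flagged as high risk; a large gap signals potential disparate impact,
# without ever asking whether any individual prediction turns out to be "true."
rates = {g: flagged[g] / total[g] for g in total}
print(rates)  # e.g., {'group_a': 0.25, 'group_b': 0.75}
```

A comparison of that kind involves no alteration of any output, truthful or otherwise; it simply measures whether the system treats comparable people differently.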

The order also twists state AI regulation and how civil rights protections would function in the context of AI. Contrary to the false narrative that legal protections to curb AI bias “embed ideological bias” into AI regulation, legal guardrails against algorithmic discrimination do not require alteration of truthful outputs or production of false results.

For instance, the congressional Artificial Intelligence Civil Rights Act, which was recently re-introduced and touted by civil rights advocates, including our own organization, as a template for protecting against algorithmic bias, would in no way mandate changing AI outputs.

Nor does the Colorado Artificial Intelligence Act, the only state AI law flagged by the order as potentially requiring alteration of true outputs, contain such provisions. Rather, it calls for developers and deployers of defined “high-risk” systems “to use reasonable care to protect consumers from reasonably foreseeable risks of algorithmic discrimination.”

Additionally, it would require developers to disclose information to other developers or deployers about risks, limitations, testing and evaluation, and proper use and monitoring. It also would require deployers to disclose information to consumers about algorithmic systems making decisions about them and to provide an opportunity to appeal the decision and request human review.

Nowhere does it call for a developer or deployer of an AI system to doctor system outputs.

Another point bears mentioning: if requiring alteration of truthful AI model outputs is the conflict between state and federal law that justifies preemption, that theory wouldn’t justify preempting laws that apply to developers, who may not even have access to the outputs a system provides to deployers.

In addition to distorting state law to create a false conflict between AI civil rights protections and federal law, the order also distorts federal law. This is evident in the order’s directive to the FTC, which mischaracterizes AI regulations as requiring “alterations to truthful outputs” and then posits that such alterations would somehow violate the FTC Act’s prohibition against deception.

This line of reasoning misconceives the FTC’s deception authority and inverts its consumer protection mission, weaponizing federal law against consumers to roll back civil rights protections and strip people of the modest safeguards they have against Big Tech oligarchs.

The commission explained its deception authority more than four decades ago in its Policy Statement on Deception, stating that deception occurs when there is a material “representation, omission or practice that is likely to mislead the consumer” who is “acting reasonably in the circumstances.”

In other words, the FTC Act’s prohibition against deception bars businesses from tricking consumers into purchasing or using their products or services. The order’s premise that complying with anti-discrimination law somehow requires tricking consumers defies logic. It is nonsensical that a business’s compliance with an AI regulation limiting the use, for example, of an algorithm that rejects Black lease applicants at higher rates than similarly situated white applicants would require deceiving consumers.

It is also nonsensical that common-sense protections, such as giving consumers the right to appeal automated decisions to a human, deceive consumers. Developing and deploying AI that does not discriminate does not conflict with the FTC Act’s prohibition against deceptive acts and practices.

The order’s reliance on obvious distortions of how AI functions, how AI regulation works and how federal law applies shows just how weak the legal support for the administration’s preemption approach is.

Authors

Leah Frazier
Leah Frazier is the Director of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law, where she oversees litigation and engages in policy advocacy at the intersection of racial justice, emerging technology, and privacy. She previously worked at the Federal Trade Commission.
