Perspective

The Proposed AI Moratorium Will Harm People with Disabilities. Here’s How.

Ariana Aboulafia, Travis Hall / Jun 19, 2025

WASHINGTON, DC - MAY 22, 2025: US Speaker of the House Mike Johnson (R-LA) speaks to the media after the House narrowly passed a bill forwarding President Donald Trump's agenda at the US Capitol. (Photo by Kevin Dietsch/Getty Images)

Among Congress’s chief duties is to pass a budget that appropriates funding for federal agencies. On May 22, the US House of Representatives passed a budget reconciliation package that would structure government operations for the 2026 fiscal year, sending it to the Senate. As disability rights and justice advocates have warned, many provisions in the bill would have devastating impacts on people with disabilities — particularly the $600 billion in cuts to Medicaid that could lead to over 10 million people losing their health coverage.

Buried deep in the bill is another provision that also has the potential to detrimentally impact disabled people: the proposed ten-year moratorium on the enforcement of state or local AI regulation. If enacted, it would prevent states and local governments from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems,” with limited exceptions.

The bill would prevent states from regulating emerging technologies that have genuine benefits but that are also putting their constituents at risk in very real ways, including those with disabilities. Indeed, deploying AI tools in decision-making systems in high-stakes contexts like employment, education, and healthcare presents a particularly high risk of harm for people with disabilities. These risks have been described as “tech-facilitated disability discrimination” — an umbrella term that encapsulates all of the ways that AI and other emerging technologies can cause people to experience discrimination on the basis of their disability. While existing disability rights statutes (like the Americans with Disabilities Act) do provide some means of legal recourse for this type of discrimination, state-level AI regulation remains a vital avenue for harm mitigation.

Previous research by the Center for Democracy & Technology (CDT), a nonprofit, has found that people with disabilities are disproportionately likely to be “screened out” from jobs as a result of their interactions with automated employment decision systems, or AI-enabled hiring tools, like video interview tools and personality assessments. In addition to presenting accessibility concerns, these tools also contribute to employment discrimination, wherein qualified disabled workers are blocked from jobs — and employers miss out on valuable hires.

Similarly, many facial recognition tools simply do not work for people with certain disabilities, including individuals with facial differences and people who are blind or have low vision. This is partially because these tools are trained on underinclusive datasets, leaving them incapable of recognizing people with any sort of facial difference. These tools have been embedded in a range of systems: they are built into testing software used by some schools and serve as a security measure for some types of transportation, among other uses. As a result, facial recognition tools can impair the ability of disabled people to take tests, board airplanes, and more.

As yet another example, AI-enabled tenant screening — where potential tenants are assigned a “score” that landlords or property management companies then use to determine whether or not to offer them a lease — also poses additional risks for disabled people. These “scores” can treat medical debt as a negative factor, and people with disabilities are twice as likely as those without disabilities to carry past-due medical debt. As a result, this process can impact the ability of disabled people to secure housing.

Finally, as previously outlined in Tech Policy Press, disabled people may disproportionately experience harm as a result of surveillance pricing. While there are several pending state-level bills that aim to regulate this practice, the moratorium would likely prevent many or all of them from being enforced.

Across the nation, states have been active in writing and passing laws that aim to regulate emerging technologies, including the use of facial recognition technology and other biometrics, particularly in the context of policing. State and local legislators have also enacted laws to regulate the use of AI in hiring, particularly whether or not a company can do so without notifying applicants, and bills have been proposed that would similarly provide transparency in the use of AI tools for housing-related decisions.

The AI moratorium would likely prevent enforcement of all of these laws. A recent paper by Americans for Responsible Innovation argues, for example, that Illinois HB 3773, which prohibits employers from using AI in employment decisions if it causes discrimination against protected classes, would “likely” be voided by the moratorium. The report found that Colorado’s AI Act (SB 24-205), passed last year, is similarly “likely” to be voided. That law requires transparency around the use of “high risk” systems, such as those that could lead to discriminatory outcomes for groups like people with disabilities.

The scope of the moratorium appears broad enough that it would prevent states from regulating all sorts of technologies and tools — to the detriment of people with disabilities, and other constituents, nationwide. Importantly, as currently drafted (and despite claims to the contrary), the moratorium appears to go further, preventing “general purpose” consumer protection and civil rights laws at the state level from being enforced when AI systems are involved, making entities that use those systems truly unaccountable for the harms they cause.

The harms from AI tools and other emerging technologies are not inevitable, and there are real benefits to some of these technologies, including their incorporation into assistive technology tools for disabled people. Many pending or passed state-level AI bills aim to allow people, both with and without disabilities, to enjoy the benefits of emerging technology while being largely protected from the risks. But the moratorium would prevent all of that.

More than 70 million adults in the United States are disabled — around one out of every four US adults. The impacts, and risks, that these individuals face in their interactions with emerging technologies will only grow over the next ten years. While the provision has significant procedural issues such that it should be removed from the proposed budget reconciliation bill, its proponents will likely continue to push for a similar moratorium in other must-pass legislation if it fails now. Prohibiting state lawmakers from passing AI bills to protect their constituents, both with and without disabilities, is more than procedurally improper — it is simply bad policy.


Authors

Ariana Aboulafia
Ariana Aboulafia leads the Disability Rights in Technology Policy project at the Center for Democracy & Technology. Her work currently focuses on maximizing the benefits and minimizing the harms of technologies for people with disabilities, including through focusing on algorithmic bias and privacy ...
Travis Hall
Dr. Travis Hall is the State Director for the Center for Democracy & Technology, where he helps to champion, coordinate, and strategize CDT’s policy initiatives at the state level. Prior to joining CDT, Dr. Hall served as the Associate Administrator for the National Telecommunications and Informatio...
