The Coming Year of AI Regulation in the States

Dean Woodley Ball / Jan 7, 2025

American AI policy in 2025 will almost certainly be dominated, yet again, by state legislative proposals rather than federal ones. Congress will have its hands full confirming the new Trump administration's nominees, establishing a budget, grappling with the year-end expiration of Trump’s earlier tax cuts, and perhaps even tackling weighty topics like immigration reform. Federal AI policy is likely to be a lower priority. Thus, statehouses will be where the real action can be found on this vital topic.

In 2024, state lawmakers introduced hundreds of AI policy proposals. Only a small fraction passed, and of those, the vast majority were fairly anodyne, such as creating protections against malicious deepfakes or establishing state government committees to study different aspects of AI policy. Few constituted substantive new regulation. An AI transparency bill in California and a civil-rights-based bill in Colorado are notable exceptions.

In the coming year, expect to see far more major, preemptive AI regulatory proposals. These will look more like European Union regulations than the more modest US proposals that predominated in 2024.

Already, there have been rumors of New York legislators working on proposals similar to the vetoed SB 1047 in California, which would have imposed negligence liability on frontier AI model developers when their models were misused by others. It also sought to create an auditing regime for the same firms, among other provisions. Meanwhile, the original bill’s author, State Senator Scott Wiener, has indicated that he might take a second stab at it during California’s next legislative session. While SB 1047 was the only major attempt at “frontier” AI regulation in 2024, we are very likely to see multiple similar proposals in these and other states.

But perhaps the largest number of major AI proposals will more or less mimic Colorado’s civil-rights-based bill. Similar bills failed in Connecticut and Virginia last year, and it would be unsurprising to see them re-introduced. New states will bring their own iterations. Texas Representative Giovanni Capriglione, for example, has introduced the Texas Responsible AI Governance Act (TRAIGA), which he describes as a “red state model” for AI policy, despite its striking similarity to proposals from blue states like Colorado and Connecticut.

These bills differ in some important details but share the same broad framework. Developers of AI models or systems that may be used in “consequential decisions” (in industries like financial services, health care, insurance, electricity, and numerous others, as well as common business activities like hiring) will need to write lengthy “algorithmic impact assessments” and, often, implement “risk management frameworks.”

The same requirements also usually apply to AI deployers—that is, businesses choosing to use AI systems in ways that may affect a person’s access to, or the terms of, services in the industries and business activities mentioned above. These documents all need to be written prior to the release or deployment of AI systems.

Requirements like these might make some sense for “narrow” machine learning systems—a statistical model made by a bank, say, to evaluate the likelihood that a loan applicant will repay their loan. It is fair to wonder whether the data used to train that statistical model contains racial, gender, and other biases from America’s historical patterns of discrimination. But applying these requirements on a per-use-case basis to generalist AI models like ChatGPT is another matter altogether.

ChatGPT and similar models have thousands of potential use cases for a business, while a narrow statistical model has only one or a few. The civil-rights, impact-assessment-based approach is, therefore, fundamentally outdated for the very AI systems that policymakers seem most eager to regulate.

In their current form, these and other requirements imposed by the bills are, at the very least, likely to deter AI adoption by covered businesses, and they could even diminish frontier AI development in the United States more broadly.

As currently written, these bills cover a huge range of commercial activity. Contractors who provide electrical or plumbing services, for example, may have to write algorithmic impact assessments before using a large language model for something as simple as drafting customer invoices, because invoices inherently affect the customer’s terms of, and access to, the service in question. When Governor Jared Polis of Colorado signed his state’s version of this bill, SB 205, he noted his “reservations” about the “complex compliance regime.” (It is worth noting that SB 205 does not go into effect until 2026.)

Indeed, SB 205 and related civil-rights-based bills are considerably longer and more complex than even California’s SB 1047. They appear to have emerged out of a “multistate AI policy working group” convened by the Future of Privacy Forum. All the legislators who introduced the aforementioned bills from Connecticut, Colorado, Virginia, and Texas were members of that working group’s steering committee. Another version of this framework has been proposed as agency rulemaking by the California Privacy Protection Agency, which was also represented on the same steering committee.

This approach ostensibly protects against the risk of a “patchwork” of state AI regulations by adopting a common legislative framework. But in practice, the bills contain differences substantial enough to create their own regulatory patchwork, even with a common (and flawed) framework.

In more ways than one, the approach to AI regulation typified by Colorado’s bill is the worst of both worlds: a nebulous, preemptive framework imposed on a wide variety of businesses and AI developers of all sizes, with none of the benefits of a unified nationwide approach. If too many states adopt it as a de facto standard, America could stumble headlong into a regulatory regime worse than the European Union’s, with all the ambiguity of the EU’s AI Act plus a patchwork of differing rules across jurisdictions and America’s generally more litigious culture.

If America proceeds in this direction, it will be up to the federal government to preempt these flawed bills with an approach of its own. How and whether this will happen may well be the most important AI policy question in the coming years.

This post is part of a series examining US state tech policy issues in the year ahead.

Authors

Dean Woodley Ball
Dean Woodley Ball is a Research Fellow in the Artificial Intelligence & Progress Project at George Mason University’s Mercatus Center and the author of Hyperdimensional. His work focuses on emerging technologies and the future of governance.
