What Will Drive State AI Legislation in 2025?
Riana Pfefferkorn / Jan 23, 2025

With the new year come new legislative sessions in statehouses across the United States. As policymakers head back to work, the promises and perils of artificial intelligence (AI) remain squarely in the spotlight. The 700 AI bills introduced across the country last year yielded a bumper crop of new state laws, and the momentum shows no signs of slowing.
In setting their legislative priorities, state lawmakers are taking stock of which boxes they ticked off last year and which bills stalled out, and why. Policymakers’ punch lists of targets for legislation will draw from the steady drip of reports that have been or soon will be issued by AI task forces in the House of Representatives and in states such as Illinois and Virginia.
As those reports reflect, AI affects domains as diverse as cybersecurity and data protection, energy and the environment, civil liberties, and public service delivery. Given that broad impact, lawmakers still have their work cut out for them even after last year’s accomplishments. Triaging which topics to prioritize will require politicians well versed in the art of the possible to assess political will, public sentiment among their constituents, and a rapidly evolving evidence base of real-world social, technological, economic, and geopolitical developments.
There may be some easy wins to pick up, for example, on issues where politicians can expect little or no formal opposition — such as prohibiting sexually explicit deepfakes of children, the topic of over 20 state laws last session — or where the government is regulating itself, as when establishing new state offices or advisory councils on AI. Picking off relatively low-hanging fruit will be optically important to lawmakers because they’ll be able to showcase those victories to the public when navigating trickier legislative challenges. Those include topics where prudent drafting entails drawing on specialized technical expertise, threading a constitutional needle (as discussed below), allocating enforcement authority, or accommodating multiple equities that may be in tension with one another (some of which might have significant lobbying power behind them).
Above and beyond domain-specific bills, some legislatures will aim their sights higher: We can expect 2025 to bring various swing-for-the-fences efforts to regulate AI writ large, informed by the European Union’s AI Act as well as landmark new legislation in Colorado, the first state in the nation to pass a comprehensive AI bill. Some states are wasting no time getting started: late last year, legislators in Texas introduced a major AI bill that has already drawn criticism both for going too far and for not going far enough.
Lessons to be learned from California
In California, where many prominent AI companies are headquartered, lawmakers have set themselves a high bar following a prolific 2024 session. Governor Gavin Newsom signed 17 AI-related bills last year covering a wide swath of policy domains, from education to labor to privacy to health care to elections. In addition, the governor vetoed SB 1047, a comprehensive AI safety bill that proved controversial for (among other things) its potential negative impact on innovation and its use of specific compute and cost thresholds for model training to determine which AI models it covered.
SB 1047 and its demise probably received more media attention than all the AI bills Governor Newsom did sign combined. That is unfortunate to the extent that it gives a false impression of governmental inaction when, in fact, the last legislative session was remarkably productive on AI issues, as the governor’s office emphasized in public messaging. The raft of new California laws suggests that policymakers should not over-index on comparing the respective fates of California’s and Colorado’s comprehensive AI bills. Lawmakers should carefully consider whether to focus their energies on European-style omnibus legislation that tries to cover a large amount of ground in a single bill or whether, instead, a package of bills addressing specific use cases and harms might achieve the same goals while proving more politically feasible to pass.
That is not to say comprehensive AI legislation is not worth the effort – only that it should not displace more incremental movement on more targeted issues. For example, in addition to a comprehensive AI bill, Texas legislators have also introduced multiple individual bills that target harmful applications of AI, such as malicious sexual deepfakes.
One advantage of a package of bills over the “one bill to rule them all” approach is that a narrow topical bill is more distinct and severable than the same language tucked into a larger piece of legislation. Contentious portions of a complex bill can create enough drag to keep it from passing before the legislative session expires. Similarly, if a court strikes down some portions of an enacted law, the whole law may be at risk if it is not drafted carefully. The legality of states’ nascent efforts to regulate various deceptive or harmful uses of AI is unsettled, meaning some regulations for some use cases may hold up to judicial scrutiny where others fall down. Snipping out one part of a patchwork quilt of AI laws still leaves the rest in place to (hopefully) do their job as intended.
A timely example is state laws addressing deceptive election-related deepfakes, where two similar statutes have so far met different fates in court. In early October 2024, a federal district court enjoined one of California’s three new laws on this topic on First Amendment grounds barely two weeks after the bill, AB 2839, became law. (The ruling seemed to attract little notice outside tech policy circles, perhaps because it was overshadowed by the SB 1047 veto earlier that same week.)
In a strongly worded opinion, the court held that AB 2839 fell short of the demanding standard that content-based restrictions on speech must satisfy to survive constitutional scrutiny. Even deceptive or false speech is presumptively protected by the First Amendment (but for a narrow, well-established handful of exceptions), so California’s law regulating “materially deceptive” deepfakes in elections had a high bar to clear. The court rejected what it characterized as an attempt to “bulldoze over” longstanding free speech protections, which it said apply with equal force “even in the new technological age when media may be digitally altered.” The court enjoined all but one minor provision of the statute. The court has also temporarily stayed enforcement of another election-deepfakes law, AB 2655, whose constitutionality is likewise in question. (Several cases challenging one or both laws have now been consolidated with the original case.)
Contrast these developments with the opinion just issued on January 10 by a federal court in Minnesota, which reached the opposite result in similar litigation filed by the same plaintiff as in the original California case, a conservative political content creator. The court rejected the plaintiff’s request to preliminarily enjoin Minnesota’s election-deepfakes law, reasoning that he had labeled his deepfake content as parody (which is constitutionally protected speech). That disclosure, said the court, put his content outside the law’s scope, which covers only deepfakes that a reasonable person would believe to be real. In stark contrast to the California court’s vigorous paean to the First Amendment, the Minnesota court tidily avoided engaging deeply with the significant constitutional questions raised by the statute (such as the permissibility of compelled labeling of synthetic content).
Lawmakers should continue to pay close attention to these cases. Imposing restrictions on AI-generated or -modified content is a fraught business given America’s robust constitutional protections for speech — especially political speech, which lies at the very core of the First Amendment. That said, 20 states have already enacted political deepfake laws; with the November election now behind us, the remainder may feel less urgency about this issue during the current off-cycle year.
In 2025, states will launch more rockets before seeing where last year’s land
Legislative efforts to address the harms of AI must walk a fine line to balance numerous interests, including free expression, privacy, consumer protection, individual dignity and safety, economic impact, and innovation. Whether any particular state got the balance right in any given AI law remains to be seen since the real test of 2024’s legislative productivity is largely yet to come: Many of the AI laws passed last year don’t take effect until at least next year. That delay gives covered entities the opportunity to come into compliance and regulators time to issue guidance and plan their approach to enforcement.
Nevertheless, states are currently looking to each other (and to the EU) for examples of how to write their own AI laws, without knowing how those examples will play out in practice. That dynamic may be inevitable given the current national (indeed global) zeal for AI regulation. The combination of the Brussels effect, politicians’ desire to be seen getting things done, AI’s very real present-day harms, and the breakneck pace of technological development translates into little patience among lawmakers for a “wait and see” attitude. We can thus expect the 2025 legislative session to be as prolific as 2024 in terms of AI bills introduced, if not more so, even if the fruits of those labors likewise end up deferred.
As artificial intelligence extends its reach into ever more aspects of our lives, the law must address AI’s present-day impacts while also anticipating what will come next. In crafting legislation, policymakers can benefit from the cutting-edge research coming out of Stanford and other institutions. With an evidence-based policy approach, lawmakers can help shape the AI-enabled future to serve us all, not just as voters or constituents, but as humans.