Perspective

Do We Need a ‘NIST for the States’? And Other Questions to Ask Before Preempting Decades of State Law

Anna Lenhart / May 20, 2025

National Institute of Standards and Technology campus in Gaithersburg, Maryland. J. Stoughton/NIST.

When ChatGPT first caught Congress’s limited attention in 2023, many tech policy scholars (myself included) argued, to little effect, that the underlying technology isn’t entirely new. In fact, generative AI is already covered by a range of existing laws, including those on consumer protection, discrimination, and biometric and data privacy.

A glance at the bills introduced during the 117th Congress (2021–2022) reveals dozens that would have addressed societal issues related to generative AI tools: proposals to create new agencies, advance privacy protections, improve platform transparency, require deepfake labeling, and reform competition. The House even held hearings on generative AI tools before 2017 (e.g., The Dawn of Artificial Intelligence).

But this more nuanced view—that generative AI is simply the latest step in the evolution of advanced data processing and machine learning—doesn’t benefit the industry. It doesn’t generate headlines or help slow down regulation like the flashier narrative, which suggests that generative AI is a groundbreaking innovation so complex that only tech CEOs can understand how to protect the public from it, and that doing anything that might impede it will somehow limit American competitiveness.

Here we are in a new Congress, with new leadership. But this time, instead of holding silly ‘insight forums’, Congressional leadership is grabbing a DOGE-inspired sledgehammer—a 3-page moratorium on state AI laws inserted into a budget reconciliation package. This is particularly troubling because generative AI isn’t some radical break from the past; it's part of a longer trajectory of machine learning and automated decision-making technologies. Framing ‘AI’ as something entirely novel is opening the door for a moratorium that risks stripping consumers of protections under long-standing laws that address child safety, privacy, fraud, and more.

Some legal scholars argue that general state data protection laws and those targeting unfair or deceptive practices may survive under the moratorium as it is written. The “rule of construction” seems to allow for the enforcement of “generally applicable laws” as long as they apply equally to ‘AI’ and ‘non-AI tools’ offering comparable functionality.

But here's the problem: the definition of “artificial intelligence” in the legislation is extremely broad (“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments…”). In today’s digital market, many, if not most, products and services include functions that fall within this definition, and it is hard to think of a ‘functionality’ that has a “non-AI” alternative.

Regardless, ambiguity favors monopolies. OpenAI and companies like it have vast legal teams ready to exploit every gray area. On the other hand, state attorneys general may have just a handful of lawyers and limited resources to fight back. This power imbalance may result in state AGs backing down on all cases related to the digital environment, even cases relying on decades-old consumer protection laws that may not officially be included in the moratorium.

If we want a national standard, drop the sledgehammer and build

When I think of a comprehensive “AI policy,” it includes mandates for transparency, researcher access to training data, risk assessments, audits, and privacy rules. These types of mandates work better and make more sense when implemented at the federal level. However, in the US system of governance, when one institution fails to act (as Congress has), others often step in. In this case, state legislatures have started to fill the gap. States are the de facto US digital regulator, for better or worse, and will remain so until Congress undergoes major reforms related to campaign finance, committee structures, staff capacity, and more. Now is the time to ask: how do we get a strong AI policy implemented nationally via the states?

One of the reasons I am supportive of national standards is that the internet’s borders are squishy, not just across state lines but international ones, too. Until recently, the National Institute of Standards and Technology (NIST) was engaged in shaping international AI standards. Similarly, the State Department was actively engaged in global discussions related to the safe deployment of AI systems at the UN Global Digital Compact, G7 Hiroshima Process on Generative Artificial Intelligence (AI), G20 Maceió Ministerial Declaration on Digital Inclusion for All, and several other forums.

I say “until recently” because both agencies now face significant cuts and value shifts. NIST’s international standards work depends heavily on internal research and external partnerships, many funded through National Science Foundation (NSF) grants. But recent DOGE cuts have slashed NSF support for AI ethics research, and even support for standards appears to be waning. For instance, during a recent Senate hearing on AI competitiveness, OpenAI CEO Sam Altman gave a wishy-washy response on the value of NIST standards when asked about the subject by Sen. Maria Cantwell (D-WA):

Sen. Maria Cantwell (D-WA):

Do we need NIST to set standards? If you could, just yes or no, and just go down the line.

Sam Altman:

I don't think we need it. It can be helpful.

His tepid response suggests Silicon Valley leaders may retract support for even voluntary guidance on standards. That’s bad news for NIST, which relies on bipartisan support in Congress. The State Department hasn’t fared much better in this new political environment. The Office of the Science and Technology Adviser was recently eliminated as a standalone office. And the Trump-era State Department is no longer prioritizing free expression, LGBTQ+ rights, or gender-based violence in its human rights reporting—all crucial values in the development of sociotechnical AI standards.

There are many reasons why having 50 states legislate the internet and AI isn’t ideal. Navigating many jurisdictions can overwhelm startups and civil society groups, while large tech firms can simply hire more lawyers. But a less discussed challenge is that states lack the foreign policy infrastructure described above to engage in international tech governance. Despite this, states increasingly reference international standards in their laws (as I discussed in a recent article).

This lack of foreign affairs infrastructure isn’t an insurmountable problem. And because other regions (like the EU) are ahead of the US on privacy, transparency, and AI auditing, American consumers could benefit from international standards being referenced in state-level laws. Additionally, while the federal government has been able to show up at international negotiations with values and frameworks, states can show up with laws that need to be harmonized. And laws have far more weight in influencing global standards than abstract principles.

In the absence of federal resolve on AI regulation, states that are active in regulating technology should work together to collaboratively build a new (or expand an existing) association that functions like an interstate-level NIST. This entity could represent US states in international standards bodies, conduct shared research, and issue harmonized frameworks similar to the NIST AI Risk Management Framework (which states are already referencing).

Of course, many details need to be worked out: How would this entity be funded and governed? What process would it use to harmonize or align laws? Which international bodies should it engage with, and how can it best engage consumers and civil society? There are many paths to building strong, coherent national AI regulations. But dismantling the progress states have already made (without offering a new set of regulations) is by far the laziest and most dangerous approach.

Authors

Anna Lenhart
Anna Lenhart is a Policy Fellow at the Institute for Data Democracy and Politics at The George Washington University. Most recently she served as a Senior Advisor at the White House Office of Science and Technology Policy and as Technology Policy Advisor in the US House of Representatives.
