The State of State AI Laws

Justin Hendrix / Aug 6, 2023

Audio of this conversation is available via your favorite podcast service.

Lots of voices are calling for the regulation of artificial intelligence. In the US, no federal legislation currently appears close to becoming law. But in 2023 legislative sessions in states across the country, there has been a surge in AI laws proposed and passed, and some have already taken effect. To learn more about this wave of legislation, I spoke to two people who just posted a comprehensive review of AI laws in US states: Katrina Zhu, a law clerk at the Electronic Privacy Information Center (EPIC) and a law student at the UCLA School of Law, and EPIC senior counsel Ben Winters.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

I'm so pleased the two of you could join me today to talk about this report you've just posted on the EPIC website, The State of State AI Laws 2023. Another of these occasional roundups of legislative activity that EPIC does. These are always great, and I think one that is incredibly pertinent at the moment, looking at artificial intelligence and legislation that might govern it in different ways. How long did this take to do? You've collected a bunch of stuff here.

Katrina Zhu:

So, Ben gave me this assignment at the beginning of the summer around May, and it took me all summer to track down some laws. I did a lot of research in May and June, and I was pleased to find that there were some other laws that came out in June and July as well. So, it took me about three months.

Justin Hendrix:

Folks, of course, can visit this thing themselves, but you've got a broad set of categories. You've got laws going into effect this year, laws passed this year, laws proposed this year, a bunch of different things focused on the specifics. If we zoom back out to the moon and look down at the United States and its 50 states, what is going on with artificial intelligence legislation?

Katrina Zhu:

I would say that a lot of the laws that have been passed and are going into effect in 2023 are part of larger comprehensive consumer privacy bills. So, a lot of these bills have general data protection requirements for certain data processors and controllers, and they have AI provisions as well. They give consumers the ability to opt out of profiling, or require impact assessments for controllers engaged in activities that have a heightened risk of harm, which includes some AI and automated decision-making systems. There are a lot of laws being proposed in other states around specific industries, specifically employment, healthcare, generative AI, a lot of more specific AI-related laws, but we haven't really seen many of those laws being passed in 2023 yet.

Justin Hendrix:

We'll get into some of the specifics as we go through the conversation, but maybe let's talk about just the laws going into effect this year. What are the key trends you're seeing in the laws taking effect in 2023, and are there a couple of examples you'd share?

Katrina Zhu:

So, specifically we found six laws with AI-related provisions going into effect this year, and of those, five are part of comprehensive consumer privacy bills, specifically in California, Colorado, Connecticut, Virginia, and Utah. So, for example, the Colorado Privacy Act is a pretty good example to look at. It allows consumers to opt out of profiling, and also requires an impact assessment for systems engaging in activities with a heightened risk of harm.

Justin Hendrix:

And how about laws passed this legislative session? How would you characterize the trends you're seeing of the laws that have made their way through the state houses this year?

Katrina Zhu:

We found 12 AI-related laws passed in 2023, and of those, six were part of broader privacy bills in Delaware, Indiana, Montana, Oregon, Tennessee, and Texas. Aside from those, there were not many substantive laws passed on AI. There were some around task forces, some around commissions, some declarations about the harms of AI, but not much substantive there.

Justin Hendrix:

Are there a couple that you'd point to specifically, a couple that were interesting to you?

Katrina Zhu:

I think one of them that was interesting to me was a law in North Dakota that was an emergency measure declaring that AI isn't a person. I just found that one pretty interesting. And aside from that, as I said, just some task forces that investigated the potential harms and risks of AI.

Justin Hendrix:

So, that is HB 1361 in the 68th legislative assembly of North Dakota. I'll just read the actual language. “Person means an individual, organization, government, political subdivision, or government agency or instrumentality.” And this part is underlined: “The term does not include environmental elements, artificial intelligence, an animal or an inanimate object.” I'm glad we've got that clear in North Dakota. Are there other examples in this set that you found interesting?

Ben Winters:

One in Connecticut, SB 1103, is a wide-ranging bill. It takes some transparency measures on the state use of artificial intelligence and automated decision-making systems, which is something we've been calling for for a while, and something that the federal government is making incremental progress on. But also, in that same bill, they establish an office of artificial intelligence within the state government and endeavor to develop an AI Bill of Rights, something similar to what the White House Office of Science and Technology Policy published last year. And that is not necessarily anything that will be actionable, but it will hopefully chart the path of what Connecticut will continue to try to do in both privacy and AI legislation.

Justin Hendrix:

Now, I suppose we'll get into what's perhaps the most interesting area, and certainly the one with the most diversity of ideas: the laws proposed this legislative session. And, of course, again, a lot of the laws proposed are part of comprehensive consumer privacy bills, but there are a bunch of other categories, at least according to your review. Let's talk big picture. What are you seeing as far as the trends in this legislative session? What does it appear that state houses are most concerned with?

Katrina Zhu:

So, as you were saying, there are some proposed bills that are, again, part of comprehensive consumer privacy bills. But, aside from that, we see a focus on specific industries like employment, healthcare, insurance, and government use of AI, and a lot of investigative bills examining the impacts of AI, setting up commissions, task forces, and things like that. What I was surprised not to see much of is bills regulating the general use of AI. There were a few, but I think that most of the AI regulation was specific to certain industries that the legislators seem concerned about.

Justin Hendrix:

You mentioned that there are a lot of commissions, task forces, and investigative groups being formed. So, it sounds like we'll have a lot of white papers and reports and things to look forward to in 2024.

Katrina Zhu:

I think that they were varied. Some of the investigative commissions would investigate AI use in government specifically, and some of them would investigate AI use more broadly, such as its impact on the economy and things like that. So, there were a few different things that people wanted to look into.

Justin Hendrix:

Let's dig into a couple of these particular trends. You talk about regulation of AI in employment settings. I live in New York, and of course we've just seen a law pass here that requires audits of, essentially, systems that select candidates for jobs. What other regulations for AI in employment settings are we seeing out there in the states?

Katrina Zhu:

There were a few states that proposed bills around regulating AI use specifically in hiring decisions, similar to the New York City law that was recently passed. But there were also some states that proposed bills that would regulate employee monitoring in the workplace itself. Specifically, Massachusetts proposed an act preventing a dystopian work environment, which would require employers to notify workers if they're being evaluated using AI. And it would also prevent employers from making important decisions, like promotion or firing decisions, based solely on an automated decision-making system's recommendation.

Justin Hendrix:

So, an extraordinary title again for this piece of legislation, Bill H.1873 in Massachusetts, "An Act Preventing a Dystopian Work Environment." So, we're actually legislating against Black Mirror outcomes here.

Ben Winters:

And one exciting thing to see in some of these bills around employment, insurance, and healthcare is the recognition of the life cycle of surveillance and automated systems used not just in the hiring process, but once people are on the job, in evaluation, firing, all of that stuff. And so, it's heartening to see these proposals, especially ones that have fun names like that.

Justin Hendrix:

I wish we had time to go through all of these categories. You get into regulating AI in healthcare and insurance, but you mentioned earlier the regulation of the use of AI by government as being one of the areas of focus for these laws. Are there a couple of trends here that you're seeing, perhaps a couple of good examples?

Ben Winters:

Absolutely. So, increasing transparency around the use of AI in government is something we've been working on for five or six years and have been seeing a little bit more traction on. Some of the task forces and commissions, like Katrina was saying, that were passed in past years actually had the effect of creating some transparency around the use of AI in government. But a lot of the most concerning uses of AI, in policing, the criminal justice system, surveillance, and transit, have to do with government use. And it's long been held that the government should have some sort of responsibility to its citizens. So, I think that, before corporations, government use of AI is a really good place to start and to lead by example. And we're seeing a reflection of that not only at the White House and in Congress a little bit, but in a lot of these states.

And so, I think one of the strongest ones is California AB 302. It requires an inventory of all high-risk automated decision-making systems used by state agencies. That is similar to what happened in the Connecticut bill, and the Connecticut bill actually passed, so we'll be able to see a version of that soon. But both California and Washington have for several years been proposing bills that do a few things. One, create this list online: basically, here are all the ways government is using AI, and here is maybe some of the data we're using related to that. And then, two, create a specific prohibition on certain uses that are discriminatory against folks. Particularly, again, we see that a lot in the use of facial recognition, in the use of any sort of emotion recognition, in a lot of surveillance contexts, and definitely at the border, as well as in public benefits. So, I think that should naturally be the first place to get some movement. And as that's reflected, that's where the better bills are, rather than in regulating the corporate environment in general.

Justin Hendrix:

It's interesting to think about. Again, from my vantage in New York, you've got the POST Act here, which forces the NYPD, for instance, to reveal what types of surveillance systems it's using, not just AI, but of course many different things. And that's been somewhat useful as a kind of measure, having a list of the things, but it hasn't really resulted in any actual transparency or real public oversight of the use of those particular technologies in any meaningful way. I wonder if these list or disclosure laws will have any efficacy in the long run.

Ben Winters:

One tricky part of it is that disclosure and these inventories can mean a lot of different things. So, sometimes they're just straight up a list, which is sometimes what we get with the POST Act, honestly. You just get a sheet that says, we use facial recognition, whatever. But in the Washington and California bills, for example, there are a series of questions that every agency would have to answer about the context in which a system is being used, the checks and balances around it within the office and the decision-making process, and the data that they're using. And so, that's the level of transparency that we really need to help protect individuals from potential discrimination by their government. You can't even start to bring some sort of challenge if you don't have that type of information, because you might get an adverse decision, but you don't even know where to start to challenge it. It's really wild that we haven't seen that higher level of transparency yet, but I'm heartened to see a few states recognize it in their bills.

Justin Hendrix:

Well, these are "the laboratories of democracy," so perhaps we're learning something as we go here. Let's look at another trend, regulating generative AI. Of course, ChatGPT arrived hot on the scene in November 2022. Some state houses have already gotten around to this.

Katrina Zhu:

As you mentioned, some states have started proposing bills to regulate generative AI. There aren't that many right now, I think around six, but I'm sure we'll see more in future legislative sessions. One bill that I found really interesting was a bill in New York that would prevent production companies using state funds from using generative AI to replace a natural person. And the legislators have expressed that this bill is directly meant to protect screenwriters' and actors' jobs, which I assume can only come from the movement that has been going on in Hollywood around not replacing writers and actors with generative AI.

Justin Hendrix:

I see a few here. One that would require advertisers to disclose their use of synthetic media in New York. One that would require political communications to disclose the use of synthetic media. And then another, as you say, that would prevent film production companies receiving production credits from using AI to replace actors in their productions. Of course, production credits are a big tax incentive in New York City and New York State. So, it's interesting to see that come along.

Ben Winters:

And one other trend we're seeing is trying to glom onto some of the most impactful instances of generative AI. The New York A7106 that you just mentioned, about disclosure in political communications, gets at the fact that, especially in these upcoming election years, election misinformation made with generative AI is going to be more and more prevalent. We saw that proposed at the federal level by Rep. Yvette Clarke (D-NY), and there should be a little bit more than disclosure, but we'll take that. And then the other one that's really key is in Pennsylvania, HB 1063, which specifically criminalizes disseminating what is basically revenge porn made with generative AI systems, using them to create what look like real nude photos of somebody.

And there's been an open question in state laws about what a photograph actually means when it's illegal to disseminate those photographs, because getting those laws passed was a really long, big movement by itself. And so, if it's not actually a photograph of that person, are people left a little bit in the lurch? This bill in Pennsylvania endeavors to clarify that explicitly through legislation, and I know there were discussions around introducing that in California and other states. So, that's one other major thing to watch out for.

Justin Hendrix:

Let's talk about a grab bag here of other bills that have been put forward. Are there things outside of the key trends that you've gathered that are worthy of note?

Ben Winters:

I think the overall trend is state houses getting a little more comfortable with transparency, which we have honestly started to see since 2016. But more and more, especially with the popularity of generative AI, we're getting state houses more interested in asking, oh, how do our state agencies use this? What are these generative AI tools and how are they used? I know at the federal level there are all these bootcamps and hearings and things like that that have proved very popular, and I know that state houses around the country are really trying to learn about it, to learn how to actually cut through the different narratives, especially around the companies that are focusing on this long-term existential risk. But now I think state houses are becoming a little bit more determined to find out and identify the current risks.

And one other trend I want to mention is that, although generative AI is taking up a lot of the space at the federal level, and as you see some at the state level, we are still seeing regulation of the AI we were talking about for the last 10 years before November. And that's the automated decision-making tools, the sort of simplistic, even mundane tools that really affect people's rights, livelihoods, and well-being. So, I'm glad that states have not been completely swept up by that. And it's sometimes comforting to revisit and articulate the fact that, even though it's sometimes all about generative AI rhetorically, the states are moving towards regulation more broadly.

Katrina Zhu:

I also think, in terms of outliers, a couple of things stood out to me. First, there was more than one bill proposed around regulating AI use in gambling settings, which is just not something that I personally had thought about before. Specifically, they would prevent data collected from people engaged in gambling from being used to predict their behaviors, which makes sense. It's just something interesting to call out. And the second one is that there were a couple of bills here and there around enabling AI use. I think there was one bill to fund specific businesses if they were to invest in AI technologies, and there was another bill in California to investigate how the government could leverage AI to improve its services. And I think those stood out against all the other bills that we've seen regulating AI.

Justin Hendrix:

In Illinois, you can be happy to know, there are laws in place to prevent data collected on gambling platforms from being used to predict how a player might gamble. So, Illinois is kind of out ahead of perhaps the rest of the states on that front. Any other weirdo bills that we missed?

Katrina Zhu:

I think there were a couple that I found pretty interesting. Specifically, Massachusetts has a proposed generative AI bill titled "An Act drafted with the help of ChatGPT to regulate generative AI models like ChatGPT." And I just found the irony of using ChatGPT to call out the harms of ChatGPT and to regulate ChatGPT really funny and humorous.

Justin Hendrix:

Bit of a stunt. I think we've seen that happen in a couple of different jurisdictions.

Let me ask about all of these states in general: any interaction with the federal government? You mentioned that Sen. Chuck Schumer (D-NY) has got this policymaking process going. He's announced a set of hearings, we understand, for the fall. What do you think is the likely impact of all the state legislation on the federal picture?

Ben Winters:

I think, just as we're seeing in the consumer privacy space, the more related bills come up through the state houses, the more pressure it tends to put on the federal government to create some sort of standard, to set the state of play. But, as I mentioned a little bit earlier, Schumer as Majority Leader has forced his way into this leadership role and has become a little bit of a gatekeeper for AI-related legislation. He has really reiterated prioritizing innovation, which is troubling to hear over a lot of the harms we've discussed, but has also emphasized that this is a big change that really requires a lot of hearings and study and whatnot. To one point, obviously, you want our lawmakers to be thinking carefully about this.

But to the other point, the government has put out two really great reports in the last few years, one by OSTP and one by NIST, that have really thought about this. And so, one thing we keep repeating and want to see at the federal level is: hey, we already know what to do. We don't need all of these education sessions. So, it's a little bit frustrating, but I'm hoping that state movement will put a fire under some of the federal legislators.

Justin Hendrix:

I would personally agree and wonder why more lawmakers don't see the obvious connection to the necessity of federal privacy legislation as part of preparing for our AI future. But, let me just ask you another question. Any challenges, lawsuits, controversy about any of these laws that have been put into effect or have passed? Are we going to see any pushback?

Ben Winters:

For some of them, obviously, industry is not thrilled with having additional obligations or even forced transparency, because a lot of them see those ideas as important assets. But there have not been many specific challenges, especially to the ones that have already passed. One that I will call attention to: a group of companies has sued, I think successfully, the California Privacy Protection Agency to actually delay enforcement of the California Privacy Rights Act regulations. And so, I think that will basically push the date of enforcement back a full year. And that's just a lot of challenges on very hyper-specific, in-the-weeds administrative law. I think industry has lots of lawyers going through that stuff, so anything they can do to push back the date that they actually have to start is big. But we are seeing the impact of these big groups that represent companies: being able to push voluntary frameworks, being able to push transparency-only measures, and not actually getting to the clear red lines around uses that would jeopardize either current or future business paths.

Justin Hendrix:

I suppose I have one last question. I was looking at the agenda for the National Conference of State Legislatures, which is coming up soon in Indianapolis this year, and there are multiple sessions to do with artificial intelligence, and computing, and technology more generally. I noticed that a couple of the big tech firms are sponsoring those sessions, trying to make legislators aware of where the technology's going, that sort of thing. Folks like Google and Amazon, of course, in the room. Is the hand of industry present in some of these laws that you've looked at? Can you tell whether the industry is more or less getting what it wants?

Ben Winters:

It's not any surprise or anything new to say that industry is extremely powerful in lobbying contexts, both as an industry actor and as a government contractor. And so, they play a big role in a lot of conferences of state legislators, which I think are good breeding grounds for discussion and talks about regulation among legislators from different states, the people who are really looking to move that forward. But, again, with any of these gatherings, we see a lot of the influence of industry, like those two sessions you mentioned. Two of the four total speakers among them are from Google and Amazon, who are both federal contractors and state contractors to the tune of many millions of dollars, but also have a real stake in not having meaningful regulations. I think that they tend to create this fear that regulation is going to stifle innovation, that it's going to drive people away from putting data centers in their states, which are real money makers.

And so, I think those sort of veiled threats, for lack of a better word, of adverse effects to their state and the economy are really what's keeping a lot of these bills fairly weak, fairly stuck at the commission, study, and task force level, because legislators are being told that they don't get it, that they are going to ruin the consumer and economic experience, and that they can't quite intervene yet. So, we are totally seeing the impact of it, though thankfully not as explicitly yet as we see in the consumer privacy space, but I'm sure we will get there with more news.

Justin Hendrix:

Well, I thank you so much both for your labor in collecting all of this information, and also for your time in talking to me today.

Ben Winters:

Thank you.

Katrina Zhu:

Thank you so much.
