What's Going On In California?

Justin Hendrix / Sep 8, 2024

Audio of this conversation is available via your favorite podcast service.

Thirty tech bills went through the lawmaking sausage grinder in California this past session, and now Governor Gavin Newsom is about to decide the fate of the 19 that passed the state legislature. The Governor has until the end of September to sign or veto the bills, or to permit them to become law without his signature.

To learn a little more about some of the key pieces of legislation and the overall atmosphere around tech regulation in California, I spoke to two journalists who live and work in the state and cover these issues regularly:

  • Jesús Alvarado, a reporting fellow at Tech Policy Press and author of a recent post on SB 1047, a key piece of the California legislation;
  • Khari Johnson, a technology reporter at CalMatters, a fellow in the Digital Technology for Democracy Lab at the Karsh Institute for Democracy at the University of Virginia, and the author of a recent article on the California legislation.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

I'm so grateful that two of you could join me today to take our listeners through what the heck's going on in California, this wave of AI legislation that has coursed through the California legislature, some of it now sitting on the governor's desk. Really hoping to dig into a little bit of what's going on with some of these high profile bills, what's made it there, what got left on the floor of the legislature and what we should know about the path forward.

Khari, I want to start with you, maybe just as background for those who might not be following California all that closely: help us with the timeline here. When did this wave of legislation start? How long have these various laws been under consideration, and where are we at right now in the process?

Khari Johnson:

California is a bit different from other states in that it has a full-time legislature, and the legislative session begins at the start of the year. Last Saturday, up until midnight, they were debating different bills or pushing them through different committees. Roughly 20 different bills related to artificial intelligence were passed by the California legislature, touching on deep fakes, the potential for catastrophic harm, protecting kids, healthcare, all kinds of stuff.

Justin Hendrix:

Beyond just the general sort of interest in artificial intelligence that we've seen in legislatures around the country, around the world, was there something particular about this legislative cycle that led to this raft of legislative proposals around AI?

Khari Johnson:

I think there's still this kind of aftershock of AI, but last fall, Governor Gavin Newsom signed an executive order requiring state agencies to explore how artificial intelligence can be used to solve different problems. There are roughly half a dozen pilot projects associated with that ongoing right now. Another one was announced yesterday related to homelessness.

I think in the aftershock of ChatGPT, with more money and interest flowing around generative AI, there was a lot of attention being paid, but also the majority of major AI companies call California home. There's a history of open source projects and tools that were built in California, like PyTorch, TensorFlow, and BERT, which Google released early on back in 2018. It has a lot to do with the AI arms race dynamic that we're seeing today.

California has a lot to do with artificial intelligence. At the start of the legislative session, a lot of leaders in the California Assembly and Senate pledged to introduce bills that would address this, to take on the responsibility of regulating AI, and to try to be a world leader in doing that, just talking about the imperative of that. One big undertone of the conversation around Senate Bill 1047, which was authored by Senator Scott Wiener from San Francisco, was that there's a responsibility to act because California is the home to a lot of AI, but also Congress is not going to act. Congress has not acted.

Justin Hendrix:

Jesús, I want to come to you. You just wrote for us at Tech Policy Press about Senate Bill 1047 and about some of the arguments that are still playing out around it as it awaits action by the governor. I want to ask you, again, maybe similarly to what I've just put to Khari, a more like atmospheric question about how the tech policy debate has unfolded in California.

You've also covered for us recently issues around the child safety debate, the California Age Appropriate Design Code. What do you think of as a description of the environment for this type of tech policy debate in California? As Khari mentioned, you've got obviously a lot of folks who know technology, but a lot of big money there as well that has an interest in stopping some of this legislation.

Jesús Alvarado:

It's interesting, because since ChatGPT came into the picture, when it became available to the general public, I think there was this discourse of 'this is fun.' It's like Google, but it actually gives you answers and whatnot. Of course, before that we had DALL-E, which is similar software, but instead of text it produces visuals.

That all was fun, but I think once generative AI started making its presence felt in schools was when we first heard from teachers, educators, professors; they were the first ones, in a way, to ring the alarm bell and say, "Hey, this is out of control. This is being used this way. We see it among our students. We need legislation." I remember that, because it was two years ago when I started to make this sort of pivot of not just focusing on general tech policy but specifically on artificial intelligence.

I remember even covering what generative artificial intelligence is, and so it's been wild in the interim, I guess some would say. On the flip side of that, we've also heard, or at least I've heard from many sources, how regular folks now leverage generative AI to do things like automate tasks, either in their everyday work duties or in their everyday life. I feel like it's been a double-edged sword.

Now, with this bill, SB 1047, out of California, it would be very impactful if Governor Gavin Newsom decides to sign it into law, but still impactful even if it's vetoed, because what would that say to other companies that want to come to California and start up their AI business? We have yet to find out.

Justin Hendrix:

Let's spend a bit of time on 1047. This is Senator Wiener's bill. Khari, you mentioned it's received probably the most attention outside of California. Folks are calling it the AI Safety Bill, I suppose, for short. There's criticism of this thing from multiple sides, including folks that I don't think of as friendly to the companies. You quote, for instance, Alex Hanna from DAIR, the Distributed AI Research Institute, arguing that SB 1047 focuses too much on catastrophic risks.

That sounds almost similar to Yann LeCun, Jesús, who you quote in your piece on 1047 who says the law is too focused on things that have been dreamed up in a few extreme think tanks. What do we make of the current debate on 1047 and its prospects for getting Governor Gavin Newsom's signature? Khari, I'll start with you.

Khari Johnson:

I agree with Dr. Hanna, and I think that there's been a dynamic in conversations about artificial intelligence that's existed for a few years now where it's my opinion that the majority of lawmakers, however versed they are in artificial intelligence, do not know if they're speaking with somebody who might refer to themselves as "I work in AI safety" or "I work in ethics and responsible AI," or "I'm an effective altruist."

I think that their motivations can come from very different places. I feel like a lot of people on the ethics and responsible AI side, in which I would include Dr. Hanna, are more interested in addressing harms that exist today, that are proven, that are clear and present dangers. Whereas I think a lot of AI safety people can focus on... An effective altruist can focus more on highly hypothetical harms that don't necessarily exist yet, but they're deeply concerned that those are right on the horizon.

That's a big challenge and something that we'll continue to see in how these bills are drafted. I try to articulate this in my story, or I should say a lot of my sources do: it says a lot that AB 2930, the bill that was meant to address discrimination, did not pass. This is the second year, I believe, that it's been proposed. The author, Assemblymember Rebecca Bauer-Kahan, plays a big role in regulating artificial intelligence and other forms of technology in the California legislature because she's the chair of the Privacy and Consumer Protection Committee.

I don't really know the prospects of 1047 being vetoed or signed. I think we can read the tea leaves by paying attention to Governor Newsom signing a memorandum of understanding with NVIDIA a couple of weeks ago and continuing to suggest AI solutions to problems. Governor Newsom had UC Berkeley and Stanford co-host a symposium in May. There, he mentioned that he doesn't want to see an over-regulation of AI, that we have to strike a balance, that we have to pay attention to the calls for regulation that are being made by people who are in the artificial intelligence community or industry, but that we shouldn't over-regulate.

It's anyone's guess. I agree with Jesús that whether it's a veto or signing it into law, there's a lot to pay attention to there. I'm certain that there are lots of other issues, and the people I spoke with would say that there's a lot of work to be done to effectively regulate the technology and protect people's human rights.

Justin Hendrix:

Jesús, you pointed out in your piece on 1047 that among those who've lined up asking the governor to veto it are members of the California Congressional delegation, folks like Rep. Zoe Lofgren (D-CA) and many of her colleagues. What are they arguing?

Jesús Alvarado:

I'm not even going to lie about this. When I was going through the letter, it was echoing much of what these big AI companies were already spewing on X, formerly known as Twitter. I was so confused, especially when I saw the name Rep. Ro Khanna (D-CA), because those of us who have done tech journalism have at least once or twice visited his office and spoken to him. He's good at what he's trying to legislate and how he wants to legislate for people to see Silicon Valley differently, if that makes sense.

When I saw his name on this letter, I was like, this is so confusing. Am I reading someone's thread on X? But no, it's just this letter. They were just echoing much of what we've seen all these critics say about the bill and essentially just straight up telling the governor, "Veto this. We don't want this." I got to be honest, I don't know what to make of that letter or whether it's going to have any influence on Governor Gavin Newsom's decision when this bill does get to his desk.

I feel like it's in parallel to what Senator Scott Wiener has said that Congress hasn't acted and that's the reason why he came out with this bill. Had Congress acted, SB 1047 wouldn't be on the table. I guess we'll see.

Justin Hendrix:

Khari, anything to add to that? You seem to agree with that characterization, the idea that the language in that letter from the Congressional delegation syncs up with the interests of industry. Maybe to ask a secondary question there: with this one, it seems like it's not easy to tell what the motivations of every actor are or what the sides are.

Sometimes I feel like on this podcast, when we're talking about policy questions, it's clear industry's against it and civil society's for it, or the Democrats are against it and the Republicans are for it, or what have you. It seems like this is a much more complicated one.

Khari Johnson:

I would agree that it seems like there are similar talking points, some of the usual suspect talking points, that I'm seeing in the Congressional letter and in the letters from companies like Google and Meta and Microsoft and OpenAI... I haven't actually seen the OpenAI letter yet, but some of the points that they sent me in the course of reporting, I'm seeing a lot of that in the various letters that were sent by members of Congress to Senator Wiener.

I think one powerful point that Senator Wiener made, which I tend to agree with and appreciate being put into the context of recent history, is that similar points, that this is stifling innovation and that companies are going to leave California, were made around data privacy. Around the passage of the data privacy law in California in 2018, there was a lot of concern at the time about a lack of federal legislation, and I think a lot of people were saying, "We would prefer if Congress acted," but they didn't.

If California hadn't acted in 2018 on that, or on net neutrality, there wouldn't be a law here to protect its citizens. You look back six years later, Congress hasn't done it, so why would artificial intelligence be any different? I think that states are getting more active in regulating technology because of a lack of effort by Congress, and California is a centerpiece of that. As I mentioned before, I think California is distinct in that different committees have experts on staff, have full-time staff, lawmakers working full-time. That's not the case in a lot of other places.

I think policy that comes out of California can sometimes seem more mature because of that. When people say that phrase, "As California goes, so goes the nation," I think that's practically, when the rubber hits the road, what we're talking about with California.

Jesús Alvarado:

It is interesting, that latter point you mentioned there. We can see this even in this bill, 1047, where it's basically proposing a regulatory body. I think it's two people who are in the industry, two people who are academics in the AI realm, and one person appointed by whomever. And so, they really are looking for a wide variety, a diverse group of people, to potentially lead what could be the first AI regulatory body here in California.

Justin Hendrix:

Khari, I want to hit some of these other bills as well as we go along. We've already talked about how 1047 seems to have gotten most of the publicity, most of the oxygen, and I'm in danger of essentially repeating that in this podcast as well. You point to a number of bills that did find success, some that are very interesting: one that would require companies to supply AI detection tools at no charge to the public, put forward by Senator Josh Becker of Menlo Park. There are others around asking government agencies to assess the risk of using generative AI or disclose when it's used. A couple that focus on children.

Khari Johnson:

One of the bills that stood out to me was a bill that would require social media companies to turn off notifications to kids during class hours. There's a study by Common Sense Media, here in California, though they work and suggest policy in various parts of the country, that says kids currently receive roughly 60 notifications on their phones during school hours and spend roughly 43 minutes a day on their phones.

The name of that bill, I think, is Protecting Kids From Addictive Social Media. It requires that notifications also be turned off between midnight and 6:00 AM, when kids should be asleep. It turns off algorithmic curation of content unless a kid has permission from a parent. There's another bill that would create an AI working group to give advice to different California school districts about how to safely and responsibly use AI. That, from my own reporting, could be important, because I'm seeing that teachers in various parts of the state are beginning to use AI for grading papers.

I think that opens up the question of whether or not that could have a high impact on kids' lives. I'm interested in that as well. There's also a bill that makes it a crime to create child pornography with generative AI. I think the message that the legislators are sending about protecting kids is pretty clear from the different bills that were passed. Something Veena Dubal, who is a lawyer at the University of California, Irvine School of Law, pointed out is that it's evident lawmakers are sending this message that it's good to protect kids. What's not as clear is the push to place restrictions on other forms of the technology.

She thinks it should be a lot easier to reach agreement on protecting people from discriminatory AI and demanding accountability from the companies that use the technology. That's something that should be agreed upon. One of the other things that stood out to me, in terms of the slate of bills that did or did not pass, is AB 3211, a bill that would've required watermarking of imagery created by text-to-image systems and things of that nature.

It had support from companies like OpenAI and Microsoft and Adobe, but failed. Something that was interesting to me: earlier this year I spoke with Gerard de Graaf, who's the director of the San Francisco office of the European Union. When I spoke with him in June, he had made roughly half a dozen trips to Sacramento this year. He was advising lawmakers on how to come into agreement and alignment with the European Union's AI Act, and he was saying that Senate Bill 1047, AB 3211, and AB 2930, the discrimination one, that trio of bills would cover the majority of what is in the EU AI Act.

There was just this conversation about Sacramento and Brussels being the two places where you could expect the toughest AI regulation and leadership on it. 3211 and 2930 did not make it, and it looks like 1047 could be facing a veto. It seems like part of the story with the bills that did and did not pass in California this year is that it might've been a missed opportunity to align with the EU AI Act.

Gerard says that there were some good things done in terms of aligning with the European Union's definition of artificial intelligence, and he liked a bill that would require companies to disclose more information about the data sets that they use. He feels like that's in line with the EU AI Act. But with those three bills, it's pretty clear we may have seen a missed opportunity to align Sacramento and Brussels on regulating AI.

Jesús Alvarado:

Something that caught my attention is this sort of Sacramento-becoming-Brussels situation. I did get into that conversation with one of my sources last week, Cameron Kerry at the Brookings Institution, who seemed very much to share Governor Gavin Newsom's viewpoint, in the sense that he believes in regulating AI but also not over-regulating it. As he put it, he's always been afraid of overregulation equaling no innovation or slow innovation.

I didn't get too much into the weeds with him on that topic specifically, but that is the second time that someone within the tech policy realm has mentioned that to me. I think last September, if not October, I spoke to Andrea Renda at the Centre for European Policy Studies, and he was actually studying the effects of the EU's AI Act and found that it is going to slow down innovation. I think at that time, Gemini and ChatGPT weren't allowed in the UK.

I think now they are, obviously, but he used that as a thesis. That's always interested me on the business side: this potential law, SB 1047, what would it actually mean? Will it actually protect us consumers, or will it stop innovation? Or is that the cost of our protection? We don't know.

Khari Johnson:

Something that stuck out to me in the conversations I was having with people about how they feel California lawmakers did in regulating artificial intelligence this legislative session: the word "ban" came up a lot more than I expected. I think the conversation about bans is rooted in a place of appreciating that there are uses of this technology that should not exist, that are a danger to society.

Determining what fits that description is part of regulation. One of the sources I spoke with was talking about how there's a danger in trying to both win the AI arms race and regulate AI effectively. Trying to have both at the same time seems like part of the conversation that's present when talking about regulating AI: if you're going to determine that some things require a ban, that some things should not be allowed to happen in order to protect people and their rights, then maybe that and the conversation about being business friendly don't always mix.

Justin Hendrix:

Yeah, I think that's been the catch-22 in many legislative discussions around this. Chuck Schumer, the Senate majority leader, when he was hosting those AI Insight Forums, his phrase was that "Innovation is the North Star for all thinking about AI and AI policy." Of course, when the Senate's roadmap came out, it was criticized by lots of civil society groups for essentially taking such a pro-business or pro-AI perspective and not addressing the harms substantially enough.

It seems like we may be stuck slightly in that same catch-22 in California, depending on the outcome of a couple of these bills that are still before Governor Gavin Newsom. I just wanted to ask about one other set of measures, which, Khari, you point out did pass, around AI and elections. Of course, we've seen some effort at the federal level to introduce somewhat similar measures, but what's happened here? This seems like uniformly good news to me.

Khari Johnson:

There was a trio of bills that passed here, and one of them will allow a judge to order an injunction and require an individual to take down the content or pay damages. In this scenario, Elon Musk posting a deep fake of Kamala Harris would probably qualify as something that would be regulated under that. Interestingly enough, there's another one that requires large platforms to remove or label deep fakes within 72 hours.

There was an incident, I think it was last week, where former President Trump posted what looked like a deep fake of all of his political enemies in orange jumpsuits, and because that was posted on Truth Social, it's my understanding that under this law it would not have required a take-down, because Truth Social is considered too small for that. These were three bills intended to prevent people from being harmed by deceptive forms of AI.

It's important to note that humorous or parody forms of it, things that might get into free speech territory, may not fit. But let's say a deep fake of Donald Trump hanging out with a bunch of Black people, made in order to deceive people into believing that Donald Trump loves Black people and has a lot of support from the Black community, would potentially qualify under these laws as something that would need to be taken down, because it's intentionally deceptive.

Justin Hendrix:

We've covered a lot. Is there anything that we should cover before we find a way to conclude?

Khari Johnson:

Yeah, I would be remiss if I didn't mention that... I mentioned up top that effective altruists and the ethics community can have different points of view at times. I think 1047 contains something that both of those communities, people who are concerned about how AI can harm people, agree on: it requires testing before deployment and protection of whistleblowers. In my conversations with people from both of those communities, those two things are agreed upon and are part of what they feel is important in order to protect humans.

Another thing: it would appear from lobbyist filings with the state of California that OpenAI hired its first California lobbyist since the company was founded nearly a decade ago, in order to express its opposition to 1047 and its views on other bills like AB 2930, the one to protect people from discrimination. The lobbying dollars do not appear to be that much higher this year. Politico reported last fall, I should say, that OpenAI recently hired its first lobbyist in DC as well, and so that seems noteworthy.

Jesús Alvarado:

One last noteworthy thing from my end, if we're talking about noteworthiness, is that what we've been talking about, specifically these AI bills, is only in California. In the absence of national regulation, if you will, of this type of technology, something I would want to keep an eye on moving forward is which other states come after California to copy/paste some type of AI regulation.

Then down the road, if Congress does actually do something about it, what would that mean when you're clashing against a patchwork of state regulations? Of course, federal law would trump all of them, but I'd be interested to see how that plays out over the super long run.

Justin Hendrix:

With little immediate hope that Congress will act, there's a lot of attention again on California. We'll see what ends up making it through the gauntlet, getting the governor's signature or receiving his veto, and that will perhaps give us some sense, as you say, Jesús, of whether copies or replicas of those laws may proliferate elsewhere across the country. I guess one thing that's also clear is that no matter which of these do pass, we can likely expect legal challenges, and the process will carry on from there.

As you've just mentioned in particular, Khari, there are lots of speech implications in a lot of these laws, and certainly various parties will line up to challenge aspects of those that they think may offend on First Amendment grounds or on other grounds. I suspect that you'll both be reporting on these laws for quite some time going forward. This is hardly a done deal at the end of September.

Khari Johnson:

Definitely. I think with 2930, the anti-discrimination bill, and 3211, the watermarking bill, the lawmakers who proposed both of those have said that they plan to bring them back next session as well, so the story's not over there.

Justin Hendrix:

Khari, Jesús, thank you so much for joining me, and I hope I can call on your expertise and reporting again in future when we find out what's happened with all of this raft of legislation in California and elsewhere. Thank you so much.

Khari Johnson:

Thank you.

Jesús Alvarado:

Thank you.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
