A Look at California’s Sweeping AI Safety Bill

Jesús Alvarado / Sep 6, 2024

Jesús Alvarado is a fellow at Tech Policy Press.

California Governor Gavin Newsom at his inauguration in 2023. Wikimedia Commons

Relative to other US states and the federal government, California has taken leading steps in regulating consumer data, protecting online privacy, and shielding kids from the harms that come with the internet. Another step the state’s legislature has taken in the realm of tech policy? Regulating the harms that could come from artificial intelligence.

Last month, both houses of the California legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), or the AI Safety Bill, as many have nicknamed it. The bill aims to regulate the development and use of artificial intelligence technologies within the state. It proposes the creation of a regulatory body that would be responsible for developing guidelines and standards to ensure the safety of AI systems, monitoring AI usage across various sectors, and conducting risk assessments to prevent potential harm from AI technologies. As of now, the bill awaits Governor Gavin Newsom’s signature, which could come any day this month.

Since generative AI tools like OpenAI’s ChatGPT and the AI image generator DALL-E were made available to the general public in recent years, many have had a lot to say about the technologies. For some, AI is seen primarily as a tool for automating day-to-day work and personal tasks, a learning aid, or even a source of inspiration. But others have concerns about the implicit biases baked into large language models and the data used to train them, and about the harms of automating important systems. On that latter point, California’s SB 1047 addresses some of those growing concerns about the ethical implications of AI, aiming to guard critical infrastructure, “the incapacitation or destruction of which would have a debilitating effect on physical security, economic security, public health, or safety in the state.” State legislators hope the law will push AI systems to align with human values and rights, balancing technological advancement with social responsibility.

Key to SB 1047 is its proactive approach to managing AI-related risks. By establishing regulatory frameworks early on, the bill seeks to prevent misuse or unintended harmful consequences of AI, which has the potential to position California as a leader in responsible AI governance.

Key provisions of SB 1047

Below are the main provisions of SB 1047:

  • SB 1047 applies to AI models that meet a significant computational threshold and whose training costs exceed $100 million.
  • Developers of advanced AI systems must conduct pre-deployment safety tests and identify potential harmful capabilities.
  • The bill mandates implementing cybersecurity safeguards and monitoring systems post-deployment to prevent misuse, such as cyberattacks or the creation of dangerous weapons.
  • It includes protections for employees of AI labs who report concerns about safety violations or unethical development practices.
  • A public cloud computing infrastructure, which the bill calls CalCompute, would be established to support startups and researchers in developing AI responsibly, ensuring innovation isn't exclusive to larger firms.
  • California’s Attorney General has the authority to impose penalties or take legal action if AI developers fail to comply with the bill’s safety measures or if their systems cause harm to the state’s residents.

A contentious path to the Governor’s desk

SB 1047 didn’t have an easy journey getting to where it is now. It received major backlash from Silicon Valley and is the subject of an ongoing campaign to secure a veto, led by big AI companies like Meta, Google, and OpenAI. Last month, a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, sent an open letter to California State Senator Scott Wiener, an author of SB 1047, arguing the bill could have a negative impact on innovation from smaller startups, citing the computational and training-cost thresholds that determine which AI models the bill covers.

And those arguments might have merit, said Cameron Kerry, a distinguished visiting fellow at the Brookings Institution, in an interview. “At a distance, I was inclined to think that some of those reactions might be overstated, but having looked at the bill, yeah, I think they are legitimate arguments,” he added.

Even Meta’s chief AI scientist Yann LeCun took to X – formerly Twitter – to criticize the bill, saying, “SB1047 attempts to regulate AI research and development, creating obstacles to the dissemination of open research in AI and open source AI platforms… predicated on the illusion of ‘existential risks’ pushed by a handful of delusional think-tanks.”

Michael Cohen, an artificial general intelligence safety researcher at UC Berkeley’s Center for Human-Compatible AI, said some of these threats aren’t unrealistic, however. “No serious people think that there aren't bad guys out there interested in holding hospitals for ransom with a cyberattack,” Cohen said in an interview, adding that “it would be fair for any future political opponent of Gavin Newsom to say, ‘if you hadn't vetoed that bill, it's hard to see how that wouldn’t have happened,’” alluding to a potential veto by the California governor.

What’s next?

Governor Gavin Newsom has until the end of the month to sign the bill into law. If he signs it, SB 1047 could serve as a model for other states and even influence federal policy, given that the bill would regulate AI companies based in California as well as those doing business in the state.

However, as of now, it is unclear where Governor Newsom stands on the bill. Notably, there is pressure from eight House members who represent California districts – Reps. Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and J. Luis Correa – who last month sent the governor a letter asking him to veto the bill. (Legal scholar Lawrence Lessig rebutted their arguments in a recent post in The Nation.) And even if the bill does become law, SB 1047 will likely face a legal challenge from industry groups. The Governor’s signature, if it comes, will almost certainly not signal the end of the matter.
