
California’s AI Reforms Scare All Developers, Not Just Big Tech

Ben Brooks / Aug 23, 2024


Ben Brooks is Head of Public Policy at Stability AI, an artificial intelligence company best known for its text-to-image model Stable Diffusion.

With a week left in its 2024 session, California's legislature is racing to pass SB 1047, a bill imposing unprecedented restrictions on the development of AI models, the "read, write, and reason" engines that power tools like ChatGPT. While aimed at frontier models from Big Tech, the bill may unintentionally suppress the release of open-source models, crippling a wider ecosystem of independent researchers, small businesses, and everyday developers who rely on open technology.

If signed into law, SB 1047 will make California the world's most extreme regulatory outlier. While the EU’s AI Act and the White House's Executive Order on AI impose a variety of disclosure requirements on developers, both efforts stopped short of regulating the capabilities or distribution of models: indeed, the National Telecommunications and Information Administration recently declined to endorse restrictions on the availability of powerful models. In Congress, most of the hundred-odd federal AI bills avoid regulating the development of models, focusing instead on the deployment and use of AI systems.


California's SB 1047 goes far beyond these frameworks. In doing so, it requires developers to make binding guarantees that are fundamentally incompatible with openly sharing their technology.

Despite nine rounds of amendments in the state legislature, several provisions in SB 1047 continue to pose a serious threat to open innovation. Chief among these is a requirement that developers implement “administrative, technical, and physical” protections to prevent the misuse or modification of their models for certain harmful purposes. Yet developers of open models have limited control over downstream experimentation. The benefit of open sourcing is that other developers or researchers can independently probe, optimize, and build on their models to create exciting new applications.

Similarly, the bill requires developers to “accurately and reliably attribut[e]” the actions of an AI system back to its underlying model, which demands a level of surveillance inconsistent with open release. Open sourcing enables anyone to integrate and deploy raw technology, such as models, in useful systems — say, software copilots, financial analysis tools, or personal assistants — without exposing their private data to a third party. Open model developers cannot be expected to track this activity. Tracing model outputs is akin to asking a paper company to monitor what its customers choose to write or print.

Indeed, earlier drafts of the bill insisted that models have a "full shutdown" capability. With over eight million downloads of Meta's open Llama 3 model in the past month alone, no developer could shut down or rescind digital files held by millions of people. That proliferation is a good thing, as it means more developers and businesses are learning how to test, improve, and safely deploy AI in their own domains.

These obligations might not be so alarming if they were limited to catastrophic risks, such as whether a model can enable the creation of weapons of mass destruction. However, the bill also covers an array of incremental and cumulative harms. For example, it requires developers to guarantee that a model, even with ten million dollars of modification by sophisticated bad actors, cannot be misused to cause economic losses totaling five hundred million dollars. This sets an impossibly high standard; nearly every digital technology can be implicated in these kinds of harm, from software libraries used in malware to messenger apps used for fraud. Developers will not be able to make these assurances without restricting access to models altogether.

To avoid the most extreme outcomes, SB 1047 relies on discretionary enforcement by the state Attorney General using hazy "reasonable" and "appropriate" standards. However, vague standards will make it difficult for developers to determine compliance. What those standards mean in practice will be decided by a jury, and only at trial. The resulting uncertainty and threat of regulation by litigation will mean that few, if any, developers choose to release open models.

The bill's author, Senator Scott Wiener, stresses that this bill will only affect future models from a small cluster of firms, but the recent history of AI shows that goalposts move quickly. The computing power and financial resources used to train large models are growing severalfold each year. Soon, there may be several benign models that fall within scope but cannot be released openly while satisfying this bill. That will devastate the vast downstream community of developers, researchers, and startups who rely on these models to drive their next breakthrough or build their next business – and it will leave the United States reliant on a handful of firms for paywalled AI technology.

While Senator Wiener maintains that the bill “does nothing to stifle the power of open-source,” SB 1047 requires the impossible from open-source developers. Indeed, no language in the bill expressly prohibits open technology. But that is precisely why proposals like SB 1047 are so troubling: legislation can restrict open innovation indirectly without attracting the same scrutiny as proposals that do so explicitly. The EU spent three years and thousands of amendments trying to balance effective AI oversight with open innovation. On its current trajectory, California's legislature will have taken less than six months to bring open innovation to a grinding halt.

California can improve AI oversight without stifling open technology, and there is still time to refine SB 1047, if not in this session then the next. With targeted adjustments to a handful of provisions, California can regulate AI while preserving the culture of open innovation that underpins its AI ecosystem.
