California Is Getting Its ‘AI Act’ Together
Drew Liebert, David Evan Harris / Oct 17, 2025
Drew Liebert and David Evan Harris are the director and senior policy advisor, respectively, of the California Initiative for Technology and Democracy (CITED), a project of California Common Cause.

The Assembly Chamber at the California State Capitol. Ben Franske / Wikimedia / CC BY-SA 3.0
While Washington remains gridlocked on technology policy, California is charting its own course. Earlier this week, Governor Newsom signed critically important AI legislation on transparency and child protection. In Silicon Valley’s home state, legislators this year have shown that progress, albeit incremental in the face of furious opposition from some in the tech industry, is still possible. But it’s not time for a victory lap yet; there’s still more work ahead in 2026.
In an ideal world, Congress would establish a strong national framework to balance innovation with accountability. But federal inaction — plus misguided efforts in Washington to override state laws that protect platform users — has left that vision out of reach.
California’s intervention isn’t defiance; it’s necessity. Our state’s residents and our democracy cannot wait for a federal standard that may never come. When inaction leaves people – especially kids – vulnerable to deepfakes, data exploitation, scams, and algorithmic discrimination, state leadership must fill the void.
This year, Sacramento took a number of important steps toward doing just that. While none of these laws are a panacea, together they demonstrate that democratic institutions can still reasonably govern technology in the public interest without impeding the nation’s innovation leadership.
Tired of explaining to your relatives that the Facebook video they shared or the scandalous image they saw on X is AI-generated? One new law, the California AI Transparency Act of 2025, which we at the California Initiative for Technology and Democracy proudly sponsored, brings much-needed transparency to fake content online. This measure, passed through the tenacious stewardship of Assemblymember Buffy Wicks (D), will require large social media platforms, messaging apps, and search engines to identify AI-generated content, and will require smartphone and camera manufacturers to let users embed digital provenance data into images and videos. These tools will help people determine the authenticity and nature of the content they see in an era of AI-created deception and deepfakes.
Another landmark law, Senator Scott Wiener’s (D) SB 53, introduces baseline safety and transparency standards for the most powerful AI systems. It’s an initial but important effort to prevent catastrophic misuse and to protect whistleblowers who have the courage to come forward with safety disclosures.
New laws also target more immediate harms by regulating AI chatbots that have encouraged self-harm among minors and mandating mental-health warnings on social media platforms. Others clarify that companies can’t escape responsibility for harmful algorithms or harassment that their systems amplify, and create easier ways for users to opt out of the sale of their data.
Collectively, these pragmatic reforms focus on giving individuals more control and ensuring technology companies begin to bear some measure of responsibility for their products.
However, much more progress is needed if tech giants are to be held accountable for what they create. None of these measures begins to match either the breadth of the European Union’s AI Act or the comprehensive protections advocates have long sought for users.
One particularly worrisome gap involves location privacy. The Legislature sadly fell short of outlawing the sale and misuse of precise geolocation data, a practice that allows anyone, including government agencies like ICE, to track our movements to workplaces, places of worship, marches, and political events.
Another policy gap involves algorithmic fairness. A proposal that would have required companies to assess and disclose how automated systems affect decisions in housing, employment, and credit was shelved for the year. Without such safeguards, digital redlining and algorithmic bias remain unchecked in an era when AI already threatens the livelihoods of potentially millions of workers.
And while the Legislature made important inroads on online safety for kids, it stopped short of creating real financial accountability when platforms harm children. The time for further urgent legislative action to protect children is now, before more of our kids are harmed.
If California is to truly lead, next year’s legislative session will be the time to tackle these unresolved, complex, and urgent consumer, child, and worker protection issues. Each represents not just a policy challenge but a test of whether democracy can keep pace, or even begin to catch up, with ever-changing technology.
Skeptics warn that regulation will slow innovation. The history of automobile safety shows just the opposite: laws requiring seat belts, air bags, and speed limits allow us to drive where we need to go while getting there safely. Innovation depends on consumer trust, and trust is built on transparency and accountability. Clear and fair rules can spur innovation, create new jobs, and provide the structure and legitimacy we need to enable responsible progress.
California’s efforts in 2025 do not yet merit a victory lap, but they are important strides off the starting line in a race we can’t afford to lose. To truly safeguard democracy in the digital age, the Golden State must now build on the recent legislative session with ambition and urgency equal to the critical stakes.