Time for California to Act on Algorithmic Discrimination

Evelina Ayrapetyan / Sep 30, 2024

President Joe Biden hosts a meeting on Artificial Intelligence, including California Governor Gavin Newsom, on June 20, 2023, at The Fairmont Hotel in San Francisco. (Official White House Photo by Adam Schultz)

With much of the national spotlight on SB1047, California's high-profile bill to regulate generative AI (GenAI) models, other critical AI legislation has largely flown under the radar. Bills like AB2013, which establishes transparency obligations for GenAI systems; SB896, requiring state agencies to include disclaimers when using GenAI to communicate with the public; AB2839, banning the use of deepfakes in election communications close to Election Day; and AB2655, which combats online disinformation by mandating that large platforms label or remove deceptive election-related content during specified periods, are equally significant but less discussed. AB2839 and AB2655 were signed into law by Governor Newsom and took effect immediately. With time running out, Governor Newsom signed AB2013 into law just before the September 30 deadline.

AB2013, authored by Assemblymember Jacqui Irwin, passed unanimously in both the California Senate and Assembly (38-0 and 75-0, respectively). The law will require GenAI developers to disclose summaries of the datasets used to train, test, and validate their models, a step aimed at enhancing transparency and holding developers accountable. Initially, AB2013 sought to create transparency for all AI systems, including GenAI and Automated Decision-Making Technology (ADMT). However, the bill's final version focuses solely on GenAI, stripping out crucial transparency requirements for ADMT.

This change is significant because ADMT, not GenAI, is currently being used to determine access to education, housing, credit, and employment for millions of Americans. Limiting AB2013’s scope to GenAI weakens its potential, especially in high-risk applications. The Biden Administration's 2022 Blueprint for an AI Bill of Rights emphasizes that algorithms often replicate and exacerbate existing inequalities, introducing harmful bias and discrimination. As the blueprint states: “These outcomes are deeply harmful - but they are not inevitable.” Transparency is key to holding developers accountable and safeguarding the rights of all Americans.

In contrast, SB1047 focuses on hypothetical future harms from GenAI, such as cyberattacks on critical infrastructure or the development of chemical, nuclear, or biological weapons. Critics like Fei-Fei Li, often called the "godmother of AI," argue that the risks SB1047 addresses are still largely speculative, warning that such legislation could harm the AI ecosystem by focusing on unproven threats rather than pressing issues. Others suggest SB1047 is based on an "illusion of existential risk" posed by GenAI.

The harms of ADMT, however, are already well known.

While SB1047 seeks to prevent a dystopian future, AB2013, in its original form, aimed to create safeguards to prevent discriminatory practices across high-risk applications. These systems aren’t being wielded by rogue adversaries - they are embedded in institutions we interact with every day. As Vice President Kamala Harris noted at the 2023 AI Safety Summit:

“[In addition to global threats there are] threats that are currently causing harm and which, to many people, also feel existential. When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him? When a woman is threatened by an abusive partner with explicit, deepfake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family?”

At a 2023 House Committee on Oversight and Accountability hearing, Merve Hickok, President of the Center for AI and Digital Policy (CAIDP), highlighted the discriminatory impact of high-risk AI systems: “These systems replicate existing biases in datasets as well as the choices of their developers, resulting in discriminatory decisions that disproportionately affect marginalized groups.”

In July 2024, I testified at a California Civil Rights Council hearing in support of AB2930, a bill that sought to prevent algorithmic discrimination in employment and other consequential decision-making areas. Though the bill was defeated, addressing the issue remains urgent. AI is rapidly reshaping the employment landscape, and without the proper safeguards, it will amplify existing biases and harm historically marginalized communities.

Critical issues like biased decision-making in ADMT are exactly what SB1047 does not address. The original version of AB2013 did. So why was its scope narrowed to exclude ADMT? It's worth noting that another California AI bill, AB331 (2023), which specifically aimed to regulate automated decision tools, died in the legislature in early 2024. But once legislators narrowed AB2013 to apply only to GenAI, it easily passed the state Assembly.

First, let’s clarify the distinction between ADMT and GenAI.

According to the California Privacy Protection Agency, ADMT is “any technology that processes personal information and uses computation to execute a decision, replace human decision-making, or substantially facilitate human decision-making.” Such technology can include software derived from machine learning, statistics, or other data-processing techniques. In other words, ADMT uses data to make consequential decisions in areas like housing, healthcare, and employment.

The problem is that the data used to train an ADMT, which shapes its outcomes, is often biased or incomplete, and biased data leads to biased outcomes. As the saying goes, "Garbage in, garbage out." For example, what happens when an algorithm trained primarily on data from white patients is used to diagnose a non-white patient? The result is a flawed system that not only perpetuates biases but amplifies them. And when such decisions lack transparency, public trust in AI erodes further.

GenAI, on the other hand, generates new content—text, images, or videos—based on the data it’s been trained on. While the potential existential risks of GenAI are widely debated, the risks posed by ADMT are more immediate and concrete. Some AI researchers estimate a 5% probability that AI will gain superintelligence by 2040 and pose an existential threat. But the risks from ADMT are already affecting people’s lives today.

So why has the California Legislature struggled to regulate ADMT technologies?

ADMT spans many sectors and societal contexts, which makes regulation difficult to design, and lawmakers often lack the information and expertise needed to understand the nuances of ADMT applications across such a diverse range of contexts. Moreover, California already has frameworks like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) that offer some degree of protection around data collection and use. However, these laws fall short of mandating transparency for ADMT decision-making processes. In contrast, the EU's General Data Protection Regulation (GDPR) gives individuals rights with respect to automated decision-making, including the right not to be subject to certain decisions based solely on automated processing. California must follow suit by creating transparency obligations for ADMT.

Public trust in AI is plummeting: by one measure, confidence in AI systems has dropped from 50% to 35%. This isn't a partisan issue; it reflects a deep-seated concern among Americans that AI systems, especially in employment, lending, and criminal justice, are harming people.

We are often presented with a false choice: either allow unfettered AI development or stifle innovation with regulation. But as Renee Cummings aptly put it during a discussion with All Tech is Human, “There is room for both innovation and regulation.” California’s Legislature missed the mark by stripping transparency obligations for ADMT from AB2013, which offered a sensible approach to regulation in its original form. The public has a right to understand how these systems make life-altering decisions.

As AB2013's author, Assemblymember Jacqui Irwin, stated: "To build consumer confidence, we need to start with the foundations—and for AI, that is the selection of training data." In its original form, AB2013 would have required AI developers to provide essential documentation about the data used to train their systems, including whether developers used synthetic data to fill gaps.

I applaud Governor Newsom for signing AB2013 into law and urge California legislators to build on it by creating regulatory frameworks for ADMT. By regulating ADMT, legislators can ensure fairness, transparency, and accountability in the systems that are already shaping our society. A comprehensive framework would address both present-day risks and emerging technologies, ensuring that all forms of AI, including ADMT, operate within ethical and legal boundaries that protect our civil liberties. The potential harm from unchecked AI is high, and policymakers must grasp the stakes not only for GenAI but also for ADMT.

Authors

Evelina Ayrapetyan
Evelina Ayrapetyan is a Research Fellow at the Center for AI and Digital Policy (CAIDP), where she recently launched the CAIDP California Affiliate to advocate for the safe development and deployment of emerging tech in the state. Evelina also serves as a Security Fellow with the Truman National Pro...
