A Roadmap for Regulating High-Risk AI Under Existing US Law

Rachael Klarman, Adam Conner / Jul 1, 2024

Rachael Klarman is Executive Director at Governing for Impact. Adam Conner is Vice President for Technology Policy at the Center for American Progress.

Image: Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

If there’s one thing everyone seems to agree on when it comes to confronting the foreseeable harms of AI, it’s that we should start by vigorously enforcing the laws on the books.

But what exactly are those existing statutes, and how can they be put to maximum use to meet the urgent challenges at hand? Efforts to map this landscape beyond the laudable Executive Order the White House issued last year have been lacking, leaving regulators in the dark as key tools in their arsenal sit idle.

Until now.

Our organizations – the Center for American Progress (CAP) and Governing for Impact (GFI) – have spent months working to answer this question, culminating in a report, released in June, that identifies those untapped statutory authorities. It turns out there is a lot the federal government can do to mitigate significant AI risks, even if Congress is unable to pass new legislation designed to address the current frenzy.

For example, the White House’s Office of Management and Budget could impose a wide range of binding AI-related obligations and worker protections on federal contractors, which collectively employ roughly one-fifth of all US workers. The office could require these companies to subject all automated employment systems to strong transparency regimes – along with pre-market testing and ongoing evaluation – to guarantee workers’ rights to health, safety, privacy, fair compensation, organizing power, and nondiscrimination.

And that’s just one of the more than 80 executive actions identified in our new report that could be taken under current law.

Other powers at the federal government’s disposal include enhancing protections for workers’ health and retirement benefits. Agencies could require affirmative disclosure and a plain-language description of any AI system involved in a benefits determination, guarantee the right to an appeal heard by a human if a claim is denied, and expand protections against sudden termination at the hands of algorithmic management tools.

They could drastically expand US preparedness for plausibly foreseeable AI-related national emergencies, exhaustively outlining likely scenarios, the statutorily authorized tools at the government’s disposal to counter these threats – which could include freezing assets or restricting transactions associated with AI technologies contributing to a crisis – and the criteria that would trigger such actions.

They could begin the standard-setting process to regulate the use of electronic surveillance and automated management (ESAM) in the workplace to the extent that it creates hazards to workers’ physical and mental safety and health, and require purveyors and users of workplace surveillance technologies to comply with the Fair Credit Reporting Act.

They could require credit reporting agencies to disclose whether and to what extent AI was involved in formulating reports and scores, and mandate that financial institutions adopt reasonable AI safeguards, including minimum risk management practices for high-impact AI systems such as red-teaming, auditing, and ensuring decisions are explainable.

They could designate leading cloud service providers – like Amazon, Microsoft, and Google – as “systemically important financial market utilities” under the Dodd-Frank Act passed after the 2008 financial crisis. This would subject those companies to supervision and regulation by the Federal Reserve, in recognition of the outsized influence they now hold over the stability of the entire US financial system – influence that continues to grow rapidly as an explosion of new AI products and services is built on top of their expanding infrastructure.

Again, we’re just scratching the surface of the existing powers the federal government possesses to address AI harms, as documented in our new report. And that’s good, because it’s increasingly clear that waiting on Congress to meaningfully regulate the tech industry – something it has consistently failed to do – is a fool’s errand.

Are the executive actions we outline sufficient, in and of themselves, to contend with this unprecedented moment? There is little doubt that they are not. But – unlike so many proposals out there – they are both substantial and immediately actionable; they would significantly alter the course forward for AI and protect people from preventable harms.

We don’t need to accumulate more proof points of the societal damage that can occur when we fail to erect safeguards on cutting-edge technology; we have two-plus decades of evidence. We have watched as the darlings of previous tech revolutions have gone from disruptive upstarts to innovation-stifling monopolists; as platforms that promised to serve as great democratizers have been weaponized as instruments of surveillance and oppression.

We don’t need any more painful evidence; indeed, we cannot afford it. What we need is swift and material action – action that is already authorized under existing law. And our report offers a blueprint for federal agencies to do exactly that.
