Building an AI Superfund: Lessons from Climate Change Legislation

Kevin Frazier / Oct 10, 2024

Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

In an ideal world, AI will not lead to any catastrophic harm. In a world in which once-in-a-lifetime events seem to occur on an annual basis, however, it would be irresponsible not to prepare for some social, economic, and political chaos brought on by rapid and unexpected advances in AI. A climate change cost recovery law in Vermont may serve as a model for how to ensure state governments are ready for some AI worst-case scenarios while also holding the responsible AI labs, and the companies hurriedly adopting AI, accountable for those bad outcomes.

Climate Cost Recovery Legislation

Cost recovery laws share a common, straightforward rationale: if a corporation causes widespread damages, then that corporation should be on the hook for efforts to mitigate and adjust to those damages. In the context of climate change, the Vermont Climate Superfund Act reasons that companies known to have emitted extreme amounts of greenhouse gases (upwards of 1 billion metric tons) over a long period of time (about twenty years) surely had some role in causing the environmental chaos that has recently upended communities and drained state resources. Qualifying companies, or “responsible parties,” must pay their proportional share of the costs they have imposed on Vermont residents as a result of events like the Great Vermont Flood of 2023–a two-day period in July during which the state suffered significant and long-lasting damage from 3 to 9 inches of rain. In legalese, the Act holds companies strictly liable for sustained and significant damage to Vermont’s environment.

How to identify “responsible parties” and calculate their share of the bill for Vermont’s climate mitigation and adaptation strategies is an open question. Vermont is actively working with consultants to implement the Act’s provisions–establishing regulations, identifying qualifying corporations, and tallying climate-related damages. The hope is that by mid-January 2025 the state will know how much in damages it will seek from responsible parties, as well as the share of the costs each party will bear. Once the companies make their respective compensatory payments, Vermont plans to spend the money on projects “designed to respond to, avoid, moderate, repair, or adapt to negative impacts caused by climate change and to assist human and natural communities, households, and businesses in preparing for future climate-change-driven disruptions.”

Vermont is not alone in pursuing this kind of legislation. New York’s state legislature passed similar legislation–the bill awaits the governor’s approval. California, Maryland, and Massachusetts have considered related proposals. The popularity of cost recovery legislation taps into the commonsensical idea that the biggest contributors to a shared problem bear a larger responsibility for helping the community respond. As indicated by Vermont’s ongoing efforts to think through the actual implementation of the Act, however, this regulatory approach is easy to establish on paper but far harder to implement in practice. Companies will likely contest whatever payments Vermont tries to impose–resulting in months, if not years, of costly litigation. Several legal questions will have to be worked out, such as whether Vermont effectively enacted an ex post facto law–punishing corporations for what was then legal behavior–and whether the law violates the due process clauses of the Vermont and U.S. constitutions.

Cost Recovery in the AI Context

In the same way that climate cost recovery assigns financial responsibility to significant contributors to statewide problems, the likely harms associated with the rapid diffusion of ever-more sophisticated AI may merit a similar regulatory strategy. Put differently, because AI labs and the companies that adopt AI are knowingly creating costly public policy problems borne by society, they should bear some of the financial burden for the inevitable recovery effort.

These actors–the AI labs and large corporations–could slow or alter their development and use of AI, respectively, to minimize negative externalities. AI labs, for instance, could refrain from releasing a model likely to facilitate cyberattacks by bad actors. Large corporations, for their part, could decide to introduce AI into their operations slowly so as to prevent job loss on a rapid and significant scale. As in the climate context, if these entities instead opt to forge ahead despite known and likely risks, then they should not be let off the hook when those risks manifest as harms.

To imagine how an AI Superfund may work, it’s worth exploring some hypotheticals. Suppose that over the next five years there’s a meaningful uptick in cyberattacks. Assuming (and this is a big assumption) that some meaningful fraction of those attacks can be traced to advances in AI, then the developers of those models ought to receive some portion of the bill for public funds spent to recover from those attacks.

The apportionment may be easier than some think. As of now, a handful of labs have deployed the most advanced models. Google, Meta, Microsoft, and OpenAI account for 45 foundation models–sophisticated models intended for use in a variety of contexts and by a variety of actors, from individuals to massive corporations. A subset of those models, including OpenAI’s ChatGPT and Meta AI, each have more than 100 million monthly users. These technically advanced and popular models dominate the AI marketplace and, consequently, are likely contributors to any societal issues that may emerge from excessive reliance on AI tools.
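To make the apportionment logic concrete, here is a minimal sketch of one possible approach: assigning each lab a share of a state’s AI-related recovery costs in proportion to the usage of its models. The labs, usage figures, and cost total below are hypothetical placeholders for illustration only, not figures drawn from any actual proposal.

# Hypothetical sketch: splitting a state's AI-related recovery bill among labs
# in proportion to each lab's share of model usage. All numbers are illustrative
# placeholders, not real data.

total_recovery_cost = 500_000_000  # hypothetical public recovery bill, in dollars

usage_by_lab = {  # hypothetical monthly users attributable to each lab's models
    "Lab A": 180_000_000,
    "Lab B": 120_000_000,
    "Lab C": 60_000_000,
}

total_usage = sum(usage_by_lab.values())

for lab, usage in usage_by_lab.items():
    share = usage / total_usage  # proportional share of total usage
    print(f"{lab}: {share:.0%} of usage -> ${share * total_recovery_cost:,.0f}")

Usage share is only one plausible apportionment basis; Vermont’s climate law, by comparison, keys payments to cumulative emissions over a roughly twenty-year window, and any AI analogue would face similar questions about which metric and time period to use.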

There’s a strong argument to be made that only downstream developers or users of these models should be held accountable for their misuse of AI. Car manufacturers, for one, are not liable for accidents caused solely by drivers. However, those manufacturers are responsible for defective vehicles. This responsibility runs not only to the driver but also, in some cases, to individuals injured by the flawed vehicle. The thinking goes that they could and should have designed safer cars, so they have to clean up their mess by recalling those vehicles, stopping any further use of them, and making whole those who have been wronged by the defect. Similar thinking explains why AI labs should not dodge accountability for misuse of their models–if they cannot design models resilient to being used to create substantial social ills, then they should simply not deploy those models.

A different hypothetical explains why large corporations could also be on the hook for the public costs imposed by certain AI practices. Over a period of years, economists could feasibly tally unemployment and identify its causes with some precision. Presumably, AI adoption by companies will be a substantial cause of unemployment in the near future; some would argue it already is. The corporations that contributed most to that unemployment should bear related costs for helping those workers and communities get back on their feet.

No company is mandated to adopt AI as quickly as possible, nor to integrate AI into its operations in one fell swoop. An unchecked push for profit, though, may propel a corporation to do just that. In contrast, a corporation aware of the social costs of unemployment–including the costs of supporting families and investing in retraining programs–may refrain from such rash steps and adopt a more incremental approach, resulting in more economic security for individuals and more economic stability for communities. The latter kind of corporation–one that places people over profit–should not be punished. In fact, the opposite should be the case: corporations that lean into AI without also making proportional investments in the social safety net should be held accountable for the costs they impose on the public writ large.

Conclusion

This AI Superfund scheme, like the Climate Superfund in Vermont, is contingent on many factors. Both will only work in practice and under the law if the financial burden assigned to a relevant actor can be fairly and accurately determined. As noted above, this is a big if. Vermont is still working through how best to apportion the bill to polluters for the state’s disasters. Designing cost recovery laws in the AI context is a similarly complex task. Rather than attempt to outline all those details now, though, civil society and regulators should keep an eye on Vermont and the other states likely to pass climate cost recovery legislation. By watching the development of Vermont’s regulatory approach, policymakers in other states can pass AI cost recovery laws likely to survive similar legal challenges and implementation hurdles.

Let’s hope that cost recovery legislation is not needed to help states respond to AI-induced chaos. That hope, though, should be paired with responsible and proactive development of novel regulatory frameworks–cost recovery legislation may be a good place to start.

Authors

Kevin Frazier
Kevin Frazier is an Assistant Professor at St. Thomas University College of Law, a Director of the Center for Law and AI Risk, and a 2024 Tarbell Fellow. He joined the STU community following a clerkship on the Montana Supreme Court. A graduate of the Harvard Kennedy School and UC Berkeley School of...
