The End Of The Beginning For AI Policy

Anton Leicht / Dec 3, 2024

Elizabeth Kelly, Director of the US AI Safety Institute, speaks at a gathering of the International Network of AI Safety Institutes in San Francisco, California, held November 20-21, 2024.

When OpenAI’s ChatGPT was released to substantial public and political attention in late 2022, policy organizations concerned with risks from frontier AI saw a policy window opening and a chance to make lasting political inroads, and they put in a lot of work to use it.

Looking back at almost two years of that effort, it is starting to look like the policy window was too small for the ambitions thrown at it. Assessing today’s global landscape of regulations aimed at mitigating frontier AI risk reveals outright failures and precarious achievements that face increasing public and political opposition. Moving from west to east:

  • California’s SB 1047 failed, leaving behind a galvanized opposition to AI safety policy.
  • The Biden administration’s landmark executive order on AI safety faces peril, and with it, the power and reach of the AI Safety Institute, as safety advocates failed to insulate their policy achievement against adversarial politicization.
  • The UK stands as an exception, with 2023’s Tory commitments to safety seemingly being mostly upheld by the new Labour government.
  • The EU AI Act has been passed, and ongoing efforts are trying to cement its impact on more advanced AI – but as Europe’s role in AI wanes, its regulatory reach remains doubtful, and it has exhausted a lot of political goodwill while remaining very unpopular in the member states most relevant to AI.
  • Lastly, the AI Act process seems to have left behind a French government that has made interfering with safety efforts a central priority. (The political dynamics I comment on here mostly do not extend east- and southward of Europe, but developments elsewhere often suggest that governments there, too, may be more interested in exploiting AI than in containing its risks.)

On the road to these policy outcomes, a lot of awareness has been raised, a lot of reports have been written, and a lot of good conversations have been had. But for all the stakeholder engagement, it is not obvious that the chances of passing genuinely comprehensive safety-focused legislation with the potential to reduce outsized risks by, say, 2028, are much higher than they would have been without these policy pushes. Increased salience of the issue is a benefit, but many trajectories toward seriously dangerous AI presumably come with intermediate medium-scale harms that would raise awareness anyway. On the other hand, an organized and well-funded opposition, pre-existing political damage to leading figures and regulatory institutions, and more readily apparent downsides all hurt advocacy.

This raises the question: could AI safety advocates have played this period differently? Some might contest the premise: maybe I overstate the extent of political opposition or underrate the endurance and efficacy of existing frameworks. But even then, there may be valuable lessons to learn as the first episode of regulating frontier AI draws to a close and the policy debate fractures into established arenas from defense to economic policy. In particular, a closer look at three issues seems prudent: regulating future technologies, forging political alliances, and building resilient institutions.

Regulating What Does Not Yet Exist

The policy window that opened in 2022 arrived before most truly harmful large-scale capabilities had substantially manifested, and that timing invited attempts to pass regulation aimed at technologies that do not yet exist: advanced language models capable of assisting large-scale cyberattacks, general or specialized models lowering barriers of access to bioterrorism, or more independently agentic systems able to deceive or escape their human users. To be clear, plenty of strong evidence suggests that these risks are real. But it is much less clear that this means we ought to do something about them now, or advocate for doing so. Of course, given the scale of the risk, foresight seems prudent and has reasonable precedent. But speculative regulatory attempts have proven treacherous under the specific circumstances of frontier AI regulation.

First, they risk a ‘boy who cried wolf’ effect. To motivate the kind of restrictive regulation (surveilling supply chains, by-default restrictions on training runs, and so on) that might meaningfully affect future risks even now, a vivid motivating picture has to be painted, drawing on speculation about the risks and the speed at which they could arise. Policy organizations usually hedge carefully in their direct communication, pointing out that any one risk might be unlikely to occur but still worthwhile to address. But filtered through quickfire meetings and media reception, the dire scenarios are what stick, and many of them are not coming to pass. In the lead-up to the next generation of frontier AI regulation, these past warnings could quickly become a burden, leading decision-makers to disbelieve newer, more concrete, and more pressing warnings. One such tension arose around AI-powered pathogen development, an attractive political hook for making AI risk tangible to policymakers. The related advocacy left some decision-makers with a much more urgent sense of that particular risk than was ultimately justified, so when more empirical evidence surfaced suggesting limited current biological AI risks, the credibility of the safety advocates who had leaned into featuring these risks suffered.

Second, attempts to regulate future technologies necessarily frontload costs and delay benefits. The perceived limitations come hard and fast: reporting obligations, new paperwork and bureaucracy, perceived barriers on the way to the top that disincentivize new attempts to innovate, and the broader impression of ‘yet another regulation.’ But precisely because it aims at future risks, a safety-focused policy is unlikely to make anyone’s near-term life tangibly better once passed. This is a recipe for unpopular policies, and it is fertile ground for mounting broad opposition, particularly amid the prevailing anti-bureaucracy sentiment in the US and Europe.

Political Entanglements

The movement for safe AI policy has repeatedly entered into political alliances and acquired partisan perceptions, sometimes by choice and sometimes by stumbling into them. This has led to the kind of politicization that has endangered both past policy successes and the future uptake of similar arguments.

The first part of this politicization is the external perception it confers on supporters of safety policy. This is perhaps most apparent in Europe, where the landmark piece of frontier AI regulation is also a behemoth bill containing major concessions to a broad civil society coalition, concessions that are fairly unpopular among more right-wing parties and industry. Safety advocates were primarily pushing for the valuable inclusion of general-purpose AI regulation in the AI Act, but in trying to save the entire bill from failing at the last minute, they found themselves aligned with the broader coalition. Disentangling that contribution from the political baggage of these allies will be difficult: the ‘decel’ moniker is prone to stick, and it is prone to hurt in a European political environment full of concerns about economic and technological competitiveness. Very recently, the US has seen a similar effect, with critics of AI safety calling out civil society alliances. The alliance mobilized in support of SB 1047 runs similar risks: many institutions that can easily be cast as shallowly motivated by their own interest in slowing AI adoption, such as the SAG-AFTRA union, endorsed the bill, though the vocal, earnest supporters from across the political spectrum make the Californian case a bit less clear cut. The inverse has also repeatedly happened: extensive evaluation requirements that aligned with the safety agendas of leading AI developers were cast as regulatory capture by opponents, making safety-focused policy advocacy the object of much civil society criticism.

The second part is the emergence of anti-safety-regulation coalitions. Policy proposals with thorny elements like enforcement and obligations invite some genuine opposition and provide fertile ground for motivated, more general anti-regulatory opposition far beyond it. In California, for instance, the vocal critics of SB 1047, from open-source-friendly academia to non-incumbent industry, have largely maintained their strong opposition despite substantive concessions, and I suspect they will revive it at a moment’s notice once California or the US sees the next push for safety-focused legislation. This opposition will likely be more effective next time around: its members know each other, they will not need time to assemble and rally again, and they know which lines of attack to go for.

Both the politicization of the issue and the pitfalls of regulating hypotheticals seem unavoidable if the goal is to actually change policy through advocacy during that first policy window. But if you agree that these two trends may have substantially hindered future policy, that suggests advocacy might have been better timed, for example at a moment when the risks were already more tangible and attempts at adversarial politicization much harder. This is, of course, predicated on the assumption that the transition to greater and greater risks will go fairly smoothly, with no discontinuous jumps that preclude big warning shots. Some serious scholars dispute that assumption, and I agree that if they are right, the view articulated in this piece is less persuasive.

Building Institutions

What could have been done instead of the costly and politicized attempts to push binding regulation early? I believe the success cases of the AI Safety Institutes are the major positive lesson to draw from the last two years. Their establishment comes with almost none of the costs outlined above while providing many of the benefits that regulation was meant to achieve.

First, these institutions have proven remarkably effective at building agency and expertise, as acknowledged by a broad range of stakeholders. Second, they have done so while largely avoiding the partisan entanglement that has plagued other policy efforts. Where that is not the case, it is usually downstream of these institutions being tasked with enacting more controversial pieces of regulation, as when criticism of the AI Act falls back onto the EU AI Office, or of their being established in the context of other, more contentious policy, as in the case of the US AI Safety Institute. The political economy of establishing these institutions has also proven far more favorable than that of passing regulation. While comprehensive regulation draws organized opposition from affected industries and interest groups, institutions face much less resistance in their founding phase, with most concerns centering on specific mandates rather than their core mission.

Once set up, these institutions can build the long-term credibility crucial for future policy efforts. When risks become more concrete and the next policy windows open, recommendations from established government institutions will carry a lot of weight. In that way, institution building has delivered on objectives very similar to those of ‘raising awareness’ and having good conversations, but with far fewer downstream vulnerabilities.

For all their merits, these institutions remain vulnerable to the risks above, risks mostly imposed by policy that goes beyond institution building. Focusing even further on building, funding, codifying, and safeguarding these institutions might have been a more prudent use of the first policy window: creating capabilities and credibility that could be leveraged when the next one opens, rather than saddling valuable institutions with vulnerabilities and depleting political capital on premature regulatory attempts.

Has AI Policy Jumped the Gun?

Has safety-focused AI policy jumped the gun? It depends on what the target was. If the target was to immediately force through the kind of regulation that would ultimately be necessary to address the risks advocates are truly concerned about, say because they believed this was the only shot, then no. If anything, they arguably should have gone further all in, because the burst of policy engagement has so far failed to set up sufficiently binding and lasting measures. If the target was to establish a strong position in the policy discussions to come as the next windows for reform draw near, then yes, it might have jumped the gun: going in too hard and fast, antagonizing important stakeholders for ultimately insufficient policy gain, introducing sticky perceptions of politically inconvenient trade-offs, and bleeding important credibility. A more defensive approach to this first episode of frontier AI policy might have been more prudent. Future phases of AI policy, and analogous early stages of governing other emerging technologies, could take note: choose the moment to politicize the issue carefully.

Authors

Anton Leicht
Anton Leicht is a doctoral researcher working on democratic alignment of advanced AI. He also works as a policy specialist with KIRA, an independent AI policy think tank. He has a background in economic and technology policy.
