After Paris: What Future AI Summits Must Learn from Climate Change and the COP Series
Jared Perlo / Feb 17, 2025
Paris, France - On February 10-11, 2025, France and India co-chaired the Paris AI Action Summit, hosted at the Grand Palais des Champs-Élysées.
Just over eight years ago, global policymakers convened in Paris to pass a landmark series of commitments to reduce greenhouse gas emissions with the Paris Climate Accord. Global leaders again descended on Paris a few days ago, this time to discuss the promise and perils of artificial intelligence (AI).
It is perhaps unsurprising that the eight-year-old Paris commitments on emissions, made during the Conference of the Parties (COP)'s 21st meeting, have failed to avert global warming. Greenhouse gases from human activities have accumulated since the start of the Industrial Revolution. Indeed, just this week, scientists found that the past twelve months have already exceeded the 1.5°C warming limit set by the Paris Agreement.
Meanwhile, AI is developing at a breakneck pace. Many in the industry promise that human-level general AI is potentially just a few years away, though the meaning of that threshold is contested and some argue it is not a useful ‘north star’ goal. Even with the necessary caveats, it seems clear that AI will revolutionize the world and could pose severe—some say catastrophic—threats to society if left unchecked. International leaders must learn from COP’s missteps to urgently shape the future of the AI Summit series, mitigating AI risks in a way the COP series never did for climate change.
Though this Paris convening was only the third installment of the AI Summit series, many observers remarked on this year’s AI Action Summit’s discord, rather than consensus, over the role of AI risks in international AI conversations. Whereas prior summits focused on narrow AI safety concerns among a few dozen key countries, this year’s gathering featured global representation and covered topics ranging from AI’s intense energy and water demands to the need for high-quality AI tools that embrace less-commonly-spoken languages. Experts, too, disagree sharply on the extent of AI’s risks. Renowned computer scientist and Turing Award winner Yoshua Bengio sees AI as a potential existential threat to humanity, while Meta’s Chief AI Scientist Yann LeCun recently called such assertions “complete B.S.”
The world’s first independent International AI Safety Report, released in January by the United Kingdom’s official AI Security Institute and crafted by 96 international AI experts, highlighted this uncertainty. Surveying AI risks ranging from increased chemical weapon attacks to loss of control of AI systems, the authors acknowledge that “some think that such risks are decades away, while others think that general-purpose AI could lead to societal-scale harm within the next few years.”
This lack of settled scientific consensus might seem like a key difference between AI’s development and climate change’s progression. By 2001, there was an extremely strong international scientific consensus that humans were the culprits behind climate change, even though media outlets and politicians continued to trumpet scientific uncertainty for decades afterward.
Crucially, the COP series of gatherings showcased nations’ desire to act on solidifying but, at the time, still unsettled scientific opinion. The COP meetings built on decades of prior efforts to raise awareness about and address climate change before strong consensus existed, starting with 1972’s Conference on the Human Environment in Stockholm. Decades of conferences and growing worries about climate change’s impact eventually led to the 1992 Earth Summit declaration in Rio de Janeiro, which created the COP mechanism to encourage stronger commitments and concrete action.
At the first COP in 1995, Angela Merkel—then Germany’s environment minister—noted key actors’ differences of opinion regarding responses to the climate change threat. Nevertheless, she said that countries must seek common ground and not simply “close their eyes” by choosing a path of inaction. This Berlin meeting helped pave the way for the legally binding Kyoto Protocol two years later, which mandated reductions in greenhouse gas emissions for industrialized countries.
Yet the Kyoto Protocol failed to stem global emissions. While many countries fell short of their emissions-reduction targets by the 2012 deadline, the United States’ failure to ratify the agreement at all essentially scuttled the entire project’s carbon-capping aims. COP discussions then limped along from road maps in Bali (COP13) to non-binding declarations at Copenhagen (COP15) to platforms for enhanced action in Durban (COP17), eventually leading to the 2015 Paris Agreement.
There was strong global momentum to produce binding commitments at COP1—countries explicitly rejected prior efforts to fight climate change as inadequate, even though strong scientific consensus was years away. Global leaders would be wise to adopt the same mindset today with AI’s development, taking action to prevent future harms even as scientific consensus develops on exactly how and when AI risks will manifest.
As the US and China increasingly race to develop frontier AI models and even human-level AI, their strained relations also echo tensions that permeated the COP process. China signed the Kyoto Protocol but was not obligated to reduce its emissions due to the agreement’s exemptions for developing countries. This exception raised domestic American backlash, as US politicians quickly realized that signing the Protocol and self-imposing emissions caps would severely hamstring America’s economic competitiveness if China could freely fuel its growing economy with coal.
As a result, the US has targeted China’s claim to the ‘developing country’ label for years, culminating with the House of Representatives’ unanimous passage of the suitably-named “PRC Is Not a Developing Country Act” in 2023. Both the US and China would do well to learn from this crucial missed Kyoto opportunity, in which overly optimistic negotiators failed to properly account for domestic political appetite for ratification.
Of course, the analogy between the AI Summit series and the COP gatherings is not one-to-one. The disagreement over China’s classification as a developing country has not involved the same kind of cutthroat and jingoistic national security concerns that feature in the emerging AI arms race. Similarly, climate negotiations offer no direct parallel for the concentration of state-of-the-art compute resources: frontier AI development is controlled by just a few nations, primarily the US and China. It seems unlikely that India, Russia, or Brazil will ever achieve AI dominance in the way that they contribute to greenhouse gas emissions.
Nonetheless, the failure of the COP series to meaningfully limit global emissions should send a resounding wake-up call to policymakers regarding the need for global AI agreements. It took 21 years of COP gatherings for global policymakers to sign the Paris Agreement on climate change—a meager pact whose core emissions-reduction stipulations are non-binding and that “even at the time of negotiation…was recognized as not being enough.” Given this bleak precedent for speedy and effective international action, it will be critical to set an AI agreement in motion at the next Summit to at least have a chance at ironing out meaningful and practical details during future negotiations.
Of course, one might argue that the plodding pace of international efforts on climate change mitigation merely matched global warming’s slow-motion materialization. Setting aside that COP’s eventual agreements are still woefully inadequate to prevent global warming, this past week’s AI Summit moved away from needed and binding international agreements. Not only did the watery declaration produced in the Grand Palais go unsigned by the US and the UK, but the discourse was distinctly anti-regulation. International governance mechanisms simply do not move as quickly as AI will develop; policymakers must take action now (or yesterday, or ideally this past Tuesday).
So, at the next Summit, policymakers must, at the very least, initiate serious discussions on binding, enforceable agreements to control, limit, or adapt to advanced AI systems. Leaders can outline the frameworks and even draft text that might be relevant when risks become real. Regardless of one’s personal take on the potential for AI risk, present uncertainty does not preclude significant action now. This proactive approach would help speed AI negotiations past COP’s shortcomings, avoiding the high-minded-but-fruitless talk of Berlin, the empty promises of Kyoto, and the too-little-too-late climate agreements in Paris.
Failure to act now will make meaningful future action all the more difficult, and having a playbook or agreement ready to go in case (or when) severe AI risks become concrete is more crucial now than ever. When it comes to AI, policymakers simply do not have time to retrace COP’s lurching and lackluster journey.