Perspective

The G7 Summit Missed an Opportunity for Progress on Global AI Governance

Afek Shamir, Paul Khullar / Jun 23, 2025

G7 attendees pictured in Kananaskis, Canada on June 17, 2025 (X)

As G7 leaders convened in Alberta, Canada last week, the agenda encompassed pressing geopolitical developments in the Middle East and Ukraine, along with trade and energy security. But largely absent from discussions was the governance of leading artificial intelligence models and systems, a notable omission given that the G7 previously pioneered the voluntary guidelines that leading AI companies agreed to report against through the Hiroshima AI Process’s Code of Conduct.

The relatively limited consideration of frontier AI safety reflects a broader global shift in how AI governance is conceived. Whereas the past couple of years brought AI safety front and center, yielding the Bletchley Declaration signed in the United Kingdom, the European Union’s AI Act, and the voluntary frontier safety commitments that leading AI companies agreed to in Seoul, this year has ushered in a new paradigm centered on AI innovation, sovereignty, and tech competition.

This year alone, the EU withdrew its proposed AI Liability Directive and continues to mull postponing enforcement of parts of its AI Act. The UK’s and US’s AI Safety Institutes (AISIs) received a branding makeover: they are now the UK AI Security Institute and the US Center for AI Standards and Innovation, respectively. Finally, the US shelved its Biden-era AI risk management framework and rescinded the AI Diffusion Framework, which sorted the world into three tiers of chip recipients.

The G7 summit echoed a similar story. The summit’s key AI deliverable, a “Leaders’ statement on AI for prosperity”, primarily emphasizes shared economic opportunities, repeatedly invoking the words “growth”, “prosperity”, and “competitiveness”. The flagship initiatives are a “GovAI Grand Challenge” to speed up government adoption of AI and a G7 AI Adoption Roadmap for small and medium-sized enterprises (SMEs). A bilateral agreement between Canada and the UK includes a partnership between the two countries’ AI safety institutes, as well as a memorandum of understanding (MOU) with Canadian AI firm Cohere.

Perhaps this broader retreat on governance reflects the disruptive speed of AI capabilities. The models released in the past year (GPT-4.5, Gemini 2.5, Claude Opus 4, and others) possess reasoning capabilities that would have seemed implausible just two years ago. Inference-time compute scaling has enabled AI systems to solve complex problems previously thought to require human-level intelligence. And the nonprofit research outfit METR finds that the length of tasks generalist autonomous AI agents can complete has been doubling approximately every seven months. Hence, as AI systems grow more capable, countries increasingly compete over how best to extract value from this disruptive technology.
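To make the pace of that trend concrete, here is a minimal back-of-the-envelope sketch, assuming METR’s roughly seven-month doubling time holds steady; the 60-minute baseline task horizon is a hypothetical chosen for illustration, not a METR figure.

```python
# Illustrative compounding of METR's reported trend: the task horizon of
# generalist autonomous agents doubling roughly every seven months.
# The 60-minute baseline is hypothetical, used only for illustration.

DOUBLING_MONTHS = 7.0  # approximate doubling period reported by METR

def projected_horizon(baseline_minutes: float, months_ahead: float) -> float:
    """Task horizon after steady doubling every DOUBLING_MONTHS months."""
    return baseline_minutes * 2 ** (months_ahead / DOUBLING_MONTHS)

for months in (12, 24, 36):
    minutes = projected_horizon(60, months)
    print(f"after {months} months: ~{minutes:,.0f} min (~{minutes / 60:.1f} h)")
```

Under these assumptions, a seven-month doubling time compounds to roughly a tenfold increase in task horizon over two years, and more than thirtyfold over three.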

Meanwhile, the infrastructure buildout is equally seismic. Gigawatt-scale data centers, such as the United Arab Emirates’ planned one-gigawatt (GW) “Stargate” AI campus in Abu Dhabi, are rising on multiple continents. Research by the RAND Corporation finds that AI data centers alone may require an extra 10 GW of power capacity this year and up to 68 GW globally by 2027. Countries that lack advanced AI infrastructure risk becoming digitally dependent on those that do.

All the while, some Nobel laureates, CEOs of leading AI companies, and AI scientists continue to warn that artificial general intelligence (AGI), the much-debated point at which AI systems could match or exceed human cognitive abilities across diverse domains, is on the horizon.

This acceleration of capabilities may explain why governments are abandoning multilateral safety frameworks in favor of national AI policies, like the UK’s AI Opportunities Action Plan, the US’s AI Action Plan, and the EU’s AI Continent Action Plan. Yet this pivot towards innovation- and adoption-first policies, while aiming to unlock the mass benefits of integrating AI into the economy, increasingly misses synergies among allies that could ensure the technology itself is safe and secure. The same AI capabilities that promise breakthroughs in medicine, robotics, and education also pose risks that no G7 nation can manage alone.

Consider the security challenges emerging from advanced AI systems: model weights that, if stolen, could rob states and companies of years of research and development; massive data centers that present targets for cyber and physical attacks; and AI systems that may already be capable of accelerating the development and misuse of chemical, biological, radiological, and nuclear (CBRN) weapons. Leading scientists also warn of loss-of-control risks that could emerge if advanced AI systems become capable of diverging from human goals and values.

The security implications are materializing quickly. Anthropic’s recent decision to classify its Claude Opus 4 model under its second-highest internal safety classification, following unexpected capability gains during development, highlights that companies are grappling with models that may cross dangerous capability thresholds.

G7 nations, which largely control the AI value chain, are well-positioned to address these challenges through coordinated action. Unlike other multilateral forums, the G7 combines the world’s leading AI developers and adopters with established security cooperation mechanisms. Securing AI supply chains and coordinating responses to AI-enabled threats through the AISI network are areas where cooperation serves everyone’s interests, from Canada to the US to Japan.

The low-hanging fruit for G7 nations is substantial: joint research on evaluating capabilities and threat models, coordinated disclosure processes for AI companies that build on the existing Code of Conduct, shared intelligence on AI-enabled attacks, and collaborative development of export controls, among other areas of mutually beneficial cooperation. These initiatives do not require nations to sacrifice competitive advantage; rather, they enhance it by enabling more secure AI adoption among allies.

The Canadian G7 summit thus highlighted ongoing gaps in AI coordination. In an era where highly capable AI systems can be trained in one country, deployed in another, and potentially cause damage in a third, purely national approaches to AI governance may struggle to address cross-border risks.


Authors

Afek Shamir
Afek Shamir is an analyst at RAND Europe and a fellow at RAND’s Technology and Security Policy Center, where he primarily focuses on European AI policy and the geopolitics of AI. He is an alumnus of the Talos Fellowship and has previously worked at the Tony Blair Institute and Pour Demain. He holds ...
Paul Khullar
Paul Khullar is an analyst at RAND Europe. Working primarily in the Science and Emerging Technology team, his focus is on AI and emerging tech policy and quantitative research methods. Prior to working at RAND, he worked as a researcher at the Alan Turing Institute. He holds an M.Sc. in artificial i...
