How Canada Can Advance International AI Governance at the G7
Matthew da Mota, Christo Hall, Emily Osborne / Jun 13, 2025

Canadian Prime Minister Mark Carney’s mandate letter outlining his agenda, together with the announcement of a new artificial intelligence ministry, has made clear that AI is a priority for the country in the coming years. But in the wake of the AI Action Summit in Paris this February, and amid trade wars and an AI race dynamic between the United States and China, the emphasis around the globe is increasingly on rapid adoption of the technology rather than on safety and regulation.
Amid those developments, Canada’s new Minister of AI, Evan Solomon, stated earlier this week that the nation will not go it alone in pursuing comprehensive domestic AI regulation. However, there is a strong case for Canada to use its G7 presidency to push for stronger international governance in a forum where headway has already been made.
As the G7 summit rolls into Canada this weekend, AI will be high on the agenda. The forum offers governments an opportunity to acknowledge the potential fallout from rapid deployment and to seek avenues that foreground safety without placing unilateral burdens on any single state.
One of the most direct impacts the summit could make on AI safety is to enhance the Hiroshima AI Process’s (HAIP) Code of Conduct and its newly launched Reporting Framework, two voluntary tools designed to increase the accountability of organizations developing advanced AI. In doing so, G7 leaders can continue making incremental progress on their commitment to make the HAIP an instrument of robust international AI governance.
AI and the G7
AI governance has been a central focus of the past two G7 summits. In 2023, the Japanese summit led to the Code of Conduct. The Code expands upon the OECD’s AI principles and urges organizations that develop and deploy advanced AI systems to follow eleven actions that promote responsible practices.
Then, last year’s Italian summit led to the voluntary Reporting Framework for the Code, for which companies are already submitting reports. The Reporting Framework is an extensive questionnaire designed to capture AI organizations’ safety and security incidents, the measures they take to identify, evaluate and mitigate safety and security risks, and their commitments to corporate responsibility.
While it is not perfect, no other transparency initiative to date offers the same structure and buy-in from states and companies, notably the United States and its market-leading frontier AI developers.
The HAIP, which has been extended beyond the G7 and operates in partnership with the OECD, has already grown larger than at its launch, but the G7 retains a stake in its future. Its current initiatives are progressive steps toward the Code of Conduct becoming an influential instrument of global AI governance. Under Canada’s presidency of this year’s summit, two key enhancements can help it get there.
Beyond voluntary compliance
When they adopted the HAIP’s Code of Conduct, G7 states agreed, in principle, to develop monitoring tools and mechanisms to “help organizations stay accountable” to its eleven actions, which cover risk management, information-sharing and security.
While the Reporting Framework is a valuable information-gathering tool, it does not offer an accountability framework for mitigating the most important AI risks and challenges. To adequately address those risks while seizing AI’s opportunities, compliance mechanisms informed by multiple stakeholders are necessary.
One way the G7 can develop effective accountability measures is by advocating for states to mandate submissions to the Reporting Framework for organizations developing and deploying AI within their jurisdiction. Alternatively, the G7 may advocate for states, some of which may wish to internalize the reports, to develop their own mandatory reporting framework that is interoperable with the HAIP’s.
Governments could indicate their commitment to the Reporting Framework by obliging domestic AI organizations to report on their risk management practices, either through legislation or via procurement and funding conditions.
Global alignment
The lack of an evaluation process means that if an organization fully completes the questionnaire, regardless of whether it submits incorrect facts or describes sub-par risk management practices, it will still be recognized under the HAIP brand. While the HAIP website clarifies in its frequently asked questions section that the brand is not an endorsement of an organization’s practices, it is undoubtedly an incentive for voluntary compliance.
To address those limitations, the G7 can establish a multi-stakeholder council to review and assess submissions and to determine how those disclosures translate into policy development. Such a council could synthesize best practices across submissions into publicly available reports with recommendations for participating organizations, suggest further development of the Code, improve the Reporting Framework’s questionnaire, and inform interoperable policy responses among G7 and OECD states.
One of the HAIP’s goals is to harmonize the standards and policy responses of G7 and OECD members. However, beyond the attempt to align the Reporting Framework with other reporting requirements, no specific mechanisms exist to achieve that policy alignment.
An evaluation process of this kind is necessary for the initiative to transition from an information-gathering tool into a forum that can diffuse policies addressing advanced AI risks globally, especially given the voluntary nature of the current arrangement.
Neither of these measures would make the Framework binding for states but would create an environment of verifiable information sharing to support future governance efforts. This is a piecemeal approach in a geopolitical environment that demands no more, and in a threat environment that demands no less.
Canada’s impact
By advocating for a stronger Reporting Framework, the G7 can help the HAIP achieve its objectives, foster a safer and more trustworthy ecosystem for AI adoption, reduce regulatory fragmentation, and position itself as a champion of international cooperation and responsible AI development.
The HAIP brand is currently adorned with Japan’s cherry blossom and Italy’s olive tree, representing the two countries’ contributions. This year, by advocating for sensible improvements to the initiative, Canada can add its maple leaf as a symbol of its enduring influence on global AI governance.