Perspective

A Proposed Scheme for International Diplomacy on AI Governance

Judit Bayer / Jul 11, 2025

Building Corp by Jamillah Knowles & Digit / Better Images of AI / CC BY 4.0

Organizations and agreements related to AI governance are proliferating, and yet the vision of a consensual and binding governance agreement among the major powers remains out of sight. Even between the US and the EU – seen as robust allies for many decades – the dividing lines in AI policy have dramatically deepened. Nonetheless, new lines of alliance are dynamically developing around both major Atlantic powers. The EU’s impact on global regulation has come into doubt, especially as its AI Act appears to be losing support even among EU officials.

This moment of uncertainty presents an opportunity to re-evaluate European priorities for AI governance in light of the changing geopolitical landscape. It calls for going back to the drawing board: thinking and planning as if from scratch to calibrate a novel approach toward a global AI governance agreement.

Back to level zero

Two major dividing lines of political and economic interest can be identified. One is the spectrum of liberal and illiberal ideologies on which states found their governing regimes, with the US and, perhaps, China or North Korea representing the two extremes. The other is a spectrum running from the interests of elites and corporations on one side to human and social well-being on the other. Let us use the combination of these two axes to map the policy options of individual states.

As positions on the two axes do not necessarily align, the chances of reaching an agreement in terms that reflect either one are slim. For example, Western European countries and the US sit close together on the liberalism axis, but farther apart in prioritizing societal versus corporate interests.

Setting a goal

Therefore, the suggestion is to find a common denominator and to start with a very basic goal for a global agreement, leaving aside any values and goals that might cause division. For example, tackling risks to the labor market and to freedoms requires value-based policy solutions that will differ across countries depending on their political ideology and on whether they prioritize private or public interests. Other risks, however, affect all states and humanity as a whole, even though their likelihood of occurring is minimal. Existential risks to humanity or civilization caused by AI are often dismissed as distant, a framing that is said to divert attention from more imminent dangers. That critique may be valid, and the aim of this proposal is not to distract from other important problems. The primary aim is to lay the grounds for peaceful global cooperation in the field of AI, which can serve as a foundation for expanding future cooperation. The means to this end is to convene a multistakeholder organization with a goal that can be accepted by all states and the key industrial stakeholders.

While there are several future scenarios of AGI development, scholars agree that the possibility of its emergence within the coming decades cannot be entirely dismissed. In the context of geopolitical instability, the risk of misuse by rogue actors, terrorist groups, and reckless authoritarian states is growing exponentially: insecurity and lack of control could allow some future AI system to destroy, or threaten to destroy, civilization or significant parts of it.

As the likelihood of such a threat is relatively low, few resources are devoted to addressing it. I suggest taking this scenario as the level zero for entering into a global agreement. We should think of such a threat as akin to an alien invasion, in the face of which the governments of the world might set aside their feuds and ally against a common enemy – in our case, an autonomous AI getting out of control. Framing AI governance as a shared global security interest shifts the focus from mutual suspicion between states to collective risk mitigation.

Finding the common value at level zero

The level zero also implies that the terms of the agreement must be stripped of any values that are not equally embraced by all states, and should focus on the technical details of cooperation to avert the outcome described above. Unlike several recent recommendations and guidelines, this agreement should not reference constitutional and human rights values. Evidence suggests that transnational organizations that set values aside, such as the IEEE and other standard-setting organizations, cooperate more successfully.

Even the minimal goal of avoiding catastrophic risk carries some value judgments, as it presupposes the intrinsic value of human life and of civilization, and it stands in tension with immediate corporate profit. A corporation may prefer accepting the unlikely risk of human extinction for the high likelihood of immediate profit. Yet some corporations have adopted mottos such as "do the right thing" and "don't be evil", at least for a while. Signs of initial progress suggest that such an agreement is possible among adversarial states, notably the joint declaration by Chinese and American leaders under the aegis of the UN.

The remoteness of the risk reduces the perceived cost of compromise. The treaty – let us call the body it establishes the Global AI Treaty Organisation, or GAITO – would not require sharing resources, merely a commitment to transparency and cooperation within a narrowly defined scope.

In summary, the primary objective of GAITO would be to achieve consensus on preventing global catastrophic outcomes arising from AI or AGI. Its standards should include: a signalling and oversight system for AGI development; basic safety requirements with sufficient breadth to allow their adaptive application through evolving technical standards, developed by GAITO’s independent working groups; and a crisis response mechanism to react to instances of misaligned AI behavior.

However, once this level zero is achieved, the organization can develop further by adding optional protocols and branches that pursue other technical or social goals, much as the UN system has evolved.

Membership

Most standard-setting organizations (SSOs) do not represent governments, which makes them fairly immune to political ups and downs. However, GAITO should ensure legitimacy by granting governments a prominent role, with formalized decision-making processes and veto power in defining the basic ethical standards.

At the same time, Big Tech corporations are demonstrably difficult for states to regulate because they exercise a form of functional sovereignty. Like medieval lords who disposed over land and vassals and provided military services to their kings, Big Tech possesses key technological infrastructure, computing power, and data, and some of these companies even use their power to influence opinions, which gives them an additional political lever. They exert considerable lobbying power, at times amounting to regulatory capture.

Additionally, when states hold antagonistic interests, industry actors are more likely to agree on standards, as the example of the IEEE shows. Where states' hesitancy and formal requirements tend to delay agreements, industry actors can expedite the process, driven by a vested interest in international cooperation that facilitates development and secures access to global markets.

Incorporating major industry actors into such a scheme brings the representation of corporate interests, whereas states typically support public interest objectives. Although the two are often entangled at the level of government policy, transparent institutional inclusion would better channel private lobbying efforts into an open and accountable process.

Moreover, some states, such as the US, may attribute strategic importance to their tech corporations in terms of national competitiveness. Involving these actors acknowledges the enhanced weight of the leading national powers in AI, while maintaining a fair and equal representation of global state interests.

Enforcement power

International law is based on the voluntary cooperation of states. While international sanctions are occasionally possible, their implementation ultimately depends on the cooperation of the executing states. The UN, including its Security Council, is notoriously ineffective in achieving measurable changes. This does not render it useless, but it is preferable for AI governance to be situated in a venue capable of achieving more immediate and practical effects.

Studying international organizations reveals that effectiveness is greatly improved by incentives, such as technical levers (ICANN), monetary levers (IMF, WTO), or reputation (CoE). Several international organizations exercise soft power through cooperation and consultation, such as CERN, the IPCC, the IEEE, the WHO, and the IGF. However, only international bodies that control a physical or technological bottleneck – and can thereby compel cooperation – can carry out their mission effectively. For example, the IMF controls the allocation of monetary funds, the WTO can levy trade sanctions, and ICANN controls the DNS root zone (through IANA).

ICANN stands out as the only multistakeholder body with de facto hard power, which is grounded in private contracts rather than state authority. Similar to the IMF and WTO, it holds power over the allocation of limited resources. Entities that refuse to comply risk being excluded from the DNS, making their domains non-functional.

In the context of AI, compute power, data, and algorithms are sometimes considered bottlenecks. However, these categories are too broad in themselves to provide effective control over the development or deployment of AI, as they can be independently created, stolen, or copied, as the example of DeepSeek demonstrated. Data and talent are ubiquitous and can be developed organically, escaping central control. Strict global regulation of their use is neither in sight nor recommended.

Controlling chips, or the development, training, and deployment of frontier models, could offer some degree of effectiveness. However, restrictive control tools generally carry the risk of driving development or trade underground, aggravating the situation. Cautionary examples are zero-tolerance drug policies and the US prohibition of alcohol in the 1920s, which provided neither control nor transparency. The same can be said of a blanket restriction on developing, training, or registering frontier AI models: these models evolve constantly and may escape regulatory attention unless a physical or technological control point is identified to enable intervention.

API governance as a policy lever

Instead, I suggest establishing an artificial bottleneck to serve as a new policy lever. For example, application programming interfaces (APIs), which serve as access points through which AI models interact with networks and other applications, could be regulated as such a control lever. Since AI models make their impact on the world through these interactions, controlling the access bridge that connects them to networks and services could provide an oversight mechanism and a potential control tool over their operations.

APIs are freely created and used, often by the same companies that provide the AI models, in a vertically integrated structure. There is no physical constraint on establishing an unlimited number of APIs across the globe. However, it would be possible to create a legal bottleneck: a central repository of the high-capability APIs used for advanced AI applications, with the creation and use of new APIs for this purpose permitted only under certain conditions. API providers would need to sign a contract with GAITO setting out the basic ethical standards that they adhere to and enforce against other users of their APIs. Enforcement, however, could remain with the member states, which would not compromise their technological sovereignty. Access keys could be suspended or denied for non-compliant actors, with enforcement through audits, licensing, or integration into cloud service agreements.
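To make this mechanism concrete, the following is a minimal sketch, in Python, of how a gateway check against such a central repository might look. It is a sketch under assumptions, not a specification: the names (GAITORegistry, APIKeyRecord, KeyStatus), the key format, and the in-memory design are hypothetical illustrations rather than part of the proposal or of any existing system.

```python
# Hypothetical sketch: a GAITO-style access-key check at an API gateway.
# All names and structures are illustrative assumptions, not part of any
# existing system or of the proposal's formal design.
from dataclasses import dataclass
from enum import Enum


class KeyStatus(Enum):
    ACTIVE = "active"        # provider in good standing under its GAITO contract
    SUSPENDED = "suspended"  # e.g., pending the outcome of an audit
    REVOKED = "revoked"      # contract terminated for non-compliance


@dataclass
class APIKeyRecord:
    provider: str     # the API provider that signed the GAITO contract
    status: KeyStatus


class GAITORegistry:
    """In-memory stand-in for the central repository of high-capability APIs."""

    def __init__(self) -> None:
        self._keys: dict[str, APIKeyRecord] = {}

    def register(self, key: str, record: APIKeyRecord) -> None:
        # Registration presupposes a signed contract with GAITO.
        self._keys[key] = record

    def suspend(self, key: str) -> None:
        # Invoked when audits or member-state enforcement flag non-compliance.
        if key in self._keys:
            self._keys[key].status = KeyStatus.SUSPENDED

    def authorize(self, key: str) -> bool:
        """A gateway would call this before routing a request to a high-capability model."""
        record = self._keys.get(key)
        return record is not None and record.status is KeyStatus.ACTIVE


if __name__ == "__main__":
    registry = GAITORegistry()
    registry.register("key-123", APIKeyRecord("ExampleLab", KeyStatus.ACTIVE))
    print(registry.authorize("key-123"))  # True: compliant provider
    registry.suspend("key-123")           # an audit finds a breach
    print(registry.authorize("key-123"))  # False: access key suspended
```

In any real deployment, the lookup would be a signed and audited network call rather than an in-memory check, but the division of labor would match the scheme described above: GAITO maintains the repository and the compliance status of each key, while suspension and enforcement actions originate with member states.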

Leading developers such as OpenAI, Google, and Anthropic already impose and enforce restrictions through their API terms of service, excluding applications they deem unethical (e.g., OpenAI's policy against military use). API governance is thus already a tool of corporate governance.

The aim of the conditional access is to ensure transparency and safety, not to prevent or restrict access.

The purpose of this lever is to supervise high-risk deployments of AGI and autonomous AI. Some states might seek to use it for other purposes as well, but regional agreements can guard against this. It is worth noting that a network of connected narrow-AI models can be as harmful as an AGI, and that a non-connected superintelligent AGI, if it ever came to exist, could achieve considerable impact without an API, through human mediation. Nevertheless, introducing API control would significantly raise the barrier to malicious attacks, as publicly available AI models typically require access via public APIs.

The role of the EU

The Brussels Effect may currently be subject to debate, but the EU's regulatory powers are widely acknowledged, underpinned by its market size and quality, regulatory resources, and competence. The EU's international influence is growing as the US loses trust in the international arena. The EU is therefore well-positioned to leverage its influence and the trust it has earned to foster the creation of this global institution. Even if an explicit reference to human rights is excluded from the scope of this body, its mission aligns closely with the EU's human-centered policy and with the EU's International Digital Strategy of June 5, 2025.

The EU should therefore take the lead in initiating this global multistakeholder organization, which should include all major state actors, even where their competitive interests conflict; major industry actors active in the field of AI; influential research institutions in the field of AI safety and alignment; and relevant civil society organizations. The multistakeholder approach secures technical expertise and practical implementation through industry, while civil society keeps the broader public interest at the heart of the process.

Conclusion

With or without API control, this organization would plant the seeds of global cooperation on AI. Its scope can gradually expand over time through its branches and working groups, embracing discussions and negotiations in emerging areas of concern.

In an era defined by the fragmentation of the international order, the pursuit of meaningful cooperation on AI governance is undeniably ambitious – but no less essential. The engagement of major state actors is vital to any effort that seeks to safeguard the future of humanity. While this endeavor embraces peaceful competition, it requires a basic level of trust among nations.

If even this level of trust, concerning a distant but existential threat, cannot be found among nations, that portends the worst for the medium-term future: it would signal that some states plan to use advanced autonomous AI against their adversaries, even at the risk of global catastrophe.

Authors

Judit Bayer
Judit Bayer is an associate professor of media law and international law at the Budapest Business School, Hungary. Her research field is the intersection of digital technology and human rights. Recently, she has focused on AI and human rights, platform regulation, and media freedom and pluralism.
