From Competition to Cooperation: Can US-China Engagement Overcome Geopolitical Barriers in AI Governance?
Nayan Chandra Mishra / Sep 23, 2024

US-China cooperation on AI governance remains at a crossroads. While both nations are locked in fierce competition for dominance over emerging AI technologies, they also acknowledge the pressing need to collaborate in addressing AI's global, transboundary challenges. As the two leading AI superpowers, the US and China possess not only the most advanced technological capabilities but also the financial and political influence required to shape the future of AI governance. However, this competition for supremacy, driven by national security concerns, economic interests, and ideological differences, has complicated efforts to establish a cohesive global framework for AI regulation. Amid these tensions, the US and China are beginning to show a cautious openness to engagement, reflected in their recent support for joint UN resolutions and growing participation in key international dialogues, indicating a potential shift toward more constructive collaboration on a framework for AI governance.
Shifting Landscape in Dialogue Between the US and China
In June 2024, a ray of hope emerged when the UN General Assembly unanimously passed the China-led resolution “Enhancing International Cooperation on Capacity-building of Artificial Intelligence,” supported by the US and more than 120 other UN members. Previously, in March 2024, China supported a US-led resolution on “Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development.” Both resolutions emphasized broadly similar issues, such as promoting the Sustainable Development Goals (SDGs), capacity building, socio-economic development, and safeguards against malicious use of AI systems. On governing AI, both resolutions reaffirmed the need for international cooperation and multi-stakeholder consultations involving developed and developing states to “formulate and use effective, internationally interoperable safeguards, practices and standards.”
The reciprocity of support between the two nations arose in the context of more frequent engagements at various forums in the run-up to both resolutions. For instance, China and the US came together at the UK AI Safety Summit in November 2023, where the attendee states jointly endorsed the Bletchley Declaration. The broad themes of the declaration, such as safe and responsible AI, a focus on the SDGs, international cooperation on AI safety and capacity building, and a multi-stakeholder approach to governance, were similar to the contents of the UN resolutions.
Subsequently, Presidents Biden and Xi Jinping met at the Woodside Summit in November 2023, where they agreed to convene a meeting to “address the risks associated with advanced AI systems.” That commitment culminated in the first-ever bilateral meeting between the two states on global AI governance, held in Geneva in May 2024. Although it did not result in a joint declaration or any actionable plans for the future, both states raised their concerns with each other regarding the misuse of AI and unilateral restrictions on it.
More importantly, Beijing’s readout of the meeting highlighted its receptiveness towards including the US in “international communication and coordination” to prepare a “global framework and standards for AI governance,” a position the Chinese spokesperson reiterated when the China-led UN resolution was passed. Meanwhile, the US readout was more muted, focusing on building “open lines of communication on AI risk and safety as an important part of responsibly managing competition.” Both nations are set to meet again at the Summit of the Future in New York in September 2024, a UN-led forum aimed at advancing a global framework for AI governance, where they may engage in further dialogue on affirming certain principles and norms stated in both resolutions.
Beyond government-level bilateral engagements, there has been a notable increase in informal dialogues between the United States and China. These informal channels, particularly Track 1.5 and Track 2 diplomacy, provide the flexibility to address specific political issues more openly and candidly. For instance, in 2020, the Center for International Security and Strategy at Tsinghua University, the Brookings Institution, the Berggruen Institute, and the Minderoo Foundation organized Track 2 meetings focused on AI-based military systems.
This rise in informal engagements also extends beyond the US-China relationship, reflecting a broader pattern of increased interaction between China and Western nations. Concordia AI’s “The State of AI Safety in China Spring 2024 Report” highlights enhanced Chinese engagement with Western countries, suggesting a growing consensus between two ideologically dissimilar groups in the geopolitical calculus to cooperate on regulating the transboundary nature of AI. It notes that since 2022, eight Track 1.5 or Track 2 dialogues on AI have taken place between China and Western countries, in addition to governmental engagements such as the joint statement between China and France and the first China-US bilateral meeting.
This moderating stance is significant at a time when states are otherwise following a fragmented approach toward governing AI. For instance, the G7’s Hiroshima AI Process, which seeks to enhance international cooperation in AI governance, includes only Western-allied countries, while the EU’s AI Act aims to create a regulatory monopoly by leveraging the bloc’s supranational unity. Similarly, the Global Partnership on AI (GPAI), though it includes India and a few other developing countries, has kept China and Russia beyond its ambit. On the other side, the BRICS nations have created their own “AI study group,” and China launched the Global AI Governance Initiative in 2023, alongside its annual World AI Conference, inviting a range of stakeholders to jointly promote the governance of AI. The Initiative’s vision document also indirectly repudiated the US’s unilateral approach, declaring opposition to “drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI.” Keohane calls this oligopolistic attitude contested multilateralism, in which “states and/or nonstate actors either shift their focus from one existing institution to another or create an alternative multilateral institution to compete with existing ones.”
In this complex state of affairs, where fears of contested multilateralism still lurk, the recent engagements signal a policy shift: countries are loosening their oligopolistic approach to discuss core differences and take confidence-building measures toward cooperating on AI governance rather than compartmentalizing it. The optimism also draws on history. The governance of nuclear energy, another disruptive technology developed at the height of geopolitical turmoil, originated when the US and USSR came together to restrict its transboundary implications despite the fragmentation between the Western and Eastern blocs. This historical parallel highlights the importance of developing and maintaining multiple channels of communication, which may similarly lead to a system of dialogues in which the foundations of AI governance are established.
Core Challenges in US-China AI Cooperation
Despite the promising progress, the recent engagements are not without limitations that could pose significant obstacles to building a genuinely effective framework for AI governance. The UN resolutions are explicitly non-political, focusing on the social, economic, and public uses of AI at large. However, they do not address the practical impediments and may end up in cold storage like other contemporary UN resolutions. For instance, neither resolution touches on “the development or use of artificial intelligence for military purposes.” At the same time, both states are aggressively expanding AI into the military domain.
The US established its National Security Commission on AI in 2019 “to comprehensively address the national security and defense needs of the United States” and subsequently witnessed a three-fold surge in investment in military AI between 2022 and 2023. It also formed the AI Partnership for Defense, comprising six NATO members and other US allies, including Israel, Japan, and Sweden. Meanwhile, China has been clandestinely working on integrating AI into its military forces since 2018, with the goal of becoming a “world-class” military by 2050.
These realities were evident when China, along with India, Russia, and Israel, rejected the US-authored “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” which sets out a normative framework for the responsible military use of AI capabilities. Ultimately, cooperation on military AI will be a deal-breaker in upcoming negotiations.
Moreover, the core values behind AI governance in the US and China differ sharply. The US, a promoter of the free market and home to cutting-edge AI research, will push for self-regulation, voluntary guidelines, and a flexible, market-oriented approach to bolster the dominance of its companies. On the opposite end of the spectrum sits China, which aims at state control over its citizens through AI capabilities and requires algorithms to be reviewed by the Communist Party of China beforehand to ensure they follow “core socialist values.” These divergent perspectives were evident during the bilateral meeting in Geneva, where China raised the issue of “U.S. restrictions and repression on China in the field of AI,” while the US highlighted the risk of AI misuse by China.
China’s Global AI Governance Initiative has also pushed back against the market dominance of American big-tech companies, stating opposition to “creating barriers and disrupting the global AI supply chain through technological monopolies and unilateral coercive measures.” At the same time, China has a history of violating commitments made to international organizations, including the World Trade Organisation (WTO) and the International Telecommunication Union (ITU), making it even harder to bridge the trust deficit. These deep-rooted differences could significantly impede the establishment of common principles and norms for AI regulation, as they reflect not only conflicting governance philosophies but also competing geopolitical interests.
Finally, the recent engagements might also be read as agreements driven by short-term calculations of interest and public relations, and hence cannot yet be treated as precursors of robust international cooperation and solutions. We cannot discount the fact that states take ostensibly positive steps as part of their propaganda efforts. For instance, in the initial phases of nuclear negotiations, the US and USSR put forward extreme proposals intended as a propaganda mechanism to put the other side on the defensive, each “seeking to convey the image as the sole party committed to achieving a breakthrough.” To filter out such engagements, Krasner notes that “since regimes encompass principles and norms, the utility function that is being maximised has some sense of general obligation,” such as reciprocity of obligations and the political legitimacy and intent of deliberations. As of now, there is no such reciprocity or legitimacy, which calls into question the true intent of both countries in engaging with each other.
Future Pathways: The “When” of AI Governance
Whether the recent dialogues are truly a path-breaking step towards AI governance or mere cosmetic changes fit for propaganda and short-term interests can only be answered with time, and must be read alongside the next steps of cooperation. The immediate next step will involve building political legitimacy into the engagements between the two states, which will first require every negotiating party to acknowledge a set of core principles and norms. Principles and norms, unlike rules and procedures, are the basic defining characteristics of a potential negotiation, under whose influence reciprocal rules and procedures are deployed.
The upcoming Summit of the Future will be significant in this regard. Secondly, as geopolitical tensions and divergent regulatory approaches remain a point of contention between the two states, any hard law approach involving legally binding rules and procedures may be out of reach in the coming engagements. On one side of the spectrum, such an approach might stifle innovation; on the other, it may impinge on the sovereignty and strategic interests of both states. As a result, we are likely to see both nations discussing a soft law approach to accommodate each other’s governance models, as it offers a compromise “between actors with different interests and values, different time horizons and discount rates, and different degrees of power.” These next steps will not only determine the significance of US-China engagements but will also illuminate the broader direction of international AI governance.
At a time of rising unilateral tendencies among superpowers, through trade restrictions, sanctions, and declining dialogue on international issues, the increased engagement provides a counterpoint to the power-centric theories of conventional realism, which predict that the US and China will hijack the global AI governance discussion for their own purposes or propagate agreeable suggestions solely for public relations. Both states must acknowledge that AI cannot be compartmentalized within physical boundaries, and that regulating it amid a seemingly never-ending contest for technological dominance will require a global effort. The coming months and years will be critical in determining whether the US and China can transcend their differences to shape the future of AI.