Perspective

Trump-Xi Summit in Beijing Should Make Shared AI Risks a Priority

Mark MacCarthy, Carl Schonander / May 8, 2026

President Donald Trump greets Chinese President Xi Jinping before a bilateral meeting at the Gimhae International Airport terminal, Thursday, October 30, 2025, in Busan, South Korea. (Official White House photo by Daniel Torok)

Security concerns prompted by the limited release of Anthropic’s Claude Mythos signal it is time to start a new government-to-government AI risk reduction dialogue between the United States and China, starting with the upcoming Trump-Xi summit, set to take place in Beijing next week. The purpose should not be to negotiate specific outcomes, but rather to lay the groundwork for a common understanding of AI’s opportunities, the range of its possible harms, and possible countermeasures. Some reports suggest the US and China are considering putting AI issues on the summit agenda.

The advent of Claude Mythos was apparently a game changer, with the administration now considering a US government review requirement before the release of new AI models. It should also motivate the administration to seek common ground with China to ensure that AI models do not facilitate the acquisition of biological or chemical weapons by bad actors and do not facilitate cyberattacks.

A growing chorus has called for this. New York Times columnist Thomas Friedman (twice) and journalist and author Sebastian Mallaby have both urged cooperation between the US and China. AI researchers Christina Knight and Scott Singer argue that cooperation on controlling AI risks at the level of technical experts is possible and necessary. David Meale, former deputy chief of mission at the US embassy in Beijing, calls for AI governance to be “a defining issue for the summit.” Professor S. Alex Yang and law scholar Angela Huyue Zhang say that the right response to Mythos is to put AI risk reduction “at the top of their agenda.”

We agree. Experts in the US and China already share a very general understanding of the harmful uses of AI and reasonable countermeasures. Recognized countermeasures include alignment techniques to make AI agents do what we want, and not what we don’t want, and control techniques that keep AI agents within safe environments and harden attack surfaces.

It is in the US interest to develop a more precise body of shared practices and control strategies with China. As Yang and Zhang point out, a cyberattack that shuts down large parts of China’s manufacturing output would have global effects similar to the COVID pandemic.

In addition, China’s open source models are riskier than those developed in the US. For instance, a recent risk evaluation of the Chinese model Kimi K2.5 shows that it does not reject requests to make bioweapons as reliably as US models do.

China might be ready for these discussions. As Carnegie Endowment for International Peace senior fellow Matt Sheehan has pointed out, China now seems to be taking AI risks more seriously, with a key technical advisory committee releasing updated AI guidance that offers a public and realistic assessment of Chinese AI model vulnerabilities. China has had generative AI regulations in place since 2023 and has recently updated them to address “anthropomorphic” AI and the open-source AI agent OpenClaw. With the Trump administration also appearing to consider a regulatory tilt toward more intervention rather than less, there is an opportunity for substantive exchanges between the two sides.

Track II discussions between various non-governmental groups have taken place over the years. For instance, since 2019, the Brookings Institution and the Center for International Security and Strategy at Tsinghua University have convened experts for an unofficial Track-II dialogue on artificial intelligence in national security. These dialogues helped lay the groundwork for a government-to-government meeting in Geneva in May 2024 focused on AI military applications.

While the informal conversations have continued and have produced some progress, such as this year’s Brookings report on a common understanding of the AI risks from nonstate actors, the government-to-government dialogue has not resumed. We agree with Brookings’ China expert Kyle Chan, who argues the upcoming summit should lead to “opening official communication channels on AI risks, developing nonbinding safety guidelines, and sharing limited information about AI misuse or safety incidents.” That conversation should take place at the level of government technical experts.

The cooperation between the US and the former Soviet Union during the Cold War, while not completely analogous, is instructive. For example, the two countries collaborated on arms control verification technologies related to satellite monitoring methods, on-site inspection techniques, and data exchanges on missile systems. It should be possible for China and the US to exchange information on, say, the best ways for red teaming AI models in areas of concern such as biological or chemical weapons proliferation.

Claude Mythos confirms the need for cooperation on AI risk reduction. The administration should use the occasion of the Trump-Xi meeting to call for bilateral government dialogue on the issue.

Authors

Mark MacCarthy
Mark MacCarthy is an adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in technology policy, including on content moderation for social media, the ethics of speech, and ethical challen...
Carl Schonander
Carl Schonander is a global technology policy professional and former diplomat with the US State Department. He has held senior positions with the Software & Information Industry Association (SIIA), Global Digital Finance (GDF), and Tencent. Schonander is the author of Data Flow Promotion in Interna...
