Navigating Trump's AI Strategy: A Roadmap for International AI Safety Institutes

Stephanie Haven / Nov 20, 2024

BROWNSVILLE, TEXAS - NOVEMBER 19, 2024: US President-elect Donald Trump speaks alongside Elon Musk (R) and Senate members including Sen. Kevin Cramer (R-ND) (C) before attending a viewing of the launch of the sixth test flight of the SpaceX Starship rocket. (Photo by Brandon Bell/Getty Images)

As the Biden administration prepares to host the International Network of AI Safety Institutes (IN AISI) for its first meeting this week in San Francisco, uncertainty looms over the gathering. Just two weeks after Donald Trump was elected to return to the White House, the network – founded earlier this year by US Commerce Secretary Gina Raimondo – grapples with questions about its direction and sustainability under a leader who promised to revoke the Biden AI Executive Order that created the US AI Safety Institute.

With the global proliferation of artificial intelligence, the IN AISI’s mandate to foster international collaboration on AI safety is vital. But preserving US membership and leadership in the international network will require deft navigation of Trump’s AI policy priorities.

Understanding the Trump administration's likely approach to AI – heavily influenced by his relationship with Elon Musk – reveals potential paths forward for the IN AISI. From President-elect Trump's campaign statements and the Feb. 2019 and Dec. 2020 AI Executive Orders he issued in his first term to Musk's public commentary about AI, a few principles emerge about the new administration's likely AI strategy: it will prioritize strategic competition with China, existential risk management, and deregulation in the name of innovation.

The Current Landscape

The US AISI, established in 2023, made significant strides in its first year. Operating within the National Institute of Standards and Technology (NIST) under the Department of Commerce, the Institute signed an agreement with leading AI labs to test their models before and after deployment (which I analyzed here) and published best practices for managing generative AI risks.

The upcoming meeting will bring together AI Safety Institutes from the United Kingdom, Australia, Canada, the European Union, France, Japan, Kenya, South Korea, and Singapore. While China isn't a member of the IN AISI, it has participated in previous AI Safety Summits and plans to attend the next summit in Paris in February 2025 – after President-elect Trump takes office.

Trump's AI Strategy

Three key principles are likely to shape the new administration’s approach to AI safety and the IN AISI:

1. Strategic Competition with China

Membership: President-elect Trump views China as the “primary threat” to US AI dominance. Based on his first-term policies, he is expected to expand restrictions on China's access to critical AI development resources, including semiconductors, compute capabilities, and energy for data centers. This poses a delicate challenge for the IN AISI: while completely excluding China from dialogue could be counterproductive for global AI safety, President-elect Trump is unlikely to support an organization that welcomes Chinese membership.

Recommendation: Establish clear criteria for joining the IN AISI, with a tiered membership model that could allow for structured engagement with China without jeopardizing US participation. This approach can address President-elect Trump’s concerns about strategic competition without excluding critical voices from global AI safety discussions.

Open-source: While congressional Republicans have advocated for open-source AI as a way to challenge Big Tech dominance and foster competition, recent reports revealing China's military adaptation of Meta's open-source Llama model may force a shift in this position. This creates a conflict between international efforts to promote AI transparency and President-elect Trump's priority of maintaining a US strategic advantage over China.

Recommendation: The IN AISI will need to carefully navigate this tension – potentially by recommending a tiered access framework for open-source models, with enhanced monitoring and testing protocols for more capable models. Such an approach could preserve innovation while implementing safeguards against military exploitation, making it more palatable to a Trump administration focused on strategic competition. The timing is particularly sensitive, as a goal of the IN AISI meeting is to prepare for the 2025 AI Safety Summit in Paris, which will focus on open-source models.

2. Existential Risk Management

Understanding the Trump administration’s potential willingness to engage with the IN AISI also requires interpreting the role of Elon Musk. Officially named co-leader of the so-called Trump’s Department of Government Efficiency, Musk’s prominence as an informal advisor has grown during the transition period. Musk's likely influence on Trump's AI policy cannot be overstated. Musk has consistently prioritized managing catastrophic AI risks over addressing near-term concerns like misinformation and deepfakes. Musk’s track record also includes:

Recommendation: For the IN AISI to maintain US support under President-elect Trump, prioritizing existential risks could be essential. This approach could include monitoring GPU capacity usage to detect the training of highly capable models and assessing AI systems for Chemical, Biological, Radiological, and Nuclear (CBRN) risks. Such a technical, security-focused agenda would align with Musk's long-standing concerns about catastrophic risks while deprioritizing discussion of AI bias and fairness, which both Trump and Musk have denounced.

3. Deregulation and Innovation

Censorship: President-elect Trump and Musk have both criticized Big Tech for designing AI models that generate content they see as politically biased or politically correct. Signaling his intent to spur start-up competition with Big Tech, Trump appointed Brendan Carr – a Big Tech critic who wrote the chapter on the Federal Communications Commission in “Project 2025” – to lead that agency. This appointment, Trump’s rhetoric around Big Tech “censorship,” and Musk’s disdain for “woke AI” suggest that the IN AISI will face scrutiny if it advocates for governance frameworks perceived to favor progressive agendas or Big Tech.

Recommendation: There's potential common ground in promoting innovation while managing extreme risks. The IN AISI could position itself as a vehicle for US leadership in international AI testing and evaluation, focusing on sharing best technical practices for managing existential safety risks while enabling strategic domestic and global competition.

International governance: Notably, Musk attended and lauded the UK’s 2023 AI Safety Summit, demonstrating interest in international governance that stands apart from Trump’s approach. In a 2023 interview, Musk outlined three key roles for a future AI regulatory body: seeking insight into AI, soliciting industry opinion, and proposing rules. The IN AISI could integrate this framework into its governance structure while maintaining flexibility for national implementation. This approach could help thread the needle between necessary oversight (which Musk advocates) and preserving each national AI Safety Institute’s competitive advantages.

Recommendation: Position the IN AISI as a platform for sharing safety protocols and testing methodologies rather than setting regulatory constraints. Highlighting the network’s role in advancing global adaptation to AI without revealing proprietary data that could fuel international competition will be key.

Looking Ahead

President-elect Trump’s track record of abandoning international agreements – from withdrawing from the Paris Climate Accord to threatening to pull out of NATO – underscores the precarious position of the IN AISI. By focusing on existential risk management, maintaining a careful approach to China, and enabling national AI Safety Institutes to set their own guardrails for AI innovation, the network could withstand a Trump presidency. The success of international AI safety cooperation may depend on finding this delicate balance between global innovation and AI safety governance.

Authors

Stephanie Haven
Stephanie Haven is a UC Berkeley Tech Policy Fellow researching generative AI's impact in war and conflict zones, with a focus on AI ethics and safety. Prior, she led Trust & Safety operations for elections and international conflicts at Meta, developing external governance mechanisms and content mo...
