Spurred by ChatGPT and similar generative technologies, articles about AI replacing humans fill the news. Sometimes the concern is over AI replacing employees, displacing jobs; sometimes it’s about AI serving as a relationship partner, fulfilling human social and emotional needs. Most often, it’s even more direct, taking the form of fears that AI will dispense with humanity entirely.
But as powerful as AI technologies are, these fears are little more than science fiction in the present day. They’re also a distraction – though not yet, it seems, from ongoing efforts to regulate AI systems or invest in greater accountability, both of which continue to advance every day.
Rather, digital replacement fears are distracting the US from thinking about two other ways in which AI will shape our future. On the one hand, AI offers a major upside: It can amplify today’s massive investments in revitalizing the country’s industrial leadership. On the other, a major downside: It could contribute to breaking the already fragile post-World War II international order. These possibilities are intertwined, and their prospects will depend on US technology policy actions… or the lack thereof.
First, the upside. Through what’s increasingly being called Bidenomics, the US is witnessing a resurgence of domestic industrial and manufacturing capacity. The Inflation Reduction Act included $369 billion in incentives and direct investments aimed specifically at climate change, catalyzing massive new and expanded battery and electric vehicle plants on American soil. Another $40 billion is going toward connecting every American to high-speed internet. The CHIPS and Science Act adds money for semiconductor manufacturing, as does the Bipartisan Infrastructure Law for roads and bridges.
Along with private investment, the net result is double or triple past years’ investments in core US capacities. And the economic benefits are showing: Inflation is improving faster in the US than in other countries, unemployment remains at record lows, and the nation’s economy is alive and well.
These investments are also where machine learning systems offer perhaps their clearest benefits: improving logistics and efficiency, and handling repetitive, automatable tasks for businesses. Whether or not large language models can ever outscore top applicants to the world’s best graduate schools, AI offers massive improvements in areas that the EU’s AI Act would categorize as posing “minimal risk” of harm.
And the US has significant advantages in its capacity for developing and deploying AI to amplify its industrial investments, notably including its workforce – an advantage built in part through many years of talent immigration. Together, this is a formula for the US to reach new heights of global leadership, much as it did after its massive economic investments in the mid-20th century.
Meanwhile, AI has long been regarded as the 21st century’s Space Race, given how the technology motivates nation-state competition for scientific progress. And just as the Space Race took place against the tense backdrop of the Cold War, the AI Race is heating up at another difficult geopolitical moment, following Russia’s unprovoked invasion of Ukraine. But the international problems are not confined to eastern Europe. Although denied by US officials, numerous foreign policy experts see a trajectory toward economic decoupling of the US and China, even as trans-Pacific tensions rise over Taiwan’s independence (the stakes of which are complicated in part by Taiwan’s strategically important semiconductor industry).
Global harmony is no easier to find online than offline. Tensions among the US, China, and Europe are running high, and AI will exacerbate them. Data flows between the US and EU may be in peril if an active privacy law enforcement case against Meta by the Irish data protection authority cannot be resolved with a new data transfer agreement. TikTok remains the target of legislation restricting its use in the United States and Europe because of its connections to China. Because of AI, the US is considering increased export controls limiting China’s access to hardware that can power AI systems, expanding on the significant constraints already in place. The EU has likewise expressed a goal of “de-risking” from China, though whether its words will translate to action remains an open question.
For now, the US and EU are on the same side. But in the Council of Europe, where a joint multilateral treaty for AI governance is underway, US reticence may put the endeavor in jeopardy. And the EU continues to outpace (by far) the US in passing technology laws, with significant costs for American technology companies. AI will widen this disparity, and the tensions it generates, as the EU moves forward with its comprehensive AI Act, US businesses continue to flourish through AI, and Congress continues to stall on meaningful tech laws.
It seems a matter of when, not whether, these divisions will threaten Western collaboration, particularly on relations with China. If, for example, the simmering situation in Taiwan boils over, will the West be able to align even to the degree it did over Ukraine?
The United Nations, with Russia holding a permanent Security Council seat, proved far less significant than NATO in the context of the Ukraine invasion; China, too, holds such a seat. What use the UN, another relic of the mid-20th century, will have in such a future remains to be seen.
These two paths – one of possible domestic success, the other of potential international disaster – present a quandary. But technology policy leadership offers a path forward. The Biden Administration has shown leadership on the potential societal harms of AI through its landmark Blueprint for an AI Bill of Rights and the voluntary commitments for safety and security recently adopted by leading AI companies. Now it needs to follow that with second and third acts: taking bolder steps to align with Europe on regulation and risk mitigation, and integrating support for industrial AI alongside energy and communications investments, to ensure that the greatest benefits of machine learning technologies can reach the greatest number of people.
The National Telecommunications and Information Administration (NTIA) is taking a thoughtful approach to AI accountability, which, if turned into action, could dovetail with the EU’s AI Act and build a united democratic front on AI. And embracing modularity – a co-regulatory framework describing modules of codes and rules implemented by multinational, multistakeholder bodies without undermining government sovereignty – as the heart of AI governance could further stabilize international tensions on policy, without the need for a treaty. It could be a useful lever in fostering transatlantic alignment on AI through the US-EU Trade and Technology Council, for example. This would provide a more stable basis for navigating tensions with China arising from the AI Race, as well as a foundation of trust to pair with US investment in AI capacity for industrial growth.
Hopefully, such sensible policy ideas will not be drowned out by the distractions of dystopia, whose grandiose ghosts will eventually disperse like the confident predictions of imminent artificial general intelligence being made today (just as they were made many decades ago). While powerful, over time AI seems less likely to challenge humanity than to cannibalize itself, as the outputs of LLM systems inevitably make their way into the training data of successor systems, creating artifacts and errors that undermine the quality of the output and vastly increase confusion over its source. Or perhaps the often-pablum output of LLMs will “fade into the miasma of late-stage online platforms,” producing just “[a]nother thing you ignore or half-read,” as Ryan Broderick writes in Garbage Day. At minimum, the magic we perceive in AI today will fade over time, with generative technologies revealed as what Yale computer science professor Theodore Kim calls “industrial-scale knowledge sausages.”
In many ways, these scenarios – the stories of AI, the Space Race, US industrial leadership, and the first tests of the UN – began in the 1950s. In that decade, the US saw incredible economic expansion, cementing its status as a world-leading power; the Soviet Union launched the first orbiting satellite; the UN, only a few years old, faced its first serious tests in the Korean War and the Suez Crisis; and the field of AI research was born. As these stories continue to unfold, the future is deeply uncertain. And AI’s role in shaping the future of US industry and the international world order may well prove to be its biggest legacy.
Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania’s Annenberg Public Policy Center. Previously, he was a senior fellow for internet governance at the R Street Institute. He has worked on tech policy in D.C. and San Francisco for nonprofit and public sector employers and managed teams based in those cities as well as Brussels, New Delhi, London, and Nairobi. Chris earned his PhD from Johns Hopkins University and a law degree from Yale Law School.