Paris Just Hosted a Major AI Summit. Its Biggest Debate Was a Trap.

Daniel Stone / Feb 21, 2025

PARIS — On February 10-11, 2025, France hosted the Paris AI Action Summit at the Grand Palais.

Last week, under the soaring glass dome of the Grand Palais in Paris, some of the world’s most powerful voices and leading experts on AI gathered to draft a blueprint for the future. But instead of laying the foundations for progress, governments seemed trapped in a limiting, almost tribal binary: are we for ‘innovation’ or ‘regulation’?

This binary doesn’t just mislead—it traps governments in a reactive stance. With AI cast as an industry-led race, policymakers are left reacting to whatever models and business practices tech firms choose to develop. Instead of actively setting the terms, they mediate disputes between dominant players. The result? Over the past year, we’ve seen regulators scramble to keep up while a handful of actors dictate the trajectory of technological progress. Escaping this cycle requires rejecting the binary altogether—and asking a different question: how do we build AI to serve the public?

This failure is already having consequences behind closed doors. Over the past six months, colleagues from think tanks and government departments across the US, UK, Australia, and the EU have described mounting pressure from leaders to ‘pick a side.’ But with ‘innovation’ and ‘regulation’ reduced to buzzwords, the debate is framed to ignore a more urgent question: what AI systems do people actually need, and how should they be built? Instead of exploring possibilities, policymakers lock themselves into a false choice that limits both imagination and action before the conversation even begins.

The framing is wrong—and worse, it’s failing the public.

To the public, AI isn’t about code—it’s about control

Over the past two years, I’ve tracked global polling on AI policy, and one striking pattern has emerged: most people don’t think about AI in isolation. When asked about it, they don’t begin with the technology itself—they anchor it in the broader uncertainties that shape their lives. AI becomes a proxy for deeper anxieties about power, inequality, and a world that feels increasingly complex and out of their control.

Across France, the EU, the US, the UK, India, Nigeria, Indonesia, Australia, and Brazil, these anxieties take shape in concrete ways. People see the cost of essentials rising while stable, well-paid jobs disappear into the gig economy. They doubt their governments can resist corporate influence, domestic elites, or foreign powers. Without strong public oversight, AI is seen as just another force accelerating these pressures—one more system designed to serve those in power rather than the public good.

Yet, for many, AI tools carry the hope of something different. Could we use them to ensure fairer access to opportunity, improve essential services, and make daily life more affordable? Or will they be used to erode stable jobs, deepen inequality, and make decision-making even more detached from their needs? The stakes aren’t just about technology—they’re about whether AI is used to entrench failures or help solve them.

If the public’s concerns about AI are fundamentally about power—who wields it, who benefits from it, and whether it will be used to improve their lives—then the answer isn’t just setting guardrails around private companies. Across the world, people want their governments to ensure AI tools are built and governed in the public interest rather than concentrated in the hands of a few private firms—from the ground up, with public oversight of AI’s foundations: datasets, compute power, and the models themselves.

This isn’t just about populism or economics—it’s also about national sovereignty, opportunities for local innovation, and democratic control over the technologies being used to shape society’s future. Without real public oversight, our approach to AI tools risks deepening economic reliance on foreign tech giants, limiting national sovereignty, raising the cost of innovation, and locking entire economies into extractive business models they have no control over.

We’ve seen this play out before. When the transformative technologies of railroads, electricity, and the internet were left in private hands, monopolies formed, progress stalled, prices soared, and entire economies became dependent on corporations with no obligation to serve the public. That changed only when governments stepped in, not just to regulate but to build public infrastructure that fueled decades of innovation and shared prosperity. AI is no different. If governments fail to act now, they will soon find themselves paying rent to a handful of corporations for essential AI services.

A different path is possible

Governments don’t have to accept a future where a handful of US firms control AI tools and infrastructure. They can collaboratively build AI as a public good—one that serves the needs of society first. That’s exactly what the launch of Current AI represents and why it was the most significant outcome of the Paris Summit. With an initial $400 million investment, this international partnership—uniting 10 founding countries, businesses, and philanthropies—marks the first-ever collaboration of its kind to develop open-source AI models, robust auditing frameworks, and accessible datasets. By investing in shared infrastructure that markets won’t, it lays the foundation for a public interest AI ecosystem—one that fosters competition, preserves local cultures, prioritizes sustainability, fuels new startups, and ensures AI benefits everyone, not just a privileged few.

Public Interest AI can unlock possibilities that today’s models ignore. Imagine an AI that helps small entrepreneurs in Nairobi grow by predicting supply chain disruptions, enables doctors in remote Australian towns to diagnose rare illnesses quickly and affordably, predicts floods in Bangladesh, or ensures that families in São Paulo save money through efficient energy systems. These aren’t pipe dreams—they’re the kinds of tools that become possible when AI is designed for the public good, not private profit.

Skeptics ask: Can governments build AI at scale? Can open models really compete? The truth is, they already do. Open models from DeepSeek, Stanford, and Meta’s Llama have shown they can match proprietary systems and are becoming more competitive every day. And private AI? It already depends on public investment. Governments fund the research, subsidize the compute, and supply the public data the models are trained on. The real myth isn’t that governments can build AI—it’s that private firms have a right to own it. AI can be a force for good—if we build it that way.

It’s also easy to assume that AI leadership must come from Silicon Valley or Westminster. But the first major AI initiative built outside that bubble seeks to prove a different point: AI governance should not require permission from, or be dictated by the priorities of, a handful of tech firms or two governments. France, India, Switzerland, and many others, together home to some of the world’s wealthiest and most populous countries and most advanced AI ecosystems, are proving that AI governance can, and should, be globally owned. And the partnership is designed to grow over time as an open, collaborative effort: many countries have already signaled their interest in joining once the foundational projects are underway.

Standing beneath the glass and steel of the Grand Palais, I was reminded that the greatest achievements in human history weren’t built in isolation. They emerged when people from different backgrounds—scientists, engineers, artists—came together with a shared vision. Bold ideas like these have shaped nations, transformed industries, and redefined what we believe is possible. The window to shape AI’s future is closing fast. If governments fail to act, they won’t just lose control—they’ll find themselves locked into a system built for profit, not for people. The Paris Summit showed us we can choose a different path forward: not a choice between ‘innovation’ and ‘regulation,’ but a future in which AI serves as a foundation for shared progress rather than a tool of concentrated power. The choice is ours to make.

Authors

Daniel Stone
Daniel Stone is the Executive Director of Diffusion.Au and a fellow with the Centre for Responsible Technology Australia; his research with the Centre for the Future of Intelligence at the University of Cambridge recently explored global AI policy narratives.
