Podcast: Paths Diverge at the Paris AI Action Summit

Justin Hendrix / Feb 16, 2025

Audio of this conversation is available via your favorite podcast service.

At the Paris AI Action Summit on February 10-11, remarks by EU and US leaders indicated significant divergence on how to think about AI. But on balance, nations are moving decisively toward innovation and exploitation of this technology and away from containing it or restricting it.

In this episode, I surface voices from the Summit, as well as reactions and discussion on these matters at this year's State of the Net conference on February 11 in Washington, DC, including comments by Center for Democracy & Technology vice president for policy Samir Jain, Abundance Institute head of AI policy Neil Chilson, and former Biden administration assistant director for AI policy Olivia Zhu.

Below is a lightly edited transcript of the episode.

Justin Hendrix:

Good morning. I'm Justin Hendrix, editor of Tech Policy Press, a non-profit media venture intended to provoke new ideas, debate, and discussion at the intersection of technology and democracy.

News Reel:

The great Paris Exhibition of 1937 is opened by the president of the Republic, Monsieur Lebrun, who is accompanied by the Premier Mr. Léon Blum, and almost every member of the French Cabinet. Mr. Lebrun visits first the palace of ...

Justin Hendrix:

Paris has always been an excellent host to international gatherings. France built the Grand Palais to host the Universal Exposition of 1900, where world leaders and those in attendance marveled at new inventions such as the escalator, talking films, and the dry cell battery. 125 years later, this week, French President Emmanuel Macron welcomed leaders to the Grand Palais once again to talk about the latest technology, artificial intelligence, at the Paris AI Action Summit on February 10th and 11th.

France and India co-chaired a gathering of heads of state and government, leaders of international organizations, CEOs of small and large companies, academics, representatives of NGOs, and artists and members of civil society for the third in a series of international discussions on policy issues surrounding AI that started in 2023 at Bletchley Park under the banner of AI safety. The Paris Summit's main themes were public interest AI, the future of work, innovation and culture, openness and trust in AI, and global AI governance.

As multiple contributors noted in Tech Policy Press ahead of the summit, the framing of the Paris discussions seemed to indicate a shift among leaders away from concerns about safety and regulation and toward innovation and exploitation of this technology. French President Emmanuel Macron brought up this tension during his remarks and framed AI as a matter of sovereignty for all nations.

Emmanuel Macron:

First, we're all convinced that this is a time of innovation and acceleration, and this is why, in the course of the discussions, and we've seen it today, that we have this dilemma of what's positive, what's negative, whether we're afraid or feeling there's hope. And we'll see that the actions will speak for themselves. We know that we want to be involved in this. We realize that this is competition and we are decided to do this because this is a matter of sovereignty for all participants, so we're living a moment of great innovation and extreme speed. It's a time of acceleration, acceleration of choices, and of investments.

Justin Hendrix:

President Macron's co-chair, Indian Prime Minister Narendra Modi, put it this way.

Narendra Modi:

Governance is not just about managing risks and rivalries, it is also about promoting innovation and deploying it for the global good.

Justin Hendrix:

Other leaders also brought up this supposed tension between regulation and innovation. For instance, Henna Virkkunen, who serves as executive vice president of the European Commission for Technological Sovereignty, Security and Democracy, laid out the EU regulatory stack and concerns about safety and democracy.

Henna Virkkunen:

There we have an important principle that we want to have a digital environment that is fair and safe and democratic, and it's very much the cornerstone of all our digital legislation, especially now when we speak about AI. Of course, the AI Act is a very crucial part of that, but also the Digital Services Act and the Digital Markets Act, so these are the main regulations in this field.

Justin Hendrix:

But she also spoke of concerns in Europe that regulation is hindering innovation and growth.

Henna Virkkunen:

I think it's also very important that we are looking, that we have an environment that encourages innovations and investments. And we have to take it now very seriously that we have faced a lot of criticism from our start-ups and from our SMEs, from our industries, that we have too much bureaucracy and administrative burden in the European Union.

Justin Hendrix:

European Commission President Ursula von der Leyen also spoke of cutting red tape.

Ursula von der Leyen:

This is the purpose of the AI Act—to provide for one single set of safe rules across the European Union, 450 million people, instead of 27 different national regulations and safeties in the interest of business. At the same time, I know that we have to make it easier and we have to cut red tape and we will.

Justin Hendrix:

The Europeans came ready to announce big investments in AI, including public sector resources they hope will provide a substrate for start-ups and companies to make progress.

Ursula von der Leyen:

I welcome the European AI Champions Initiative that pledges 150 billion euros from providers, investors, and industry. And today, I can announce that with our InvestAI initiative, we can top up by 50 billion euros, so thereby, we aim to mobilize a total of 200 billion euros for AI investment in Europe. We will have a focus on industrial and mission-critical applications. It'll be the largest public-private partnership in the world for the development of trustworthy AI. And finally, cooperative AI can be attractive well beyond Europe, including our partners in the global south. And in this spirit, we fully support the AI foundation that is being launched today.

Justin Hendrix:

But despite their similar emphasis on balancing the need for regulation with growth and innovation, it was clear that different visions and priorities were on display in the speeches delivered by various world leaders. Perhaps most significantly, the gulf between Europe and the US appeared only to widen. The summit concluded with United States Vice President JD Vance's first speech on the international stage. Vance promised American AI dominance and warned other countries, particularly the European Union, against tightening the screws.

JD Vance:

When conferences like this convene to discuss a cutting-edge technology, oftentimes I think our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite. Our administration, the Trump Administration, believes that AI will have countless revolutionary applications in economic innovation, job creation, national security, healthcare, free expression, and beyond. And to restrict its development now would not only unfairly benefit incumbents in this space, it would mean paralyzing one of the most promising technologies we have seen in generations.

Justin Hendrix:

Vice President Vance trumpeted American dominance in the field.

JD Vance:

The United States of America is the leader in AI, and our administration plans to keep it that way. The US possesses all components across the full AI stack, including advanced semiconductor design, frontier algorithms, and of course, transformational applications.

Justin Hendrix:

But he also took a stand against regulatory efforts like those in Europe, which he called a mistake.

JD Vance:

Now, this administration will not be the one to snuff out the startups and the grad students producing some of the most groundbreaking applications of artificial intelligence. Instead, our laws will keep big tech, little tech, and all other developers on a level playing field. Now with the president's recent executive order on AI, we're developing an AI action plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential.

Now, we invite your countries to work with us and to follow that model if it makes sense for your nations. However, the Trump Administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints. Now, America cannot and will not accept that.

Justin Hendrix:

The Vice President targeted Europe's Digital Services Act and the General Data Protection Regulation, in particular.

JD Vance:

Companies are forced to deal with the EU's Digital Services Act and the massive regulations it created about taking down content and policing so-called misinformation and, of course, we want to ensure the internet is a safe place. But it is one thing to prevent a predator from preying on a child on the internet, and it is something quite different to prevent a grown man or woman from accessing an opinion that the government thinks is misinformation.

Meanwhile, for smaller firms, navigating the GDPR means paying endless legal compliance costs or otherwise risking massive fines. Now, for some, the easiest way to avoid the dilemma has been to simply block EU users in the first place. Is this really the future that we want, ladies and gentlemen? I think the answer for all of us should be no.

Justin Hendrix:

Just hours after the speeches at the summit concluded, I was in Washington DC for the annual State of the Net Conference, where numerous speakers addressed AI and what should be done to regulate it. For instance, California Representative Jay Obernolte discussed the recommendations of the House Task Force on Artificial Intelligence's final report, released late last year.

Jay Obernolte:

We embrace something that we call sectoral regulation, which is very different than is being proposed in other parts of the world. So for example, the European Union has proposed a different model where they are creating a nearly universal licensing requirement for anything but the lowest risk use cases for AI. And they're spinning up a brand new bureaucracy to create standards and tests for the issuance of those licenses. And we think that that is not something that's well-suited to our American system of regulation. Instead, we suggest empowering our existing sectoral regulators to regulate within their spaces.

Justin Hendrix:

But in his remarks, Senator Edward Markey, a Democrat from Massachusetts, decried the relationship between tech firms and the Trump Administration and warned of a failure to contain consolidation.

Edward Markey:

Just look at where big tech oligarchs sat at the inauguration. They sat in front of Trump's cabinet. James Madison spinning in his grave thinking about this, this very thought that Article I is the Congress, Article II is the president, Article III is the Judiciary, and now we have Article 3.5, the Muskocracy, which then trumps all of the original three articles of the United States Constitution. I don't know what those minutemen and women in Lexington and Concord are thinking. I don't know what James Madison is thinking, but this is not what they were fighting for.

They were fighting to make sure there was not a king, there was not authoritarianism. That's where we are right now. And tech oligarchs closing up to an authoritarian president is a dangerous combination, both for the free and open internet and for our democracy. And finally, after over a decade of the government failing to regulate the big tech platforms, the rise of artificial intelligence threatens to double down on our consolidated tech ecosystem.

Justin Hendrix:

One panel discussion, moderated by Politico's Mohar Chatterjee, was devoted specifically to policy frameworks for AI regulation in the US, and panelists spent quite a bit of time considering the divide between the EU and US approaches. For instance, Samir Jain, vice president of policy at the Center for Democracy & Technology, wondered what it will mean if the US decides to go it alone.

Samir Jain:

So there is that conversation going on in Europe as well. But I think the question of how do these different tensions play out? Does the US in fact continue to lead the world? Do people follow the direction that the Trump Administration is taking, or does the Trump Administration go off in one direction, Europe and others go off in a different direction where they're continuing to emphasize safety and at least some basic regulations? I think it's up in the air right now whether or not the US will continue to lead on AI policy or whether it just sets itself apart and there's a real tension there. And I think that that question has really been set up by Vice President Vance.

Justin Hendrix:

Olivia Zhu, former assistant director for AI policy in the White House Office of Science and Technology Policy under the Biden Administration, also pointed to the burden of European regulation on American companies.

Olivia Zhu:

So before I joined the Biden administration, I worked in the tech industry for about six years building AI models at Microsoft Research and Amazon, so speaking from that experience. So I was actually around at these tech companies when we were implementing GDPR and the companies I worked at, we took it very seriously. It was a very high cost. I managed software engineering teams and we had software engineers who were pretty much full-time working on implementing GDPR. Legal teams, obviously, very dedicated to understanding the implications of GDPR, translating that into requirements for companies, and then having the software engineers implement that.

And so as someone who directly managed tech teams, I really felt that impact. And so I do not imagine that, for example, with compliance with the EU AI Act, companies are going to take it any differently. I imagine they're going to be very serious about this; it has very serious legal ramifications, and I think it's going to take a lot of energy from companies to comply with things like the EU AI Act.

Justin Hendrix:

Neil Chilson, head of policy at the Abundance Institute, a nonprofit that says it wants to create space for emerging technologies to grow, thrive, and have a chance to reach their full potential, said he thinks the pushback on EU regulations is already having an effect.

Neil Chilson:

I think there's going to be a little bit of a shift. I think GDPR is a great example. So I think the companies took it very seriously because they thought this was part of the deal. If we take this really seriously, we spend a ton of money, we're going to be good with the European Union. And that has not happened. And I think we've seen over and over and over the European Union levying enormous fines against American companies who are spending tons of money trying to comply with European laws. And I think what's different, we can already see some evidence of this. I don't remember social media companies not rolling out products or making public announcements about not rolling out products to Europe because of regulation. We've seen that repeatedly in the AI space already. People who are saying, "Look, we know you really want this technology. We know you're eager for it. Your regulations are making it hard."

And so I think that's what's driving a lot of the questioning in Europe about this approach. And another big difference is that we knew a lot more about the mechanisms of privacy in many ways online when GDPR was passed than we know about the mechanisms of AI governance when the AI Act was passed. And so I think the EU jumped into that, that that's maybe their comparative advantage, is passing laws quickly. I think they're starting to think about whether or not that's good for their long-term advantage. And so I think it will be different this time. I think the companies are pushing back. I think they have an ally in the Trump Administration to push back against uneven application of European law against American companies, and so I think it will look different.

Justin Hendrix:

CDT's Samir Jain suggested this pushback may show up in how the EU chooses to implement and enforce its laws.

Samir Jain:

I mean, look, this is now where the rubber is hitting the road on the AI Act and the DSA, right? They've passed, but they are very high-level laws, and the real question is how do they get implemented and how do they get enforced? And that's the fight that's going on, that's really just starting right now. And I think that's going to be the subject of a lot of debate and push and pull, both within Europe and between Europe and the United States.

We've certainly seen American companies pushing back against it. I don't think it's ... I think Mark Zuckerberg, for example, made quite clear that part of the reason for some of his changes in policies at Meta is because he's hoping and expecting that the Trump Administration is going to push back against the EU. I don't think that's a particular secret.

Justin Hendrix:

The panel also discussed how state laws on AI are advancing. The Abundance Institute's Neil Chilson said some of the ones that target specific harms make sense, while others, he said, appear to be jumping the gun.

Neil Chilson:

There's a wide array of state laws. I think there's already been well over 300 introduced in these current state sessions. I think there were over a thousand last session. And so it's hard to talk in real generalities, but some of what they're responding to is concerns about real harms. And so there's laws on the books focused on AI-generated deepfakes, non-consensual deepfake pornography. Those types of targeted laws seem to make some sense.

Justin Hendrix:

CDT's Samir Jain was more supportive of the comprehensive approach taken by states like Colorado.

Samir Jain:

I think to some degree, they're just responding to a vacuum, right? I mean, the federal government isn't really acting at the moment. Certainly Congress isn't passing. And so I think states are stepping in. I mean, I agree with the idea that we should be responding to real harms, non-consensual intimate images is certainly an example of that. But I think another real harm, and you're seeing Colorado and others respond to that, is we know these systems are being used in ways that make really important decisions about people, about whether or not they get a job, whether or not they get access to a loan, whether or not they get access to housing.

And the systems are being used in many ways in making those decisions without any transparency around the fact that they're being used at all, let alone how they're making those decisions, whether or not they, in fact, are biased in some way against particular groups of people. And so I think it is really important that someone step in and try to mitigate those harms.

Justin Hendrix:

Yet former Biden administration official Olivia Zhu worried about a potential patchwork of competing regulations.

Olivia Zhu:

I appreciate the states' leadership in stepping up to address real harms. I think at the same time, I'm worried about creating, in the absence of federal leadership, just a patchwork of different state frameworks, which is going to ... when we talk about government efficiency, that's a huge burden on companies to look at 50 or even 20 or even 10 different frameworks and understand how they can comply in each of those states. Not to mention the amount of government burden that is.

When a state enacts a law, then they've got to enforce it. They've got to do that oversight. And frankly, technical AI talent in government, especially at the state level, I think we could all agree, could be improved and always needs improvement. And so when we think about the number of people who are going to be doing similar, duplicative work across multiple states, it's not super efficient. And so while I do really appreciate the states' leadership, I am also worried that the lack of consistent federal policy would create a vacuum in which we've got 50 different duplications of similar types of laws.

Justin Hendrix:

Whether the US will advance a federal policy or a patchwork of state policies remains to be seen. But the transatlantic divide on AI regulation might shape the global AI ecosystem in the years to come. Where will other players like India, China, and developing nations end up? What's clear is that AI is no longer just a technological challenge, it's a geopolitical one, as President Macron put it.

Emmanuel Macron:

We don't want a system where those who are in power and the rest are vassals. We want to have this serving the entire planet. And I don't believe that the world will be divided between the global north and the global south just looking at one another. Because if we want to have trust in artificial intelligence, we have to make sure there is equitable access to these innovations in all continents. And this is a challenge for both public authorities and the private sector. And, of course, this is a challenge in our societies too. We must not let a divide be established between the older generation, younger generations, or in certain parts of our countries with differences in access to artificial intelligence.

Justin Hendrix:

But while the Paris AI Action Summit was framed as a moment of unity, it ended without the US and UK signing the summit's final communique, which garnered the signatures of more than 70 other nations, including India, Brazil, China, Canada, and the EU. Yet, no matter what nations agree to at these big international events, activists will continue to push for transparency and more democratic forms of governance. On the sidelines of the summit, Signal President Meredith Whittaker had this to say to Sky News.

Meredith Whittaker:

…where the control is still held by a handful of companies and where ultimately AI, whether you call it an agent, whether you call it a bot, whether you call it something else, can only know what's in the data it has access to, which means there is a hunger for your private data, and there's a real temptation to do privacy-invading forms of AI.

And frankly, it is fairly absurd to me that any organization in such a pivotal position with such heavy responsibility infiltrating so many core functions of our social and economic life, would assume that we should just trust rhetoric when we should be asking for receipts. We should be asking for access. We should be demanding open source, verified, democratically governed systems if they're going to have such a pride of place in our world.
