How the EU's Voluntary AI Code is Testing Industry and Regulators Alike

Ramsha Jahangir / Jul 13, 2025

Audio of this conversation is available via your favorite podcast service.

Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI.

The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said.

At the same time, both European and American tech companies have raised concerns about the AI Act’s implementation timeline, with some calling to “stop the clock” on the AI Act’s rollout.

To learn more, I spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.

What follows is a lightly edited transcript.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy of the European Commission.

Ramsha Jahangir:

Thanks so much for being here, Luca, during this very busy time. Maybe let's start with this: can you briefly explain what the Code of Practice on General Purpose AI is, and how this voluntary Code relates to the broader EU AI Act and its rollout?

Luca Bertuzzi:

Thank you, Ramsha, for inviting me. The Code of Practice is basically meant to show model providers how to implement the AI Act's provisions on general purpose AI. So what that means in practice is that yes, it's a voluntary tool, but it will set the regulatory benchmark companies will have to meet when implementing the AI Act. That is why it's very difficult to overstate its importance. I mean, the AI Act is a comprehensive AI law. It touches upon very different aspects of the AI value chain, including AI models. So what we are looking at is potentially the EU's first attempt to set the regulatory yardstick for AI companies, because AI models are where companies are most likely to follow EU standards in other markets as well; it's very difficult, of course, to change an AI model compared to an AI application. So this is a very significant development, and I'm happy to be here to talk to you about it.

Ramsha Jahangir:

So the keyword here is voluntary, right? So what does signing onto the Code mean? And do you expect companies to sign? And have any announced yet that they will sign it?

Luca Bertuzzi:

The Code was only recently published, and a lot of companies are still analyzing it. Mistral, France's darling AI startup, has already announced it will sign. To understand the implications, I have to explain a bit of the politics behind it. Basically, what has happened in Brussels is that the policy narrative around tech regulation has completely changed in the past year. The AI Act was initially touted as the next GDPR: Brussels setting the rules for the rest of the world. With the new legislative term, the narrative has completely shifted, because there was a realization that Europe was lagging behind in the tech race, and that the AI Act, like the GDPR, could basically be an obstacle to catching up. So the European Commission is under pressure to show that the AI Act is not a problem for technological innovation. And the best way to demonstrate that is if AI companies sign the Code.

So politically, right now, it is very sensitive. There are also discussions ongoing about delaying parts of the AI Act because the technical standards to implement it are late. Industry signing on would mean a much-needed win for the Commission. And the Commission, let's not forget, will be the regulator for AI models. So what that means in practice is that the Commission has a strong incentive to secure support for the Code. How it is doing that is basically by suggesting a grace period. Now, this was initially not spoken of publicly; it was proposed in private meetings with model providers, but after press revelations the Commission has now admitted it. Long story short, what this means in practice is that the Commission is saying to companies, "If you sign, you'll be on the good guys list. If you don't sign, you'll be on the bad guys list," and you are basically putting a target on your back.

In fact, the AI Act says that enforcement only starts in August 2026, including for the rules on general purpose AI. But the Commission could start building a case. It could start sending out requests for information as of September, and as soon as the enforcement powers kick in, it could launch a full-scale investigation. I mean, it's all politics, and a question of how far companies are willing to take a risk, because we are talking about regulatory risk. Now, how much the Commission will follow through on this potential threat remains to be seen. We haven't seen very muscular enforcement of other digital laws like the Digital Services Act and the Digital Markets Act, but this is the game they are playing, essentially.

Ramsha Jahangir:

A couple of interesting points there that I want to unpack. One is the grace period you mentioned. Do you anticipate companies rolling out their models now just because they can during this grace period? And I guess the other thing connected to this is the staggered timeline of the law itself and the way the different deadlines work. Will all of this together push companies to speed up and release new models before enforcement begins?

Luca Bertuzzi:

So basically, there is already a provision in the AI Act that says, "If your model is already on the market, you won't have to comply until 2027," if I'm not mistaken. It's a very complex piece of legislation. But anyway, there is already a sort of grace period for models already on the market, because of course it takes time to bring them into line. Now, it is less clear what happens if you update or fine-tune that model. Would you still enjoy such a grace period? Or, if the change is significant enough, would that be considered a completely new model? But I think at the end of the day, product teams in these international tech companies don't really care whether there is a grace period under the AI Act. They will roll out their products anyway, and then it's up to the legal team to figure it out. So I wouldn't say anyone is holding their breath for this.

Ramsha Jahangir:

And it is August 2027, so you were correct; just confirming that for our listeners. So okay, diving into the substance of what the Code requires: what is covered, and what still needs more guidance that will come in further documents, which I think you reported are likely to be published next week?

Luca Bertuzzi:

Yeah, so indeed, we don't have the full picture yet. The Commission has yet to publish guidelines on the AI Act's general purpose AI rules and a template for model providers to disclose their training data. Now, to what extent the Code of Practice and the guidelines match will matter a great deal. For instance, the guidelines will define things like what a general purpose AI model is, and how you define fine-tuning: where is the line between fine-tuning and creating a completely new model? These, of course, are key questions not only for model providers but for the whole downstream value chain, right? If you're a company, you're much more likely to fine-tune a pre-existing model to your needs than to create a new one. There are only a few model providers at the frontier of AI development, whereas there could be thousands of fine-tuners.

So these are critical questions. We are expecting the guidelines as early as next week, but they will for sure be published by the 2nd of August. The template, we simply don't know. Officials made us understand, between the lines, that it's not a given that it will be published by the 2nd of August. And the reason is that this template is super sensitive. If you're asking an AI company to disclose, even if it's just a summary, the data that it fed into its large language model, you are opening the flank to litigation from rights holders and to investigations from data protection authorities. And of course this is incredibly sensitive, because we know that so far AI models have been developed in a sort of gray area in terms of copyright law and data protection rules. The question is why the Commission is holding back this template. One theory is that it could face a very strong backlash either from the AI companies or from the rights holders.

And so they're just waiting for the right moment to play that card. I suspect the right moment could be in the middle of August, when everyone in Europe is away on holiday, so they get as little reaction as possible. But let's see. If that's actually the case, I wonder how many model providers will commit to the Code before seeing the template first. By the way, let me also say that this template is very significant not only for rights holders and data subjects in Europe, but also in other jurisdictions. I know rights holders are looking at it from the US, where they could launch litigation as well based on these data disclosures. Because of course the main problem for rights holders right now is that they have a hard time proving that their content has been used to train AI models. So this could perhaps be an even more significant development than the Code of Practice itself.

Ramsha Jahangir:

Absolutely. I mean, in the DSA context we've also just seen the Data Access Delegated Act published. So there's a lot of hope and expectation from researchers and civil society tracking these laws to get the most out of them. But as we've seen in the DSA context, it's not as simple as it seems. A lot needs to go in the right direction for this to work. On-

Luca Bertuzzi:

It's never as easy as it seems. When you're talking about enforcing these rules against these massive companies, of course, they have a lot of leverage in telling you what is technically possible and what's not. But indeed, as you say, the US research community has been looking forward to these data access provisions, and hopefully they will open the black box of social media recommender systems and other algorithms.

Ramsha Jahangir:

That takes me to the question of how ready the AI Office is, and the role of the AI Board in ensuring that all the timelines are met and enforcement unfolds smoothly. From your perspective, how ready are the AI Office and the AI Board?

Luca Bertuzzi:

For the listeners, the AI Board is basically the EU member states. It's the national authorities, which will also have a key role in enforcing the AI Act on AI applications, whereas the Commission will be in the lead for policing AI model providers. I mean, the short answer is everyone is struggling. The Commission has had a hard time keeping up with the legal deadlines. We have seen guidelines delivered late. The Code of Practice itself was meant to go out on May 2nd, so there is a delay of almost three months. And the national authorities are meant to be appointed by the 2nd of August this year, and basically almost no EU country is there yet. They're still in the lawmaking phase, where they have to pass national laws to set up these national regulators and give them enforcement powers.

How did we arrive at this situation? I think the AI Act has unrealistic deadlines. This is partially because the Commission, as a regulator, was used to the much more relaxed timeframes of antitrust and competition cases, and because members of the European Parliament pushed during the political negotiations to set very tight timelines to ensure the AI Act is up and running as fast as possible.

Now, the fact that it's in the law doesn't mean it's happening in reality. For instance, the European Commission has been looking for a middle manager for its AI safety unit for almost a year, and the position is still vacant. And the AI Act is all about AI safety, so you can draw your own conclusions from that alone. They're also looking for a lead scientist and struggling to find the right person; that position is still open, too. The discussion about pausing parts of the AI Act is not related to the public authorities, though. It's related to the technical standards.

The AI Act is part of this product safety legislation that basically sees technical standards as a way to demonstrate compliance with the legal requirements. The problem is the AI Act provides two years to complete the technical standards for AI applications, and normally a good standard takes up to four years. It's a very complex process: you need to build consensus among different stakeholders, and it's not as if you can put a gun to their heads. This is all voluntary; these are companies and civil society organizations giving up their time for free to develop a technical standard. So now this AI pause, "stop the clock" in EU jargon, is on the table. There will be a discussion at the next AI Board meeting in September, where the Commission will present some options on how to move forward, and then there will be a ministerial meeting in October that will likely rubber-stamp what has been decided at the AI Board level.

So I think the most likely scenario is a pause of around one year for the AI Act's high-risk requirements, which was actually the original position of the EU member states. You could say that if they have more time to develop a standard, that's good. But the problem is, when you have such tense negotiations on something where commercial interests are so high, you sort of need a deadline to reach an agreement. If you postpone the deadline, are we sure we won't be in the same situation one year from now? I'm just putting this out there because I need to be critical, of course. But what I can say is that we will need to watch the next AI Board meeting in September to see what the decision is.

Ramsha Jahangir:

And you're not alone in being critical. A lot of civil society actors, including many Tech Policy Press contributors, have said they had limited opportunity to shape the outcome. Most recently, I also saw a group of MEPs argue that the Commission had allowed last-minute removals of elements around public transparency and introduced weaker risk assessment and mitigation provisions in the Code. So there's definitely a big concern about industry shaping the outcome. And that actually brings me to my final question related to the Code, which I don't know if we addressed before: is it possible for the industry to pick and choose which provisions they adhere to?

Luca Bertuzzi:

Yeah. So in terms of who won the lobbying battle around the Code, what I can say, having spoken to a lot of different stakeholders, is that no one is happy. Usually that means a compromise has been reached, or that expectations were impossible to meet. In the last iteration, what we saw is that civil society is mildly happy; rights holders are not particularly happy, though I have yet to find a rights holder that is happy, to be honest; and model providers are really not happy. And a lot of their decision, I think, will depend on the guidelines and the template. So overall, I'm not sure whether a delicate balance was reached or whether it could have been done better. Under the current circumstances, the drafters of the Code were under immense political pressure, and we will see if there is sufficient industry uptake to call this a success.

But let's also keep in mind that codes of practice are meant to be co-regulatory instruments. There was a strong push from the European Parliament to also involve civil society and other stakeholders, rights holders, and downstream players. But essentially, what is in the AI Act is that this was meant to be an instrument developed with model providers, with contributions from other stakeholders. I hear concerns from all sides, but I think overall, so far, model providers are the ones that seem the most unhappy with it. Will that mean they won't sign? I'm not a hundred percent sure of that. And coming to what you were saying: will they be able to commit to only parts of the Code, or do they have to commit to the entire thing? The Code is a voluntary instrument, so it would make sense that you could decide to use parts of it to demonstrate your compliance and find your own way to comply with the rest of the AI Act.

To be honest, that would also provide a good way for the Commission to save face if it sees there is not enough industry uptake for the Code of Practice. But the problem is that if you start saying, "You can cherry-pick the commitments," you might find yourself in a situation where all the key model providers decide to opt out of the copyright measures or the risk mitigation measures, which basically means that part of the Code has no significance, and the idea that the Code could set the international benchmark for how AI models handle safety and transparency would be crippled from the start. So I think this is really the last card the Commission will play in this very delicate political game, and it remains to be seen whether they will play it.

Ramsha Jahangir:

On that note, thank you so much, Luca. This has been very insightful, as expected. I really appreciate your time and also the fantastic reporting that you've been doing on the AI Act and other related topics. So thank you so much for all your work.

Luca Bertuzzi:

My pleasure. Thank you.

Authors

Ramsha Jahangir
Ramsha Jahangir is an Associate Editor at Tech Policy Press. Previously, she led Policy and Communications at the Global Network Initiative (GNI), which she now occasionally represents as a Senior Fellow on a range of issues related to human rights and tech policy. As an award-winning journalist and...
