The EU AI Act Enters Final Negotiations

Justin Hendrix / Oct 1, 2023

Audio of this conversation is available via your favorite podcast service.

While US Senators are busy holding hearings and forums and posing for pictures with the CEOs of AI companies, the European Union is just months away from passing sweeping regulation of artificial intelligence.

As negotiations continue between the European Parliament, Council, and Commission, Justin Hendrix spoke to one observer who is paying close attention to every detail: the Ada Lovelace Institute's European Public Policy Lead, Connor Dunlop. Connor recently published a briefing on five areas of focus for the trilogue negotiations that recommence next week.

Below is a lightly edited transcript of the discussion.

Justin Hendrix:

Connor, what does the Ada Lovelace Institute get up to?

Connor Dunlop:

Yeah, well, we get up to a bunch of things. Primarily, we are an AI and data research institute. We're independently funded and we have the remit to look at how AI and data impact people and society.

Justin Hendrix:

Where does it get its name?

Connor Dunlop:

Ada Lovelace was an icon in the UK. She was one of the first computer scientists. We have The Alan Turing Institute in the UK, so I think it was fitting to also set up an Ada Lovelace Institute, and yeah, she worked with Charles Babbage on the concept for a computer, and she was the daughter of Lord Byron as well, interestingly. So yeah, iconic for a few reasons.

Justin Hendrix:

I'm going to talk to you today a little bit about some work you've done around the EU's AI Act and various recommendations you've made along the way. Can you just remind us of the history of the act and catch us up on where we are in the process?

Connor Dunlop:

The AI Act was first proposed in April 2021. It is the first attempt globally to set up horizontal legislation on the development and deployment of AI. From the perspective of the Ada Lovelace Institute, that was a big reason for us to expand our work beyond the UK, to also look at the EU. We are getting towards the end; this should be finalized by the end of this year. They're now entering what are called the trilogue negotiations, which is when the three EU institutions come together to thrash out a final text for the AI Act. Maybe some details, if it's helpful, on the framing of the act. It's product safety legislation in the EU, which means it's focused on the AI system going on the market, and the remit is to reduce risks to health, safety, or fundamental rights.

The way they do this is via what they call a risk-based approach. So there's a tiered, pyramid-style system. The top tier is some practices which are completely prohibited; a tier below this is high-risk AI systems, which is the heart of the act and which gets the most attention in terms of what obligations high-risk AI systems would have to follow. Then below that again, there's low-risk AI, which would be subject to some transparency requirements, and then at the bottom, a lot of other AI systems that wouldn't be touched by this regulation.

Justin Hendrix:

One of the pieces of news that I suppose was salient enough to cut through in the US was the idea that the EU really had to scramble to go back and accommodate generative AI after the launch of ChatGPT and some of the potential harms of large language models and other generative systems became apparent. Is that process further along?

Connor Dunlop:

Yes, that's definitely something that has changed from the starting point. The European Commission, who drafted the initial text for the AI Act, would say that they weren't scrambling, per se. They say that they were aware of these systems, but the attention was paid to basically the deployer of the system. So they thought these would be captured once a deployer adapted a general-purpose AI, foundation model, or generative AI system to a high-risk context. As the process has developed, of course, ChatGPT and similar systems have shed new light on this type of technology.

But yeah, I think what's changed is the recognition that putting the full burden of compliance on the deployer is tricky, and arguably impossible for an SME or a startup to fully meet. To take an example, there are some obligations around data governance requirements. If a downstream deployer builds on top of a generative AI foundation model, or general-purpose AI as you want to call it, they might not have access to the source data or the model. So it's very hard for them to make the model compliant. That's one example of how the conversation has shifted. A lot of the attention now is really honing in on how to share this burden across the value chain, having some obligations on the upstream developer and some on the downstream deployer.

Justin Hendrix:

You've made a series of new recommendations as these trilogues get underway. Let's just go through them sequentially and talk about some of the highlights of each one. First, you're calling for a centralized regulatory capacity to ensure an effective AI governance framework, as you put it. What's important in this centralized regulatory capacity? You want there to be a new office?

Connor Dunlop:

Yes, exactly. So as it was originally conceived, there would have been an AI board, which was basically a board of experts from across EU member states, and that was going to be the central function for governance. In our opinion, looking at historical examples such as the GDPR, there have been challenges around enforcement when it's primarily done only by member state regulators.

So having the central function, we see it as a good way to ensure uniform enforcement across member states and to not have divergence in levels of protection. We've heard concerns from some member states, particularly smaller ones, that they don't have the technical expertise and the regulatory capacity to do complete oversight of the AI Act. So they would maybe appreciate support at a central level, where that knowledge and expertise can potentially sit and then be diffused across regulators. So yeah, that's something that we're very keen on.

I also think what the office could do is really be a way to reduce these information asymmetries that we see between regulators and developers of AI. We've looked at governance in other jurisdictions, such as the Food and Drug Administration in the US, and you do see examples of how, over time, a strong regulator can learn and upskill. This is really helpful for enforcement, so to us it makes sense to have that at a central EU level in an AI office. The other thing that we did was a gap analysis of what a central function could do compared to what a member state regulator could do, and another big one that came up in our research is monitoring and foresight activities.

That's of course vital for regulators to make future-proof regulation, and it intuitively makes sense that this could be done at a central level rather than each member state doing it on their own initiative. The final one I would add is that it could be a really nice way to establish feedback loops between developers, civil society, and affected persons and a central regulator. There's the option to set up what they call permanent subgroups in the AI office, so you could imagine developers or civil society experts sitting there, feeding into regulators what they're seeing out in the world and allowing future-proof regulation in that regard.

Justin Hendrix:

Who's going to pay for all this?

Connor Dunlop:

Again, we looked at the FDA in the US and also the European Medicines Agency in the EU. One thing that we think could be explored would be to have mandatory fees levied on developers themselves. We've also seen this with the EU Digital Services Act: some of the top tier, or the very large online platforms as they're called, also pay fees that go towards regulatory capacity. So yeah, I think there would have to be some thought given to who would pay these fees. It wouldn't just be across the board; it might be some sort of threshold approach, but that's something that we're excited about exploring more.

Justin Hendrix:

Let's talk about one of the areas that has been contentious, which is around foundation models. These are the general-purpose models whose harms may or may not be possible to discern across so many different types of applications. You say that these present novel governance challenges, so perhaps I'll ask you to just detail a couple of those novel governance challenges and what you think should be done to address them.

Connor Dunlop:

The novel challenges come from a few different elements of these foundation models. I think one of them is that they are incredibly complex pieces of technology. The developers themselves say that they don't understand the technology; a black box is the term that we often hear. So this poses a lot of challenges for accountability and also around understanding how the model will interact when deployed in the world. I think an added complication is that there is a chance these will become a new digital infrastructure. A lot of developers will be building on top of these models. Anything that goes wrong at the upstream model level can proliferate quickly and have widespread societal implications.

Justin Hendrix:

You also address open source foundation models as a key area for the trilogues.

Connor Dunlop:

Yeah, indeed. This has been one of the thorniest questions of the trilogues, and I think the questions we're grappling with are about the resources that are needed to build these foundation models. It requires a lot of data, a lot of compute, and a lot of talent to develop the algorithms, and what we're seeing, as the market exists today, is that most of these developers are closed providers or large commercial companies who might say that they open source their model, but it's not open source as we have historically understood it in terms of software.

So to take an example, Meta and Llama 2: they open sourced this, but they have a massive commercial interest in doing so. So I think that's the main question for us. We really don't want to have downstream developers building on top of an open source model that was exempt, because there's a push from some quarters to have exemptions for open source models in the AI Act. Yeah, I think that would lead to divergent levels of safety. And if it is a big commercial, well-resourced actor developing the model, we think they should also comply with the Act. I do think that this question is not settled. So what I've said there is taking a very precautionary approach.

And this is actually why we led with this central, well-resourced regulatory capacity as our number one recommendation, because we really think that you need to have a future-proof regulation which can adapt its approach to, for example, open source models. We might see innovations which mean smaller players can get in the game in terms of foundation models, like decentralized training and innovations in fine-tuning. So we think maybe take the precautionary approach to begin with, but if it does turn out that there's public benefit or public interest in true open source models, then you would find ways, I think, to alleviate the burden on them through this future-proof regulation. That's how we've thought about open source so far.

Justin Hendrix:

Did you happen to read this paper from David Gray Widder, Sarah West, Meredith Whittaker, on open source? Did that affect your thinking on this at all?

Connor Dunlop:

Yes. Yeah, we definitely did read this. We found it super informative. Yeah, I think the debate was a little bit binary and like I said, trapped in this idea of what open source has been historically in terms of software. So I think that paper was very useful for getting concrete examples around how commercial interests have used open source in the past to advance business interests. So one example is how Google open sourced Android to make a new infrastructure for applications. So you can see arguably similar trends in open source foundation models. So yeah, that for sure informed our thinking and it added some very welcome nuance to the debate.

Justin Hendrix:

So for my listeners, that paper is called "Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI." Let's look at some of the ideas around mitigating risk. So you've got various ideas about how to do that through the AI lifecycle. What are some of the key ones?

Connor Dunlop:

What we're really aspiring to is to develop what we're calling an ecosystem of inspection. There will be pre-market conformity assessments for AI systems, but we think it's not a one-and-done scenario. As I've said, they can interact in unusual ways in the world, and they can also learn through the lifecycle. So what we're really excited about is strong pre-market checks. We would be very keen to see third party audits before going to market; we think that's a good safety mechanism. But then throughout the lifecycle, we also want to find ways where auditors and other interested parties can have this thriving ecosystem to test for vulnerabilities and see how the models interact when out in the world. One example of that is what we, well, not we, have termed, it's a term from the Digital Services Act, which is vetted researcher access.

So we would basically like similar provisions in the AI Act for vetted researchers to be able to access foundation models, potentially via API, to stress test them. They do this with a completely different lens from what an internal red teaming process would be, and with different incentives. So we're quite keen on that. And yeah, relatedly, in terms of this ecosystem, we're very keen to see what we call an EU benchmarking institute, but basically this could be funding for benchmarking initiatives across EU member states. We think this would really help with the science of measurement for AI systems and doing effective evaluations. We're often hearing that's missing so far. So I think the EU could be a driver to build that ecosystem of inspection and ecosystem of measurement.

Justin Hendrix:

Does the institute exist in a university or in some other context?

Connor Dunlop:

There are national metrology experts who already do similar work in different sectors in the EU. We think there's a lot to be learned from that. I actually read today that in November they're going to have the first of what will be an annual AI benchmarking convening. Those types of initiatives seem good to us. Academia would play a role as well. I think it's just getting this expertise, which is a bit decentralized, and finding a way where it can all feed into one place, which seems really useful.

And we're especially seeing this in the context of, maybe this is going too deep or too niche, but in terms of EU standard setting, because at the minute the standard setting processes, which will operationalize the high-risk requirements of the AI Act, are dominated by, well, first of all, technical expertise and not so much socio-technical expertise, and also a lot of industry voices. So we think finding ways, such as via benchmarking expertise and national metrology expertise, they can support and add independence to the standard setting process.

Justin Hendrix:

And is industry going to pay for this one too?

Connor Dunlop:

I think if we do find a good way to have certain thresholds, that will require some thinking, but if we find a way to set some thresholds for the most advanced or the most impactful AI model developers, I think they could probably play a role in this as well. And to be fair, we're all hearing all the time from AI labs that there is not a good science of measurement out there; there are not adequate evaluations out there. This is some of what we're seeing in the UK with the UK government working on evaluations for foundation models. So it's clear that's a gap, and yeah, I think they should also be willing to possibly contribute to filling that gap with some fees.

Justin Hendrix:

So let's talk a little bit about high-risk categorization and generally the sort of risk-based approach in the regulation. What should be on the list? What should be categorized as high risk?

Connor Dunlop:

So the starting point for the high-risk categorization was that they basically set a list of sectors which would be deemed high risk, and if you deploy an AI system in that sector, you'd be categorized as high risk. Some examples would be education or the judiciary; I think there were eight in total. A lot of them focused on the public sector. So that was the starting point: if you deploy in a high-risk area, you're deemed high risk. What has changed over the course of the negotiations is that the European Parliament and the Council of the EU have suggested adding an additional filter to this high-risk categorization. So it's not just based on whether you deploy in a high-risk area; there are some provisions that could exempt you. One of the institutions suggested that if the decision was "purely accessory," which is the language that they used, but it was used in a high-risk area, you wouldn't have to follow the high-risk obligations.

Similarly, if the developer or the deployer thought that the system didn't pose "substantial risk," which was the language used by another EU institution, then they could get out of the high-risk obligations. For us, that seems problematic. I think it puts a lot of [inaudible 00:18:09] in the hands of developers and deployers to self-exclude themselves from the rules. Again, like I mentioned earlier, we're quite keen on a precautionary approach, so we think it's better to stick to the original proposal: if you deploy in a high-risk area, you have the high-risk obligations. And maybe there would be a way, with this flexible, well-resourced regulator, that you could then over time give implementing guidance on which types of uses might not actually be high risk.

Justin Hendrix:

So let's talk about your last area, which is protection and representation for affected persons. What needs to be included here? What forms of redress need to make their way through the final version of the AI Act?

Connor Dunlop:

So yeah, this was something that we saw as a gap from the very start: if you look at the whole AI value chain, the provisions ended with the user, which was basically the deployer of the system. There wasn't any sort of language on affected persons, so the people who will be affected by the decisions of AI systems. We think it's important, first of all, just to have the definition of affected persons in there. And I think that's the first step to give legal footing to protections for these affected persons. And as you mentioned, we really want to see a comprehensive remedies and redress framework for affected persons. One example of that is the right to lodge a complaint with a supervisory authority or to pursue an explanation of the decision-making of an AI system. So we see these as the first steps towards accountability.

They also seem important for another piece of legislation which closes the circle of the AI Act, which is the AI Liability Directive. That will also be very helpful for providing protections if something goes wrong with an AI system. But we do think this footing within the AI Act is also necessary on top of the AI Liability Directive. And yeah, maybe a final point around the affected persons piece that we're quite excited about, as I mentioned before, is using these subgroups of the AI office potentially as a mechanism to have more democratic oversight of future AI governance. So we can imagine something like a citizens' assembly sitting as a permanent subgroup. We think it would need to be further developed, but we think that's an idea that would be exciting to explore. I think that would be very helpful for offering an all-encompassing protection for affected persons in the AI Act.

Justin Hendrix:

I want to ask you about industry's influence at this stage of things. We know that industry has put a lot of effort into lobbying around the AI Act. Billy Perrigo in Time this summer had some documents that he was able to unearth that looked at how OpenAI in particular had lobbied to water down the AI Act. Are you able to see industry's influence on the legislation at this point? Are there aspects of it where you feel like industry has got what it wanted?

Connor Dunlop:

That's a good question. Industry lobbying is not a new thing at all in the EU; it has always happened in terms of digital regulation and beyond. I feel like over time there has been a bit more cynicism about industry approaches to lobby and to water down the act. There's more of a desire to find independent expertise than maybe there was before. It's true that a lot of the expertise is inside the industry, but I think there's more and more independent expertise popping up. So I think that's been a welcome change.

I think in terms of how happy the industry will be with the end result, it's hard to say. A lot will be decided in these final months. Speaking personally for us at the Ada Lovelace Institute and broader civil society, I think the European Parliament did a stellar job with its text in that it didn't succumb to industry efforts to water down some of those elements. But let's see, because now there are three EU institutions in the game for the trilogue stage. And yeah, I think we'll know by the end of this year whether industry will be happy with the end result.

Justin Hendrix:

OpenAI was particularly concerned about being designated a high-risk system, or having its foundation model, GPT, designated as a high-risk system. That ties back to some of your recommendations and some of the concerns around the high-risk categorization.

Connor Dunlop:

And to be fair, on that one I can see the point that with a foundation model or general-purpose AI, the risk also comes from the area of deployment. So I think that's why the focus has become around what responsible development looks like and what the upstream developer can do to make sure a product is safe before it goes to market. That goes a little bit beyond just saying it might be high risk; it's a bit more refined and a bit more tailored, and I think that's been a welcome change over the process.

Justin Hendrix:

The Ada Lovelace Institute is based in the UK. The UK is taking maybe a more laissez-faire approach. What can you tell us about how things are evolving in the UK?

Connor Dunlop:

My take, a little bit from afar in Brussels, is that we're excited that the AI Safety Summit is happening. We think it's great that safety is the lens they're applying. As an institute, we would've liked to see a broader scoping of safety; at the minute, it's only talking about frontier AI risks. A more all-encompassing approach to safety would probably be welcome from our perspective, to look at the harms happening out in the world today. But I think it's good the UK is waking up on this topic, at least.

Justin Hendrix:

You've already mentioned that the act will likely be approved by end of year. What is the next point on the calendar? What is the next date we should be looking for?

Connor Dunlop:

So there will be a handful of what they call trilogue negotiations. I think there will be some on the 2nd and 3rd of October. The number of trilogues left to happen is a little bit in flux, but we should have at least five to six more, I would guess. So those are the big milestones, but they happen in very closed-door settings, so we don't always know exactly what's going on in there. A lot of the work then is just happening on the side: the policy advisors work on the technical text in between the trilogues, so that's the day-to-day grind. And then yeah, each trilogue is the big milestone, I would say.

Justin Hendrix:

So still some room for surprises?

Connor Dunlop:

Yes, I think definitely. Based on history, for example with the Digital Services Act and other pieces of legislation, you can see quite substantial changes during the trilogues. So yeah, people in Brussels like us are definitely paying attention to how this develops this year.

Justin Hendrix:

Well, Connor, I appreciate your close attention to this. I also appreciate seeing Tech Policy Press cited in the recommendations. So thank you very much for that, and I hope perhaps we can have you back on to talk about this when the new draft is available.

Connor Dunlop:

Yeah, I would love to, Justin. It was really a pleasure. Thanks so much.
