Exploring Global Governance of Artificial Intelligence

Justin Hendrix / Jun 25, 2023

Audio of this conversation is available via your favorite podcast service.

Over the past few months, a range of voices has called for the urgent regulation of artificial intelligence. Comparisons to the problem of nuclear proliferation abound, so perhaps it's no surprise that some want a new international body similar to the International Atomic Energy Agency (IAEA). But when it comes to AI and global governance, there's already a lot in play: from ethics councils to various schemes for industry governance, activity on standards, international agreements, and legislation with international impact, such as the EU's AI Act.

To get my head around the complicated, evolving ecology of global AI governance, I spoke to two of the three authors of a recent paper in the Annual Review of Law and Social Science that attempts to take stock of and explore the tensions between different approaches.

  • Michael Veale, an associate professor in the Faculty of Laws at University College London, where he works on the intersection of computer science, law, and policy.
  • Robert Gorwa, a postdoctoral researcher at the Berlin Social Science Center, a large publicly-funded research institute in Germany.

Justin Hendrix:

I'm going to talk to you today about this new paper you have out, "AI and Global Governance: Modalities, Rationales, Tensions." There's also a third author on this paper.

Michael Veale:

Yeah, that's Kira Matus. Kira's based at the Hong Kong University of Science and Technology. She's a chemist by training, but has worked across the intersection of complex, value-laden issues with science and public policy, so she joined us on this as the science policy and governance expert.

Justin Hendrix:

Maybe the first place to start is really just where you start. What are we talking about when we talk about AI?

Michael Veale:

That's a really great place to start, I think, and I'm not sure that we always get it right, particularly now that there's all this discussion around large models of all types. We are tempted to think about AI as the model: the release of this model, the use of this model, what this model might do in the world. But in practice, AI is really a way of reshaping and changing organizations. It's also a set of business practices, and a set of data collection and use practices that entails complex supply chains and interactions between companies as they move information from one place to another. They query systems in one place, run things on cloud servers in another, run things on device in another. So when we talk about AI and its actual deployment, we start to see something that is quite a lot more nuanced and different, I'd say. So we want to move away from looking at AI as a model, and instead look at it as a kind of applied practice, with all the elements of that practice.

We can see AI governance all over the place internationally already. We do already see links in the way that people are trying to govern platforms, govern computing, govern data flows, govern labor practices in complex multi-country supply chains and value chains. And when we see that, we can start to say that AI governance is not really quite as new as we might think it is. But at the moment, the debates that are happening are very grandiose, almost saying, "Look, there are some clever people in some organizations making some models, and those models need some international organization, as if it's some nuclear weapon that's being produced." That might be one way of looking at it, but I think it's quite a limiting way. And if we only look at it through those lenses, we're going to see the world through the lens of a singular model focus. I don't think that's going to capture very much. In a way, it's going to capture a lot less than we think it might.

Justin Hendrix:

There seems to be a tension between thinking of AI as software, or a product of the software industry, and regulating it in that way, versus thinking about it as more akin to a force or a phenomenon like the internet on some level. Is that an appropriate way to think about it? Is that the spectrum, from software to the internet, and the different kinds of governance questions that would apply to either one of those?

Michael Veale:

There are certainly close links between these technologies, particularly when we look at the ambitions of companies like OpenAI and also platforms like Azure or Google Cloud and so on, where their model really is intermediation, getting in the middle of things inside the network.

What they would like to see is more and more of their models queried for more and more human purposes. These models being used to transform data from one form to another. These models being used to intermediate in the way that people and organizations do computation within their boundaries, but also communicate and interact with each other. And every time that happens as a query, you get some money from that query as a result. It's the normal cloud model, but really large.

So we can't disconnect AI from networks, because in practice, the way it's going to have an effect on the world is through being inserted into networks and being queried and used and developed and becoming part of these networks, and therefore we're going to have to analyze these things very closely. That's very different from looking at it as if it's a model, because that sort of draws attention to it almost as a good in itself. It's like you almost say, "Hey, there's a computer and it contains this model. Who should be looking after this model? Who should be looking at its limits, testing its safety and so on?" Well, really, the safety comes from the way in which these systems are used in practice and integrated in practice.

I know there are researchers who are worried about AI escape or safety issues around a model from the vantage point of seeing it almost inside a server itself, but if we see it as a practice inside that network, we see a huge array of other kinds of risks and reconfigurations that all need governance and close attention, and the complexity of them is pretty daunting. And I'm not sure that the current paradigm of thinking, whether it's an international agency or so on, is ever going to be granular and detailed enough to grapple with the kinds of changes we might see from introducing fairly arbitrary, new, and sometimes strange forms of computing into what were quite mundane communications and interactions before.

Robert Gorwa:

So one of the things that we're trying to do with the paper is to begin with a little bit of this definitional landscape and how complicated and muddled it is, and go from there to a typology of the different things, as you mentioned in your first question, Justin, that are already happening and that might be called AI governance. It's a typology that runs from the more voluntary, informal practices that different organizations and policy networks are engaged in, to the more formalized, complicated, bureaucratic things that might be happening within a single government or within a collection of governments like the EU, more institutionalized things like the AI Act.

But the second thing we're trying to do here, right? And this is coming, I think, partially from my background as a global regulation scholar. So Michael is the researcher who's been doing work on public sector machine learning and the applications of machine learning and AI systems, with data protection especially, for the last few years. I've been working mainly on global regulation and technology policy, with a focus on the platform governance and platform regulation space. And one of the main moves we're trying to make here is to provide a more critical intervention, and actually a bit of a pushback, to say, "Hey, maybe AI global governance isn't necessarily a good thing in and of itself," right?

So if you look more widely at the history of global regulation in complicated, high-stakes areas where there's a big mesh of different industry interests and government interests internationally, things like labor, manufacturing, supply chains, environmental regulation, unfortunately, global governance often doesn't work well. I'm hoping we'll get into this a little bit in this discussion, right?

So I guess what we're trying to throw into this debate are some of the lessons from that literature, which brings in the political dimensions that inherently underpin any kind of global governance regime or any kind of global regulatory politics constellation or assemblage or whatever you want to call it. We're briefly throwing into the ring some of these political dynamics, which I think are really important when we're trying to think critically about all the different initiatives and declarations and letters and ethics councils that we're seeing in the space. Some of these dynamics are the way that governance frameworks developed by different actors can actually compete with each other; the way that these frameworks can interact with national regulation strategies and maybe even forestall them, or be used as a strategy to forestall them in certain jurisdictions, to signal to policymakers, "Hey, we're doing X, Y, Z in this area, and maybe that's enough. This is a fast-moving, innovative space. Stay out of the way. We've got it"; and other political dynamics of this space.

Justin Hendrix:

You point, in a couple of places, to the idea that some of this is all well and good, but it doesn't work outside the Global North. Some of these phenomena aren't present outside the Global North. Ethical AI, all the councils and things of that nature, seem to be mostly in rich-world countries.

Robert Gorwa:

Well, I think, at least from what I've seen, there is increasingly an effort to include a larger and more diverse range of stakeholders, especially from civil society organizations all over the world, right? And I think, in a sense, that's obviously welcome.

But we draw quite a bit on a really great paper, which I can recommend to everyone, by Marie-Therese Png, published in the FAccT proceedings a couple of years ago, which provides a really helpful overview of this from a perspective that explicitly theorizes from the Global South. And one of the problems she raises in her paper, and I think there are major questions as to how you resolve it, it's one of these really difficult challenges, is what she calls a paradox of participation, where basically you have inclusion at a surface level without meaningfully changing the structural roots and, I guess, organization of the initiatives and organizations that are developing standards, principles, what have you, right? So at the end of the day, it isn't these included actors who end up in the meaningful roles where they have real power.

And again, that's very difficult to orchestrate, right? Just for resource reasons and also for the reasons of who is driving these conversations in terms of technological development and who is also driving these conversations institutionally from an organizational side. It's one thing to have these folks included as participants in meetings and as stakeholders. It's another thing to meaningfully give them control of the reins and say, "Okay, let's see what this looks like for you. What are the needs that your communities actually have in this space?" That's a challenging one.

I would welcome that. I think there has been an increasing focus in some of these organizations and spaces towards more applied, on-the-ground, real-world AI impacts that we're seeing now. So things like issues of labor in the global data annotation and labeling pipeline, right, and how that feeds into big AI infrastructures, big datasets. And there has been some work that I've seen which engages a little more meaningfully with questions of working conditions and the supply chain, but I still think this is a really nascent conversation.

Justin Hendrix:

We've talked on this podcast before about the sprawling number of ethical codes, councils, and frameworks that have been developed to potentially help industry think about how to regulate itself on some level. You point to the reality that these types of things may forestall actual regulation. Is there any benefit to any of these ethical codes and councils that we're seeing emerge around the world?

Michael Veale:

I think a challenge here is that the kinds of codes and councils that are emerging are also involved in framing the issue in a very particular way. So it's the framing of AI regulation where a lot of the real contestation occurs. Are we framing it as something which is, as I say, about a model or a platform? Are we framing it as a way of intervening in or restructuring the way that businesses and public sector organizations relate to each other, or relate to computing in general?

All of those things, if you bring them into the scope of AI regulation, then you've got a very different picture and a very different discussion. You're having a discussion of what the role of a teacher is now, what the role of a clinician is, or what the role of a small business is in deciding how it builds a whole customer relations system. Those change quite rapidly when you start to introduce AI that's designed by someone completely different, with very different interests, who wants to extract value from those organizations and put that profit elsewhere.

If you start from a different approach and say, "Hey, are models good or safe or fair or transparent?" then you end up having a very different kind of discussion. And the kinds of regulation you go towards are much more the kind we saw in the AI Act proposal from the EU, which treats AI just as a product, doesn't really engage with its role in reshaping societies, doesn't really engage with giving individuals or organizations new rights, and doesn't really engage with the supply chain very much.

As for the ethics councils, there have been some analyses of the principles that are common across these different statements and declarations. And one thing that's notable, and isn't picked up in those analyses very often, is that questions of power and competition are almost entirely missing from them. Questions of who benefits, who would gain value from these systems, or how these systems affect other social systems and other sociotechnical systems.

So the ethics questions are really ones about framing. They came very early on, and they've been very successful insofar as much of the regulation around AI we see, and the discussion around harder regulation, just mirrors those ethics councils and those discussions and takes them a step further. But in what direction is that further step? I think that's what we are trying to ask a little bit in this paper. What is the focus at the moment, and what are the missing pieces?

Robert Gorwa:

I guess policy framing, policy narratives, and the interaction between these types of instruments, whatever you want to call them, and legislation is a really important dynamic, as Michael has said. I wouldn't want to say categorically that these are completely useless, that they serve no purpose. At least from what I've seen, and with the caveat that I'm kind of an outside observer, I've been working adjacent to but not directly in this area for the last few years, it seems like these ethics councils, framings, and principles, this whole constellation of different things, are getting a lot more nuanced in the type of language they're deploying, for example, right?

A few years ago, you would've seen way more of the automation discourse, which just presumed that the key effect of AI systems on society writ large is going to be the uncritical automation away of many thousands or millions of jobs, right? If you look at the language that they're deploying now, it's much more sophisticated, it's much more focused on current impacts, things that are already happening on the ground. And I think it's a little bit more cognizant of the ways that other social forces are going to affect this. So we could say it's less deterministic, right?

So on one hand, that opens the question that Michael just raised, like, "Is that necessarily a good thing? What are the knock-on effects of this kind of language?" But the second thing we're thinking about is just, "Okay, let's say these principles are actually communicating the types of visions that we are trying to see. What are they achieving in practice?" Right?

And again, this is a huge challenge. These are complicated, sprawling, highly profitable business sectors with entrants in all sorts of different markets. But there is a real question, especially when you're looking at the history of global financial regulation, of environmental regulation, and the role of movements like corporate social responsibility or voluntary environmental and social governance principles. Hey, the climate crisis is still here. How do we measure those impacts and think about them? Right?

So we can talk about some concrete examples of frameworks and principles that have come out recently that are, I think, a case study of this in terms of what actually happens in terms of impact, and who is going to hold signatories to these principles to account. But I think generally, yeah, it's tricky to see what the long-term impact of this is going to be. And of course, they do serve an important function as a policy network, bringing stakeholders together, advancing this conversation, but again, as we try to point to in this paper, that isn't uncritically a benefit 100% of the time, I guess.

Justin Hendrix:

So the next area of potential international governance that you go into is in fact industry governance. I don't want to think of it as a step beyond ethical codes and councils, but I assume on some level, it is that. It is the sort of governance within the industry itself. You say that in the future, global AI governance seems likely to become highly enmeshed with platform governance. Let's talk about this particular area.

Robert Gorwa:

So I think this is actually an area that Michael and I are slowly starting to develop in some collaboration together, so these are still some early thoughts and works in progress. One interesting thing that we haven't been able to ignore, and that I think people observing this space are increasingly seeing, is that there are more and more choke points, if you will, in this AI technology stack, and there's an enclosure of various AI resources and infrastructures within platformized ecosystems that are being operated as a service by different companies.

This is creating a lot of interesting, classic platform governance decisions and dilemmas for companies, right? So again, this is super nascent work in progress; we just point to it in the paper. But the emergence of things that you might call model marketplaces, like Hugging Face, right, where they offer third parties, researchers, industry, and other interested people access to certain models that they can download and deploy and use as they see fit, has also opened them up as political actors that have to make decisions about what acceptable use is. And we see the industry also looking backwards and saying, "Wait, maybe this is actually a content area and we need to have rules and standards around this."

Again, early work. But from what we've seen so far, while there's tons of really well-intentioned stuff going on, it's also almost rewriting the history of trust and safety from the ground up without necessarily taking on all of those lessons proactively, right? So we're seeing the CEOs of some of these model marketplaces just going on Twitter and, on an ad hoc basis, making basically trust and safety decisions about which kinds of models and services should be available on their platform or not, right?

The second kind of argument we're trying to embed there is that this is getting buried... I guess it's moving deeper and deeper into the stack of the supply chain. So whether you're thinking about services that are providing data that's been annotated and labeled or something else, there are also increasingly platform governance and trust and safety practices being embedded into that process in different places by different actors too.

Justin Hendrix:

Let's talk a little bit about copyright and licensing. I've seen, in some of the recent discussion about AI regulation, an almost enthusiasm for this idea that maybe you'll have to seek a license if you want to build a model above a certain size, or perhaps if you want to apply a model at a certain level. What do we make of this particular mode of regulation?

Michael Veale:

Right, so this is a tricky area. The challenge with the word "licensing" is that it stretches all the way from the industry governance part, where we talk about contracts and licensing, to the domain of harder, state-mandated licensing in the area of AI. So when we think of it in the first sense of industry governance, well, there could be technical choke points, as Rob mentioned, which means, "Hey, look, we're just going to disable access to your API. We're not going to let you run this kind of thing on our hardware or on our smartphones. We won't give you access to the private API. We won't give you the necessary tools or developer licenses you might need to do that." But that largely stems from the ability to withhold access to a business asset or an infrastructure or something like this.

So in that case, the license is granted by an organization that has a huge amount of power to switch it off, right? That's one type of license that emerges, and we see that already with APIs and with terms of use that are built into them in quite an ad hoc way.

This then links to questions around contracts. When models move away from some of those pinch points and bottlenecks and get traded and moved around freely, research models or models that are designed in certain ways, they're often governed by contract law, and those contracts are linked to other areas like intellectual property. So we might say, "Hey, we built this really big model. Maybe it's not open source in the sense that it's not really on a Creative Commons-style license or something similar, but it's available," right? This is like Meta's LLaMA model, for example. It is in practice available, but it is available subject to certain contractual restrictions.

And you have to then think, "Well, what's that contract based on?" Because if I give you a floppy disc of information, and it would be quite some floppy disc, and I say don't use it for anything other than what I say, then there has to be some kind of basis on which I'm making that claim. That could be that we've signed a contract that says there'll be liability between us if you do that, so you have a contract directly. Or it might be that I still hold intellectual property in the things on that disc, and therefore I license it under certain conditions and say, "Well, you're misusing the property that I've given you."

But when we've got models that are more open, as we're seeing with the development of open source models, it becomes a lot trickier to work out how you both enable this openness and sharing and create an approach where this is actually policed. So we see that some licenses, like the responsible AI licenses, the RAIL licenses, have emerged, and they say, "Hey, we claim copyright in this model," the weights of this model, or some other IP, but it's largely going to be copyright, "and if you want to use this model or make it available to other people or something like this, then we believe you can do so under certain conditions, such that you're not using it to discriminate against people or you're not using it in border control or so on. And if you are, that's a breach of the license."

But the problem with this kind of law is that only the rights holder, or someone the rights holder delegates, can actually enforce it. So say the Linux Foundation, for example, had some large model and put behavioral restrictions on it saying that ICE in the U.S. can't use it at the border. Well, it would be up to the Linux Foundation to sue ICE. It's not like I or a third-party NGO can go and be the enforcer in this space, and it's not like criminal law or some public agency can go and do that. So we're seeing these nice attempts to build a global governance system to try to deal with these dual-use technologies, where they can be used in multiple ways, but the policing is going to be quite far behind, and there's a bit of magical thinking there, really, as there has been in other areas.

And then lastly, the kind of licensing that you alluded to, Justin, is the idea that you might have a state-mandated license saying you cannot practice or do a certain activity, or produce a certain thing or sell a certain thing or put it on the market, without some kind of license, or without, in the AI Act, a conformity assessment. That's the kind of model we use for PPE, for example. You cannot produce, or put on the market in the EU, a face mask and claim it's a medical face mask or something without it going through exactly the same legal process that's proposed in the AI Act.

In the AI Act, it's wholly self-regulatory, basically. It's just saying, "Fill in the forms." You don't even need to get anyone to look over the forms to check that you've done them correctly. You can then give yourself the stamp, and then it's there. If the regulator comes, you might demonstrate that you've done that, but it's not the case that you are being awarded a license per se. But in some regimes, like medical devices, there are checks from private bodies that do those checks for a fee. And then for the more serious regimes, like pharmaceuticals, you can't actually market something before proactive approval from a public sector agency, like a medicines agency.

Now, we haven't seen those things emerge in any shape in the AI world yet. There have been discussions, but that's not what the AI Act looks like. So the idea of a license is a difficult one because the term is so broad in the ways we can use it, but we're definitely seeing it emerge in multiple areas and ways.

Robert Gorwa:

I just wanted to hit on one last point about copyright, because I think this is really important, especially to the ongoing conversations that are happening around generative models and LLMs. And I don't know, Michael might disagree with me on this, but my feeling is that the more salient aspect of copyright-related governance in this space isn't going to be through the highly institutionalized practices that Michael has just talked about in terms of contracts and licensing; it's going to be more on the input side, in terms of, for example, the data pipeline that's feeding into these models.

If you look at the history, and again, I'm not a copyright scholar, so this comes with a grain of salt, but my reading of the history of global copyright regulation as it pertains to the internet and platforms more broadly, drawing on the work of people like Natasha Tusikov, who has a really great book called Chokepoints, is that this is actually highly informal and highly political, and it's basically driven by grassroots-level contestation from powerful stakeholders that have copyright and other rights, right? So we're thinking about the Motion Picture Association. We're thinking about big brands. We're thinking about powerful copyright holders, and they're basically going to shape what companies can do through lawsuits, through mobilizing lobbying, and through all sorts of stakeholder pressure, right?

This has manifested itself in the intellectual property domain in a kind of informal global regime where all sorts of companies, Google, YouTube, all sorts of platforms operating different services, are basically complying with these systems behind the scenes. And in many cases, they've become more stringent than in other content areas, as we see with the development of the parallel system in the U.S. for copyright under the DMCA, rather than Section 230 of the Communications Decency Act, and what have you.

Justin Hendrix:

So speaking of processes that could empower well-resourced incumbents, you also put standards in that category. Let's talk a little bit about international standards very briefly.

Michael Veale:

Yeah, so standards are an interesting one, the reason being that there's been a lot of movement in AI standards for quite a long time. People keep developing them. They're churned out predominantly through private sector standards bodies. And the standards world is one that is very familiar to people in state governance, but also in the governance of complex products and engineered products and services.

What is happening now in standards is that the technical nature of AI is encouraging legislators to turn to standards bodies to solve things that they would otherwise have to do themselves, which we see particularly in the EU AI Act. What is mostly rulemaking is, in practice, just handed over to private standards bodies.

But at the same time, we know that AI is a technology that affects many, many different sectors and application areas. Those areas require domain expertise to understand exactly what could go right and wrong. So if you think about how the AI Act works here, it places obligations on high-risk systems to go through some sort of self-regulatory certification. Those are things like systems supporting judges, checking critical infrastructure, marking essays, or admitting, hiring, and firing people for jobs, something like that.

But standards bodies, particularly these big, central standards bodies, were never designed to have that kind of deep sectoral expertise. And civil society organizations simply don't have access to the resources, the time, the expertise, or just the networks to make a difference in these standards bodies.

So we're seeing a lot of things that are deeply about fundamental rights, and big questions about which political direction we want the world to go in, being passively delegated to standards bodies, which could be very good at working out what shape your plug is, or very good at working out best practices in pretty settled areas where we agree on what we're trying to achieve, but which are not domains for contesting who wins, who loses, and how the future of different sectors should really look.

So this is the big tension underlying this space right now, and we're seeing it emerge more and more. And the standards produced by these bodies, coming back to copyright, cost a large amount of money to even purchase. It can cost you $150 to get a single one of these standards, and the AI Act will likely apply 10 different ones at once. So it's not exactly something that individuals can look at in order to work out whether enforcement action should be taken, and be part of that policing and enforcement system, because these are private rules made by private bodies, but they're going to be very influential.

Justin Hendrix:

That brings us to international agreements. I do think this area is what a lot of folks are thinking of when they think of global AI governance. They're thinking of a gathering of folks around a table with flags, coming to some terms about how we should do this thing together in governmental bodies. But there are various things going on here as well. There's more depth to it than that.

Michael Veale:

We only covered this in about three paragraphs because really, there's not actually a huge amount of movement here. There's a lot of discussion. There's a lot of noise made in these areas. We're seeing the Council of Europe, which is a body that's been involved in international technology treaties before, get involved here as well. They were particularly involved in the Budapest Convention on Cybercrime, which really was pushed after the Love Letter virus revealed that many, many countries had no cybercrime rules. And the U.S. was very big on involving itself in the Budapest Convention and getting a lot of signatories, because it seemed to create, well, insecurity if countries didn't have any protection against or prosecution of cybercrime.

It's unclear what this process is going to produce. It might give individuals some more rights, but there's a huge amount of push from the European Union now to align the broader international agreements with the domestic AI Act, which has its product safety focus. Whereas the Council of Europe, which is where the European Convention on Human Rights comes from, is a much more rights-based organization, less market and economically oriented.

So we're seeing a bit of a clash here between the human rights side, as typified by the Council of Europe in theory, and the economic, free market, free movement side, as typified by the European Union and its AI Act, which is actually, in many ways, heavily deregulatory and restricts member states from regulating AI at all. So these two things come together in a bit of an unusual clash, and I think we'll see going forward whether AI moves more into trade law, just as some countries try to move data law into trade law, or whether it goes into human rights law and human rights areas, and in what domains that happens. I think that's the main hinge of contestation that we're seeing right now.

Justin Hendrix:

That does lead us directly into your last area in the taxonomy, which is the idea that global AI governance may mean converging and extraterritorial domestic regulation. Are we already seeing this happen to some extent?

Robert Gorwa:

I think Michael has been looking much more deeply at the AI Act, which at least seems to be perceived as the first mover in the space, the one that is the widest reaching. It's the most complex and sophisticated in terms of what it's trying to do. And yeah, I just wanted to say that I think some of these themes have come up, but we're getting an interesting dynamic that also happens in data protection law, in platform regulation, and in other areas too, where there's a lot going on behind the scenes in terms of policy discourses and this idea that X actor is intervening in Y market, and therefore everyone else needs to do this, everyone else needs to follow this approach. So maybe there is some kind of inherent pressure to harmonize, in terms of this first mover advantage. I just don't think we've seen enough yet in terms of different legal frameworks being developed in different countries to really see to what extent that is the case.

But I just wanted to pick up on what Michael said, which is that there's this perception that, okay, the EU is this highly regulatory entity and it's regulating AI, or it's regulating digital markets and digital services, really broadly. But its regulatory initiatives, maybe with the exception of the Digital Markets Act and the more assertive competition powers it could theoretically give the Commission, are in many ways acts of delegation: yes, they're intervening in these markets, but they're also setting up these parallel structures where they're saying, "Okay, we're going to give more authority to new third parties," whether it be auditors, for example, under the DSA, the Digital Services Act, or standards bodies under the AI Act.

But yeah, I don't know, Michael, what you've been seeing in terms of what different jurisdictions are doing. There's been more conversation in the media lately, of course, about potential bills in the U.S. and what that might look like. To what extent do you think those are, I guess, drawing from the EU approach, motivated by the EU approach, or what else interesting is going on here that we should talk about?

Michael Veale:

I think the jurisdiction furthest ahead in drawing on the EU approach, which I don't think is a great approach, to be fair, is probably Brazil. But in this section, we wanted to draw attention to converging areas in, say, copyright law. Japan has had a text and data mining exemption since 2009, the UK got one in 2014, and the EU got a somewhat weaker one in 2019. Those were really designed to allow the training of AI systems on large sets of data which people had access to, but may not have had the rights to reproduce individually, so that they could analyze it and turn it into some aggregate product or analysis without needing those rights. Whereas in the U.S., some sort of fair use analysis will be required, which will be quite open-ended and will take a lot of litigation to work out.

So there's been a long discussion around this, but it's been emerging in different parts of the world. We've got many types of algorithmic rights in data protection law that have been converging across the world. It's not just the EU that has rights around that; many, many countries have rights around algorithmic decision-making in their data protection laws. And we find the same kind of emergence, which is very interesting, I think, in the platform work directive proposal from the Commission about algorithmic auditing, because the gig economy is, to a large extent, built on algorithmic management, algorithmic reviews of decisions and so on.

So we start to see these echoing parallels. But I also think the last thing here would be how much countries are going to try to claim back some of that decision-making power, and find the ability to drag AI providers and so on into courts and subject them to quite bespoke and custom jurisdiction. Whereas copyright has been a world of companies sort of agreeing to be bound by these regimes, which are mirrored quite a lot across the world.

So just because Brazil or another jurisdiction might be trying to copy a law from another jurisdiction, does that really mean Brazil would have the power to create its own rules in that sector, or can it only be a rule taker? And if so, why are we allowing these companies to decide which rules from which jurisdictions are legitimate to follow and which aren't? I think that still needs to be hashed out.

But that's not uniquely an AI governance question. That's a platform governance question much more broadly, which is why AI is really just platforms in many cases, not just because of the way it tries to divide up the world and extract value from it as a business practice, and it will do that, I think, through AI as a service, but also because of the kind of legal games these companies are playing around what Julie Cohen talked about as law for the platform economy. She's a professor at Georgetown and has talked about a lot of the mechanisms by which platforms avoid law and really construct shields and armor out of law, rather than just pretending there's nothing there.

Robert Gorwa:

And if I may, I think AI, in many cases, is just platforms, and platforms, we shouldn't forget, are run by platform companies, right? These are multinational corporations headquartered mainly in the U.S. and a few other places. So I think that gives us at least the start of a toolbox to think about regulation too, if rather than focusing on outputs and specific tools, we look at corporate actors.

Justin Hendrix:

There are a range of rationales and tensions in the global governance of AI in the latter half of this paper. Perhaps what you've just described leads into your discussion of the governance of transboundary issues and transboundary effects of AI systems, which range from the labor-related to the environmental. But in our last couple of minutes here, I want to focus in on the section on governance as lobbying: the extent to which these governance mechanisms themselves may allow industry, or the developers of AI, to obtain their preferred policy outcomes, as you say.

Michael Veale:

Yeah, I mean, one example, I think, was just from this week. I heard that the prime minister of the UK, Rishi Sunak, is going to meet Sundar Pichai to talk about AI and how Google can work with the UK.

Justin Hendrix:

Worth mentioning that Joe Biden was also in Silicon Valley, apparently meeting with heads of AI companies.

Michael Veale:

Yeah. This kind of signaling approach we talk about in the paper is a way of getting really close to policymakers and getting the ear of very senior people in order to also push a business agenda. It's not going to be the case that these companies are interested in imposing genuine boundaries on themselves, in creating collective goods, or in engaging in collective action.

There may be a small amount of that, but ultimately, by looking like you're a good, cooperative player and a policy entrepreneur, and saying, "Hey, we should be involved in regulation," you are there both to shape regulation in that area and to shape regulation in neighboring areas, particularly thinking of AI as a service, platforms, and maintaining those pinch points or bottlenecks that allow organizations to durably extract value by using computational systems and computational infrastructures. That, I think, is where our focus needs to be, because these business models are not getting enough attention.

In all the discussion of AI safety, we're treating AI as some external body or new actor that could do something to us. But in practice, these are multinational corporations that are doing something with this technology, and we're not thinking about the different visions for what they will do in the future with it. And we may be heading to a world where small businesses become even more vulnerable and at risk from large multinational platforms. They can only dance to their tune. They can only engage in digitization if they agree to the terms of the particular AI systems and models and problem framings that come from those big platforms.

That's a pretty anti-innovative future, I'd say. A future where it looks like everyone's got cool digital tools, but they're all kind of the same digital tools, tools that are methods of structuring companies so that a significant portion of whatever they earn flows upwards to the maintainers and developers of computational infrastructures, who in turn use that power to structure those companies even more. You can pretend there's innovation in that system, but it's not a particularly innovative kind, I think. So we have to take that political economy view of what's happening here: how those visions are being pursued by companies that currently make themselves look like responsible actors in governing AI, involved in all these global governance discussions. What are the visions that really could emerge from AI, and which are we heading towards or away from?

Robert Gorwa:

So one key thing that this political economy of regulation approach takes you to, exactly as you put it, Michael, is thinking a little bit more about material costs and benefits, right? This is something regulation scholars love to do. If we say that something like a voluntary principle or a governance initiative is regulation, that it can have a meaningful impact on business practices, decisions, policies, what have you. And of course it can: companies make voluntary commitments, they can change their global standards or their practices around how they handle and process data in many, many ways, and that can be extremely impactful, and they can do that voluntarily without being forced to. Once you accept that, you start thinking a little bit skeptically, maybe, or at least critically, about some of these different initiatives, right?

So what are the downsides, as an industry actor, of agreeing to abide by, for example, some kind of responsible framework, or of signing on to some kind of ethical principle? Because where are the monitoring mechanisms, right? I guess I don't want to say we're behind, but these conversations, in many ways, are where corporate social responsibility was in its early days in the '90s, when the conversation around trust marks was developing. So okay, you certify your labor supply chain a certain way, and we can put this on your product, right? Some kind of seal that signals to consumers that you're responsible.

I think in many ways, a lot of the things we're seeing now with different declarations and different principles and frameworks are doing that same kind of thing. Interestingly, not necessarily public-facing, but policymaker-facing. So they're not stamping this on their products. ChatGPT isn't telling you, "Hey, we have a responsible data policy, and everyone who labeled data that went into this model is somehow being paid a living wage," or whatever. It's more policymaker-facing.

I think the difference here, and maybe this is just part of the landscape because these are opaque computational infrastructures that are hard to audit, is that monitoring is really difficult. So if you don't comply with a principle that some organization has made, say one of the Partnership on AI's frameworks, are they going to kick you out? Are you going to get publicly sanctioned?

Again, if we look at what is actually happening, I worry that that isn't happening. And I don't want to pick on specific organizations; I'm an observer here. But just to give a concrete example, I was really interested to read about what I think is a genuinely interesting framework for synthetic media that the Partnership on AI has developed. I think you had a podcast with some of the folks that worked on that recently, Justin. And there was an announcement that Microsoft had signed onto this, and they said, quote, "Microsoft endorses the framework for responsible practices and use of generative AI." Arguably, this came directly after an incredibly irresponsible product launch that basically integrated ChatGPT into everything in the Microsoft stack, right? Trying to make Bing relevant again, basically bringing back a Clippy from hell. And next thing you know, Microsoft Word is going to have ChatGPT in it.

And again, it makes sense as a business strategy. They've done a great licensing deal. They got beat in search. Now they're entering this new market of conversational search. But they can both signal to policymakers that they're part of this kind of framework, while also basically doing business as usual. So I worry, and I think we worry a little bit in this paper, that this is putting firms in a kind of have-their-cake-and-eat-it-too situation.

Justin Hendrix:

One word that doesn't appear in this paper is China. And it appears to me that China, of course, is pursuing a very different approach to the way it thinks about AI and its relationship to the state. How do you see China factoring into these discussions of international governance of AI?

Michael Veale:

I mean, it's certainly very important, but I think if you take away the view of being really concerned about which models are the most powerful, then the global governance of AI in China looks very much like global governance generally. It's the data flows of platform companies. It's infrastructure and hardware value chains, which I think are all important. There are only so many things we can cover in this paper.

But I think it really emphasizes how much we need to think about the global governance of complex computing and computational infrastructures, more than the global governance of AI, because AI, in a way, is a distraction. We can't miss the fact that, in order to make AI systems useful or even dangerous, there has to be an underlying set of computational infrastructures which can be used and reused and turned to different purposes. Even if you want to take some kind of existential risk point of view, that doesn't come in a vacuum; it comes from the fact that a lot of the things around us become programmable and alterable from a central vantage point.

And if we take that point of view, then we will ask, "Well, who is controlling the supply chains that lead to this infrastructure? Who has control over it? Who has the ability to alter it and change it and reprogram it around us?" That is important from the point of view of, yeah, Chinese-made hardware and software and the kinds of bugs and controls in them, just as it is for American-found zero-days, America's control of the hardware system, chip design and so on, and the kinds of integrations between the National Security Agency and similar organizations and those supply chains and structures.

So we can certainly look deeper, but I think the deeper you get and the more geopolitical you get, the more you get into questions that aren't really about AI anymore, or at least are not best framed that way. And if we do frame them that way, it becomes a bit of a distraction. The digital sovereignty debate is interesting there, but the AI debate, at the moment, I think, would be a distraction in that world.

Justin Hendrix:

Fair enough. Well, Michael, Robert, I appreciate that the last sentence of this paper is a question, "Who really benefits?" Thank you for focusing our minds on that, and I appreciate you joining me today.

Michael Veale:

Thanks.

Robert Gorwa:

Thanks.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
