
The Regulation of Artificial Intelligence: A Conversation with Ryan Calo

Justin Hendrix / Apr 22, 2021

Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director of the interdisciplinary UW Tech Policy Lab and the UW Center for an Informed Public. Professor Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering.

The following is a lightly edited transcript of a discussion that took place shortly after the publication of the European Commission’s proposed new regulation of artificial intelligence (AI).

Subscribe to the Tech Policy Press podcast on your favorite podcast service here.

Justin Hendrix:

The European Commission has today released a proposed regulation around AI. This is obviously something that you have been preparing for and waiting to see happen. What did the EU put out?

Ryan Calo:

Well, indeed. Years ago I wrote a primer and roadmap for AI policy and also hosted the inaugural Obama White House workshop on artificial intelligence policy. Many of the themes of that essay and of the workshop were reflected in the EU proposal, which is to say that they're not limiting themselves to decision-making by AI. They're not limiting themselves to privacy.

Their approach is to look at the impacts of AI holistically, and to tackle everything from liability should there be harm, to additional obligations for high-risk uses, to facial recognition and biometrics. They're really trying to look at this holistically. In the United States, we would call this an omnibus approach. It is pretty typical of the way that Europe has engaged with emerging technology, as it did with privacy and data protection in the past.

That's their comfort level. They think about something for a long time, and then they promulgate a comprehensive approach to regulation.

Justin Hendrix:

This will take a couple of years now to wind through the process in the EU, but it's interesting to look at what they've proposed. There appear to be three categories: applications or outcomes of the use of AI that are unacceptable, a mid-tier of potentially or highly harmful uses, and then a lower tier of everything else. How do you think they're defining the risk factors, and what goes into that?

Ryan Calo:

In U.S. law, especially in the common law tradition, there's a big difference when bones or bits are on the line. We tend to think, in the United States, that if there's a potential for physical harm, that's going to be treated very, very differently than if the harm is more ephemeral and more digital. It's very difficult to avoid liability when bones are on the line, and relatively easy when bits are. The European approach, by contrast, expressly prohibits a number of overtly digital harms.

In fact, that is precisely what they're putting into the category that's such a high risk that you can't do it. That's really fascinating to me. Those are things like manipulating people based on their vulnerabilities or engaging in indiscriminate surveillance or using a kind of general purpose scoring system like China has been reported to use, and so on. The regulations do not touch military use of AI to kill people. It's so interesting.

Then if you go down to high-risk systems, now, all of a sudden it's a mixture of physical infrastructure, the capacity to do serious violence to infrastructure, the capacity to hurt people physically, and a bunch of other things that are about biometric identification and the like. Then that section introduces the concept of having an AI board that would be a sounding board for folks trying to do risk assessment.

You have stuff you can't do, which, interestingly, is digital harms, harms to fundamental rights. Then there's stuff that, when you do it, requires you to be particularly mindful and, it seems, to get feedback from the European regulatory apparatus. Then there's everything else, and there's a bunch of requirements around everything else having to do with risk assessment and document keeping and so on.

As you can see, it is a taxonomy that has a certain internal logic, but it does not graft neatly onto the way that the American legal system works.

Justin Hendrix:

I see also there are these transparency obligations around data, and around systems that interact with people or that detect emotions or interpret social categories, which Kate Crawford pointed out. There are interesting obligations around things that generate or manipulate content: deepfakes or other generative content mechanisms, maybe even voice assistants. Things like that would have to be transparently identified as AI systems. What else is in here with regard to transparency, or things that caught your eye on that front? What will the consumer know, if this goes into effect, about the systems they're encountering?

Ryan Calo:

Remember that the GDPR already contains any number of transparency requirements, as well as requirements that a human being participate in decision-making under some circumstances. In that way, it's not a significant departure. What I like about the approach is that it seems to reflect a lot of what the AI ethics and responsibility communities in Europe, the United States, and presumably other jurisdictions have been saying.

For example, there's an attention to the data itself that trains the models. You have to have quality control. You have to be mindful of the data that you use to train your models. That helps to address some of the garbage-in, garbage-out sort of ideas, but it's also really important for mitigating bias.

You mentioned Kate Crawford. Her book, Atlas of AI, has a number of really good examples showing how the availability of data, the sheer volume of it, affects things. Once upon a time, access to data was harder. Many of our benchmarks and many of the data sets that our models are trained upon are based on data that were available at the time. They reflected tremendously problematic aspects of society, whether it's the Enron emails or mugshots, you name it.

Part of what's being said here is, "You need to be mindful of the data that you're using and how it affects the outcome." Also, there's attention to what I have called in my own work the emergent properties of AI, where there could be a risk that's difficult to anticipate. It requires some foreseeability exercises to anticipate what the risks are.


Then there is documenting what you're doing and what the data sources are and how your algorithm works and so on. Then you asked about transparency specifically to users. Yeah. There you see a typical transparency regime wherein high-risk systems have to be accompanied by documentation and instructions for use in a format that's accessible, easy to read, and relevant.

These are all principles that are well understood in European and, to some extent, American law, because of the GDPR. Then you have ideas about human oversight, where somebody in the system somewhere can understand and affect how the processes work. My own recent research with Danielle Citron has emphasized the degree to which public actors, especially, using AI decision-making systems or algorithmic systems will outsource tasks that the agency used to do.

By doing so, they lose the knowledge of how anything works. They're throwing away their expertise with both hands by automating. I think that to some extent, the European regulations are being responsive to that concern.

Justin Hendrix:

You mentioned the fact that this does not create many, or any, restraints on military applications of AI. Others have pointed out that it seems to go soft on some of the worst-case surveillance and law enforcement applications. What do you think about that?

Ryan Calo:

A curiosity about European data protection law that carries over into this context of regulating AI more broadly is that the fear seems to be directed more at private actors than at public ones, which I have trouble explaining. Because the standard narrative about why the EU is so much more solicitous of privacy interests, so much more privacy-focused, is that it has more immediate experience with tyrannical surveillance.

Of course, what we're talking about there is Stalin or Hitler; we're talking about the state, not private entities. Yet that fear about fundamental rights and privacy has led to the generation of rules that primarily affect private conduct. Now, again, it is truly omnibus, and there are of course European laws that limit the state. I don't mean to suggest there aren't, but the focus tends to be on what private people are doing.

However, in this particular measure, you do see certain attention being paid to indiscriminate surveillance. I read a lot of these things too, and a lot of them don't exempt the state. For example, the scope of the rules applies to providers and users of AI, but it also applies to EU institutions, offices, bodies, and agencies. The issue is that there has been a practice among some of the member states in Europe.

Remember how the system works: it's EU legislation, but then the member states have to enact it. Some member states have enacted it less aggressively and have exempted certain of the state's own practices. We'll have to see whether, in the end, state use of AI is much affected by this regulation.

I don't know that I'm ready to be critical at this time, but it does seem to create room for state use of AI and to be primarily interested in regulating the Googles and the Facebooks. You know what I mean? That seems to be the focus, but that would be consistent with Europe's regulatory approach to date with data protection.

Justin Hendrix:

Some of the coverage has suggested that this is going to put a lot of onerous restraint on the private sector in the EU, and that it's likely going to kill innovation among startups working with AI. I've read articles in the Wall Street Journal and in Fortune that have had voices suggesting those things. Do you believe any of that's true?

Ryan Calo:

I think that if the question is how industry will fare with a set of knowable, consistent rules, then the picture's going to be at least mixed. That is, they're going to benefit tremendously from knowing what is okay and what isn't, from not making massive investments in things that are not going to be allowed, and from knowing how much accountability to build into their systems.

If anything, you worry about a compliance culture developing around this, where anything that is not specifically obligated becomes wiggle room. And in a compliance-heavy environment, new entrants often face obstacles, for the obvious reason that they don't have the time or the people or the expertise or the money to comply.

Some combination of those two things, the benefits that come from certainty and knowing what the rules of the road are, coupled with the advantage that that bakes in for legacy companies, companies that are already in the market and already mature, helps to explain why large U.S. companies have come to endorse comprehensive legislation in some contexts. Do you know what I mean? It's true.

You could be cynical and say, "This is going to be just fine for the household-name technology companies, but no one's ever going to compete with them." Now, I'd have to dig in deeper, but oftentimes these kinds of restrictions only arise when you reach a certain size. Just honestly talking to you right this moment, I haven't looked deeply enough into whether that's true here, but some of the proposals in the U.S. are only triggered when you have a certain user base or a certain amount of data or a certain amount of capitalization.

There is a way to mitigate that concern, so I would be highly skeptical of people who just say in a kind of open-ended way, "This is going to hurt industries. This is going to hurt innovation." I'm very skeptical of that, but I'm sympathetic to the kinds of arguments that say, "This is going to make it harder for people to compete with Google and Microsoft and Facebook."

I'm also sympathetic to the people that will argue, if they haven't already, because they've argued this about GDPR, that if Europe is selective in its enforcement, then this could wind up being an anti-competitive measure. In the sense that if Europe is imposing these obligations on American companies, but letting European companies get a pass, that could be an issue too. People have made that claim about European privacy law in the past.

Justin Hendrix:

It's a good place to switch our attention to the U.S. and in particular to the Federal Trade Commission, which this week published a blog post on aiming for truth, fairness, and equity in the corporate use of AI. That follows signals from the acting FTC chair, Rebecca Kelly Slaughter, who seems much more interested in pursuing AI-related enforcement priorities in the coming years. I reached out to you after I saw you enthusiastically tweeting about this blog post. Tell us, what excited you about it?

Ryan Calo:

Well, at one level, you look at a blog post from the FTC and you just say to yourself, "Okay. Here the FTC is telling you that, gosh, you shouldn't sell racist algorithms, or that you ought to take this and that into account." It's a blog post. It's written not by the commissioner, not by the chairwoman or the acting chairwoman, but by a staff attorney. You might say to yourself, "So what?"

Actually, it's a huge deal. It's a huge deal. It matters especially that it was a rank-and-file staff attorney, albeit a very sophisticated, long-standing privacy attorney at the FTC, because they don't write public-facing things willy-nilly. They only do it if they've gotten the sign-off, because it would be career-ending to go rogue.

To see a staff attorney at the Federal Trade Commission say things like, "Hey, we have this unfairness and deception authority under the FTC Act, and it would apply." Not that it might apply. It would apply to things like selling racist algorithms. Very specific language about the kinds of things that would violate the FTC Act, talking about exaggerating the ability of AI to do what it says.

Because, and I've talked to the Chairwoman about this in the past, one of my pet peeves has been that if you fudge anything in the dietary supplement world or something like that, if you make a false claim or an unsupported claim in many different contexts, you could find yourself on the receiving end of an investigation and an enforcement action by the FTC for misleading consumers. That's the deception component of it.

People make outlandish claims about the power of AI, and so this is a shot across the bow, saying, "You can't make outlandish, unsupported claims about your AI." That is a big deal, because if they started to bring cases against AI the way they do in deceptive advertising, wow, that would just change the whole marketplace, because you couldn't just make up stuff about how AI works all the time.

To say that selling racist algorithms, and lying about or exaggerating how much AI can do, are all things the FTC is paying attention to, and that it considers them to be unfair and deceptive practices, that's a big deal. Every in-house counsel at these companies, and every outside counsel that they use, is paying attention to that.

It's hard to explain, because it looks like it's just a random blog post by a staff attorney about a topic. But couple it, as you just did, Justin, with the kinds of claims that the commissioners themselves are making. And, by the way, under a Biden administration, an independent commission like the FTC can have three of its five commissioners be progressives.

You've got Lina Khan coming in. You have the Chairwoman making statements about scrutinizing AI, and you also have conservative members who recognize that there are real harms here. I mean, my conversation with one of the conservative appointees, whom I have known for a long time because we went to school together, suggested to me that he also was taking this thing really seriously. If you combine those things together, this blog post is a big deal.

Justin Hendrix:

There are three laws mentioned specifically: Section 5 of the FTC Act, which prohibits unfair or deceptive practices; the Fair Credit Reporting Act, which comes into play particularly when people are denied employment, housing, credit, or insurance; and the Equal Credit Opportunity Act, which of course is applicable particularly when there's discrimination on the basis of race, color, religion, or other factors. Do you think we're going to see a cascade of other enforcement actions, and in effect the FTC saying, "Buckle up"?

Ryan Calo:

Yes, I do. I think it's going to be harder to merge and acquire under this FTC and Department of Justice. I think we're going to see more enforcement activity in the AI space. I think we're potentially going to see more boundary-pushing interpretations of Section 5, unfairness and deception. The reason those authorities are listed is, of course, because those are three statutes that Congress has charged the FTC with enforcing.

The most dangerous one for a company, in many ways, is Section 5 of the FTC Act, because it's such a free-ranging authority. The other ones can carry with them additional penalty power. This is a little bit too much in the weeds, but the commission's jurisdiction under Section 5 of the FTC Act is very broad. It really ought to be up to the commission what is unfair and deceptive.

In fact, it's a bit of an oddity of administrative law that the courts don't defer more to the FTC's interpretation of Section 5, but that's for another day. That said, the FTC Act and subsequent amendments and internal procedures require the FTC to take a certain series of steps. Ultimately, they can only bring to bear fines and penalties if they either get a consent decree that's been violated or they wind up going to court.

I mean, their powers are initially equitable, whereas with some of these other statutes that they enforce, they actually have penalty authority. So while the question may be much narrower, because we have to look at exactly what these laws prohibit, the consequences of violating those laws, as interpreted by the FTC, could be much greater and more immediate.

Taken together, the FTC is signaling, "We have a number of ways to get at this, and we will potentially use them all." This language of 'hold yourself accountable or be ready for the FTC to do it for you', gosh, I mean, where was that language in the late 1990s when we were talking about privacy in the days of self-regulation? I mean, this is like, "Get your act together or you will be facing consequences."

Ultimately, all agencies, including the FTC, with a couple of exceptions in the Constitution, are really creatures of Congress. It's Congress that will have to act here. The FTC could do a lot more if it knew for certain that it had the legislative wind in its sails. Congress could give the FTC more budget and let it hire more people.

I think Congress can and should undo the 1990s Gingrich-era cost-benefit analysis that has been tacked onto the unfairness prong of the FTC Act. I know that's a little bit wonky, but the basic idea is that once upon a time, the FTC was able to make unfairness determinations on the basis of moral principles and public policy, and say something was unfair because it violated public policy or wasn't moral.

That was the original understanding of unfair. The word unfair obviously means not fair, right? It's moral. It's normative. Then a perception that the agency was overdoing it, or putting too much pressure on industry, led initially to a self-limitation, where it declared that it would do a cost-benefit analysis before bringing unfairness claims, and that public policy alone could not be a basis for its unfairness authority.

Later that was codified, which someone on Twitter reminded me of. I'd actually forgotten that, but it was codified by Congress in the 1990s, during the Gingrich years. We could undo that. Congress could undo that, and undoing it would embolden the agency to be much more assertive and aggressive about what constitutes unfairness. There's political will within the commission.

There is capacity, because of the different acts that they're in charge of enforcing, but a perfect storm would involve some signal from Congress that it wants the commission to do more. We got that signal in some ways through the Senate Commerce hearing the other day, which was ostensibly about COVID scams, but really dramatized how interested Congress is in the FTC taking a more active role to protect consumers. More of that will ... That's the perfect storm right there.

Justin Hendrix:

Clearly you're enthusiastic about the FTC being more aggressive. Do you think the U.S. needs a proposed regulation similar to what the EU has just put forward, or does the U.S. have all the pieces it needs to take action on these issues?

Ryan Calo:

The U.S. does not have all the pieces it needs, nor do I think there's going to be the appetite for the kind of comprehensive regulation we see in Europe. I referenced at the outset an AI policy roadmap that I wrote years ago. In it, I talked about some of the ways that U.S. law is outdated in light of AI. The standard law-and-technology move is to talk about the new affordances of an emerging technology and which assumptions of law and policy no longer obtain.

There are a number of them. For example, it seems to me that the way companies are held accountable for the disparate impacts of their technologies on racialized groups, or for safety problems, or even for regulatory avoidance, like the emissions scandal, is often because there are relatively impartial researchers out there looking into their practices.

They're journalists, or they are researchers in the academy, or they are non-profits. Yeah, they all have their motivations and their vantage points, but these are third parties who are kicking the tires on systems and saying, "These are unsafe. These are racist, or have a racist impact." One of the things that U.S. law needs to do is to make absolutely clear that that kind of investigatory technical work is protected and cannot be the basis of a cease-and-desist letter or a lawsuit under, for example, the Computer Fraud and Abuse Act.

That's just one concrete example. I could give you seven more. There are ways in which U.S. law needs to change in light of AI that would improve the ecosystem. But I cannot imagine that we would have the appetite for something like what Europe did, or purports or intends to do. It's just not in our DNA here, in a way. You know?

Justin Hendrix:

Well, we'll have two separate experiments going on in how AI develops as a technology and how it interacts with society and with our economies and with our rights. I look forward to that and to seeing how it turns out, I suppose.

Ryan Calo:

It's a really interesting time to be an observer of tech policy. In recent years, scholars and activists have foregrounded issues of race, gender, and identity. They have been very impactful and the FTC blog post is a testament to that.

Before that, too, there have been a bunch of people, now for over a decade, who have been interested in robotics law and policy, writing about it and talking to the media about it. Some of these ideas are surfacing now in public discourse. It's really gratifying to think that there is a set of communities that has paid sustained attention to this. Now the conversation is so much more sophisticated, and there are actually proposals lying around that society can pick up, because people have been doing this work for 12 years now. It's neat.

Justin Hendrix:

Well, a reason for optimism. Ryan Calo, thank you very much.

Ryan Calo:

Thank you so much, Justin.
