FTC Commissioner Alvaro Bedoya on Algorithmic Fairness, Voice Cloning, and the Future

Justin Hendrix / Feb 18, 2024

The US Federal Trade Commission (FTC) under Chair Lina Khan has taken an aggressive approach to mitigating the risks of artificial intelligence. In recent weeks and months, it has:

  • Initiated an inquiry into AI investments and partnerships among companies such as Alphabet, Amazon, Anthropic, Microsoft, and OpenAI.
  • Joined a collective declaration — in collaboration with the Consumer Financial Protection Bureau, the Department of Justice, and the Equal Employment Opportunity Commission — titled "Enforcement Efforts Against Discrimination And Bias In Automated Systems."
  • Stepped up enforcement actions. In December, it announced a settlement prohibiting Rite Aid from using facial recognition technology for surveillance purposes for five years, resolving charges that the company failed to prevent harm to consumers in its use of the technology in its stores.

To learn a little more about how the FTC is thinking about AI and its role in the broader constellation of federal regulatory agencies, I spoke to Alvaro Bedoya, one of five Commissioners, who are nominated by the President and confirmed by the Senate. Bedoya started his term in 2022. Before his nomination by President Biden, he was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law.

FTC Commissioner Alvaro Bedoya at an Open Markets conference at the JW Marriott in Washington, DC Wednesday, November 15, 2023. (© 2023 Michael Connor / Connor Studios)

During the course of the conversation, we touched on questions such as whether the FTC has the capacity to take on the rapid proliferation of AI harms, the ethics of voice cloning, and how to think about the future.

What follows is a lightly edited transcript of the discussion.

Alvaro Bedoya:

My name is Alvaro Bedoya. I am a Commissioner at the Federal Trade Commission.

Justin Hendrix:

Commissioner Bedoya, for anyone listening who somehow doesn't know who you are, can you explain a little about your background and what made the FTC the next logical step in your overall effort?

Alvaro Bedoya:

Maybe the thing that makes the most sense to share is that if you asked any of my law school classmates, I was probably the last person who would ever be involved in tech policy. I'm not particularly tech-savvy, and I didn't have a longstanding interest in technology policy; I ended up here in a roundabout way. I focused on labor issues in college and on voting rights issues in law school. In college, I wrote my senior thesis on the working conditions of Peruvian sheepherders in California and Nevada. My first legal job was working at a place called the Migrant Farmworker Justice Project in Belle Glade, Florida, which is in South Florida. And then in law school, I continued to pursue these interests with a growing interest in civil rights. And so I spent a hot six weeks on Senator Kennedy's judiciary subcommittee and then another six weeks on the NAACP Legal Defense Fund team for the summer.

So how on earth did I end up here? I got a job working as Senator Franken's chief counsel, and I started on his first day in office, July 7th, 2009. And there was this very funny moment that first or second week where me and the other counsels sat in a room and had to divide up the portfolio, and no one wanted privacy. It was the last item that no one wanted to claim, like the poor kid in gym class who gets picked last. I forget whether I volunteered for it or was "voluntold" to take it, but I took it. And pretty quickly I discovered that this dynamic you see in labor, this dynamic you see in civil rights, where a whole lot of dispersed people are up against concentrated power, was being replicated many times over when it came to technology.

Everyone loves their... Well, most people love their privacy, most people love their civil liberties, but it's not like they wake up in the morning thinking about how they're going to protect them. Whereas I think you do have tech companies that wake up in the morning, and their teams think, "Okay, well how can we get more data? How can we get more sensitive data? How can we monetize that data?" And it's a hard balance to strike. But I think all of this came to a head when Ed Snowden started leaking documents and there was this reexamination of what the rules should be around government surveillance. And I always expected there to be big attention on Capitol Hill to how government surveillance affected historically marginalized groups, be they religious minorities or ethnic or racial minorities, and that never happened. And I was a part of it; it never happened.

I didn't raise those issues for my boss in hearings. And after I left the Hill, I had a moment to look back and read those transcripts and think how striking it was that we were having this conversation about racial justice and civil rights (this was 2014, 2015) alongside this conversation about civil liberties and privacy, and never did the twain meet in the United States Senate. At that point, I decided to dedicate a lot of my attention to what privacy and surveillance mean for the "rest of us," for people who don't have a particularly significant amount of power or influence and who may be unpopular or unliked, depending on who's in power. And that set of interests circuitously brought me to the Federal Trade Commission, where this is one of a couple of things I care a lot about in my work.

Justin Hendrix:

We're going to talk about algorithmic fairness and algorithmic justice in particular, but I do want to ask you about that broader question around civil rights and privacy protections. What is missing, in your view, when it comes to the types of protections that should be on the books in the United States?

Alvaro Bedoya:

That's a really good question. Let me offer two sets of thoughts. The first is as to what is missing. Look, like most of my colleagues, I have supported comprehensive privacy legislation on Capitol Hill. I think it was a real shame that it wasn't passed last Congress, and it is high time that we fill the gaps of the sectoral privacy regulation system in the United States. For everyone listening who doesn't come from the world of privacy: the way the United States has regulated privacy historically is by passing sectoral laws that address video privacy, credit card privacy, cable company privacy, kids' information privacy. And underlying that, what my institution, the Federal Trade Commission, has done is take our authority to stop unfair and deceptive trade practices and use it to fight privacy invasions and to fight security breaches and security lapses when they substantially injure people. But we don't have a baseline set of protections, and so we need something to fill those gaps.

Additionally, I think we urgently could benefit from greater protections around fairness in the decisions that are made about people when opaque automated systems make choices for them about their job, their housing, their healthcare. I don't particularly care if the algorithm that models a pair of glasses on my face when I'm buying new glasses online misfires. But if a system is deciding how much I should be paid, whether I will be able to get a rental, or whether my insurer is going to cover treatment for a very serious injury I suffered, I do think we need clear rules of the road for those scenarios. So I think those are some gaps.

But the second thought I want to offer is that I'm really proud of the work we do on competition. I think it is critical. I now wake up in the morning and think about antitrust in a way that I absolutely did not a couple of years ago. But what our director, Sam Levine, and what our individual shops at the FTC are doing, whether it's the Division of Privacy and Identity Protection, the Division of Financial Practices, the Division of Advertising Practices, the Division of Marketing Practices, or the Division of Enforcement, is no less important than what we're doing on competition, because we are taking the letter of the law and thinking aggressively about how we can use that letter to protect consumers comprehensively. And so the Rite Aid settlement, which I'm looking forward to talking about, is, I think, an example of us taking this law and applying it to a grievous harm in a way that clarifies how consumers are protected against algorithmic harms, in a way that had not been made clear, or as clear, previously.

The second message I want to strike is: yes, there are gaps. But what American consumers have at the FTC is career staff and leadership who are using every single tool, little and big, at our disposal to do everything we can to protect folks.

Justin Hendrix:

Let's get into FTC v. Rite Aid. You have written that you regard it as a baseline for what an algorithmic fairness program should look like. What does this action signal, in your view, and what should other companies take from the language you've put out?

Alvaro Bedoya:

We're used to algorithmic systems recommending the road to take on our way to work. We're used to algorithmic systems highlighting email messages in our inboxes. It's 2024; that's old news. What I am really worried about is decision-making that is invisible to people and yet tremendously more consequential. When you put in that job application, no one tells you the algorithmic system said not to hire you, right? You never get that explanation. Similarly, with some of the allegations you hear around healthcare coverage decisions, people stumble upon the fact that the decision came from an algorithm.

So there's a very powerful set of allegations made by the family of Dr. Gary Bent, who was a research physicist at the University of Connecticut. He suffered a recurrence of melanoma and went to the hospital; the surgeon operated and said, "I now prescribe to you a course of treatment for acute rehabilitation services." And then a week later, they got a call from the insurer saying, "No, we're not going to cover that, and we're not going to cover that because we don't think that Dr. Bent can survive this therapy." And none other than Senator Richard Blumenthal has alleged that this was all a result of an algorithmic recommendation made by that insurer.

So why is Rite Aid so important against this backdrop? Because it constitutes the first instance in which the Commission has made clear (and I think this authority has been there for a long, long time) that substantial injuries from algorithmic harms are absolutely covered by our unfair and deceptive trade practices law. Our authority to stop unfair and deceptive trade practices dates from 1938, and it was an expansion of an earlier authority against unfair methods of competition, from 1914. But in 1994, Congress came in and said, "Hey, look, if you substantially injure someone, they can't reasonably avoid it, and the harm is not outweighed by countervailing benefits, that equals unfairness."
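The test Bedoya paraphrases is the three-prong unfairness standard Congress codified at 15 U.S.C. § 45(n). As a rough sketch of its conjunctive structure (hypothetical code, not anything the Commission publishes), it reduces to a three-input predicate:

```python
def is_unfair(substantial_injury: bool,
              reasonably_avoidable: bool,
              outweighed_by_countervailing_benefits: bool) -> bool:
    """Rough paraphrase of the 15 U.S.C. § 45(n) unfairness test.

    All three prongs must cut against the practice: it causes substantial
    injury, consumers cannot reasonably avoid it, and the injury is not
    outweighed by countervailing benefits to consumers or competition.
    """
    return (substantial_injury
            and not reasonably_avoidable
            and not outweighed_by_countervailing_benefits)
```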

And so, in Rite Aid, we took that legal formula and applied it to a face surveillance system that the pharmacy had rolled out. That system resulted in an 11-year-old girl getting stopped and wrongfully searched because, in the eyes of this completely secret system, she was a person of interest who had potentially shoplifted in the past, which was not true. It resulted in a Black woman getting stopped, effectively detained, and having the police called on her based on a probe image, or pardon me, a library image, of someone the employees later described as a blonde lady, a white lady with blonde hair. It resulted in people not being able to get their prescriptions because they were falsely flagged for things they did not do.

So we took that legal formula, applied it to this algorithmic recommendation system, and said, "This is unfair. This is illegal." And we reached a settlement. That's the first reason this is so important: across industry, companies are beginning to use algorithmic decision-making systems, and to use them in very important ways. And we are clearly, unanimously, and with a strong unified voice saying, "You are breaking the law if you injure people in this way using these systems." Secondly, we are putting in place a very rigorous system of monitoring if the company decides to take up this practice in the future. And by the way, we're banning them from doing this for five years outright. But if they decide to do it, they have to test pre-deployment and post-deployment, annually. They have to test for bias not just on binary categories of Black versus white, man versus woman, old versus young, but on all of those categories in combination. And any expert on algorithmic bias will tell you that that's where bias lives.

It's easy to hide bias when all you do is present a series of binaries, but when you combine the fact that all of us have an age, all of us have a gender, all of us have a complexion, an ethnicity, that's where bias lives. We are requiring an assessment of where bias and inaccuracy are coming from: hardware, software, training data, probe data. That's why this is so important. It's not just a signal to the legal community that there is no algorithmic discrimination carve-out to unfairness law, but also a concrete, staff-developed regime for monitoring and preventing this conduct in the future. That's why I think it's so important.
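To illustrate where combinatorial bias can hide, here is a minimal, hypothetical sketch in Python; the numbers are invented for illustration and have no connection to the Rite Aid matter:

```python
# Hypothetical false-match counts for a face surveillance system, broken
# out by intersectional subgroup. All numbers are invented; none come
# from the FTC's case.
counts = {
    # (gender, age band): (false_matches, total_scans)
    ("man",   "young"): (40, 1000),
    ("man",   "older"): (10, 1000),
    ("woman", "young"): (10, 1000),
    ("woman", "older"): (40, 1000),
}

# Single-attribute ("binary") view: aggregate over the other attribute.
for attr_index, attr_name in [(0, "gender"), (1, "age")]:
    totals = {}
    for key, (fm, n) in counts.items():
        group = key[attr_index]
        prev_fm, prev_n = totals.get(group, (0, 0))
        totals[group] = (prev_fm + fm, prev_n + n)
    for group, (fm, n) in sorted(totals.items()):
        print(f"{attr_name}={group}: false-match rate {fm / n:.1%}")

# Intersectional view: the same data, broken out by combination.
for (gender, age), (fm, n) in sorted(counts.items()):
    print(f"{gender}/{age}: false-match rate {fm / n:.1%}")
```

Every single-attribute rate comes out to 2.5 percent, while the combined subgroups range from 1 percent to 4 percent; an audit that checks only one attribute at a time would miss the disparity entirely, which is the motivation for the order's requirement to test categories in combination.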

Justin Hendrix:

One of the questions folks may have is around volume, and the extent to which the FTC can take this model and apply it to all of the other companies that are using similar systems. You've hired more technologists; you've built some capacity for that. How much of the problem can you address?

Alvaro Bedoya:

One thing you don't see a lot of headlines for is something Chair Khan has done recently, which is really powerful: building out our Office of Technology in a really fulsome way. Stephanie Nguyen, our chief technologist, together with Chair Khan, is building out a strong Office of Technology, and that's absolutely a terrific step forward. At the same time, I do want to underline something that I shared in a public setting a couple of days ago: if you take the federal government's investment in the FTC as a ratio of the population we're charged with protecting and compare it to those of our peer countries, OECD countries, these highly industrialized, highly wealthy nations, the amount invested in us is a fraction of what's invested in our peers.

And so what should folks expect? They should expect that the folks in DPIP and elsewhere at the agency will do every single thing within their power to investigate these wrongs. But will we cover everything? No. That said, one of the many reasons I was excited to talk with you about this, and to talk to the folks earlier in the week, is that in an ideal world, we don't have to bring these actions, right? In an ideal world, we make it crystal clear to the private sector that algorithmic harms are covered by our unfairness authority, and they start reading the writing on the wall, reading the writing in that order, and implementing their own processes.

So absolutely, are we going to cover every single harm? No. The other thing I would add is that these investigations take time. If you're not familiar with law enforcement, and I'd never worked at a law enforcement agency before, it can seem interminable from the outside. But when you're on the inside, you need to give the parties due process. You need to request documents, obtain documents, process documents, and all that takes time. So I think my best answer to you is that we will do every single thing within our abilities, and we're building out that capacity as we speak, but we can't cover everything. And that's why we want to tell everyone that the law applies whether or not an enforcement action has been brought against you.

Justin Hendrix:

How do you think about the FTC's relationship to other regulatory agencies in the federal government on these types of questions? Looking at the AI Executive Order, there wasn't necessarily an enormous amount of focus on the FTC and its role, but within that constellation of different agencies, how do you think the FTC's role evolves?

Alvaro Bedoya:

What you mentioned about the Executive Order may come from the fact that we don't report to the president. We are an independent agency, and that's why you see language in that order encouraging the FTC to do X or Y or Z: because the president literally cannot tell us to do X or Y or Z. So I think that's why you see a little bit of that in the Executive Order. But between Director Rohit Chopra at the CFPB, our colleagues at the Department of Labor, the NLRB, and Chair Charlotte Burrows at the EEOC, there are any number of peer institutions and agencies that are deeply involved in this. And another innovation under Chair Khan is that we've developed, and she has signed, MOUs with the leaders of a lot of our peer agencies and institutions to facilitate information transfer, which is something that, in the absence of these MOUs, takes some time because we are a law enforcement agency.

And if I could give you one concrete example, outside of the consumer protection context, of how our authorities are complementary: I recently gave a talk on misclassification and how it may constitute an unfair method of competition. Misclassification is when someone really is your employee (you're telling them what to do and how to do it, and you're seeing the upside when they do a good job), but instead of giving them a W-2, benefits, overtime, and employment protections, et cetera, you're saying, "Oh, no, no, no, you're your own business. Here's a 1099; you file your own taxes." DOL has historically been charged with enforcing against those violations of the law, and it works in conjunction with the NLRB on some of the labor issues that result. But if a business uses misclassification as a method of competition, as a strategy to compete, then absolutely our authority against unfair methods can come into play.

I'm explaining all this because our authorities are complementary. DOL and NLRB can only go after someone after the harm has occurred. One of the extraordinary things about our unfair methods authority, from 1914, is that Congress deliberately decided to give us the ability to stop anticompetitive practices in their incipiency, before the harm is cemented against consumers and market competitors. So I think in any number of ways our authorities are broader and more nimble than some of the authorities our peer agencies have. And in that way, I think they can be complementary.

Justin Hendrix:

I want to ask you about what you're watching abroad as well. The EU is patting itself on the back for having brought its AI Act through its complicated legislative process, and it appears to be moving forward. Your counterparts in other governments are also taking action in some cases against algorithmic unfairness or algorithmic harms. Who do you admire out there, or what actions have you looked at and said, "That's instructive for us"?

Alvaro Bedoya:

I look to our colleagues abroad, not just in Europe but elsewhere, as sources of ideas and as laboratories, much like you might look to the states for ideas. And so when I look abroad, I'm curious what ideas are being deployed and why. Now, I'm in the process of familiarizing myself with the AI Act, so I won't comment on that here, but I will offer two ways in which I have benefited from looking abroad: one is to Ireland and the other is to the UK.

So for Ireland: their media regulator, Coimisiún na Meán, and the listeners will forgive me for not attempting to pronounce the Irish name, recently proposed a guideline that algorithmic content recommendations based on gender, sexual orientation, gender identity, political beliefs, race, or ethnicity should be off by default. Okay? I think that's a very elegant proposal. Why? It's not banning anything. It's not banning any content recommendations. It's not telling the platform, "Thou shalt structure your content recommendation systems in the following way." It's simply saying, "Hey, turn them off by default." If consumers want that, they'll go in there and turn it on. And I'm not saying I want to apply that to all Americans, but I've spent a lot of time thinking about teen mental health online. And frankly, I think that as a policy matter, again, not a law enforcement matter, setting that toggle switch to off for teens 13, 14, 15, 16, et cetera, might be pretty darn positive. And so that's something that I think is very compelling.

The other thing is the UK's Age Appropriate Design Code. Now, I know there's a bunch of folks I respect deeply and admire who have been very critical of the code, and particularly of its potential impact on LGBT folks. So I'm not proposing a wholesale adoption over here of everything the UK has done over there. But I like the idea of putting the burden on the company to protect a teenager's mental health and wellbeing, and I think it is a very positive move to think about these things that way. Because, just looking at privacy: I learned privacy in 2009, 2010, 2011, et cetera. That was the time of opt-in and opt-out, of notice and choice: "We're going to tell you what we're doing, and then you choose." And you know what happens: everyone just clicks yes, yes, yes, yes, yes. Right?

The idea is, "Oh, we'll give you the choice and you choose." And look, am I saying we should do away with choice? Absolutely not. I think people do need choices, but the burden should not be on the individual consumer to protect their privacy. And by the same token, I don't think the burden should be on the individual teenager or the individual parent to do their own research, so to speak, and read JAMA to figure out how to protect their teen's mental health. No. I think the burden should be on the company to protect the teen's wellbeing. So that is regulatory innovation that I'm watching closely and trying to learn from, based on the experiences abroad.

Justin Hendrix:

I know we only have a moment left, and I want to ask you a bigger-picture question that you may or may not have an answer for, but I'm hoping that maybe you'll give it a go. I'm very interested in how we conceptualize the future when we think about tech and tech policy. So much of what I do, of course, is criticizing the state of the present, criticizing or analyzing harms that are underway. So much of what the FTC does, of course, is to look at phenomena as they exist in the world at the moment and to bring enforcement actions when necessary. It's about acquiring evidence and validating harms. How do you think about the future, or even, I suppose, how does the agency think about the future? What role does the future play in what you're doing? Are there efforts to conceptualize different futures? Are there ways of doing that?

Alvaro Bedoya:

That's a neat question. Let me answer in two ways: one as a matter of principle, and then one as a very specific example of what I think a pretty all-right future looks like. Principle first. I hope for a future in which people are in control, where they feel like they understand why the decisions being made about them are made, they have a sense of control in those decisions, and they feel in control of the technology. And so it's a world where, yes, there are some data I'm producing that are collected by others, but I know what data is out there. And I also know that there are ground rules that the companies will be held to account for if they break them. And when I log on, I don't have this feeling of pop-up fatigue, of "I really don't know what's happening here, but whatever, everyone's doing it, so I guess I'll do it." No, I have a sense that I know what people know and why. But most importantly, regardless of what I do, I know I'm protected.

And thinking more in terms of antitrust, I want a world where companies win because they do right by consumers and because consumers like their product, not because they happen to be the biggest incumbent on whatever platform you're using that day, and they happen to put this AI assistant or whatever right in front of you, and as a frank result of it being convenient, you end up using that system while all sorts of other more exciting, better-functioning systems go to waste. This is like the Betamax versus VHS thing. I don't know anything about audio/visual, but anyone who does always says Betamax was actually the better one, but it lost out. And in my view, that kind of sucks. It doesn't kind of suck; it really sucks, right? Because you want the best technology to win out.

And so that's the vision of the future I'm excited for, with a big, thick layer on top of it being for everyone: not just the tech-savvy people who wake up and think about technology, but everyone. People working entry-level jobs, people who just retired. That sense of control applies regardless of who you are. Let me give one example I've been pointing to recently in conversations of what an all-right future might look like when it comes to generative AI. I'm frankly a little creeped out by the use of generative AI to revive the voices of people who are long deceased. I don't mean to impose that view on anyone, I get that. But here's an example that I thought was so powerful and frankly wonderful, and that was the use of Andy Warhol's voice in the documentary that I think was recently on one of the big streamers.

Why do I think that? First of all, because my understanding, and I'm sure I'll be corrected after the fact if I'm wrong, in which case I take all this back, is that Mr. Warhol's estate was consulted to get consent for the use of his voice. So there was consent from his family and the folks responsible for administering his legacy. Number one. Number two, consider who Andy Warhol was. You cannot get to know Andy Warhol and not fundamentally understand that he loved being out there. He loved trying new things. MTV rolled around, and he got himself an MTV program. He noticed that modeling was in vogue, and so Andy Warhol became a model at the age of 40-something, 50-something, as someone who himself said to the world, "I don't consider myself particularly attractive, but I'm going to go do it." And he loved the idea of eternal fame, or was intrigued, as an artistic matter, by this idea of fame and celebrity and eternity and trying new things.

So you have this person who loved the idea of the future and of things persisting beyond the length of one's lifetime, and who had written these diaries that were clearly intended for a public audience, even if kind of as a work of art. And so ultimately what ended up happening is the documentary recreated Andy Warhol's voice and showcased it throughout, and it worked on so many levels. On an artistic level, you got the sense this was the final performance of these diaries that Warhol had maintained his whole life. On a legal level, this was done with the consent of his family. And on a visceral, spiritual level, well, what do I know about spirituality? But I certainly, as an individual observer, got the sense that, yeah, Andy Warhol would like this kind of thing.

And so often as law enforcers we're called upon to bring the hammer down on a technology, and that doesn't mean we don't think this stuff is cool. I think voice recreation technology is really fascinating, really cool. And I think when it's used in a way that pays attention to consent and pays attention to who the people are, it might be a great tribute, but I also think it can misfire pretty dramatically. So that's the kind of future I hope for: a future where we can get the fruits of these extraordinary innovations while paying attention to people's desires, their sense of control, and their desire to have a sense of control over something as intimate as their voice and their identity.

Justin Hendrix:

And the FTC has just done a Voice Cloning Challenge.

Alvaro Bedoya:

The Voice Cloning Challenge, yes, we did. I wish I knew off the top of my head whether the submission period is still open. I believe it is. But I would encourage anyone who wants to look into this to submit, because we do need more input.

Justin Hendrix:

So that's perhaps one way the agency is inviting a little taste of the future in. I'll encourage my listeners to check that out. Commissioner Bedoya, thank you so much.

Alvaro Bedoya:

Thank you.

---

Note: according to a report in Wired, Commissioner Bedoya is correct that the makers of the documentary The Andy Warhol Diaries did in fact seek the approval of the Warhol Foundation. The company that created the model of Warhol's voice is called Resemble AI.

Unfortunately, submissions for the FTC's Voice Cloning Challenge are now closed, but the results are due in early 2024.

The FTC is still inviting public feedback on an additional notice of proposed rulemaking aimed at banning the impersonation of individuals, which would expand the scope of a new rule on government and business impersonation that the Commission is finalizing. The FTC says it has seen a significant increase in complaints related to impersonation fraud and widespread public concern over the damage it inflicts on consumers and the individuals being impersonated.

For a pronunciation of the name of Ireland's media and online regulator Coimisiún na Meán, provided by an authentic Irish voice, listen to the episode.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
