Securing Privacy Rights to Advance Civil Rights

Justin Hendrix / Apr 21, 2024

Last week, the US House of Representatives Energy and Commerce Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on the issues than it has been in recent memory.

One of the witnesses at the hearing was David Brody, who is managing attorney of the Digital Justice Initiative of the Lawyers' Committee for Civil Rights Under Law. I caught up with Brody the day after the hearing. We spoke about the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues that the Lawyers' Committee cares about, including voting rights and how to counter disinformation that targets communities of color.

"When you are writing privacy rules and data protection rules, you're establishing the infrastructure for online governance. These are going to be the building codes of the internet," said Brody. "And, we have an opportunity here to create a more fair and equitable internet that lives up to the democratic aspirations of the people that were imbued in the original theory and creation of the internet. If we miss this moment, we risk replicating the mistakes of the past. We risk allowing unfettered collection and processing of information in ways that profiles people, in ways that segregates people on the basis of race and sex and other traits, and in ways that allows algorithmic technologies to deny equal opportunity without people even knowing why they did or did not get a particular decision. It's really important that we get this right."

What follows is a lightly edited transcript of the discussion.

Rep. Gus Bilirakis (R-FL):

The subcommittee will come to order. The chair recognizes himself for an opening statement. Again, good morning and welcome to today's legislative hearing to examine solutions to protect kids online and safeguard Americans' data privacy rights. First, I wanted to welcome our new member.

Justin Hendrix:

That's the voice of Gus Bilirakis, a Republican congressman from Florida who serves as the Chair of the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce. Last week, the subcommittee held a hearing titled "Legislative Solutions to Protect Kids Online and Ensure Americans' Data Privacy Rights." Between the Kids Online Safety Act and the American Privacy Rights Act, both of which have bipartisan and bicameral support, Congress may be closer to acting on the issues than it has been in recent memory. One of the witnesses at the hearing was David Brody, who is managing attorney of the Digital Justice Initiative of the Lawyers' Committee for Civil Rights Under Law. I caught up with David the day after the hearing. We spoke about the challenges of advancing the American Privacy Rights Act and why he connects fundamental data privacy rights to so many of the other issues that the Lawyers' Committee cares about, including voting rights and how to counter disinformation that targets communities of color. Here's David.

David Brody:

My name is David Brody. I am the managing attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law.

Justin Hendrix:

And can you tell us a little bit about the Digital Justice Initiative? What do you get up to there?

David Brody:

The Digital Justice Initiative works at the intersection of privacy, technology, and racial justice. So the Lawyers' Committee as a whole is a racial justice organization focused on discrimination against black communities and other communities of color. And at the Digital Justice Initiative, we focus particularly on commercial data practices, algorithmic discrimination, government surveillance, online disinformation, online white supremacy, and other forms of hate that affect the ability of everyone to equally enjoy the internet and its goods and services.

Justin Hendrix:

David, you were on Capitol Hill, I suppose yesterday as I'm talking to you, for a hearing in the Energy and Commerce Subcommittee on Innovation, Data, and Commerce. I have to say it almost sounded like a cheerleading session at the outset. You had members essentially trying to raise the enthusiasm, I think, about the legislative interventions that were going to be discussed during the hearing. Did you feel it? Did you get a sense that something might actually happen?

David Brody:

Yeah, so I think there was definitely a change in tone from some past privacy hearings. So in the past we've had to spend a lot of time saying, "This is a problem. We need to recognize this problem. We need to do something about this problem." And now it really feels like particularly in this hearing, the conversation has changed. Everyone has acknowledged there is a problem, and now the conversation is, "What are we going to do about it?"

Justin Hendrix:

And so, one of the things that apparently the House would like to do about it is pass this new comprehensive privacy legislation that's before the chamber, the American Privacy Rights Act. You got into it with lawmakers on this subject, and one of the things you brought up, of course, is possibly the biggest barrier to it, certainly the biggest barrier to the American Data Privacy and Protection Act, which came before the APRA, which is the question of preemption. After the conversation yesterday, I don't know, do you think there's a chance that the question of preemption won't leave the APRA to meet the same fate that the ADPPA met?

David Brody:

Well, I wouldn't place any bets one way or the other. One thing we've definitely learned over 10-plus years of trying to get comprehensive privacy legislation is that it's very difficult to reach a bipartisan agreement. I think we are closer now than we've ever been before. And I think on preemption, the devil's really in the details. So I think most people would agree that you can't just preempt everything because data is involved with everything. And if you have some broad spectrum preemption, you're going to have a lot of unintended consequences where you break things at the state level that have nothing to do with the types of privacy regulations that a bill like this would enact. So you don't want to just come in and have field preemption that breaks a bunch of things.

At the same time, the grand bargain in ADPPA a couple of years ago, and also in this new bill, the American Privacy Rights Act, the grand bargain that is bringing both sides to the table, is something along the lines of, "Okay, we're going to have some limited preemption in exchange for having strong federal rules and a private right of action." Now, the key there is: are the federal rules going to be strong enough? In my ideal world, the preemption would just be a floor, not a ceiling, meaning have preemption that blocks weaker state laws, but says that states can enact stricter protections above and beyond the federal law. That's what we do in many contexts.

But I also am a big believer in not making the perfect the enemy of the good. And what we have to recognize here is that today, while there are some strong laws and regulations in California and a few other states, and a few states on the cusp of maybe enacting good new laws, like Maryland, the reality is that for the vast majority of the country, for the vast majority of people in this country, their protections are nonexistent or weak, and harms are happening every day. People are being discriminated against when they're looking for jobs, when they're looking for houses, when they're trying to get into school, even when they just walk into a store and have their face scanned by a facial recognition algorithm that doesn't work for black people or women or other groups.

We can't afford to wait. We can't afford to haggle forever trying to get something that may never happen. We need to protect people today. In that spirit, my approach to this is, "Okay, is this going to be good enough? Is this going to be good enough to trade some form of preemption in order to get these protections?" And what I said at the hearing yesterday, and I think this is the right approach, is that the federal law, at a bare minimum, has to be as strong as the protections in the states. So that means accounting for the new regulations and laws California has put into place. Some other states are innovating in the space. Washington passed its health data privacy law. I think I just saw a news report today about Colorado's new protection for brainwave data.

Justin Hendrix:

Yeah, first in the country, I think on that one.

David Brody:

That's right. And obviously Illinois is the national leader on biometric privacy. We have to make sure that a federal law accounts for all these things if we're going to have a conversation about preemption.

Justin Hendrix:

Seems like one of your fellow witnesses yesterday, Samir Jain from the Center for Democracy and Technology, agreed. I know he said something along the lines of preferring that federal privacy law set a floor, but that might not be the way things turn out. Let's look as well at this question of notice and consent. I know you talked a bit about this yesterday, and it seems like Congress might be essentially finally moving beyond notice and consent as the focus of privacy policy. That seems to make you happy.

David Brody:

Yeah, I think this is a good direction. One of the committee members, I think from the Republican side, I don't remember who, asked the panel what we think are the most important components of this bill. And most of the panel, almost everyone, said data minimization was one of the most important components, and that's because we know that notice and choice is a failed system. In almost every other area of consumer protection, we don't allow people to consent to practices that are affirmatively harmful. We don't allow people to consent to have more arsenic in their drinking water. We don't allow you to opt out of having airbags in your car.

We don't take a caveat emptor philosophy to letting new pharmaceutical products into the market. In pretty much every other area of law or area of consumer protection, we don't assume that the consumer has to have complete and perfect knowledge about how a product works. We just put into place this sort of trust that the product is going to work, it's going to be safe, and if it turns out that it doesn't work, if it turns out that it's not safe, if it turns out that it doesn't do what it claims to do, then we provide some remedy to the consumer. We don't expect consumers to understand how a catalytic converter works and check that all the spark plugs are properly connected before they drive off the lot at the car dealership. We have lemon laws. If the car sucks, you take it back.

Justin Hendrix:

I think the example you were using yesterday was maybe a little closer to home. You were talking about Bluey.

David Brody:

Yeah, we were talking in the context of children's privacy. In that context, it's even more difficult to do a notice and choice system properly. For adults, at least, we're already overwhelmed with the number of privacy policies and other things that we theoretically have to read and consent to, and no one does. But parents have no time. Parents don't have time to go down the rabbit hole on every random thing that is thrown at them and check every single terms of service. They need assurance that the products and services that they're using for their kids are safe. The example I gave in the hearing was, "Look, I just want my kid to be able to watch Bluey. I'm not consenting to someone building a dossier on them before they even learn to read." We just want kids to be able to watch a show. That's it.

Justin Hendrix:

What did you make of the conversation yesterday in terms of the intersection of comprehensive privacy on the one hand and kids online safety on the other?

David Brody:

The Lawyers' Committee, we don't have a position on any of the kids' bills, and that's for a couple of reasons. The first is that that's not necessarily our lane. These bills are not civil rights bills and they're not general privacy bills. And it's also that our focus has really been on enacting a comprehensive privacy law that protects everyone. I do think that is the right approach, because the challenge you get when you pass kids-only legislation is that not only are you not protecting everyone who's not within scope of that legislation, but you also create these weird incentives where companies now have to figure out who's a kid and who is not a kid, or, depending on how the bill is written, maybe the companies are going to bury their heads in the sand about who is a kid and who is not a kid. That creates a whole lot of extra complication, so if you just have protections for everyone, that gets settled.

Now that said, children are a particularly vulnerable group, just like seniors and some other groups, and it's totally appropriate to think about what extra protections kids need. So there are things like prohibiting targeted advertising to kids, having higher default settings, privacy settings, other settings for kids, limiting the uses for which kids' data can be used, things like that. One of the things in the American Privacy Rights Act is that it treats children's data as sensitive data, and that means that it cannot be used in various ways that non-sensitive data can be used. So that information can't be used for targeted advertising and things like that. That's the practical aspect of it. I think when you get beyond that into some of the other kid safety stuff and the design stuff, the devil's really in the details. Some of this stuff could be really great. Some of it could raise constitutional issues, but the devil's in the details.

Justin Hendrix:

So going back to APRA for just a moment, I want to ask you about your perspective on something that a couple of folks writing for Tech Policy Press have been thinking about, which is the list of permitted purposes included in the legislation. I think there are 15 or 17-odd permitted purposes for data processing listed there, things like exceptions for protecting data security, making legal claims, effectuating a product recall, fulfilling a warranty, etc. I've had a couple of people point out that this list is in itself problematic in some way; the fact that it's limited creates almost a sort of rigidity. Is this something you've considered? Anything here you would address?

David Brody:

I think that if you look at the list, it's very close. It's not identical to the list in the ADPPA from two years ago, but the permissible purposes are very similar, and I do think it should be seen in a similar light. Now, one thing I will note, and I was just trying to find my notes here to check, is that I think this got cleaned up in one of the technical drafts that just came out. What's important for the permissible purposes is that they should be gate-kept by the requirement that the covered entity is only processing data as is necessary, proportionate, and limited to such permissible purpose.

So the way to read this, if you're looking at Section 3(a), is it says, yada, yada, yada, "You cannot collect, process, retain, or transfer covered data," and then it goes into number one, "Beyond what is necessary, proportionate, and limited for a few things," and then there's number two, "For a purpose other than those expressly permitted in subsection (d)." This language has to get fixed a little bit, because the necessary, proportionate, and limited provision needs to apply to all those categories in subsection (d). That's how it was in ADPPA. That's a really important provision, because otherwise you're not really minimizing data. You're just authorizing purposes with no minimization requirement.

To the question of whether the list of permitted purposes is too narrow, I don't think it is. I think there are a couple of tweaks around the margins that need to get made. I think the researcher exception doesn't quite work as intended at the moment, and there might be a few small technical things like that, but I think the scope of the permitted purposes is actually pretty appropriate, and I haven't seen too many examples of something that should be permitted that would not fall within one of the enumerated purposes.

Justin Hendrix:

I know the Digital Justice Initiative gets up to more than just privacy and civil rights. You're also concerned with various other things: disinformation, voting rights, online hate, free speech issues. We're in the middle of an election year. There's already a lot happening in New York; we're already seeing the beginnings of a slightly more intense period, with protesters in downtown Manhattan and lots going on around the federal courthouse. As the election cycle picks up, what are your key priorities around issues like disinformation and voting rights in particular, and what other issues are you working on?

David Brody:

Yeah, so we are paying very close attention to the disinformation landscape and election integrity in particular. Actually, just an hour or two ago, we got a decision in our January 6th case. We represent several of the US Capitol Police officers who were injured on January 6th, and we are suing former President Trump and a number of the insurrectionists and groups that were involved with that.

Justin Hendrix:

This is Smith versus Trump?

David Brody:

Yes, this is Smith versus Trump, and we just got a decision a couple of hours ago saying that, I believe, we're going to be going forward with discovery. Trump had moved to try to stay discovery, and I think that got rejected. We are charging ahead with that case to hold the former president accountable for January 6th. We also recently settled our voter intimidation robocall case, which was National Coalition on Black Civic Participation versus Wohl. This was a case in which voter intimidation robocalls were sent to black voters and others in the 2020 election. The significance of that case is that we won on all of our counts for violation of the Voting Rights Act and other voting rights laws, and the court held that electronic communications like robocalls can violate the Voting Rights Act. They can constitute voter intimidation.

So that's a really important precedent that we're going to be paying attention to. As we go into this election cycle, we're going to be keeping an eye out for instances where bad actors might be trying to intimidate or deceive voters out of voting in the election: tricking people about where to vote, when to vote, how to vote, who's eligible to vote, or trying to threaten people or make them fear that it's not safe to go to the polls. We are keeping an eye on all that stuff. Our Digital Justice Initiative works very closely with the Election Protection Coalition, which is led by the Lawyers' Committee, and so we will be working with our Election Protection partners specifically to keep an eye on online disinformation and online voter intimidation efforts.

Justin Hendrix:

Thinking about the robocalls in particular, you look at all these automated systems, and we're beginning to see, even in other countries, the deployment of more micro-targeting systems that take advantage of generative AI, the ability to automatically create content and associate it with people's personal information. I suppose those are the types of things you're watching very closely.

David Brody:

Yeah. I am very concerned about this. It's one of the things I'm most concerned about, and it's not necessarily that I'm concerned that there's one particular deepfake of President Biden or former President Trump that's going to go viral and trick everyone, because generative AI doesn't make things more viral. It just reduces the cost of making things at scale, and so what I am concerned about is the reduced barrier to entry for bad actors to scale up disinformation and intimidation efforts. Like you said, some of it could be micro-targeted. Some of it could just be scattershot, where if you throw a thousand things out there and 1% of it sticks, that could still have a hugely disenfranchising effect.

And the types of scenarios I'm particularly worried about are, like I said, not necessarily something that's going to resonate at a national level, but I'm worried about fake messages that say there's been a flood at the polling place, voting has been delayed until tomorrow, or things like that where it's totally plausible and people just don't know if the information is authentic or not. Because the reality is, look, we have thousands of polling centers all over the country. It's totally foreseeable that one of them might have a pipe burst or the microwave catches fire or something, and they have to adjust, right? You have to move things to the building next door or accommodate things.

These things happen, but what we don't want is for bad actors to spread false information that tricks people out of voting. Just coming back to federal privacy, I really think it's important to think about this not just as an area of tech policy, but as the foundational area of tech and civil rights policy for this generation. When you are writing privacy rules and data protection rules, you're establishing the infrastructure for online governance. These are going to be the building codes of the internet, and we have an opportunity here to create a more fair and equitable internet that lives up to the democratic aspirations that were imbued in the original theory and creation of the internet. If we miss this moment, we risk replicating the mistakes of the past.

We risk allowing unfettered collection and processing of information in ways that profile people, in ways that segregate people on the basis of race and sex and other traits, and in ways that allow algorithmic technologies to deny equal opportunity without people even knowing why they did or did not get a particular decision. It's really important that we get this right, and it's also really important that we get it done, because as AI really starts to take off, we can't sit on our hands with regard to privacy. We saw what happened when we failed to regulate social media for the past 15 years, and we can't make that mistake again.

Justin Hendrix:

It seems to me that all the worst-case scenarios for persuasion and micro-targeting would certainly rely on all the personal data that's sloshing around out there, which perhaps this comprehensive privacy bill would address.

David Brody:

That's right, and what I would say here is discrimination runs on data. If you want to engage in any form of systematic discrimination beyond just one-off face-to-face interactions, that's going to involve data. It's going to involve data about who someone is, where they live, their personal traits, where they go to school, their health, their wealth, their education. Discrimination runs on data, and so if you want to combat systematic discrimination, you need to go upstream and regulate how that data is collected and used.

Justin Hendrix:

David, perhaps when we get a little further into the election cycle, we'll have you back on to talk about what phenomena you're seeing and maybe we'll see if there's progress on this comprehensive privacy legislation.

David Brody:

Absolutely. Thanks.
