Considering a New 'Civil Rights Approach to AI'
Justin Hendrix / May 29, 2025

Audio of this conversation is available via your favorite podcast service.
Today, the Center for Civil Rights and Technology at The Leadership Conference on Civil and Human Rights released its Innovation Framework: A Civil Rights Approach to AI, which it calls a “new guiding document for companies that invest in, create, and use artificial intelligence (AI), to ensure that their AI systems protect and promote civil rights and are fair, trusted, and safe for all of us, especially communities historically pushed to the margins.” I spoke to the Center’s senior policy advisor on civil rights and technology, Frank Torres, about the framework, the ideas that informed it, and the Center’s interactions with industry.
What follows is a lightly edited transcript of the discussion.

Source: The Leadership Conference.
Frank Torres:
Frank Torres. I'm the senior policy advisor on Civil Rights and Technology at the Center for Civil Rights and Technology at the Leadership Conference.
Justin Hendrix:
Frank, can you tell me a little bit about the Leadership Conference for any listener who isn't aware of it? What does it get up to, and why does it concern itself with technology?
Frank Torres:
Yes. The Leadership Conference on Civil and Human Rights is a leading civil rights organization founded during the civil rights movement to bring together a coalition of stakeholders. We now have over 230 members in our coalition. Our mandate, essentially, is to fight for the civil rights of everyone. We do so on a bipartisan basis and across the board. Whether it be in the criminal justice system, in education, in healthcare, in financial services, in housing, we are there. And because of the recognition that technology was having an influence across all of those areas, the Leadership Conference started up the Center for Civil Rights and Technology. So we're positioned at the intersection of technology and civil rights. When it comes to things like AI, the Center has been very much engaged in working to ensure that the technology works for us all and works fairly.
Justin Hendrix:
We're going to talk a little bit about this new Innovation Framework: A Civil Rights Approach to AI that you've published this week. You get into this in the document itself. You have a little section called Why This Framework, Why Now? But I assume you put this document together, to some extent, before even the last couple of weeks. It feels like AI policy and AI governance have been very much in the news. There's a moratorium on state AI legislation, or on the enforcement of state AI legislation, that's moved through the House of Representatives. So much is happening on this topic. Maybe you can bring us up to date on the why now. Why did you write this report?
Frank Torres:
So we wrote this report now, and of course, as you point out, it's more timely than ever, because what we saw was AI being used. It's no longer a nascent technology; it's no longer brand new. It's being used across the board, whether it be in housing or in healthcare or in education or in criminal justice or other areas, to make consequential decisions that impact people. And there's recognition and evidence that shows there are harms that happen. There are risks to AI, especially when there are no guardrails in place. And so, we see people paying more for loans. We see bias and discrimination. We see harms to communities that are marginalized or are discriminated against because of their race or ethnicity or who they are, versus being treated fairly.
So it occurred to us that it's important ... And while industry for years has recognized that there are risks to AI, and companies have developed all sorts of principles around the responsible use of AI, what we saw in some cases was the need to then put those principles into practice. The other thing we saw, and started to recognize, was the need to get a focus on civil rights. Most of the risks or the harms that we've seen coming from AI have been because the AI has treated people unfairly. It's been biased or discriminatory. And so, we created the framework as a new initiative to work with companies to embed civil rights as they develop and use these new technological tools.
Justin Hendrix:
We're in a situation where the current administration is ending science around the study of bias and discrimination. You've had this group of independent scientists, technology researchers, who've had to come out and defend even the idea of doing basic science on these issues. I know one of the things that you're hoping for here is that this will be voluntarily adopted by industry. Are you still feeling the pull from industry? Are companies still opening the door to hear about this framework, to hear about these ideas? Are they actively interested in implementing any of them?
Frank Torres:
I think it depends upon the particular company. What we're really after here, though, and I think companies understand this, is that they will be successful if the consumers, the people that are either using the technology or subject to the technology, can actually trust it, right? Nobody wants to put a car out on the road, whether or not there are regulations around it, that's going to crash. That's dangerous. That's going to harm people. Then there's reputational risk. Yeah, there could be some liability risk.
Now, we have rules in place across the board in many industries where people could get harmed, and AI is now no different. And that's why the moratorium is also potentially really harmful. Congress has had ample time to act to protect the public both in terms of data protection and AI protection, and it has failed to do so. And the states have stepped in and filled that role to protect their citizens. And if Congress doesn't want to act or can't act or they don't think they can act or they can't get their act together to act, then they shouldn't prevent the states from acting to protect us all.
Justin Hendrix:
So this framework revolves around the development pipeline for AI, this idea of the life cycle pillars. There are 10 of them, I understand. Can you take us through, in basic terms, the life cycle pillars? How does this work? If someone were in an elevator with you and you wanted to describe how this framework would apply in their context, what would you tell them?
Frank Torres:
Sure. First of all, I'd back up slightly, because I do think it's important to think about what we're talking about from a very foundational level. And so we have what we're calling foundational values. These are the almost timeless, long-lasting values. If you're in the C-suite, if you're a company leader, you're looking at long-term plays, and these foundational values get to that. They do exactly that. They lay a strong foundation for the life cycle pillars, which get more into how to operationalize the foundational values, how to bring those to life.
I worked for Microsoft for a number of years, and I remember talking about these issues with engineers. Microsoft was one of the companies that came up with responsible AI principles, and we took those to the engineers. And the first comment the engineers had was, "We agree with all of these. We agree that AI should be fair, that it should work, that there should be transparency, that we should be speaking with communities. But tell me how I'm supposed to implement all of that. Tell me how I'm supposed to put that into practice." And that's what we're trying to get to with the life cycle pillars.
In brief, the foundational values, which I think are very important for folks to know, are that there's civil and human rights by design, right? That this needs to be built into corporate thinking from the get-go. Recognition that AI is a tool, not the solution. It's not going to solve all of our problems, but it can be helpful in getting to good solutions. That humans are integral to AI, that humans ultimately need to be part of that decision-making process. And that innovation must be sustainable. This doesn't just mean environmentally sustainable. It's got to be socially sustainable as well. We've got to ask, is this technology really benefiting us or is it hurting us? And get to some of those questions.
When it comes to the pillars, that's a really good question. The pillars are almost the core of what we're talking about in the Innovation Framework. They really focus on a couple of key things. While there are 10 of them, I'll go over just a few of them briefly. One is a recognition that it's important to be intentional about thinking about the impact that the technology is going to have, especially on marginalized users. And you can do this in a couple of ways. You can sit around as product developers and try to posit, "Okay, how is this AI system that I'm developing or designing going to be used? Which populations could it be used for? What are the potential risks there?"
But then, during the process, at some point in the life cycle of the AI development, you should actually talk to those communities. We saw a beautiful example of this through the years with companies proactively reaching out to the disabled community, and they've improved their products. They figured out where the failures of existing products were. And now I think there are lots of companies that support and have developed products that are great for the disabled community, but the companies also found that for an aging population, those tools are also very important, being able to increase the brightness of your screen or increase the font size. It's good for a lot of people.
In the development phase, a key point is to take a look at things like training data, because if you're using biased data to train your AI systems, it could result in biased outcomes and biased algorithms. You also should be looking at, if AI systems require massive amounts of data, how are you going about protecting that data? But perhaps most important, from our perspective, from a civil rights perspective, is how you assess for bias and discrimination in the tools that you're creating, and how you're viewing that. And then once you create the product, how are you monitoring it to make sure that it's not biased, that you're not getting biased outcomes, you're not getting bad outcomes, that you're not harming people, and what measures are you taking to protect against that? Those are some of the key elements that we're talking about, both in terms of the values and in terms of the life cycle pillars, or how you go about operationalizing what we're calling for here.
Justin Hendrix:
Can I ask you to maybe elaborate a little bit on the shared and distinct responsibilities that you set out between AI developers and deployers? There's a lot of argument and discussion about that. Where does liability lie? When harms or biases are introduced, is it with the foundation model developer, the tech firm that's purveying the actual fundamental AI tool, or is it with the intermediary company that may develop a specific application? How do you think about that in this framework?
Frank Torres:
If you step into the shoes of a person who's gone into a bank, or a person who's gone into a doctor's office, and they're using AI as a tool, and you either get a bad diagnosis, or you don't get admitted to the hospital when you should because the AI is biased, or you don't get the loan or you end up paying more for the loan because of a bad AI system, it doesn't matter who's responsible for it, right? At the end of the day, you want to be treated fairly, and if you're not, you want to be able to figure it out. So there needs to be transparency, and you need to have a way to fix that, to be treated fairly. I don't mean to give you a flip answer there, but from the consumer or individual perspective, it doesn't make any difference.
Now, I realize that in the business community, between the companies that are creating these AI tools and the companies using them, there is a big debate, because there is at least a recognition that somebody needs to be responsible, and the question becomes when and where. I think there's still some discussion to be had there, but a lot of these players have been at this for a number of years now, discussing these issues. And clearly, if a Microsoft, or a Google, or an Amazon, or whoever, is creating AI tools that they just put on the shelf and somebody comes and gets and uses, then I would say the developer has some responsibility to make sure that those systems work, that people are aware of the capabilities and limitations of the tools that they're creating.
Now, if a tool is made for a certain purpose and the capabilities and limitations are clear, and somebody decides to use it for something completely different, say I'm using a predictive tool in the retail sense so I can tell whether or not my customer wants shoes after they buy a pair of socks, that's not going to be the right tool for a doctor to use, or a law office to use, or a judge to use in the criminal justice system to determine whether or not somebody should get bail. That tool's not fit for purpose. And in that case, the user of the technology, the deployer of the technology, should bear some responsibility. So when we call it a shared responsibility, the developer has some responsibility as they develop the tool, and the user has some responsibility when they use the tool. But what we don't want to have happen is the individual gets stuck where the developer and the deployer are pointing fingers at one another and nobody's responsible.
Justin Hendrix:
One of the things you encourage is community engagement and co-design. You call out the Design for the Margins methodology, which is something we've discussed in other podcasts on Tech Policy Press in the past, including with Shana Rigo. Are there other models that you've observed for compensating and supporting historically marginalized communities in the design process when it comes to AI?
Frank Torres:
The Design for the Margins is the one. And again, here's one where I think there's still more work to be done to sort out the best ways to do that. Thankfully, there is recognition that those sorts of discussions are important, but we recognize that there's more work to be done to figure out a good way to bring in community voices.
Justin Hendrix:
There's always a question of accountability and enforcement when it comes to these types of frameworks. It's a bit of a carrot, of course; you want to encourage people to have those foundational values, to follow the life cycle principles that you set out. But I don't know, where do you bring in accountability? Where are the limits of what self-regulation can produce or ensure?
Frank Torres:
Yeah, so again, another really good question, and there are a few things that we considered when we created the Innovation Framework. We wanted a framework that is actually operational. So during the development of the framework, we did have discussions with folks from industry as well as civil society, our civil rights colleagues, about the right way to view this.
The reason why we reached out to industry players across the board, both those developing the technology and companies using the technology, is that we wanted a realistic approach to this. We wanted something that wasn't just aspirational, things that companies could actually implement. And in doing so, we think this would be a good tool to hold companies accountable. Certainly, if Congress isn't going to pass legislation that would set the rules of the road in a way that's enforceable from that perspective, we can certainly use other mechanisms and the bully pulpit that we have to hold companies accountable for whether or not they're living up to these life cycle pillars and what's in them, which I would argue is also holding many of them accountable to their own published principles around AI. And so, there are different ways to hold them accountable. It's certainly more difficult if Congress continues not to act at the federal level, and hopefully the state rules will still be in force as we move forward.
Justin Hendrix:
I want to ask you about a couple of the underlying assumptions here as well. In one part of this, you say, for instance, that in order for innovation to be sustainable, it has to be environmentally, socially, and economically sustainable so it can provide long-term benefits. Environmental considerations are top of mind for a lot of folks who are looking at AI, and we even see a strain of folks who are essentially arguing for abstinence from artificial intelligence entirely over environmental considerations. I don't know, is there any way for folks, either through your framework or even beyond it, to think about that? It seems like it's a struggle at the moment to know how to ... This is a bit of a clunky question, but if I want to be responsible in the deployment of AI, what do I do with that sort of fundamental problem if the technology itself is fundamentally unsustainable, or fundamentally counter to environmental, social, and economic sustainability?
Frank Torres:
So another really good question, and certainly we're really starting to dig into the environmental aspects of the use of this technology, because it is having an impact. And you can take a couple of different approaches. You can take the approach of, "Forget it. Having the technology is too important. We don't care what it costs environmentally. We want the technology, and so we're going to do it at all costs." And certainly that's one point of view. The other point of view is, "Hey, listen, we recognize that this is an energy-intensive undertaking. We know that data centers consume massive amounts of energy."
Our challenge then, and our challenge to the companies, would be: are you looking at ways to lessen the energy load that your data centers require? Are you looking at using alternative sources of energy? Are you looking at solar power or other things to power the technology? What's feasible? Technology advancements can move quickly if the will is there, and certainly some of the companies that are creating this technology are the biggest the United States has ever seen. They're making billions of dollars a year from these technologies. The challenge for them is, what are they doing to address some of these issues? Because ultimately, arguably, it's not sustainable over a long period of time to continue down a path of "environment be damned, we're going to do this at all costs." It's just not sustainable. So what are the companies doing to address that?
Justin Hendrix:
It does seem to me that with the big AI developers, whether it's the Googles, the OpenAIs, the Microsofts, et cetera, the moral burden really is on them to ensure that those products are not too dangerous to the environment. I don't feel comfortable right now saying that they're taking that moral burden off of the user. I still feel a little bit uneasy anytime I hit send on a prompt to a generative AI application. So I don't know. That feels like one where we've got a long way to go.
Frank Torres:
Yeah. Yeah, I agree.
Justin Hendrix:
What happens to this report now? You've laid out this framework. It's got various resources, norms, standards, processes that it refers to. It goes on the shelf next to many of those types of frameworks and formats that folks are using to try to assess artificial intelligence and to roll it out responsibly. How do you get this out into the business community?
Frank Torres:
Yeah, so it'll go on the shelf for a skinny minute, and then we'll be taking it off the shelf and taking it directly to companies. Some we've been in conversation with already, and we'll be going back to them with the framework and seeing what they will do to implement what we have in the report and how they're doing it. I think, moving forward, this is a way, as I said before, for people to hold companies accountable, and we'll look for ways to do that.
Justin Hendrix:
There are a lot of resources cited in here as well, in addition to the framework that you've put out, and many of them are from folks who've been on this podcast before or participated in Tech Policy Press in different ways. The Partnership on AI's set of guidelines around responsible use of synthetic media is one that I note. I know David Brody played a role in this document, and there are so many other people listed in it, in addition to the folks at the Leadership Conference, who've had something to do with it. I would encourage my readers to go and check this out, and check out the Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights. The report is called the Innovation Framework: A Civil Rights Approach to AI. Frank, thank you so much for talking to me today.
Frank Torres:
Thank you, Justin.