Voices in the Code: Algorithms, People and Values

Justin Hendrix / Feb 5, 2023

Audio of this conversation is available via your favorite podcast service.

Today, we’re going to listen in on a panel discussion that took place at the end of last year, convened by the Knight First Amendment Institute at Columbia University. The Institute’s Research Director, Katy Glenn Bass, hosted a conversation based on themes from the scholar David G. Robinson’s first book Voices in the Code.

The book tells the story of how a group of patients, doctors, data scientists, and advocates worked together to develop a new way to match donated kidneys to transplant candidates, with the goal of making the process fair and open. It offers insights into how algorithmic systems that are heavily freighted with moral and political complexity can and should be developed with care, so as not to exclude the voices of non-technical stakeholders in the outcome, and it serves as a guide for policymakers concerned with questions of transparency, safety, and equity in such systems.

Panelists included Robinson, as well as scholars Deborah Raji (University of California at Berkeley) and J. Nathan Matias (Cornell University).

What follows is a lightly edited transcript of the discussion.

Katy Glenn Bass:

Welcome everybody. I am Katy Glenn Bass. I am the research director at the Knight First Amendment Institute. Thank you all so much for joining us for this. I want to welcome the really stellar group of speakers that we have with us today. David Robinson is a visiting scholar at the Social Science Matrix at the University of California at Berkeley, and a member of the faculty at Apple University. Nathan Matias is a professor at Cornell and a visiting scholar at the Knight Institute this year. We're very lucky to have him. And Deborah Raji is a fellow at the Mozilla Foundation and a PhD candidate at Berkeley. Finally, Arvind Narayanan was supposed to join us today, but unfortunately he is tending to a sick child at home, so he's going to have to miss it.

We're sorry that he can't be with us, but he did send around some very insightful comments in advance that I will draw from as we talk. And we are all here to discuss David's wonderful book, Voices in the Code, which details how one community built a life or death algorithm in an inclusive and accountable way. It is really an outstanding book and I would encourage you all to buy and read it. And with that, David, can you please introduce us to your book?

David Robinson:

Absolutely. Thank you, Katy. And thanks everyone. I'm just going to share my screen. Can I get a thumbs up from my fellow panelist if … Okay, great. It's visible. So, what I'm going to do in approximately the next 15 minutes or so is just to set the stage for a conversation about the book, which is not exactly the same as trying to summarize the entire book. But I just want to start with some motivation because I suspect a lot of our discussion may be about how the problem, that's the central story in the book, namely the allocation of transplantable kidneys in the United States relates to other problems that we care about that have a similar form where automated decision-making by software is doing something that is of civic import and where it would be ideal to do it in a way that is inclusive and accountable.

In past years I have worked on civil rights issues where that hasn't been happening. And like Deb, like Nathan, I'm part of a scholarly community and a scholarly discussion about how that goal in general might be achieved better, whether we're talking about a courtroom or a hospital or a social services office: all these places where people's lives are really shaped by these automated decisions and where we've seen a series of conspicuous examples of things gone badly wrong. The argument that I make in the book is that this case of transplant allocation is one in which, relative to many of those other cases, there's relatively more inclusion and relatively more accountability. And so even though it's a process that's far from perfect and an outcome that is in many ways far from perfect, it's something from which we can learn a great deal. So, to get us oriented, I want to start the story here at this airport Marriott in Dallas, Texas. On a chilly morning in February 2007, there's an all-day meeting happening inside, of hundreds of people involved in the transplantation of kidneys in the United States: you've got surgeons, nurses, social workers, data scientists.

But also, I should say, transplant recipients and living donors of kidneys are in this room. And they're changing the system that, across the United States, where there are a hundred thousand people waiting to receive a transplanted kidney, allocates each newly available organ. And the proposal that's on the table is to use something that they are calling LYFT (this was in the era before ride sharing): life years from transplant. And the idea is to maximize how many additional years of life are lived from this overall pool of organs. I should clarify: the list, the allocation we're talking about here, is about people who are organ donors and didn't pick out a recipient. And so the question is, the system in some way needs to identify who's going to receive the organ, because the donor did not. So, this isn't donations to loved ones. It's that there's a resource that's regarded legally and ethically as a gift. So, it's not a market commodity, but there's not enough of it to go around.

So, we have to do something. And anyway, it's often called a waiting list, but I should say it's in fact a matching process that involves a complex blend of medical, moral and logistical factors. Who's nearby, who's ready, who has the right blood type, but also, to what extent do we give everyone an equal chance versus maximizing total benefit? And here the areas you see are the likely amount of life that is gained if the organ is given to a younger or an older recipient. And what you can see is that the integral, the space between how long people would survive on dialysis (which is the non-transplant alternative) and how long they'd survive transplanted, is basically greater for younger people. And they made a very vivid illustration of what would happen if this life-year maximization plan were to be implemented. This, by the way, is the Scientific Registry of Transplant Recipients; you see the medallion in the lower left. This is an outside auditing group that doesn't run the system but just analyzes it.
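
To make the LYFT idea concrete, here is a minimal illustrative sketch, not the actual SRTR or UNOS model: it treats each candidate's projected survival with a transplant and on dialysis as given inputs and scores candidates by the difference, which is the "life years from transplant" quantity described above. The names and numbers are hypothetical.

```python
# Hypothetical sketch of the LYFT (life years from transplant) idea.
# Score each candidate by the expected extra years of life a transplant would add,
# so that a purely benefit-maximizing rule would allocate to the highest scorer.
# The survival figures below are invented for illustration; the real models are far richer.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    years_with_transplant: float  # projected survival if transplanted
    years_on_dialysis: float      # projected survival without a transplant


def lyft_score(c: Candidate) -> float:
    """Life years from transplant: the gap between the two survival projections."""
    return c.years_with_transplant - c.years_on_dialysis


candidates = [
    Candidate("younger candidate", years_with_transplant=30.0, years_on_dialysis=12.0),
    Candidate("older candidate", years_with_transplant=12.0, years_on_dialysis=6.0),
]

# A pure utility-maximizing allocation simply ranks by LYFT score.
for c in sorted(candidates, key=lyft_score, reverse=True):
    print(f"{c.name}: {lyft_score(c):.1f} projected life years gained")
```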

And they made it very clear that maximizing benefit would mean dramatically reducing the number of older people who would be able to get transplants, among those who need a transplant. But I want to take us forward in time, and this is the lunchtime speaker, Clive Grawe. He's a 54-year-old traffic engineer from Los Angeles who suffers from a rare genetic disease called Polycystic Kidney Disease, which causes the kidneys to break down over time. And what he argues is that he's taken good care of himself for decades. He's a marathon runner, he has been closely followed by doctors, and this plan, which is being implemented as he reaches the older part of his life, threatens to essentially punish him for not having needed an organ earlier in his life, when, as a younger candidate under the newly proposed system, he would have had a much better chance. So, he says, look, this age thing, this idea of halting, not halting, but greatly reducing transplants to older recipients, it's unfair and you shouldn't do it. He pushes back.

And by the way, at the time he gives this speech, he himself is in kidney failure, just above the need for dialysis in his level of kidney function. So, what happened to the plan? What happened to him? We'll come back to that in a moment. But the big question over a decade of debate, between about 2004 and 2014, was this balance between, if you like, the apple of equity, meaning giving everyone the same chance, and the orange of utility, meaning maximizing benefit. There were a lot of other things about the maximization of benefit that were controversial in addition to the elder piece. It also would've really rewarded people for being healthier in the first place when they came to need a kidney, since they're going to be more efficient converters of kidneys into life years saved. So, that meant that, by and large, the recipients would be wealthier, whiter, younger, more closely followed, and would have fewer comorbidities like diabetes or obesity or other things that might complicate a person's health in addition to kidney function.

So, anyway, this is the argument: do we maximize benefit or give equal chances, or, on a spectrum, how close do we come to doing either of those things? And here's that graph that I mentioned in Dallas. What you can see, and what they very clearly laid out for people, is that the age distribution of who would receive an organ would change. So, the pink is the current, that is to say the preexisting, rules at the time of this Dallas meeting. And the teal is the new idea. And what you can see is that for people in their 20s, the new rules would've more than tripled the share of organs going to them, from 7% of all organs to 23%. Whereas for people in their 50s, like Clive, the fraction of organs going to them would decline by roughly half.
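
One way to picture that spectrum is as a single tunable weight blending an equal-chance component with a benefit-maximizing component. The sketch below is purely illustrative, not the formula the transplant network adopted, but it shows how a small parameter change can swing an allocation from favoring equity toward favoring utility.

```python
# Illustrative only: a toy allocation score that blends equity and utility.
# A weight near 0 approximates an equal-chance, waiting-time-driven rule;
# a weight near 1 approximates pure benefit maximization (LYFT-driven).
# The real kidney allocation score is not this formula.

def allocation_points(years_waiting: float, lyft_years: float, weight: float) -> float:
    equity_term = years_waiting   # rewards time spent waiting, regardless of expected benefit
    utility_term = lyft_years     # rewards expected life years gained from the transplant
    return (1 - weight) * equity_term + weight * utility_term

# A hypothetical older candidate who has waited longer, versus a younger candidate
# with a larger expected survival benefit.
older = {"years_waiting": 5.0, "lyft_years": 6.0}
younger = {"years_waiting": 1.0, "lyft_years": 18.0}

for weight in (0.0, 0.5, 1.0):
    o = allocation_points(older["years_waiting"], older["lyft_years"], weight)
    y = allocation_points(younger["years_waiting"], younger["lyft_years"], weight)
    winner = "older" if o > y else "younger"
    print(f"weight={weight:.1f}: older={o:.1f}, younger={y:.1f} -> organ goes to the {winner} candidate")
```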

And part of what's illustrated by that graph is not only the impact of this particular plan, but also the broader idea that it's important to forecast what the consequences of a proposed change in a high-stakes algorithm are going to be, and that it's possible, through things like graphs and visuals and plain-language explanations, to create a world in which people can understand what's really at stake. So, the LYFT proposal was the main idea that was under debate for about the first half of the period studied in the book, from 2004 to 2009. And then they moved on after that, because it was rejected, partly on the strength of objections like Clive's in Dallas, but also because many other patients and advocates and others said this wouldn't be fair. And so there was a switch to a second idea, which was that we could preserve everyone's chance of getting an organ at roughly the same level it had been before, but essentially remap which organs go to which recipients and give the youngest and healthiest organs to the youngest and healthiest recipients.

Thus recapturing many of the overall welfare gains without leaving anyone further out in the cold than they had been before. For reasons that are too complicated to get into here, this was seen as violative of the Age Discrimination Act, and there was a third proposal, which basically ended up as a messy compromise between the two sides. So, on the one hand, it didn't profoundly alter the overall likelihood of getting an organ by age, but it did increase total benefit, and it did increase the extent to which the system was oriented toward maximization of benefit as opposed to giving everyone an equal chance. After the system was implemented, we saw a lot of analysis and tracking of what was going on. One of the huge problems in the transplant system is the extent to which non-white candidates, and particularly Black candidates in the United States, where the baseline level of kidney failure is between three and four times the rate in the white community, are disfavored in allocation.

One of the problems in the old system, or one of the unfair things about it, was that you would get priority on the waiting list from the date when your doctor first added you to the list. So, if you had superior access to care and were being seen earlier in the course of your disease, you'd have a higher priority, because you would get on the list earlier. You do have to be sick in order to get on the list, but still, what was happening was that many people who were more closely followed, which is, by and large, a whiter group, relatively speaking, were being added to the list earlier. So, what they did was to say: instead of counting priority from the date that your doctor adds you to the list, as soon as you need dialysis, you get that date as your priority date for waiting time for transplant, even if you don't in fact join the transplant list until later. So, the result of this was that it essentially made the calculation of waiting time more equitable.
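
The waiting-time change is simple to state in code. The sketch below is a hypothetical illustration, not the actual UNOS implementation: under the old rule the clock starts when a transplant center lists the patient, while under the 2014 rule it is backdated to the start of dialysis, which narrows the gap created by unequal access to early listing.

```python
# Hypothetical illustration of the 2014 waiting-time change.
# Old rule: waiting time accrues from the date the patient was added to the list.
# New rule: waiting time accrues from the date dialysis began, even if listing came later.
from datetime import date


def waiting_time_days(dialysis_start: date, listing_date: date, as_of: date, rule: str) -> int:
    if rule == "old":
        start = listing_date
    elif rule == "new":
        start = min(dialysis_start, listing_date)  # backdate to dialysis start when it came first
    else:
        raise ValueError("rule must be 'old' or 'new'")
    return (as_of - start).days


# A patient with poor access to care: dialysis began long before a center listed them.
dialysis_start = date(2015, 1, 1)
listing_date = date(2018, 6, 1)
today = date(2020, 1, 1)

print("old rule:", waiting_time_days(dialysis_start, listing_date, today, "old"), "days of waiting-time priority")
print("new rule:", waiting_time_days(dialysis_start, listing_date, today, "new"), "days of waiting-time priority")
```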

And what you see here is, for wait-listed people of different races, what's the rate at which they're getting transplanted? And what you can see is that after the 2014 rules change, there's a convergence depicted, which implies that, at least in this period, there was a reduction in overall inequity by race. There were many other changes that we could also talk about. But I want to just give a flavor for the way this process worked, such as I can in the few minutes that I'm sketching this out. Of course, I didn't study this out of a preexisting interest in clinical medicine, but rather because I hoped that it could inform us about how to do public debate and policymaking about high-stakes automated decision-making better in the future, in other domains. So, my pitch to you as attendees at this event is that there's stuff you can learn that we can, as it were, transplant out of medicine into criminal legal system reform, into welfare, perhaps into hiring, and into other kinds of high-stakes systems.

So, at this point, you're probably wondering what I think those portable lessons are, and I'm going to quickly go through six of them just at a high level, and then I'll get us rolling on the discussion here. I'm as eager for that as anyone else present. So, the first is: algorithms shift our moral attention. What do I mean by this? The balance between maximizing total benefit and giving everyone an equal chance was something that was easily altered through a change in the parameters of this algorithm that allocates organs, and it became a huge focus. But as any STS scholar (science and technology studies) would tell you, around any technical system there are social surroundings that mediate how that system actually impacts the world. That's very true here. Here are a couple of things that were not on stage but were vitally important.

Number one, there are arbitrary geographic zones within which, in this time period, people's organs are being allocated. So, if you live in one state versus another, there could be a manyfold difference in how likely you are, given the same clinical presentation, to receive an organ. But those zones, the idea of changing them, was not on the table. And in fact, if you go back as I did and look at the debates and the documents, which are voluminous, they scarcely mention geography, even though that's a major driver of inequity. Another one: this whole conversation is about who gets from the waiting list to the operating room to get a transplant. And that is very important to do in an equitable way. But a question that is not addressed in that is who's on the transplant list in the first place and how do people get there? And how many of the people, and which of the people, who are clinically suitable for transplant are even listed?

And there are a whole bunch of factors there, social barriers. If someone is likely to have trouble accessing care after they get a transplant, then the transplant center may not want them on the list, because if transplanted, they would be more likely, or would be perceived, perhaps inaccurately, as being more likely, to fail to maintain that transplant and have a bad outcome that would hurt the center. Anyway, there's a lot going on that's not on stage, is the point here. Second, briefly: participation creates opinions. Meaning that when I began this project, I thought of public input as a raw material that one would go out and gather, the way you might mine coal or something, and use to create this process where everyone's views would be measured. Maybe a measurement problem would be another way to characterize the earlier way that I was thinking about this.

And in fact, what happened here was much more human and more interesting: people gradually influenced one another's beliefs about what would be fair, or would at least be an incremental improvement. And the process not only reconciled, but changed, people's positions about what the right thing to do would be. Oh, and okay, thirdly: shared understanding needs shared infrastructure. What do I mean? If we don't want to live in a world in which technical experts make the moral choices for us, then we need there to be a way for people whose expertise or whose perspective comes from somewhere else besides being a technical expert.

We need there to be a way for those folks to participate. And even if there is perfect transparency, and this comes to an issue that I know is of interest at the Knight Foundation as it is in many other places, what does it mean to meaningfully have an informed debate? People sometimes say, and lest I be perceived as casting stones, I myself earlier in my career argued thus, that if only the underlying data from policymaking processes were more public, then there would be more participation, there would be a more inclusive process, and we would get to wiser and more legitimate answers. But in fact, this process worked to the extent it did because there were people out there making plain-language summaries. There were people out there making graphs; there were annual audit reports that had detailed, easily understood descriptions of how this system was working.

And by the way, there still are, and that actually was done by a separate body, as I mentioned earlier, not the people running the software. So, my point here, more generally, is that it takes resources to do those things, and we should be clear-eyed about that: when it's important enough that we really want participation, we should expect to invest to make that possible. Fourth: deliberation is costly, and its details matter. What I mean here is that when you take an empty chair and point to it and say anyone can come in, or impacted people can come in and sit at the table and weigh in, that is opening up a possibility and, in the best or idealized case, it's a kind of sharing of power. But at the same time, it also imposes a burden on people to show up whenever the meeting is, wherever it is, and to devote their time to thinking through the issue.

And particularly if the input that is sought comes from people who are in some way marginalized or resource constrained, it often becomes hard for them to participate in this civics fable because, well, for whatever reason we're concerned about their wellbeing to begin with, they don't necessarily have the resources to fully participate. And so in some other examples that I talk about in the book, like participatory budgeting in Brazil for instance, there have been positive experiences with investing in things like childcare during evening meetings, or food if it's at the dinner hour, so that people can come in and participate in a way that works for them. And this is something I would say the transplant world did not illustrate a particularly great way of doing. Fifth: quantification can be a moral anesthetic. What I mean by that is it takes messy and deep human realities and sort of abstracts them away.

And we end up in a world where we're saying Alice gets the organ and Bob does not because Alice has more allocation points. And it seems like a math problem instead of seeming like a cruel choice in which, inevitably, someone's not getting an organ. And this, by the way, arises not only in kidneys, where the alternative is to be kept alive by dialysis, but also in hearts and lungs, where the alternative often is to simply die and you can't save people. And I think it can be a bad thing to abstract away that humanity, but it also may have a constructive role to play in some instances. I mean, anesthesia of the regular kind is also something that, while prone to abuse, we're generally glad we have access to. And as I once heard in a discussion around compassion, you don't necessarily want a surgeon who feels your pain.

You want a surgeon who's completely focused on a successful surgery. And so there's this question of when and where do we open our hearts and get into the human suffering, and how do we reconcile ourselves to the fact that there's far more such suffering than any one person can fully resonate to in any one moment? Quantification is, descriptively speaking, part of how we navigate that impossible situation. And I think there's a lot to be explored about how to use that. And lastly: knowledge and participation don't always mean power. In the case that I studied, we had a great participatory process that resolved some important moral questions, but other key questions were later resolved by a court acting in just a few days. So, whatever new institutions or practices are created, we should bear in mind that they're subsidiary to the existing infrastructure of governance and power. And with that, I'll conclude.

Katy Glenn Bass:

Thank you, David. All right. I want this to be as much as possible a conversation between you, the experts who have thought about this far more than I have, but I will guide us along. This book is so wonderfully written, and it's about this kidney allocation process, but it's also about everything that has to do with algorithms and governance and human problems inside of computer problems. So, I'm going to start with your point number five, and then I also want to save some time to talk about this participation question and participation versus power. But to begin with the point you made about quantification as a moral anesthetic, this is an observation that I think a few of the panelists also made. Arvind noted that this comes up over and over again in algorithmic decision-making, this attempt to take the politics out of policymaking, which he thinks is the source of much of the appeal of machine learning. And Deb, I know you've written and thought a lot about similar issues, so I'm wondering if you could expand a bit on that point and then maybe David and Nathan can respond.

Deborah Raji:

Thank you so much, first of all, David, for that great overview of the main themes in your book and its main conclusions. I appreciated that. Yeah, I think one distinction that I reflect on (I know we're talking with the Knight Foundation, and you guys have done a lot of work on online platforms and online platform governance) is that in this realm of automated decision systems, systems that are typically used by institutions to make decisions on behalf of a separate impacted population, there's another layer of challenges involved, where the users of these systems, so if you think of hospital workers and clinicians or doctors or hiring managers, will make use of these systems to make decisions about separate impacted populations. So, that's the students, the patients.

And so as a result, these algorithms can become leveraged by institutions and individuals who actually have the decision-making power to make decisions about those that might not even have any visibility into the fact that the reason they were denied rent was because their landlord made use of an algorithm, or the reason they didn't get this job was because the hiring manager made use of an algorithm. And there's been a lot of great work written on this. I know Arvind has been starting to talk a lot about this. Virginia Eubanks has done a lot of work discussing this. Ben Green has discussed this use in the public sector very heavily. And now David's book really explores this theme very thoroughly as well: the idea of people leveraging these systems in order to effectively set new norms for how their decision-making is going to go.

And there's so many ethical dimensions and moral dimensions to these decisions. I think David's book does a really good job highlighting just how high the stakes can be, literally life and death. And typically with these decisions when you have human decision makers involved, they're accountable in some traceable way for the decisions that they make in the sense of if an individual makes a poor decision or a group of individuals follows a policy that's poorly constructed, then we can point to that and we can hold an individual accountable at least legally. But with the algorithm, a lot of this is just delegated to equations. And although the decisions are still being made and there's definitely responsibility on those that construct these algorithms, they're making decisions about the data, they're making decisions about how exactly they implement these algorithms, who they're going to deploy these systems on.

They're making a lot of decisions as they build these systems, but it's so much easier to point to the algorithm, which is sort of this disembodied decision maker than take responsibility when that algorithm is involved in the process. And so you can see that with this case in the kidney allocation scenario, but so many others as well, of just opting to leverage the use of an algorithm as a way to sort of absolve decision makers of responsibility in really critical situations. That keeps coming up as a theme in so many scenarios, and it was sort of beautifully highlighted in this book.

Katy Glenn Bass:

Great. Nathan, David, anything you want to add to that?

David Robinson:

Maybe I'll just briefly say that one of the pieces Deb mentioned a moment ago that really rings in my ears in particular is this question about the relationship between this technology and what the norms are among the humans who surround it and are involved in creating it. And I think one of the most interesting things about this story for me was the way in which the participants illustrated norms that are different from what often happens, where technical people sort of end up just deciding, which is the picture I think we often have in these situations. And in this case, the data scientists, in at least some important instances, took pains to say, okay, this seemingly technical decision is actually a moral choice that belongs to a wider circle. And I think that's part of what I would hope to see more of elsewhere in the world.

J. Nathan Matias:

Yeah, I would add something to this. First, I want to thank David for diving into some of the details in this talk. If you haven't read the book, it's also a gripping story. It's a story about people who were fighting for their lives, about scientists who were trying to find ways to discover breakthroughs and sometimes flying across the country just to carry some chemicals in a bag so they could try some new medical treatment that might make a huge difference in someone's life. Or a young girl who became a case in point in the media, whose hospital bed thousands of people visited as she was waiting for the clock to tick down on whether she would get access to a kidney. And so I really loved, David, how you balanced the human story and the technical and procedural details in the book.

And it had me thinking about what it would look like to live in a world where we prioritized moral thinking and moral emotions to the exclusion of systems. And we actually live in that world with GoFundMe medicine. I was looking at an academic paper by Mark Igra and others finding that, during COVID, more than 40% of COVID campaigns in the US raised no money at all, and that 1% of campaigns received 25% of the resources. So, you have cases where humans, engaging our moral reasoning, our moral emotions, hearing about stories, are allocating resources and gifts to the people that we individually and collectively think are most deserving, and producing huge biases and inequalities and discrimination in how that resource allocation is happening. And so I think, David, your book has helped me think about how we hold those things in tension: that our moral emotions don't necessarily steer us in ways that are fully beneficial for society, but similarly, systematizing things to the exclusion of those moral considerations can be dangerous in the ways you've outlined as well. Deb?

Deborah Raji:

Yeah, I have a quick response to that. I was going to say I actually really resonated with that point as well. I think something that struck me is what happens when you turn something into an algorithmic system, when you sort of delegate that moral decision-making, because it was introduced as a mechanism for reducing bias, a method of neutralizing the individual judgments we each have and the ways those can definitely steer us wrong. And you hear similar arguments, for example, for the introduction of hiring algorithms as a mechanism to neutralize the individual biases of a hiring manager. I think one challenge here is the inflexibility of the algorithm, which we did see in the book as well: the idea that when you turn something into a more fossilized system, then there are these exceptions that people, us as humans, can see: oh, this is a little girl, she deserves this kidney.

The exceptions that we recognize, or that we might hold a lot of empathy for, the algorithm doesn't see as an exception or as a reason to bend the rules. And so that inflexibility definitely comes up and shows up, and you can see that multiple times. But I also think there's the reality of when we design an algorithm from the perspective of one particular group and that gets scaled to the entire system at the national level. David, you had kind of alluded to some of the geographical disparities, but I even thought some of the racial disparities and age disparities reflected the fact that this is now a set of heuristics that is being massively deployed at a scale that impacts so many people, even though every single case is unique in its own way and localized in its own way.

And so I think that that's just another side effect of turning something into an algorithm. But yeah, I agree with what you mentioned, Nathan, of just the reality of the fact that the reason these things are introduced in the first place is because we as humans have all these individual biases and heuristics that we use to make these decisions, and that needs to be sort of systematized in some concrete way. And so that's why they even get introduced in the first place. So, yeah, it's a weird tension that is really difficult to make a decision about or to work through in various scenarios. Yeah.

Katy Glenn Bass:

Great. I want to talk about participation here, and how it was done in this case, and other ways that you all have seen it done, for better or worse. So, David, I'm wondering if you can tell us a little more about the way participation happened in this process. How were the decisions made as to who got to be in those meetings? How successful do you think they were at achieving an actually representative sample of the people who were involved in this process in different ways? And then I think Nathan also had a question about knowledge sharing within that committee. Did people serve only one role, so that if they were the patients, they just talked about what it was like to be a patient, or were there efforts at citizen science or knowledge gathering and sharing that went in multiple directions?

David Robinson:

So, there are a lot of different perspectives on the participation that takes place in transplant policymaking. For other reasons, the nonprofit that runs the transplant system, the United Network for Organ Sharing, has been on the Hill recently in oversight hearings, because there's been some downtime of the system and some other technological things, and people are debating changing it. And that has meant that some of the dirty laundry of their executives emailing each other has become public, including one email from their current CEO where he analogizes the public participation to putting your toddler's crayon drawing on the wall: that it's a performative exercise that has no real impact on the actual outcome. I am, of course, deeply troubled by that, but I'd be even more troubled, and would see it as potentially a description of what actually happens.

Except that in the case I studied, the experts wanted one thing and got something profoundly different. And many of them are still, I'm trying to pick a gentler word here, angry that the system didn't end up maximizing total benefits. So, I mean, I think if you're looking for an acid test of whether a participatory process worked or not, you can say: well, before the participants, which is a sort of othering term for the outsiders, I think, became involved, what was the plan, and what ended up happening, and did it make a difference? And here it did. But the outreach was not systematic. There was no systematic citizen science or anything like that.

Even on the question of who showed up: although these meetings were open to the public, there was a prominent role of accident and social networks in who ended up in the room, as well as, of course, a moderating factor of who's in a position to fly across the country to some hotel room and say their piece. And so Clive Grawe, for instance, was active in a patient advocacy group for people with Polycystic Kidney Disease. That's how he found out about the meeting. Another activist that I talked to, it turns out one of the policymaking physicians happens to be his nephrologist. So, his thing was, my doctor's on the policy committee and mentioned it to me, and I got involved as a patient. So, it's very haphazard, and I wouldn't hold up the outreach as a model necessarily.

Katy Glenn Bass:

Nathan or Deb, do you have any reactions?

Deborah Raji:

I have a minor reaction, which is that I think the notion of participation and the quality of participation is a huge topic; we could go on for a while about just how difficult it is to get representative participation, to avoid things like tokenization. I think something else that was touched on in the book, in this particular case as well, is just how, in the desire to bring in as many diverse perspectives as possible, the complication of the problem can also really intensify. And so that could slow down processes in a way that at times can be necessary, but at times can also cause very serious delays in terms of getting to particular outcomes. And I thought that was just an important, very grounded reflection on the process of actually including as many voices as possible.

Something that I've been thinking a lot about lately: there's an article that Mona Sloane and Emanuel Moss wrote called Participation Is Not a Design Fix for Machine Learning, because there's now a conversation in machine learning circles of just trying to include as many perspectives as possible as you're designing these things. And I think that that is definitely moving us towards a better set of systems, a more inclusive design approach. But they talk about how it's also very easy to fall into the trap of taking one patient to represent all patients, and so missing out on the complexity of the problem and getting caught in the weeds of it. As I'd mentioned, a lot of this conversation is really parallel in the hiring space, where people also introduced algorithms as an approach to systematizing decision-making, especially for jobs that have very high turnover. So, think of a firm that's hiring a lot of telemarketers, for example: how do we automate that process so that there's less bias from the HR managers and things like that?

And I found that in that literature, they've sort of shifted from thinking about participation at the design stage of things, which became a very complicated question, to enabling third-party access for auditing and for third-party questioning of the use of these systems and the design of these systems. And I don't know if that's the right approach. I think I'm still reflecting on the details of how all of this would play out, but I think there are other ways to include outside perspectives beyond just factoring them into the actual development and creation of the algorithm's initial heuristics: also having them be able to have the information they need to push back on decisions that are made specifically about them by the algorithm, but also collectively, if there's some particular attribute or characteristic of the algorithm that factored into their decision that they disagree with, or that, as a collective, those who are impacted are beginning to disagree with, allowing them to push back and to vocalize that disagreement in a way that becomes consequential and becomes impactful.

So, in this case, you mentioned Clive feeling very strongly about the race attribute factoring into the decision-making for the kidney transplant. I think something like that is something that we would need to see in other spaces: just allowing for that visibility into what's actually going on with the algorithm. The fact that an algorithm made the decision in the first place, I think, is the floor level of visibility required. And then also enabling that visibility to lead to those that are impacted being able to make judgments about the quality of the decision-making that's being made about them by these algorithms, and being able to effectively push back on or question those judgments in a really meaningful, consequential way. I think that that's a form of participation that's often overlooked in these conversations, but might be a helpful counterweight or alternative to simply attempting to capture as many perspectives as possible to get it perfect the first time. So, that's just a thought that came to mind.

Katy Glenn Bass:

David or Nathan, you want to respond?

J. Nathan Matias:

I think that was a great summary of that debate.

Katy Glenn Bass:

Have you, Deb or David or Nathan, have any of you seen promising proposals to that end, sort of allowing communities to be engaged at other points in these processes and to push back on decisions that get made?

David Robinson:

Yeah, I think there are participatory things that originate from the ground up. I mean, at some level, when we say participatory, there's maybe a tacit subtext that what we're talking about is people in power setting up a room and putting some chairs around the table. But of course, many of the advocacy traditions themselves are profoundly participatory, and they begin, as it were, from the ground rather than from the top. So, I think of the disability rights community, Nothing About Us Without Us. It may not have been the United States Congress's intention that disabled people be very deeply involved in the creation of the Americans with Disabilities Act in 1990 and before, but they were, because it was their intention that they participate. And so I think there's a lot to be learned from that. And I also think participation can take many forms. And maybe to pick up on a question that Nathan and I were talking about over email a little bit earlier: you might say, look, I know what the right answer is.

For example, someone in a progressive context might say the right answer is we should abolish pretrial detention, for instance, something I've worked a lot on. And then they'll say, well, for policymaking, impacted communities should have a voice in this. And there's a tacit premise that if impacted communities are given a voice, then the correct answer, which is to abolish or greatly reduce incarceration, will be reached. And it's interesting to wonder, well, okay, if I've got a mind's eye view that these folks will participate and that answer will emerge as the right answer, which of those things is actually the thing I'm most committed to? Do I think a democratic process makes an answer right because people were listened to? That's really a proceduralist view.

Or do I think, look, democracy is handy sometimes because it gets us to the right answers, but we know what the right answers are, and the important thing is getting there one way or the other? And I think the more there's practical implementation of these kinds of ideas, the more often people are apt to be confronted by the imperfect match between the civics fable that they envisioned and the actual truth when people get in a room and the results are messy. For instance, some of the research that I cite in the book says that, unlike the medical system, people in general, on average when surveyed, are eager to punish, for example, former alcoholics in the allocation of livers: even if those people are going to have better outcomes, it turns out the person on the street wants to deprioritize them. And I'd like to hear what Nathan thinks about how those should balance, or how we should think about that problem.

J. Nathan Matias:

Yeah, I guess I'm in the queue then to take that forward. A few thoughts. First, David, you have a really fascinating example in the book about post-design interventions from people directly affected by this algorithm to change it. And this is the story of Miriam Holman, the young woman who was on her potential deathbed. And there's this question, is she going to get an … In this case, she has a lung disease, and the question is, is she going to get a transplant? And in this particular case, there were people who cared about her who were willing to organize a lawsuit and other things to try to change the allocation of lungs in her case and maybe more broadly. And I thought that was actually a really thought-provoking story for how one system, the things that had been done with the kidney allocation system, matters not only for kidneys, but has come to mean something more for how the American health system thinks about the allocation of scarce transplant resources more generally. So, that was something I actually found really helpful for thinking about those kinds of post hoc processes.

More generally, it's hard. I, at least, am someone who is committed to democracy for all sorts of reasons we could talk about later. And my hope is that, on average, with the right kind of support, we can steer it in beneficial ways. And one of the things that you touch on, David, in the book is this idea of what ethicists call principles of ethics: that sometimes we need guides for having these conversations and thinking about the trade-offs. That yes, we want to have the right people in the room, we want to have the right expertise and knowledge, and then we also need the right framing in terms of discussion. At the beginning of your talk, you showed us this incredibly loaded image that created a scale between utility and equity. And it was one of those things that says, well, we should talk about utility. This is a moral concept that we can think about. We can fill in our own views; we can ask what our position on utility is.

We can think about what we mean by equity, how we articulate our views. And I wonder if one of the important points in this work is that, just as quantification can hide a bunch of decisions, how we define the moral terms of the debate is also incredibly important. When done well, those definitions give us valuable guardrails and guidance for having a shared conversation about these issues: What do we mean by justice? What do we mean by equity? But when done poorly, they can silence certain voices and steer people towards a conversation that actually doesn't serve the common good. So, that's another thing I've been thinking about while reading the book: who defines not only what we mean by a life here, but also what we mean by, say, equity.

David Robinson:

If I could respond directly on that last point and also pick up something from the Q&A in the audience: I see that someone has asked what are the deepest insights that I've had on reducing the cost of deliberation or improving its quality. And I think one of the things that comes up is this framing question: okay, what are the terms of the debate, and how are we going to analyze things? That's expensive work, to do all this data analysis and gathering and synthesis. And lots of people who have a stake in the outcome don't have the resources in their back pocket to replicate that analysis themselves, or to conduct separate analyses that are going to point toward other moral principles. And so, as a practical matter, whoever does that analysis: for example, there's an access to transplant score, an analytical tool that the United Network for Organ Sharing created in order to compare the relative effect of different factors that are supposed to not matter, like geography or gender or race or other things.

And that's one way of looking at these factors that shouldn't alter people's chances but do, and to what extent they do, and comparing their magnitude, and so on. And that's a very loaded thing to do. But if nobody does it, then we're not really equipped to have a conversation. And so even though that's a powerful position, I think my deepest insight about reducing the cost of deliberation is that if you think about the quantum of how many deliberated decisions there are going to be, whether it's organs allocated or jobs decided or whatever it is, then if you're able to amortize your governance effort across a larger number of decisions, you have more resources to do the governing with. And that can be a powerful advantage. And so even though the analyst has power to shape the discussion, this story left me feeling like centralized analysis, where someone does the plain language: okay, what does this mean?

What would it do? Somebody does that and publishes it, and we all pay for that effort to be centralized. Of course, I think people should still have access to the underlying data so that when they want to, they can replicate the analysis, or can say, oh, you've done this wrong, and really the key question is over here on the side, and your C statistic is too low, and so on, as people in fact did in this case: academics took the modelers to task on certain points. But I think centralizing that, and providing the resources to do that analysis so that the barrier to understanding the system and understanding the moral stakes is low, is in general a net good.

And now you could say, okay, who gets to do that modeling, and who gets to decide what the modeler's incentives are? And you can kind of regress that back and say, well, I think at some point, somewhere, there's someone who has expertise that the rest of the community does not, who acts with a kind of fiduciary duty, ethically if not legally, to be a fair-minded synthesizer of what's at stake. And although material incentives are vital, I also think there is an ethical piece, and that a culture of moral humility, of someone saying, I'm not going to try to prejudge this, is also one of the ingredients that we would want.

Deborah Raji:

I just wanted to also comment on something else, which is another comment that came up in the questions. Someone mentioned the idea of incorporating feedback at a wide range of access points, pretty much. So, how do you account for feedback, a wide range of feedback? I think on one dimension, like I mentioned, there's definitely the question of participation before the model's designed, as the model's being designed, and after the model's already deployed, allowing for recourse and pushback and auditing, and the expertise required for that. But then there's also the question of, maybe patients are not actually going to have the most informed perspective on which features to include in this organ assignment algorithm. And so what their feedback is actually most useful for is just understanding the nature of the impacts and the nature of the harms, and helping to taxonomize something like that.

The example that was given in the question I see here is around how patient feedback led to the removal of the race adjustment in the CKD equation. And I think that that's a great example of just how interacting with various stakeholders and going through the participatory process doesn't necessarily have to directly inform the design in the way that an engineer would give feedback on that model, or the way that we would ideally want the patients to list out the ideal set of features for the model; they might just be able to describe how it impacts their lives and what worries them or what concerns them. And that perspective could just be informative in and of itself about the ethical challenges and the harms involved. And I think that having a richer vocabulary around that dimension of things is also pretty valuable. And they're definitely experts in terms of describing how things impact them, ideally.

Katy Glenn Bass:

Yeah.

J. Nathan Matias:

So, would you say, Deb and David, that maybe the hidden expertise in the success of this story is not in the statistics or in the surgery researchers or necessarily even the lawyers, but in the facilitators and organizers who did the work of bringing people together, helping people understand each other, and making this process go? Is that one of the pieces, maybe hidden and under-acknowledged pieces, of expertise needed for this kind of work?

Deborah Raji:

And identifying the right questions to ask which stakeholders, I think, is another key piece of that. Yeah. I'm curious what David thinks of that.

David Robinson:

Yes, I strongly agree. There's a lot of that work that happened. It was largely invisible, which meant the historical record that I was reviewing, which was in other respects extensive, was pretty thin on that. And it's definitely something that, in a Platonic ideal where I knew it was going to be this book when I started reading the stuff about transplant, I would've gone back and pushed harder on. But I think the fact that it's so thinly documented speaks exactly to both Nathan's and Deb's point here, that it's not valued in the same way. And even the word expertise maybe is slanted away from some of the things that we need in order for meaningful participation to happen.

Katy Glenn Bass:

Okay. I'm going to try to get one last question in under the wire, because we are very nearly out of time. And it's on David's point number six in his presentation, which is that knowledge and participation don't always mean power. So, we've mentioned this a couple of times in this discussion, but there's this massive geographic disparity in the allocation of transplant organs, of kidneys, here. And the committee essentially decides not to touch it. My sense is because there's an understanding that it can't be changed, that the system is just the way it is and they can't do anything about it. And then towards the end of the book, we get introduced to Miriam Holman, who is a woman in desperate need of a transplant and who files suit arguing that this geographic disparity should not be allowed. And suddenly, with this system that people thought could not be changed and there was no point in trying to fix, within days of this lawsuit being filed there are federal government levers being thrown all over the place, attempts to change rules, and letters being sent.

And I was quite unsettled by that scenario. I mean, the story is a horrible one, and all of these decisions are horrible. But just the fact that you have this long participatory process, and then you file suit, you pick a different tool to use, and suddenly you get movement immediately. And as David points out in the book, that can happen in lots of different ways after these participatory processes, in some cases undoing progress that has been made by those sorts of participatory committees. So, I'm just wondering, David, maybe to close us out, can you talk a little bit about your thoughts on that and the committee? I mean, what was the reaction among the people who had been involved in this process to that litigation and to the changes that resulted?

David Robinson:

So, because this was about lungs and kind of came in from left field after this big, long participatory debate, it was actually late in the game for me, research-wise, when someone even mentioned this. And then I was like, hey, wait a minute, this is a very different story than the tidy participatory victory that I thought I had. And I think that a lot of people in the clinical, policymaking world were put off by this. In this case, personally, substantively, I think it was correct, and that the geographic zones were politically and morally dysfunctional. And I'm glad that they're going away. But I think that in general, the model of, have a family friend who works for the foremost litigation boutique, this was like, oh, someone from Boies Schiller is taking this on pro bono, and they're going to go to a federal district judge and give a 300-page crash course in organ transplantation.

And by the time you're done reading the 300-page book that Boies Schiller has written for you, of course you think you need to do, immediately and urgently, what they say you should. And because it was so fast, it was also ex parte; they came in and asked for a TRO originally and then … Anyway, procedurally, these aren't the procedures that you would think would categorically tend to give you the wisest or most careful outcomes. But we all negotiate in the shadow of the political realities. And one of the things that's curious to me is that the recourse to courts doesn't happen even more pervasively than it does; that's just a question of the popularity of litigation and of hard power, of hardball as it were. And what controls that is something I've often wondered about but don't have a lot of wisdom on: to what extent are people willing, and when, to really play hardball?

Katy Glenn Bass:

Yeah, maybe that's another book. All right, we are out of time, unfortunately. But Deb and Nathan and David, thank you for a really wonderful discussion. David, thank you for writing this excellent book. And on behalf of everyone at the Knight Institute, thank you to the audience for joining us for this. I really enjoyed it. And we'll see you all soon, I hope.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
