
Critical Perspectives on Ethics in Technology

Justin Hendrix / Apr 7, 2022

Audio of this conversation is available via your favorite podcast service.

Last year, the Journal of Social Computing published a Special Issue on the subject of Technology Ethics in Action. The special issue was the product of the Ethical Tech Working Group at the Berkman Klein Center for Internet & Society at Harvard, which was cofounded by Mary Gray and Kathy Pham. The ideas in the special issue span a range of critical and interdisciplinary perspectives, with essay titles ranging from “Creating Technology Worthy of the Human Spirit” to “Connecting Race to Ethics Related to Technology” to “The Promise and Limits of Lawfulness: Inequality, Law, and the Techlash.”

For anyone interested in the subject of technology and ethics, whether you are teaching the subject, working on applications in industry, or a student eager to learn, I recommend downloading the special issue.

To learn more about the ideas in it, I spoke to its editor, Ben Green. Ben is a postdoctoral scholar in the Michigan Society of Fellows and an assistant professor at the University of Michigan's Gerald R. Ford School of Public Policy. His Harvard PhD is in applied mathematics, with a secondary field in science, technology, and society. He studies the social and political impacts of government algorithms, focusing on algorithmic fairness, smart cities, and the criminal justice system. In 2019 MIT Press published his book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. Ben is also an affiliate at the Berkman Klein Center for Internet & Society at Harvard.

What follows is a lightly edited transcript of our discussion.

Justin Hendrix:

So the first thing I always do is just ask folks to state their name, their title, and their affiliation.

Ben Green:

I'm Ben Green. I'm a postdoctoral scholar in the Michigan Society of Fellows and an Assistant Professor in the Ford School of Public Policy at Michigan.

Justin Hendrix:

And you are the editor of the Special Issue of the Journal of Social Computing which is called Technology Ethics in Action: Critical and Interdisciplinary Perspectives. What led the Journal of Social Computing to hand over the editorial keys to you for this issue?

Ben Green:

Yeah, so this issue really emerged from a several-year-long process, even dating back to before we were involved with the Journal of Social Computing. Everyone who was involved in this was part of what was described as the Ethical Tech Working Group at the Berkman Klein Center for Internet & Society at Harvard. And starting around 2018, there was a large group of us, which included other folks even beyond the contributors to this issue, who were just having early stage conversations, trying to come together across fields and think about what is this nascent energy around technology ethics? What does it actually mean in practice? And what are the limits of these efforts? What is our skepticism around the ethical language that many different actors are adopting? And so from that, we decided that one exciting output to create would be this issue that really brings this combination of critical and interdisciplinary perspectives together, rather than trying to say, we're going to create a single manifesto from these 10 or 20 different people.

We wanted to bring that sense of different perspectives and different backgrounds to life with a special issue. So really it was a several year process of developing and curating articles and developing ideas and even seeing how our ideas shifted over the last two or three years in terms of the different developments around Tech Ethics since it's such a rapidly evolving area, but we put it together and then we approached the Journal of Social Computing. We found a really great partnership with them as a journal that was looking for this type of work and was really excited about the particular style and approach that we were taking.

Justin Hendrix:

So there are some great essays in here with titles like "Creating Technology Worthy of the Human Spirit," "A Lightweight Design Method for Sociotechnical Inquiry," "Algorithmic Silence: A Call to Decomputerize," just a real range. Can you give me just a little bit of an overview of what are some of the topics that people have taken on?

Ben Green:

Yeah, so it's a broad range, and the issue is loosely grouped into two different sub-issues. The first is really looking at Tech Ethics itself, right? What's the evolution of this area? What are the different limits of Technology Ethics as it's come about? Jasmine McNealy’s article, for instance, looks at the different language that companies and other actors use and how language such as “users” and “human centered” can sound ethical but ultimately obscure some of the deeper political questions that lie under the surface of that type of language and of the product. We have other articles, like Lily Hu’s in that same part of the issue, looking at the power relations and who gets to shape Technology Ethics, and how Technology Ethics might be, in some ways, falling into similar traps as human rights did during the 20th century: presenting a certain backstop against certain harms and a certain language for ethical discourse, but ultimately failing to confront some of the deeper issues of power and the trajectory of the political economy.

And then the second half takes a broader perspective, looking at different types of approaches for regulation or design that could potentially help to move beyond the existing frame of Technology Ethics and create more just and equitable forms of technology. So Luke Stark's article, for instance, about a lightweight design method, proposes an hour-long workshop that could bring designers and policy makers and social scientists together to think, in a somewhat lightweight, at-speed way, about how to incorporate values into design. Aden Van Noppen talks about the role of spirituality and spiritual leaders as an important frame of reference for ethics, and how some technology companies have attempted to bring that type of perspective into their conversations about design and ethics.

And then Joanne Cheung has an article looking at the political economy of social media, thinking about the relationship between the focus on the design aspects of social media, things around the algorithm and the like button and so on, and the broader backdrop of the political economy and the financial incentives that social media companies are facing. And she draws some really interesting parallels to urban planning and urban design, thinking about similar structural issues and places we could look there for potential remedies. So that's just maybe half of the articles in the issue, but as you can see, there's a wide range of different technology topics, and then different disciplinary perspectives that the different authors bring into the conversation.

Justin Hendrix:

And you place all of this in context in an introductory essay where you talk about a “crisis of conscience” in digital technology. Can you just explain the landscape into which you introduce this special issue as you see it?

Ben Green:

It's a tough job to write the intro for such a wide-ranging set of articles. But what I tried to do was treat the intro as something where a reader could come in relatively unfamiliar with Technology Ethics and get up to speed: what are the developments, what is this particular discourse, what's been going on, and what are the limits? What are the critiques of Technology Ethics that have been raised? So first I run through some of the different areas, around algorithmic bias and privacy and so on, that have led to this crisis of conscience among many engineers and these wider calls for ethics among regulators, across the tech industry, in academia, and so on. But then I look at, as these forms of Technology Ethics have been taken up, what are some of the issues that have been raised?

And there are several that have come up. Often Technology Ethics interventions are highly focused on the individual design decisions of engineers, not really targeting some of the broader factors that are shaping what technology companies or governments are doing. So it's a very individual, design-oriented practice, often. And it often ends up being subsumed into technology company incentives and processes. So you have companies like Google and Facebook and really everyone who has these new ethics engineers and ethics researchers and ethics teams, but what ends up happening is that those individuals and those groups and those processes are really incorporated into the wider business practices. And I think there's a really fundamental limit on how much change those groups have been able to create within organizations. So there's this conflict of what happens when the ethics come into conflict with the business models. And so you have this wider language around ethics washing and the idea that ethics is a way of providing a superficial gloss over larger systemic unjust practices.

Justin Hendrix:

So in some ways, even in these companies that have established big corporate ethics entities, like responsible AI teams and the rest, it all just gets essentially subordinated to the business model.

Ben Green:

Yeah, pretty much. I think there's a variety of research and news stories talking about that. And we can see there are the high-profile cases, such as what happened with Google and Timnit Gebru. Essentially, she was pushing strongly for more critical research and more changes in products and research to really be mindful of the social impacts. And that led to really significant conflicts between her charge as the co-lead of the ethics team and what Google was actually interested in doing and knowing about and responding to. And so I think that's just one case study of this wider issue: the companies are interested in what they can do to be ethical that is relatively easy, but they're not interested in changing their ultimate business models and their plans for the next five years in order to really respond to what a rigorous view of ethics or justice or equity would require of them.

Justin Hendrix:

So you lay out what you call a sociotechnical approach to technology, and you say central to adopting that or to pursuing that is the rejection of technological determinism. What do you see as that argument?

Ben Green:

The way I try to lay out the piece at the end is not with a super prescriptive response that says, here's the frame of ethics that we should follow, or here's the alternative, or something like that, but really about how we can pursue a rigorous practice of engaging with these efforts around ethics, both recognizing what ethics can provide as a philosophical and normative discipline and being mindful of the limits of what ethics ends up being turned into or applied as in practice. And I think it's really important to hold both of those things at the same time, right? We should be mindful of all of these limits of technology ethics as it's been applied in academia and in industry. But that doesn't mean we should reject ethics entirely in terms of what it can provide as a mode of reasoning.

So the idea around determinism, as part of this larger sociotechnical frame, is to take what we know about how to think about technology in context as a tool and apply that to thinking about Technology Ethics in its context. In the same way that STS and other scholars have talked about rejecting technological determinism, the idea that technology acts as an autonomous force that drives society and changes things for the better on its own, we should have a similar attitude toward Technology Ethics and think of Tech Ethics not as something that can simply be adopted, as if by adopting some principles you're just, in a deterministic way, going to have an ethical organization or ethical technology. Instead, what this tool of Technology Ethics actually does in practice depends on the context in which it's embedded and the incentives of the different actors who are involved in shaping it.

And so that's the idea of what I propose as a way of thinking about what Technology Ethics looks like in practice and how we can both try to pursue more ethical and just technologies while bringing a critical lens into what often happens in practice.

Justin Hendrix:

So the cynic might read this essay and some of the others and say, well, there's no chance we're ever going to overlay any sufficient ethical framework on corporate entities that are designed in the mode of 21st century shareholder capitalism and that need to grow and provide a return. Don't we need major regulation? Don't we need government entities and apparatus that can require this type of ethics?

Ben Green:

Yeah. I think that's absolutely right. And I think that's where the ethics discussions often function among technologists and companies as a way of preempting those sorts of conversations, right? They essentially often act as calls for self-regulation, trying to turn the issues that you've just described into technology problems that are really best served by more ethical engineering, or into process problems that can just be solved through various types of internal organizational audits and checklists and so on.

And so I think we should be incredibly mindful of that exact type of dynamic around the language of ethics. But again, ethics as a wider philosophical lens is incredibly important for thinking about what the underlying problems are and, if we eventually can go down that road, what regulation should ultimately look like.

And one of the articles in the collection, by Salomé Viljoen, actually thinks about the relationship between ethics and law and blurs the boundaries a little bit by pointing out how, in some cases, the legal responses to the failures or limits of ethics can themselves be co-opted by technology companies, and there are interesting examples. Now, as technology companies see the increasing move and the increasing energy toward regulation, they've ended up supporting regulation, but supporting very particular types of regulation that are relatively superficial.

And so in the article, Salomé talks about Microsoft supporting a facial recognition bill in Washington state that is a form of regulation, but was much less stringent and restrictive than what many of the civil rights groups and other advocates would've wanted. And so we should also be mindful not to just assume that any form of regulation is free from being co-opted in the way that the ethics response could be co-opted. So there's a larger issue around public participation and power and making sure that technology companies can't simply run the show in either of these domains.

Justin Hendrix:

So, as you mentioned, the issue covers various technologies, but you have a particular essay here focused on data science. It's called "Data Science as Political Action: Grounding Data Science in a Politics of Justice." Give us the basics of this one.

Ben Green:

So that's another contribution of mine in the collection, and it's really, in some ways, my own thinking around how data science actually needs to change. What are the recognitions that data scientists need to have about their own practice, and how can they pursue a different type of practice moving forward?

Justin Hendrix:

I love this sentence here, "In other words, we must recognize data science as a form of political action. Data scientists must recognize themselves as political actors engaged in normative constructions of society."

Ben Green:

That is really the central sentence of this argument, and it's responding to a trend that I was seeing of many data scientists working on public-oriented problems, working on algorithms that would inform pretrial decision making or welfare or other issues, often out of a genuine desire to do good. We have efforts such as data science for social good. And I think that what I really wanted to impress upon individuals doing this type of work was to recognize that the work is political and they are political actors. That doesn't necessarily mean that the work is bad or that they should disappear and not do any of this work, but it's to say, here's a particular type of orientation that you need to bring to your work and to understanding the impact that you're having.

And the main thrust of the article, particularly in the first half, is actually a discussion with a skeptic who hears those arguments and responds. The three different counterclaims that are made are all real arguments that I've heard from engineers when they are confronted with this, right? They'll say, oh, well, I'm just an engineer, it's not my job to make political decisions, arguments like that. And so what I try to do in the piece is to directly respond to those and say, okay, I hear that response, I understand why you might think that, and here's an argument that draws on some literature and examples to help articulate not just why that way of thinking doesn't fully work from an academic perspective, but also what the downstream harms can be of approaching your work while following those views of what you're doing.

Justin Hendrix:

I have students who I try to help understand how, whatever role they might take in a technology firm, whether it's in a government context, a civil society context, or a corporate context, there is that deeper layer of ways in which their work will interact with society. I love this section here where you take on this basic argument that we hear so often: I am just an engineer.

Ben Green:

I think that is really the first place that engineers start. And it's very much baked into the culture of science and engineering, and it's something that is generally shared and taught through classes and through internships and through the wider discourse of data science and other types of engineering and computer science practice. And so I think it's essential to really point out just how different types of tools are impacting society, and to really break through the idea that, well, I'm just developing something and anyone can use this tool; the tool is neutral, but the uses of it are political, and I have no control over the uses. I think really both of those things are not true, right?

Different types of applications of algorithms assume a certain type of society. They make assumptions about what types of outcomes would be good or bad and so on. But also, I think that sense of remove, of having no idea how a tool could be used, doesn't quite work. Many times engineers are developing algorithms in direct collaboration with a company or with a government agency. And so the application is quite clear, and they have to choose whether or not that's an application that they support, or whether a data set that they're using from a law enforcement agency is one that they think is a reliable data set, and so on.

So what I do in this part of the article is bring in some of the literature on objectivity and standpoint theory and really try to highlight how the perspectives of engineers do ultimately reflect different political views and different political standpoints. And again, that's not to say that they're necessarily wrong. To be political is not to be wrong or bad, but it is to be affecting the world in a way that is different from how most data scientists think of themselves as affecting the world.

Justin Hendrix:

I know you've written this in the context of data science, but some of these arguments, they're exactly the ones that we hear coming out of the mouths of the most senior technology executives in this country.

Ben Green:

Yeah. And I think there are some interesting parallels to be drawn there. One of the consistent stories about Mark Zuckerberg, for instance, and Zeynep Tufekci has an incredible article about this from a few years ago, is all the times that Mark Zuckerberg said, oh, this bad thing happened, I'm so sorry about it, I just didn't foresee this happening. How could I have foreseen that Facebook would lead to this negative use or this harmful outcome? And so it's this perfect example of what I was just talking about: the ability to say, well, I couldn't have predicted this outcome, I'm just an engineer, or, we're just creating technology, we don't decide how users or other companies will interact with that technology, is a way of distancing oneself from the harms that arise.

And I think it is ultimately an example of how the field relies on a methodology that is very narrow, because if you're developing technology to improve society, then how is an evaluation of how people are going to use or could use that technology not a central part of actually evaluating and developing this system? And so thinking about downstream impacts almost has to be a central component of doing rigorous engineering work of any sort, because the whole point of engineering is that you're having some beneficial downstream impact on society. And again, I think that these arguments from technology companies certainly come up, and being able to tap into this wider culture of engineers as neutral and removed from downstream impacts and political processes gives them cover for trying to avoid responsibility.

Justin Hendrix:

So you lay out a roadmap, a set of stages for data scientists to ground their practice in politics, and you've laid out the reason why that's necessary. Can you just quickly take us through the top-line points on that, the various stages?

Ben Green:

Yeah. My thinking around the stages is to try to think about what a realistic trajectory is for how individual data scientists might move from a place where they're making the types of arguments described in the first half of the article to the ability to see themselves as political and to think about how to shape and alter their work in light of that fact. And in some sense, this is in part modeled on my own experience and the experience of others that I've observed following through this type of process. I talk about four different stages. The first is pretty high level, just around the idea of interest, where it really starts with data scientists being interested in having some positive social impact in the world. There's lots of data science work that is quite theoretical and technical and removed from wanting to do social good.

In the article, I critique many efforts to achieve data science for social good, and the limited idea of what social good is and how this work can improve society, but I do think that interest creates a useful launching point and stepping stone for data scientists to start thinking about how we can improve society. And then from there, there's really a stage of reflection, where data scientists can be prompted to come into contact and grapple with questions about their work that are not purely technical questions, thinking about the politics of the institutions that they're with and the potential applications of their work and so on.

And I think ideally that process of reflection should be baked into renewed or more robust versions of data science for social good and so on, where education and training to help data scientists work towards beneficial social impacts should be integrated with critical literature and questions around what the impacts of these tools are, and how you can reason about having a socially beneficial impact not as some necessary thing that will happen if you develop this algorithm, but as something that's incredibly tenuous and uncertain and requires an incredible amount of work to make happen.

So then from there, I talk about two other stages. The first is more around applications, where data scientists can think about different types of applications for their work that are more mindful of which institutions are challenging power and working towards equity, rather than defaulting to partnering with technology companies or law enforcement agencies and so on. And then finally, there are broader modes of practice around participatory data science and methods that are better able to account for downstream impacts and so on.

And so the last category is a little broader, thinking more speculatively about how the field can change, not just in terms of some specific practices, but really how an awareness of data science as being political actually alters the day-to-day nature of what this work entails in a fundamental sense. I think that's a longer-term vision and something that I am excited to work towards. I laid out a number of different ideas for that, but I think that's a long-term path of trial and development.

Justin Hendrix:

What's next then for you and for the authors of this? Where do you think this will go?

Ben Green:

We're really hoping for the collection to be something that is seeded into these timely conversations and can help to provide a real touchstone for people's growing awareness of, and frustration with, the limits of technology ethics. I think that's certainly something that has been growing; we're not the only ones to make many of these points and claims. And we hope that it can be something that really shapes some of the future research and education around this. I think that many of the articles work quite well as components or readings in both undergrad and graduate syllabi. And one of the things we really looked for when we were trying to figure out where to publish this collection was a place where the whole issue could be published open access.

And so one of the great things about the Journal of Social Computing was that they published this entire issue open access. So there's no paywall, no institutional logins required, anyone can read it. And so that's really important to us in terms of thinking about the broader public conversation that we want to be able to have around bringing these different lenses into critical work on Tech Ethics.

And much of my own work is pushing beyond the argument I make about data science as political action, and those last couple of stages I was describing: thinking about how I can lay out this broad vision. How can I describe what this fundamental change in data science and algorithmic practice would look like? What are the practices that would change, and how would this alter everything about how we apply data science in different domains? Particularly for me, I have an interest in policy domains and government uses of algorithms, but how does this wider lens affect everything from how we integrate algorithms into decision making to how we define our conceptions of algorithmic fairness, and so on?

And so I think that there's a lot of work to be done to really take that argument as well as many of the other arguments from the special issue. And think about how do we push this into practice? How do we actually make change in the particular domain that each author is talking about?

Justin Hendrix:

Ben Green, thank you very much.

Ben Green:

Thanks so much, Justin.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
