Contemplating the "Uselessness" of AI Ethics
Justin Hendrix / Sep 27, 2022
Audio of this conversation is available via your favorite podcast service.
A couple of weeks ago, Mark Hansen, Director of the Brown Institute for Media Innovation and a professor at Columbia Journalism School, with whom I have collaborated and taught over the years, came across a paper titled “The uselessness of AI Ethics” in the online edition of the journal AI and Ethics.
Its author, Luke Munn, a media studies scholar based in New Zealand, points to over 80 lists of artificial intelligence ethical principles produced by governments, corporations, research groups and professional societies. In his paper, he expresses concern that most of these ethics statements deal in vague terms and often lack any mechanism for enforcement.
But in critiquing attempts at defining an ethical code for AI, Munn is not suggesting we let technology develop in an ethical vacuum. On the contrary, he wants us to think more deeply about the potential problems before deploying AI.
Munn wants us to examine the existing systems of oppression into which AI technology is deployed — a concept referred to as “AI justice”. He also wants us to avoid vague terms and instead focus more narrowly on accountability, and on better defined notions of accuracy and auditing of AI.
Luke Munn’s paper is part of a growing movement that sees the problems with AI less in purely computational terms and more as a matter for social science. Ruha Benjamin, Timnit Gebru, Joy Buolamwini, Catherine D'Ignazio, Lauren Klein and others look to historical and social contexts to ground their work, and provide tangible examples of the complexities of auditing AI deployments.
For this episode of the podcast, I turned the mic over to Mark Hansen, who spoke with Luke Munn about his ideas and how they connect to this broader movement.
What follows is a lightly edited transcript of the discussion.
Mark Hansen:
Luke, thank you for speaking to us today. So in terms of the flow of this, I thought we might anchor it around the flow of your paper. But maybe to start, I was looking at your Twitter feed-- because that's what one does these days. And I noticed you started tweeting out short videos of technical keywords that would be important to a new scholar in critical media studies. So maybe we can start our conversation with a definition.
So, your articles on AI ethics-- how do you define AI for the purpose of this discussion? What technologies are you considering, or is just about any degree of computation fair game? Is there something fundamentally hard about collecting data and wedging computation into social or political or even cultural contexts? So what-- to you-- is AI for the purpose of this talk?
Luke Munn:
Yeah, I mean, AI is a really broad term with lots of different definitions that people give. Actually, in the video series, I talk about AI first as a research field that emerges from computer science, data science, and so on in the mid-fifties or thereabouts, and then develops over the decades into different schools and things like that. I think that's quite a concrete way to think about it, about where it comes from. And then I also talk about AI as a set of techniques-- certain architectures like the perceptron and the neural net, certain techniques like backpropagation and so on that are really central to a lot of what machine learning is trying to do, or does, often. So even if we have these different schools, deep neural networks versus other architectures, a lot of them use the same kinds of techniques and architectures and models and things. So that's one way to cluster AI and think about it. And I think some of the other definitions that people use get into quite debatable territory around what is intelligence, what is agency, how is it different from other forms of computational thinking and decision making? So in some ways, it's quite useful to keep it quite concrete and also not to go into those territories.
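A minimal illustrative sketch, not drawn from Munn's work, of what that "set of techniques" looks like in practice: perceptron-style units stacked into a tiny neural network and trained with backpropagation on a toy problem, here in Python with NumPy.

```python
# Illustrative sketch only: a tiny two-layer network of perceptron-style
# units trained with backpropagation on XOR (a problem a single perceptron
# cannot solve). Not code from Munn's paper.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 1.0
for step in range(10000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the hidden
    # layer and update weights -- this is backpropagation.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```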
Mark Hansen:
No, I agree. I think things get murky if we start to... Because fundamentally I feel like some of the things that we're going to talk about are definitional problems that come from collecting data, making categories, doing the sorts of things that one has to do when one is translating lived experience into data, and then the inevitable consequences of computation on that and what that means, whether we call it AI or not. In writing about AI ethics, you look at the moral principles that people have created that are meant to act almost as guardrails for applications of AI. But before we get into the effectiveness of the guardrails, can you tell me why we need them in the first place? What's at stake? What things can go wrong?
Luke Munn:
So broadly, AI is a powerful set of technologies, and they are novel in some ways. And so that means that they introduce new capabilities. We can do new things with AI, and as AI technologies then get spun out into all these different sectors, different areas of work-- things like welfare, things like the justice system, things like healthcare-- then they do have really concrete effects on people's lives and livelihoods. And so they can benefit lives, but also cause suffering. And so there are real stakes here when we talk about AI technologies. And so that's why we need to think about, yeah sure, their potential, but also their problems. And how do we critically examine those problems? How do we mitigate some of the issues that come with them? How do we put some guardrails or safeguards in place, so that especially those who are already vulnerable, already marginalized, don't get hurt further by these new technologies?
Mark Hansen:
So one of the things you do in the paper is to look at collections of AI standards. And I was a little surprised-- I think at some point you cite over 80 different publications or sets of standards. So there's no shortage of lists of ethical standards for AI, but you're critical of the enterprise. So what makes specifying an ethical code for AI so difficult, and how do existing standards fall short of providing protection against some of the things that you talked about?
Luke Munn:
Yeah, I mean, there's just this deluge really of AI ethical principles that have sprung up over the last few years, and in the paper I list all the ones that are on a national level, as well as industry bodies like the IEEE, and then tech company-led efforts like Google and Facebook and so on. And even the Vatican has its ethical AI principles. So I think that's been one of the de facto responses to this issue of, okay, AI has a potential to cause problems, so how do we handle that? Well, let's just come up with some ethical principles-- these seven points, seven values that we can put on a website somewhere. And already we can see that that's insufficient. It's not enough just to talk about these high-minded principles that are quite vague in practice-- things like fairness and transparency and benefit to humanity.
So the paper goes into the issues with these principles, which we can dig into maybe in a bit more detail, but in broad terms, the idea here is that these principles are vague. On the one hand, they don't come with enforcement, so you can put them on your website or in your company press releases and things like that, but they're not actually enforceable. And they're often quite nebulous in terms of what these terms mean. And so there's this disconnect between, on the one hand, high-minded principles, and on the other, the actual production and development of AI systems. And these two things don't really talk to each other in many cases. And so you get AI production continuing on as usual, while AI ethics is left on the sidelines.
Mark Hansen:
You make the point-- I think the term you use is 'meaningless'-- that these are pretty vague terms, and that a corporate entity, or, without even going to corporations, just someone deploying AI, can pick and choose their definitions as they'd like, because things like fairness or sustainability have a lot of different meanings. And so it provides the capacity to thread the needle in different ways.
Luke Munn:
I mean, these terms, they're just really contested terms. And even outside of AI and tech discourse, they've had this really long history of what does a particular term mean? Like privacy, for instance. Privacy is notoriously difficult to define and even privacy scholars talk about it as an umbrella term, which contains lots of different definitions, contested definitions, incompatible definitions about what privacy means. And so when we start to just list these principles, no one's going to disagree with them. They sound great in practice, but again, they're very vague and broad and they're very difficult then to translate into something more specific, more concrete. And then in some ways it's a great benefit to corporations who are then able to massage the meanings of these words, these definitions, to be whatever they want them to be, to line up with already existing corporate principles or business values, business logics, and so on.
And I think that's really a problem, because we have these things like being beneficial to humanity-- but who are we talking about? We're talking about humanity, but when we think about history, when we look at race and cultural studies and so on, it's clear humanity is not this monolithic thing; some humans have been valued more than others in the past. And so there's this ignoring of the contested definitions of these terms and the fact that they have had many different meanings over time and mean many different things to different groups and communities of people.
Mark Hansen:
I can imagine. I reviewed the ACM policy. It's less about AI in particular and more about the use of computation, but I think it has many of the characteristics you're talking about. Is it fruitless then, for an organization like the ACM to want to have a code of professional ethics? I feel like that's got to somehow describe the way we should be acting in the world, right?
Luke Munn:
Yeah, no, I wouldn't say it's fruitless. I think it's good as a starting point, but again, I think firstly, those principles need to have some clear consensus about what we mean by them, even if that consensus is just at a specific level. So if you're a developer, you're a member of the IEEE, or an engineer, then this is what we mean by transparency-- that transparency in that particular context for these engineers might mean that you are surfacing the ways in which your system is making decisions for the users. And that's quite a concrete idea of transparency. It doesn't mean that we share all the data with everybody. It doesn't mean that we're a nonprofit or financially transparent-- these other meanings. So already, I think you can start to see then how we develop consensus around a particular term. And then we have quite clear ideas of how that might be actioned or operationalized when it comes to certain products, certain services, certain AI models and so on.
Mark Hansen:
So I feel like part of the issue too is, in defining what's good for the world, or what's good for humanity, as you said, there are efficiencies that we may argue AI provides in terms of the behaviors of systems or something like that. So there can be a case made for the efficiency of a system in one form or another, whether that's allocation of police resources or decisions about hiring or whatever-- a lot of arguments are made in efficiency terms to justify a computational approach. I think where things run afoul is in taking the outputs and dealing with the consequences, perhaps unintended consequences-- let's give it that-- of a particular kind of computational intervention. So the predictive policing algorithm that keeps sending police to the same neighborhood, and the crimes really only get recorded where police are. And so the fact that police are in a place means there's more crime data from a particular pixel on the map, and the more we keep sending our police there, it becomes a self-fulfilling cycle. So I'm wondering, is part of the issue to align the definition, or the high-mindedness, of the ethical principles with, let's say, design strategy or design goals? So that part of it is to not just focus on efficiency of one kind, but to think about what else is happening in the non-projected part of the space.
Luke Munn:
Yeah, no, I think that's spot on. Like I said, it's about taking these principles and then thinking about how they actually play out. I mean, efficiency is used to mask a lot of things, to launder a lot of technical transformations. Who can be against efficiency? It sounds great in practice. And there is a case, like you said, to be made that you have a certain number of hours, you have a certain number of staff and they have a certain amount of time, and so you do want to be able to get certain things out of that limited resource. But efficiency, the idea of progress, and technical development have been linked together really strongly over history and used to justify a lot of different technologies that have been problems. And so I think along with efficiency, we should also start thinking about: what are other values, or even values that counteract or balance that in particular ways? Towards the end of the paper, I talk about justice.
Well, justice is not necessarily an efficient process. It's something that might slow things down, might require discussion with communities, might require some different stakeholders getting together and trying to hash out these terms, or testing on a smaller scale before you roll out your great tech platform to a million users. And all of that slows production down, slows development down; all of that represents a friction, especially when you think about the tech industry, which is always about moving fast and breaking stuff. That's the tech motto. So yeah, we can start to think then about other values that should be spliced in and need to be balanced with de facto tech industry values. I've done some recent work on thinking about AI from a Maori perspective, a Maori way of doing things. And there's a series of tests that they have to decide what a Maori response would be in terms of a technical intervention. Does it stand up? Is it valid? Is it legitimate? Is it something that we want to have? And those tests have a very different set of values than the de facto Silicon Valley values. And that kind of thing is maybe speculative, but it's quite productive and healthy, I think, to start to clash these value systems together and think how they might actually play out at the level of the AI model.
Mark Hansen:
We've had a series of talks here at the Institute about computation and what happens when data and computation enter political, social, or cultural systems. And it's easy enough to-- or it feels sometimes like there's an equation at play here, which is: computation plus social system equals disaster. Computation plus political system equals disaster. Computation plus cultural system equals disaster. Is it always the case that this plus this equals disaster? Or are there ways of getting... I like where you're headed with a very local approach to thinking through technology, and how local understandings of... For a variety of reasons, I find that to be a really interesting direction, but is it always the case that when you weave computation and data into something, it's going to go off the cliff? Or is there a way of it being beneficial to humanity, or having some way out of it, and how do we get there?
Luke Munn:
Yeah. I mean, that's the question, right? I mean, my work has been critiqued in the past for being too, I don't know, critical or negative in some ways toward technical systems. My latest book with Stanford was Automation Is a Myth, and in a way that book was a debunking exercise, criticizing technical systems, automated systems, and so on. But towards the end of the book, I do try to think about ways in which technologies could be used in more inclusive, more egalitarian, more sustainable ways. And that's a hard question. I think there are cases, like you said-- it's not always the case that tech systems plus whatever equals disaster. But I would say, in broad terms, what we see in the last couple of decades is that these tech companies are developing products and services and so on that really permeate into everyday life in really profound ways.
And in some ways, they're not equipped to deal with that responsibility. In the paper I talk about the education system for computer science and the fact that ethics really is marginal at best. Some of these courses don't have ethics at all, or it's a nice-to-have at the very end of the course if they have time. So ethics, and just an awareness of social relations, cultural relations, race relations, history, things like that, just has not been baked into computer science courses for a long time. And then as soon as you hit industry, it's no different in the tech industry. As I also mention in the paper, it's notorious for misogyny, for sexual harassment cases, for forms of racism in the workplace, for being anti-worker and so on. In many ways it's not an ethical industry.
And so when you're just putting these bandaid ethical principles on top, it's not going to be a solution. You're not actually impacting the structural issues that are at work around AI production. And so you have, on the one hand, a set of seven principles or whatever. And on the other hand, you have AI production and a particular brogrammer culture, which we're pretty familiar with by now, which actually builds these systems. And the two things are not talking to each other. Yeah, it's inevitable then that we get these kinds of issues coming out. And in some ways, like you said, they're unanticipated, but in other ways they're actually logical conclusions based on the industry and the cultures that produce them. And so that's why I suggest, towards the end of the paper, to think about these organizational issues, these structural issues, and so on.
Mark Hansen:
I mean, it reminds me-- a while back, I worked on a story with the Marshall Project about predictive policing, about allocating police resources using a predictive model. And there were three stages of it. There was one which was being offered by a company called PredPol, which was entirely black box. You didn't know how it was working, and you couldn't say why a particular pixel was ranked as being high for a potential crime in the next eight hours or something like that. So, completely black box predictions. And then we had another group that was using random forests, so they could tell you which factors go into a neighborhood being dangerous-- or rather, having a high probability of crime taking place in the next eight hours. So they could tell you which factors are important, but couldn't tell you exactly what was happening.
Then you had a research group at Rutgers that was just doing a simple logistic regression, so nothing fancy. And the clarity of their model allowed them to say precisely why things were happening, presumably with a little bit less fidelity, but then they would use that openness of the model to convene neighborhood groups, to talk about why these particular situations were dangerous, or why they would lead to the probability of crime being high-- and let's talk about what we do besides just putting a police car there. We can now talk about the actual thing that's causing it, so that we could maybe come up with a different solution. And I feel like in that spectrum, you have the one piece, which is, like I said, black box and optimized and thinking about efficiency and yesterday's data and so on, and you have something in between, which is trying to have a dialogue with a community.
And then you have the research group with a very simple model that's coming in and saying, "From the beginning, we really want to talk about convening groups in one form or another." And I came away from that exercise feeling hopeful that there was a path forward for modeling-- that it makes me hopeful for how they might apply some of this technology, so that it's not trying to just optimize where police go, but instead it's opening a conversation with community partners about the nature of policing, police resources, and other solutions to situations that don't necessarily call for a police response, right?
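The interpretability spectrum Hansen describes can be shown in miniature. What follows is a hedged sketch-- not the Marshall Project analysis, with invented feature names and synthetic data-- contrasting a random forest, which reports only relative feature importances, with a logistic regression whose signed coefficients can be read off directly and discussed with a community.

```python
# Illustrative sketch with synthetic data and hypothetical feature names,
# not any real predictive-policing system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["prior_incidents", "vacant_buildings", "streetlight_outages"]
X = rng.normal(size=(1000, 3))
# Synthetic "crime in the next eight hours" label, driven mostly by the
# first two (made-up) features.
logits = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.1 * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=1000)) > 0

# Random forest: you learn which factors matter, but not how they combine.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(dict(zip(features, forest.feature_importances_.round(2))))

# Logistic regression: signed, roughly additive coefficients you can
# explain in a neighborhood meeting.
logit = LogisticRegression().fit(X, y)
print(dict(zip(features, logit.coef_[0].round(2))))
```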
Luke Munn:
Yeah, definitely. I like that example because there's a mixture of transparency and action, or some sort of accountability. In the paper we talk about transparency, and transparency is really important, I think. Transparency has been a watchword, a buzzword almost, for critiques of AI systems in the last five years, I would say: just make it more transparent, it's a black box. And that's justified in some ways, I think. We definitely need to surface these decisions and understand why they're happening. But I think transparency is just one part of the picture, because transparency by itself doesn't mean that anything's going to be done. And we can look at open source systems, open source data, governments that release all their data. It's all transparent, but it doesn't mean that you have any way to address that, or to redress those wrongs if something goes wrong.
So transparency then must be accompanied really closely, I think, by accountability. That's where things like policy can come into play. That's where things like community discussions and workshops can come in, and you end up going back to tech companies or to particular groups developing a piece of software and saying, "Well, this works really well and we're happy with this aspect. Or we think this feature could use some work-- but have you thought about this? Adding this on? This would really help us." And so you get this kind of co-design, in a way-- a version of co-design that actually provides meaningful feedback. And that feedback actually gets worked, maybe, into version 2.0 of the AI model or the software or whatever. That, to me, is a pretty promising kind of strategy. And it also suggests maybe that these things, like you said, happen on a local or regional level.
So many of these AI ethical principles are meant to be universal, and these systems are being rolled out globally, but actually we have a lot of systems, organizations, and expertise in place at the local and regional level. And you can see the ways in which particular groups, particular communities have certain needs. And those are not going to be the same, even within the United States, for example-- let alone thinking about AI in India versus AI in China and so on. And so that's why I've been thinking a bit more about locally grounded ethics. In some ways, defining ethics for the entire globe is not only useless, it's a bit hubristic, a bit arrogant. Who are you? Who are you to define what values our community subscribes to? And so that's why I think this locally led and maybe a bit more grassroots-focused work could be really productive.
Mark Hansen:
I had done some work-- I attended some meetings-- getting ready for the 2020 census here. And there was a big push on the part of various community groups to make sure their people got counted, because a lot of money gets handed out based on census numbers. And if your people stay away from the count or don't fill out the forms, then they'll be underrepresented and you won't get the money you need to make your... So participating in the census and census-like surveys becomes something that locally people are advocating for: our community needs you to fill this out. And I recall that in one of the surveys around COVID, trying to assess the economic impacts of COVID on families, the bureau asked its first sexual orientation or gender identification question. It had never... I mean, I guess in the 2020 census you could see a same-sex household, but this was the first explicit survey question it had asked.
So, a little off path, but the idea is that there's something local being responded to here. In this case, maybe it's making sure that people participate-- answering the census bureau, responding to the survey-- but there was something that felt very much of the same spirit in trying to get people to answer this question. And once this question was answered, there was visibility into something that we didn't have before, which could then prompt questions about why LGBTQ-led families are doing worse than other families, or things like that, that wouldn't have surfaced if it wasn't for asking those kinds of questions. This is more of a data thing than an AI thing per se.
Luke Munn:
No. And I think, like you said at the beginning of the conversation, capturing data and generating data has a long history, and that history absolutely figures into AI systems, AI models. Even if we think about the census, we can go back to Herman Hollerith around the turn of the 20th century-- he used a very technical new machine at the time to speed up the capture of data and process all that information in record time. So there's this very clear connection between technical systems and the capturing of data, and then what you can do with that data, because in that case census data provides a much sharper, fine-grained portrait for the state of what it's dealing with, which has, of course, its upsides and downsides. But absolutely, the production of data, what kind of data is being captured, and what the limits of data capture are, is really key.
And when we think about bias in AI systems, which has been quite a hot topic in the last few years, a lot of that comes down to the data that a model is trained on, and what is in that data and what is outside of that data. And so if you're training on a data set that's predominantly white or Caucasian, your model is going to struggle to identify Latin American subjects, or misidentify people who are African American, and so on. These are some well-known examples of bias within AI systems. And that comes back to the data-- what gets left out and what gets captured. And some of that is, I think, about diversity, about different people being represented within the data set. And some of it is also about, I think, the limits of quantification. So when you think about how you convert the rich messiness of everyday life, people's experiences and subjectivity and things like that, into data sets, there are always going to be some things that are left out, that are residue, and some things that are captured that don't quite line up with people's lived experiences.
And so you have these hard-edged integers as containers-- numbers and graphs and stats and so on-- and they give you a skewed version of reality. And so one of the things, then, when you think about making AI systems more inclusive, better for people in general: we can start to think about the kinds of data that are captured, whose knowledge systems are used, what kinds of data are privileged. And often those systems are trained on data that has a certain point of view, a certain way of privileging kinds of information, and that leaves out the experiences and lives of other people.
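One practical way to act on that point is to audit both how groups are represented in a data set and how a model's errors fall across those groups. Below is a minimal sketch under assumptions: the table of predictions and the column names are hypothetical, invented purely for illustration.

```python
# Hypothetical audit sketch: group representation plus per-group error rate.
import pandas as pd

# Invented predictions table: true label, model prediction, and group.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "C"],
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 1, 0, 1, 0],
})

# Representation: what share of the data each group accounts for.
print(df["group"].value_counts(normalize=True))

# Per-group error rate: groups that are scarce in the training data
# often surface here with much higher error.
errors = (df["y_true"] != df["y_pred"]).groupby(df["group"]).mean()
print(errors)
```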
Mark Hansen:
I usually assign my students Seeing Like a State, the text about the ways in which data collection, or the assembly of data at, let's say, the level of a state, involves erasing a lot of local notions-- ways we might describe things locally that don't necessarily carry over to other places-- because from the vantage of a state, we have to integrate all of that. Or Ted Porter calls quantification a distancing technology: you are pushing away some of those local components, and instead finding a way to aggregate up, to be able to then feed things into models of various kinds or tables or wherever they might go next.
Luke Munn:
Yeah, actually in my PhD I looked briefly at the Austrian empire a couple hundred years ago, and early on they didn't have a street naming system. So when you told someone where you lived, it would be like, "Oh, I just live down the street from the Golden Lion, the pub." Everybody knows that. So there's already local knowledge that's required to understand things about you and where you live and who you are-- a lot of people with the same name or similar names. And so these census takers then started going through the city and were just overwhelmed, because they're trying to assign numbers and street names to certain places. They had certain boxes and fields that they were filling in and were required to enter, and the responses from these local people just completely exceeded the standardized boxes that they were supposed to fill out. And so I think the census was supposed to take a few months or something. It ended up taking two and a half years.
Mark Hansen:
So there's that idea, then, of the expressiveness of data at one moment, and then the necessity of maybe removing some of that expressiveness in favor of doing some kind of larger-scale analysis or some kind of modeling exercise, a computational exercise. I want to go back for a second to a word you used earlier that I wanted to get at, at least for the audience to have an appreciation of. In the paper you write about ways in which... So if assigning ethical principles doesn't necessarily get us where we want to be-- because they're reliant on vague words that are hard to pin down, or they don't have any actual enforcement behind them, or they're focusing too much on the system itself and not the broader context-- you have two answers. One of them is to think instead about AI justice, as opposed to AI ethics.
And I was corresponding with Julia Angwin before we had this conversation. And she had thought about this idea of justice as similar to the way in which we've moved from a computational focus, when talking about AI, to maybe more of one that depends on social science-- thinking about social science as a way of framing AI. And she in particular suggested we give credit to the academics, primarily women of color, who have forced that conversational shift from a computational concern to more of a social science one. But I think that that shift to social science is also code for what you're describing as a turn towards social justice-- or sorry, AI justice, as opposed to AI ethics. And maybe you could speak a little bit more about what you mean by AI justice and how that starts to get us out of the problem of ethics.
Luke Munn:
Yeah. I mean, in really broad terms, it's about expanding the conversation. AI ethics, as we think about them, are, in narrow terms, a set of moral principles, and those moral principles are then applied to an AI model or a piece of software or a platform and so on. And that's quite a narrow understanding of the problems. We already alluded to the fact that a lot of these problems are structural and organizational, and so simply having some principles and then saying that your piece of software adheres to them doesn't really address the problem. So I think some of that is also this naivety or obliviousness, in that if your production team is a homogenous culture of particular people, then you're not actually aware of a lot of these issues. And so in a way, like you said, it's about a turn to social science.
And I think really, the key there is the awareness of broader issues that social science brings, right? It's about breaking out of this very strict, narrow engineering mentality about problems and solutions, and about being aware of social dynamics, cultural dynamics, racial dynamics-- about how, in history, certain peoples have been privileged and other peoples have been oppressed or marginalized in particular ways. I think that's one of the main contributions that social science brings us. So AI justice, then, is about expanding that conversation. It's about thinking more broadly: okay, who's in your company? Who makes up your team? Do we need more diverse voices on that team? Do we need to consult with the community that our product is going to impact? Do we need to think about the knowledge systems that our AI model is based on? A particular knowledge system might be very Western, for example, in that it privileges certain types of information and excludes other types, like we talked about previously. So when we think about these global systems touching down at different levels, in different places, on different peoples, and then this mode of production and the way in which it privileges certain things-- those two things clash. So AI justice really is about thinking more broadly and trying to address the structural and systemic issues at the core of technical production.
Mark Hansen:
So if we take that framing, and maybe some of the other suggestions that you have in the paper, what does this say for, let's say, the CEO of a tech company? What do they take away? You've just illustrated a few things: maybe we should look at who we've employed, maybe we should look at the diversity of our workforce, maybe we should look at more user-centered design, for lack of a better term.
Luke Munn:
CEOs of tech companies? Yeah, I mean, the advice I give in the article is, on the one hand, to think more broadly. So it's about reflection, I think, and about contemplating-- taking a hard look at your business, your production crew, your position. What's the perspective that we're coming from, and what other perspectives might there be? I think that would be the first step when we start to think about AI justice, social justice. And so it's this hard introspection, which can be difficult, but it's actually a starting point for thinking about how we might start to address these issues.
Mark Hansen:
For policy makers, what do you advise?
Luke Munn:
Yeah. So for policy makers, I think it's about, as we mentioned earlier, distilling these principles down into actionable legislation that is actually enforceable. So you take something again, like fairness or transparency, and you think really concretely about what that might mean on a policy level, and then how that actually might be enforced. By enforcement we're talking about fines, we're talking about punishments of certain kinds, because I think in the end, people do what they can get away with. And so for policy makers, it's about offering certain incentives and rewards and punishments to ensure that things get followed. And we can think about China, which recently released algorithmic regulation that says you can't actually offer different people different prices-- you always have to offer people the same prices when we have these algorithmically priced products and services. And China is a whole other debate, which I've critiqued in the past. But I think that particular example is quite strong because it's this very concrete, actionable thing. It's an affordance built into software that you can implement at the code level. And then it's enforceable. They've said that they're going to take this really seriously, and they've already demonstrated that in the past. And so that particular example, I think, is really strong in terms of what policy should be doing.
Mark Hansen:
So I'm a teacher. I teach in the School of Journalism. How am I moving my students along to help them be better consumers, more aware of the systems around them? What does the work you've done here say to a teacher?
Luke Munn:
It would say that that's what good education does-- a well-rounded education. And that's something that's been eroded, I would say, over the last decade or so, in terms of really narrowing the kinds of education that students get, privileging STEM disciplines as a money maker at the end of your degree. But I think good education is well rounded and provides different perspectives from different people. In that sense, it's an empathy generator, and what you should end up with at the end of an education is a student who's more socially, culturally, politically aware-- someone who can not just code, but actually think critically about the kinds of things they're putting out into the world and how they might impact people's lived experiences.
Mark Hansen:
And does it require a STEM background to be able to live... If I'm coming out of the humanities, do I need to have a STEM background to ask questions about AI and the systems around us? Or how do I become equipped?
Luke Munn:
Yeah, I mean, as a user, I think the conversation is a little bit different, in that you're not actively producing these systems, but this idea of tech literacy, I think, still has some value-- thinking, again, critically about technical systems and asking questions. These are things that the humanities has actually been really good at historically and should continue to do, right? Because it's clear that we need people to ask these difficult questions. And I think one of the things-- maybe I hinted at it in this article but didn't touch on so much-- is this idea of a bridge. A figure that we really need in today's society, who is a bridge between something like the social sciences and STEM subjects, someone who understands at least the logic of technical systems, how they operate-- not necessarily the nitty-gritty details, but how they arrive at certain outputs, how things are produced, generally speaking, how data contributes to all of that-- and who can then also put that together with insights from race studies, cultural studies, political science, history and so on, and provide this translation or bridge between these two worlds, and help companies and organizations to make better products.
Mark Hansen:
I will put my journalism students up for being part of that, a critical part of that bridge. All right. Well, thank you very much for spending an hour today.
Luke Munn:
Yeah, no, thanks, Mark. It's very generous of you to spend your time and talk to me. And thanks to Justin for organizing all of this as well. I think it's been a really fun conversation.