Ifeoma Ajunwa on the Quantified Worker

Justin Hendrix / Jul 23, 2023

Audio of this conversation is available via your favorite podcast service.

Last week, Emma Goldberg, a journalist covering the future of work for the New York Times, told the story of Ylonda Sherrod, an employee at AT&T’s call center in Ocean Springs, Mississippi, and vice president of the facility’s local union chapter, part of the Communications Workers of America. The story describes the ways in which AI is creeping into Sherrod’s job. For instance, one system produces automated transcripts of calls with customers, but it has a hard time interpreting Sherrod’s Southern accent, so transcripts of her calls contain mistakes. The story recounts the ways in which Sherrod’s job is being automated and quantified, and some of the fears and complications that come along with that.

Today's guest has looked deeply at such issues. Ifeoma Ajunwa is the AI.Humanity Professor of Law and Ethics and Director of the AI and the Law Program at Emory Law School, and author of The Quantified Worker: Law and Technology in the Modern Workplace, from Cambridge University Press. The book considers how data and artificial intelligence are changing the workplace, and whether the law is better equipped to help workers in this transition or to provide for the interests of employers.

What follows is a lightly edited transcript of the discussion.

Ifeoma Ajunwa:

This is Professor Ifeoma Ajunwa. I am the AI.Humanity Professor of Law and Ethics at Emory Law School. I'm also the author of The Quantified Worker and the director of the AI and the Law Program at Emory Law as well.

Justin Hendrix:

Those are new positions, you've just moved to Atlanta, I understand?

Ifeoma Ajunwa:

Yes, I just arrived this July, so yeah, I'm excited to be here.

Justin Hendrix:

Excellent. We're going to talk about this book, The Quantified Worker, which is quite a term. How long have you been working on this?

Ifeoma Ajunwa:

Approximately six years. I wouldn't say consecutively, right? There were definitely starts and stops and I also did a Fulbright abroad, so I was gone for that. But yeah, it's been six years in the making, so a while.

Justin Hendrix:

And how is it situated in terms of your broader research agenda? How would you describe your broader research agenda?

Ifeoma Ajunwa:

Right. I really see the book as a synthesis of my research agenda to date. The book came out of my dissertation project in a very tangential way. My dissertation project while I was at Columbia University was looking at reentry of the formerly incarcerated. Interviewing them is really what tipped me off to the proliferation of automated hiring systems and their potential for discrimination. When I started on the tenure track as a law professor, I just thought to myself, I want to investigate these programs further and see what they're doing for society, and also see how they mesh with the law, especially anti-discrimination laws. I started researching automated hiring programs, and that obviously informs a big chunk of the book; I devoted about three chapters, I would say, to automated hiring systems and all their iterations, especially automated video interviewing.

But in the course of that, I was also thinking about workplace surveillance. When I was a graduate student, I was also a research intern at Microsoft Research. During that time, I wrote a paper with Kate Crawford and Jason Schultz about surveillance in the workplace, looking at what the law can actually do or not do in preventing discriminatory or burdensome surveillance. The title of that paper became "Limitless Worker Surveillance," with the realization that there were really very few laws that governed what employers could and could not do. So employers really had quite a lot of leeway. In fact, federally, they have carte blanche in terms of what kind of surveillance they can perpetrate in a workplace. The book has a chapter on that as well.

While I was working on the book, of course, we had the Covid-19 pandemic, and that informs the chapter that looks at telecommuting: the role that surveillance plays when you add in telecommuting, the attitudes of employers when it comes to allowing employees to telecommute, and how telecommuting can in a lot of ways open the door to even greater worker surveillance and even greater privacy violations. Because now you're not so much working from home as living at work, right? You have your computer with you and you have all the tracking tools with you. A lot of employers are turning to all this bossware to essentially spy on workers throughout their day, and now that's in your home; you have bossware in your home.

I think the book is very much informed by my research agenda of being focused on AI technologies in the workplace, on issues of discrimination in the workplace and how those technologies enable that discrimination, and also on issues of worker privacy and worker personhood. It also keeps with the times by looking at some workplace trends: Covid-19 and working from home and telecommuting, but also workplace wellness programs, which are now not so much a trend as really the norm, and what that is doing to the worker. It's all part of this bigger picture of trends and business practices and the issues stemming from them, especially when AI technology is introduced into the mix.

Justin Hendrix:

I want to get into some of those specific trends and ideas and areas that you go into throughout the interview. But in particular, you start with a history lesson and an explanation of what you call the ideology of worker quantification. How would you explain the history and ideology of worker quantification to someone unfamiliar with that history?

Ifeoma Ajunwa:

When I talk about worker quantification, I'm really putting forth this theory that we now have a paradigm where the worker is reduced to numbers, reduced to quantified data points, in a manner and to a degree not previously seen in history. Yes, worker quantification is an iteration of Taylorism, so it's not completely new, right? With Taylorism, at the turn of the 20th century, you had Frederick Winslow Taylor really pushing this idea that to become more efficient, to become more productive, managers needed to understand the work process down to the minutiae. They really needed to go out into the factories and the warehouses, watch what workers were doing, and try to quantify the work tasks, and in doing so, attempt to standardize the work tasks such that any worker could then be taught how to do them as efficiently and productively as possible.

But what I'm saying is we've gone beyond that. We've gone beyond just this focus on the work task. With worker quantification, you have a focus also on the worker themselves, so you're trying to break down the gestalt of the worker into these manageable buckets of data. You're looking at the worker's productivity in increments of time. You are looking at the worker's productivity in terms of keystrokes per minute, per hour. You are looking at emails sent per day by the worker. You're not just looking at the finished product, you are looking at the minutiae of how the worker is behaving. There's quantification of worker behavior. You are even going so far as to quantify the worker's health, which is done through workplace wellness programs. Everything about the worker is now quantified, all of course in the pursuit of productivity and efficiency, and I'm saying it's gone too far. It's really gone too far because now you start to risk losing personhood. You start to risk seeing the worker not as a full person, but as a set of numbers.

Justin Hendrix:

Is the thing that ties that history together, I suppose, from Taylorism on through Henry Ford and various other modes of quantification of work, how we comport to machines? Is that the through line?

Ifeoma Ajunwa:

Yes, I think the through line is really the use of technology. For as long as you've had humans, we've tried to use technology to make work easier. I think it's really important to understand what technology really means. Technology does not only mean digital technology. The first wheel is a type of technology that can then be applied in various contexts in various ways. And obviously you can use it for good or ill: you can use the wheel to make farming more efficient, or you can use the wheel to create chariots and kill more people. So it's really about how you use the technology. But it's not solely about how you use the technology, because there can also be ideologies attached to it. With Taylorism, when Taylor started, all he had was a clipboard and a pen, and he had to follow each worker around. And yes, it was still worker surveillance, he was still peering over the shoulders of workers and documenting what they were doing, but it was limited. He couldn't follow them outside the workplace and see how well they were sleeping at night or what they were eating.

With the advent of AI technologies, productivity applications, Fitbits and other types of trackers, GPS, we now have surveillance that can be pervasive, constant, and really indefatigable. It's not a human that has to follow you around; they can now just give you a phone with a productivity app. The technology is what's propelling the change. But I also want to proffer that the ideology is propelling the change as well, because you could have had all these technologies and their use could still have been confined to the workplace.

But we do have an ideology now where we think work should be boundless. That work should not only exist in the workplace; it can go home with you, it can go on vacation with you, it can go wherever you go. That is an ideology. And when it comes to work-life balance, some people will directly say, "Oh, it's a myth, you just work whenever you have to." That is an ideology. So I do want to be careful in saying that technology can be used for good and bad. Yes, it can, but sometimes there are also ideologies attached to the technology that propel its use or influence its use.

Justin Hendrix:

You take us through a history of the emergence of Taylorism and its progeny, I suppose. There's all sorts of detail about things like the Pinkerton Detective Agency, the labor movement, and the emergence of the Wagner Act. But before we move on from this bit about history and ideology, I want to ask: how has the law, particularly in this country, contributed to the quantification of work? How has it reinforced some of these ideologies?

Ifeoma Ajunwa:

From the history of worker rights that I attempt to share in the book, there has been an asymmetry of power accorded to employers in the United States, unlike in other jurisdictions. If you look at Europe, for example, there is much more acceptance that workers should have a say in terms of how the workplace is run. There's a much greater prevalence of unions in Europe and elsewhere. And this of course shows: it is reflected in the quality of life for workers. It's reflected even in the working conditions that are allowed.

I do think that the law thus far has been largely employer friendly, to the detriment of the employee. Even using the words employer and employee is loaded, because American law as it stands does not recognize everyone as an employee. Everyone can be an employer, but only people that meet certain criteria can be called employees. That of course comes with its own limitations on rights and obligations, right? If you're not an employee, then there are certain rights that you don't have, and there are certain obligations that the employer does not have to you. I do think the law has a very important role to play when it comes to the quantification of the worker.

Justin Hendrix:

The quantified worker reports, of course, to the mechanical manager. You talk about the idea that increasingly the work of hiring, monitoring, and evaluating workers is also being mechanized, often with AI technology. This is where you take us through all of these automated hiring systems, personality tests, video interviews, and various other forms of workplace surveillance. I was particularly struck by the idea that automated video interviews are in fact taking place at such scale at the moment. What is this like for a worker? What is an automated video interview?

Ifeoma Ajunwa:

The automated video interview is the latest iteration of automated hiring. Automated hiring can really run the gamut, so you have your proto-automated hiring systems that were really just resume parsing systems, right? You submit your resume and there's a program that employers can use to basically check for certain keywords and pull out the resumes that match. That's really just the rudimentary AI system. But now you have the automated video interview, and this is when the candidate has already passed the resume parsing, and their interview is conducted with basically a camera screen. There's no human; you sit and you look directly into your camera, and that's considered maintaining eye contact, and you answer questions that show up on the screen. Typically, these interviews are timed, so you have a certain time limit to answer a question. Sometimes you can pause and restart, sometimes you can repeat a question if you'd like. And generally these interviews are recorded, and oftentimes they are actually evaluated by an AI.

That's something that a lot of candidates don't know. A lot of candidates that I've encountered while researching this book tell me about conducting such an interview, and then I ask them, "Did you know that it was going to be reviewed by an AI?" And they're like, "No, really?" Because they have this expectation that yes, it's being recorded, but it's just so that a human can watch it later. That's not actually always the case. Oftentimes, for reasons of efficiency, it's really an AI that reviews the video and gives it a grade, essentially. One of the leading companies doing this is HireVue, and they did do an audit of their system, and there were some issues.

Chief among the issues was that the system sometimes had trouble with accents, and this was a wide gamut of accents. Another report that I read, separate from HireVue, mentioned that some systems, for example, have trouble with Southern accents. I know you're from the South, and I've taught in the South, and it's really appalling to think that, oh wow, talented men and women from the South might be penalized just because of the way they sound, because the AI system has trouble making out their accent. One system was actually marking a lot of Southerners as unconfident or unsure of themselves because the system kept thinking they were asking a question, just because of how they ended their sentences. That's just one issue with having a non-human review an interview.

There were also issues with systems parsing body language. The HireVue one mentioned that it was reading facial expressions. They claim that they've since stopped this, but there are so many different automated video interviewing systems out there, and there's really no law against facial analysis as part of these automated video interviews. The HireVue one had claimed that it could use brow furrowing and lip tightening to parse such things as veracity, whether the person was telling the truth, or trustworthiness. As you can see, this begs the question of how scientific this is, right? And the science is not there.

In the research I did for the book, I looked at the work of social psychologists who said that there's really no such thing as a universal expression of emotion. An American looking happy will look different from a Russian looking happy. The reason is that we actually interpret what happy looks like differently. If a Russian person looks at an American with a huge smile, they're not necessarily going to say that's a happy person. They might say, "Oh, that person looks suspicious, they're smiling too much, they're up to something," right? So it is really cultural how we express and also perceive emotion. Automated video interviews raise that specter of discrimination because they're really saying, "Oh, we're taking one set of how people express emotion and we're going to apply it to all our candidates," and that just leaves a lot of room for discrimination.

Justin Hendrix:

Are there laws that already would apply to this particular practice? I know you mentioned the ADA, for instance. Are there rules that you think should be enforced at the moment on these systems?

Ifeoma Ajunwa:

Currently, there's not really any federal law that specifically addresses automated video interviewing. There is some argument that the ADA can apply when the applicant is disabled and, as a result, that disability does not allow them to give an effective automated video interview. For example, somebody that has autism and has trouble maintaining eye contact: if they're supposed to be looking at a camera the entire time, maybe they can do it, maybe they can't, but that might become an issue. Another thing is the fact that some of the systems are using body language, including things like how still the person is sitting. For people who have certain neurodegenerative diseases, sitting still is just not an option, but that of course does not necessarily have any real impact on their ability to produce work. In those types of circumstances, potentially the ADA could apply.

There's also an issue if somebody is blind and can't read the screen to be able to do the interview, because a lot of times they just show the text of the question and then you are supposed to speak. I guess the main thing is that even when applicants are confronted with these systems, if they do have a disability, they should know that they have the right to request reasonable accommodations in order to conduct their interview. That is their right under the ADA. But outside of that, there are no specific federal laws that actually apply to these programs. There are some state laws: BIPA, the Biometric Information Privacy Act of Illinois, would apply to some extent to automated video interviewing, because by recording your face and your voice, that is the collection of biometric information. That would need to be done in accordance with that law in terms of how the information could then be used. Because, frankly, it's a separate problem that these automated interviews are collecting a treasure trove of information.

They're collecting your faceprint, they're collecting your voiceprint, which as you know can end up in very nefarious places. We now have things like deepfakes happening. I think there is a huge need for a federal law, and I talk about this in my book: a federal law that is explicit and comprehensive about how these automated hiring systems, from resume parsing to automated video interviewing, can operate.

Justin Hendrix:

I'm speaking to you, of course, from New York, despite my Southern accent. New York City just passed an automated employment decision tools law that went into effect, I think, last week. What do you make of it? Does it answer any of the sorts of concerns that you have? Does it go any way towards providing a framework for what you'd like to see happen at the federal level?

Ifeoma Ajunwa:

I think that law is a step in the right direction. I think it's absolutely a much-needed, very relevant law, and I'm glad that the New York City government is taking that step. I hear there's an auditing aspect to it, and I think that's such a necessary component of any automated decision-making system. I think we need laws in general that put that kind of auditing in place for AI technologies across the board. I'm taking more of a wait-and-see approach in terms of the actual efficacy and effectiveness of this law. There have been some critiques of it. Does it go far enough? Does it have too many loopholes? I do think in the coming years, as you have some regulatory action and perhaps litigation on the basis of the law, that's when we can truly discover if it goes far enough.

I think it is great to have those kinds of laws because then they can be like a blueprint for the bigger federal laws that I think we still need, because we don't really want to create a system in America of piecemeal legislation, where, because I'm in New York City, I get to enjoy all these protections when I apply as a candidate and have to use these systems, whereas somebody living in, say, Nevada or Alabama or North Carolina does not get the same protections. I mean, we're all Americans, so we should have the same protections across the board.

Justin Hendrix:

This book gets into so many different aspects of the quantified worker, from, as you say, workplace surveillance on through to wearables: the extent to which workers are having to give up certain information about themselves at all hours of the day, often even reporting to automated systems or being managed by app, as it were. I think that's something we've talked about on this podcast before. Generally, the gig economy and the relationship between gig economy workers and the firms that, I suppose, often don't exactly hire them, but nevertheless are their employer.

Do you feel like, I don't know, to some extent, I guess, what's the question I'm looking for here? Something like: it feels to me like this is the direction of the economy, that this kind of quantification of the worker, this mechanization of management, is where it's headed. I don't know if there's anything that will stand in the way of it, and I do want to switch over to ethical and legal frameworks, but do you see the same thing? Do you think the world looks more like the types of things you're concerned about in five or 10 years, or less like them?

Ifeoma Ajunwa:

Right. I think that's a great question, because we do want to always be thinking ahead and remaining cognizant of coming challenges. And in that regard, I'm going to say that I do see this as an enduring trend. What a lot of these AI technologies offer to the employer is this layer of efficiency, this allure of cost savings, right? Why hire a human manager when you can have a productivity app that can even fire workers? We've heard of companies like Amazon where workers are getting text messages saying, "You didn't meet your productivity quota, you're fired." And that seems desirable in terms of cost-cutting.

But on the other hand, we do as a society want to think about what this means for humans. It's one thing, obviously, to incentivize companies to make profit, and obviously companies have shareholder primacy as their primary motivation, but I think we don't want to lose sight of corporate social responsibility. Corporations still have a social responsibility to society. So insofar as we cannot stem the tide of innovation, we cannot turn back the clock on technological progress, and we don't necessarily want to say, "Oh, let's put the genie back in the bottle for AI." Let's see how many more metaphors I can bring out here. We do still want to put up guardrails, and I think that's something you'll hear a lot in the coming years. I think you've heard it already with the White House talking about AI regulation.

It's going to be all about guardrails. Yes, you're going to continue to have AI technologies in the workplace, but we are going to have to have thoughtful responses in terms of installing guardrails that allow workers to still enjoy being part of the workplace as long as they can. Of course, yes, we're still edging towards automation. We're not quite there yet. As we're edging towards automation, we still don't want to overlook the humans in the workplace. I mean, they are our society. If humans are miserable, guess what? Productivity will still be down, even with efficiency-making machines. So we do still want to have that human focus.

Justin Hendrix:

I want to move on to talk a little bit about ethical legal frameworks that you think are necessary to perhaps do what you just described, kind of create a different type of environment. You start with a ... well, you start with John Rawls. Why do you start with John Rawls?

Ifeoma Ajunwa:

Right. John Rawls is perhaps my favorite legal philosopher, which is the nerdiest thing to say, but I own it, I'm a law professor. He's my favorite because the way he approaches ethics isn't really to think, oh, what do I think is good? What do I think is right? It's really more to say: take yourself out of the equation. Let's look at who is the least advantaged in society. Pretend you don't know that you are privileged, because everyone has a certain amount of privilege in life, but pretend you don't have any of the privileges that you do. Pretend that you are the least advantaged person in society, and think of your ethics from that standpoint. How do I make life for that person as fair as possible? Obviously, life is never going to be fully fair, but how do I make life for that person not horrible, with the understanding that this could be me?

I think that's really what we want to embrace as a society. I feel like, frankly, other jurisdictions have done this. In France, why do they have universal healthcare? Because they think, well, I am young and healthy now, and I may not have X disease, but maybe in the future I'm going to be older and not quite so healthy. Maybe in the future I'm going to get X disease, so I want universal healthcare that everyone can use. Maybe not me right now, but maybe in the future I might need it. I think we really want to adopt that same attitude when it comes to regulating AI technologies in the workplace and otherwise. That's really what John Rawls was about. He was about donning this veil of ignorance where you don't know where you stand in society, and as a result, you focus on the least advantaged. So you're thinking, "Oh, this automated hiring system that I'm creating, yes, it's efficient for me, but can it cause harm? And how can I mitigate the potential harms it can cause? Does it have a potential for discrimination, and how can I reduce that potential?" That's really why I think that's the ethical standpoint we need to operate from.

Justin Hendrix:

The final chapters take us through a range of potential legal and regulatory interventions. There's a variety of different laws discussed, some proposed, some existing. You put some stock in the Federal Trade Commission's ability to perhaps intervene in some cases. I note that you call out the Algorithmic Accountability Act in particular as one potential intervention that would be useful. Is there other proposed legislation at the moment, privacy legislation or otherwise, that you think would be useful?

Ifeoma Ajunwa:

Yes, I definitely think the Algorithmic Accountability Act is a step. I know that Senator Chuck Schumer, I've been in contact with his office, they are working on putting forth legislation. Senator Klobuchar's office was also interested, and I've noticed even more people in Congress have expressed this desire for something that's going to be bipartisan, because this is not really a partisan issue: 99% of us have to work for a living. Regardless of where on the political spectrum you fall, this is something that has to be addressed for people to have human dignity at work. The White House has initiated listening sessions on AI accountability and regulation, and I've been invited to lead one of those in the near future. I think we do want to do more of those before jumping in pell-mell.

I do believe that we really want to be very thoughtful and deliberate, because AI technologies are changing at breakneck speed, so we don't necessarily want to put into place a law that could then be outdated in three, four, or five years. Do you understand? Just think about ChatGPT, for example, and its evolution in just two years. So we really do want to have this sort of deliberate, yes, somewhat slow process of thinking through all the angles and then coming up with a comprehensive federal law. Now, that being said, while that is happening, I do think there are other things that can be implemented in the short term that are quicker and could perhaps be done by certain agencies. You mentioned the FTC, for example. The Federal Trade Commission, I think, can play a very big role in actually regulating automated hiring systems.

Now, I understand there's sometimes inter-agency conflict. Agencies try not to step on the toes of other agencies. When it comes to employment, that is the remit of the Equal Employment Opportunity Commission. However, the Federal Trade Commission also has the remit, or the purview, of products that are sold in commerce, that will be used in a commercial way, and that have claims and advertisements attached to them. I believe that some automated hiring systems are being advertised in ways that are misleading, in ways that promote unfair practices, in ways that are deceptive, and I think the FTC does have the purview to regulate and curtail that. For example, the FTC could mandate that there be audits of automated decision-making systems before they're even deployed in the marketplace, and that these audits be clear about the limitations of each of these technologies. Because oftentimes there can be snake-oil salesman behavior attached to AI technologies, with outsized claims made about what they can and cannot do, and the people who buy them are unsuspecting about their limitations.

I think this is certainly true of automated hiring systems. I've seen some that are advertised as, oh, this will definitely reduce bias in your hiring; oh, this will definitely diversify your workplace. Obviously, that's not necessarily true. In fact, the opposite can be true, depending on how you use them. I do feel that the FTC could have a role to play in that regard. That was the subject of an op-ed I wrote in WIRED responding to Chair Lina Khan's op-ed in the New York Times about the FTC's desire to more closely regulate AI systems.

Justin Hendrix:

You finish this book with, I think, a hopeful statement. You quote Martin Luther King Jr., suggesting that perhaps a society that performs miracles with machinery has the capacity to make some miracles for men, if it values men as highly as it values machines. But you're also somewhat bleak about this. You say we're set on a course towards a disastrous future of work where the worker is quantified in all aspects. Which way do you think it's going to go, if you had to cast your mind forward 10, 20, 30 years?

Ifeoma Ajunwa:

Well, I'm certainly hoping it goes in the positive direction, but I'm also a realist. I guess one thing I would say: when it comes to AI ethics, there's a tendency to group people into camps, right? You're either an AI optimist or you are an AI pessimist or fatalist or whatever. I would say I'm neither. I am actually an AI realist. What I mean by that is that I am realistic in understanding and accepting that AI technologies are here to stay, that they will continue to play a role in our work lives, in our social lives, et cetera. I am also realistic in not being a fatalist. I don't think that role necessarily has to be bad. I don't think that role necessarily has to be disastrous. I think we are currently headed there because there are no regulations. But I think we can get a handle on these technologies if we actually take the time and the effort to institute the regulations, the guardrails, necessary for AI to literally not go off the rails, and to function in a way that's actually humanity-preserving, in a way that serves society, and in a way that helps us all retain our personhood. I think we can do that. We just have to have the political will to do it. That's my AI realism stance.

Justin Hendrix:

Well, perhaps the path towards that better future will start for some when they read this book, The Quantified Worker: Law and Technology in the Modern Workplace. Thank you so much for joining me.

Ifeoma Ajunwa:

Thank you so much, Justin. It's been a pleasure.
