Algorithms and the Plight of the Gig Worker

Justin Hendrix / Sep 5, 2021

Subscribe to the Tech Policy Press podcast via your favorite service here.

Tomorrow is Labor Day, a holiday that is the result of the efforts of 19th century labor activists to recognize the contributions of workers to society. The US Department of Labor’s website will tell you that the very first Labor Day holiday was celebrated on Tuesday, September 5, 1882, here in New York City. In 1894, after many other states decided to adopt the holiday, President Grover Cleveland signed a law making it a national holiday.

It makes sense this week to focus on labor in our current moment, but not just because of the holiday. New York City also saw a historic storm and flooding this week, as the remnants of Hurricane Ida passed over the Northeast.

That evening, a Twitter user shared a video of a delivery worker in Brooklyn toting a bag of takeout through knee-deep water. The video went viral, even attracting the attention of Rep. Alexandria Ocasio-Cortez, who represents parts of the Bronx and Queens in Congress.

Delivery workers told THE CITY reporter Claudia Irizarry Aponte that the algorithms that assign workers on services like Grubhub may lock out a delivery worker who objects to a delivery's distance or the number of trips- and of course, those algorithms had no way to account for the conditions many workers faced in the storm.

An unidentified delivery worker in Brooklyn

“This was the most horrible day ever on the job,” Lázaro Morales, a Grubhub worker in Astoria, told Aponte. He earned $277 total on Grubhub that day, including tips, after 14 hours of work in dangerous conditions. “The clients are very inconsiderate,” he said. “As long as they get their meal, they don’t care about us.”

Hildalyn Colon, director of policy and strategic partnerships for Los Deliveristas Unidos, a group that advocates for delivery workers, pointed out to CBS New York’s Jenna DeAngelis that the man whose treacherous delivery went viral on Twitter was only one of many braving the storm.

“They were like, what option do I have? I have to feed my family,” she said.

This week, I spoke to someone who studies gig workers and the systems and companies that manage them. Diana Enríquez is a PhD candidate in Sociology at Princeton University. She studies labor, technology, informal economies, and law, particularly in the US and Latin America.

I caught up with Diana about two papers she helped author in recent months: Pre-Automation: Insourcing and Automating the Gig Economy, published in the journal Sociologica earlier this year, and another, Managing Algorithms: Partial Automation of Middle Management and Its Implications for Gig Workers, published in the Academy of Management Proceedings this summer.

Justin Hendrix:

So Diana, can you tell me a little bit about yourself, the coauthors you write with, and how you came to this subject of labor and automation?

Diana Enríquez:

Absolutely. So I worked for a while before I came back to school. I was working in media- I was actually the researcher at TED on their content team, and I was really interested in what was happening with freelance work in media. It made me really nervous. My dissertation is about how freelancers simultaneously operate as small businesses and contractors in this kind of weird employee-business system, and in academia there's a lot of focus right now on gig work and what that means for everybody else.

So I'm writing about gig work, but I'm also writing about this "higher end" part of the contractor market. And when I got to Princeton, I started working really closely with two of my co-authors, Janet Vertesi and Adam Goldstein, who are both in the sociology department as well. They've had to read many drafts of my work and they also became really interested in what was happening with gig work. They weren't quite writing about it as much before but they both work on things that are tangentially related, so that was sort of our common interest.

Then Larry Liu is another coauthor, and he works specifically on automation and what's happening inside large companies. And our fifth coauthor on the pre-automation paper is someone who's starting a master's program- she just graduated from Princeton, and her name is Katie Miller. So I'd say that all of us are interested in what's happening with technology and I came at it more from a labor laws angle. So that's how I ended up on this particular project.

Justin Hendrix:

So we're going to talk about a couple of papers today, and we'll start with the one on ‘pre-automation,’ which you say focuses on “platform coordination of labor on the one hand and expansion of automation techniques for said labor on the other.” You call this a strategic configuration. What does that mean and what does it mean for workers?

Diana Enríquez:

We picked two companies that we think have really made a big effort to become ubiquitous across the US. The first is Uber- now Uber itself is a verb, ‘I will Uber to you.’ I think of that as a symbol that it's so ingrained in how we do things that that's how we think about it. So of course, it's competing with the taxi industry that existed, but it's also competing with people using their own cars and public transportation.

And the other company that we ended up picking was Amazon, who we think of as being so convenient that they've really successfully made shoppers think twice about going to a local store because even though there's a wait time in the delivery, it's so seamless and painless and guaranteed- especially in New York where you may or may not find what you're looking for. They say, "We will definitely get it to you," and that becomes so attractive, right?

So we see these two companies really building out this monopoly-level presence everywhere because they're trying to optimize for consumer convenience and comfort. Meanwhile, to do that, they have to have all these contingent workers. So it's in their best interest for both companies- and in particular Amazon for their delivery, but also Uber with their drivers- to really have a very large workforce to choose from, because then it accounts for tons of turnover, it accounts for any sort of issues that come up if someone's not a good fit.

It also makes it easier for them to render these workers invisible, which is a really important part of their narrative about being ‘tech’ companies rather than a ‘transportation company,’ in Uber's case, or ‘a market,’ in Amazon's. All of this effort to make these workers invisible and give them as little voice as possible really helps with this kind of future building.

So we talk about the strategic configuration as they're saying, "Look, look, we're tech. Look at all this money we're spending on tech." From our research side, we were paying a lot of attention to what kinds of talent they were trying to acquire. So we were looking at which university departments these companies were actively going into and poaching everyone from, because that's happening a lot. And it tends to be engineering departments that are interested in automated technologies, self-driving cars, delivery drones. So we see a lot of people, talent-wise, kind of going that way.

We were paying attention to what kinds of patents they were filing and the technology, which again, is pretty closely linked to some of these universities. And then we were looking at the ways they really try to expand their contingent workforce. In particular, Uber is getting a lot of press because of Prop 22 in California, but their efforts go beyond that particular measure. They've been running similar campaigns with departments of labor and around local laws in Massachusetts and a couple of other states to protect this ‘distanced’ status of these workers- to make sure that we all know they're not employees, they're these bizarre small businesses that really don't look like small businesses unless you dismantle worker protection laws, right? So it's a multi-pronged effort in the strategic configuration to make sure that they're maintaining their monopoly presence with consumers who pick them because they're the most convenient and consistent option.

Then we also see they're making sure that their workforce is giant and supporting all of their needs. And then we see their efforts to really invest heavily in research and development for what they see as the next step in maintaining their labor force.

Justin Hendrix:

So you get into, essentially, the imagination of these companies. I feel like you're kind of teasing out not only what types of technologies they're investing in and talent they're hiring to carry forward that investment, but also some of their management practices. Can you give a couple of examples of the patent space and what Amazon is investing in that gives you some insight into its goals?

Diana Enríquez:

So I know two of my coauthors spent much more time combing through all of the patents really carefully, especially Katie Miller, who is the expert on what was happening there. And she did some really cool network models of who knew each other when they were filing these patents. But I'd say that some of the most striking ones tend to be on things like technology you could use in a warehouse to do the automated gathering, assembling, and moving around of objects that a lot of workers do now.

And I think that's gained a lot of press because there have been injuries, and of course COVID's been really challenging in the warehouses. So some of the patents are about moving heavy objects and doing that type of work, and then of course there's a whole bunch of self-driving car activity that Uber seems to be distancing itself from now, but for a long time was going all in on, in terms of whose talent it was trying to gather and whose patents it was trying to ensure were filed in a way that was useful to them.

Justin Hendrix:

So you talk about some of the key things that pre-automated workforces do. Can you just run through that set?

Diana Enríquez:

So with the pre-automated workforce, we know that they help scale the system. So when I mentioned that both of these companies rely heavily on consumer convenience, we see a lot of the slack being picked up by workers. The reason they're so convenient and consistent is that there's a massive workforce that can be called at any time and will show up, and the wait time will be short or predictable, usually both. But that relies really heavily on having so many workers that you always have someone waiting around for a job, which both of these companies depend on.

So that's happening. Then we also note that the workers take on a lot of the corporate risk, and this is true with a lot of contingent work in the United States. In this effort to sell workers as small businesses- "Oh, it's so great, you control your schedule, be your own boss," whatever catchphrases they're selling right now- the cost is that even more risk is put on the worker. And so Uber drivers are of course responsible for their cars and their car insurance, and I think people are pretty open about that.

But there's a whole bunch of business risk that comes with that because they're like, "Oh, it's great. You'll get to pick when you work and you get to pick if it's the price that you want." But none of these workers get to set their own prices, which is a feature of a small business. There are all these weird parts where you're like, "Oh, clearly they're a hundred percent dependent on this platform that dictates all of the terms while gaslighting them and telling them they get to set their own terms."

So with the corporate risk, it's obviously these actual machinery parts, but there's also figuring out what a functional work schedule looks like, right? So a lot of the workers that I've talked to are like, "I was sold this idea of being able to have my own schedule, but in reality, I'm just sitting there refreshing"- specifically for Amazon's delivery program- "refreshing and refreshing and refreshing the page to try to get enough shifts that I can make ends meet." And it's pretty grim. They're talking about how they're trying to be with their children at night or with their families and realistically they just have their phone there and they keep scrolling and keep refreshing.

So it's two types of risks that way. And then with the training and trial tasking, which is what we call the other bit, there are all kinds of data that these workers end up having to collect, and some of it's really useful for figuring out routing and figuring out what parts of Google Maps work or don't for this type of work.

Workers are pretty aware that they're constantly under surveillance and they're collecting tons of data. But when I ask them about it, they're like, "Realistically, at least I don't have a manager who's standing over me and breathing down my neck"- which is unfortunately the alternative in a lot of low-wage work, in these hyper-surveilled, hyper-managed spaces- "so this is better because at least I'm alone. I'd rather deal with a machine bothering me, because I can ignore it and work around it, than a human who's yelling at me all the time."

But it does mean that they're aware that the cost is, "I'm constantly under surveillance. I gather tons of data." And unfortunately, there's this built-in system, which if they do anything that's perceived as wrong by the machine, they get deactivated and there's no system for appeal. So they're like, "I know that this is more extreme in some ways, but maybe it's still better."

Justin Hendrix:

So that kind of leads us quite well into the second paper, which is on what you call the automation of middle management. In the first paper we've talked about, you've set up this idea of pre-automation- that the workers themselves are quite literally participating in the activity of possibly replacing themselves over some period of time. But then you point to this idea that middle management has already gone through this on some level, that the companies have already automated that bit. And as you say, there's a tension in how the workers feel about this- you start off with that tension.

Diana Enríquez:

It was an accident that we found this middle management automation. For me specifically, it came out of the first paper- talking to a lot of workers and saying, "Hey, how's this going? What kinds of data are they collecting on you? Do you notice them trying to automate things that you're doing?" And mostly what came up in those conversations were places where they saw the platform try to automate something, but it wouldn't go very well. So then they could tell me really clearly about their workarounds. I'm always really interested in the ways that these workers pick up the slack, essentially. But from those conversations, what I saw was that these platforms have pretty effectively tried to automate the kind of coordination tasks we associate with middle managers in production systems, but they don't do any of what we call entrepreneurial tasks- they just reject them entirely, so those fall to the workers.

And if I was going to give you an example of what an entrepreneurial management task is, it's being able to identify problems that come up and address them because it requires improvisation- and tech is not good at that, right? Computers execute scripts really well when you tell them exactly what to do in case ABC, whatever you've been able to scope out. But if something goes wrong, which it frequently does, then it's left to the worker to improvise and work around.

So people are pretty clever about it. I heard some really amazing stories, especially because both Amazon and Uber are trying to be in rural places, which have very limited phone connectivity. People would tell me about all the things they did to record when they delivered things and leave a long paper trail proving that they had done the job while the app was accusing them of not doing it.

Then I talked to a couple of workers who were telling me that one of the issues middle management is typically responsible for is coordinating the workforce- making sure there are enough workers in the locations where they're needed and that shifts are filled in the ways they need to be. The platforms would kind of wave this away by saying, "We increased the prices to encourage drivers to go on the road." But what I saw, typically in urban areas, was that drivers would develop their own data collection system to say, "Typically, I get a lot of rides in this location, I get a higher rate for it. If it's raining, I know to be over here because there are more people here who need trips to go over to this other place."

So they're doing this really kind of N of 1 data collection and trying to guess and have a market view while they're on the ground because the system doesn't help them do it. And so it's inefficient in that way while they're still being sold this idea that it's going to be super great and efficient because technology is so great. So those are a couple of the workarounds. I can tell you about other ones if you're interested, but those are some of my favorites.

Justin Hendrix:

I want to focus on this idea that "as a manager, the algorithm is highly negligent rather than domineering." This seemed to be something that came through from the different workers you talked to. What's it like to work for an algorithm?

Diana Enríquez:

So I think that our work does respond to a lot of the sociological work that exists now saying algorithms are very domineering and there's no space for you to do anything except what they say. And I think with the things that they're scripted to do, that's absolutely true. So if you violate one of the "rules" that the algorithm sets- let's say, for Amazon drivers, what would come up a lot is that consumer reviews are given an absolute premium, and you get too many that say you were a bad driver, or whatever the complaint was.

It means that you were removed from the platform and you have no recourse and you have no ability to overcome that review, even if the consumer is actually horrible, right? And there are lots of cases of this that came up in discussions I had with drivers, especially in the groups where they organize and talk about how do I deal with this person because I've heard bad things about them. So there is that narrative and it's true.

But what we've found more often is that the algorithm itself is supposed to be this middle manager who's coordinating them, but in this kind of continued tech illusion of ‘it's so great you're a small business, be your own boss, choose when you're working.’ I think all of the things the algorithms don't do and the technology doesn't optimize for, or just doesn't care about, are then sold as, ‘Look at you, being your own manager, it's so amazing.’ But it's unpaid work, all of that is unpaid work.

So I gave the example of workers trying to figure out, ‘Where should I be so that I get enough rides and make enough money? I would like to take advantage of the pricing system in the Uber app, but I'm trying to guess, and it only tells me when I'm already in the right location, so it's this guessing game.’ And it means that sometimes you end up with a lot of drivers in one neighborhood when you need them in another, and there's no manager tracking that and saying, ‘This is what the market looks like on Thursday nights, and this is what it looks like on Sundays.’ You have these workers who are sometimes talking to like three of their friends trying to figure out, ‘Hey, have you had more luck over there or not? Should I try that?’ Those are the interesting business cases, but it also came up in cases where the workers were in trouble- they had a really bad passenger, or they had a package. One of the drivers was telling me about how he was given this package by a stranger and then asked to drive it two hours away. He didn't know what was in the box, and he couldn't reject it because he was like, ‘It will count against me. But there's no way for me to record that I'm delivering a strange package- what am I going to do about it?’ Right?

So he did it, but he felt really unsafe for two hours and there's no way to report that. There's nothing for the driver to do there. So that might be a case where you have a system or a discussion about it when you have a real manager. But instead these drivers are again taking on the risk and figuring out, "Is it worth me taking the penalty that I get for rejecting a trip to feel safe?" And that shouldn't be a choice that people have to make, I see that as negligent management.

Justin Hendrix:

So it's probably a good time just to point out that for the second paper you talked to 41 "gig workers" and conducted structured interviews with each of them. Where were these individuals? Who were they typically and what can you tell us about them?

Diana Enríquez:

Absolutely. So it's interesting trying to recruit from these pools because they're so busy. And I think that something that's not discussed very often, especially in sociological research, is that it's really expensive to be poor- both in terms of money and in terms of time. So with this recruiting, I knew that it was going to take a lot of effort to get people on the phone.

So I ran a couple of targeted ads in different apps and on tech companies' platforms, asking people for 20 minutes of their time. They're very structured interviews. I was asking about how they work with the app, what their job is like. I try not to delve too much into personal information, but it frequently came out where people would say, "I'm trying to earn this much per day. I have a family, this is what we need to almost make it."

What we ended up getting was a pretty good sample across the US, so I have a whole bunch of folks from the South and the East and West Coasts and the Midwest. I got a few people from each of the regions- I didn't get someone from each state because we didn't have enough people. But I have a good mix of urban and rural drivers. And then in terms of gender, it ended up being about 50/50, and a mix of ages too. I had a whole bunch of folks who are retired, some of them former taxi drivers, and I had some truckers.

But another piece of the gig work industry that sometimes gets a little lost is that you do have these folks who do it full time, and then a lot of other people who do it as a part-time job to make ends meet- because they're either covering debt, or they're students, or they just really need the extra income. So what frequently comes up in the Facebook groups for gig workers and drivers specifically is that people are always asking, "What's your other job?"

And that's such a normal, basic part of their conversation that it's kind of assumed you can't survive on this and you also can't survive on the other things that are accessible to you. So what is this kind of patchwork quilt of work that you have to create to be able to survive?

Justin Hendrix:

So you identify three aspects of middle management via machine, can you walk us through them?

Diana Enríquez:

Yes. So we were thinking about what would make a successful middle manager doing production work and coordinating a workforce. The three things we picked were: one, that they were providing the necessary equipment workers need to complete their jobs. The second is that they were scheduling work to make sure that labor was available in each of the places where there are tasks that need to be done by a worker. And finally, we said that management is responsible for providing feedback to improve worker performance.

So then with each of those, what we did was compare what a normal company with middle managers would do to what the workers described their work to be. So of course, with Uber and Lyft, the whole thing is that you provide your own car and car insurance. They're not providing any equipment. They provide some software that gets updates but doesn't really handle the actual task especially well- it just kind of connects drivers with passengers.

Then with Amazon, it's interesting because with the Flex program specifically, which is what we were interested in, drivers are responsible for their cars and car insurance again. For a while, Amazon would lend these contingent workers scanning equipment, and workers had to pay for damage on it. So someone I talked to was telling me about this device called a rabbit, and how she would scan packages in when she put them in her car and then scan them out when she delivered them. It was the data recording system.

But because these contractors were somehow responsible for that technology- a risk that they took on for Amazon- she was telling me about how in one week she dropped the device a couple of times and watched her paycheck be slashed in half. And these workers aren't making enough money for having your weekly paycheck slashed in half to be okay. But it's very little money to Amazon- it was like $150. To her, that's a third of her paycheck each time she dropped it and was held responsible for the cost.

So in addition to bearing the risks straight up, there was this kind of weird debt system. And it got more extreme when I was looking into how Uber and Lyft specifically now have a rental program for cars. So a couple of the drivers were telling me, "I can't afford to have a car, but now I can pay Uber to borrow one of their cars. So essentially I work two days for free to pay back this car to do this job for this platform, and then the other three days I might earn some income." And 'questionable' is a nicer, more neutral word than I normally use for that, but that's still questionable. So that's the first one.

But the second one, in terms of coordinating labor- I gave an example earlier of the workers developing their own spreadsheets and really trying to figure out, "How do I guess what the market looks like while I have zero information?" So workers would tell me about how some areas were super crowded- it felt like there were way too many drivers and they didn't get enough work- and they would love to know where else they could go.

There were also drivers, especially drivers who were not white, who would tell me about places they really don't feel safe driving. And sometimes they'd figure that out while they were en route. So one person was telling me about how they ended up a couple of times in one particular neighborhood picking up people who were high and had their car slashed. And there was, of course, some kind of racialized commentary added while it was happening, and the driver felt really unsafe, but there was really no system for them to report it in any way that they felt better about.

So they're like, "I would like to know that I'm not going to be assigned this type of client again like that. If someone else complains, I'm not going to end up picking them up later." But there's no system for that. And then in the third one with worker performance, and what we learned was that the platforms send out these super abstracted emails that are like, "Be nice to your passengers," or, "Here are some tips on how to make the experience good."

But consistently the drivers told me, "I don't really read them because the feedback is so abstracted that it doesn't really help me. It's kind of basic parts of being a person who's decent." And then they would tell me that if the platform ever broke from its automated scripts and its abstracted, HR-approved statements- so you got an email from someone who's like, "Hey, here's what's going on"- you knew it was a warning shot that you were close to being deactivated.

And they referred to them differently- they're like, "We get feedback versus advice. And if I get an email that says advice in it, where they tell me something specific about a trip or a delivery that I just did, then I know I'm in trouble. It's not really supportive. It's 'don't become noticeable to me again.'"

Justin Hendrix:

So the minute you hear from a human, you know it's punitive.

Diana Enríquez:

Yeah.

Justin Hendrix:

So that leads us to four dysfunctional traits of algorithmic management that you lay out in the paper, absolutism, surveillance, conflict avoidance and aggrandizement. Tell me a little bit about each of these.

Diana Enríquez:

Sure. So I think the absolutism is covered really well in the sociological literature. It's a lot about the example I've given where workers, once they get some complaints and are deactivated, have no recourse- they can't talk to anyone. It's sort of like they're stonewalled and they're done. So there are lots of penalties that count against them and they never seem to go away. There's no forgiveness, there's just a scoreboard of times you messed up that you can never undo.

Then with the surveillance element, of course, the drivers are pretty aware that the apps are collecting their location data and there's a whole bunch of other stuff they have to put in. So they know that someone's always watching, but there's this detachment from it, because they still feel like, again, this is a better alternative to the surveillance they had to deal with in a low-wage job in person. So I think that's important to note.

The conflict avoidance and the aggrandizement are newer things that we are hoping to contribute to this conversation. And this is getting into where I've described algorithmic management as kind of negligent. I sometimes call it conflict-avoidant passive aggression, because again, workers will file complaints. They'll note issues with the technology and issues with passengers. But there's sort of a "We hear you, uh-huh" kind of response from the platform, and they frequently don't really feel like anybody's listening. It feels like it goes into an inbox that nobody reads.

So what was really interesting specifically with the conflict avoidance is I'm in a whole bunch of Facebook groups specifically for Amazon delivery people. And there's a whole ritual around sharing when they finally find somebody on the customer service line who will talk to them like a person rather than a robot talking to another robot through the script that they're allowed to say and how it's just so scripted that anything off it becomes lost into space.

So when one of the drivers would find a customer service representative who'd finally answer their questions and try to troubleshoot with them, they would very carefully share that person's direct phone number in the Facebook group and say, "This person will answer your questions." And it became this precious resource that people tried to use really responsibly. And then inevitably, because there's some turnover in customer service too- because it's also a pretty miserable job- there would be this funeral of, "We lost that one precious resource, they're gone now."

You're not going to be able to reach them at that number, and people are looking again for someone who will answer their questions. And because the platform really doesn't help them when they have issues, they form these Facebook groups where they crowdsource information and tips on dealing with difficult passengers, difficult delivery folks. They talk about what they think the market looks like. They really try to support each other because they don't get anything from the platform.

And finally with the aggrandizement, there are a whole bunch of cases where the app would do some kind of "improvement" that was, in theory, supposed to make the experience better for the consumer and the driver. But most of the time the drivers would tell me, "Yeah, it became another thing to work around." So one of the examples someone gave me was that she knew the city she lived in so well that she had a pretty good mapping system for how she did the deliveries, based on what traffic was like, where construction was happening, when schools let out- all of these very specific environmental things that Google Maps doesn't deal with especially well.

And she was telling me that Amazon tried to introduce this numbering system for packages, where they put a four-digit code on the package that was supposed to help drivers figure out how to arrange packages in their car and therefore plan their driving routes. And she said it made it more difficult, because it got rid of some of the information she relied on that used to exist, which was based on real human things.

And this four-digit number assumed that she was a robot who knew this new secret code- which she didn't, and there wasn't any sort of onboarding for it that was helpful. So she told me about how she ended up getting around it and ignoring it. But in the app it looks like a success, because she was driving efficiently and doing all these things successfully- but that was based on her existing knowledge, not on this weird robot code. So much for technology making us better at our jobs, right?

So she would say, "Yeah, they do this stuff and it gets in my way, and here's how I work around it." And that came up a lot. But then we say, "Oh great, the app's doing a good job," when in reality that's a pretty specific tech case. A lot of the time, as I've said, the workers just end up picking up the slack, optimizing, and doing all of this improvisational work that a machine doesn't do very well- and that the large, invisible contingent workforce instead does a lot better.

Justin Hendrix:

You call that positive deviance, is that right?

Diana Enríquez:

Yeah. So it's interesting to think about these gig workers as people who are given a script and a bunch of rules that they're supposed to follow, but who are also told what the company's goal is. So these are things they're supposed to bear in mind, and in theory they should line up. But the workers gave lots of experiences and lots of stories about how the two are frequently at odds. If they follow exactly the script that Uber gives them, it's going to be slower for the passenger. Is that a case where I break the rules, don't follow the Google Maps route, and take the more efficient route- and the passenger might yell at me because they value the transparency of the app and they see me going "rogue"?

But it's going to get them there faster, so I'm meeting that goal. Or do I adhere exclusively to what Google Maps says, and it's going to be worse for the passenger in terms of experience and in terms of cost or whatever, but I'm following the rules? So we frequently saw places where gig workers are trying to make that decision, and they either choose to be invisible to the platform and not have any repercussions, or to make the experience overall better because they improvise to meet the end goal.

Justin Hendrix:

So you get onto the idea that there's a growing distance between workers and employers and why that matters- the fact that these algorithmic management layers are essentially a kind of black box. And you conclude with this notion that the tech companies aren't really innovative at all. What do you mean by that?

Diana Enríquez:

So I think, again, in this future building, the companies have spent a lot of time telling us what the future is going to look like. And that's in part because if they can get us to buy into what they want their future to be like, it makes it a lot harder for us to fight it. What I mean by that is, if people give up and are like, "Yes, this is the future of work, so we should dismantle all of our labor laws to make this easier for these companies to get the data they need and test all the technology, because we're going all in on the tech and we're going to assume that there are no workers involved anymore"- that's a lot of control for one group of people who is, again, saying, "Don't worry, there aren't going to be workers here."

What we would contest is that there will always be workers there- in the history of Silicon Valley, so many of these companies rely on a lot of people doing small tasks repeatedly, filling in the gaps, and improvising. So a lot of our work, especially the pre-automation paper, is about how this is a lot of marketing of a future that we don't think is necessarily going to come to pass. It's very useful for them if we all buy into it, because it helps them dismantle the labor laws they need to keep being as profitable as they are.

But we think, from watching how much the workers improvise and all the different things that they actually do to make it work- yeah, there's an app. Uber has an app, it coordinates things and adds some transparency. I'm not sure that that's the giant overhaul of all of our systems and of cities that their marketing teams really sell as a narrative, right? And Amazon similarly, I think, has been very successful in building a monopoly-scale business.

It has a very successful search. I think it has a very successful distribution network. I'm not sure that this is quite the degree of life-changing difference in shopping and delivery that they want you to focus on when they sell themselves as a tech company rather than a store that's very convenient, right? And Uber also, in its effort to say, "We're a tech company rather than a driving company"- it's this separation to say, we're something different and new.

And when you kind of open up the box and ask, "Who's actually doing the work, what is their work, and what part has actually changed because of the app or not?"- what I see instead are two companies that very successfully optimize for consumer convenience, and not really something that has ultimately changed the way that transportation works or the way that shopping happens or how people think about it, right? It's just more convenient now.

Justin Hendrix:

So on some level, you make this argument that if we were to look at where the ingenuity sits in these companies, it's really with the workers. You've got these kind of kludgy systems that the companies exert on their workforce, but the innovation to actually create the experience often relies on that deviance, essentially- or the "sociological citizen," as you call them- who's smoothing out the entire thing, but not really being paid for that.

Diana Enríquez:

Yeah, absolutely. I think, I guess we could talk about the app in itself as an interesting coordination tool. I think that that is cool and has done some neat things and there are some design things we've learned from it that are important. I think it's a much smaller scale of innovation and change than what the stories they market say about them, if that makes sense. I think a lot of what the workers do every day to make sure that their jobs get done in a very small amount of time is really impressive and frequently overlooked. So, yeah.

Justin Hendrix:

So you've made some of your data available publicly, where people can take a look. Where can people engage with it?

Diana Enríquez:

We are going to be posting it on an open data website that Princeton is now hosting, and everything's under a Creative Commons license with attribution. The goal with that is just to make our data accessible- if you want to open up my research black box here and see what people talked about, you can. We are of course worried about privacy, so I'm removing stuff that is identifying or a little bit too specific about a person, such that someone would be able to find them.

But I'm going to give everyone my discussion guide and most of the interviews, if you want to see what workers actually say about what their work is like, how much they're earning, and some of the challenges that come up for them at work. I think that's an important part of what's happening in social science, and we want to contribute to it, so that's why we decided to do it. And of course, all of this was cleared with the people I interviewed- it was part of their consent form. They said, "We'd like to make the interviews available publicly after we remove information." So that's an important part of what makes this possible here, where it might otherwise not be in everybody else's research.

Justin Hendrix:

So what's next for you and your co-authors? What are you all going to work on next?

Diana Enríquez:

I'm trying to write a book about high-skill freelancers in the US and the tension of existing as a small business and an employee especially when you're considered kind of an expert in what you do. So I talk to journalists, I talk to engineers and designers. So I'm talking to all these people who are showing up in a team that already existed and they're expected to just jump in and deal with whatever social stuff is going on there.

But they're also expected to execute something at a level that they know is good while translating that expertise to a group of people who frequently don't totally understand what you do and want you to do something different. So I'm writing all about the negotiation process, how you think about a contract, how you negotiate prices and what it means to essentially float both in the labor market and in the goods and services market as a business because I think that's kind of a terrible position to put people in.

But some people thrive, and I'm interested in who's doing well and who's really struggling. I know that Janet Vertesi is writing a whole bunch of interesting things about what's happening with NASA and the technology there and how you think about it. I think one of the really interesting questions that she's spent some time answering is how do you design technology that's still relevant in 40 years because you sent it into outer space, right? What a challenging question.

And then of course Adam's always working on organizational stuff and what's happening with companies and management and the work issues that come up with that for labor. And I think Larry is continuing with automation work, especially at Amazon- he's very interested in that and follows it really closely. So again, we're all tangentially related to each other, but for this part of our project, it worked out really nicely as this cool creative exercise for all of us.

Justin Hendrix:

Diana, thank you for talking to me about this research.

Diana Enríquez:

Thank you for inviting me. I'm glad we got to talk about it.
