Tell the White House to Limit AI-Driven Worker Surveillance

Melodi Dincer / Jun 27, 2023

Melodi Dincer is a Legal Research Fellow at the Knowing Machines Research Project.

The pandemic and its aftermath have intensified existing rifts over worker autonomy and control in the U.S. As young adults enter a workforce shaped by decades of wage stagnation, anemic unionization, and regular waves of mass layoffs, they are proving immune to the promises of "workism": the belief that "work is not only necessary to economic production, but also the centerpiece of one's identity and life's purpose." Attention-grabbing buzzwords like "the Great Resignation," "quiet quitting," and the "anti-work movement" attempt to capture a broader turmoil in how we relate to work, set boundaries, and seek meaningful lives both in and beyond our jobs.

Yet workism persists: around 40% of workers see their jobs as central to their overall identities, regardless of gender, race, ethnicity, or age. And while certain groups of workers (educated young men, high earners, and recovering workaholics) are spending less time working than they did pre-pandemic, employers continue to project their "productivity paranoia" onto workers of all stripes. The result has been a sharp increase in surveillance technologies permeating the workplace. Searches for employee monitoring software rose 75% in March 2020 compared with the 2019 monthly average, and around 80% of employers now use monitoring software to track employee performance and online activity.

Today, technology is a critical factor in both what we do for work and how we do it. But while employers have rapidly adopted new methods of AI-driven worker surveillance, the phenomenon continues much older labor practices established in the late nineteenth and early twentieth centuries. In her book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, AI expert Kate Crawford notes that "[w]e are witnessing new refrains on an old theme." She describes how the atomized, tedious work of early factories required managers to keep workers efficient and disciplined, necessitating new systems of observation and control that drew on earlier ones, including the central role of overseers on slave plantations in the Americas. In Crawford's view, this historic oversight role has today been "primarily deputized to surveillance technologies." But these technologies enable a degree of granular, invasive worker surveillance that no human manager of the past could have dreamed of.

A Brief Glance at Automated Worker Surveillance

There are several types of automated worker surveillance. Much of it exists to assure employers that employees and contractors are actually working. This includes activity monitoring: surveilling how workers spend their time through tools that track idle time, record keystrokes, and even take screenshots of a worker's computer at random. But several types of surveillance go beyond monitoring productivity. So-called "bossware" programs that monitor and collect data from workers' emails, telephones, and online activities can be used to gauge productivity, yet the same data can also reveal personal behaviors and characteristics with no connection to work.
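The technical bar for this kind of monitoring is strikingly low. As a rough illustration, here is a minimal Python sketch of an activity monitor; the libraries (pynput, Pillow), the idle threshold, and the screenshot cadence are all assumptions chosen for demonstration, not details of any actual product:

```python
# Illustrative sketch only: the idle cutoff, screenshot odds, and libraries
# are assumptions for demonstration, not taken from any real bossware tool.
import random
import time

from pynput import keyboard  # keystroke events
from PIL import ImageGrab    # full-screen capture

IDLE_THRESHOLD_SECONDS = 300  # hypothetical "idle" cutoff: 5 minutes
last_input = time.time()

def on_press(key):
    # A real product might log the keys themselves; even bare timestamps
    # reveal work rhythms, break habits, and off-hours activity.
    global last_input
    last_input = time.time()

listener = keyboard.Listener(on_press=on_press)
listener.start()

while True:
    # Flag the worker as "idle" if no keystroke arrives within the cutoff.
    if time.time() - last_input > IDLE_THRESHOLD_SECONDS:
        print("worker flagged idle")

    # Capture the screen at random intervals, mirroring tools that take
    # screenshots "at random" as described above.
    if random.random() < 0.1:
        ImageGrab.grab().save(f"capture_{int(time.time())}.png")

    time.sleep(60)
```

A few dozen lines suffice to watch a worker continuously, which is part of why such tools have proliferated so quickly.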

There are also tools to track workers' locations, measure use of different applications on phones and laptops, and even tools that use facial recognition to ensure workers remain in front of their computer screens during business hours. Behavioral surveillance tools not only measure productivity and compliance with company policies; some also attempt to predict when workers might be likely to quit. And tools like emotion recognition analysis are used to gauge a job candidate's "fit" with the prospective employer, based on the pseudoscientific claim that we all express the same handful of emotions, across age, culture, and language, through facial expressions alone.

These technologies feed the field of "people analytics," which is built on the vast digital data generated, mostly passively, as workers perform their daily activities. Its central premise is that as much data as possible must be collected, so that these data can provide accurate answers to persistent managerial questions: who to hire or promote, who is likely to leave, who has been collaborating well despite working remotely, and whether workers feel fulfilled. The algorithms that interpret these data increasingly inform employment decisions, whether workers are aware of it or not.
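To make that premise concrete, here is a hedged sketch of the kind of "flight risk" model people analytics vendors describe; every feature, label, and number below is a synthetic stand-in invented for illustration, not any vendor's actual inputs:

```python
# Hypothetical sketch: features and labels are invented stand-ins for the
# kinds of passively collected signals people analytics products describe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_workers = 1000

# Each row is one worker: [avg daily idle minutes, after-hours emails/week,
# meetings declined/month, badge swipes/day] -- all surveillance byproducts,
# none of them chosen or volunteered by the worker.
X = rng.normal(size=(n_workers, 4))

# Label: 1 if the worker quit within a year (synthetic for this sketch).
y = (X @ np.array([0.8, -0.5, 1.2, -0.3]) + rng.normal(size=n_workers) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model now scores every worker's "flight risk": an inference drawn
# entirely from data generated as a byproduct of simply doing the job.
print("holdout accuracy:", model.score(X_test, y_test))
print("flight-risk score for worker 0:", model.predict_proba(X_test[:1])[0, 1])
```

The point of the sketch is the pipeline, not the math: once the data exists, turning it into consequential predictions about individual workers is routine.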

People analytics begets an intimately quantified modern worker. Surveillance technologies allow employers to be omnipresent in their workers' lives without physically being anywhere near them, enabling uninterrupted monitoring of a person's communications, movements, and activities even outside of work. As Ifeoma Ajunwa observes in her new book, The Quantified Worker: Law and Technology in the Modern World, employers have converted their workers into "captive audiences for data extraction," using these technologies to indiscriminately capture and transfer worker data that is often personal and sensitive. Workers have no uniform privacy protections for this data, and they largely lack any bargaining power over how employers exploit it.

What Should the White House Do About It?

Back in 1976, Congress established the Office of Science and Technology Policy (OSTP) to advise the President on new technologies. Like almost everyone else these days, OSTP has turned its attention to so-called artificial intelligence (AI) technologies. Just last month, the White House announced that the Biden-Harris Administration is developing a National AI Strategy, and OSTP is running the show.

Part of that strategy concerns addressing growing tensions between new AI applications and the U.S. labor market. In the coming months, OSTP will investigate the many ways in which workers are surveilled, measured, and controlled through automated surveillance products, all of which increasingly rely on AI to power their predictions. It makes sense for OSTP to focus its attention on the plight of workers at a time when work life in the U.S. is distinctively fraught.

The first thing OSTP needs to recognize is that these AI-based systems depend on massive amounts of worker data to produce meaningful insights for employers, and that this data is largely unprotected by current privacy laws. Most workers understand that some of what they do on the job is worth measuring so they can be hired into the right roles, rewarded for solid performance, and compensated fairly. But the ubiquitous datafication of every moment on the clock creates vast pools of data that employers are currently free to collect and use without limit.

This is important because worker data fuels automated surveillance tech. It serves as the training data for machine-learning models, and the AI industry is enamored with building products from mind-bogglingly massive datasets. This creates a vicious cycle: workers use technologies that produce data, that data is compiled into vast datasets, those datasets are used to train machine-learning models, those models surface patterns in the data, and the resulting algorithms power the automated surveillance tools employers use to decide who to hire, advance, demote, and lay off. Workers become "trapped in a matrix of computer-controlled reality from which there is no escape."

There is growing concern about the misuse of data to train machine-learning models, and the worker data powering automated surveillance technologies should be no exception. Because these technologies require vast amounts of worker data for training, employers are currently incentivized to collect it with abandon. OSTP must be mindful of the enclosure of worker data that enables the development of these systems in the first place. Given the lack of clear privacy protections, OSTP can create guidance for employers on when and what types of worker data they may collect, store, use, and sell (if any). This guidance should be directly informed by workers' own expectations of privacy over their communications, locations, behaviors, and other personal data, especially where employers have no meaningful reason to collect it.

There are many other things OSTP can do beyond privacy guidance. OSTP can coordinate with other government agencies to develop data privacy requirements for vendor agreements covering automated surveillance products, including strict limits on how much and what data vendors can use, and a requirement that data collected in one context cannot be applied later in other contexts. For companies that compile worker data into databases that could be used to train AI-based systems, OSTP can incentivize greater clarity around database access and licensing by requiring employers to appoint stewards who decide and document how datasets may be used, derived from, and distributed outside the company. And as employment law expert Pauline Kim told the U.S. Equal Employment Opportunity Commission earlier this year, OSTP should flip the surveillance script: it can encourage agencies to aim people analytics at employers and their human resources processes instead of at workers, helping diagnose harmful management practices at the root.
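To picture what such a context limit might look like in practice, here is a toy Python sketch of purpose-limited storage, in which data collected for one purpose cannot be read back for another; the purposes, names, and API are hypothetical:

```python
# Toy sketch of purpose limitation: every record is tagged with the purpose
# it was collected for, and queries for any other purpose come back empty.
from dataclasses import dataclass, field

@dataclass
class PurposeLimitedStore:
    records: list = field(default_factory=list)

    def collect(self, worker_id: str, value: object, purpose: str):
        # Tag the record with its original collection purpose.
        self.records.append({"worker": worker_id, "value": value, "purpose": purpose})

    def query(self, purpose: str):
        # Access is granted only for the original purpose; reuse in a new
        # context (e.g., "attrition_scoring") is refused outright.
        return [r for r in self.records if r["purpose"] == purpose]

store = PurposeLimitedStore()
store.collect("w-123", "badge swipe 09:02", purpose="building_security")
print(store.query("building_security"))  # permitted: original context
print(store.query("attrition_scoring"))  # empty: data cannot cross contexts
```

Real enforcement would of course require contractual and technical teeth well beyond a tagged database, but the principle (data stays in the context it was collected for) is this simple to state.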

At a minimum, OSTP can work in solidarity with organized and organizing workers to protect them from employer interference and retaliation. In line with the National Labor Relations Board’s recent ruling against Amazon’s anti-union surveillance efforts, OSTP can adopt clear policies prohibiting the collection of worker data related to organizing and union participation. By protecting workers from the weaponization of their own data to thwart organizing efforts, OSTP will empower workers to collectively bargain with employers about the specific limits they seek over their employers’ ability to access their personal data.

OSTP must act now to limit this insatiable thirst for worker data to fuel people analytics solutions. OSTP should support baseline privacy protections that help mediate the inherent power differential between employers and workers over control of their data. Instead of data exploitation, we must fight for a future of data liberation. Let’s start here and now, where we spend so much of our daily lives—at work.

- - -

OSTP will accept public comments on this subject until Thursday, June 29, 2023, at 5 p.m. ET. Respondents can also email OSTP directly at workersurveillance@ostp.eop.gov.

Comments may be posted publicly, but respondents can submit them anonymously. To preserve anonymity, remove any identifying information from comments. To minimize the risk of employer retaliation, respondents should consider submitting comments from a personal (non-work) email address, device, and internet connection.

Adapted from a comment submitted by the Knowing Machines Research Project to the White House OSTP. Funding was provided by the Alfred P. Sloan Foundation as part of Knowing Machines, which traces the histories, practices, and politics of how machine learning systems are trained to interpret the world.

Authors

Melodi Dincer
Melodi Dincer (she/her/ella) is a technology privacy lawyer with expertise in biometric surveillance, AI policy, and data justice lawyering. Her work focuses on how the law can entrench power disparities in the development, adoption, and legitimation of new technologies. She is a Legal Research Fellow at the Knowing Machines Research Project.