Analysis

Meta’s Worker Surveillance Tests EU Rules on AI and Labor

Liz Carolan / May 7, 2026

Liz Carolan is a fellow at Tech Policy Press.

The Meta logo on a smartphone in front of an image of CEO Mark Zuckerberg.

Last month, Meta told staff that the company would be introducing an employee surveillance system it calls the “Model Capability Initiative (MCI).” All US employees would have tracking software installed on their work computers to log every mouse movement, click and keystroke, and take periodic screenshots, according to reporting by Reuters.

Staff based in Europe will not face this kind of surveillance, which is not permitted under EU privacy laws. But that does not mean that workers outside the US, or indeed outside Meta, will be unaffected.

The purpose of this data harvesting is to build AI agents capable of replacing Meta’s own staff, and to package those agents as a product Meta hopes to sell to other employers. Internal memos to staff are explicit that the data being gathered is to fill a critical gap in training data for agentic models. Having abandoned a costly attempt to pivot to VR, Meta is now betting that it will become a dominant force in AI enterprise solutions, which it once called “AI for Work” but has since rebranded as its “Agent Transformation Accelerator.” Agentic AI is software that can perform tasks autonomously, but as Meta’s memo to staff states, “they [agents] struggle to replicate how humans interact with computers.”

As part of this push, Meta last year paid over $14 billion for a 49% stake in AI agent firm Scale AI, installing its co-founder Alexandr Wang as head of its “superintelligence” team. Scale’s major problem has been getting the data it needs to train its models. As Casey Newton of Platformer reports, Wang said in 2024, “there’s no pool of really valuable agent data that’s just sitting around anywhere. And so we have to figure out how to produce really high-quality data.”

With a global headcount of around 80,000 highly trained staff working daily on computers it owns, and with a history of invasive data mining of its user base, it is perhaps unsurprising that Meta would look inward to fill this gap. As the internal memo seen by Reuters said, “This is where all Meta employees can help our models get better simply by doing their daily work.”

Anticipating backlash from employees, Meta has talked up safeguards for sensitive data that employees might access via their work computers. It also insists that employee surveillance will not be used for the purposes of performance management, something that has become common practice for lower-paid workers in the technology industry (like warehouse workers, content moderators and gig workers).

This mirrors two of the concerns dominating regulatory responses to AI in the workplace: worker privacy, and discomfort with the automation of hiring, firing and compensation decisions. When asked by Tech Policy Press whether Europe’s existing legislative framework was sufficient to address the collection of employee behavioral data to train AI systems, as Meta plans to do, a spokesperson for the European Commission pointed to the GDPR and the AI Act. The GDPR, they pointed out, limits when personal data can be used to train AI systems, and the AI Act bans practices such as inferring employees’ emotions. It also places obligations on employers regarding, for example, human oversight and the right to be informed of AI use in human resource decisions.

Neither of these legal frameworks, however, was designed to address policy questions that go beyond individual rights and protections. They do not consider the scenario where Meta’s workers are now contributing valuable data to models that will make many of their jobs redundant. Nor do they account for the potential for massive labor market disruption should Meta succeed in bringing a product to market based on these models.

Meta has framed the production of this data as part of workers' existing jobs. When employees asked whether they could opt out of this new system, they were told: “No, there is no opt-out on your work-provided laptop.” This framing discounts the fact that the data itself has enormous value to the company, separate from the work it describes.

This dismissed value is what law professor Ifeoma Ajunwa calls “captured capital”: the involuntary or coercive collection and utilization of worker data by firms to automate workplaces, leading to worker displacement and the reinforcement of employer control.

In some ways, this echoes the experience of writers, illustrators and musicians who had their work used, without consent or compensation, to train generative AI models. Being monitored as you switch between tabs is not the same as writing a novel, and employment relationships are, arguably, quite different from copyright situations. But both produce data of great value to those working to build the next generation of agentic models. And when your employer is, as in Meta’s case, simultaneously announcing a cut of ten percent of its global headcount, there are serious questions about whether consent can really be granted.

Creators are suing to have their work either removed from training datasets or compensated for its use. There do not appear to be any such moves from Meta staff, who probably could not rely on much public sympathy were they to complain. Meta has spent twenty years harvesting the personal data of its billions of platform users without compensation. That its own well-paid employees are now subject to a version of the same extraction has likely elicited schadenfreude in some quarters.

But regardless of public levels of sympathy, the impacts are likely to stretch beyond Menlo Park. As writers and illustrators will attest, the market value of creative labor across many sectors has taken a hit from AI tools. The behavioral data being extracted from Meta's employees today could do the same to the value and bargaining power of knowledge workers worldwide. The agents created with this data could soon be marketed to a wide range of employers as tools to either replace or extract more value from their employees.

The idea of surveillance as an exploitative practice, and one with labor market-wide implications, is something that policymakers are only starting to grapple with. According to Finnish MEP Li Andersson, there are gaps and “ambiguities” in the regulations when it comes to workplace data. Asked about Meta’s plans, she told me that they “showcase how the current premise of digital legislation at the workplace, privacy, is woefully insufficient… Meta isn’t primarily interested in the behavior of individual workers, but in pooling together the labor of workers to train their algorithmic systems.”

Andersson chairs the European Parliament’s Employment and Social Affairs committee, which succeeded in getting the full Parliament to pass proposals on algorithmic management last December by a large majority (451 in favor to 45 against). These proposals seek to close some of the gaps around the use of AI in the workplace by obliging employers to, for example, restrict their use of automated systems in hiring and firing decisions. They also emphasize worker engagement in decision-making over how these systems are used.

When asked, the European Commission told Tech Policy Press that it had replied to the MEPs’ proposals in March, and that its thoughts “should eventually be published on the European Parliament’s website”. It also said that it was looking at drafting a “Quality Jobs Act” for Q4 of this year, which “may include the subject of algorithmic management in the workplace.”

At the same time, however, Commissioners are also considering a package or “omnibus” of laws that would roll back the existing digital rulebook in Europe as part of the "simplification" or deregulation agenda. While the Commission spokesperson said that the omnibus would “further clarify” data use in AI training, MEP Andersson sees it differently. In her view, the omnibus has “the potential to upend the foundations of the digital rights and protections of workers, gutting in one swift stroke the results of years of negotiations and careful assessment.”

Meta did not reply to a request for comment at the time of publication.

Authors

Liz Carolan
Liz Carolan is a writer, advisor and advocate working on technology and its impact on democracy, with a particular interest in corporate accountability and digital and industrial policy in the European Union. She founded Digital Action, a global campaign organization demanding better standards from ...
