Perspective

Don't Be Fooled By Trump's Plan To 'Upskill' Workers To Prepare For AI

Alex Hanna, Tina M. Park, Sophie Song / Aug 8, 2025

President Donald Trump arrives at the Salute to America Celebration, Thursday, July 3, 2025, at the Iowa State Fairgrounds in Des Moines, Iowa. (Official White House photo by Daniel Torok)

United States President Donald Trump’s “AI Action Plan,” along with a bevy of Executive Orders released alongside it, signals a massive rollback of progressive regulations initiated by the Biden administration against potential algorithmic discrimination, false advertising, and other harms of automated systems. However, somewhat surprisingly, the Trump plan states that it wants to “empower American workers in the age of AI” and that the administration “will advance a priority set of actions to expand AI literacy and skills development, continuously evaluate AI’s impact on the labor market, and pilot new innovations to rapidly retrain and help workers thrive in an AI-driven economy.”

Even if the rest of the Trump AI Action Plan is a departure from what came before it, this language is notably congruent with much of what counts as mainstream political discourse on AI and labor in the US. The narrative around the need to “reskill” and “upskill” workers is consistent across the Trump administration, Democrats, labor unions, and tech companies alike.

For instance, last year, Senator Chuck Schumer’s (D-NY) long-awaited but ultimately unfulfilling roadmap for AI policy suggested that the US needs to plan for the “potential for displacement of workers” in light of AI. Sen. Bernie Sanders (I-VT) has suggested that the introduction of AI will not be “like the Industrial Revolution… this could be a lot more severe.” Last month, the American Federation of Teachers and its New York local chapter, the United Federation of Teachers, announced a $23 million deal with Microsoft, OpenAI, and Anthropic to create a “National Academy for AI Instruction” in order to “provide a national model for AI-integrated curriculum.” This follows the 2023 agreement between Microsoft and the AFL-CIO, which foregrounded “AI education for workers and students.”

Let us be clear: workers are being displaced by AI in the workplace, but that displacement is not due to AI technology’s actual ability to do workers’ jobs to the same quality, or even to a sufficient quality. It is a market-oriented move by firms to brag about becoming AI-first while finding cost savings by laying off employees. And the workers who do remain in their jobs are left to deal with labor intensification, that is, a combination of doing more work in less time, taking on additional tasks created by new technological processes, and absorbing the work of people who have been laid off. As one of us wrote in a recent book, “AI is not going to replace your job. But it will make your job a lot shittier.”

The narrative around “expand[ing] AI literacy and skills development” and retraining workers that seems so popular across the political spectrum goes something like this: new technologies (and most recently, AI) will become increasingly good at replacing workers at a host of tasks, making those workers redundant. Those workers then need to be “reskilled” laterally to somewhere else within the same industry, or “upskilled” to incorporate these technologies into their daily work lest they be left in the dust by their coworkers or by their firms’ competitors. Reskilling has been posed as a labor market corrective by liberal labor economists for years, although these arguments have a strange way of dovetailing with arguments from AI leaders that many white-collar jobs will encounter a “bloodbath” or will disappear wholesale.

When we talk about AI, we need to be specific. “AI” is not a singular thing, even if employers and tech companies speak of it like it is. For educators and instructors, it may refer to large language models like ChatGPT which administrators want faculty to use for developing new assignments. For nurses and clinicians, it may involve statistical models that estimate patient acuity levels from hospital room monitoring or automated transcription tools for patient note-taking. And in computer programming, large tech firms have pushed code generation tools onto engineers to write more code in less time.

The main conceit around AI adoption in the workplace is efficiency: in the best case, current workers are freed from onerous and rote tasks, and they can spend their time doing other things. In the worst case, AI adoption means an organization will need fewer workers because those job responsibilities can be completed through AI-enabled tools.

However, emerging studies show that neither of these scenarios is beneficial to humans. In a study of 25,000 Danish workers in occupations susceptible to automation, labor economists found that the productivity gains following the introduction of large language models in the workplace were quite small: time savings of only 2.8%. On the flip side, the introduction of AI tools created new work for 8.4% of these workers, including people who do not use the tools directly. In another recent randomized controlled trial on the use of coding assistants by seasoned open-source developers, research subjects predicted that these systems would reduce the time to complete coding tasks by 20%. Instead, researchers found that those tasks took 19% more time on average across developers.

That tracks with our experience and what we hear from our team. We are technologists, social scientists, and researchers who have been in the labor movement for years, and have spoken to dozens of organizers, labor researchers, and unionists. We have found that teachers have to spend more time sussing out whether students used AI to produce text or code, illustrators have to recreate or fix images generated by models like Midjourney and Stable Diffusion, and programmers have to coax code-generation tools to produce correct outputs. A recent report by the American Association of University Professors, based on a survey of 500 of its members, says that the introduction of these tools “adds to faculty and staff workloads and exacerbates long-standing inequities… Required professional development on the use of AI in teaching and research adds to faculty and staff workloads—without evidence that AI improves productivity, pedagogy, or teaching and learning processes or outcomes.”

Technology firms selling the promise of AI-driven workplaces are themselves announcing layoffs, pointing to the need for more investment in AI development. These rolling rounds of layoffs since the beginning of the year are a response to the threat of a looming recession and the need to participate in the AI hype cycle these firms themselves generate. It’s AI spending, not AI productivity gains, that is motivating these layoffs. We’re already seeing indications of the hype cycle deflating and hurting workers in the process. Klarna fired 700 employees while boldly claiming to be going AI-first, only to vow to hire more human workers in customer support roles a year and a half later. Firms everywhere are wielding AI as justification for layoffs and labor intensification, in part to free up capital for AI initiatives and to signal to investors that they are AI-forward companies. Whether or not the tools can do what they are advertised to do, this dynamic creates pressure and precarity that allow employers to squeeze more out of their workers for less. “Upskilling” affirms the narrative that these changes are inevitable and that workers who suffer economically are at fault for “not keeping up,” all while firms maintain market relevance as “AI leaders” to their financial stakeholders.

To actually ensure a future that empowers workers, we need to be clear-eyed about what’s actually happening with AI in the workplace. That includes combating tech industry-driven narratives and focusing on measures that build power for working people in these increasingly difficult and troubled times. Evidence is mounting that the promise of AI is an empty one and that these tools are a poor substitute for human ingenuity, experience, and expertise. The cost savings from replacing human workers with AI tools are short-term at best, since doing so creates more follow-up tasks to identify and fix errors caused by AI.

The use of digital tools to improve the workplace should be determined by the people whose work life will be shaped by those tools. Workers are able to identify the aspects of their jobs where support from digital tools would improve their efficacy, as well as predict the ways those tools will make their jobs more challenging and dangerous. For instance, National Nurses United, the largest union of registered nurses in the country, clearly identifies how the introduction of automation in its workplaces will lead to more worker fatigue, scapegoating of nurses as a potential medical liability, and worsening patient outcomes. The union contends that skilled nursing judgments “cannot be made without the assessment skills and critical thinking of registered nurses” and that the art of patient assessment needs to be “grounded in education and judgment honed through years of professional experience.”

Ultimately, the future of work does not require an unquestioning acceptance and embrace of AI technology. “AI,” particularly as it is currently being developed and deployed, is not an inevitability, but a technology that can be regulated and overseen. Employers must look past the shocking headlines and marketing ploys to understand what benefits might actually be achieved through the adoption of AI tools. More importantly, workers themselves need the protections and guarantees to determine what supportive tools are brought into the workplace, so that their workplaces remain healthy and sustainable.

Authors

Alex Hanna
Alex Hanna is Director of Research at the Distributed AI Research (DAIR) Institute. She focuses on the labor needed to build the data underlying artificial intelligence systems, and how these data exacerbate existing racial, gender, and class inequality.
Tina M. Park
Tina M. Park, Ph.D. is a sociologist and independent researcher examining the impact of AI systems and products on socially marginalized communities, including workers and communities of color. Most recently, Tina was the Head of Inclusive Research & Design at the Partnership on AI.
Sophie Song
Sophie Song (they/she) is a researcher at DAIR focused on the intersection of AI and workers. They have a background in computer science, organizing, and policy centered on the intersection of tech and social justice.
