“AI” Hurts Consumers and Workers -- and Isn’t Intelligent
Alex Hanna, Emily Bender / Aug 4, 2023
Alex Hanna is Director of Research at the Distributed AI Research (DAIR) Institute, and Emily M. Bender is a professor of linguistics at the University of Washington.
The gold rush around so-called “generative artificial intelligence” (AI) tools like ChatGPT and Stable Diffusion has been characterized by breathless predictions that, because these technologies seem to “understand” as well as humans do, they will spell the death of the traditional search engine or the end of drudgery for paralegals.
In reality, these systems do not understand anything. Rather, they turn technology meant for classification inside out: instead of indicating whether an image contains a face or accurately transcribing speech, generative AI tools use these models to generate media. They may produce text that appears to human eyes like the result of thinking, reasoning, or understanding, but it is in fact anything but.
The latest generation of these systems mimics textual form and artistic styles well enough to have beguiled all sorts of investors, founders, and CEOs. Alongside Microsoft’s $10 billion investment in OpenAI and Google’s “code red” over generative AI, many lesser-known startups have attracted large amounts of funding: Adept, which promises to create a “universal collaborator” for office workers; Character.AI, which anthropomorphizes “character” agents based on large language models; EquiLibre Technologies, which states that it will revolutionize algorithmic trading; and Inflection, which seeks to create “personal AI.” These privately funded startups have been chasing a gold mine of investment from venture capitalists and angel investors, frequently without any clear path to robust monetization. Billions have been invested in generative AI companies in recent years, with more than $12 billion in the first quarter of 2023 alone, according to Pitchbook.
We view this moment of hype around generative AI as dangerous. There is a pack mentality in the rush to invest in these tools, one that overlooks the fact that they threaten workers and harm consumers by producing lower-quality products and more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired its helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors.
This incident should not have come as a surprise: the harms of generative AI, including racism, sexism and misinformation, are well-documented. The systems are designed to produce plausible word sequences, and in ChatGPT’s case, further tuned to produce output deemed helpful by humans. The result is a system that generally produces whatever it is asked for, irrespective of truth. For example, when a lawyer requested precedent case law, the tool made up cases out of whole cloth. Furthermore, the training data is so shot through with bigotry that the tools treat hateful and abusive speech as plausible responses.
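To make “plausible word sequences, irrespective of truth” concrete, here is a deliberately tiny sketch in Python. It is our own illustration, not any vendor’s code: the corpus, the follows table, and the generate function are invented for this example. The toy learns which words tend to follow which, then samples fluent-looking continuations. ChatGPT is vastly larger and further tuned with human feedback, but like this toy it is optimized to produce likely-sounding text, with no notion of which statements are true.

    import random
    from collections import defaultdict

    # A toy next-word model trained on a handful of legal-sounding
    # sentences (invented for illustration).
    corpus = ("the court ruled in favor of the plaintiff . "
              "the court ruled against the defendant . "
              "the plaintiff cited a precedent case . "
              "the defendant cited a precedent case .").split()

    # Record which words follow which in the training text.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def generate(start, length=12):
        """Sample a 'plausible' continuation, one word at a time."""
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))
    # e.g. "the court ruled in favor of the defendant cited a precedent"
    # Fluent-looking fragments, but the model has no idea which cases
    # exist, which is exactly how fabricated citations arise.

The design point is that nothing in this pipeline checks facts: plausibility of the word sequence is the only objective, at every scale.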
One of the biggest threats from these technologies is how they will be used to discipline workers directly. Sam Altman, OpenAI’s CEO, spins tales of abundance in which these tools will do every job, from lawyer to customer service representative, freeing humans for more fulfilling tasks and for creativity.
But these claims are unfounded hype. Instead, so-called labor-saving devices will enable regimes of austerity and profit maximization: that is, a marked decline in social services and weakened regulation in emerging markets. Rather than meet societal obligations to invest in education and in physical and mental health care, AI’s advocates risk creating a two-tier system in which artificial facsimiles are deemed good enough for those without means, while the well-to-do hire actual humans. The flimsiness of this fantasy becomes clearer when one actually tries to imagine AI models excelling at the labor at the heart of our societies: care work, agriculture, and infrastructure repair and management.
Even within the information economy, generative AI systems cannot complete the tasks set out for them. Instead, their output will have to be verified by a human, who may also bear liability for its correctness. This will lead to laid-off workers being rehired at lower wages to babysit these systems, or to the further “gigification” of more sectors. This is already happening at Google, where contract workers report doing the same work as full-time employees, but with limited labor protections.
Such a business model is clearly bad for workers. But it is also bad for consumers and for the companies licensing these tools: OpenAI and its ilk are coy about what goes into their models. This leaves companies unable to determine whether they are exposing themselves to biases or misinformation, or whether a model’s training data makes it appropriate to their use case. Moreover, as four American federal agencies recently warned, generative AI can be used to “turbocharge fraud” or “automate discrimination.” A disempowered workforce will be ill-placed to prevent such outcomes, opening consumers up to harm and companies up to regulatory censure.
The good news is that workers are pushing back. After Hollywood studios rejected the Writers Guild of America’s demand to restrict the use of AI in the creation of material, the union has been on strike since May. The Directors Guild’s new three-year agreement includes provisions affirming that members’ duties cannot be performed by automation. Writers know producers are salivating at the prospect of laying them off en masse and rehiring them cheaply to edit AI-generated material.
In another act of resistance, artists and computer scientists created Glaze, a tool that protects artists from having their work stolen to train image-generation tools like Stable Diffusion, which compete in the same marketplace where the artists sell their art. Glaze applies barely perceptible changes to original artworks that prevent AI models from using them effectively as training data. Individual artists, as well as Getty Images, are also suing Stability AI, the company behind Stable Diffusion, in cases that test the applicability of copyright law.
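For readers curious about the mechanics, the general idea behind such cloaking can be sketched as a gradient-based perturbation kept below a perceptibility threshold. The Python sketch below is a generic illustration under our own assumptions (a stand-in feature extractor, a made-up budget eps); it is emphatically not Glaze’s actual algorithm, which is more sophisticated and targets style mimicry specifically.

    import torch
    import torch.nn.functional as F

    # Stand-in for the image encoder a generator might use; NOT the
    # real model Glaze targets. We only need something differentiable.
    extractor = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
    )

    image = torch.rand(1, 3, 64, 64)   # stand-in for an artwork
    target = torch.randn(1, 8)         # features of some decoy "style"
    eps = 2.0 / 255.0                  # budget: changes stay imperceptible

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(50):
        # Nudge the image's learned features toward the decoy style...
        loss = F.mse_loss(extractor(image + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= 0.01 * delta.grad.sign()   # signed gradient step
            delta.clamp_(-eps, eps)             # ...while staying tiny
        delta.grad.zero_()

    cloaked = (image + delta).clamp(0, 1)  # looks unchanged to a human

The key design idea is the asymmetry: a perturbation far too small for a human eye to notice can still push the image’s machine-readable features far from where a training pipeline expects them.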
This resistance sets the stage for regulation. To build effective accountability mechanisms, we need to examine who gets funding, which academic projects are legitimized, and who is given attention in debates around regulation. Furthermore, we need to focus on the threat of further casualization of labor, as jobs creating content are replaced by jobs that shepherd the output of synthetic-media machines.
In their joint statement, US federal agencies warned companies not to assume any AI loophole, and asserted their jurisdiction to regulate the actions of companies even when those actions are automated. We call on regulators across governments to follow their lead: not to be fooled into thinking that, just because something is labeled “AI,” existing regulations don’t apply, or that the technology is moving too fast for regulators to keep up. The next steps are to use the harms already identified (to workers, to the information ecosystem, and to data privacy) to plug gaps in existing laws and regulations. Meanwhile, we call on businesses not to succumb to this artificial “intelligence” hype.