How Shifting Responsibility for AI Harms Undermines Democratic Accountability
Suvradip Maitra, Louie Lang, María Hernández Jurado / Nov 24, 2025
Sam Altman at the TechCrunch Disrupt event in 2019 in San Francisco. (Max Morse for TechCrunch)
The recent backlash from fans against Taylor Swift’s alleged use of artificial intelligence in a music video is a symptom of a troubling trend: the moralization of individual AI use, which deflects responsibility away from powerful actors like corporations and governments.
Assigning personal responsibility for structural harms is not unique to AI. Climate change activist Greta Thunberg was similarly criticized for using plastic bottles, a supposed sign of hypocrisy, despite evidence that individual lifestyle choices have limited impact on climate change.
The many environmental, social and political harms of AI systems are by now well-established. And as the push to deregulate AI gains traction, focusing on individual responsibility becomes perilous and misguided, obscuring the systemic failures and eroding the democratic accountability that this moment demands.
Responsibility shifting in AI discourse
Big Tech’s shirking of responsibility is apparent in the industry’s approach to AI harms. For instance, OpenAI’s approach to AI safety focuses on efforts to “minimize” the potential for LLMs to harm children or expose the personal information of private individuals. Yet a recent review of the privacy policies of six major AI companies reveals that they are inadequate and fail to prioritize user consent in data collection. This raises the question: if we cannot ensure these technologies are safe, should we be making them available for public use?
As OpenAI’s CEO Sam Altman stated at the US Senate hearing on AI competitiveness earlier this year, OpenAI seeks to “give adult users a lot of freedom to use AI in the way that they want to use it and to trust them to be responsible with the tool.” But under the current regime, if the tool is misused, responsibility is likely to fall on the user’s unethical behavior rather than on the system’s capabilities and inadequate guardrails.
This individualized framing is evident across various domains of AI use, including environmental harms, academic and professional settings, and AI companionship.
Environmental harms
Scaling laws popularized by OpenAI suggest that larger datasets and greater computational resources lead to better performance. They have led companies to invest in larger models, increasing AI’s energy use and demand for data centers.
One proposal to reduce AI’s carbon footprint is carbon-labeling systems that enable users to responsibly choose energy-efficient products. Similarly, Altman recently stated that being polite to AI wastes money and compute resources, leading to calls for individual users to be less polite and to academic research calculating the energy cost of saying “thank you” to AI. Indeed, statistics on AI’s environmental impact are regularly reported in individual terms, such as the energy used per query or per generated video, with the overall effect of individualizing responsibility.
Relying on users’ personal virtues is particularly problematic given that these systems are designed to discourage sustainable behavior, such as when a chatbot offers follow-up suggestions to encourage greater engagement. Our focus must instead shift towards structural solutions for reducing environmental harms. As the UN Environment Programme proposes, these can include measurement of harms, corporate transparency, algorithmic efficiency, greening data centers and stronger regulations.
Academic and professional shaming
According to Rachel McNealis, “AI shaming” occurs when “the use of AI tools is criticized as deceitful, lazy, or inherently inferior to human effort.” Shaming individual AI users has become commonplace, particularly in knowledge-based sectors such as academia. For instance, most of the media coverage following ChatGPT’s release that examined its impact on higher education focused on concerns about cheating, academic dishonesty or student misuse. Similarly, in the workplace, nearly half of workers in the United States report hiding their use of AI to avoid judgment. This mindset leads individual users to question their own integrity and competence, even as broader discourse and product design actively encourage AI adoption in the name of productivity.
A similar pattern emerges with possible AI harms, such as misinformation. Researchers have long pointed out that so-called “hallucinations” in large language models are not bugs, but features of AI. In line with this, a recent study found that AI assistants misrepresented news content 45% of the time. And yet these systems are designed to communicate information in a confident and persuasive tone, often exploiting users’ cognitive biases. OpenAI’s response to this risk consists of a small disclaimer — “ChatGPT can make mistakes. Check important info” — which subtly shifts the verification burden onto users.
Of course, in some contexts, such as law, users do bear legitimate professional and ethical responsibilities, and lawyers have rightly faced penalties for citing AI-generated false information in court. However, placing blame or shame solely on individual users obscures how the technology itself promotes a false sense of credibility — even when misinformation is a seemingly unavoidable byproduct of its design.
AI shaming is therefore misplaced: behavioral change cannot rely solely on the willpower of individuals, especially when the surrounding factors actively undermine their ability to act on their best intentions.
AI companions
This concerning trend is also evident in the fast-growing realm of AI companionship. Studies now unambiguously show that emotional AI companion apps can pose significant mental health risks, particularly for younger users. Yet companies have failed to meaningfully address these risks, instead opting to instruct users to engage responsibly. Character.ai, for instance, implemented a disclaimer reminding users that the AI is not a real person, while Replika warns its users that AI “is not equipped to give advice.” These measures, while not unreasonable in themselves, shift accountability onto users rather than confronting design choices.
The real issue is not users’ misjudgments, but the need for such disclaimers in the first place. If these apps are so misleading as to warrant warnings, why are they freely accessible to teenagers? Subtly shifting the blame onto supposedly irresponsible users risks distracting from more concrete issues, such as, say, Replika not yet having implemented an effective age verification system. The burden must remain on the companies creating tools that are so easily misused.
Altman recently deployed a similar burden-shifting strategy in his announcement loosening ChatGPT restrictions, including allowing erotica for verified adults. He frames prior restrictions as measures to be “careful with mental health issues,” while presenting them as a burden for the “many users who had no mental health problems.” Virtue-signaling aside, Altman here deflects blame onto users rather than the company that introduced the risks.
Harms of moralizing the political
Not only is it unjustifiable in many cases to place moral responsibility on users to restrain or adapt their AI use, it is also counterproductive to systemic change. The current discourse on AI harms continues a long history of conflating political and moral framings. For instance, commentators have identified how the moralization of climate change under the guise of responsible consumerism has detracted from political projects to enact meaningful change. Likewise, the framing of social media usage as an individual problem has sustained tech companies’ extractive business models.
Dr. Matej Cíbik gives the example of how framing anti-slavery campaigns around abolishing slavery as an institution, rather than around freeing individual slaves, was more effective in engendering structural change because it directed attention towards legislative reform rather than individual slave owners. Why should responsibility not be both individual and collective? Critically, as empirical studies on climate discourse establish, systemic and individualized narratives of responsibility are often mutually exclusive due to limited attention bandwidth within public policy debates. In practice, the moral framing often crowds out the political framing, resulting in insufficient focus on collective action.
To be clear, our goal is not to excuse individual misuse, nor to imply that users bear no responsibility at all. Rather, we argue that those isolated cases are no justification for Big Tech and governments to relax safety measures and place the burden on the user. We are not devaluing individual or collective refusal and restraint; in fact, the neo-Luddite movement provides a strong conceptual grounding for collective restraint.
The goal, rather, is to ensure we do not expect refusal and restraint from individuals, while still appreciating and valuing their efforts when they do choose to exercise them.
Way forward
Rather than putting the burden of responsibility on individuals, we must target structural remedies: compensation or repercussions for AI harms, reform actions such as product recall or discontinuation, more ethical design and development practices, and legislative and policy updates.
Achieving this requires more than increasing public AI literacy. It demands questioning the ways in which powerful actors, such as the government and Big Tech, frame the causes behind AI harms, because these narratives ultimately shape which remedial solutions are prioritized.