Perspective

The Risks of the 'Observer Effect' from Being Watched by AI

Koustuv Saha / Nov 19, 2025

Dr. Koustuv Saha is an assistant professor of Computer Science at the University of Illinois Urbana-Champaign’s (UIUC) Siebel School of Computing and Data Science and is a Public Voices Fellow of The OpEd Project.

Imagine confiding in an AI chatbot late at night. You ask the AI about your relationship struggles, a health scare, workplace conflict, or the anxiety that’s been keeping you up. You believe it is private: just you and your personal computer or phone.

But what if you later learned that your words could become part of the chatbot’s training data, helping refine the system, and that fragments of your intimate conversation might surface in someone else’s exchanges with the chatbot? This question sits at the heart of an uncomfortable truth about AI: most people, and I say this as a computer scientist who works on AI, do not fully understand how these systems are trained or what truly happens to our data once we interact with them.

Recently, several families filed lawsuits against major AI companies, claiming that chatbots contributed to delusions and suicides. These tragic cases reignited urgent debates among industry leaders, academics, and policymakers over how conversational AI is designed, how data is used, and what responsibilities developers bear when their systems shape real human emotions and choices.

Consider the scale: ChatGPT alone now serves more than 800 million weekly users, and its maker, OpenAI, has reached an annualized revenue of around $10 billion. These systems are woven into daily life across the globe, shaping how people seek information, make decisions, and even express emotions.

Large language models (LLMs) draw on massive datasets scraped from the internet, chat logs, and countless user interactions. Even when anonymized, fragments of individuals’ language, emotions, and even sensitive information can linger in the AI models that learn from them.

This kind of trade-off is not new. Search engines, for example, collect and personalize results based on our browsing history. Some people choose private search engines to avoid tracking, while others willingly or unwillingly accept the convenience of personalized suggestions.

But there is a key difference: a personal search history does not directly appear in what someone else searches for. In contrast, a private conversation with an AI—about finances, grief, or relationships—could, in theory, influence the AI model as well as appear in someone else’s chat interactions.

Every post, like, or comment a person makes on social media has probably helped train the AI models millions interact with.

A deeper question concerns how people behave when they know, or suspect, that they are being observed. This phenomenon, known as the observer effect, or more popularly as the Hawthorne effect, named after studies of workers at the Hawthorne Works factory, has long fascinated psychologists and social scientists.

When researchers there tried to measure how lighting affected worker productivity, they found something surprising: productivity increased under every condition. The workers were not responding to the changes in lighting; they were responding to being watched. Essentially, the awareness of being observed can make people deviate from their otherwise typical behaviors.

In one of my own research studies, we found similar patterns. When participants became aware that their social media use was being monitored, their behaviors shifted. Some posted less often, expressed fewer emotions, and tried to use more “appropriate” language.

Simply knowing they were being observed made them act differently. Such effects are consistent with established theories on social desirability, self-presentation, and self-monitoring.

The findings have important implications for how AI systems are designed and deployed. One reason LLMs have become so popular is their promise of seemingly private, judgment-free interaction. Users can ask sensitive questions without a sense of stigma or fear of exposure.

In this sense, AI offers something profoundly human-like: a space to think aloud, reflect, and seek comfort without being seen. But that illusion of privacy may be fragile. Once users believe their data is being recorded, studied, or reused, that psychological safety could vanish. The result might not just be privacy concerns—it could be a quiet, subtle shift in how people think, speak, and even feel when using AI.

The danger here is not only data misuse but the erosion of trust. When every digital interaction feels like surveillance, it risks dulling the authenticity that makes such tools valuable. People may turn away not only from AI but also from honest self-expression altogether. And this could undermine the very success of these systems; many people use AI companions and tools for the benefits they provide, not to altruistically contribute to their training.

If users begin to feel their privacy is compromised, they may stop using AI in the very ways that make it useful.

Beneath this practical concern lies a deeper philosophical one: where can people safely vent? Self-disclosure, the willingness to share private thoughts and emotions, has long been linked to better mental health, stronger relationships, and emotional resilience.

Companies and individuals have spent decades building online spaces that encouraged expression, connection, and mutual support. If those same technologies now make people self-conscious, guarded, or fearful of being watched, that progress can stall.

With the goal of reconciling innovation with privacy, one promising path is to design AI systems that operate within personal confines—trained locally, stored securely, and disconnected from the cloud.

Advances in on-device AI, federated learning, and differential privacy make this increasingly feasible. Such systems can learn from our data without exposing it to others, representing a new model of human-AI interaction grounded in consent, transparency, and trust.
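As a concrete illustration, here is a minimal sketch, in Python with numpy, of the idea these techniques share: each simulated device computes an update from its own data, clips it and adds noise locally in the spirit of differential privacy, and shares only that perturbed summary with a central aggregator, as in federated learning. The function names, clipping bound, and noise level are hypothetical choices for illustration, not a description of any deployed system.

```python
# Minimal, illustrative sketch (not production code) of federated learning
# with locally added noise: raw data never leaves the "device"; only a
# clipped, noised update is shared. Parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

CLIP_NORM = 1.0   # bound each device's influence on the aggregate
NOISE_STD = 0.5   # Gaussian noise added on-device before sharing


def local_update(private_data: np.ndarray, global_model: np.ndarray) -> np.ndarray:
    """Compute an update on-device; the raw data never leaves this function."""
    # Toy objective: nudge the model toward the mean of this device's data.
    update = private_data.mean(axis=0) - global_model

    # Clip the update so no single user's data can dominate the aggregate.
    norm = np.linalg.norm(update)
    if norm > CLIP_NORM:
        update = update * (CLIP_NORM / norm)

    # Add noise before sharing, so the server only ever sees a perturbed
    # summary rather than anything cleanly tied to one person's data.
    return update + rng.normal(0.0, NOISE_STD, size=update.shape)


def aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Server-side step: average the already-noised updates."""
    return np.mean(updates, axis=0)


# Simulate a handful of devices, each holding its own private data.
global_model = np.zeros(3)
devices = [rng.normal(loc=i, scale=1.0, size=(20, 3)) for i in range(5)]

for _ in range(10):
    noisy_updates = [local_update(data, global_model) for data in devices]
    global_model = global_model + aggregate(noisy_updates)

print("Learned model (approximate mean across devices):", global_model)
```

The design choice the sketch highlights is where the privacy protection happens: on the device, before anything is transmitted, rather than as an after-the-fact promise about how centralized data will be handled.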

A policy question follows: how should privacy be regulated and safeguarded? Policies that govern data use are essential, but if they move too slowly or become too restrictive, they risk constraining the very innovation needed to make AI safer. The goal, then, is not to choose between progress and protection, but to balance both.

The goal is not to fearmonger but to rethink how individuals interact with AI and what the trade-offs are. As AI continues to observe individuals, can they still be authentically themselves?

Research shows people crave understanding, even from machines. For AI to help people reflect, create, and heal, there must be ways for them to remain unobserved, to simply be human, even when the machine is listening.
