What Research Says About AI Chatbots and Addiction
Prithvi Iyer / Sep 24, 2025
Brain Control by Bart Fish & Power Tools of AI / Better Images of AI / CC BY 4.0
On September 16, the United States Senate Judiciary Subcommittee on Crime and Counterterrorism convened a hearing to examine the dangers of AI chatbots, especially with regard to child safety. AI chatbots and their associated harms have also been the subject of recent news stories, including a Reuters investigation into Meta’s internal moderation policies, which revealed that the company sanctioned “sensual conversations” with children, and a lawsuit against OpenAI over the suicide of a teen who sought advice from ChatGPT on how to end his life.
In a previous piece for Tech Policy Press, I looked at new research on the dangers of AI companions as they relate to mental well-being and safety, especially for minors. While those concerns persist, it is also crucial to understand how and why these chatbots are so popular. Are these chatbots addictive by design? Three papers on this subject reveal fresh insights:
'Dark addiction patterns'
Title: The Dark Addiction Patterns of Current AI Chatbot Interfaces
Date: April 2025
Authors: M. Karen Shen and Dongwook Yun
Published in: CHI Conference on Human Factors in Computing Systems
Overview
This paper investigates the “addictive potential of AI chatbots” through a scoping review of prior literature on addiction mechanisms. The researchers then identify the specific addiction pathways present in AI companions by examining the user interfaces of popular AI chatbots.
Why is this important?
- Prior research on AI’s negative effects has “mainly focused on overreliance on AI in decision-making,” according to the researchers. AI addiction works slightly differently and has been relatively underexplored.
- The authors say the paper is one of the few studies that “integrates established addiction neuroscience with AI interaction design, providing a foundation for future research on ethical AI design principles.”
Results
Based on a user interface (UI) evaluation of eight popular AI chatbots (Character.AI, ChatGPT, Claude, Gemini, Meta AI, Microsoft Copilot, Perplexity, and Replika), the researchers identified four key addiction pathways.
- Non-deterministic responses: The authors find that responses from AI chatbots are often unpredictable; a response that satisfies one user may disappoint another. This corresponds to what neuroscientists call “reward uncertainty,” which tends to increase dopamine release, similar to playing a slot machine.
- Immediate and visual presentation of responses: Of the eight platforms evaluated, five (e.g., ChatGPT, Claude) display responses “word-by-word,” while two (Gemini, Copilot) use a “fade-in” effect. These dynamic displays act as “reward-predicting cues,” much like “the reinforcing visual graphics…in slot machines,” potentially driving users to seek more rewarding chatbot interactions (a minimal sketch of the word-by-word effect appears after this list).
- Notifications: AI companions such as Character.AI have introduced features that allow the chatbot to initiate conversations, with users notified via email. According to the authors, users may perceive this as the “AI chatbot wanting to talk and caring about them, which can cause dopamine release when users receive these notifications.” Similar trends have been observed with notifications from social media applications, they note, which research has found to be a major contributor to addiction and smartphone dependence.
- Empathy and agreeable responses: The tendency of AI chatbots to reinforce users’ existing beliefs is well documented. The researchers similarly find that ChatGPT will often agree with the user, regardless of the accuracy of their claim, while AI companions like Replika use language that makes users feel heard and validated, increasing the likelihood of AI dependence.
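To make the “word-by-word” display concrete, here is a minimal sketch of how such streaming output can be produced. This is an illustration, not code from the paper; the `stream_response` function, its sample text, and the delay value are all assumptions for demonstration purposes.

```python
import sys
import time

# Minimal sketch of a word-by-word streaming display (illustrative only).
# Real chatbots stream model tokens over a network; here a fixed string
# and a fixed delay stand in for that pipeline.
def stream_response(text: str, delay: float = 0.05) -> None:
    """Print a response one word at a time, mimicking chatbot UIs."""
    for word in text.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()   # render each word immediately
        time.sleep(delay)    # the pause between words builds anticipation
    sys.stdout.write("\n")

stream_response("Each word arrives as its own small, anticipated reward.")
```

The pacing is the point: each incremental flush acts as a small “reward-predicting cue” of the kind the authors compare to slot-machine graphics.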
Based on these four addiction patterns, the authors provide a few concrete design recommendations:
- For chatbots that can initiate conversations, users must be given an option to disable notifications that is easy to understand and use (a toy sketch of such a setting follows this list).
- AI companions should integrate AI literacy into their UI with the goal of ensuring that users understand these chatbots are not human and cannot replace the value of real-world interactions.
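One way such an easy opt-out might look in practice is sketched below. This is purely illustrative: the class name, fields, and defaults are assumptions, not taken from the paper or from any product.

```python
from dataclasses import dataclass

# Illustrative sketch of an easy-to-find notification opt-out
# (names and defaults are assumptions, not any product's actual settings).
@dataclass
class NotificationSettings:
    chatbot_initiated_messages: bool = True
    email_notifications: bool = True

    def disable_all(self) -> None:
        """A single, obvious control, per the paper's recommendation."""
        self.chatbot_initiated_messages = False
        self.email_notifications = False
```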
Takeaway
By analyzing the UIs of eight popular AI chatbots, this research paper shows how specific design choices may shape a user’s neurological responses and thus increase their susceptibility to AI dependence, highlighting the need for “ethical design practices and effective interventions to support users in striking a healthier balance between the benefits and risks that come with AI chatbot interactions.”
AI chatbot 'dependence'
Title: Investigating AI Chatbot Dependence: Associations with Internet and Smartphone Dependence, Mental Health Outcomes, and the Moderating Role of Usage Purposes
Date: August 2025
Authors: Xing Zhang, Hansen Li, Mingyue Yin, Mingyang Zhang, Zhaoqian Li & Zongwei Chen
Published in: International Journal of Human–Computer Interaction
Overview
This paper explores the association between “AI chatbot dependence, internet, and smartphone dependence, and mental health outcomes (depression, anxiety, and well-being)” in a survey sample of more than 1,000 adults.
Why is this important?
This is the first study, to the authors' knowledge, to examine the relationship between AI chatbot dependence and smartphone dependence. While AI chatbot usage is a relatively recent phenomenon, smartphone use and, in some cases, smartphone addiction are well documented. This paper thus aims to provide a more holistic view of AI chatbot dependence and how it is shaped by usage purposes and by susceptibility to other forms of digital dependence, i.e., smartphone use.
Results
- AI chatbot dependence has a “moderate positive correlation” with internet and smartphone dependence, according to the findings. As the authors expected, this correlation is weaker than the one between internet and smartphone dependence, since smartphones are a primary way users access the internet.
- Most participants reported using AI chatbots for “non-entertainment purposes.” The correlation between AI chatbot usage and depression was weak but positive, while internet/smartphone dependence had a strong positive association with participants’ depression and anxiety levels.
- While AI chatbot dependence “was not significantly associated with mental well-being” overall, direct effects of chatbot dependence on mental well-being emerged only among participants “whose primary usage purpose was information retrieval.” In other words, those who used chatbots mainly for information retrieval reported “more favorable psychological outcomes” (a toy moderation analysis illustrating this kind of subgroup effect follows this list).
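To illustrate what a moderating “usage purpose” means statistically, here is a toy moderation analysis. All data are simulated and all variable names are assumptions; this sketches the general technique (an interaction term in a regression), not the authors’ actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy moderation analysis on simulated data (not the study's data).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "dependence": rng.normal(size=n),              # chatbot-dependence score
    "info_retrieval": rng.integers(0, 2, size=n),  # 1 = primary purpose is information retrieval
})
# Simulate well-being so the dependence effect exists only for info-retrieval users.
df["well_being"] = 0.3 * df["dependence"] * df["info_retrieval"] + rng.normal(size=n)

# A significant coefficient on the dependence:info_retrieval interaction
# means the dependence/well-being relationship differs by usage purpose.
model = smf.ols("well_being ~ dependence * info_retrieval", data=df).fit()
print(model.summary().tables[1])
```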
Takeaway
This paper demonstrates that participants considered dependent on AI chatbots reported higher levels of depression and anxiety, but that usage purpose shaped the extent to which this relationship held.
'Psychosocial effects'
Title: How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study
Date: March 2025
Authors: Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, Eunhae Lee, Samantha W.T. Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad and Sandhini Agarwal
Published in: arXiv preprint
Overview
This preprint study examines the impact of AI chatbots' interaction modes (voice/text) and conversation types (open-ended, non-personal, and personal) on psychological outcomes, including loneliness, AI dependence, and problematic AI usage. The dataset included 981 participants and over 300,000 conversations with OpenAI’s GPT-4.
Why is this important?
Previous research into AI chatbots has documented their negative psychological impacts, the authors say, but this paper builds on prior work in a few crucial ways:
- Rather than only examining text-based chatbots, the authors examine how “voice modalities of a chatbot differentially impact psychosocial outcomes.”
- In most cases, they find, AI chatbots serve either as general assistants meant to enhance productivity or as companion bots meant to provide emotional support. These use cases have different impacts on a user’s mental health. To account for these differences, the authors studied three conversation types using OpenAI’s GPT-4: open-ended conversations (the user discusses a topic of their choice), personal conversations (the user is instructed to discuss a personal topic, similar to how AI companions are used), and non-personal conversations (similar to interacting with a general-assistant chatbot). The findings thus show which types of conversations and modalities shape user well-being, rather than examining only one kind of AI use case (a toy sketch of this kind of factorial assignment follows).
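For context, a randomized design like the one described crosses the two modalities with the three conversation types. The sketch below simulates that kind of balanced factorial assignment; the participant IDs, seed, and balancing scheme are assumptions for illustration, not the authors’ actual procedure.

```python
import random

# Illustrative 2x3 factorial assignment: modality x conversation type.
MODALITIES = ["text", "voice"]
CONVERSATION_TYPES = ["open-ended", "personal", "non-personal"]

def assign_conditions(participant_ids, seed=42):
    """Assign each participant to one modality/conversation-type cell, roughly balanced."""
    rng = random.Random(seed)
    cells = [(m, c) for m in MODALITIES for c in CONVERSATION_TYPES]
    # Repeat the six cells to cover every participant, then shuffle.
    slots = (cells * (len(participant_ids) // len(cells) + 1))[: len(participant_ids)]
    rng.shuffle(slots)
    return dict(zip(participant_ids, slots))

assignments = assign_conditions(range(981))  # 981 participants, as in the study
```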
Results
- Participants reported lower levels of loneliness at the end of the four-week study period, but also “socialized less with real people.” However, because the study did not include a comparison group of people who did not use AI chatbots, the reported improvement in loneliness could be due to external factors the study did not consider.
- Participants who spent more daily time with AI chatbots reported higher emotional dependence on them. Those who spent more time with voice-based chatbots demonstrated “significantly lower socialization with real people and higher problematic usage of AI compared to those using the text modality.” But when the researchers controlled for time spent with AI chatbots, those using voice-based bots were “significantly less lonely, less emotionally dependent on the AI chatbot, and demonstrated less problematic use of the AI chatbot” (a regression sketch illustrating this kind of adjustment follows this list).
- Participants engaging in “personal conversations” with AI chatbots were significantly lonelier, but these effects diminished when participants spent more time conversing with AI chatbots.
- The study also had some notable demographic differences:
- Women were less likely to reduce socialization with real people compared to male participants.
- Older participants reported higher levels of AI dependence at the end of the study.
- Participants with prior exposure to AI companions were more likely to develop emotional dependence, likely due to habitual usage patterns.
- Comparing text and voice modalities, the research finds that overall, “the text modality is generally more emotionally engaging than its voice-based counterparts, with marked differences not only in user behavior but also in the conversational strategies employed by the chatbots.”
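The “controlling for time spent” result is a classic covariate-adjustment pattern, sketched below on simulated data. Everything here (the data-generating numbers, variable names, and effect sizes) is an assumption built to mimic the reported pattern, not the study’s data or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration of covariate adjustment (not the study's data):
# voice users spend more time with the chatbot, and time drives loneliness,
# so voice looks harmful until daily minutes are held constant.
rng = np.random.default_rng(1)
n = 981
voice = rng.integers(0, 2, size=n)                   # 1 = voice modality
minutes = rng.gamma(2.0, 10.0, size=n) + 15 * voice  # voice users spend more time
loneliness = 0.05 * minutes - 0.4 * voice + rng.normal(size=n)
df = pd.DataFrame({"voice": voice, "minutes": minutes, "loneliness": loneliness})

naive = smf.ols("loneliness ~ voice", data=df).fit()
adjusted = smf.ols("loneliness ~ voice + minutes", data=df).fit()
print("voice effect, unadjusted:", round(naive.params["voice"], 2))    # positive: looks harmful
print("voice effect, adjusted:  ", round(adjusted.params["voice"], 2)) # negative once time is equal
```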
Takeaway
Through a four-week randomized controlled trial that looked at AI chatbots and their psychological impacts while controlling for conversation types and chat modalities, the researchers provide empirical evidence suggesting that “while longer daily chatbot usage is associated with heightened loneliness and reduced socialization, the modality and conversational content significantly modulate these effects.”
Want to know more? Check out these research papers.
- Exploring the Effect of Attachment on Technology Addiction to Generative AI Chatbots: A Structural Equation Modeling Analysis
- Artificial intelligence addiction: exploring the emerging phenomenon of addiction in the AI age
- Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models