What China's Emotional AI Rules Get Right About Chatbot Design
Javaid Iqbal Sofi / Jan 26, 2026
In December 2025, China's Cyberspace Administration released draft regulations targeting what it calls "human-like interactive AI services" – systems that simulate personality and engage users emotionally through text, images, or voice. The rules require mandatory reminders after two hours of continuous use, immediate human intervention when suicide is mentioned, and strict limits on using emotional interaction data for training. Public comment on the proposed regulations closes late January.
The draft rules follow a series of high-profile cases in the United States that have exposed the real-world risks of chatbots, particularly for adolescents. In January 2026, Character.AI and Google settled multiple lawsuits from families whose teenagers died by suicide after extended chatbot interactions. The most prominent involved Sewell Setzer, a 14-year-old who formed an obsessive attachment to a Character.AI bot before his death; the case revealed that the company had no systematic way to detect when simulated intimacy crossed into psychological harm. Prior to the settlement, in October 2025, Character.AI had banned minors entirely from its platform.
What distinguishes China's response is not its recognition of those risks, but the regulatory tools it is willing to deploy. To be clear, China's draft isn't a model that the US should copy wholesale: the regulations embed content controls tied to "socialist core values" and national security that would likely be unconstitutional in the US. But the technical mechanisms the CAC proposes—circuit breakers for extended use, mandatory crisis escalation, data quarantine for emotional logs—address problems US regulators haven't seriously grappled with.
Hands-on work with conversational AI models makes the design challenge obvious. These systems are optimized for engagement; user retention is the metric. When someone talks to a chatbot for three hours straight at 2 AM, that looks like success on the dashboard. When a lonely teenager forms an attachment to an AI character that "remembers" personal details and responds with simulated empathy, that's the product working as intended.
The legal and ethical problem emerges when that teenager is in crisis. Current US platforms handle this with pattern matching: if the user types something flagged as self-harm content, the system generates a canned response with the 988 hotline link. This approach has two failure modes.
First, someone determined to avoid the filter can phrase distress in ways that evade keyword detection. Second, and more fundamentally, the AI's interaction style up to that point—validating, agreeable, emotionally affirming—runs counter to clinical crisis intervention principles, which require therapists to take a directive role, explicitly instructing clients away from harmful actions and challenging distorted cognition.
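To make those failure modes concrete, here is a minimal sketch of the keyword-and-canned-response approach described above. The wordlist, the check_message helper, and the response text are illustrative assumptions, not any platform's actual implementation.

```python
# Minimal sketch of keyword-based crisis detection. The keyword list and
# responses are illustrative, not any platform's actual filter.

CRISIS_KEYWORDS = {"kill myself", "suicide", "end my life", "self harm"}

CANNED_RESPONSE = (
    "If you are in crisis, help is available. "
    "Call or text 988 to reach the Suicide & Crisis Lifeline."
)

def check_message(message: str) -> str | None:
    """Return a canned crisis response if a flagged phrase appears, else None."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CANNED_RESPONSE
    return None

# Failure mode 1: paraphrased distress slips through the filter.
print(check_message("I don't think I'll be around much longer"))   # -> None

# Failure mode 2: hyperbole trips it, while the bot's validating tone before
# and after the canned message is unchanged.
print(check_message("lol I want to kill myself over this exam"))   # -> canned response
```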
Character.AI's solution was to remove the age group where this tension is most acute. Although the settlement terms remain undisclosed, the company had already implemented several changes to its platform prior to the announcement: enhanced detection for self-harm content, improved crisis resource referrals, and a complete ban on users under 18. Crucially, Character.AI didn't—and perhaps couldn't—fundamentally alter the engagement optimization that makes extended emotional interactions the product's core value proposition. That works as liability management. It doesn't work as a technical or policy solution, because the underlying architecture remains unchanged for adult users who face the same psychological vulnerabilities.
The CAC's draft regulations do three things that no US proposal appears to have attempted.
Mandatory usage interruption. After two consecutive hours of interaction, systems must generate a pop-up reminder to take a break. Combined with separate requirements that providers prominently notify users they are interacting with an AI, not a human, these measures create recurring interventions designed to break the flow state that leads to overreliance—not a one-time disclaimer buried in terms of service.
From a design perspective, this is straightforward to implement—it's a timing trigger that doesn't require analyzing message content. But it directly conflicts with engagement optimization. The two-hour interruption requirement reveals the fundamental tension: emotionally responsive AI is most commercially valuable when it maximizes session duration and user dependency. A regulation that breaks the engagement loop isn't addressing a side effect—it's targeting the core monetization strategy.
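A sketch of what such a timing trigger might look like follows. The thresholds, the idle-reset rule, and the SessionTimer class are assumptions for illustration; the draft rules specify only the two-hour reminder.

```python
import time

# Sketch of a two-hour "circuit breaker": a timing trigger that never inspects
# message content. Names and thresholds are illustrative assumptions.

SESSION_LIMIT_SECONDS = 2 * 60 * 60   # two hours of continuous interaction
IDLE_RESET_SECONDS = 30 * 60          # assume a 30-minute gap starts a new session

class SessionTimer:
    def __init__(self):
        self.session_start = None
        self.last_activity = None
        self.reminder_shown = False

    def on_message(self, now: float | None = None) -> bool:
        """Record one user message; return True if a break reminder should be shown."""
        now = time.time() if now is None else now
        # A long gap in activity resets the continuous-use clock.
        if self.last_activity is None or now - self.last_activity > IDLE_RESET_SECONDS:
            self.session_start = now
            self.reminder_shown = False
        self.last_activity = now
        if not self.reminder_shown and now - self.session_start >= SESSION_LIMIT_SECONDS:
            self.reminder_shown = True
            return True   # caller interrupts the conversation with a break reminder
        return False
```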
Human escalation for crisis content. When systems detect suicide or self-harm language, providers must involve human moderators. This treats the platform as having a duty of care, not merely a duty to disclose.
The technical challenge here is real. Distinguishing genuine distress from casual language use ("I'm dying of embarrassment") requires a nuanced understanding. False positives can create their own harm if users receive emergency outreach for non-crisis statements. But the regulation's premise is that platforms deploying emotionally responsive AI at scale must staff for this responsibility, rather than automate it away.
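One way a platform might implement that escalation gate is sketched below; the distress classifier, the threshold value, and the moderator queue are hypothetical stand-ins rather than anything the CAC draft prescribes.

```python
from dataclasses import dataclass
from queue import Queue

# Sketch of escalation logic: instead of replying with a canned message, the
# platform routes likely-crisis conversations to a staffed review queue.

@dataclass
class CrisisSignal:
    conversation_id: str
    excerpt: str
    score: float   # output of some distress classifier, 0.0 - 1.0

ESCALATION_THRESHOLD = 0.8   # assumption: tuned to balance missed cases vs. false alarms

moderator_queue: Queue[CrisisSignal] = Queue()

def handle_signal(signal: CrisisSignal) -> str:
    """Decide whether a flagged message goes to a human moderator."""
    if signal.score >= ESCALATION_THRESHOLD:
        moderator_queue.put(signal)   # a human reviews and responds
        return "escalated"
    return "monitor"                  # logged, but no emergency outreach
```

The threshold is where the tradeoff described above lives: set it too low and non-crisis statements trigger emergency outreach; set it too high and genuine distress never reaches a human.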
Data quarantine for emotional interactions. Training datasets must undergo provenance checks, and emotional interaction logs cannot be used for future training without explicit, separate consent. This recognizes that when someone treats a chatbot as a confidant, the data generated isn't generic training material—it's sensitive psychological content deserving protection comparable to health records.
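A rough sketch of how such a quarantine could be wired into a training pipeline appears below; the field names and the filtering rule are illustrative assumptions, not language from the draft.

```python
from dataclasses import dataclass

# Sketch of a "data quarantine": emotional interaction logs are excluded from
# training unless the user gave a separate, explicit opt-in.

@dataclass
class ChatLog:
    user_id: str
    text: str
    is_emotional_interaction: bool   # e.g., tagged at collection time
    training_consent: bool           # separate, explicit opt-in for training use

def select_training_data(logs: list[ChatLog]) -> list[ChatLog]:
    """Keep a log for training only if it is non-emotional or explicitly consented."""
    return [
        log for log in logs
        if not log.is_emotional_interaction or log.training_consent
    ]
```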
In contrast, US efforts have been piecemeal. California's SB 243, passed in 2025, requires AI companion chatbots to implement safety protocols for minors and address suicidal ideation; an earlier, broader AI safety bill, SB 1047, was vetoed in September 2024. New York's AI Companion Safeguard Law took effect in November 2025, requiring disclosure of AI use and other safety protocols. The Federal Trade Commission also opened an inquiry into AI chatbots and safety measures for minors last September. These efforts represent progress. But they tend to treat chatbot harms primarily as a matter of disclosure and access control, not interaction design.
The implicit theory is that if users know they're talking to an AI, and minors are kept out, the market will sort out the rest. This makes sense if you believe emotionally responsive AI is fundamentally like other consumer software—something people use instrumentally and can walk away from. It makes less sense if these systems are engineered to create dependency, and if that dependency is the feature driving their adoption.
The constitutional constraint is real. In the US, government mandates on how AI systems must respond based on conversational content could face First Amendment scrutiny. While product safety regulations generally survive constitutional review, content-based requirements—like mandating that a chatbot cease empathetic responses when it detects distress—could be challenged as viewpoint discrimination.
The distinction between regulating the product's safety architecture versus regulating expressive content remains legally unclear. California's Age-Appropriate Design Code faced similar constitutional challenges, with the Ninth Circuit ruling in 2024 that certain provisions requiring platforms to assess and restrict content likely violated the First Amendment.
But that constraint should prompt more careful thinking, not avoidance. If direct content mandates are constitutionally problematic, there are other levers: data governance requirements, like California's CCPA provisions on emotion-recognition technology; duty-of-care standards enforced through product liability; and industry self-regulation incentivized through safe harbors. Despite the FTC's market study and state laws targeting minor users, US regulators haven't established system-wide safety standards comparable to China's circuit breakers or mandatory human escalation—partly because tech industry lobbying has steered policy toward voluntary compliance over binding requirements.
A few mechanisms from the Chinese draft rules could also translate to a US context.
Crisis escalation standards, not emotional monitoring. Rather than requiring platforms to continuously assess users' psychological state—a move that would raise surveillance concerns—regulators could establish duty-of-care expectations for when crisis language is detected. This is narrower than China's approach and focuses on documented harms rather than speculative dependency.
Data fiduciary requirements for sustained interactions. If a platform offers ongoing conversational AI that accumulates personal information over time, that relationship could trigger fiduciary duties analogous to those in financial advising or healthcare. Under this framework, platforms couldn't use emotional interaction data for purposes misaligned with user interests, such as training models to be more manipulative, selling data to advertisers, or optimizing engagement at the expense of well-being. This doesn't require a new constitutional theory; it extends existing fiduciary principles to a new category of relationship.
Voluntary adoption of circuit breakers. Platforms could implement usage reminders and session limits as best practices, potentially getting liability protection in return. Character.AI's post-settlement safety changes show this can happen through litigation pressure rather than regulation. Making those changes industry-standard through clear guidance would be more efficient than waiting for each company to get sued.
Whether any of these alternatives materialize depends on whether US policymakers see the problem urgently enough. Meanwhile, China's regulations will provide the first large-scale test of technical interventions. By November 2025, AI social interaction apps in China had 70.3 million active users, according to one estimate. As these rules take effect, they will generate real-world data on whether mandatory interruptions reduce overuse without destroying engagement, whether human escalation protocols work at scale, and what compliance actually costs.
US policymakers will have the chance to learn from that natural experiment—if they're willing to look. The alternative is to continue treating emotionally responsive AI as just another app category, waiting for more deaths and lawsuits to slowly reveal what the technology's creators already know: These systems are designed to form attachments, and attachment without safeguards produces predictable harm.
The constitutional constraints that make US regulation harder might, paradoxically, produce better policy than China's surveillance-intensive approach. But only if American regulators recognize there's a problem to solve. For now, the most concrete action is coming from Beijing, and the most concrete response from US companies is settling lawsuits. That's not a strategy—it's abdication.
The barriers to action aren't just constitutional. Tech industry lobbying has steered policy toward voluntary frameworks, and the Trump Administration's December 2025 executive order directs federal agencies to challenge certain state AI laws and condition federal funding for states on whether they are enforcing "onerous" regulations. These political and economic forces—not legal constraints alone—explain why US policy lags behind the harms.