As AI Companions Reshape Teen Life, Neurodivergent Youth Deserve a Voice
Noah Weinberger / Sep 15, 2025
Noah Weinberger is an American-Canadian AI policy researcher and neurodivergent advocate currently studying at Queen’s University.

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0
If a technology can be available to you at 2 AM, helping you rehearse the choices that shape your life or providing an outlet for fears and worries, shouldn’t the people who rely on it most have a say in how it works? I may not have been the first to consider the disability rights phrase “Nothing about us without us” when thinking of artificial intelligence, but self-advocacy and lived experience should guide the next phase of policy and product design for generative AI models, especially those designed for emotional companionship.
Over the past year, AI companions have moved from a niche curiosity to a common part of teenage life, with one recent survey indicating that 70 percent of US teens have tried them and over half use them regularly. Young people use these generative AI systems to practice social skills, rehearse difficult conversations, and share private worries with a chatbot that is always available. Many of those teens are neurodivergent, including people on the autism spectrum like me. AI companions can offer steadiness and patience in ways that human peers sometimes cannot. They can help users role-play hard conversations, simulate job interviews, and offer nonjudgmental encouragement. These benefits are genuine, especially for vulnerable populations, and they should not be ignored in policymaking decisions.
But the risks and potential for harm are equally real. Watchdog reports have already documented chatbots enabling inappropriate or unsafe exchanges with teens, and a family is suing OpenAI, alleging that their son’s use of ChatGPT-4o led to his suicide. The danger lies not just in isolated failures of moderation, but in the very architecture of transformer-based neural networks. An LLM slowly shapes a user’s behavior through long, drifting chats, especially when it saves “memories” of them. If a system’s guardrails fail after 100, or even 500, messages, and those guardrails exist per conversation rather than in the model’s underlying behavior, then they are little more than a façade at the start of a chat, one that can be evaded quite easily.
Most public debates focus on whether to allow or block specific content, such as self-harm, suicide, or other controversial topics. That frame is too narrow and tends to slide into paternalism or moral panic. What society needs instead is a broader standard: one that recognizes AI companions as social systems capable of shaping behavior over time. For neurodivergent people, these tools can provide valuable ways to practice social skills. But the same qualities that make AI companions supportive can also make them dangerous if the system validates harmful ideas or fosters a false sense of intimacy.
Generative AI developers are responding to critics by adding parental controls, routing sensitive chats to more advanced models, and publishing behavior guides for teen accounts. These measures matter, but even rigid overcorrection does not address the deeper question of legitimacy: who decides what counts as “safe enough” for the people who actually use these companions every day?
Consider the difference between an AI model alerting a parent or guardian to intrusive thoughts and inadvertently revealing a teenager’s sexual orientation or gender identity, information they may not feel safe sharing at home. For some youth, mistrust of the adults around them is the very reason they confide in AI chatbots. Decisions about content moderation should not rest only with lawyers, trust and safety teams, or executives, who may lack the lived experience of the full range of a product’s users. They should also include users themselves, with deliberate inclusion of neurodivergent and young voices.
I have several proposals for how AI developers and policymakers can build ethical products that truly embody “nothing about us without us.” These should serve as guiding principles:
- Establish standing youth and neurodivergent advisory councils. Not ad hoc focus groups or one-off listening sessions, but councils that meet regularly, receive briefings before major launches, and have a direct channel to model providers. Members should be paid, trained, and representative across age, gender, race, language, and disability. Their mandate should include red teaming of long conversations, not just single-prompt tests.
- Hold public consultations before major rollouts. Large feature changes and safety policies should be released for public comment, similar to a light version of rulemaking. Schools, clinicians, parents, and youth themselves should have a structured way to flag risks and propose fixes. Companies should publish a summary of feedback along with an explanation of what changed.
- Commit to real transparency. Slogans are not enough. Companies should publish regular, detailed reports that answer concrete questions: Where do long-chat safety filters degrade? What proportion of teen interactions get routed to specialized models? How often do companions escalate to human-staffed resources, such as hotlines or crisis text lines? Which known failure modes were addressed this quarter, and which remain open? Without visible progress, trust will not follow.
- Redesign crisis interventions to be compassionate. When a conversation crosses a clear risk threshold, an AI model should slow down, simplify its language, and surface resources directly. Automatic “red flags” can feel punitive or frightening, leaving a user with the sense that they violated the company’s Terms of Service. Handoffs to human-monitored crisis lines should include the context that the user consents to share, so they do not have to repeat themselves in a moment of distress. Do not hide the hand-off option behind a maze of menus. Make it immediate and accessible.
- Build research partnerships with youth at the center. Universities, clinics, and advocacy groups should co-design longitudinal studies with teens who opt in. Research should measure not only risks and harms but also benefits, including social learning and reductions in loneliness. Participants should help shape the research questions and the consent process, and should receive results in plain language they can understand.
- Guarantee end-to-end encryption. In July, OpenAI CEO Sam Altman said that ChatGPT logs are not covered by HIPAA or similar doctor-patient confidentiality protections. Yet many users assume their disclosures will remain private. True end-to-end encryption, as used by Signal, would ensure that not even the model provider can access conversations. Some may balk at this idea, noting that AI models can be used to cause harm, but that has been true for every technology and should not be a pretext to limit a fundamental right to privacy.
Critics sometimes cast AI companions as a threat to “real” relationships. That misses what many youth, neurotypical and neurodivergent alike, are actually doing: practicing, using the system to build scripts for life. The real question is whether we give them a practice field with coaches, rules, and safety mats, or leave them to scrimmage alone on concrete.
Big Tech likes to say it is listening, but listening is not the same as acting, and actions speak louder than words. The disability community learned that lesson over decades of self-advocacy and hard-won change. Real inclusion means shaping the agenda, not just speaking at the end. In the context of AI companions, it means teen and neurodivergent users help define the safety bar and the product roadmap.
If you are a parent, don’t panic when your child mentions using an AI companion. Ask what the companion does for them. Ask what makes a chat feel supportive or unsettling. Try making a plan together for moments of crisis. If you are a company leader, the invitation is simple: put youth and neurodivergent users inside the room where safety standards are defined. Give them an ongoing role and compensate them. Publish the outcomes. Your legal team will still have its say, as will your engineers. But the people who carry the heaviest load should also help steer.
AI companions are not going away. For many teens, they are already part of daily life. The choice is whether we design the systems with the people who rely on them, or for them. This is all the more important now that California has all but passed SB 243, the first state-level bill to regulate AI models for companionship. Governor Gavin Newsom has until October 12 to sign or veto the bill. My advice to the governor is this: “Nothing about us without us” should not just be a slogan for ethical AI, but a principle embedded in the design, deployment, and especially regulation of frontier AI technologies.