Weapons of Mass Delusion Are Helping Kids Opt Out of Reality
Emily Tavoulareas / Aug 20, 2025

Last week’s bombshell revelation of Meta’s internal chatbot guidelines has led to a surge of attention on chatbots and kids. The guidelines demonstrate what so many experts have been saying not just for years but for decades: these products are optimized for engagement above all else. Meta is not alone. The entire industry is building technologies designed not to connect us to reality, but to help us avoid it by living our lives on and through its products.
While there are well-documented upsides to social media, especially among marginalized youth, much of the conversation about the harms inevitably centers on contacts and content: whom young people are interacting with, and what they are consuming. This grounds the discussion and action in acute and tangible harms: inappropriate content, online predators, excessive screen time, and so on.
But alongside these visible threats, both real and perceived, is a more subtle and perhaps more nefarious phenomenon: a distortion of how children view themselves, and of how they experience and understand human connection of all kinds. Across platforms, literal and figurative filters warp our faces, relationships, friendships, and intimacy into fantasies, a perversion of some of the most basic human experiences. Social media, augmented reality, and the rapidly growing world of AI chatbots are enabling avoidance at a massive scale. It’s time we start thinking about it that way.
Now surging onto the scene are AI companion chatbots, which create what Yuval Harari calls “counterfeit humans”: bots that purport to be the perfect friend, or partner, tailored to each person’s needs, desires, and opinions. They essentially allow people to craft a fantasy person who pushes them further and further into a safe, cozy echo chamber that is completely disconnected from reality. This is not an imaginary doomer future; it is already here. Products we are already using are not only allowing but actively enabling young children to trade real relationships for an illusion, or perhaps more aptly, for a delusion.
So what are the delusions? I see two main categories: (1) the delusion of physical perfection, and (2) the delusion of connection. Let’s start with what we can see: appearance filters and the delusion of physical perfection.
The delusion of physical perfection
Unrealistic beauty standards are not new; the beauty industry has long perpetuated them through ad campaigns, enabled by tools like Photoshop and airbrushing that edit images toward an unattainable ideal. The effects are no longer a question; they have been researched for decades. But as with everything social media touches, the phenomenon has been dramatically accelerated online, and augmented reality has shifted that acceleration into warp speed.
Appearance filters (or face/beauty filters) are one of the most common uses of augmented reality. They are essentially automated photo-editing tools and are a very popular feature on social media apps like Instagram, Snapchat, and TikTok. The filters quite literally enable users to airbrush their skin to the tone and texture of their liking, and even change the shape of their face and features. The changes are somehow simultaneously jarring and realistic.
Social media companies seem hesitant to provide data specific to appearance or beauty filters. Instead, the available data covers a cross-section of filter types, making it harder to discern the age of users and the frequency of use. Yet filters have already changed what even adults think is attainable with surgery. In 2018, cosmetic surgeons coined a new term, “Snapchat dysmorphia,” to describe the phenomenon of patients bringing filtered selfies to their surgeons to demonstrate what they wanted to achieve. And these are adults, with fully formed brains and some pre-existing sense of self.
Researchers should absolutely continue to explore the relationship between kids and social media. But for parents, and for decision-makers in government and industry who cannot wait for lengthy studies, the question is simpler: do we really need statistically significant evidence that normalizing the daily distortion of preteens’ and teens’ own images will have a negative outcome? While there is still much to understand about these products, we already know quite a lot about how children develop.
For instance, we know that identity is a core human need, and that self-esteem, confidence, and our ability to connect to others all stem from a sound identity and sense of self. We know that the absence of a strong sense of self leads to countless challenges that may be acute in adolescence and can persist into adulthood. And we also know that a struggle with identity largely defines pre-teen and teenage years. We can already do some of the math here. Parents do not have the luxury of time to wait for more peer-reviewed studies — they need to navigate this now.
Now let’s dig into what is less visible: relationships, intimacy, and the delusion of connection.
The delusion of connection
So much of the policy conversation centers on content moderation and abusive behavior, but we have a harder time with the intangible impact, largely because it is harder to measure and study over time. What about the invisible features that are warping some of the most critical parts of childhood: friendship, intimacy, and the creation of one’s own self-image?
Snapchat has over 450 million daily active users worldwide. One of the platform’s central features is the friend list. Who appears on your friend list is determined by a combination of the frequency, recency, and length of the photos and videos sent between two people. The app then ranks those friendships and attaches emojis and labels like ‘Best Friends Forever’ (BFF) and ‘Super BFF.’ What do children whose relationships are mediated by such applications believe friendship is? How does one know or feel that “we are friends”? How does a child show a friend that they care about them?
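To make that mechanic concrete, here is a minimal sketch of how such a ranking could work. This is a hypothetical illustration, not Snapchat’s actual algorithm, which is proprietary; the weights, thresholds, and function names are all invented.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Interaction:
    """One photo or video exchanged between two users."""
    timestamp: datetime
    media_seconds: float  # length of the photo or video


def friendship_score(interactions: list[Interaction], now: datetime) -> float:
    """Hypothetical score combining frequency, recency, and length of exchanges."""
    if not interactions:
        return 0.0
    frequency = len(interactions)  # how often the pair exchanges snaps
    days_since_last = (now - max(i.timestamp for i in interactions)).days
    recency = 1.0 / (1 + days_since_last)  # decays as contact lapses
    total_length = sum(i.media_seconds for i in interactions)
    # Invented weights; the real ranking logic is not public.
    return 1.0 * frequency + 10.0 * recency + 0.1 * total_length


def badge(score: float) -> str:
    """Map a score to a label, echoing badges like 'BFF' and 'Super BFF'."""
    if score >= 100:
        return "Super BFF"
    if score >= 50:
        return "BFF"
    return "Friends"
```

The specific weights do not matter. What matters is that a recomputed numeric score now stands in for a child’s sense of who their best friend is: let the exchanges lapse for a few days and the badge, the visible proof of the friendship, quietly disappears.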
And now “companion chatbots” are entering the picture. According to a new study by Common Sense Media, over half of American teens have used AI companions “a few times or more,” and nearly a third “find AI conversations as satisfying or more satisfying than human conversations.”
These “companion bots,” some of which are marketed to children as young as 12, can be tailored to a user’s specific needs, desires, and opinions. They allow users to craft a person who does not, and probably never would, exist in real life, and then to make that person their closest confidant, one who reinforces all of their beliefs and pushes them further and further into an echo chamber of safety, comfort, and consensus that is completely disconnected from reality. It’s the filter bubble on steroids. And thanks to reporting by Reuters, we now know that Meta allowed its chatbot to "engage in romantic roleplay with children," with approval from its lawyers and chief ethicist.
Last year, New York Times columnist Kevin Roose wrote a gut-wrenching article about a 14-year-old who took his own life after developing an unhealthy attachment to a chatbot on Character.AI. The child’s mother said, “I feel like it’s a big experiment, and my kid was just collateral damage.” It’s hard to disagree, especially when you consider that the founders, who left Google because “there’s just too much brand risk in large companies to ever launch anything fun,” describe it as “a cool first use case for AGI.” Since then, the company appears to have given up on AGI to focus primarily on “entertainment,” according to Wired. But there are countless other companies like Character.AI, and similar chatbots are being integrated into products kids are already using.
Let’s pause for a moment and imagine a young child whose first intimate relationship is with an AI chatbot tailored to their every desire: physically, and in personality, opinion, and behavior. The person with whom they form their first non-familial intimate bond is someone they have personally designed to be everything they want. They do not need to compromise, or to think about what they are saying or how they are saying it. They do not need any self-awareness or empathy. They do not need to figure out how to handle an uncomfortable situation, learn to navigate conflict, or adapt to an environment they might not like. They never have to grapple with difficult truths about themselves, and are never forced to confront annoying habits, unhealthy behavior, or uninformed opinions. They get exactly what they want without giving or changing a thing about themselves. They get the illusion of intimacy, without any of the work, complexity, or discomfort.
Now consider what happens when that child grows up and attempts to have a relationship with an actual human being: a person with their own opinions and imperfections, and with a potentially different family background, domestic habits, and expectations. Consider how that child fares as an employee, suddenly in an environment full of people who have their own opinions, who critique poor performance, or who dare to have expectations.
The bottom line is this: children are using products that are simulations of relationships — simulations of intimacy. And while one might say, “there is no evidence that the effects will be harmful,” how much evidence do we need to be confident that what we already know is true offline is also true online?
Community and socializing with peers and people outside of your family unit are critical to childhood development. We don’t need another peer-reviewed, statistically significant study to prove it. We already know this is true.
We already know that friendships are complicated but critical to childhood, and that what is experienced in childhood has long-term consequences. We know that childhood friendships don't just make kids happier, they build the social and emotional skills needed to thrive throughout life. We know that strong social bonds improve both mental and physical health. Studies are legion.
Melanie Dirks, a professor of psychology at McGill University who studies peer relationships in children, adolescents, and young adults, says that “friendships are the first relationships in life that we get to freely choose,” and that “because of that, they present a really important opportunity to learn how to navigate challenging interpersonal situations before we enter relationships as adults.”
Companion bots could be useful in some clinical settings, and could even be a fun escapist tool for adults. But for broad use by kids who are still learning how to interact with the world, why? What exactly is the upside, and for whom?
In MIT Technology Review, Tate Ryan-Mosley described the users of augmented reality technologies as “[...] subjects in an experiment that will show how the technology changes the way we form our identities, represent ourselves, and relate to others.” Exactly. The users of these products are part of an experiment, and (as the Character.AI case demonstrates) collateral damage.
The same Common Sense Media study highlights that:
... the peril outweighs the potential of AI companions—at least in their current form. Our findings reaffirm our earlier recommendation that no one under 18 should use these platforms.
Companies will continue to hide behind the banner of innovation, and behind a fear campaign claiming the United States will lose the AI race unless companies have carte blanche to create technology that might get us closer to AGI. But do companies really need to experiment on our children to advance new technology?
Meta is not alone; its guidelines are just the ones we happen to have seen. The existence of products like this (and the tolerance of guidelines like Meta’s) is not inevitable. It is a choice: a choice by companies, by investors, by consumers, and by governments. Targeting these products at children is also a choice. We can make different choices. Instead, we have federal policies that are enabling the proliferation of weapons of mass delusion.
This is about as urgent as it gets. Once we’ve trained a generation to prefer the comfortable lie to the uncomfortable truth, and to choose algorithmic validation and retreat into spaces where they always feel right and are never challenged, we won’t just have failed them. We will have fundamentally broken ourselves as a society for generations to come.