Perspective

Synthetic Images, Real Feelings: Designing AI for Connection, Not Comparison

J. Scott Babwah Brennen / May 19, 2025

Dr. Scott Babwah Brennen is the director of the NYU Center on Technology Policy.

In the Phaedrus, a dialogue likely composed around 370 BCE, Plato describes Socrates’s concern that the new technology of writing would degrade students’ abilities to remember lessons. Three hundred years ago, some critics worried the novel would corrupt the morals of susceptible youth. In the twentieth century, moral panics arose first about radio, then TV, and later video games. The first decades of this century saw concerns rise about the Internet, smartphones, and social media.

Today, between lawsuits against companies like Character.AI, new reports about the effects of chatbots on children and teens, and a handful of proposed state bills that address AI companions, we are seeing increasing worry about the harms that these technologies pose to children.

That most new media technology engenders concern about impacts on children does not mean those concerns are always unjustified. Media technologies can pose real risks to children. Yet these concerns often derive less from a deep empirical evidence base than from a deep anxiety about a changing world, whether that world is 400 BCE or 2025 CE.

As everyone from parents to policymakers tries to figure out how best to navigate generative AI, it is essential that we understand exactly where the risks and benefits lie so that we can target interventions to maximize the benefits and minimize the risks.

In a new report released by NYU’s Center on Technology Policy, I analyze the likely risks and benefits that generative AI on social media poses to teen mental health. While generative AI is frequently used on stand-alone platforms like ChatGPT or Claude, it is also being integrated by social media platforms in the form of chatbots and new image and text editing tools.

I analyze the impact of these tools on teen mental health in two ways. First, I review existing empirical studies that look directly at teen generative AI use and mental health. Second, I ask how the specific uses of generative AI on social media may affect the mechanisms that research has identified linking social media use and teens’ mental health.

Based on these analyses, I conclude that the literature suggests generative AI on social media presents both real benefits and real risks to teens’ mental health.

On one hand, generative AI may help children’s well-being by offering social support directly and by helping them connect with other humans who can provide it. It can also help users create and communicate rich and complex identities and self-conceptions. For example, in their “Algorithmic Crystal” metaphor for how users engage with personalized algorithms, Lee et al. conclude that “the multifaceted and dynamic nature of the crystal may facilitate explorations of how experiences with algorithms may have self-transformative effects.”

On the other hand, it may worsen teens’ mental health by exacerbating upward social comparison, in which users compare themselves unfavorably to others. Chatbots may also provide vulnerable users with poor emotional support or expose them to problematic content. And because these tools increase time spent on platforms, they could further displace sleep and other health behaviors and increase the likelihood that teens engage in both risky posting and risky offline behavior.

More importantly, understanding the specific ways that generative AI may help and harm kids allows us to target interventions accordingly.

To maximize the benefits of generative AI on social media, platforms should use these tools to bridge or connect human users in addition to offering AI chatbots. Rather than taking a one-size-fits-all approach, platforms should personalize AI product recommendations for different users. For example, they could promote lifelike bots to users who have expressed an interest in connecting with bots or who already have strong on-network social ties, while promoting human connection for those who do not.

Social media platforms should also ensure that integrated generative AI tools are trained on diverse data sets, so that users can craft complex and distinctive identities. This may mean delaying deployment until more advanced models can serve a wide range of user needs.

At the same time, to minimize the risks of generative AI, platforms should rigorously test chatbots, ensuring that they can identify and respond appropriately when users express intent to harm themselves or others. When platforms know or detect that an image of a person has been substantially modified in ways likely to worsen upward social comparison, they should disable viewership metrics, append a tested label or disclaimer, and algorithmically deprioritize or derank that image in users’ feeds. To facilitate this, social media apps should continue to integrate image editing tools directly into their platforms. While this may disadvantage third-party editing tools, integrated tools will allow platforms to better identify and act on manipulated images.

Platforms should also continue to iterate on and improve self- and parental controls, providing teens, guardians, or both with features such as limits on total platform use or use after certain hours, as well as calmer, less engaging material later in the day.

Common Sense Media recently rated chatbots as an “unacceptable risk” for teens, suggesting that regulators should ban chatbots for users under 18. While I am sympathetic to their concerns, a ban seems unlikely to happen, and this sort of proposal denies two realities. First, kids are already using generative AI tools every day, and generative AI will continue to be part of our lives moving forward. Regulation should accommodate and improve this reality rather than deny it. Second, these technologies have the potential to bring teens real benefits along with new risks.

At the same time, while platforms are experimenting with new safety measures, there is more they could do to ensure that new generative AI tools and features are a net positive for teen mental health. Ultimately, platforms and policymakers should pursue thoughtful interventions that are grounded in empirical evidence and targeted to maximize the benefits and minimize the harms of these new technologies.
