
Social Media Warnings Alone Can't Solve the Youth Mental Health Crisis

J. Nathan Matias, Janet Haven / Jun 24, 2024

Dr. J. Nathan Matias is the founder of the Citizens and Technology Lab in the Department of Communication at Cornell University. Janet Haven is executive director of Data & Society.

Photo: US Surgeon General Vivek Murthy at the Concordia annual summit at the Sheraton Times Square, New York, September 19, 2022. (Shutterstock)

Last week, US Surgeon General Vivek Murthy called for putting warning labels on social media platforms, an effort to combat a mental health crisis that affects millions of children in America. This is a contested position: while some young people have suffered terribly as a result of these platforms, the scientific consensus is far from clear on whether social media has a discernible negative impact on kids' mental health overall. And since two-thirds of young people turn to social media and the web specifically for well-being resources, some researchers argue that digital platforms could actually become part of the solution. This is an area where we urgently need clarity, and yet it remains elusive.

Why is this the case? There are many reasons we could point to, including that research with kids is extremely difficult to do both rigorously and safely. But one reason looms above the others. Tech companies — whether or not they are the main cause of this country’s mental health woes — are actively obstructing the search for solutions.

Researchers have long argued that the safety and effectiveness of high-stakes digital products should be determined by industry-independent research. By disallowing such research on any terms but their own, tech companies (from established social media platforms to the creators of the latest AI chatbots) ensure that there are huge gaps in understanding the impacts of their products. Over the years we've seen attempts to create shared spaces for research into platform impacts, like Social Science One, and the EU's Digital Services Act now requires large platforms to enable such research. But these attempts have not yet significantly moved the needle, while companies including X and Meta have threatened researchers with lawsuits, disrupted studies with frivolous legal claims, and restricted access to data.

Some advocates have called for corporate transparency, pointing to arguments (based on leaks by whistleblower Frances Haugen) that Meta knew about its products' impacts on mental health. But transparency alone won't make young people safer. Research shows that companies deploy "strategic ignorance" when it comes to understanding the impacts of social media on youth — avoiding data collection about young people and not including them as "average users" in design exercises. Companies also avoid transparency on issues where they fear legal liability.

To be sure, tech companies have funded a handful of studies looking into the harms of their platforms. Instagram, for instance, has given $50,000 to each of twelve research teams over the last five years. Much of that research is high quality. For an all-hands-on-deck issue like youth mental health, however, these limited efforts can serve to delay the public search for solutions. As we have long seen, when tech firms are in the driver’s seat of research into the harms of their products, they put limits on who conducts it, delay studies for years, manipulate what researchers see, and then try to spin results to contradict the science they supported.

And it's not only about platform data. That data tells us how people behave within the closed system of the platform – an important starting point – but it doesn't tell us about the broader societal and health impacts of these systems. It's very unlikely that platform companies will undertake broader evaluations on their own initiative, leaving government as the bulwark against harm. Yet through long inaction, the US Congress has avoided passing even the basics of technology governance, such as comprehensive federal data privacy and data minimization requirements, protections for independent third-party research, and impact assessments that involve affected communities in tracking and preventing harmful societal effects of these technologies.

It's no secret that tech companies cannot and will not govern themselves to society's benefit. In a situation like this, the government needs to govern and ensure safety for all, particularly kids. It's essential that it do so from a robust evidence base, one that informs meaningful solutions and accountability for harms.

Surgeon General Murthy is right that progress on mental health depends on a collective effort from everyone in young people's lives. A social media warning label might even nudge some people to take mental health more seriously. But that can only happen if the government acts to ensure the safety of its citizens by creating the conditions necessary to enable independent technology research. Otherwise, parents and kids are left to weigh incomplete evidence as they try to make the best choices — alone.
