How to Assess Platform Impact on Mental Health and Civic Norms

Nathaniel Lubin, Thomas Krendl Gilbert / Jun 22, 2023

Nathaniel Lubin is an RSM fellow at Harvard’s Berkman Klein Center and a fellow at the Digital Life Initiative at Cornell Tech; he is the founder of the Better Internet Initiative and the former director of the Office of Digital Strategy under President Barack Obama. Thomas Krendl Gilbert is a product lead at Mozilla, AI ethics lead at daios, and a postdoctoral fellow at the Digital Life Initiative at Cornell Tech.

Last month, the US Surgeon General argued that technology platforms have fomented systemic harms, especially with respect to mental health, and do not merely reflect underlying societal challenges. He joined a range of academics, like Jonathan Haidt, and other critics who have raised the specter of a social media crisis. On the other hand, observers including the Washington Post Editorial Board have declared that the “results aren’t in yet,” or that the effects of social media are a mix of positive and negative, and some policymakers – as with Utah’s new law – center their recommendations primarily on parents’ role in limiting harmful use.

These debates are difficult to resolve. For one, outside parties have very limited access to platform data. At the same time, platforms have disincentives to directly assess the effects of their product choices on anything other than growth. As a result, we have quite limited data connecting systemic outcomes to the product choices platforms make. In a just-released proposal incubated at Harvard’s Berkman Klein Center and the Digital Life Initiative at Cornell Tech, we seek to change that.

Our proposal argues that adjudicating the responsibilities of platforms with respect to systemic challenges requires differentiating between what we call “acute” and “structural” harms. Acute harms are suitable for content moderation and require textual analysis of individual pieces of content; structural harms, by contrast, accrue across user populations over time, and their assessment requires public health methods, for which the gold standard is the randomized controlled trial.

As difficult as this sounds, platforms already run randomized controlled trials – commonly known as A/B tests – as part of their management and product development procedures. The key recommendation of our proposal is to set public interest metrics – such as mental health assessments – that are measured alongside growth metrics, with adequate third-party review.
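To make that concrete, here is a minimal sketch, in Python, of what recording a public interest metric alongside a growth metric inside a standard A/B test might look like. Every name and number below – the metrics, the effect sizes, the sample – is an illustrative assumption, not a description of any platform’s actual experimentation pipeline.

```python
import random
import statistics

# Hypothetical experiment log: each user is randomly assigned to the
# control or treatment arm of a product change (a standard A/B test).
# Alongside the usual growth metric (here, daily sessions), the experiment
# also records a public interest metric (here, a short wellbeing survey
# score on a 0-10 scale). All names and effect sizes are assumptions.
random.seed(0)

def simulate_user(arm: str) -> dict:
    # Assumed scenario: the treatment slightly raises engagement while
    # slightly lowering the wellbeing score.
    engagement_lift = 0.4 if arm == "treatment" else 0.0
    wellbeing_drag = -0.3 if arm == "treatment" else 0.0
    return {
        "arm": arm,
        "daily_sessions": max(0.0, random.gauss(5.0 + engagement_lift, 2.0)),
        "wellbeing_score": min(10.0, max(0.0, random.gauss(6.5 + wellbeing_drag, 1.5))),
    }

users = [simulate_user(random.choice(["control", "treatment"]))
         for _ in range(10_000)]

def arm_mean(metric: str, arm: str) -> float:
    return statistics.fmean(u[metric] for u in users if u["arm"] == arm)

# Report both metrics side by side, per arm.
for metric in ("daily_sessions", "wellbeing_score"):
    control, treatment = arm_mean(metric, "control"), arm_mean(metric, "treatment")
    print(f"{metric}: control={control:.2f}, treatment={treatment:.2f}, "
          f"estimated effect={treatment - control:+.2f}")
```

Because assignment is randomized, the simple difference in arm means estimates the causal effect of the product change on both metrics at once; the open questions our proposal addresses are who sets the second metric and who reviews the results.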

The Platform Accountability and Transparency Act (PATA), recently reintroduced in the US Senate, offers a promising model for credentialing researchers and providing access to data, while the European Union’s Digital Services Act (DSA) and the United Kingdom’s proposed Online Safety Bill also offer interesting frameworks. While neither the DSA nor the Online Safety Bill establishes implementation procedures like those in our proposal, both include provisions related to the systemic effects of large platforms.

Consider the DSA, which implements a wide range of provisions and requirements. For our purposes, we focus on Section 5, which sets out specific obligations for “very large online platforms,” defined as those with more than 45 million users in the EU (roughly 10% of the population). For those products, the law requires annual risk assessments, including of algorithmic promotion, covering four areas: (1) dissemination of illegal content; and “actual or foreseeable effects” on (2) fundamental rights; (3) civic discourse, elections, and public security; and (4) public health and wellbeing, including gender-based violence and effects on minors. The DSA requires that these assessments address a range of systems, including recommender systems, content moderation systems, advertising systems, and data collection practices. It allows for follow-up requests by the Commission and requires an independent audit; it also specifies requirements for “access to data that are necessary to monitor and assess compliance with this Regulation.”

Each of these provisions is reasonable and well-intentioned. We expect, however, that absent additional specifications, most systemic harms – including the types highlighted by the Surgeon General – will be very difficult to identify using these provisions. (We are most optimistic about the “illegal content” provision.) So far as we know, platforms do not regularly or systematically assess the mental health effects of their products, and without direct requirements that they do so, even access to massive troves of private data is unlikely to conclusively connect specific platforms to outcomes for populations of users. It would be akin to giving researchers access to the outcome data from a clinical drug trial without telling them which participants received the drug and which the placebo.
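To see why, here is a toy version of that analogy, sketched in Python with assumed numbers: given the assignment labels, the effect falls out of a simple comparison of means; stripped of those labels, the same outcome data identify nothing.

```python
import random
import statistics

random.seed(1)

# Toy clinical trial with assumed numbers: the drug lowers a symptom
# score by 1.0 on average relative to placebo.
trial = [
    {"arm": arm,
     "symptom": random.gauss(5.0 - (1.0 if arm == "drug" else 0.0), 2.0)}
    for arm in random.choices(["drug", "placebo"], k=5_000)
]

# With assignment labels, the treatment effect is recoverable from a
# simple comparison of means.
drug_mean = statistics.fmean(r["symptom"] for r in trial if r["arm"] == "drug")
placebo_mean = statistics.fmean(r["symptom"] for r in trial if r["arm"] == "placebo")
print(f"effect with labels: {drug_mean - placebo_mean:+.2f}")  # approx. -1.0

# Without labels – the situation data access provisions alone risk
# creating – only the pooled distribution is visible, and the same pooled
# numbers are consistent with a harmful, neutral, or beneficial drug.
pooled_mean = statistics.fmean(r["symptom"] for r in trial)
print(f"pooled mean without labels: {pooled_mean:.2f}")
```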

Moreover, it is far from obvious which topics deserve direct assessment; and even for topics where there is broad consensus, like mental health effects on kids, the specific metrics used to facilitate those assessments need credentialing and review processes. Regulators will need to supplement the assessment and audit provisions with specifications that enumerate both the topics and the metrics.

Large platforms are complicated, and they produce a wide range of effects and potential challenges. But as circumstantial evidence of harm to at-risk populations mounts, we have the opportunity to establish much more meaningful assessment systems. We hope that laws like the DSA, the Online Safety Bill, and potential US legislation like PATA offer a foundation for building accountability infrastructure: not only for acute harms like the promotion of illegal content, but also real systems capable of measuring – and preventing – structural harms.

If you are interested in participating in this ongoing work, you can learn more and contact the authors at platformaccountability.com.
