Analysis

What US Lawsuits Reveal About Platform Design That DSA Reports Don’t

Peter Chapman, Matt Steinberg / Feb 25, 2026

Meta CEO Mark Zuckerberg arrives for a landmark trial over whether social media platforms deliberately addict and harm children, Wednesday, Feb. 18, 2026, in Los Angeles. (AP Photo/Ryan Sun)

In just the last month, litigators and regulators in the United States and the European Union have taken significant steps against social media platforms. TikTok and Snap have each settled claims related to product design risks in California. Meta and YouTube are now in trial, with Instagram head Adam Mosseri and Meta CEO Mark Zuckerberg each testifying in February. Meanwhile, the European Commission has found that TikTok is in breach of EU law related to “addictive design.” The EC has also just opened a new investigation into whether the global e-commerce platform Shein deploys addictive design.

These two influential governance regimes in the EU and the US — the EU’s Digital Services Act (DSA) and ongoing platform litigation in the US — focus on the role of platform design and the protection of minors. Both the DSA and US litigation are developing new bodies of evidence about how large social media platforms conceptualize, assess, and respond to potential risks to minors. But they’re doing so in very different ways.

Documents disclosed in US lawsuits show that these platforms meticulously track data about their products and the risks associated with their design. In the EU, platforms’ DSA risk assessments describe those risks and document safety policies and features. Yet despite these troves of data, internal metrics and evidence rarely appear in platform risk assessments, company announcements, or public debates about the risks and safety of their products.

A recent report by the Knight-Georgetown Institute, “Measuring Risk: What EU Risk Assessments and US Litigation Reveal About Meta and TikTok,” examines what can be learned by reading these two bodies of emerging evidence side-by-side. The report identifies critical gaps in how Meta and TikTok communicate publicly about risks to minors and the actual steps they take to mitigate risks based on their own internal data and evidence.

Measuring Risk compares the second round of DSA risk assessment disclosures with US litigation that has spurred the release of a trove of internal company documents. Reading these two emerging bodies of evidence together makes it possible to compare insights and outcomes directly. What emerges is a stark contrast between what platforms publicly disclose and what they know privately.

A tale of two governance models

EU risk assessments and platform litigation in the US represent distinct approaches to governing and mitigating risks posed by social media platforms. The DSA establishes a proactive framework that requires very large online platforms to identify, assess, and mitigate defined categories of “systemic risk,” including risks to minors. By contrast, a wave of consumer protection and product liability cases in the US against large digital platforms — including cases brought by more than 40 state attorneys general — seeks to establish liability for concrete harms.

While different in scope and approach, both processes focus on similar risks and concerns related to problematic social media use impacting minors. Specific risks include sleep deprivation, depression, self-harm, sextortion, eating disorders, and other physical and mental health impacts.

What the public sees and what companies track

TikTok’s and Meta’s 2025 DSA risk assessments describe a range of risks and a multitude of mitigations addressing risks to minors: screentime management, parental controls, privacy-oriented design defaults, and restrictions on notifications. However, the risk assessments provide very little information about the level of risks and the effectiveness of chosen mitigations.

Internal company documents released in US litigation, on the other hand, tell a different story. They offer detailed, if unstructured, information about how the companies internally assess risk and the effectiveness of their mitigations. TikTok and Meta internally categorize millions of US minors as engaging in “objectively harmful” or “problematic” use. At this stage of the litigation, however, released documents largely connect to arguments being made by the plaintiffs, and the defendants highlight that they have not had a full opportunity to present alternative evidence. Nonetheless, internal company documentation is illuminating in the context of risk mitigation and clarifies how platforms can — or could — be expected to track and report on identified risks.

Below are some findings from KGI’s recent report on what Meta and TikTok know about harms to minors, and how that compares to what they publicly disclose.

TikTok

TikTok's DSA risk assessment describes screentime management tools as a key safety measure on the platform. Yet litigation in the US has disclosed internal TikTok data finding that 10 million minor users spend more than 6 hours a day on the platform, which the company describes as “objectively harmful usage.” TikTok’s studies “found that 19 percent of users 13-15 and 25% of users 16-17 were active on the platform from 12 a.m. to 5 a.m.”

At the same time, internal documents released in US litigation show that company leadership would only approve the development and deployment of screentime management tools if they did not reduce users’ time spent on the app by more than 5%. For the heaviest teenage users — the top 10% of US users, who spend over 4 hours daily on TikTok — a 5% reduction means an average of 12 minutes less per day on the platform. For the top 1% of TikTok’s minor users, who TikTok reports spend more than 6 hours a day on the platform, a 5% reduction works out to 18 minutes, from an estimated 6 hours to 5 hours and 42 minutes per day on average. Internal communications also show that TikTok teams then tracked these “guardrail metrics” carefully to ensure the tools did not negatively impact revenue beyond the “acceptable” limit.
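As a quick sanity check, the guardrail arithmetic above can be reproduced in a few lines. The usage figures come from the internal TikTok data cited in the litigation; the calculation itself is ours, not the report's:

```python
# Sanity check on the 5% "guardrail" reductions described above.
GUARDRAIL_PCT = 5             # leadership's maximum tolerated cut in time spent

top_10_pct_minutes = 4 * 60   # top 10% of US teen users: over 4 hours/day
top_1_pct_minutes = 6 * 60    # top 1% of minor users: over 6 hours/day

cut_top10 = top_10_pct_minutes * GUARDRAIL_PCT // 100   # 12 minutes less per day
cut_top1 = top_1_pct_minutes * GUARDRAIL_PCT // 100     # 18 minutes less per day
remaining = top_1_pct_minutes - cut_top1                # 342 minutes

print(cut_top10, cut_top1, divmod(remaining, 60))       # 12 18 (5, 42) -> 5h 42m
```

In other words, even a maximal "allowed" intervention leaves the heaviest minor users at roughly 5 hours and 42 minutes per day.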

Between January 2024 and January 2025, 77-91% of US users kept default daily screentime limit reminders enabled. In contrast, screentime tools that were not enabled by default saw far less adoption. Screentime breaks, for example, were optional, and only 1.5% of users chose to enable them. Sleep reminders saw similarly low adoption, with just 0.7% and 1.8% of users opting in to use the feature. Even when users were shown a “take a break” reminder, internal research found that more than 90% of users watched it for less than 5 seconds before skipping it.

Meta

Facebook and Instagram’s EU risk assessments consider the risk that adults could “connect bad actors to minors.” To mitigate these risks, Meta’s reports describe how the company deploys “specialized tooling” to “identify suspicious actors and take appropriate action,” while also allowing users to block other users or recommendations.

But internal statistics tell a different story. US plaintiffs cite internal Instagram statistics which suggest that in 2023, Instagram’s “Accounts You Might Follow” (AYMF) product feature recommended adults suspected to have had inappropriate interactions with children to nearly 2 million minors in just 3 months. More than 20% of these AYMF recommendations resulted in an actual follow request. This would mean that nearly half a million minors made follow requests to adult groomers over just 3 months, because Meta proactively recommended those accounts. A 2022 internal audit further found that in just one day, the AYMF feature proactively recommended 1.4 million adult accounts, suspected to have had inappropriate interactions with children, to minor users.
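The half-million estimate follows directly from the plaintiffs' figures. A minimal check, assuming the cited 2 million recommended minors and the reported 20% follow-through rate (our illustration, not a figure from the filings):

```python
# Rough check of the AYMF follow-request estimate cited by US plaintiffs.
recommended_minors = 2_000_000   # minors shown suspect adult accounts over 3 months
follow_rate_pct = 20             # share of recommendations that led to a follow request

follow_requests = recommended_minors * follow_rate_pct // 100
print(f"{follow_requests:,}")    # 400,000
```

Since the plaintiffs describe the rate as "more than 20%," 400,000 is a floor, consistent with the article's "nearly half a million."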

The evidence and accountability gap

The evidence emerging from EU systemic risk assessments and US platform litigation underscores a central gap in current approaches to platform governance: risks are increasingly well-described, but mitigations are rarely communicated using rigorous, outcome-oriented data and evidence.

While the EU has created an obligation under the DSA for platforms to identify and mitigate systemic risks, the first two years of risk assessments show that companies rely on high-level descriptions of policies, tools, and user controls. The assessments provide extremely limited transparency into whether any mitigation meaningfully reduces harm, particularly for minors. In short, they describe stated policies, but they do not assess their effectiveness.

By contrast, US litigation has surfaced previously unreleased internal platform data, experiments, and deliberations. These documents reveal how platforms internally measure risk and define acceptable trade-offs related to risk, engagement, and revenue. But this approach is reactive and largely limited by the facts of each specific case.

This divergence points to a clear opportunity: Risk mitigation requires more than the mere existence of safety policies or features; effective mitigation requires testable hypotheses, clearly defined metrics, and research and evaluation methods capable of measuring progress over time. Internal documents released through US litigation show that platforms possess the data and technical capacity to conduct such analyses. Indeed, platforms are already evaluating their product decisions in these ways. Yet this methodological rigor is almost entirely absent from public-facing communications, including EU risk assessments. That absence prevents regulators, researchers, users, parents, and the public from understanding whether platforms and their mitigations deliver concrete benefits with acceptable risks.

Authors

Peter Chapman
Peter Chapman is the Associate Director with the Knight-Georgetown Institute (KGI), a new center at Georgetown University that connects independent research with technology policy and design. Peter works across KGI’s areas of focus, including platform governance and design. Peter is an attorney with...
Matt Steinberg
Matt Steinberg is a Tech and Public Policy Scholar at Georgetown University’s McCourt School of Public Policy and a Policy Fellow at New America’s Open Technology Institute. His work focuses on technology’s impact on data privacy, democracy, and media ecosystems. He is a former film and television e...
