The Jury Has Spoken on Big Tech. Now It’s US Lawmakers’ Turn.
Mariana Olaizola Rosenblat / Mar 26, 2026
Lori Schott, center, is embraced as she holds up a photo of her daughter Annalee Schott, after the verdict in a landmark trial over whether social media platforms deliberately addict and harm children at Los Angeles Superior Court, Wednesday, March 25, 2026, in Los Angeles. (AP Photo/William Liang)
This week, two juries in two different states delivered a message Silicon Valley can no longer ignore: the era of accountability for social media's harms to young users has arrived. But the most consequential outcome of these trials may not be the damages awarded. It may be the information forced into the public record, and what that evidence now makes possible.
On Wednesday, a Los Angeles jury found Meta and YouTube liable for the mental health harms suffered by a young woman who began using the platforms as a child. The jury awarded her $6 million in damages after finding that both companies acted with malice and were negligent in the design of their platforms.
A day earlier, a New Mexico jury ordered Meta to pay $375 million for violating the state’s consumer protection laws by misleading users about the safety of its platforms and failing to protect children from predators.
The companies say they will appeal the verdicts.
These are landmark outcomes. In Los Angeles, the plaintiffs sidestepped the Section 230 and First Amendment defenses that have shielded platforms for decades, instead applying traditional product liability tort law (the same tool that allowed tobacco companies to be held accountable) to a new kind of product. Section 230 shields platforms from liability for what users post, but it says nothing about how a product is engineered. The question posed was whether the platforms were unreasonably dangerous by design, and whether the companies knew it.
American law permits companies to sell things that consumers find intensely compelling, even addictive. But that permission has limits, defined by the harm a product or service causes, what the company knew about that harm, what it concealed, and whether it targeted vulnerable populations. The jury found that Meta and YouTube crossed those lines.
The civil damages certainly matter, especially as bellwethers for the roughly 2,000 pending cases against social media companies nationwide. But the implications go beyond litigation.
What these trials have surfaced through discovery is an extraordinary body of evidence—evidence that no legislature has had access to before—linking specific design features to specific harms. Meta’s own internal research, compiled and annotated by the Tech and Society Lab at NYU Stern, documents what the company knew: that employees compared the platform to slot machines and drugs; that 85 percent of clinicians surveyed by Meta said social media can be addictive; that its own research showed teens were unhappy with the time they spent on its apps; and that when users were randomly assigned to stop using Facebook and Instagram, their depression, anxiety, and loneliness improved.
This is not speculation about corporate behavior—it is a documented record of known harms and deliberate choices. This evidence provides exactly what has been missing from the regulatory conversation: an empirical basis for design-based regulation. As our Center documented in the report Online Safety Regulations Around the World, the global regulatory landscape has been dominated by content-based approaches that require platforms to remove certain types of content. But content moderation, while important, addresses symptoms rather than causes. It doesn’t touch the architectural and design features—algorithmic amplification, the infinite scroll, the variable-reward notification systems, or the beauty filters—that social media platforms’ own researchers identified as drivers of harm.
Design-based regulation—compelling platforms to implement or refrain from implementing specific design features—represents a fundamentally different approach. Several US states have begun moving in this direction. New York’s SAFE for Kids Act restricts addictive feeds for minors, and California’s Protecting Our Kids from Social Media Addiction Act targets features designed to promote compulsive use. But these laws have faced legal challenges, in part because the evidentiary link between specific design choices and specific harms has been difficult to establish from the outside.
The trials in Los Angeles and New Mexico have changed that calculus. The internal documents unsealed through discovery can and should inform a new generation of design-based regulation that is empirically grounded, proportional, and focused on the architectural features that drive harm.
On social media, such regulation should address common mechanisms that drive compulsive engagement, such as push notifications, autoplay, follower counts, and the infinite scroll. On gaming platforms, it should target monetization and time-pressure tactics that are often invisible to players—so-called “dark patterns”—and which can result in similar problematic use. But rather than providing a one-size-fits-all design prescription for platforms, regulators should incentivize platforms to give users genuine autonomy over their online experience, allowing them to customize privacy settings, algorithmic feeds, and other features in ways that promote their own wellbeing.
These verdicts will face appeal. Meta will likely argue that its strongest defenses were improperly denied at the pre-trial stage. A federal trial in Oakland this summer will test these claims on different terrain: school districts, not individual plaintiffs, suing to recover the institutional costs of a platform-driven mental health crisis across an entire student population. But regardless of what happens on appeal, the evidentiary record is now public. The question going forward is whether we will rely solely on tort litigation, an after-the-fact remedy, or use what we have learned to regulate by design and prevent harm before it occurs. Legislators, and Congress in particular, should treat this evidentiary record as the foundation for design-based regulation: rules that target the specific features that Meta’s own researchers identified as drivers of harm.
The courtrooms have done their part. Now it’s time for legislators to do theirs.