Assessing the NUDGE Act

Ellen P. Goodman / Feb 14, 2022

Ellen P. Goodman is a Professor at Rutgers Law School, Co-Director of the Rutgers Institute for Information Policy & Law (RIIPL), and a Senior Fellow at the Digital Innovation & Democracy Institute at the German Marshall Fund.

Last week, Senators Amy Klobuchar (D-MN) and Cynthia Lummis (R-WY) introduced the Nudging Users to Drive Good Experiences on Social Media Act, dubbed the NUDGE Act for short. The legislation is clearly trying to provide an outlet for anger at the platforms from both the left and the right. Some of the concepts are promising, especially adding friction to slow down viral transmission, increasing transparency around platform design, empowering the Federal Trade Commission (FTC) to enforce against manipulative ad tech, and investing in research. But the language is too loose, the rulemaking structure too slow, and the interventions vulnerable to First Amendment attack. These defects can be cured by scaling back some of the Act’s ambitions. Federal legislation should be much more ambitious in another respect: it should direct the federal government’s spending powers to help create pro-social technologies and civic information.

Background

In a statement, Sen. Klobuchar said the purpose of the proposed legislation is to address “algorithms pushing dangerous content that hooks users and misinformation” by “implementing changes that increase transparency and improve user experience.” Sen. Lummis added that the legislation would address “Big Tech overreach” by “empowering the National Science Foundation (NSF) and the National Academies of Sciences, Engineering, and Medicine (NASEM) to study the addictiveness of social media platforms” and “the impact designs of these platforms and their algorithms have on our society.”

Faulting social media platforms for uneven enforcement of their own content moderation policies, and recognizing the limitations of automated systems in addressing a variety of problems, the proposed legislation prizes “content-agnostic interventions”: nudges, labels, alerts, and other prompts and design changes that may result in healthier online behavior. It directs NSF and NASEM to establish scientifically informed best practices, and makes the FTC responsible for deciding how the platforms must implement them. It also mandates transparency in content moderation practices, similar to the European Digital Services Act (DSA).

Platforms covered under the Act would be required to produce reports and data for the FTC regarding their content moderation practices. Violations of the legislation’s mandates, as implemented by the FTC through rulemakings, would expose the platforms to claims of unfair or deceptive acts or practices.

Analysis

The recitals use terms that are controversial and undefined. For example, what’s a “nudge,” and how is it different from a dark pattern? What is addiction, and how is it different from extreme popularity, or even usefulness? The recitals also group together manipulative and microtargeted ads, suggesting that microtargeting is necessarily bad in the same way that manipulation is. These definitions may be addressed in the course of the study and problem definition the legislation requires, but more clarity in the Act itself would likely lead to better legislative outcomes.

The bill itself has some good and bad elements. First, the good:

  1. It is good to recognize that the latest research supports friction as a way to reduce online harms, and to support more research on which kinds of interventions are effective. We already know that some of the hoped-for results from certain interventions haven’t materialized.
  2. It is also good to give the FTC rulemaking authority in this area and to expressly yoke Section 5 enforcement powers to violations by the platforms.
  3. It is good to push transparency and reporting. In this respect, the legislation looks a lot like the DSA transparency measures.
  4. The proposed language also includes a number of nuanced elements that show real understanding of the domain, such as the push for uniform metrics, the requirement that content moderation policies address multiple languages, and the adoption of a wait-and-see approach to how platforms should be grouped together (rather than just going with Very Large Online Platforms, or VLOPs, as in other bills).

And, the bad:

  1. There is a temporal dimension to this work that will be difficult to address in practice. The study and then the rulemaking imagined in this Act will take years, and may produce recommended interventions that are no longer relevant to the platforms by then. The law already struggles to keep up with the pace of technological change, and the bill offers no mechanism for arriving at results and recommendations within an actionable time frame.
  2. The term “content-agnostic” is unlikely to save the bill from the First Amendment problems that confront any legislation touching platform content moderation. The bill tries to sidestep the moderation problem by declaring itself agnostic with respect to user-generated content. But the harms it discusses arise from particular kinds of content being algorithmically amplified. Design interventions to de-amplify certain content, or to nudge users toward other content, probably won’t be content-agnostic. Moreover, the agnosticism achieved under this definition falls away if the interventions are differentially attached to ads and other speech deemed “manipulative.” That’s content-based. Ads do not raise the same constitutional concern as noncommercial speech, at least for now, but it’s a risky bet for the long term.

Conclusion

I have been interested in how friction, including circuit breakers to reduce the viral spread of content that violates the platforms’ terms of service, can help make a better internet. Research has shown that carefully crafted speed bumps in the slipstream of social media content can improve the ratio of credible to crap information, increase deliberation, and reduce the virality of misinformation. But it’s complicated. For example, credibility indicators on news can reduce intent to share misinformation. On the other hand, flags on Donald Trump’s tweets falsely claiming election fraud did not reduce their spread, and may even have increased it. New research that I’m about to publish with Orestis Papakyriakopoulos provides further insight into design choices around fact-checks. It turns out that words matter: how a fact-check is framed changes user behavior. It is very unlikely that legislation or rulemaking can effectively design friction.
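To make the circuit-breaker idea concrete, consider a minimal sketch of one possible mechanism: a sliding-window counter trips when a post’s share velocity crosses a threshold, at which point a platform could pause algorithmic amplification and add friction pending human review. This is purely illustrative, not anything specified in the bill or deployed by any platform; the class, thresholds, and window size are all hypothetical.

```python
from collections import deque
import time

# Hypothetical values, invented for illustration only.
WINDOW_SECONDS = 600           # look at shares over the last 10 minutes
SHARES_PER_WINDOW_LIMIT = 500  # trip threshold for the window

class ViralityCircuitBreaker:
    """Illustrative sliding-window circuit breaker for one post."""

    def __init__(self, window=WINDOW_SECONDS, limit=SHARES_PER_WINDOW_LIMIT):
        self.window = window
        self.limit = limit
        self.share_times = deque()  # timestamps of recent shares
        self.tripped = False

    def record_share(self, now=None):
        """Record one share; return True if the breaker is tripped."""
        now = now if now is not None else time.time()
        self.share_times.append(now)
        # Drop shares that have fallen outside the sliding window.
        while self.share_times and now - self.share_times[0] > self.window:
            self.share_times.popleft()
        if len(self.share_times) > self.limit:
            self.tripped = True
        return self.tripped

    def reset_after_review(self):
        """A human reviewer clears the content, so amplification resumes."""
        self.tripped = False

# Usage: each share event updates the breaker; a tripped breaker is the
# platform's cue to suppress algorithmic boosting and add a share prompt.
breaker = ViralityCircuitBreaker()
if breaker.record_share():
    pass  # e.g., pause recommendation, show confirm-before-share dialog
```

The point of the sketch is that the trigger is content-agnostic in form (it counts shares, not words), while the friction it imposes still lands on particular pieces of content, which is exactly the constitutional tension discussed above.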

What we need instead are general performance standards and incentives (sticks) to get platforms to implement the frictions their own research tells them will work. Giving the FTC rulemaking authority over deceptive and unfair ad and data practices, which the NUDGE Act does, seems like its most promising part. Tasking the science academies with studying the design of addictive platforms and salubrious nudges is a good thing, so long as they’re producing recommendations for the metaverse, because the social media platforms they study today will not be the ones requiring intervention by the time those studies reach their conclusions.

Government subsidies for pro-social technologies and content the market will not support would be another way to make healthier social media. If, for instance, the Pentagon spent its massive ad budget on local journalism outlets, it could help small communities keep their news sources. If the national academies sponsored regional contests for social media hacks, engaged civil society, and provided seed funds for the best ideas, it would help create a more diverse ecosystem of technologies. This is what the government has done successfully before when innovation stalled and democracy seemed threatened.

Justin Hendrix contributed to this analysis.
