Today, House Democrats including Energy and Commerce Committee Chairman Frank Pallone, Jr. (D-NJ), Communications and Technology Subcommittee Chairman Mike Doyle (D-PA), Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-IL), and Health Subcommittee Chair Anna Eshoo (D-CA) published a draft bill that they plan to introduce to reform Section 230 of the Communications Decency Act.
The Justice Against Malicious Algorithms Act would remove the immunities that Section 230 affords platforms in certain contexts, such as “when an online platform knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to the Committee’s announcement.
“Social media platforms like Facebook continue to actively amplify content that endangers our families, promotes conspiracy theories, and incites extremism to generate more clicks and ad dollars. These platforms are not passive bystanders – they are knowingly choosing profits over people, and our country is paying the price,” said Rep. Pallone in a statement.
Rep. Doyle appeared to refer to documents and testimony brought forward by Facebook whistleblower Frances Haugen in a statement supporting the proposed bill.
“We finally have proof that some social media platforms pursue profit at the expense of the public good, so it’s time to change their incentives, and that’s exactly what the Justice Against Malicious Algorithms Act would do,” he said.
Reactions from some in the tech policy community were negative, though several critics found merit in the proposal’s intent.
- “This bill is well-intentioned, but it’s a total mess,” Fight for the Future‘s Evan Greer told The Guardian. “Democrats are playing right into Facebook’s hands by proposing tweaks to Section 230 instead of thoughtful policies that will actually reduce the harm done by surveillance-driven algorithms.”
- “Almost nothing here makes any sense at all. It misunderstands the problems. It misdiagnoses the solution. It totally misunderstands Section 230,” wrote Michael Masnick in Techdirt. “It creates massive downside consequences for competitors to Facebook and to users. It enables those who are upset about moderation choices to sue companies (helping conspiracy theorists and misinformation peddlers). I can’t see a single positive thing that this bill does.”
- Eric Goldman, Associate Dean for Research at Santa Clara University School of Law, where he teaches Internet Law and Advertising Law, wrote on Twitter that “not only would it kill Section 230 several different ways, but it’s terribly drafted.”
- In a tweet replying to Will Oremus, Roger McNamee wrote: “Intent of this bill is terrific. By targeting algorithmic amplification at post level, bill puts burden on plaintiff. Is there a better way to same goal? Alternative: remove 230 protection for any app that uses algorithmic amplification at all. Burden on platform.”
Daphne Keller, Platform Regulation Director at the Stanford Cyber Policy Center, argued in a tweet thread that the bill’s language shows the policy community still has work to do in finding solutions to the problem of algorithmic amplification.
“Anyone who thinks (1) amplification causes harms, but also (2) any regulation needs a lot of nuance: Your work is done on (1),” Keller tweeted. “If you believe in (2) – the nuance part – it is time to start talking about that. Or we’ll get laws like this one.”
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.