The Case for Mandating Finer-Grained Control Over Social Media Algorithms
Robert Diab / Jul 11, 2024

As major elections take place in the US, Europe, and elsewhere this year, concerns continue to arise about social media’s impact on public discourse. Chief among them is that platforms have so much power to amplify content that they have the potential to sway elections. They also use opaque algorithms to decide in large measure what users see and hear, in some cases hindering rights to free expression and autonomy.
Debate continues to unfold as to how to address these concerns, but a consensus has emerged among many experts around the notion that platforms should provide more user choice over the make-up of the algorithms that sort their content — as a means of fostering autonomy and limiting platform power over amplification and thus democratic deliberation. Facebook recently provided finer-grained control over its News Feed, and Bluesky offers the option of using third-party algorithms.
But new bills in Europe, the US Congress, California, and other states mandate a more limited solution, one focused on the use of personalized data. Platforms must give users a choice of content streams not based on likes, location, or user history — which might mean a chronological feed, or one that is not personalized but still algorithmic.
Neither approach fully addresses concerns about speech rights or democracy. Neither gives users what they want or curbs platform power over democratic outcomes. The bills should go further by compelling platforms to provide finer-grained control over content selection or the option to use third-party algorithms.
The normative case for doing so is clear: no private entity should have so much control over the public sphere as to be able to sway elections or hinder individual choice over what we see and hear.
But as the US Supreme Court held last week in its decision to remand suits over laws in Florida and Texas back to lower courts, social media’s algorithmic curation is, in some cases, a form of protected speech under the First Amendment. It did not, however, address the question of whether mandating more user control over algorithms would be constitutional — and this question has not been explored much in the literature. As this commentary explains, laws mandating more user control might curtail platform speech rights but would be justified. Offering more choice would advance important state interests without hindering a platform from offering its own curation.
Why new bills miss the mark
New bills purporting to address concerns about algorithmic feeds on social media fail in a similar way. They try to curtail platform power and foster user expression rights by mandating options that won’t help on either front.
Europe’s new Digital Services Act prohibits large platforms from offering a product that “deceives or manipulates or ... otherwise materially distorts or impairs” a user’s ability to make “free and informed decisions.” Platforms must disclose the “main parameters used in their recommender systems.” And they need to provide “at least one option” for a content stream “not based on profiling.” The EU claims that these measures “guarantee greater transparency and control on what we see in our feeds.” But the DSA doesn’t altogether preclude using algorithms that amplify misinformation or extreme content.
The Filter Bubble Transparency Act, a bipartisan and bicameral bill proposed in the last Congress, is also reductive in its approach. It distinguishes an ‘opaque’ algorithm, drawing on user history, location, and engagement data, from an ‘input-transparent’ algorithm. Large platforms must allow users to “easily switch” between the two. But as The Verge senior tech and policy editor Adi Robertson notes, the bill does not ask platforms to explain how their algorithms work, nor does it require them to offer a non-algorithmic (chronological) feed. “[N]othing stops companies from delivering inflammatory content that encourages negative engagement.”
New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act and California’s similar Protecting Our Kids from Social Media Addiction Act prohibit platforms from providing an “addictive feed” to those under 18. This is defined in both bills as a stream based on “information associated with the user or the user’s device.” Yet neither bill bans the use of algorithmic feeds, even in the case of young people – just personalized feeds.
The need for algorithms, but not just any
To be clear, the problem is not algorithms per se, and the goal is not to prohibit their use. We need algorithms. What we want is more control.
We need them to make the torrent of content coming at us manageable. Studies have repeatedly found that opting out of algorithmic feeds leads users to be less engaged and to spend less time on a platform. As Nick Clegg, President of Global Affairs at Meta, aptly put it, without algorithmic feeds, “people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content.”
But as former Facebook product manager turned whistleblower Frances Haugen affirmed in her 2021 Senate testimony, large platforms are incentivized to employ algorithms that maximize engagement. The most efficient way to do this is to boost content that “amplifies division, extremism, and polarization.” Scholars debate whether amplification fuels polarization offline, with studies going either way. Other research has found that mass media has played a more prominent role than social media in stoking division in some cases, such as the misinformation campaign that followed the 2020 election. Yet even here, the findings suggest social media played an important “supportive role.”
In the late 2010s, platforms made changes to their algorithms as knowledge and concern about their power over political debate grew – but the potential for abuse remains. YouTube revised its algorithm to make extremist content less accessible. Facebook, Reddit, and other platforms altered their approach to moderation to address hate speech and other harmful content. Researchers assessing YouTube’s new approach found, in one study, that the site was less likely to recommend extremist content to users with moderate views and, in another, that such content is seen by only a small portion of users. But the latter study found that YouTube still recommended extremist content at a similar rate to mainstream content. As communications scholar Aaron Shaw, among others, has noted, YouTube remains a “powerful tool” for those seeking to disseminate extremist beliefs by providing a “hospitable environment” to “disseminate ideas, build solidarity, or plan and publicize egregious acts.”
Social media are diverse, and each platform should be assessed on its own. But evidence continues to raise concerns about platform power and abuse. A study of Twitter in 2023 found that its algorithms amplify “emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group.” A pair of studies in 2023 by the Anti-Defamation League and the Tech Transparency Project found that large platforms, including Facebook, Instagram, and Twitter, “at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves.” A study in 2024 by researchers at University College London and the University of Kent found a “fourfold increase in the level of misogynistic content” on TikTok’s ‘For You’ page after only five days on the platform.
What we really need is more control, but would it be legal?
To curb the potential for abuse on the part of platforms, we need more control over how algorithms function. Platforms are beginning to provide this, with Facebook offering a choice over which friends, posts, and groups to prioritize. The X (formerly Twitter) alternative Bluesky offers a choice of third-party algorithms. But none of the new bills compels other platforms to take similar steps.
Whether platforms can be compelled to provide more choice, however, depends on whether social media algorithms are held to be a protected form of speech under the First Amendment or analogous protections in other countries.
The Supreme Court addressed this question in the US context in Moody v NetChoice and NetChoice v Paxton, suits representing a challenge to laws in Florida and Texas that hinder social media from demoting or removing third-party speech.
Prior to the decision, it was unclear whether algorithmic curation was a protected form of speech in the US. Legal scholar Sofia Grafanaki argued in 2018 that social media could be regulated without violating First Amendment rights on the basis that they carry out an editorial function in their ‘content moderation policies’ but not in their ‘content navigation algorithms.’ Justices Alito, Gorsuch, and Thomas, and (separately) Justices Barrett and Jackson, contemplated a distinction of this nature in Moody.
In Moody, the Court remanded the case to lower courts for further fact-finding, rendering its opinion on the merits of the First Amendment challenge obiter. Justice Kagan (with Sotomayor, Kavanaugh, Barrett, and Chief Justice Roberts) suggested that navigation algorithms cannot be distinguished from content moderation policies. Any algorithmic ‘ordering’ or ‘selection’ is expressive, Kagan opined, engaging a line of cases in which the Court considered laws that compelled private entities to include, in a given form of expression, content not of their own choosing (Miami Herald v Tornillo, forcing a newspaper to provide a right to reply; Pacific Gas v Public Utilities Commission, compelling a private utility to include content in a newsletter; and Turner Broadcasting v FCC, compelling a cable provider to carry local channels). But writing separately, Barrett differed from the Kagan majority in suggesting that some social media algorithms may not be expressive since they do not express a viewpoint. The remaining four Justices contemplated a similar distinction.
Moody makes it likely, however, that a law directly targeting the make-up of an algorithm will violate the First Amendment; it does not address a law mandating a choice over algorithms. Such a law might still be construed as “a restriction on the ability to control how third-party posts are presented to other users,” as Justice Kagan put it. But if so, it would be a lesser infringement — and one that should be justified by substantial state interests in protecting autonomy and democratic process.
The tenor of the Supreme Court’s decision in Moody suggests that a platform’s speech rights will prevail over the state’s interests where those interests amount to little more than a difference of opinion as to how best to foster the ‘marketplace of ideas.’ Courts in other countries may take a different view. The open question here is how American courts, along with those in other countries, should approach laws that mandate more user control but still allow platforms to offer their own curation.
This is fundamentally different from compelling platforms to amend their algorithms to include certain content they would prefer to remove. It may entail curtailing platform speech, but it’s really about curtailing platform power.