
Delegation, Or, The Twenty Nine Words That The Internet Forgot

Richard Reisman, Chris Riley / Feb 28, 2022

Richard Reisman is an independent media-tech innovator and frequent contributor to Tech Policy Press; Chris Riley is executive director of Brave New Software and senior fellow for internet governance at R Street Institute.

“It is the policy of the United States… to encourage the development of technologies which maximize user control over what information is received by individuals… who use the Internet…” -- from Section 230 of the Communications Decency Act

47 U.S. Code § 230

In the beginning, the internet promised a new utopia of empowerment and user choice. Yet by the mid-1990s, challenges with content moderation online had grown. Congress responded with the Communications Decency Act to regulate the transmission of indecent material online, though the content regulatory provisions of the statute were subsequently struck down as unconstitutional (in Reno v. ACLU). Retained in the law is the famous “Section 230”, credited by some as The Twenty-Six Words That Created the Internet, but now the subject of heated legislative debate as platform power has grown while moderation has failed to keep up. At the same time, legislators around the world are re-opening their competition and antitrust frameworks to consider changes that give them more power to intervene in technology markets and shape the behavior of tech companies.

But consider the less-attended-to text from Section 230 with which this article began, and how it takes a normative position on the balance between individual empowerment and centralized control: twenty-nine words within the law’s opening statement of policy, identifying that a goal of government intervention in the context of the internet is “to encourage the development of technologies which maximize user control over what information is received by individuals.”

This article explores why this emphasis on user control is far more important than generally recognized, and how an architecture designed to make high levels of user control manageable can enhance the nuance, context, balance, and value in human discourse that current social media are tragically degrading.

The promise of user control: Freedom of impression

This language in support of “user control over what information is received” implies something fundamentally different from, and complementary to, the universally recognized freedom of expression. It’s an aspirational statement of individual agency over the receipt, not the production, of information - a sort of “freedom of impression.” And looking at content and competition issues through that lens helps create some interesting, and tractable, pathways for potential government action, without posing the same kinds of free speech challenges as content-specific interventions.

Filters for content as it is uploaded are a well-established part of content moderation practice and conversations around the role of the private sector. Filters on the consumption of content, however, are far less attended to, outside the context of protection against malware or indecency. (That is, user-controlled filters, in countries that largely protect internet freedom. Of course, mandatory filters in internet repressive environments greatly limit consumption, but not in a way that meaningfully advances a “freedom of impression”.)
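To make the distinction concrete, a consumption-side filter is simply a rule set authored by the receiver and applied to the inbound feed on the receiver’s behalf. The following is a minimal sketch in TypeScript; all types and names here are hypothetical illustrations, not any platform’s actual API.

```typescript
// Minimal sketch of a consumption-side filter: the rules belong to the
// receiving user, not to the platform or to the speaker.

interface Post {
  author: string;
  text: string;
  topics: string[];
}

// Authored and owned by the user - the essence of "freedom of impression."
interface UserFilterRules {
  mutedTopics: string[];
  mutedAuthors: string[];
}

function applyUserFilter(feed: Post[], rules: UserFilterRules): Post[] {
  return feed.filter(
    (post) =>
      !rules.mutedAuthors.includes(post.author) &&
      !post.topics.some((topic) => rules.mutedTopics.includes(topic))
  );
}

// The same upstream feed, shaped by the receiver's own criteria.
const inbound: Post[] = [
  { author: "alice", text: "Election hot take", topics: ["politics"] },
  { author: "bob", text: "New garden photos", topics: ["gardening"] },
];
const myRules: UserFilterRules = { mutedTopics: ["politics"], mutedAuthors: [] };
console.log(applyUserFilter(inbound, myRules)); // keeps only bob's post
```

The point of the sketch is where the rules live: nothing is blocked at upload, and the speaker’s expression is untouched; only the receiver’s impression changes.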

While awareness of the role played by recommender systems is growing, the focus often remains lodged at a macro level, branded as a question of black-box “algorithms” and machine learning, for which the opportunity for intervention centers around macro-level rules mandating transparency and accountability from the service provider.

Consideration of expression alone, rather than expression paired with impression, limits the questions of harm and the potential remedies – it can lead to a one-sided focus on the speaker whose voice might be promoted or marginalized by the recommender system. It is more effective to consider, in parallel, the freedom of choice of the individual whose agency as an information consumer is preemptively replaced by limited recommendation options that a dominant platform has calculated, by its own criteria, to be of most value (whether to the consumer or to the platform itself).

In any case, the idea of controlling speech is anathema, whether the controller at issue is a government or a dominant platform provider (as “platform law”). However, many internet users expect - and increasingly need - social media services to help them manage their listening, to prevent overload by the “firehose” of inbound information. But at the same time, that platform control of content prioritization, at a macro scale, has been shown to drive the spread of misinformation and other online harms. And at a micro scale, it leads to a range of different policy side-effects, as the technical implementations involve exploiting user data for purposes that are not necessarily visible to, controlled by, or acceptable to the user.

Social, technical, business, and governance issues

There is an underlying technology problem here, not merely a social one. On some level, the scale itself could be identified as the culprit; Masnick’s Impossibility Theorem contends that content moderation at scale is impossible to do well. But a more tractable lever on the technical problem may come from focusing on the reflexivity that has emerged – the challenge of reconfiguring a massively scaled network, now designed to drive engagement with minimal friction, that exploits basic flaws in human nature and promotes a reflexive virality that is hard to address and that creates real-world harms with new levels of virulence.

It’s often said that there is no technology fix for social problems. Certainly, for example, better artificial intelligence is not a sufficient solution for the spread of misinformation and online harm. But at the same time, there is no good social fix for technology problems, including where the structural reflexivity of the medium fails to address social problems, or is used in ways that amplify them. For that, the technology itself must be improved.

Humans have finely evolved skills for organizing to draw on the intelligence of their communities and building a flexible ecosystem of institutions that mediate their understanding of the world in ways that add great value (see DiResta, Ferguson, Rauch). Current social media technologies have short-circuited traditional deliberative processes that help us assimilate new information and have disintermediated the institutions that help mediate a shared understanding of reality. Our media technology must facilitate not only our transmission of messages, but also our process of sense-making from those messages.

To be sure, part of the challenge stems from the underlying business model of today’s major social media platforms, which privileges engagement as a means of generating more advertising revenue. While that can indeed create contradictory incentives to favor popular but harmful content, platforms have nevertheless taken steps to mitigate the tension, including greater investment in moderation infrastructure and more powerful content flagging systems, as well as concepts like virality circuit breakers that slow suspicious spread. But these moves have thus far been no match for the modern firehose of virulence – and indeed they can never be more than part of the solution.

In 2014, Ethan Zuckerman described advertising as “the original sin of the web.” Advertising was not intended to be the internet’s dominant business model, but it proved seductively effective. Traditional business models built on scarcity did not scale up well for digital abundance. Low marginal costs but high fixed costs, coupled with increasing network effects, drove businesses to scale – most easily with subsidized, seemingly free-to-the-user services, leading to winner-take-all centralization. For individual content consumption, user agency came to be shaped by platform-controlled automation, and the “user control” objective embedded in statute lost out – as bluntly expressed by the now-clichéd line, “If you’re not paying for it, you’re not the customer; you’re the product.”

The first-wave response to this dynamic, in the context of privacy, was the paradigm of “notice and choice,” which has by and large failed. The second wave, Europe’s extensive cookie banners governed by the ePrivacy Directive, similarly leaves many users with an illusion of choice but a reality of annoyance and frustration. From a regulatory perspective, the conversation has in practice shifted to a more tractable frame of data protection, ensuring that user privacy is protected regardless of user action.

While some of these remedies may be highly valuable, lost in this evolution is the centrality of promoting user control and agency. The platforms have little incentive to offer meaningful controls, and they counter, with some justification, that few users are willing or able to specify clearly what control choices they want. Thus, the question remains: how can the level of user agency that came easily in the traditional media ecosystem be made both practical and powerful in what is becoming an increasingly disintermediated ecosystem?

The power of delegation

In parallel to evolving privacy work, the idea of delegation to an intermediary as a practical and powerful means of promoting user agency has a quieter history. The concept of an “infomediary” (information intermediary), first suggested in 1997, describes a paradigm where user interests and data are managed by separate services operating between the end user and remote service providers. The idea is to empower the user to make choices as simply as they like about how their media tools work, while the infomediary handles the messy and frequently changing details and negotiates compliance from the platforms. This approach holds user data separately from the service provider as a legal and technical means of mediating access and providing user agency over what data is shared for what use, and what criteria determine how their feeds are filtered. It also counters the “sins” of the ad-based business model, by seeking fair compensation from the platforms to the user for both data and attention. This idea has resurfaced, in some sense, as “data trusts” or “data cooperatives.”
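As a rough structural sketch of the infomediary pattern (again in TypeScript, with all interfaces invented for illustration): the user’s sharing policy and feed criteria live with the intermediary, and the platform must query it, under the user’s standing instructions, rather than holding those choices itself.

```typescript
// Hypothetical sketch of the infomediary pattern: the user's choices are
// held by a separate service standing between user and platform.

interface FeedCriteria {
  rankBy: "chronological" | "topic-match";
  excludeTopics: string[];
}

interface DataSharingPolicy {
  shareInterests: boolean; // may the platform use topical interests?
  shareContacts: boolean;  // may it use the social graph?
}

class Infomediary {
  constructor(
    private policies: Map<string, DataSharingPolicy>,
    private criteria: Map<string, FeedCriteria>
  ) {}

  // The platform asks what it is permitted to use for a given user...
  policyFor(userId: string): DataSharingPolicy {
    return this.policies.get(userId) ?? { shareInterests: false, shareContacts: false };
  }

  // ...and how that user wants their feed filtered and ranked.
  criteriaFor(userId: string): FeedCriteria {
    return this.criteria.get(userId) ?? { rankBy: "chronological", excludeTopics: [] };
  }
}

const mediary = new Infomediary(
  new Map<string, DataSharingPolicy>([["user-1", { shareInterests: true, shareContacts: false }]]),
  new Map<string, FeedCriteria>([["user-1", { rankBy: "topic-match", excludeTopics: ["politics"] }]])
);
console.log(mediary.policyFor("user-1"));   // the user's explicit instructions
console.log(mediary.policyFor("user-2"));   // unknown user: nothing shared by default
```

Note the defaults in the sketch: absent any instruction, nothing is shared and the feed falls back to chronological order, keeping the burden of specification off the user.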

To promote some of the same objectives, but leaving the data with the platforms, some have proposed that platforms embrace an explicit role as fiduciaries – a sort of direct delegation in which the user entrusts the platform to respect their interests, in a more legally protected way. There continue to be attempts to make infomediaries a reality, but they have yet to gain the critical mass of user power needed to negotiate with increasingly powerful platforms.

Implementing in law and practice

Throughout this somewhat diffuse history of delegation, it remains a powerful potential circuit breaker to the dominant model of centralized platform control - a valuable balance point between moderation at scale and individual empowerment. As a result, delegation has begun to re-emerge in modern conversations around internet governance. In particular, the 2019 ACCESS Act introduced in the U.S. Senate included among its provisions a requirement to provide “delegatability” – enabled through APIs that allow a user to authorize a third party to manage the user’s content and settings directly on the user’s behalf. Language such as this might revive the 1990s dream of infomediaries through a politically viable and pragmatic intervention. EFF produced a favorable analysis of delegatability (while noting some privacy and security challenges), as did the Mozilla Foundation, and a prominent VC said he was “most excited about…delegation…[as] critical to making all of this work.”
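The bill’s text does not prescribe a technical design, but the general shape of delegatability can be sketched: the user issues a scoped, expiring grant to a third party, and the platform honors the delegate’s requests only within that grant. The TypeScript below is a hypothetical illustration of that flow, not language from the ACCESS Act or any platform’s actual API.

```typescript
// Hypothetical sketch of a delegatability API: a user grants scoped,
// time-limited authority to a third party to manage settings on their behalf.

type Scope = "read:feed-settings" | "write:feed-settings" | "write:muted-lists";

interface DelegationGrant {
  userId: string;
  delegateId: string; // the third-party service acting for the user
  scopes: Scope[];
  expiresAt: Date;
}

class DelegationRegistry {
  private grants: DelegationGrant[] = [];

  // The user authorizes a delegate with explicit, limited scopes.
  authorize(grant: DelegationGrant): void {
    this.grants.push(grant);
  }

  // The platform checks every delegated request against an unexpired grant.
  isPermitted(userId: string, delegateId: string, scope: Scope): boolean {
    const now = new Date();
    return this.grants.some(
      (g) =>
        g.userId === userId &&
        g.delegateId === delegateId &&
        g.scopes.includes(scope) &&
        g.expiresAt > now
    );
  }
}

// A delegate may rewrite feed settings, but only while the grant is live
// and only within the granted scopes.
const registry = new DelegationRegistry();
registry.authorize({
  userId: "user-1",
  delegateId: "trusted-infomediary",
  scopes: ["write:feed-settings"],
  expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
});
console.log(registry.isPermitted("user-1", "trusted-infomediary", "write:feed-settings")); // true
console.log(registry.isPermitted("user-1", "trusted-infomediary", "write:muted-lists"));   // false
```

The scoping and expiry are what distinguish delegation from handing over an account password – and they are also where the privacy and security challenges EFF flagged would need to be worked out.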

Suggestions of ways to apply this to give users control of what they see on social media have been outlined by Masnick, Wolfram, Fukuyama, Dorsey, Zuckerman, Reisman, and others. The idea of distributing infrastructure functions presently controlled by singular platforms among many actors may seem daunting, but astoundingly complex distributed real-time infrastructures have already fueled huge growth in e-commerce, fin-tech, and ad-tech. Ultimately, the diversity of user choice facilitated by delegation can help restore the openness and generativity at the heart of the internet. It can build on the nuance and diversity of human reflexivity to compensate for the limitations of artificial intelligence – and thus create a vibrant new dimension in how our media ecosystem extends human society.
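A common thread across those proposals is a middleware-style separation of functions: the platform supplies candidate content, and a user-chosen third party ranks it. Here is a minimal sketch of that interface, with names invented for illustration rather than taken from any of the cited proposals.

```typescript
// Hypothetical sketch of "middleware" ranking: the platform hands candidate
// posts to whichever ranking provider the user has chosen.

interface Candidate {
  id: string;
  text: string;
  ageHours: number;
}

// Any third party can implement this interface and compete for users.
interface RankingProvider {
  name: string;
  rank(candidates: Candidate[]): Candidate[];
}

const chronological: RankingProvider = {
  name: "newest-first",
  rank: (candidates) => [...candidates].sort((a, b) => a.ageHours - b.ageHours),
};

// The platform's role shrinks to honoring the user's choice of ranker.
function buildFeed(candidates: Candidate[], chosen: RankingProvider): Candidate[] {
  return chosen.rank(candidates);
}

const pool: Candidate[] = [
  { id: "a", text: "Two hours old", ageHours: 2 },
  { id: "b", text: "Brand new", ageHours: 0 },
];
console.log(buildFeed(pool, chronological).map((c) => c.id)); // ["b", "a"]
```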

The dynamic virality and reflexivity of the modern internet mold society’s communications far more deeply than earlier media did, and the response to manage harm must be commensurately different. User choice is essential to a social and media ecosystem that preserves and augments democracy, self-actualization, and the common welfare – instead of undermining them. And delegation is the linchpin that can make that a reality.

- - -

This is the first in a continuing series of related essays by Reisman and Riley in Tech Policy Press:

  1. Delegation, Or, The Twenty Nine Words That The Internet Forgot
  2. Understanding Social Media: An Increasingly Reflexive Extension of Humanity
  3. Community and Content Moderation in the Digital Public Hypersquare
  4. Contending for Democracy on Social Media and Beyond
