The Ghost of Surveillance Capitalism Future

Richard Reisman / Dec 19, 2021

Richard Reisman is an independent media-tech innovator and frequent contributor to Tech Policy Press.

Discussions of tech reform are mired in disagreement over the nature of and cure for present harms – but few take an imaginative look at future scenarios. Policymakers are often driving in the rear-view mirror, or looking just beyond the hood, instead of well down the road ahead. That leads to overcorrecting for the problems we face today and missing early signs of the ones we will face tomorrow. In the spirit of Charles Dickens’ A Christmas Carol, sometimes it’s helpful to peer into the future to imagine possible bad outcomes. The Ghost of Social Media Future can be seen in a paper published in the Vanderbilt Journal of Entertainment and Technology Law titled Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law, written by Brittan Heller, Counsel at Foley Hoag and a Technology and Human Rights Fellow at the Harvard Kennedy School.

Concerned about what Facebook and other platforms know about you and use to manipulate you now? Just think about head-mounted displays and other sensors that track your eyes, pupil dilation, heart rate, gestures, and micro-expressions – and use that data to guide VR/AR/MR experiences that feel so real they create a visceral effect. “While this innovation raises incredible opportunities for good—from medical treatment to bridging distances…and improving skills training—it also presents new, complex, and groundbreaking policy challenges.” Heller calls this “biometric psychography” and explains how it enables a “mind reading” capability that “can gauge what their users are looking at, how long their attention is captured, and how users may feel about what they are seeing.” Such mechanisms will expose personal details we ourselves may not be aware of and could “put users on guard for self-censorship of their innermost thoughts, feelings, and emotions.”

Heller proposes a framework for analyzing risk based not just on privacy, but also on human rights law, including equality and freedom of association and expression. She outlines implications for regulators, product developers, and industry. Heller relates her framework to content moderation, “where there are different options and questions about power, actors, and responsibility based on the different layers of the internet stack where moderation may occur,” referring to the architectural design layers of Internet protocols, in which the clear structuring of functions into cleanly interoperating layers allows complexity to be decomposed. These issues are already apparent in “social VR,” which is evolving into virtual communities and already generating various forms of harassment with real impact.

This view of a likely future points to the need to step back and reexamine how we think about social media – to regulate for the present in ways that head off bigger problems in the future. As Heller suggests, the lens of privacy alone does not consider the value of good uses of personal data – a broader framework of human rights considerations is required.

A framework centered on the user experience and human rights would protect individual rights to use technology as a tool that serves us – to augment our intelligence, discourse, and experience – both in the form of global social media as now understood, and in the immersive future. We have a human right to augment ourselves with powerful filters and recommendation systems that guide us to ideas, people, and other things of value – even if we must permit some level of “mind reading” to get it. The key is that the services that read our minds work as faithful and discreet servants, not the other way around. Otherwise, the alternatives are to jettison the technology, throwing the baby out with the bathwater, or to drift aimlessly toward a future that is some combination of 1984 and Brave New World.

This points to an unaddressed challenge in reforming present social media. The power the platforms unleash is in how the content flows, not just the content itself. They mine metadata about how information flows – likes, shares, comments, and more subtle behavioral signals – that is then used to understand and shape our behavior. Now the platforms use that metadata to serve advertisers, which requires engaging users – a logic that has unfortunately produced various externalities, such as the amplification of anger. Instead, algorithms of the future could use that data to serve us – mining signals of human judgment to seek out content that builds community and enlightenment. As Heller observes, controlling this is not just a matter of individual privacy, but of how sensitive data is used, and whom its use serves. Will these powerful new tools serve advertisers and oligarchs – or authoritarian governments? Or will they serve users and society in ways we can only begin to imagine?
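To make that contrast concrete, here is a minimal, purely illustrative Python sketch. The Post fields, weights, and scoring functions are invented assumptions for this example – not any platform’s actual ranking code – but they show how the same engagement metadata can be weighted either to maximize raw reaction or to reflect signals of user-judged value.

```python
# Hypothetical sketch (not any platform's actual code): two ways the same
# engagement metadata could be scored. The fields and weights are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int       # outrage-tinged engagement signal
    reports: int               # user flags / complaints
    endorsed_by_trusted: int   # e.g., vouched for by sources the user trusts

def engagement_score(p: Post) -> float:
    """Platform-style ranking: any reaction, including anger, counts as value."""
    return 1.0 * p.likes + 2.0 * p.shares + 2.5 * p.angry_reactions

def user_serving_score(p: Post) -> float:
    """User-serving ranking: reward signals of judged quality, discount outrage."""
    return (1.0 * p.likes + 1.5 * p.endorsed_by_trusted
            - 2.0 * p.angry_reactions - 3.0 * p.reports)

feed = [
    Post("inflammatory take", likes=10, shares=40, angry_reactions=90,
         reports=12, endorsed_by_trusted=0),
    Post("careful explainer", likes=25, shares=8, angry_reactions=1,
         reports=0, endorsed_by_trusted=14),
]

print(sorted(feed, key=engagement_score, reverse=True)[0].text)    # inflammatory take
print(sorted(feed, key=user_serving_score, reverse=True)[0].text)  # careful explainer
```

Run as written, the engagement ranking surfaces the inflammatory post while the user-serving ranking surfaces the explainer: the data is identical; only whom the weights serve differs.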

Two families of emerging proposals offer synergistic models for reforms specifically aimed at redirecting those tools toward serving users. One is an older suggestion that has re-emerged: intermediary services could be introduced to represent user interests – controlling access to their data and protecting their attention as user agents empowered to negotiate with platforms (and thus advertisers) and to extract fair compensation for that exchange of value. Such services were first called “infomediaries,” and have more recently been referred to as “data fiduciaries” or “data cooperatives.” Another proposal – one that has emerged more recently – is to spin out the filtering of our news feeds (and recommendations of people and groups) to an open market of filtering services that manage user attention in whatever way their clients choose to be served.
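As a rough sketch of that filtering-market architecture – with invented names like FilteringService and LocalNewsFilter standing in for whatever interoperability interface standards or regulation would actually define – the idea is that the platform exposes the raw feed while a service chosen by the user decides how attention is allocated:

```python
# Hypothetical sketch of the "open market of filtering services" idea: a common
# interface that any third-party filter could implement, with the user (or
# their agent) choosing which service ranks the raw feed. All names invented.
from typing import Protocol, List

class FilteringService(Protocol):
    name: str
    def rank(self, raw_feed: List[str]) -> List[str]: ...

class LocalNewsFilter:
    name = "local-news-coop"
    def rank(self, raw_feed: List[str]) -> List[str]:
        # Toy policy: surface items mentioning the user's town first.
        return sorted(raw_feed, key=lambda item: "Springfield" not in item)

class ChronologicalFilter:
    name = "plain-chronological"
    def rank(self, raw_feed: List[str]) -> List[str]:
        return list(raw_feed)  # keep the platform's order, no re-ranking

def render_feed(chosen: FilteringService, raw_feed: List[str]) -> None:
    # The platform supplies the raw feed; the user's chosen service sets order.
    for item in chosen.rank(raw_feed):
        print(f"[{chosen.name}] {item}")

raw = ["Celebrity feud escalates",
       "Springfield council passes budget",
       "Viral outrage thread"]
render_feed(LocalNewsFilter(), raw)
```

The design choice that matters here is the separation of concerns: the platform hosts and transmits content, while the ranking logic becomes a competitive, swappable layer answerable to the user rather than to the advertiser.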

Such paths forward are radical and challenging. They would likely require a new regulatory agency. Maybe they are not quite the answer. But both address the issues we face today, as well as some of the concerns that people like Heller foresee coming down the road, in a structural and scalable way that few other remedies do. They might also help limit the abuses of surveillance capitalism more broadly. Of course, there is a need for practical short-term firefighting and containment, but it is essential to build for the future, or at least not block the path.

Whatever direction we choose, the underlying question is “whom does the technology serve?” The motivation to serve users and society should be built in, not pasted on. Modern policy must recognize that these global networks are far too universal, and their future potential far too powerful, to leave this to laissez-faire markets with business models that primarily exploit users.

Additional commentary on these issues is available on Reisman’s blog.
