Resetting the Rules for Tech to Preserve the Public Interest

Paulo Carvão / May 11, 2023

Paulo Carvão is a Fellow at the Harvard Advanced Leadership Initiative.

The technology marketplace has changed dramatically since the 1990s, and so must our expectations about the role and obligations of online platforms. We are living in a new phase of the “Information Revolution,” in which technology is altering how we communicate, how we relate to each other, how we work, and the nature of work itself. This requires new laws, new regulatory frameworks, and new enforcement mechanisms.

To develop these new approaches, it is appropriate to take a step back and revisit the ideological, institutional, and technological dynamics that have brought us here, and to investigate how power became concentrated in a small number of large technology firms with business models deliberately shaped to exploit user data. It is also worth considering how the outcome of two cases currently before the Supreme Court – Gonzalez v. Google and Twitter v. Taamneh – may create an opening for change.

The History of “Progress”

The early and formative days of the computer revolution took place in a moment of cultural torment in the United States. In the Silicon Valley of the late 1960s and 70s, the first technological demonstrations of modern concepts of personal computing and contemporary human-machine interfaces took place in the context of the civil rights movement, the unraveling of the Vietnam War, and economic upheaval. Against a backdrop of deep distrust of government authority and a flourishing counterculture movement, these technological innovations inspired a discussion about the possible emergence of a new society with the individual at its core, empowered above traditional institutions.

Engineers embraced a “hacking” culture, developing new hardware and software that anyone could tinker with, leading to the personal computers of the 1980s. The Apple 1984 Super Bowl commercial introducing the Macintosh computer epitomizes the mentality of that era. Proto-social media constructs in the form of bulletin boards emerged as a utopian, anarchic dream in which an absence of rules would lead to self-governing communities with ideas flourishing on merit. In reality, the loudest voices dominated, and from then on, the internet would embrace majoritarianism as its ideal. In 1996, John Perry Barlow’s “A Declaration of the Independence of Cyberspace” laid out the ideology of the nascent Web, centered around freedom and self-determination.

To better understand the emerging ideology of Silicon Valley, one should also look at the thinking of the engineers and investors who have dominated the industry, exemplified by white male dropouts from prestigious universities with an appetite for breaking rules and aspirations to get rich by changing the world.

As Max Fisher put it in his 2022 book, The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World, “the hidden force behind everything, setting both the culture and the economics, was venture capitalism. The practice of engineers becoming VCs who pick the next generation of dominant engineers kept the ideological gene pool incestuously narrow.” Likewise, Peter Thiel, one of Facebook’s original investors, and his co-author Blake Masters wrote in their 2014 book Zero to One: Notes on Startups, or How to Build the Future that “the history of progress is a history of better monopoly businesses replacing incumbents.” In this conception of capitalist progress, dominant winners are the path to innovation, which benefits us all. This argument is in direct conflict with the view that monopolies leverage their power to deliver less value while extracting greater rents from consumers.

New Rules of the Game

Despite such inconsistencies, the Silicon Valley monoculture and its version of progress enjoyed the fruits of back-to-back technological revolutions: personal computing first, followed by the pervasive use of the internet via Web protocols. Pro-industry institutional change enacted by governments and courts eager to clear the path for tech firms supported a new generation of business models. In her book Between Truth and Power, legal scholar Julie Cohen describes efforts by the large internet platforms, through litigation and legislation, to shape intellectual property, privacy, and data protection rules, making individual data “public domain” and, therefore, free for harvesting. In parallel, data collected by the firms were deemed “private property,” and thus protected from competitors and the prying eyes of regulators.

Cohen expresses concern about the use of personal data by internet platforms in what she calls the “biopolitical public domain”: a presumptive entitlement that platforms assert to appropriate and use data flows extracted from people. She argues that the collection and use of personal data by these platforms creates a new form of power that can be used to influence and control individuals in ways that were not previously possible, both to manipulate their behavior and decisions and to exert social and political control. These data flows play an important role as raw materials in the political economy of informational capitalism.

Cohen builds from there to say that personal data flows – and, by extension, people themselves – are the commodity inputs, with their choices and behaviors monetized. This is a new form of surplus extraction to produce wealth. At the heart of the model is the matching of populations with specific advertising and targeting strategies to drive behavior and content consumption patterns. When properly executed, this becomes a reinforcing loop that maintains and stabilizes those populations. The attention economy and the platform advertising business model are the results. Cohen posits that the “idea of a public domain of personal data alters the legal status of the inputs to and outputs of personal data processing” with “a pattern of appropriation by some, with economic and political consequences for others.”
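To make that loop concrete, here is a minimal, hypothetical sketch of an engagement-driven feed: content categories are ranked by predicted engagement, and each interaction feeds back into the prediction. The category names, learning rate, and click model are invented for illustration; this is a sketch of the dynamic Cohen describes, not any platform’s actual ranking system.

```python
import random

# Toy model of the reinforcing loop described above: content is ranked by
# predicted engagement, each interaction updates the prediction, and only
# what gets shown receives feedback, so an early lead tends to lock in.
# Purely illustrative; not any platform's actual system.

# Hypothetical engagement estimates per content category.
engagement = {"outrage": 0.5, "news": 0.5, "hobby": 0.5}

def top_category(model):
    """Return the category with the highest predicted engagement."""
    return max(model, key=model.get)

def simulate_sessions(model, steps=10_000, learning_rate=0.05):
    for _ in range(steps):
        shown = top_category(model)               # show the "best" content
        clicked = random.random() < model[shown]  # user engages stochastically
        # Feedback: the estimate moves toward observed behavior. Categories
        # that are never shown are never updated, entrenching the leader.
        model[shown] += learning_rate * (clicked - model[shown])
    return model

print(simulate_sessions(dict(engagement)))
```

Because only what is shown generates feedback, whatever captures attention early tends to entrench itself, which is the stabilizing, population-shaping loop described above.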

Amy Kapczynski’s work on trade secrets and constitutional law shows how these transformations, underway since the 1980s, have strengthened companies’ bargaining power across a variety of industries. Her 2020 Yale Law Journal article, “The Law of Informational Capitalism,” highlights how legal ordering has been used to insulate private power from democratic control. She argues that changes in legal frameworks have strengthened the bargaining power of companies and reduced the ability of regulators to protect the public interest. In the context of Big Tech and social media platforms, this has implications for issues such as data privacy, content moderation, and competition, which are increasingly important to the functioning of the digital ecosystem. Kapczynski’s work underscores the need for a more comprehensive and democratic legal framework that is responsive to the needs of the broader public.

Likewise, Sabeel Rahman and Kathleen Thelen's work on platform capitalism examines how digital platforms have transformed the dynamics of the modern economy. In a 2019 article in the journal Politics & Society, “The Rise of the Platform Business Model and the Transformation of Twenty-First-Century Capitalism,” Rahman and Thelen argue that platform capitalism has led to the concentration of wealth and power in the hands of a few dominant players, resulting in growing economic inequality and political polarization. Platform companies use their network effects to gain an advantage over smaller players, which leads to a self-reinforcing cycle of growth and consolidation. The power of digital platforms over consumers, where dominant platforms can exercise significant control over users and data, results in a loss of consumer autonomy.

Moreover, Rahman and Thelen highlight the limitations of traditional regulatory frameworks in addressing these challenges. Existing consumer protection laws, for example, are ill-suited to address the unique challenges posed by digital platforms, such as the use of algorithmic decision-making and targeted advertising. Rahman and Thelen suggest that policymakers must take a more proactive approach to address these issues, such as by promoting transparency and accountability in platform operations, enforcing data privacy protections, and promoting competition in the platform economy.

Platforms engage in what legal scholars Elizabeth Pollman and Jordan Barry call “regulatory entrepreneurship”: operating at the edge of labor, financial, and other economic regulations, where the laws are “unclear, unfavorable, or even prohibit the activity outright” because government agencies have not yet addressed the new technologies. More recently, a growing body of academic research calls for reviving state power as a counterbalance to market power. Pollman and Barry’s normative approach is congruent with this view, and supports the establishment of new regulations and a regulatory body to address the internet, social media platforms, and artificial intelligence algorithms.

The question of whether we can reform a system from within is not new. The speed of technological change has accelerated, and the hype around generative AI has brought existential questions about the survival of humankind to the forefront. Whether one believes these are valid philosophical arguments or yet another way to garner attention amid the cacophony of voices in today's media, the current debate has brought the public into the discussion about the ethical use of emerging technologies.

From outside the industry, pressure is building for regulation to curb misinformation excesses and mitigate future risks. Social media appears to be in crisis, with increasing market pressures on the large platforms that hold power over commodified consumers. This tension, the alternative points of view, and how civil society is organizing the debate were exemplified in a recent panel discussion between representatives of NetChoice (a free market, free expression trade association of large internet platform companies) and Tech Oversight (an advocacy group focused on antitrust and accountability on issues such as disinformation and privacy).

Remember the Public Interest?

In the early to mid-90s, the World Wide Web emerged, representing a means to democratize information access. That read-only Web was a far cry from what we saw at the start of the millennium. By 2004 we were talking about Web 2.0, social media, the internet as a platform, and an explosion of user-generated content and advertising business models. With our personal data as the fuel for these business models, we became “the product.” By 2014, decentralized versions of the internet emerged based on blockchain technology. Much was said about how Web3 would give power back to users and steer it away from the tech giants. We barely had time to digest the spectacular crypto implosion when generative AI took over.

These hype cycles came and went despite growing evidence of technology’s negative impacts. Now, there is a great deal of activity to reassert the public interest. Some prominent executives and scientists have called for a pause in AI development, while various countries and their constituent states launch regulatory efforts. Politicians are working to frame technology issues according to their own values and geopolitical agendas.

Of course, the ethical development and use of technology is a big part of addressing the challenge. If a robot reviews credit applications, analyzes radiology images, or drafts a movie script, we want the data and the algorithms that govern its actions to be free from bias. If AI curates the news we read, we want that curation to be based on facts. Companies that develop tech should adopt principles of ethical development, respecting privacy and scanning for bias in their data and algorithms, as the sketch below illustrates. But self-regulation is not sufficient.
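As one illustration of what scanning for bias can mean in practice, here is a minimal, hypothetical sketch that compares a credit model’s approval rates across demographic groups, a simple demographic-parity check. The group names, data, and choice of metric are assumptions made for this example; real audits combine multiple fairness metrics with domain review.

```python
from collections import defaultdict

# Illustrative bias scan: compare a hypothetical credit model's approval
# rates across demographic groups (a simple demographic-parity check).
# Real audits use richer metrics; this only shows the basic mechanics.

# Hypothetical (group, approved) decisions from a model under review.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
# A large gap flags the model for closer review; what counts as "large"
# is a policy choice, not a technical constant.
```

Checks like this make bias measurable, but deciding what disparity is acceptable, and what to do about it, remains a normative question that self-regulation alone cannot settle.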

One way of aligning the profit interest of corporations with the public interest is through regulation that corrects or mitigates market failures. Europe has moved in this direction with its recently passed Digital Services Act and proposed AI Act; the United States has yet to address these issues.

Resetting the Rules

While the U.S. has been a laggard in comparison to Europe, it is possible that the outcome of two cases before the Supreme Court may spur Congress to act.

Gonzalez v. Google and Twitter v. Taamneh both ask the Court to consider the limits of platform immunity. Current Big Tech business models and, arguably, the AI startup ecosystem assume that platforms are liable neither for user-generated content nor for taking down harmful third-party content. It should be clear to the Court following oral arguments in these cases that upsetting that arrangement would be disruptive and could have unintended consequences. Moreover, from a civil rights perspective, we must preserve free expression. This is especially important for minorities, who are typically the first to be subject to acts of censorship.

If it is clever enough not to diminish the protections of Section 230, the Court will direct action back to Congress and to voters, where it belongs. The issues previously discussed should be addressed via legislative action, creating a new regulatory framework that strikes a balance between platform liability protection, the amount platforms invest in content moderation, and the value our society places on freedom of speech. Liability protection should be narrowed by extending the current carve-outs related to federal crimes, sex trafficking, and intellectual property violations. The new construct should include more muscular provisions for civil rights violations, targeted harassment, incitement to violence, hate speech, and overt disinformation. This is an opportune moment to create a new government body dedicated to this area that will act as the enforcement arm to protect consumers and competition. A new regulatory framework focused on transparency, fairness, auditability, and due process would rebalance the power equilibrium by reducing the commodification of user data flows.

At the same time, it is urgent to accelerate the debate in society to better determine what constitutes acceptable content on major social media platforms. This is not a technical issue, and, with few exceptions, it is also not a legal matter. Rather, it is a normative and moral discussion. There is a fine line between moderation and censorship, as internet platforms have to allow for “lawful but awful” First Amendment-protected speech. Still, an important and desired dynamic effect of internet platform regulation is the reduction of misinformation, which the 2023 World Economic Forum Global Risks Report identified as one of the most pressing risks to economies and societies over the next two years.

As Nick Couldry and Ulises A. Mejias remind us in their 2019 book, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, “the path away from data colonialism will start when we reclaim the capacity to connect that human beings have always possessed, and we decide that today’s costs of connection are neither necessary nor worth paying.” We need to control the tech that surrounds us. Otherwise, it will either continue to be manipulated by bad actors or become a bad actor itself in the hands of a dystopian artificial master that creates an alternative reality. In taking back control, we will start to address economic inequality, enable a more competitive environment, re-empower consumers, protect democracy, and create the conditions that will allow us to continue to enjoy the benefits of innovation.

Authors

Paulo Carvão
Paulo Carvão is a Fellow in the Harvard Advanced Leadership Initiative. Carvão is an accomplished global technology executive with a record of leading large businesses at IBM, where he was a member of the senior leadership team until 2022. Since then, he has acted as a strategic advisor for technolo...
