Théophile Lenoir is a PhD student at the University of Milan.
Europeans like to think that, even if technology is created elsewhere, we at least keep the privilege of regulating its impacts on humanity. This comes from a profound attachment to human rights, which are rooted in the centuries-old European philosophical, cultural and political movement known as the Enlightenment. The essence of the Enlightenment was a call to universality, in fields such as politics, giving equal rights to everyone, or science, assuming that knowledge applies equally everywhere. When members of the United Nations signed the Universal Declaration of Human Rights in 1948, they effectively extended these principles to the rest of the world. The problem with these principles is that they place Europeans in a particular position: each time they defend their own values, they believe they are working in the interest of everyone else.
When social media challenged the values Europeans cherish most, European policymakers worked hard, and in record time, to draft new regulation. These debates, which unfolded in the European Union but also in other Western countries attached to Enlightenment ideals, such as Canada, the United States, the United Kingdom and Australia, took on a universal character. As a result, the EU Digital Services Act (DSA), like other Western legislation, grapples with the thorny issue of finding the right balance between values that are at once deeply European and universal, such as freedom of expression, privacy and transparency.
Can these texts serve the universal ambitions of the ideals they invoke? The problem is that the world has changed since 1948, and values once thought universal now reveal their local nature. Consequently, even if the DSA marks important progress in Europe, from a non-European perspective it may ask the wrong questions. As debates on content moderation continue to unfold in international arenas, including at the United Nations, the relevance of the text outside Europe becomes unclear. In what follows, I will start by outlining the European approach to these values. I will then turn to the problem of their universality. Finally, I will show how the problem is formulated outside the West.
Some European values in content moderation
The debates on content moderation that have taken place in Europe and the United States over the last few years brought important values to the fore: freedom of expression, regulation, privacy, transparency, objectivity. Time and again, these are invoked to make progress on a problem that is eminently political. In Europe, these values are shared across the political spectrum, which allowed the European Parliament to agree on legislation that frames the problem as how best to deal with the tensions they generate.
Freedom of expression and regulation
For example, there is a natural tension between freedom of expression and regulation. It is self-evident: regulating speech limits freedom of expression; letting speech run freely limits the ability of regulators to protect citizens. Policymakers are accustomed to this tension in Western Europe, where laws regulating speech predate social media, and this made online content regulation easier to agree upon. Overall, the DSA asks platforms to take action against content already defined as illegal. For content that is problematic but not illegal, it asks platforms to evaluate the risks their systems pose to society and to take action to mitigate those risks. No speech is forbidden that was not already regulated by law, and grey areas are dealt with from a public-interest perspective.
The situation is very different in the United States, where there are sharp partisan differences over who cares more about freedom of expression and who cares more about regulation. Finding agreement on the right equilibrium between the two is extremely difficult: Democrats blame platforms for doing too little, Republicans for doing too much. The recent so-called “Twitter Files,” which made public communications between the FBI and Twitter during the 2020 election, showed the extent to which both sides disagree about what these exchanges mean for the balance between freedom of expression and regulation.
Transparency and privacy
A second tension is between transparency and privacy. On the one hand, transparency as to how content circulates, how technological systems are managed, and how regulators interact with and control platforms must be the bedrock of regulation. On the other hand, this information can also be used to the detriment of communities or individuals. Platforms therefore like to remind regulators that access to data must itself be regulated to prevent episodes like the Cambridge Analytica scandal (which started as an academic project) from happening again. Recently, the collaborative Forbidden Stories investigations showed the timeliness of these issues. In parallel, however, academic debates continue to underline how little knowledge society has of these systems with which to make informed decisions.
Needless to say, not everyone around the globe cares about privacy and transparency simultaneously. Some governments will use any opening to require full transparency from platforms with little regard for individual privacy, if not outright to collect personal information on opponents. And they will do so without much concern for the transparency of their own decision-making processes. In such places, discussions on how to find the right balance between the two are of little use.
Universality put to the test
These values are often framed as universal in debates on content moderation. After all, they are universal by design. And the Western motivation to spread them around the globe is easy to understand: who wouldn’t want freedom? How could we not fight to ensure that more people can benefit from it? For anyone trying to protect universal human rights, these are the values to defend.
The problem is that universality is increasingly challenged, because individual interests are harder and harder to set aside. This is true everywhere, even in science, where objectivity and neutrality were once thought to be a given. Years of academic work have demonstrated that even ‘neutral’ facts always benefit someone. This does not mean that facts are merely constructed, made up, or fake, nor that everything is relative. It simply means that, by virtue of being facts, they are open for anyone to use, and those who use them end up benefiting from them. In this perspective, chemists benefited from water being H₂O, satellite companies benefited from the Earth being round, and so on. So, if you don’t like chemists or satellite companies, you will realize how much individual power there is behind facts that are seemingly neutral and universal.
This explains the inherent tension between universal and individual interests at the heart of human rights. On the one hand, any effort to protect human rights should serve universal interests. On the other hand, these efforts inevitably lead to some winning and others losing. Because of this inescapable truth, the West, as a broad political entity, is now like everyone else: when it says it does something for someone else, eyebrows are raised. This is the Achilles’ heel of politics in the defense of human rights: by invoking universal principles, the West hides the fact that it also serves its own interests. This is extremely visible in debates on content moderation. The fact that political extremes benefit the most from social media also means that they have the most to lose from regulation. Therefore, the argument that content regulation serves everyone’s interests does not hold.
Because a majority of Europeans still believe they work only to protect universal rights, this tension went largely unnoticed in Europe, which explains why the DSA was adopted in the first place. But the tension also helps us understand why, in the United States, billionaires are so invested in buying social media companies and correcting chatbots. There, the trick stopped working.
European values outside the West
What does this tell us about the DSA and its potential use elsewhere? Invoking universal rights to protect the quality of online speech has been successful in Europe, where people are attached to all these values simultaneously. The question there has been how to find the right balance between freedom of speech and regulation, or between privacy and transparency. But these discussions are of no use in places where people do not care about these values equally. There, the DSA asks the wrong questions.
A question that came up several times at a recent UNESCO meeting on “Internet for Trust” was, “Who watches the watchers?” In Europe, there seems to be a consensus on the ability of regulators to work independently from governments, the media, social media platforms and civil society, and to control the actions of platforms whilst respecting human rights. The right level of transparency also allows other actors to scrutinize the decisions these regulators make.
As a result of this trust in the intent of a multiplicity of actors to serve the public interest, European lawmakers agreed on a text that arbitrates the processes put in place to control problematic content. But even inside the European Union, in countries where attachment to these principles is unevenly distributed, the DSA might fail to meet its objective. It is therefore easy to anticipate that this will be the case outside Europe.
If you are attached to human rights and you work in a country where the government is not, what should you do? Many participants at the conference stressed the importance of transparency from governments and regulators regarding decisions made on content. Naturally, then, the idea emerged of building an international organization that could oversee the work of regulators. But, despite well-intentioned deliberations, the question of the legitimacy of such an organization remains. The challenge for the international community defending human rights is that it can no longer claim that the interests it defends are purely universal. As it is reminded of the ways in which these values served it in the past, finding legitimate arguments becomes increasingly difficult. Beyond the broadest of principles, our common interest often breaks down.
Théophile Lenoir is a PhD student at the University of Milan. Previously, he was Head of the Digital Program at Institut Montaigne, where from 2017 to 2021 he developed the think tank’s research on digital issues. He is the co-author of Institut Montaigne’s note Information Manipulations Around Covid-19: France Under Attack (July 2020), and coordinated the production of various reports, including Media Polarization “à la française”? Comparing the French and American ecosystems (June 2019). He also worked for Reset Tech and was one of the lead authors of The French Information Ecosystem Put to the Test (June 2022), a report by the French Online Election Integrity Watch Group on disinformation during the French election. He is a graduate of the London School of Economics and the USC Annenberg School for Communication and Journalism.