Durov's Arrest and the Shadow Politics of Platform Regulation

Robert Gorwa / Sep 16, 2024

Robert Gorwa is a postdoctoral researcher at the WZB Berlin Social Science Center. His first book, The Politics of Platform Regulation: How Governments Shape Online Content Moderation, was recently published open access by Oxford University Press.

An image of Pavel Durov superimposed on the French flag.

For a minute, take a break from the endless stream of international tech policy goings-on and imagine a hypothetical. You’re a policymaker in a medium-sized country, and a technology company entered your market a few years ago, offering a set of purportedly innovative products that it had begun marketing to your citizens. These included physical hardware (some kind of internet-enabled personal computing device), as well as various complementary platforms. After a few years, the product that sticks, becoming widely popular, is a simple application that lets users exchange messages with one another and broadcast information via large groups, somewhat akin to the online bulletin boards of old.

After flying under the radar for a number of years, a major spike in users in your country occurs after a certain local celebrity mentions that they are a fan of the product, and the company capitalizes with a well-executed influencer marketing campaign. The company’s products become more and more popular, especially with young people — after all, who doesn’t love secure, low-cost messaging — but as the product becomes more pervasive, worrying reports about the company begin to pop up on blogs and in the global media.

The first public relations crisis follows a research report that suggests that the privacy-preserving practices of the firm are actually not as good as the company claims. While the company says its messenger is secure, privacy engineers and cryptographers say otherwise, showing the limitations of the secretive and ‘proprietary’ encryption protocol that the firm apparently deploys somewhat spottily across the platform’s various surfaces. Other issues with the tech firm’s business practices steadily emerge: the application now used by millions of your citizens has lax moderation practices, and channels on the app allow for the distribution and resale of counterfeit, unsafe goods, as well as content that is illegal and dangerous to public health and safety.

Over time, public attention paid to the tech firm’s products grows. Academics begin to study its various platforms, its governance issues, and propose new regulatory models involving various aspects of competition policy, data protection law, and online content regulation. A transnational array of rights holders, consumer protection groups, and even certain industry competitors start agitating for regulation to rein in the tech firm’s business model. The company responds by ratcheting up its lobbying expenditures—exponentially growing its policy staff in international capitals, while also funding think-tanks, consultancies, and academic institutions around the world to produce research indicating the positive economic and social impacts of its products. Nevertheless, demand for regulation in your country, its biggest market, seems to be building to a level where it can no longer be tamped down by the traditional corporate influence playbook.

If you’re a policymaker in this country at the moment, what are the next steps that you might take? You are being pressed by your fellow leaders (some of whom may be elected and responsive to constituents, others of whom may be civil servants tasked with protecting the public interest) to try and shape the content moderation practices of the company — to try and get it to take its governance practices seriously, or, at the very least, to invest substantially in the processes through which it seeks to detect and police the most clearly illegal forms of material circulating among its users, such as child sexual abuse imagery.

What do you do? What is your strategic playbook? What are the political conditions and institutional dynamics that are likely to shape your response, as well as your chances at success?

Convince, Collaborate, Contest

That hypothetical scenario (which I have editorialized in light of recent events) appears in the first section of my new book, The Politics of Platform Regulation: How Governments Shape Online Content Moderation. I take as a jumping-off point the slew of great recent research outlining the intricacies of what in different circles is called trust and safety, integrity operations, platform governance, or content moderation — in a nutshell, how platforms create and enforce rules at scale — but then move beyond it to look, from a comparative perspective, at the strategies being deployed by policymakers around the world who seek to directly intervene in those processes of rulemaking and enforcement.

At the core of the book is a framework for thinking about government efforts to shape the content governance practices of platforms with a variety of business models: what I call convincing, collaborating, and contesting. These range from the least formal, ad hoc efforts to ‘convince’ platform leadership to change their content moderation rules or enforcement through carrots and sticks, to the more formal, and institutionally complicated, efforts at ‘contesting’ what companies are doing through binding rules and regulations. A core intuition is that different regulatory moments can have quite different underlying politics, and policy actors seeking to deploy various strategies face limitations not just on what they can do, but also on when and how. In other words, the full platform regulation toolbox is not always available to all actors at all moments.

Concretely, I think that this kind of approach can help provide some insights into a situation quite similar to the hypothetical that I presented above: the much-discussed recent arrest of Pavel Durov, the cofounder and CEO of Telegram, after his private jet touched down at a commercial airstrip in the Paris suburbs on August 24th. While that story is still developing and some of the details remain unclear, a few takeaways from the book can help observers parse this important and high-stakes tech policy moment as it unfolds in the coming weeks.

Breaking Down the State

Firstly, it’s often helpful to break down governmental actors into the precise sub-actors that are actually driving what is happening on the ground. Commentators might paint the showdown in grand terms as one between France and Telegram — and certainly, this story can be told from a macro, geopolitically tinged perspective, especially given the role that Telegram is playing in the Ukraine conflict and its purported links to Russian espionage and disinformation campaigns — but the key French actors, at least according to more detailed reporting, seem to be a set of lesser-known domestic law enforcement agencies. Perhaps the leading organization involved, known as OFMIN (office expert pour lutter contre les violences faites aux mineurs, or the Expert Office for Combating Violence Against Minors, generally abbreviated in French as simply the ‘Office for Minors’), is quite new, having been founded less than a year ago in November 2023.

A special institution inside the investigative and serious crimes wing of the French police, OFMIN was founded with a staff of approximately 30 and a wide child safety mandate, ranging from the most serious forms of child sexual abuse, both online and off, to the much softer and less clearly defined issue of bullying in schools. Headed by a former police commissioner who previously held a domestic violence portfolio, the organization has a stated mission of centralizing and improving investigations into various child safety issues, and of better integrating these operations into European and international law enforcement networks. The agency has grown rapidly since its inception, seeking to nearly triple its staff in 2024.

While precise numbers are not readily available, and it is not entirely clear whether OFMIN itself was in charge of reporting groups and users suspected of sharing or producing illegal child sexual abuse material to Telegram (or if it was referring those reports through other branches of French law enforcement), some reporting by French outlets picked up by POLITICO suggests that law enforcement officials in France made at least a few thousand such requests over the past year without receiving any reply.

Although the circumstances here are rather unique, the same basic scenario — government actors in a jurisdiction seeking to get a platform’s staff to take a specific issue more seriously — has perhaps been the central theme of the global platform regulation space since the infamous ‘techlash’ period began in 2016 or so. At least since Germany’s much-discussed effort to force firms to issue transparency reports and comply rapidly with takedown notices via the NetzDG (and the lesser-known voluntary ‘code of conduct’ that preceded it), as my book shows, governments around the world have been experimenting with a wide spectrum of strategies intended to shape corporate behavior in this space, including co-regulatory approaches and more costly forms of binding regulation.

But when ordinary channels of communication are being ignored, and efforts to collaborate voluntarily with the company have been similarly rebuffed, the options available to the leadership of a newly created institution seeking to make an impact — as in the case of OFMIN — are quite limited.

A classic move would be for law enforcement to relay their concerns across the political system, getting the ear of senior elected officials and civil servants and eventually hoping to change the regulatory status quo and bind the hands of Telegram’s executives, forcing a classic legal showdown: comply, or exit.

My book explores how this kind of formal, ‘contested’ platform regulation is often very difficult to accomplish, requiring policy entrepreneurs to pass through many veto points and sites of democratic deliberation. In France, the effort may be doomed from the start, given the potential interest misalignment between civil servants and the executive branch: after all, President Emmanuel Macron was himself the one who conferred French citizenship on Durov just a few years ago as part of an unusual effort to court ‘innovative’ tech leaders to the country.

At the EU level, European law enforcement and security-oriented actors, almost certainly including OFMIN, are already actively engaged in a long-term project seeking to pass sweeping new rules that would affect Telegram and other similar apps, mandating CSAM detection and reporting (including potentially in encrypted settings via ‘client-side scanning’ or other means). This effort, however, remains highly controversial, has met resistance from civil society and privacy-oriented stakeholders across the EU, and has uncertain prospects for success.

As my book puts it, understanding the drivers of platform regulation requires looking at its key actors, their interests, and the institutional channels through which they can pursue them. Independent agencies like OFMIN — or J3, a special unit in the Paris Prosecutor’s Office that also appears to have played a role in the arrest — that have no clear path to collaborating with firms or ‘contesting’ their content moderation practices through formal regulation are thus limited to convincing them to take their demands under the existing legal frameworks more seriously.

Informal Bargains

There is a long history of government actors seeking to persuade company executives to take certain pieces of content down or massage how they apply the rules in certain policy areas. Jillian York’s work is full of examples of world leaders calling up tech CEOs to try and convince them to change certain policies or their approach to enforcement. While threatening that certain platforms will be blocked from a country if they do not change their behavior is a classic part of the authoritarian information control playbook, individual liability for executives has occasionally even been encoded into law in high-income democracies, as in the case of the Abhorrent Violent Material Act, passed in Australia after the Christchurch shooting.

These kinds of informal acts of carrots-or-sticks persuasion can help actors achieve their goals in the short term — there is already some indication that Telegram may be somewhat getting its act together, updating its terms of service and stating publicly that it will ‘significantly improve’ its efforts to prevent ‘criminal abuse’ of the app. There remains an open question, however, about the ways in which this effort may clash with or undermine other efforts made by other parts of the French government. Nevertheless, I wouldn’t be surprised to read in the coming months that OFMIN is more successful at leveraging contacts at the company for its child safety investigations work. If Telegram were to actually start hashing-and-matching unencrypted content, finally start reporting instances of CSAM to clearinghouses like NCMEC, and perhaps build up an actual child safety investigations team — in other words, do the kind of work that the minimally responsible large international platform services all already do — that could certainly have a major and immediate global impact.

In my book, I explore the ways in which less formalized collaborative regulatory outcomes, such as the reorganization of the Global Internet Forum to Counter Terrorism following the New Zealand-led Christchurch Call, can potentially have major effects on company practices, and occasionally, might even be more transformative than classic forms of state-led binding law and regulation. Of course, these are more structured co-regulatory efforts than the very pointy and coercive-feeling arrest of the CEO of a company, something more-or-less unprecedented for a major European country, depending on how you view Durov — as Daphne Keller recently put it on the Moderated Content podcast, is he more of a Ross Ulbricht/Silk Road figure, or better understood as an ordinary tech CEO whose arrest also threatens the leadership of other companies that operate with less impunity than Telegram does?

But Telegram’s always complicated engagement with regulators and various government bodies suggests that this kind of sustained and serious collaborative engagement — the kind which the history of platform regulation indicates can be impactful in the long term — is unlikely. It is not easy to secure these kinds of changes voluntarily from a company whose business model appears diametrically opposed to the changes to the content moderation status quo that policymakers desire.

That said, while some commentators have suggested that the core value proposition of Telegram is centrally about facilitating illegal content and criminality, I think the overall platform economics here are a little less clear. Telegram does not really serve ads, and appears to more or less be operating as a prestige and perhaps ideologically motivated passion project of the Durov brothers. (It’s remarkably hard to give Telegram money; the interface of its app, highly unusually, features no marketing information about the premium subscription features that purportedly help fund the service.) A shoestring operation, operating at a loss and lazily cutting many corners from a security perspective, does not lend itself easily to major, institutionally complex and costly investments in trust and safety.

More and more government actors around the world — law enforcement agencies, media regulators, judges and judicial bodies, as well as elected officials of various stripes — are trying to sink their fingers into platform firms, looking for pain points that they can use to shape the ways that these multinational companies govern the increasingly important digital spaces in which many of us work, play, and communicate. Understanding what these players do, as well as why and how, requires zooming into the domestic politics, coalitions, and conflicts lurking behind the scenes of what otherwise can appear to be crude power politics.

Some of these ensuing conflicts may play out à la Telegram in the public eye, while others will involve the mundane yet nonetheless high-stakes behind-the-scenes work of implementing complicated new regulatory frameworks like the EU’s Digital Services Act. Regardless of what one thinks of Durov, if platform leadership starts ending up behind bars, it’s clear that the stakes — for freedom of expression, for the ability of individuals to access information, for the possibility of safely and securely using our digital devices — will only get higher.
