What's Next for the Digital Services Act

Justin Hendrix / Jan 25, 2022

Last week, the European Parliament gave initial approval to the Digital Services Act, which contains a set of regulations that will have major implications for tech platforms, including for how they moderate content and for their business models. This spring, the European Parliament and the Council of the European Union will debate the details of the legislation before voting on a final version.

To learn more about what is at stake and where the DSA is in the process, I spoke with Mathias Vermeulen, public policy director at AWO, based in Brussels, Belgium.

Justin Hendrix:

Tell us what AWO does.

Mathias Vermeulen:

So AWO is what we're calling a new type of data rights agency, which basically consists of three pillars. On the one hand, we are a law firm based in London, and we have been active in a number of significant lawsuits, especially against some of the bigger companies in this space. For instance, we have been representing Professor David Carroll in his claim against Cambridge Analytica, a case led by my colleague, Ravi Naik. The second pillar of AWO is more traditional GDPR consultancy, with the caveat that we are not predominantly focused on commercial operations, but mostly work with large foundations, universities, and international organizations such as the ICRC. I'm the director of the public policy part of AWO, where we provide advice to a range of clients, including civil society organizations and foundations, on European tech policy developments in general.

Justin Hendrix:

And just a little on your background as well– you spent four years as a digital policy advisor to a Member of the European Parliament– and a notable one at that, someone who's been on the show before…

Mathias Vermeulen:

Oh, I didn't realize that, actually– I'm not such an avid podcast listener. But indeed, I worked at the European Parliament as an advisor to Marietje Schaake, who's currently at Stanford University and a columnist at the Financial Times and a Dutch newspaper as well. Before that, I worked for a number of years for the UN Special Rapporteur on the promotion and protection of human rights while countering terrorism. My background is a PhD in European data protection and privacy law, and I worked with organizations like Privacy International for quite some time.

Justin Hendrix:

Great. So with that background in mind, let's talk about what happened this week. The European Parliament voted on a variety of different options to regulate online ads and to advance the Digital Services Act. So can you explain for my mostly American listeners the process, where we are in it, and what happened?

Mathias Vermeulen:

I'd be happy to, because often if you read newspaper articles– especially from American newspapers– trying to explain a particular vote in the European Parliament, you might have the impression that we just basically signed off on a large bill that would regulate some of the big technology companies in this space from next week onwards. But the fact is that we are not really there yet. We still need to take a couple of steps before we arrive at that specific point. So, to summarize what has happened until now: a little over a year ago, in December 2020, the European Commission issued its long-awaited proposal for a Digital Services Act. The Digital Services Act proposal was basically an update of the eCommerce Directive in the European Union, which is sort of the equivalent of Section 230 of the Communications Decency Act in the U.S.

It was a really ambitious proposal in the sense that one of its main aims is to broadly harmonize the process by which users can notify platforms of illegal content, after which the platforms then have to take that content down or take some other content moderation action. Until now, every European member state had slightly different timeframes and different procedures for how such a notice and action system worked. One of the aims of the Digital Services Act was to harmonize all these procedures and have a specific set of rules for all these different companies in all the different member states. A very big second part of the proposal was that it also created a tiered set of transparency obligations for a range of different entities, ranging from intermediaries and what they're calling ordinary platforms up to a very specific category of so-called very large online platforms.

The Commission really, I think, took inspiration from the infamous Spider-Man quote, “with great power come great responsibilities.” And so they defined this specific category of very large online platforms as platforms with more than 45 million monthly active users– a figure that roughly corresponds to 10% of the overall population of the European Union. The idea of the European Commission was: well, these specific platforms play such a crucial role in facilitating our public debate and safeguarding people's freedom of speech, and they are so important from an economic perspective as well, that we can ask a little bit more of them in terms of transparency obligations and other responsibilities compared to much smaller platforms that don't have the same sort of scale.

And so the really crucial part of this new set of due diligence obligations is that very large online platforms will now be forced to make a risk assessment of a whole set of societal risks that their products and design decisions can cause. To give you an example, they would need to assess in advance the extent to which their products could lead to specific categories of people being exposed to illegal content as defined in one of the 27 European member states, or to what extent some of their targeting practices could lead to users seeing discriminatory ads– job ads, for instance, being directed only towards certain categories of people or not being shown to people of color.

And then there are a couple of different definitions of societal risks. The idea is that after such a broad risk assessment, companies would need to take risk-mitigating measures. And I think the point of the proposal is that it doesn't really say specifically, well, if you have been exposed to a specific piece of content, you are forced to take it down; instead it says, well, companies actually have the freedom to decide what would be the most effective, necessary, or proportionate content moderation decision– is it for us to demonetize a specific piece of content? Is it better to downrank it a little bit, or to remove it after all? And then, after these risk-mitigating measures have been taken, the European Commission proposed that there be an independent audit, which can then check whether these risk-mitigating measures have actually been necessary and effective.

For instance, if a platform has identified that its recommendation system disproportionately exposed people to COVID-19 misinformation, and it thought, well, one of the risk-mitigating measures we see is to attach fact-checking warnings to those messages, then this independent auditor should, in theory, assess to what extent those fact-checking efforts have been effective. And so I think that is a more systematic approach to content moderation, which moves us away from the whole whack-a-mole approach of ‘is this piece of content really illegal or not,’ ‘who should define what is harmful content,’ and so on.

There are many different aspects of the original proposal that we could go into. Long story short, the proposal came out in December of 2020, and then throughout last year, 2021, the two different European institutions started looking at the text. On the one hand, you have the Council of the European Union, where the relevant ministers of the member state governments sit together; and on the other hand, you have the European Parliament.

Justin Hendrix:

Before we get into the specifics of the amendment process and some of the decisions that were taken this week– you mentioned your work on the GDPR, the General Data Protection Regulation. How does the DSA fit within or relate to that framework in the broader EU tech regulation space? Can you explain how these pieces fit together?

Mathias Vermeulen:

It's a very good question, and there's a long and a short answer to it– and we probably don't have enough time to really get into the long one. But in a nutshell, the Digital Services Act is what in EU lingo is called a horizontal instrument. So everything that's written down there in terms of new obligations and responsibilities will apply to every single platform– very large online platforms and intermediaries alike. All these companies are also bound by that other horizontal instrument, the GDPR. Now, it's going to depend on what the final text looks like to see how these two instruments will actually interact with each other. But what is really clear is that it will be tricky, which is one of the reasons why the Parliament asked the Commission to provide guidelines on those interactions between the DSA and existing laws. For instance, there is a very impactful new Article 31, which would basically force platforms to hand over data, including personal data, to academic researchers or to auditors if they're asked to do so by a regulator. How that whole process– that whole data sharing infrastructure, all the mechanisms you would need to set up to make that happen– would function in a GDPR-compliant manner will be worked out later.

That is going to take quite some imagination and thinking about how exactly it will work. But what often happens in EU legislative debates is that the European Commission, Council, and Parliament deliberately set that specific problem aside for now. What they will do is first agree on a broad set of general principles in the Digital Services Act. Then, in a separate process, once the DSA and all these principles are agreed on, they will draft the nitty-gritty details of how, for instance, the access-to-data regime needs to be GDPR-compliant in a so-called delegated act– actually a separate piece of legislation that focuses on that one specific topic: how is this going to happen, how can you make access to platform data for researchers GDPR-compliant? And that can end up being a document which by itself is probably going to be as long as the whole DSA. So the principles will be in the DSA, and the actual thinking on how we are going to do this in practice is being pushed aside for the moment– we'll solve it later, basically.
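
Since the delegated act Vermeulen describes does not exist yet, any concrete Article 31 mechanism is speculation. Purely as illustration, here is a minimal sketch of one standard GDPR-minded technique such a regime might build on– keyed pseudonymization plus field minimization before records leave the platform. Every record shape and field name below is hypothetical; only the Node.js crypto API is real.

```typescript
// Minimal sketch, assuming an Article 31-style data handover would lean on
// pseudonymization and field minimization. All names here are hypothetical.
import { createHmac } from "crypto";

interface PlatformRecord {
  userId: string;        // direct identifier: must not leave the platform
  email: string;         // direct identifier: must not leave the platform
  contentId: string;
  wasRecommended: boolean;
}

// What a vetted researcher might receive: no direct identifiers, with user
// IDs replaced by keyed pseudonyms that can't be reversed without the key.
interface ResearchRecord {
  pseudonym: string;
  contentId: string;
  wasRecommended: boolean;
}

function pseudonymize(records: PlatformRecord[], secretKey: string): ResearchRecord[] {
  return records.map((r) => ({
    // HMAC-SHA256 keeps pseudonyms stable across the dataset, so researchers
    // can still count per-user exposure without ever seeing the userId.
    pseudonym: createHmac("sha256", secretKey).update(r.userId).digest("hex"),
    contentId: r.contentId,
    wasRecommended: r.wasRecommended,
    // email is deliberately dropped: field minimization.
  }));
}
```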

Justin Hendrix:

So let's talk about some of these specific amendments and some of the things that were really at play in the conversations in the Parliament. There was a lot of discussion around targeted ads– that's been a point of discussion this week in the United States as well. We've had a Senator and a couple of Congressional Representatives put forward a proposed bill banning surveillance advertising. What is the DSA going to do about targeted advertising?

Mathias Vermeulen:

So this has indeed been one of the few, I would say, politically controversial topics: what should Europe do? What should the European Union be doing on top of what already exists?

Justin Hendrix:

This has huge implications for the business models of the platforms.

Mathias Vermeulen:

Exactly. And in theory, I think if you look at the GDPR, the General Data Protection Regulation, and its cousin, the ePrivacy Directive– if you really read through the letter of the law and some of the principles and practices that are already laid down by the GDPR– then you could easily come to the conclusion that you don't actually need any extra legislation to ban targeted ads, whatever that phrase may mean. And we can come back to that in a second. So in the original European Commission proposal, nothing was really said about banning targeted ads; the purpose of the Commission proposal was more like, well, there is still a lot that we don't know about how the online ad tech ecosystem works, and we are going to impose very specific transparency obligations that make that whole ad tech supply chain much more transparent.

So that was how far the European Commission wanted to go. In the Council, where the member state governments gather, they basically agreed with that approach, and they said, yes, we already have the GDPR and the ePrivacy Directive– those two instruments will deal with this topic. They already deal with what large companies can do with our personal data, how they can use that data to target people, and so on. But the Parliament, or at least a significant part of the European Parliament, said, “well, that might be right– if you really read the letter of the law, that is possibly even right. But we acknowledge as an institution that there is a really big enforcement problem with the GDPR, and we need mechanisms to address this issue. We have those laws on the books, but actually nothing really happens, because for a whole variety of reasons the law itself isn't actually being enforced.”

And then you had a couple of political groups– especially from the left of the political spectrum, like the Greens and the Social Democrats– which argued, well, “many of these societal problems that are being caused by very large online platforms, like the spreading of misinformation, hate speech and so on, are actually the result of the quote-unquote business model, which focuses on targeted ads. And so if the DSA wants to address these negative societal effects, we have to go further than the GDPR in this Digital Services Act.” It was a long discussion– the European Commission didn't really agree for a long time, and just like the Council, it didn't really agree with that analysis. And so a whole range of measures was proposed in the Parliament. You had the far left proposing an actual ban on the use of personal data and profiling techniques for every single actor– a real ban on targeted ads. That was not agreed upon by the European Parliament.

So there was no majority for that. Then we got into a couple of more nuanced provisions. And I think a really important one that was approved in the European Parliament is an actual ban on dark patterns– companies trying to quote-unquote trick you into consenting to your personal data being used for targeted advertising, where you can only use their service if you press consent, the consent option is most of the time a very large green button, and you really have to scroll through the page to find the “I decline” option. So it basically regulates the design of how consent options are shown and how you give your consent. That got a majority in the European Parliament.

And I think the proponents of such a ban on dark patterns also see this as a way to end the infamous cookie banners that many website visitors in the EU are always getting. If this passes in the final version of the Digital Services Act, then if you indicate in your browser, for instance, that you refuse tracking, that choice will be binding and you would never have to press consent on any web page, basically.
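
The Parliament text doesn't name a technical mechanism for such a binding browser signal, but the existing Global Privacy Control proposal– whose `Sec-GPC: 1` request header signals that the user refuses tracking– gives a sense of what honoring one could look like. Below is a minimal sketch under that assumption; only the header itself comes from the GPC specification, while the Express routes and the `renderPage` helper are hypothetical.

```typescript
// Minimal sketch, assuming a Global Privacy Control-style browser signal;
// the DSA text itself does not specify this mechanism.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Read the GPC signal once per request and stash the result.
app.use((req: Request, res: Response, next: NextFunction) => {
  res.locals.refusesTracking = req.header("Sec-GPC") === "1";
  next();
});

app.get("/", (_req: Request, res: Response) => {
  if (res.locals.refusesTracking) {
    // A binding browser choice: no consent banner is ever shown.
    res.send(renderPage({ showConsentBanner: false }));
  } else {
    // No signal: the site may still ask for consent the ordinary way.
    res.send(renderPage({ showConsentBanner: true }));
  }
});

// Hypothetical stand-in for a real templating layer.
function renderPage(opts: { showConsentBanner: boolean }): string {
  return `<html><body>${
    opts.showConsentBanner ? "<div id='consent-banner'>…</div>" : ""
  }</body></html>`;
}

app.listen(3000);
```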

Justin Hendrix:

So that's the kind of thing that we would have to really see how it plays out, right? You know, what is a dark pattern? How should these consent mechanisms work? The legislation doesn't say exactly. I assume companies will ultimately engage in some kind of regulatory back and forth– they're going to figure out what that looks like and how the consumer internet changes.

Mathias Vermeulen:

Yeah, exactly. I'm just going to check the exact language because I happen to have it in front of me right now… So until now I have indeed been talking about a ban on dark patterns, but the actual text of the amendment that was voted on doesn't mention this term. It really speaks about online interface design and its organization. And it already gives a couple of clues about practices that could be considered dark patterns: for instance, giving more visual prominence to any of the consent options, or repeatedly requesting that a person confirm they really want to object to the processing of their personal data, and so on. So there are already a number of elements in there, but I'm pretty sure these are still going to be tweaked.

And I think the third big thing that was voted on in terms of targeted ads was a ban on using sensitive data– data that can reveal your political opinions or your sexual orientation– prohibiting advertisers from using that specific category of data to target you with ads. Many privacy lawyers and data protection experts in the EU would argue this was actually already prohibited if you look at the GDPR. But there is one provision which also speaks about prohibiting targeting on the basis of inferred data, and that basically isn't covered by the GDPR– it would actually broaden the scope of the types of data that advertisers are prohibited from using to show you personalized ads. So this is basically what has been agreed on by the European Parliament with relatively stable majorities. And now the big question is– and here we come back to where we are in the process– basically, what has happened this week is that the European Parliament has signed off on its negotiation position.

From next week onwards, the Council and the European Parliament are going to negotiate with each other, for at least three to six months in the quickest scenario. And on the basis of their negotiations, we will arrive at the final text of the Digital Services Act, which we expect to be adopted at the very, very earliest around the 8th of April, which is just before the French presidential election. If that doesn't succeed, we will probably have a deal by the beginning of summer. We are really focused on French events and the French political calendar because France holds the presidency of the Council of the European Union for the first six months of this year. And then the companies get a couple of months to implement those obligations before it actually becomes the law of the land in the European Union. So we're speaking about the first quarter of 2023, when all this will become reality.

Justin Hendrix:

Let me just ask you about a couple of other elements, just so the listener understands how these things will play out with regard to the provisions around speech and harmful content. I understand that the current approach will still, to some extent, leave it to EU member states to make their own decisions about what is legal or illegal– certain forms of speech may be regulated country by country. How does that work in practice?

Mathias Vermeulen:

We could fill a full episode with this particular question, but in a nutshell, it is indeed up to the national member states of the European Union: they have the competence to decide for themselves what types of speech they deem to be illegal. And this reflects national sensitivities– for instance, Holocaust denial is criminalized in different ways in EU countries, and different countries have very specific ideas about what is and isn't considered hate speech. And the European Union has said, there is still so much difference among the member states that we will leave it to the national member states to decide what is legal and what is illegal.

And so I think you have to make a distinction between illegal content and the sorts of actions by the platforms that are seen as resulting in systemic risks. Unlike the UK's Online Safety Bill, for instance, the DSA doesn't define what harmful content is; it speaks about systemic risks that can be the result of specific activities of the platforms. So the first part of the DSA works like this: when somebody notifies a platform that they think an illegal piece of content has been found, or is still on its website, they have to file a notice, which needs to be substantiated and needs to identify the specific piece of content. The platform then needs to respond within a given timeframe and remove the content if it really is illegal. The DSA harmonizes these notice and action mechanisms, which previously differed from EU country to EU country.

There's a slightly different system when a piece of content is not illegal but violates the terms of service of a specific company. Then it's left much more to the discretion of the company to decide whether something is in violation of its terms of service, and which measures, other than removing that type of content, would be the most appropriate to apply. And then indeed you are talking about demonetizing specific pieces of content, no longer recommending specific categories of content, or issuing temporary bans on users. So the whole catalog of potential options is much wider than the binary “leave it up or take it down” decision.

Justin Hendrix:

Are there other peculiarities of the DSA that you feel people should understand? I know there are some specific provisions, for instance, around advertising to children, and other elements around children.

Mathias Vermeulen:

So yes, this ban on showing targeted ads to kids is in there as well. You could argue that that was already covered by the GDPR, but I think it's interesting to have it explicitly mentioned in another piece of legislation with different enforcement mechanisms. I do think that one of the most important elements of the DSA– which is also interesting for the United States, for that matter– is the provision that actually forces companies to hand over platform data to auditors, regulators, or independent researchers. I think that is absolutely crucial to develop a body of evidence that demonstrates the actual scale of the problems we're talking about. We shouldn't be depending on anecdotal stories about YouTube's recommendation system, for instance, or relying on whistleblowers like Frances Haugen to come forward with internal Facebook research to demonstrate, okay, this is actually the real scale of the problem– rather, we should be setting up a mechanism that allows third-party audits and scrutiny of these big companies.

I think that is going to be tremendously helpful, not only for citizens and regulators in the EU, but beyond as well. The DSA is important from a content moderation perspective, but you really should see it as a sort of data-generating machine too, one that would probably allow much more targeted interventions on a range of very specific problems in the future. So this whole access to data, this ability to quote-unquote look under the hood of the platforms– I think that's probably going to be the one provision that has the biggest impact in practice.

Justin Hendrix:

There are a lot of folks in the U.S. who have come to the conclusion that the EU is going to be the de facto regulator of our Silicon Valley companies, for better or for worse. I want to get a sense of the politics of the DSA in these next months. How would you describe the forces that have aligned in opposition to the DSA, and what are their prospects at this point? And the forces that support it– what are their prospects at this moment? I assume there are some interesting coalitions that have formed.

Mathias Vermeulen:

What has surprised most observers of this particular piece of legislation is the speed with which the different political parties were able to come to an agreement on all these specific topics that we just discussed. I think if you had asked anyone two to three years ago, ‘how long is it going to take to arrive at the final Digital Services Act?’ people would have said, ‘it's going to take us as long as the GDPR did’– so three, four, or five years before we would actually see a final piece of legislation. But quite surprisingly, there is really a very high level of agreement among all the political groups, from the left to the right, that more responsibilities and obligations actually need to be created, especially for the specific category of very large online platforms.

So I think there is a lot of agreement on the structure of the Digital Services Act and its rules– principles such as “the bigger you are, the more substantial your transparency obligations are,” or the need to beef up capacities for third-party investigation and scrutiny rather than relying on self-regulation, on the word of these companies, and on the transparency reports they voluntarily provide. And on a harmonized notice and action provision, too, I think there was a lot of agreement among all the mainstream political parties.

I think the most resistance we have seen until now is from countries like Poland and Hungary, which are having a number of significant issues with freedom of speech domestically as well, and where many of the ruling political parties rely heavily on some of these very large platforms to reach their audiences, boost their messages, and so on.

And so, for instance, some of the censorship arguments that you often hear in the U.S., especially from the Republican side of the aisle, resonated a little bit among Hungarian and Polish MEPs– and in Slovenia as well, where you have a slightly more right-wing government in power these days. But other than that, and except for this question of how you should regulate online ads in the DSA, there is a really surprisingly broad agreement on the original text of the European Commission. And I think you have to give the European Commission a lot of credit for coming up with a relatively good proposal in the first place, because I still remember, some three years ago– at the height of Brexit and Donald Trump and so on, and the emergence of national fake news laws outlawing different types of speech in the European Union– we really discussed at one point the option of getting rid of the whole principle of intermediary liability altogether.

And I think if that had been in the European Commission's proposal, we would have had a much more difficult and toxic debate. But there is this whole idea that you shouldn't hold platforms liable for the content their users upload– except for categories defined by national laws– and that you should always be able to say what you want on these platforms. The fact that this principle of intermediary liability was kept, and the fact that the rule that platforms shouldn't have to monitor everything happening on their sites (the so-called no general monitoring obligation) was retained as well, really created this big sense of agreement on the direction of the DSA.

And the European Commission often used this very tired metaphor, saying ‘we don't want to become a ministry of truth.’ I think it was a very good idea to focus on processes and to hold platforms responsible for things they can actually be held responsible for, like the design of specific features or determining what to show in people's newsfeeds. Those are things you can hold platforms accountable for; you shouldn't hold them accountable for every single individual piece of content that a user wants to express on their platforms.

Justin Hendrix:

What role has industry lobbying played in this process?

Mathias Vermeulen:

I mean, it's a favorite topic for many publications, and of course there has been a massive lobbying campaign from a lot of different players, right? Not only the Googles and Facebooks of this world, but also, for instance, cloud providers, who really didn't want to be included in the scope of the Digital Services Act. You had publishers who basically asked that platforms give specific publications a heads-up before making any content moderation decision. You had lots of NGOs and civil society organizations getting active as well. Especially on the topic of online ads, the companies were very active, with a very big campaign both in print and on billboards. You couldn't walk around Brussels without bumping into a Facebook advertisement saying ‘targeted ads are good for the economy.’

And that this is what is going to get us out of the economic recession after COVID, and so on. So definitely lots of money was thrown at that lobbying campaign. At the same time, I think most of the companies were potentially even more worried about the Digital Markets Act, which focuses more on the antitrust and competition aspects, because there every single word and comma can have a direct consequence for your business model. But I do think that, in the general context of the techlash, members of the European Parliament especially were quite united in their determination to create new rules for what they're calling ‘the wild west online,’ or any of these other metaphors. At the same time, the lobbying power of the big platforms has never been as strong in the European Parliament as it is vis-a-vis the Council, for instance. Many of the civil society groups are organized in Brussels and form coalitions in Brussels, but they are much less well equipped than the platforms in all the capitals– in Paris or Amsterdam or Vienna– whereas the companies, of course, do have specific offices in the member states and are much more aware of what's happening in those capitals.

And so they have a much bigger influence on the positions of the governments of the member states. That's why you always have a slightly more conservative position in the Council and a more progressive one– if you want to call it that– in the European Parliament. And traditionally, the real lobbying power of industry is much bigger in the negotiation process that we are now entering. So I think many of the very progressive things in the European Parliament's version will probably not end up in the final text, partly because that's the nature of politics– people and organizations negotiate with each other– but also because the lobbying power of some of the companies that will be affected by the Digital Services Act is much stronger vis-a-vis national governments than vis-a-vis members of the European Parliament.

Justin Hendrix:

So I just have one last question. When you take into consideration the progress on tech regulation in the EU over the last several years, from the GDPR through to the DSA, are you optimistic that you're going to be able to address the harms and the real problems that have been at play? Do you feel that progress has been made– and progress that is commensurate with the challenge?

Mathias Vermeulen:

I mean, I think we have a moral duty to be optimistic in this sense. And I do think that what's on the table right now in the European Union is our best chance at tackling some of these big societal challenges that are being caused by the behavior of online platforms. Of course, nothing is decided until everything is decided, so if you ask me again in six months, I may have a very different opinion. But based on the positions of both the Council and the European Parliament, I am pretty optimistic that this is really the best that we can do at this point in time. And I really expect a lot from the DSA in terms of getting us much more information about the actual causes and effects of some of the design features and product decisions these companies have been making, and what their effects are on our societies. So yes, I am optimistic, and I just hope that relatively soon the U.S. can follow suit and maybe arrive at a set of joint principles with the European Union. I know there is the EU-US Trade and Technology Council, where there are talks about what we can agree on in the transatlantic relationship. But yes, I'm an optimist. And I think this is as good as it's going to get for quite some time.

Justin Hendrix:

Well, Mathias, thank you so much for taking the time to walk us through all of that. And I hope your optimism bears out, not just in the EU, but with regard to the potential for the United States to follow suit as well.

Mathias Vermeulen:

Fingers crossed.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...