Understanding Systemic Risks under the Digital Services Act

Ramsha Jahangir, Justin Hendrix / Sep 15, 2024

Audio of this conversation is available via your favorite podcast service.

At Tech Policy Press, we’re closely following the implementation of the Digital Services Act, the European Union law designed to regulate online platforms and services.

One of the DSA’s key objectives is to identify and mitigate systemic risks. According to the law, platforms can present four categories of systemic risks:

  • The dissemination of illegal content;
  • Negative effects for the exercise of fundamental rights;
  • Negative effects on civic discourse, electoral processes, and public security; and
  • Negative effects in relation to gender-based violence, the protection of public health and minors, and serious negative consequences to physical and mental well-being.

But beyond these general categories, how do we gauge what rises to the level of a systemic risk? How do we get the sort of information we need from platforms to identify and mitigate systemic risk, and how do we create the kinds of collaborations between regulators and the research community that are necessary to answer complex questions?

Ramsha Jahangir, a reporting fellow at Tech Policy Press, recently discussed these questions with Dr. Oliver Marsh, head of tech research at AlgorithmWatch, an NGO with offices in Berlin and Zurich that works on issues at the intersection of technology and society. Marsh has been leading research on systemic risks and the DSA’s approach, and recently published a detailed summary of his work.

What follows is a lightly edited transcript of the discussion.

Ramsha Jahangir:

Oliver, thanks so much for being here. Maybe let's start by introducing your risk repository work to our listeners. It's very interesting because you've taken a rather practical approach to researching systemic risks. So what's the project about?

Oliver Marsh:

Thanks, Ramsha, for this opportunity, and great to get this message out to all the listeners. So systemic risks are, as you've already alluded to, conceptually quite unclear. Is a risk systemic because it happens multiple times? Is it systemic because it's predictable, or could be repeated in the future? We decided to cut through some of these conceptual issues by focusing on real-world cases that could conceivably be systemic risks. So we began with this idea of a risk repository, which collected observed, real-world cases on platforms and search engines that might be systemic risks under the DSA, taking a very broad definition: things like threats to fundamental rights, to public safety, to civic discourse and electoral processes, and so forth.

Having collected these, we then pulled out seven or so that we considered to be real "gray areas": cases where we thought there might be genuine debate. We distributed them to various partner organizations to see what they thought, whether they would consider them systemic risks, and, more importantly, the reasons they gave for their arguments.

Ramsha Jahangir:

The definitional issues with systemic risks are something that has come under a lot of discussion since the DSA came into effect, but also before. You contend, through your work, that definitional issues don't make it impossible for researchers to support enforcement under the DSA. But you also mention in your research that researchers often reached very different conclusions about whether a given case was evidence of systemic risk, or how clearly that decision could be reached. Could you speak a little bit about that? What were some of those conversations like, and what were some of the tension points when it came to defining systemic risk?

Oliver Marsh:

Yeah, so I can give a fairly specific example. One of the gray area cases that we sent around was a now pretty famous case in which deepfaked audio of a Slovakian election candidate was circulated on a Meta platform shortly before the Slovakian election. It cast the candidate in a bad light, claiming he was talking about, if I remember correctly, making secret payments to certain groups to get them to vote for him. Now, the conceptual question of whether this is a systemic risk is quite tricky, because you have to ask: is this only a risk if it could theoretically impact the election results? Do researchers need to show that there was actual impact on voting behavior? Is it a systemic risk when it only happened once? Or does this sort of deepfake case need to be repeated over and over again for it to count as a systemic risk?

It's not clear in the text of the DSA. And to be clear, the work we're doing at the moment is focused on researchers. One can see how, for enforcers and for platforms, this question of whether something is a systemic risk or not does require a bit more clarity. However, what we found for researchers is that they didn't say, "Well, we don't know if this is a systemic risk or not, so we can't really use the DSA to research it. We are just a bit stuck until there's more clarity." They effectively said, look, there's a bullet point in the DSA that says threats to electoral processes are a systemic risk. This could conceivably be a threat to electoral processes, so we do think there's room to do things like data access requests under the DSA, there's potential interest from the European Commission as a customer for this research, and so on.

So what we ended up arguing in the report is that a quite broad definition of systemic risks would support researchers. It might, in other ways, lead to complications around actual legal enforcement, but for researchers it's extremely helpful if they can just say: look, here's a thing I've observed, here's a potential problem, and it's conceivably part of the DSA systemic risks framework, so we should use the tools made available by the DSA to research it, rather than spending time in advance trying to establish whether something is a systemic risk before even beginning the research.

Ramsha Jahangir:

And one thing that you already pointed to was the issue of data access and transparency, which is at the core of the work researchers do, right? Given the broader lack of clarity surrounding systemic risk in the DSA, what specific questions or challenges, if any, do researchers face in terms of data access and transparency? And how do these challenges intersect with the definition of systemic risks and contribute to the overall difficulty of assessing and addressing these risks?

Oliver Marsh:

Many of your listeners will have heard of the now famous Article 40 of the DSA, which effectively allows researchers to gain access to data from very large online platforms and search engines if, and this is the key point, the research they are doing contributes to the understanding and mitigation of systemic risks within the Union. So the question is: how do you make sure, when you are doing this application, that you are demonstrating your research contributes to understanding systemic risks? And again, we'd argue that this should be conceived of broadly, for instance by the digital services coordinators who will be making some of these decisions, so as not to unduly narrow the scope of research in advance. However, talking to other researchers, it seems that some of the bigger data access request problems are not to do with this conceptual, definitional issue. They're much more to do with delays in the guidelines and the delegated act from the European Commission around researchers accessing non-public data.

Currently, platforms are supposed to be making public data available under what's called the CrowdTangle provision: all platforms should have some version of the now sadly deceased CrowdTangle. Although that is in force now, a lot of the platforms, particularly X, are rejecting applications from researchers that seem to match the systemic risk requirements of the Digital Services Act. So again, a broad outline that says, "When you're doing a data access request, systemic risks should not be unduly narrowly defined," would at least allow researchers to produce that kind of research, find evidence, et cetera, that later on could support legislators, enforcers, and platforms themselves, perhaps operating under a more legally specific definition of systemic risk.

But at the start, we are saying that the problems with data access right now are much more practical. They're about timelines. How long does it take to get the data? How much effort do you have to put into an application? And this question of, is this or is this not a systemic risk, is not a particularly live one right now. But if the definition did become narrower later, that would become a problem.

Ramsha Jahangir:

I'm thinking, again, of how we can improve the clarity and consistency of assessments regarding systemic risks. What sources of data are already accessible? What information is already out there that researchers can work with? Are there any metrics that the research community would suggest, and are there new metrics that you think need to be developed to assess risks?

Oliver Marsh:

The range of potential systemic risks is so broad that it's often quite useful to bucket them into various categories. The organization CERRE did a very useful report on metrics, specifically in relation to electoral disinformation, for instance, or not just disinformation but electoral threats more broadly. And I do think that's a good approach: to think about what metrics might be applicable in a particular case. Metrics are useful. They allow for consistency of evaluation, and they allow for watching change over time. But it is important that organizations that don't deal in easily measured topics are able to contribute. So, for instance, it would be useful for researchers to be able to collaborate with investigative journalists to locate potential risks and harms and then develop metrics that fit that case, rather than starting from "here are the standardized metrics of systemic risk, and everyone must fit into these, whatever their research project is."

Ramsha Jahangir:

Absolutely. It's also the question of what's enough and what's good enough, right?

Oliver Marsh:

Exactly.

Ramsha Jahangir:

Yeah. And putting things into a box can often be narrowing in this case. There's also a risk for enforcement and compliance. So being too prescriptive is also a challenge.

Oliver Marsh:

What we are attempting to do, fundamentally, is deal with risks, not harms. We're not trying to show that something has actually happened. We are, in broad policy terms, trying to say to platforms: have you considered a bad thing that could happen in the future? And have you done reasonable and proportionate things to mitigate the possibility of it happening? Now, the language of systemic risks is there to make sure the platforms aren't liable for any possible bad thing that might happen on the platform. The idea of systemic risks does carry this sense of being larger in scale than a single incident. But again, in a broad sense, we are trying to say to the platforms: look, this isn't about optimizing for certain metrics so you can tick a compliance box because you've hit certain numbers while you haven't actually dealt with the fundamental underlying problems. And as researchers, as journalists, as civil society, we want to be able to point to various possible risks without always having to quantify them.

Ramsha Jahangir:

One thing you already mentioned is looking a bit beyond researchers to the role of other actors in this space, particularly the Commission and the DSCs, who of course are also involved in judging platforms' responses to identifying and mitigating systemic risks under the DSA. What does that collaboration look like right now, and how do you ensure that your research is aligned with their expectations, but also with users'? Are there opportunities for direct collaboration with these external stakeholders, who also have a part to play? There's also the role of auditors in judging what's enough and what's not. So what are the conversations and opportunities like with the Commission and DSCs directly on this topic?

Oliver Marsh:

In a world where we're often complaining about regulators, I do want to give a shout-out to the Bundesnetzagentur, the federal network agency in Germany, who have been really interested and really happy to have dialogues with us about the work we've been doing and to ask our views. We are now sitting on their advisory board, which has been set up precisely to make sure that these kinds of views are represented. So hopefully that can be a model for other regulators. With the European Commission and the Digital Services Act, compared to some other legislation, for instance the AI Act, we do feel that civil society has been heard and does get a hearing. However, at present, engagement tends to exist in two forms: either very large roundtables, where it's very hard to have a proper discussion, or more ad hoc communication, often taking place in Brussels.

And neither of these is particularly ideal for this question of systemic risk. It means that if something is discovered at short notice, you either have to hope that a roundtable is coming up and covers something of interest to you, or rely on these ways of speaking to people in Brussels. So as civil society more broadly, we have been advocating that there need to be more thematic, smaller-scale, regular opportunities for transparent engagement with the Commission, where we can share research topics and have proper dialogues rather than large roundtables, in a way that is a bit more structured and allows for planning ahead. We've also seen a lot of high-profile requests for information going out from the European Commission to certain platforms. These give some sense of the Commission's priorities, but they can be quite unpredictable, and they result in us all scrambling around thinking, hey, should we be saying something about this? Have we got existing work on this? Which isn't ideal for long-term planning.

And the final point I'd make, just on the Commission and the platforms more generally: a key point of the systemic risks framework is that the platforms are supposed to have already assessed for systemic risks and released systemic risk reports, the first of which will be coming in November. Those will discuss, or at least show, the results of the systemic risk assessments that were done last year. So that's already quite delayed. And this annual cycle of risk reports, which we don't even know how detailed they'll be, is not a particularly regular drumbeat of information from which we can learn, develop research projects, and point to gaps in current assessments.

So I would say that the Commission has, overall, shown a willingness to engage, to take on board what civil society is saying, and to use it to iterate this concept of systemic risk, which is very welcome. The issue is that the mechanisms for doing so are, at the moment, a bit infrequent and unstructured. And for something as important as working out how entire research projects and research fields will grow up to support this regulation, it would be good to have something a bit more structured, with a bit more sense of forward planning.

Ramsha Jahangir:

Absolutely. And just in terms of looking forward: you've already shown the lack of transparency in this environment and the lack of structured conversations with regulators as well as companies. As we anticipate these reports coming out in a matter of a few months, what opportunities are there for the research community to be constructive in its feedback, but also to assess these reports and get as much out of them as possible? And could this create further opportunities for research?

Oliver Marsh:

Of course, we do want to make sure that, looking forward, it's as positive as possible. I do think we have to caveat that pretty much everyone thinks the first round of reports is going to be, as the phrase goes, "the first pancake," as in not very good, but hopefully subsequent pancakes will be better, just because everyone is still learning and it's the first time this will happen. But I think there are various ways in which civil society, human rights defenders, researchers, and journalists can point to areas in risk assessments and say: okay, this was a good attempt at, for instance, dealing with fundamental rights risks, but the mitigation measures you put in place won't actually work for the communities we represent, for the following reasons. Perhaps they're a bit inaccessible or hard to understand. So there are opportunities for collaboration there.

There are also opportunities to go beyond data access requests and form genuine collaborative partnerships, or at least a bit more back and forth, with platforms: effectively saying, okay, we would like to do a data access request, but when we get the data, could we also have a discussion about how to update it, or how to do a second quick request based on the data we've received? Rather than a data access request where you wait weeks for a response, and then it isn't quite what you expected, and you have to do another one, et cetera.

So there are various opportunities from these reports, potentially, to see what the platforms have already done and where our research and expertise could contribute, and to consider the kinds of research relationships and partnerships that might actually work to meet those goals. Whereas, as I say, at the moment there's quite a lot of shrugging of shoulders. As organizations, we know the topics we want to work on and the sort of expertise we can bring. We just don't know how it fits into this broader work by platforms and the Commission that's going on behind the curtain at the moment.

Ramsha Jahangir:

And one last update that everyone has been keeping an eye out for is the Delegated Act on Data Access. Do you have any sense of when that will finally be out?

Oliver Marsh:

I asked this question only last week, and it is still unclear. Regulation has to strike a difficult balance: it shouldn't be rushed or hastily put into place just to meet urgent needs, but when regulation comes out as slowly as it currently is, various harms and problems can build up in the meantime. It was exciting to see things like the Commission publishing interim guidelines on election integrity for platforms, which didn't have the force of a delegated act but did at least say: okay, here are some directions for thinking and potential ways of structuring partnerships. More things like that would be helpful if delegated acts are going to take time while they go through all of the necessary legal scrutiny.

As I say, I think we do need a bit more of a sense of trying things out, putting things out there, and getting commentary, rather than this annual cycle of reports that might be bad, after which we have to wait a year for another round. Or, as I say, large roundtables that don't really give us much chance to communicate. Just an opportunity for a bit more experimentation, asking, does this look good? And for us to be able to say yes or no. I think that would help smooth and speed up this learning process, in a way that means the complications of Article 34 become real-world discussions rather than more wait-and-see and conceptual debates.

Ramsha Jahangir:

Absolutely. Thanks so much for your very, very concrete take on this, and thank you so much for being here and sharing your candid insights.

Oliver Marsh:

Thanks Ramsha.

Authors

Ramsha Jahangir
Ramsha Jahangir is an award-winning Pakistani journalist and policy expert specializing in technology and human rights. Ramsha has extensively reported on platforms, surveillance, digital politics, and disinformation in Pakistan. Ramsha previously reported for Pakistan’s leading English newspaper, D...
Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...