Unpacking “Systemic Risk” Under the EU’s Digital Services Act

David Sullivan, Jason Pielemeier / Jul 19, 2023

David Sullivan is Executive Director of the Digital Trust & Safety Partnership, and Jason Pielemeier is the Executive Director of the Global Network Initiative (GNI).

EU flags flying in Brussels. Shutterstock

At present, many of the world’s largest online platforms are conducting a first-ever, mandatory assessment of “systemic risks” arising from the design and functioning of their services, as part of the EU’s sweeping content regulation, the Digital Services Act (DSA).

Although information and communications technology companies have been using a variety of risk assessment methods and approaches for years to inform their work on human rights and trust and safety, the framework set out in the DSA, which for now applies only to platforms and search engines with at least 45 million monthly active users in the EU, presents the first regulatory requirements for risk assessments around online content and conduct. But they will not be the only such requirements, as other nations move to enact similar regulations. From Singapore’s Online Safety Act, which passed last year but has yet to be fully implemented, to the long-expected UK Online Safety Bill, risk assessment is a common element across many content regulation regimes. And from Brazil to Taiwan, legislative proposals have adopted the concept of “systemic risk” presented by the DSA.

But when does online content or conduct risk become systemic? In other fields, such as finance, the concept of systemic risk is relatively well-developed. Not so for digital services.

In May 2023, the organizations we lead, the Global Network Initiative and the Digital Trust & Safety Partnership, brought together our company members and experts from civil society and academia to explore the DSA’s approach to systemic risk assessment in more detail. Company participants included the providers of 11 of the 17 very large online platforms (VLOPs) designated by the European Commission, and both very large online search engines (VLOSEs). Non-company participants included experts from Europe, as well as international participants from the Global Network Initiative’s civil society and academic constituencies.

Over two days of virtual meetings, the group dove into the details of the four categories of systemic risk specified in the DSA:

  • The dissemination of illegal content;
  • Negative effects for the exercise of fundamental rights;
  • Negative effects on civic discourse and electoral processes, and public security; and
  • Negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being.

This article summarizes some of the key observations. The full workshop report is available here.

Limitations

In organizing the workshop, we recognized it would face multiple constraints. Although an in-person event would have allowed a more in-depth discussion, we opted for a virtual format to ensure we could hold the sessions while doing so was still relevant to the companies undertaking risk assessments, and to facilitate participation across geographies and time zones.

Moreover, the same reason why organizing the discussion at this moment was so important – to allow outside experts to provide actionable input to inform ongoing company risk assessments – also meant that companies’ interventions were necessarily somewhat constrained.

Finally, although the views and experiences of communities affected by online services were top-of-mind for all participants, direct participation by users and other affected groups was not possible in this initial workshop series. Instead, we prefaced the discussion with an overview of the importance of stakeholder engagement in identifying, assessing, and mitigating risks. This highlighted best practices that have been established through frameworks like the UN Guiding Principles on Business and Human Rights (UNGPs).

So what did we learn? Frankly, the discussion raised more questions than answers.

What makes a risk “systemic”?

First, despite substantial work underway to define systemic risk, we lack clarity about what this term means, and how it should be understood and operationalized in DSA risk assessments. Even within our workshop, views diverged on whether a systemic risk is one that has an impact on a system and, if so, which system(s), or if it is a risk that is caused or exacerbated by a system.

Examples of risks that were identified as being systemic because of their impact on systems include the following:

  • Risks arising from content and conduct that happens across multiple platforms, services, and products, amplifying the scale of potential harms and complicating mitigation efforts
  • Risks that impact multiple, interdependent fundamental rights
  • Risks from the largest of the very large online platforms, or those that affect the largest number of users, given that some services have hundreds of millions of EU users
  • Risks associated with a platform being so ubiquitous that it becomes a public space of significance for exercising fundamental rights
  • Risks that have an impact on a societal level (e.g., during elections or in the context of an emergency), in contrast with those that affect specific individuals in more typical situations

On the other hand, examples of risks that are caused or exacerbated by systems or systemic factors include the following:

  • Risks caused to a material extent by platform systems, such as an algorithmic recommendation system, which exposes large numbers of users to content that they might not otherwise seek out
  • Risks from how systems interact with each other (e.g., how advertising intermediaries interact with social media), which gets complicated when some of these services (such as private messaging) fall outside the scope of the DSA
  • Risks that cannot be addressed by ordinary controls, such as the intentional manipulation of a service’s content moderation mechanisms
  • Other failures of risk mitigation systems, for example if a service’s content moderation system is itself undermining freedom of expression or other rights through improper design or operation.

Which risks and which rights to prioritize?

The “human rights paragraph” of the DSA, Article 34(1)(b), requires very large platforms to assess “any actual or foreseeable negative effects for the exercise of fundamental rights” and then goes on to list several rights “in particular” such as human dignity, privacy, freedom of expression, non-discrimination, rights of children, and consumer protection. So how should companies interpret this list?

A look at the legislative history of the DSA proved instructive for our discussion. Initially proposed with a narrower focus on privacy and freedom of expression, this portion of the DSA expanded throughout the negotiation process to include rights identified by specific interest groups, and eventually approached a general human rights impact assessment obligation.

This broader approach is fortunate, since there are well-developed human rights risk assessment methodologies, thanks to the UNGPs, although they will need to be adapted to fit some of the rights that are particular to the Charter of Fundamental Rights of the European Union. Human rights saliency assessments were flagged as a key tool for understanding which rights to prioritize, based on the scale, scope, and remediability of potential impacts.

Illegal content, according to whom?

It might seem that the dissemination of illegal content would be the most clear-cut element of systemic risk, since companies have been contending with illegal content under European law for decades. Our discussion, however, quickly surfaced a host of complicating factors.

Notably, under whose law does content need to be illegal for it to be considered in scope for DSA risk assessments? Is this just about content that is illegal across Europe, such as child sexual abuse material (CSAM), terrorist content, or copyright infringement? What should services do when jurisdictions take different approaches to defining and interpreting illegal content?

On civic discourse, is the “dark Brussels effect” of concern?

The civic discourse and elections dimension of systemic risk is among the most pressing, but it is also the most vaguely defined. With 65 elections across 54 countries coming in 2024, there was palpable urgency for platforms to deeply consider how they can anticipate and mitigate risks to democracy. At the same time, participants acknowledged that methods for measuring risks in this category were among the least well developed.

Although EU policymakers are quick to cite the “Brussels effect” by which EU regulations, such as the General Data Protection Regulation (GDPR), have become de facto global standards, some participants were concerned about an unintended “dark Brussels effect” that could negatively affect other parts of the world, especially during the 2024 election cycle. In particular, will companies, especially given current budget constraints, decide to allocate more resources to identifying and mitigating risks specific to the EU at the expense of risks occurring elsewhere, including countries at risk of significant political violence?

This concern was counterbalanced by more optimistic views that responses to the DSA will help define what responsible investment and decision-making at the platforms look like, allowing effective practices to be scaled and replicated globally.

How to assess intersectional risks around gender, race, sexuality, and religion?

Our discussion of the final dimension of systemic risk identified by the DSA focused more narrowly on gender-based violence, as we did not have sufficient time to also cover public health, child protection, and physical and mental well-being (each of which merits its own deeper reflection).

On gender-based violence, participants reminded us that this area of concern overlaps substantially with all of the other areas of systemic risk: it is often illegal, negatively impacts multiple fundamental rights, and can impact civic discourse, for example when sustained online harassment and abuse drives people offline and excludes them from public participation.

Research shows that gender-based violence disproportionately affects people in intersectional ways: around the world, women and girls are more likely to suffer negative impacts if they are also members of ethnic or religious minorities, people with disabilities, or part of LGBTQI+ communities. There was strong consensus among discussion participants that more research, as well as more sustained inclusion of the perspectives of these communities themselves, will be critical to developing best practices for assessing these types of risks.

Key takeaways and consensus points

Clearly, the text of the DSA leaves substantial room for interpretation, and while this ambiguity presents a challenge, it also creates opportunities for flexibility, experimentation, and collaboration in the emerging field of systemic risk assessment. Participants largely agreed that the more platforms can anchor their DSA risk assessments in established international frameworks for risk assessment and human rights, the more likely we are to achieve consistency across company practices and a shared understanding among regulators, the regulated companies, interested stakeholders, and citizens. Finally, participants were unanimous in endorsing the importance of meaningful transparency and the need for ongoing multi-stakeholder collaboration. We agree on these points and look forward to continuing to help foster more cross-stakeholder engagement and learning.
