
The Digital Services Act Meets the AI Act: Bridging Platform and AI Governance

Jordi Calvet-Bademunt, Joan Barata / May 29, 2024

This piece is part of a series that marks the first 100 days since the full implementation of Europe's Digital Services Act. You can read more items in the series here.

EU Commissioner Thierry Breton speaking at the first meeting of the European Board for Digital Services. Feb. 19, 2024.

The adoption of the Digital Services Act (DSA) represented a major development within the EU and beyond. Building on the precedent of the eCommerce Directive, the DSA incorporates new legal responsibilities for online platforms and new rights for users. It covers significant areas of intermediary and platform regulation, such as liability, transparency, appeal and redress mechanisms, regulatory bodies, systemic risk assessment and mitigation, data protection, and online advertising. With most of its provisions now in force, the big test of this regulation's appropriateness and overall success has only just begun. This testing period will particularly affect the so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs), which are subject to the most specific and demanding obligations.

The implementation of the DSA has been accompanied by the almost simultaneous discussion and adoption of other pieces of legislation that will also affect certain aspects of platforms' activities, particularly content moderation policies: the European Media Freedom Act (EMFA) and the regulation on the transparency and targeting of political advertising (PolAd).

In this context, a new piece of legislation is of particular importance. On May 21, the Council gave the final green light to the adoption of the AI Act. It constitutes another landmark moment: the first-ever legal framework, at a global level, to address AI through a risk-based approach. It imposes requirements and obligations on developers and deployers regarding specific uses of AI.

Legislators discussed the DSA and the AI Act in parallel, but, in principle, they cover separate areas of technology regulation. The AI Act governs AI technology generally, whereas the DSA regulates intermediary services, including online platforms. While the main discussions around the DSA took place when AI was more nascent and had yet to trigger substantial societal and political debate, the AI Act rose to the top of the EU's legislative agenda just as popular and controversial applications such as ChatGPT emerged.

Though the DSA and AI Act were enacted separately, both platform regulation and the use of AI systems are becoming increasingly intertwined, as the preamble of the AI Act acknowledges. However, reaching a conclusion regarding the legal regime applicable to matters at the intersection between AI and platform regulation may, in some cases, require efforts to find consistency between two different and, in many ways, “parallel” pieces of legislation.

Generative AI may create uncertainty and gaps in the enforcement of the DSA and the AI Act

The DSA covers three broad categories of intermediary services: conduit, caching, and hosting. From a general perspective, standalone AI services, such as generative AI that “creates” new content based on users' prompts, would not fit in any of those categories, since the supply and dissemination of content by AI systems encompasses a series of different and more complex processes not contemplated by the DSA. It is worth noting, however, that in some cases the line between standalone large language models that search exhaustively across the Internet and “regular” search engines is becoming harder to draw. For instance, the very recent launch of Google's AI Overviews will transform the company's “traditional” services by presenting users with AI-generated answers drawn from information on the web. The aim is to lay out the information the user is seeking directly, instead of a list of links. The new tool has triggered both high expectations and concerns, as recently reported by Wired.

In addition, interpersonal communication services, such as email or private messaging services, fall outside the scope of the DSA provisions on hosting services and may be subject to specific requirements only when they operate through public groups or open channels. This distinction means that AI chatbots offered as channels for users' individual interactions on online platforms would, in principle, be excluded from the DSA rules applicable to the main service. A more complicated scenario is the use of generative AI products when they are embedded into platforms subject to the DSA and offered to users as a specific service. These tools may be used either to prompt the creation of new pieces of content, such as text and images, or to assist in their creation, incorporating a certain degree of direct human intervention.

When an individual or media entity publishes, or makes available to the general public through an online platform, a piece of synthetic content selected, prompted, and/or created by them, the consequences in terms of legal liability and responsibility should be the same as those associated with the dissemination of other types of content. Online platforms may be covered by the intermediary liability protections incorporated into the DSA. However, there is already a debate around the implications for the liability regime in cases where users' original content is significantly modified by an integrated AI tool, a specific matter that the DSA does not contemplate.

The regulation of systemic risks in the DSA and the AI Act

One significant area where the AI Act and the DSA overlap is the obligation in both laws to assess and mitigate so-called “systemic risks.” This provision already applies under the DSA and has raised concerns from the UN Special Rapporteur on freedom of expression, experts, and civil society. While generative AI can create real risks, the DSA precedent suggests that freedom of expression advocates should pay close attention to this obligation in the AI Act.

When it becomes applicable, the AI Act will require “providers of general-purpose AI models with systemic risk [to] assess and mitigate possible systemic risks.” General-purpose AI models are a type of AI trained on a large amount of data and able to perform a wide array of tasks. General-purpose AI models with “high-impact capabilities” are considered to pose systemic risks. Models are presumed to have “high-impact capabilities” when the cumulative amount of computation used for their training, measured in floating point operations (FLOPs), is greater than 10²⁵. Industry experts have pointed out that this will “encompass many of the models on the market today.” For example, commentators consider OpenAI's GPT-4 and perhaps Google's Gemini to be models with systemic risk.
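To give a sense of scale for the 10²⁵ FLOP presumption, here is a minimal back-of-the-envelope sketch. It assumes the widely used heuristic that training compute is roughly 6 × parameters × training tokens; the model sizes below are illustrative assumptions, not disclosed figures for any real model.

```python
# Back-of-the-envelope estimate against the AI Act's 10^25 FLOP presumption.
# Assumes the common heuristic: training compute ~ 6 * parameters * tokens.
# Model sizes below are illustrative assumptions, not disclosed figures.

THRESHOLD_FLOPS = 1e25  # presumption of "high-impact capabilities"

def training_flops(parameters: float, tokens: float) -> float:
    """Rough total training compute in floating point operations."""
    return 6 * parameters * tokens

examples = {
    "hypothetical 70B-parameter model, 2T tokens": (70e9, 2e12),
    "hypothetical 1T-parameter model, 3T tokens": (1e12, 3e12),
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    status = ("above threshold (presumed systemic risk)"
              if flops > THRESHOLD_FLOPS else "below threshold")
    print(f"{name}: ~{flops:.1e} FLOPs, {status}")
```

Under these illustrative assumptions, the smaller training run lands around 8.4 × 10²³ FLOPs, well below the threshold, while the frontier-scale run crosses it at roughly 1.8 × 10²⁵ FLOPs.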

The DSA, adopted in 2022 and fully applicable since last February, also obligates VLOPs and VLOSEs to assess and mitigate “systemic risks.” While the meaning of systemic risk is not identical in the AI Act and the DSA, the two share many similarities. In fact, the AI Act clarifies that AI systems embedded into VLOPs or VLOSEs are subject to the risk management framework in the DSA. The AI Act adds that if AI models comply with the systemic risk obligations in the DSA, they are also presumed to fulfill the AI Act obligations “unless significant systemic risks not covered by the [DSA] emerge.”

The DSA’s obligation concerning systemic risks raises several critical concerns regarding freedom of expression, including its vagueness and, given that a political institution like the European Commission enforces it, its potential for abuse. The same concerns apply to the AI Act.

The AI Act defines systemic risk as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.” The DSA also establishes a wide array of broadly defined risks, including the dissemination of illegal content, actual or foreseeable negative effects on the exercise of fundamental rights, civic and electoral discourse, and public security, as well as in relation to gender-based violence, the protection of public health and minors, and serious negative consequences to the person’s physical and mental well-being.

There are undoubtedly good reasons to protect the interests identified by both the DSA and the AI Act. At the same time, it is easy to imagine how these provisions may excessively restrict freedom of expression if they are not applied appropriately. Safety, public security, and public health, for example, are among the grounds governments most commonly invoke to restrict speech. Public safety and national security concerns are often used as a justification for internet shutdowns. TikTok's shutdown in New Caledonia following riots in that French territory shows that this is a real risk in Europe, too. Such measures were also floated by President Macron and the EU's digital “enforcer,” Commissioner Thierry Breton, during protests in France in 2023. The reference to “negative effects on [...] the society as a whole” in the AI Act is particularly vague, broad, and dependent on subjective values. It is easy to imagine how, on contested topics such as the current war between Israel and Hamas, the AI Act's provision on systemic risks could be used to repress specific views.

The risk posed by this provision is particularly significant because a political body, the European Commission acting through the AI Office, enforces the AI Act's requirements on general-purpose AI models. The AI Office sits within the same Directorate-General of the European Commission that enforces the DSA (DG Connect), which is overseen by Commissioner Breton, and is part of its administrative structure.

The DSA has already demonstrated the risks that the AI Act's systemic risk obligation may pose. According to Politico, last year Commissioner Breton “wanted [DSA] investigations into how X (formerly Twitter) was handling the Israel-Hamas conflict, according to four separate officials with knowledge of those discussions, who were granted anonymity to discuss the previously unreported meeting. Within weeks, Breton got his wish in the form of DSA probes.” Regardless of the merits of the investigations against X and other platforms, the first lessons learned from the DSA should encourage advocates to double down on their efforts to protect freedom of expression. The AI Act and the DSA are no doubt well-intentioned and may prove positive, but it is indispensable that they not come at the expense of fundamental free speech rights.

This article is based on the final version of the AI Act, adopted by the European Council on May 21, 2024. This law will be published in the EU’s Official Journal in the coming days.

Authors

Jordi Calvet-Bademunt
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. He is also the Chair of the Programming Committee of the Harvard Alumni for Free Speech and has been a fellow at the Internet Society.
Joan Barata
Joan Barata works on freedom of expression, media regulation, and intermediary liability issues. He is a Senior Fellow at The Future of Free Speech. He is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.
