No Content Moderation for Media Publishers? Proposed Amendment for Digital Services Act is a Lesson in Unintended Consequences

Julian Jaursch / Nov 10, 2021

Small things can cause big issues. This cliché rings true for Europe’s major proposal to establish rules for big tech companies like Google and Facebook. A few sentences added to the European Union’s massive bill on platform regulation, the Digital Services Act, are well intended as support for independent media but might hurt the EU’s efforts to tackle disinformation. The language would prohibit platforms such as Instagram, TikTok or YouTube from interfering with content by media publishers such as newspapers, broadcasters or online blogs. This is ostensibly a big win for shielding media from arbitrary content moderation by platforms, yet it might at the same time disallow fact-checking labels or the downranking of debunked content. Whether or not this amendment ends up in the final text, it serves as a good reminder of how important it is in platform regulation to consider potential unintended consequences.

What the DSA is and how it could help tackle disinformation

To understand this controversial detail of the draft Digital Services Act (DSA), it is helpful to quickly highlight the main parts of the DSA and what it wants to accomplish. The DSA is Europe’s attempt to build a comprehensive regulatory framework for digital platforms – ranging from Amazon and booking.com to Facebook, Instagram and YouTube. It was presented in late 2020 and is currently being discussed by the EU member states and in the European Parliament, with a view to being finalized next year. Its main innovation is that it establishes “due diligence” rules for very large online platforms: for example, they need to create transparency around political advertising and algorithmic recommender systems; they need to conduct risk assessments regarding their business practices; and there are guidelines for how they need to share data with researchers and regulators.

Such provisions do not directly address issues like disinformation and hate speech online, but still have important implications for tackling them. At a basic level, the DSA would continue to allow content moderation but would embed it in a framework of compliance rules. This would ideally help provide better public interest scrutiny of how platforms go about designing recommender systems and how they assess and mitigate risks such as the spread of illegal content, but also disinformation (which is not always illegal). In addition, data access rules would enable external research on disinformation, instead of policymakers and the public having to rely only on whistleblowers and journalists to report on internal-only research – such as via the recent “Facebook Files”.

How changes to the DSA might weaken efforts on tackling disinformation

To be sure: key DSA proposals require serious improvements. The proposed rules on risk assessments and audits are too vague, for instance. Data access requirements should be expanded beyond academic researchers to journalistic and civil society researchers and should include more detailed vetting standards. Enforcement also needs to be streamlined. Nevertheless, the idea of creating compliance standards for platforms instead of creating rules for individual pieces of (disinformation) content goes in the right direction. Considering this overall progressive approach, it is all the more troubling that some amendments could undermine the DSA’s intentions.

One example that neatly highlights the difficulties of getting platform regulation right is the debate over content moderation for media publishers. There is a push to prohibit platforms from interfering with content and accounts of media publishers. This idea seems very much in line with the DSA’s basic goal: to rein in the power of digital platforms to singlehandedly determine the online information spaces of millions of people. It would ensure that platforms do not undermine the independence of media publishers by arbitrarily deleting legal, public-interest content. Public broadcasters across the EU have experienced removals of legal, legitimate content meant to inform the public, for example, when reporting on violent events such as shootings or wars. Platforms might delete such content, on purpose or by accident, for “glorifying violence”. Similarly, reporting containing nudity – even breastfeeding – might be removed by big US platforms. In many cases, no explanation for removals is given at all. That is why European publishers are in favor of “protecting editorial integrity” against platform interference. They succeeded in getting this amendment into opinions of European Parliament committees (for example, no. 156 here and no. 79 here), thus bringing it into the legislative debate over the final text.

What reads like a sensible move to keep platform power in check was met with disbelief by researchers, civil society experts and activists working to tackle disinformation. Many are highly concerned that what they call the “media exemption” undermines their efforts: because any interference by platforms might be forbidden, not even fact-checking labels in Facebook’s “News Feed”, contextual information under YouTube videos or the downranking of debunked posts might be possible. Moreover, as the definition of what constitutes media is rather broad, the exemption might apply not only to public broadcasters that already have to adhere to many rules of media regulation, but also to other channels, users or blogs that do not have high editorial standards and are not subject to media regulation. They, too, might essentially become exempt from any content moderation.

Media outlets have been shown to play a key role in spreading disinformation in different regions across the world. For example, a comprehensive study of disinformation during the 2017 German election showed how rumors often originated on niche online sites calling themselves “media outlets”. Even high-quality traditional media publishers played an important role in spreading disinformation in that election cycle. The EU DisinfoLab, a European NGO researching disinformation, has highlighted other recent disinformation campaigns featuring “media” as well. In addition, if it were up to governments to decide what constitutes media, there is a risk they would exempt only government-friendly outlets from content moderation – essentially giving state propaganda a free ride and subjecting topics and people the government disfavors to more content moderation. The European Commission, which developed the original draft, also thinks the amendment is a bad idea: one of its vice presidents, Věra Jourová, said this was an example of good intentions “leading to hell”.

Lessons for future tech regulatory issues

The “media exemption”/“editorial integrity” amendment could still end up in the final DSA text, but the debate over it might also prove exaggerated if the change does not make it into later parliamentary reports. For Europe’s efforts to tackle disinformation, it would be better if the amendment were dropped. Instead, transparency and accountability mechanisms for platforms should be strengthened. Beyond this single case, the discussion highlights the balancing acts necessary in (tech) regulation. Here, it is about balancing the need to free media from arbitrary interference with the need for transparent, consistent content moderation to tackle disinformation. Other topics in the DSA raise the same bigger question: how can unintended consequences be considered and divergent interests be balanced in platform regulation? Take the example of data access: it is desirable to allow as many researchers as possible access to platform data, not just wealthy, institutionalized ones. Yet allowing data access with no strings attached carries risks for privacy and is ripe for abuse by bad-faith actors.

Especially in an area like platform regulation, which touches millions of people’s daily lives and their fundamental rights, compromises on such issues are highly consequential and might not leave all sides satisfied. But that should not lead policymakers to either reject any sort of regulation or settle for bad compromises. Instead, it is worthwhile to ask: What compromises are unacceptable with a view to fundamental rights? Could there be different effects over the short versus the long term? Might the idea under discussion have the opposite of its intended effect? More generally, what do provisions (even if they look good on paper) mean for the broader public sphere? Considering such questions as a kind of check is easier said than done, because many actors involved in tech policymaking have their own (economic) interests and agendas in mind. That is why it is all the more important to support public-interest, independent research and advocacy to explore these questions.

Authors

Julian Jaursch
Julian Jaursch is a policy director at not-for-profit think tank interface in Berlin, Germany. He works on issues related to disinformation and platform regulation.