5 Things to Know about the Digital Services Act’s First Risk Assessments and Audits
Mark Scott / Dec 11, 2024
Mark Scott is a senior resident fellow at the Atlantic Council's Digital Forensic Research Lab's Democracy + Tech Initiative, where he focuses on comparative digital regulatory policymaking.
When the European Union created its new social media laws, known as the Digital Services Act, officials had one clear goal: to boost accountability and transparency for some of the world’s largest tech companies.
Those ambitions are getting their first test. The likes of Alphabet, Meta, and TikTok just published the first-ever risk assessments and outside audits examining how these companies handled potentially illegal content and material harmful to individuals’ fundamental rights, categories the DSA treats as “systemic risks” to the 27-country bloc. Here’s a full list of the available reports.
In lengthy — and often unwieldy — documents, company executives and independent auditors scoured the platforms to understand how these Big Tech giants tackled thorny problems, including how to reduce the amount of illegal content, like terrorist and child sexual abuse material, accessible online.
They also outlined how firms can give users a greater say in how content is displayed in their feeds, as well as what areas need to be improved upon to ensure companies’ compliance with Europe’s new rules.
It marks the first time that legally required risk assessments and audits of social media companies’ impact on wider society have been conducted. These reports will now be published annually. The European Commission and EU national regulators will release a separate analysis, based on their own review of these reports, during the first half of 2025.
Below are five key takeaways to help you understand these risk assessments and audits:
1) Step forward in accountability and transparency
By publishing these mandatory insights into how platforms operate, the firms and their outside auditors provided the most comprehensive and independent analysis to date of the good, the bad, and the ugly of so-called Very Large Online Platforms and Very Large Online Search Engines. Some of the documents run to more than 200 pages. They now represent a baseline for regulators, both within and outside the EU, to judge whether these firms are reducing legally defined online harms.
The risk assessments and audits are inherently complicated. You need a crash course in the inner workings of EU regulation and global content moderation policies to unpick what is in them.
On balance, the independent audits — still paid for by the companies but nominally shielded from firms’ interference — are more useful than the risk assessments.
The audits break down, for each DSA article, what platforms are doing to reduce “systemic risk,” an ambiguous term the EU uses to capture illegal content and counterfeit goods that may endanger people within the 27-country bloc. Companies then submitted rebuttals to these audits to explain how they would improve internal systems to bring their services in line with the EU’s DSA.
All firms except Wikipedia were given “negative” overall evaluations by their outside auditors for failing to comply with all mandatory provisions within the new social media rulebook. On average, though, companies were assessed to have complied with more than 85 percent of the legislation’s requirements.
The company-led risk assessments, which outlined how they were handling illegal content and goods under the EU regime, provided little, if any, additional information that had not already been published in firms’ content policies and terms of service. They represented a comprehensive overview of existing internal practices that, while helpful, did not offer further insight into how these platforms functioned.
2) A lot is left to the companies and auditors
Under the EU’s DSA, companies are obliged to carry out their own internal risk assessments and hire outside experts to conduct independent audits. The second task fell almost exclusively to the global accountancy and consultancy firms KPMG, Deloitte, and EY — many of which have existing relationships with the platforms. Only two smaller firms conducted separate audits: FTI Consulting for X and Holistic AI for Wikipedia.
In the audits, the outside consultants outlined how each company met its DSA obligations. In Facebook’s document, for instance, EY detailed how internal processes within the social media giant ensured people were not shown online advertising based on sensitive personal information like individuals’ religious beliefs or ethnicity. In the separate audit of Google’s search product, EY explained, based on an internal review of the company’s processes, how the tech giant allowed users to access search results and other information that was not based on data the company collected on them for profiling.
A key question arising from these audits was whether such checks were adequate. Auditors were almost exclusively reliant on existing processes within these firms to assess compliance. It was unclear, based on the review of these audits, how such information was collected; what pushback, if any, the auditors received when asking for the required data; and what steps were taken to go beyond firms’ existing internal compliance structures to meet the overarching obligations of Europe’s rules.
Such transparency was inevitably reliant on the goodwill of both companies and auditors. Yet in X’s document, FTI Consulting explicitly stated that, based on its review of the company’s internal processes, Elon Musk’s platform was compliant with its DSA obligations to provide outside researchers with real-time access to its public data. That, however, was in direct contradiction to preliminary charges filed by the European Commission, which accused X of failing to comply with such outsider data access.
3) Negative audits, but the devil is in the detail
Overall, every platform, excluding Wikipedia, received a negative evaluation based on the DSA’s strict criteria for assessing compliance with the EU’s new rules. But within those audits, most firms successfully demonstrated, based on existing internal processes, that they had reduced the potential for their goods and services to worsen systemic risks within the 27-country bloc.
Despite almost all the firms relying on just three outside auditors, it remains impossible to compare one platform to another because companies used different reporting periods to assess their compliance via the audits.
Most of the separate risk assessments covered the period between September 2022 and August 2023. But some, such as Booking.com and Zalando, also produced reports for 2024 alongside the earlier reporting period. Others, including Meta and China’s Alibaba, only published risk assessments for the most recent year (2023-2024).
Firms that have yet to publish their most recent risk assessments — including TikTok, the China-owned social media company currently at the center of a political storm engulfing Romania’s annulled presidential election — are expected to do so by the end of the year.
4) Where the companies failed
Measured against their DSA obligations, firms complied with most of the rules’ requirements. That includes basic obligations like making it easy for people to communicate with company representatives and more complex requirements like conducting internal assessments to reduce systemic risks.
Yet failures still arose. For Meta, both Instagram and Facebook did not accurately report the number of user accounts that had been suspended for posting or sharing illegal content. Nor did either platform provide sufficient information on how each made content moderation decisions. That included a lack of granular data on how such material was found on the platform and what types of illegal content, per the platform’s terms of service, were discovered.
On TikTok, auditors discovered the social network had not clearly explained, within its terms of service, how its so-called recommender systems, or the algorithms used to display short videos in people’s feeds, could be modified. The platform also did not give users enough clarity on how these algorithms would change if individuals exercised their legal right under the DSA to alter how specific posts are chosen for display in their feeds.
For Apple, whose App Store falls under the DSA’s obligations, auditors detailed that the iPhone maker had not acted swiftly enough at the beginning of 2024 to remove apps from its online store that had failed the company’s verification process. Apple also initially approved app developers that had failed, during the self-certification process, to provide the necessary information required under Europe’s new laws.
After the audits were published, all companies agreed to update their internal procedures to fix these issues.
5) Where do we go from here?
Those looking for a smoking gun within these risk assessments and external audits to confirm bad behavior will be disappointed.
At best, the documents are a starting point for how the EU’s new legislation can improve accountability and transparency by forcing companies to document how they are removing illegal content from users’ feeds. At worst, they are a public relations exercise that highlights ways these platforms have reduced the spread of illegal content and products while papering over the ongoing levels of hate speech, Russian and Chinese disinformation, and terrorist content that many, including my colleagues at the Atlantic Council’s Digital Forensic Research Lab, continue to detect on these platforms.
As the European Commission and national regulators prepare their own assessments of these reports, to be published in early 2025, there are ways to improve these risk assessments and audits.
A first step would be to involve more outside groups, including civil society organizations whose members are often targeted with online abuse and hate, in how these documents are produced. So far, that outreach, beyond a small number of bodies, has been lacking from tech firms and auditors.
Companies should also publish these often lengthy and jargon-laden reports in machine-readable, identically structured formats that allow like-for-like assessments by regulators and other interested parties. Currently, firms submit dense PDF documents, and a shift to a format that can be read and processed by a computer would be a significant step toward fast-tracking analysis.
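To make that recommendation concrete, here is a minimal, purely illustrative sketch of what a structured, machine-readable audit finding could look like. Neither the DSA nor the auditors currently prescribe such a schema; every field name and value below is an assumption for illustration only.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical, illustrative schema: the DSA does not mandate any
# machine-readable format, and these fields are assumptions.
@dataclass
class AuditFinding:
    platform: str          # e.g., a fictional "ExamplePlatform"
    dsa_article: str       # the DSA provision being assessed
    auditor: str           # the firm that performed the audit
    reporting_period: str  # e.g., "2023-09 to 2024-08"
    verdict: str           # e.g., "positive", "positive with comments", "negative"
    summary: str           # short plain-language description of the finding

findings = [
    AuditFinding(
        platform="ExamplePlatform",
        dsa_article="Article 39",
        auditor="Example Auditor LLP",
        reporting_period="2023-09 to 2024-08",
        verdict="negative",
        summary="Advertising repository lacked required targeting details.",
    )
]

# Publishing findings as JSON (or CSV) in a shared schema would let
# regulators and researchers compare platforms programmatically instead
# of combing through hundreds of PDF pages.
print(json.dumps([asdict(f) for f in findings], indent=2))
```

A common structure of this kind, whatever its exact fields, is what would make like-for-like comparison across platforms and auditors practical.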
A third improvement would be for all tech companies to be more forthcoming about how they and their independent auditors reached their conclusions. Currently, complex internal processes — ranging from how recommender systems serve up content to how firms mitigate content moderation problems — are reduced to a few lines in reports with little, if any, granularity on how such decisions were made.
Providing greater detail on these often tricky decision-making processes would go a long way toward boosting platforms’ accountability and transparency — which is, after all, why European officials created the risk assessments and audits in the first place.
Related Reading:
- What to Do with the Long-Awaited DSA Systemic Risk Assessments
- Assessing Systemic Risk Under the Digital Services Act
- Understanding Systemic Risks Under the Digital Services Act
- Unpacking “Systemic Risk” Under the EU’s Digital Service Act
- The European Commission's Approach to DSA Systemic Risk is Concerning for Freedom of Expression