Ensuring Digital Services Act Audits Deliver on Their Promise

Jason Pielemeier, Ramsha Jahangir, Hilary Ross / Feb 19, 2024

European flags in front of the Berlaymont building, headquarters of the European Commission in Brussels. Shutterstock

This year, many of the world’s largest online platforms and search engines – specifically, those with more than 45 million users in the European Union – will undergo third-party audits for the first time under the EU’s sweeping content regulation, the Digital Services Act (DSA). The law came into full effect on February 17. In October last year, the Commission adopted the delegated regulation on the performance of DSA audits (‘DRA’) to specify audit procedures, which will apply to the first round of audits of such Very Large Online Platforms or Search Engines (VLOPSEs) due in August 2024. These audits represent a significant experiment in tech regulation. And yet we find the audit procedures remain underdeveloped, leaving many open questions that would benefit from further, coordinated, multistakeholder discussion.

The DSA seeks to hold online platforms accountable by identifying and mitigating possible societal risks from their products, and audits are critical to the DSA’s overall compliance and enforcement architecture. Specifically, auditors will examine and verify companies’ compliance with the extensive and diverse requirements set out in Chapter III of the DSA, including assessment of their own “systemic risks,” the mitigations they’ve taken to reduce those risks, and their crisis plans and responses.

While some technology companies have been using a variety of voluntary risk assessment methods and approaches, including the one that applies to Global Network Initiative (GNI) member companies, to demonstrate responsible business conduct, the framework set out in the DSA establishes the first regulatory requirements for audited risk assessments around online content and conduct. And, while the DSA only applies to Europe, these regulations are expected to influence global content moderation practices and the regulatory approach that other jurisdictions will take. Already, other nations are moving to enact similar regulations, from Singapore’s Code of Practice for Online Safety to the UK Online Safety Bill. Given this anticipated “Brussels effect,” it is especially important that implementation of the DSA is effective, avoids unintended consequences, and centers human rights like freedom of expression and privacy.

Notwithstanding the attention that has been paid to the DSA and the recognition that it will set an important global precedent for regulation, significant questions remain about how the DSA’s risk assessment, mitigation, audit, and algorithmic transparency practices will be implemented, verified, and enforced. Based on our experience and after engaging with many stakeholders on these questions, we have three key areas of concern:

  • First, the DRA – the delegated regulation that governs DSA audits – does not provide standard definitions, methodologies, or benchmarks for conducting audits. This raises risks that the process will not be rigorous or comparable, which in turn could have unintended consequences for human rights.
  • Second, the DRA appears to expect more from these audits than may be reasonable, creating risks that it could disincentivize discovery of mistakes, learning, and improvement over time.
  • Finally, auditor eligibility is limited and achieving auditor independence will be a challenge.

Our analysis stems from lessons the Global Network Initiative (GNI) – a multi-stakeholder, human rights collaboration among technology companies, academics, human rights and press freedom groups, and investors – has learned over four cycles of assessing the internal systems and processes used by technology companies to identify and address risks to freedom of expression and privacy. We also leverage insights from GNI-facilitated conversations and events, both internally with members and with a broader audience of experts through the Action Coalition on Meaningful Transparency. Lastly, we reference public comments solicited by the Commission on conducting independent audits.

DSA audits have no standard definitions, methodologies, or benchmarks

Under the DSA, auditors hold significant power in assessing the compliance of online platforms. In other sectors that have been subject to audits for longer, auditors typically use established methodologies and criteria or standards against which they assess compliance. In their feedback on the draft DRA, many stakeholders asked the Commission to provide clarity around the development of auditing standards, or at least refer to possible existing methodologies that could be adopted. Yet, the DRA does not provide or specify any preferred methodologies or benchmarks for auditors to assess against, nor does it announce any intention on behalf of the Commission to facilitate such processes (even though Article 44 of the DSA says the Commission “shall support and promote the development and implementation of voluntary standards … in respect of … auditing”). Instead, the DRA asks companies to set their own benchmarks, and auditors to develop their own methodologies. In other words, companies get to provide the answer key against which auditors will grade their homework.

This is troubling because for many of the provisions in Chapter III there is no existing consensus around definitions of key terms or methods for conducting related risk assessments and mitigation. For instance, the DSA’s definition of “systemic risk”, which company risk assessment and mitigation efforts hinge on, is not clear, as was unpacked at a workshop GNI co-hosted with the Digital Trust and Safety Partnership last year. Amongst the group of attendee experts, “views diverged on whether a systemic risk is one that has an impact on a system and, if so, which system(s), or if it is a risk that is caused or exacerbated by a system.” Assessments without clear definitions, standards, and methodologies could have adverse impacts on fundamental rights like freedom of expression and privacy. 

For example, under the DSA, platforms must act against the dissemination of illegal content, such as terrorist content or material that infringes copyright. At face value, this might seem clear-cut, as platforms have already moderated illegal content within Europe for decades. But, even getting clarity on this seemingly more settled idea is quite complex under the new DSA framework, and raises risks to rights. Who defines “terrorist content”? What should services do when jurisdictions within Europe take different approaches to defining and interpreting illegal content? Without clarity around the definition and processes, there is a risk that platforms err towards caution, take down excessive amounts of content, and limit users’ freedom of expression.

In addition, there are particular challenges related to assessing and auditing systems or features that rely on algorithms or artificial intelligence. For example, research from the AI Now Institute noted that algorithmic audits, as currently practiced, can further entrench power in the private sector, as too often it is the companies themselves that define the criteria against which audits are conducted and that carry the audits out. Algorithmic audits as practiced today can also over-rely on narrow technical evaluations of algorithms to address concerns such as bias and discrimination, missing key broader socio-technical evaluations of the impacts of AI systems.
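To illustrate what a narrowly technical evaluation can look like in practice, the sketch below computes a single group-fairness metric (a demographic parity ratio) over hypothetical moderation decisions. The data, group labels, and the 0.8 rule of thumb are illustrative assumptions of ours, not requirements drawn from the DSA or the DRA; the point is that a check like this can “pass” while saying nothing about the broader socio-technical context in which a system operates.

```python
# Minimal, hypothetical sketch of a purely technical fairness check of the kind
# algorithmic audits often rely on. All data, group labels, and the 0.8
# threshold are illustrative assumptions, not DSA or DRA requirements.

def demographic_parity_ratio(decisions, groups):
    """Ratio of positive-decision rates between the least- and most-affected groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    lowest, highest = min(rates.values()), max(rates.values())
    return lowest / highest if highest else 1.0

# Hypothetical content-removal decisions (1 = removed) for posts from two user groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["group_a"] * 4 + ["group_b"] * 4

ratio = demographic_parity_ratio(decisions, groups)
print(f"Demographic parity ratio: {ratio:.2f}")  # 0.33 in this toy example

# A common rule of thumb flags ratios below ~0.8, but a "passing" ratio says
# nothing about whether the removal policy itself is lawful, proportionate,
# or rights-respecting in context.
```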

This lack of shared definitions, methodologies, and benchmarks risks making the overall DSA assessment and audit process less rigorous, less comparable, and ultimately less effective. To start, it is difficult to compare the impacts and harms of online platforms, as they provide different services, offer different features, and serve different types of user bases. If, as we imagine will be the case, platforms hire different auditing firms that use divergent benchmarks and methodologies, it may be very difficult to compare their results. As a result, many observers have been left wondering how the Commission will be able to use the resulting audits to enforce the DSA in an equitable, fair, and rights-respecting manner.

DSA audits may not meet expectations

The DRA’s lack of clear benchmarks and methodologies for Chapter III compliance creates ambiguities for auditors. Adding to the complexity, auditors must provide a “reasonable level of assurance” on compliance without defined standards for reaching that conclusion. While this may sound, well, reasonable, it is actually a very high bar. Reasonable assurance effectively requires the auditor to be confident enough to affirmatively state that the information they are attesting to is materially correct (as opposed to “limited assurance,” which requires only a negative determination that no evidence of material misstatement has been found). Imposing this high threshold for audits of complicated and controversial systems, like recommendation algorithms, could create significant reputational risk for auditors. Beyond that, it is important to recognize that auditors can face, and have faced, legal liability (from regulators, shareholders, and the companies they audit) for determinations they have made in other contexts.

While reasonable assurance is used in some other regulatory regimes, there are relatively few examples of this standard being applied to audits of corporate approaches to more qualitative systems, such as environmental, social, and governance (ESG) matters. For example, the 2022 EU Corporate Sustainability Reporting Directive (CSRD), which focuses on companies’ public reporting, recognizes and sets out a process for the establishment of detailed, relevant standards for reporting and requires only limited assurance for such reports initially, before eventually phasing in the requirement for reasonable assurance.

By contrast, the DSA requires review of an arguably much more complicated set of internal systems and controls, some of which are deeply technical in nature, and expects auditors to be able to demonstrate reasonable assurance immediately without the benefit of any clear standards or benchmarks. And, this type of audit – combining assessment of company practices with technical algorithmic auditing – is so new that many auditors don’t have prior experience to learn from.

To be blunt, this overall lack of shared understanding makes determining “reasonable assurance” an effectively impossible task. Recognizing these risks and complexities, some civil society groups and auditors, including GNI, recommended that the Commission either clarify what is required for a “reasonable” level of assurance or lower the level to “limited”, at least to start. Unfortunately, the Commission did not take those recommendations (indeed, it is not clear how carefully these submissions were read, as the final DRA oddly claims that “consulted parties were unanimous in proposing a ‘reasonable level of assurance’,” despite these submissions).

DSA auditor eligibility is limited and achieving real independence will be a challenge

Auditor competence and independence are critical to the credibility of the audit exercise and resulting reports. However, given the specialized socio-technical skills required and the novelty of this type of audit process, there is an extremely limited pool of auditors with appropriate levels of expertise to credibly conduct such audits. For example, auditors are required to have expertise in risk management, technical systems, and “the subject-matter of [the DSA] and, in particular, the systemic societal risks referred to in Article 34.” Additionally, while it is not specifically required in the DRA, auditors would benefit greatly from also being familiar with well-established methodologies like human rights impact assessments. While this range of expertise will be critical for audits to be conducted well, at the moment, few firms hold all of these skill sets.

The DSA’s auditor independence requirements may limit that pool even further. For instance, platforms are barred from contracting with anyone who has previously provided any non-audit related services to the platform, including “those linked to any system, software or process involved in matters relevant to the audited obligation or commitment, such as consultancy services for assessments of performance, of governance and of software, training services, development or maintenance of systems, or subcontracting content moderation.” While this is well-intentioned, given the small market of possible auditors, it could be overly limiting. There are a number of other possible ways to set limits that create appropriate independence without excessively restricting the pool of auditors. For example, there could be provisions for auditors to apply for exceptions under certain circumstances. Or, there could be time-bound limits; for instance, those who have provided non-audit related services to a platform could be required to wait at least five years before working on that platform’s audit. Additionally, if a firm provides multiple kinds of services, there could be mandated firewalls between its auditing and advisory teams.
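As a purely illustrative sketch of how alternatives like these might be expressed as checkable rules, the snippet below encodes a hypothetical cooling-off period and firewall requirement. The five-year window, field names, and logic are our own assumptions for illustration and are not drawn from the DSA or the DRA.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical auditor-independence screen illustrating the alternatives
# discussed above (a cooling-off period plus a firewall requirement).
# The five-year window and field names are illustrative assumptions only.

@dataclass
class AuditorProfile:
    name: str
    years_since_non_audit_work: Optional[float]  # None = never served this platform
    offers_non_audit_services: bool
    has_internal_firewall: bool

COOLING_OFF_YEARS = 5  # assumed threshold, for illustration only

def is_eligible(profile: AuditorProfile) -> bool:
    """Return True if the firm passes this illustrative independence screen."""
    if (profile.years_since_non_audit_work is not None
            and profile.years_since_non_audit_work < COOLING_OFF_YEARS):
        return False  # prior non-audit work for this platform is too recent
    if profile.offers_non_audit_services and not profile.has_internal_firewall:
        return False  # multi-service firm without a firewall between teams
    return True

print(is_eligible(AuditorProfile("Firm A", None, True, True)))   # True
print(is_eligible(AuditorProfile("Firm B", 2.0, False, False)))  # False: cooling-off not met
```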

Further complicating this scenario is the practical reality that auditors will depend significantly on the companies to identify, explain, and provide access to the systems that will need to be audited. The DRA attempts to address this by stating that VLOPSEs “should not impose, give any guidance, or otherwise influence the auditing organization through ... their choice and execution of audit procedures, methodologies, collection and processing of information and audit evidence, analysis, tests, audit opinion or elaboration of audit conclusions.” Yet other parts of the DRA require that companies choose the benchmarks against which provisions should be audited, and furnish information and expertise to the auditor so that the auditor can understand and assess relevant systems and policies. It’s difficult to see how this will allow auditors to be – and be perceived as – independent.

The combination of scarce expertise, the complexity of the methodologies that need to be developed, the restrictions on who can conduct the audits, and the tensions among these requirements is likely to make an already demanding exercise exceedingly difficult to execute in practice.

Looking ahead: multi-stakeholder engagement can strengthen implementation

The challenges we’ve identified are perhaps not surprising, given the unprecedented nature of this exercise and the diversity of covered providers. Indeed, there is value in allowing latitude of choice and experimentation in audit criteria and methodology. Yet, in the absence of clear standards, there is a corresponding risk that VLOPSEs and auditors will be incentivized to choose relatively simplistic methodologies and benchmarks to facilitate what will be a very onerous and complicated audit exercise and to mitigate their own risks. This is especially likely considering the short timeline and the degree of “reasonable assurance” required. This may be the case notwithstanding the significant work that covered providers have done and are doing to identify and mitigate risk, including through processes like GNI assessments.

In our experience, effective risk assessment and assurance require committing to a set of shared public principles based on international human rights, as well as detailed guidelines on how to implement those principles, a standardized process to assess that implementation, and a mechanism to accredit and train assessors. This overall framework fosters the very types of systems and processes the DSA aims to encourage, enabling companies to build sustainable and responsible content governance practices.

Addressing the concerns surrounding the DSA’s audit requirements is not simply a technical exercise, but a critical step in achieving the law’s intended goals and ensuring that its implementation respects fundamental rights. There are significant risks that could result from governments, companies, and auditors working out the details and parameters of audits behind closed doors. Benchmarks and methodologies developed by companies and assessors alone – without multi-stakeholder input – can raise questions about the reliability and consistency of the findings, making them difficult to trust. Conversely, enforcement actions taken by regulators without public transparency and justification risk creating the impression that the law is being applied selectively or unfairly.

The Commission has decided for the time being not to provide guidance around audit benchmarks, criteria, and methodologies. But that doesn’t mean others can’t come together to discuss what “good” looks like. Last year, a broad multi-stakeholder community of experts – including auditors, civil society organizations, companies, and academics – submitted suggestions for improvement to the Commission. We hope to build on those ideas. Given GNI’s multistakeholder composition and experience working in this space, we will continue to enable engagement around tech company risk assessments and audits, so those active in the field can more openly and rapidly gather insights about how implementation is going: what’s working, what’s not working, and where the gaps are. This is particularly important this year, as in addition to navigating the first year of the DSA, stakeholders will also be preparing for and responding to global elections. The field’s time and expertise are finite resources, so the more that stakeholders can learn from each other in coordinated and structured forums to iterate and improve practices, the better.

Authors

Jason Pielemeier
Jason Pielemeier is the Executive Director of the Global Network Initiative (GNI), a dynamic multi-stakeholder human rights collaboration, building consensus for the advancement of freedom of expression and privacy amongst technology companies, academics, human rights and press freedom groups and so...
Ramsha Jahangir
Ramsha Jahangir is an award-winning Pakistani journalist and policy expert specializing in technology and human rights. Ramsha has extensively reported on platforms, surveillance, digital politics, and disinformation in Pakistan. Ramsha previously reported for Pakistan’s leading English newspaper, D...
Hilary Ross
Hilary Ross serves as a special projects advisor at GNI. She is a program consultant, primarily advising public interest technology policy organizations. Previously, she worked in industry on data transparency and on staff at the Berkman Klein Center, where she maintains an affiliation. Additionally...
