The Difficult Life of Trusted Flaggers
Théophile Lenoir / May 28, 2024

This piece is part of a series published to mark the first 100 days since the full implementation of Europe's Digital Services Act on February 17, 2024. You can read more items in the series here.
Are there enough incentives for organizations to take on the complex role of ‘trusted flagger’ under the Digital Services Act (DSA) in Europe? Given the current landscape of online safety, this is uncertain.
Trusted flaggers – organizations tasked with notifying platforms regarding potentially harmful speech – face significant challenges. Efforts to regulate speech often involve silencing someone's voice, whether for good or bad reasons. In tense political contexts, especially around elections, flaggers can become targets of politically motivated attacks. In the United States, those who have attempted to study and prevent the spread of misinformation online are now facing lawsuits. These legal battles consume time and resources, making it harder for flaggers to perform their duties.
What about in Europe? The DSA sets clear rules for trusted flaggers: they must focus on illegal content; their status is granted by a national Digital Services Coordinator (DSC) after they prove their expertise; they must maintain independence from platforms; they commit to submitting notices diligently, accurately, and objectively; and they must publish a yearly activity report. In return, platforms must prioritize their notices. If trusted flaggers misbehave, a platform can ask the DSC to investigate, during which time their status remains in place.
This process offers significant protections against potential abuse and shields trusted flaggers from legal attacks. However, it has its limitations, a significant one being that applying for the status may be too risky, both politically and financially. This piece highlights the inherent fragility of trusted flaggers, who must serve both the interests of a specific social group and the general public. It then presents three scenarios that could prevent the regulation from achieving its objectives: transparency requirements and operational costs could deter trusted flaggers representing social groups from applying; the notification process could be overwhelmed by notices concerning companies' intellectual property; and the national DSC could be politically biased in its administration of the trusted flagger program.
The tension in trusted flaggers
The concept behind trusted flaggers is straightforward: no single authority should be solely responsible for regulating online content. Instead, a multi-stakeholder approach should be employed, involving various actors in the regulation process. This distributes power rather than concentrating it in the hands of one entity, which could potentially misuse it for political gain.
There are two perspectives on this approach. Optimists believe it allows diverse voices and interests to be included in decision-making. Marginalized groups who suffer from online attacks can participate in platform moderation, with trusted flaggers representing them and ensuring their voices are heard. This enhances the quality of moderation, as those directly impacted have the most expertise in identifying the language or behaviors targeting them.
However, there is a downside to inclusion. Pessimists argue that trusted flaggers can be influenced by corporate and political powers. If actors with significant financial resources and strong motivation secure trusted flaggers to represent problematic interests, they can distort fair representation in the moderation process.
Why regulation is hard
This tension presents two challenges for the regulation process. First, how do we draw the line between beneficial and harmful interests? While collective interests are usually positive, associations defending misogynistic views are not desirable candidates. Similarly, corporate interests can be problematic, yet companies subject to defamation or intellectual property theft should be able to protect themselves.
Ultimately, trusted flaggers may represent the interests of specific groups as long as they act in the public interest. That is, their participation should aim to defend individual rights such as privacy, intellectual property, security, freedom of expression, and freedom of thought and opinion. However, tensions exist between these values: freedom of expression, for example, can be misused to share personal information or threaten someone's security. In some cases, assessing whether the balance is appropriate comes down to a judgment call.
Second, how do we verify that trusted flaggers genuinely represent the interests they claim to? Trusted flaggers must prove they carry out their activities "for the purposes of submitting notices diligently, accurately, and objectively" (Article 22). The Irish regulator's application form asks about procedures to ensure flaggers act impartially and objectively; these could include independent governance structures or sanctions in case of abuse by employees. In addition, money makes interests traceable: the DSA specifically requires that trusted flaggers be "independent from any provider of online platforms." But despite these safeguards, interests may go unreported, or the DSC may simply choose not to see them.
What could go wrong? Trusted flaggers might not apply
These challenges impose significant constraints on trusted flaggers. They must maintain high levels of transparency, which carries two major drawbacks. First, a substantial portion of their resources must be allocated to producing transparency reports. Second, a similar amount of attention may be diverted to defending against attacks that arise from the disclosed information. If only 50% of the content they report is taken down, are they reporting too much? Conversely, if 95% is removed, do they wield too much influence? These figures can easily be used to challenge the work of trusted flaggers; indeed, they were designed for such scrutiny. In the hands of politically motivated actors, they can become weapons that drain time and resources.
Despite the high cost of transparency, the DSA does not account for how the diverse actors regulating the online space are funded. Trusted flaggers must secure revenue while maintaining independence, especially from the social media platforms that have often funded their activities. But other sources of funding also pose problems: an association receiving 80% of its funds from a government agency can raise concerns. To prove their neutrality, trusted flaggers will increasingly need to diversify their income sources.
Platforms could be overwhelmed by IP notifications
In contrast, trusted flaggers representing business interests have a clear business model. Businesses must be able to protect their intellectual property, and the DSA allows them to do so. However, the DSA merely states that "industry associations representing their members' interests are encouraged to apply for the status of trusted flaggers" (Recital 61). Because associations are encouraged rather than required, this provision leaves the door open for individual companies to apply, potentially increasing the number of notifications platforms receive.
The DSA aims to ensure a variety of interests are protected. However, platforms may face a disproportionate number of solicitations from private companies, to the detriment of organizations representing social groups. This situation could also disadvantage small companies with fewer resources to monitor online content. If the majority of platforms’ attention is directed toward protecting the commercial interests of the largest actors, the DSA may fail to achieve its goal of building a more inclusive democratic society.
The DSC could be politically motivated
Finally, a significant part of the process relies on the DSC's goodwill. Not all European countries benefit from independent regulators, and a politically motivated DSC could have negative repercussions. Because DSCs are responsible for awarding trusted flagger status, a biased DSC could make it harder for actors defending minorities to gain the status, or easier for those serving political interests aligned with its own. In the first case, the DSC might argue that an association's interests conflict with public interests; in the second, that public interests prevail.
However, there are ways around this. For example, a trusted flagger whose application is rejected in one country could apply in another and submit notices from there, though this workaround is costly and inconvenient. A trusted flagger working for political interests would also have limited power: if a Very Large Online Platform (VLOP) refuses to take down the content, it can be protected by the Irish DSC (where most VLOPs are established) or the European Commission. Nevertheless, trusted flaggers might be further discouraged from applying in countries where the DSC is politically biased.
Building a sustainable environment for trusted flaggers
Very few organizations have gained trusted flagger status to date. In Tech Policy Press, Inbal Goldberger lists several reasons for this, including the fact that not all DSCs have been designated, many organizations are not yet aware of the status, and there are too few incentives to apply. Whether a sustainable trusted flagger ecosystem emerges will depend on country-specific factors. In countries with a strong civil society and philanthropic sector, organizations may find it easier to secure funding while retaining their independence.
Member states need to build a safe and sustainable environment for trusted flaggers. In the short term, public money may be needed, but in the long run, trusted flaggers will need to diversify their revenue sources to ensure their independence. This is the challenge for the next hundred days.