Safeguarding Freedom of Expression in the AI Era

Jordi Calvet-Bademunt / Nov 4, 2024

The European Union (EU) adopted the Artificial Intelligence (AI) Act in June 2024. Hailed as “the world’s first comprehensive AI law,” the AI Act includes a set of obligations for high-impact general-purpose AI models, which will begin to apply next year. Models are presumed to have a “high impact” when the amount of compute used to train them exceeds a set threshold. According to an August 2024 analysis, eight models from companies including OpenAI, Google, Meta, and Mistral are likely to be designated as high impact under the AI Act.

The AI Act requires providers of high-impact general-purpose systems to “assess and mitigate possible systemic risks.” Europe’s Digital Services Act (DSA), a law imposing similar requirements on very large online platforms and search engines, has shown that such obligations can unduly restrict freedom of expression if inadequately applied.

Although the AI Act is already in place, there is an opportunity for freedom of expression advocates to protect this fundamental right through the General-Purpose AI Code of Practice, which is being developed. This Code is intended to guide general-purpose AI providers in implementing the provisions of the AI Act on systemic risk and other requirements until harmonized standards are approved in a few years.

What is systemic risk, and what are the concerns?

Requiring powerful, general-purpose AI models to mitigate “systemic risks” sounds reasonable, as responsible companies should want to assess the societal impacts of a technology as revolutionary as AI and address any associated risks. But what does assessing and mitigating systemic risk entail in practice? And what risks might this pose for freedom of expression?

The AI Act defines systemic risk to include risks “having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole.” Based on this definition, does generating content that promotes a protest constitute a systemic risk? How about content that is critical of the government, supports one of the two sides in the Israeli-Palestinian conflict, advocates for LGBTQ+ rights, or opposes abortion?

In practice, tackling systemic risk requires striking an extremely complex balance. AI providers will presumably struggle to determine what counts as “actual or reasonably foreseeable negative effects on […] society as a whole,” as the AI Act puts it. Similarly, providers may struggle to assess and mitigate systemic risks involving values that are often in tension, such as the fundamental right to freedom of expression, public security, and safety.

Moreover, the AI Act joins the DSA and the United Kingdom’s Online Safety Act as part of a compliance model that, as Daphne Keller puts it, “is a mismatch with the regulation of speech.” The systemic risk obligations in the AI Act reflect a shift toward compliance-oriented governance of digital speech.

This model is problematic because it incentivizes the over-removal of speech. To avoid fines and related reputational hazards, it is reasonable for companies to prioritize interests other than freedom of expression, which often protects content that is controversial or disliked by authorities. Social media has a long history of suppressing minority voices. Systemic risk obligations could worsen this trend not only on online platforms but also in AI.

The AI Act’s lack of clarity also opens the door to abuse from public authorities, particularly from the enforcer of the Act’s rules on systemic risk, the European Commission. For example, “public security” and “safety” – terms included in the AI Act – are among the most common reasons governments worldwide cite to justify shutting down or blocking access to the internet for their populations. Moreover, the political nature of the enforcer remains a cause for concern. Even if very competent technical teams are in charge of the day-to-day application of the AI Act, they are supervised by a Commissioner. Commissioners typically are or have been active in politics, which can reduce their perceived neutrality. These concerns may sound hypothetical. However, as the DSA has already shown, the challenges of applying these rules while protecting free speech are real.

Lessons learned from the Digital Services Act

The DSA, Europe’s online safety rulebook, requires very large online platforms and search engines to assess and mitigate systemic risk. The DSA’s definition of systemic risk is similar to the one in the AI Act, and it likewise requires companies to balance conflicting interests such as freedom of expression, public security, civic discourse, and physical and mental well-being. The United Nations Special Rapporteur on Freedom of Expression raised concerns when the DSA was being discussed, pointing out that whether the systemic risk obligations protect human rights “will depend ultimately on how clearly and narrowly they are drafted into law and on the effectiveness and independence of the regulatory bodies.”

In the few months the DSA has been in force, it has become clear that the free expression concerns it raised were not hypothetical.

In the summer of 2023, the then-commissioner overseeing the DSA, Thierry Breton, stated that online platforms could face shutdowns under the DSA if they did not crack down on problematic content during riots. After facing strong backlash from 66 civil society organizations, he softened his statement. In October of the same year, in the context of the war between Israel and Hamas, Commissioner Breton sent letters to Meta, TikTok, X, and YouTube, drawing a false equivalence between illegal content and disinformation, which is not inherently prohibited. In February 2024, reports surfaced that Commissioner Breton had allegedly pressured the DSA’s technical team to pursue investigations against X. Finally, in August 2024, he sent X a letter shortly before Elon Musk’s live-streamed conversation with one of the two major U.S. presidential candidates, cautioning that the Commission was monitoring “the potential risks in the EU associated with the dissemination of content that may incite violence, hate, and racism [...] including debates and interviews in the context of elections.” Two letters from civil society organizations raised concerns that Breton’s actions amounted to interference in foreign affairs and to politicized enforcement of the DSA.

Mr. Breton resigned in September, and the Commissioner in charge of the DSA and the AI Act for the 2024-2029 term will hopefully be more mindful of freedom of expression. Still, the DSA and the AI Act remain susceptible to misuse, placing freedom of expression at risk.

What can freedom of expression advocates do?

The AI Act, like the DSA, brings challenges. Still, the DSA’s experience shows that civil society and, more broadly, freedom of expression advocates should remain engaged and vigilant to ensure that enforcement decisions reflect a commitment to safeguarding freedom of expression. For now, advocates should pay particular attention to the draft of the General-Purpose AI Code of Practice. According to the AI Act, this Code will contribute to the “proper application” of the Act until harmonized standards are approved in a few years. The Code is officially non-binding, but AI providers will have strong incentives to align with its provisions: adhering to the Code of Practice will serve as a means of demonstrating compliance with the relevant provisions of the AI Act. Providers would therefore have little incentive to deviate from the Code to protect freedom of expression, especially unpopular speech.

Over 1,000 people are involved in developing the General-Purpose AI Code of Practice, a process led by five chairs who supervise four working groups on issues such as “risk identification and assessment” and “technical risk mitigation.” These chairs have been appointed by the EU AI Office, which is part of the European Commission.

AI safety risks will undoubtedly be among the focus areas of the discussion, as evidenced by the public consultation on “trustworthy general-purpose AI” that the Commission conducted over the summer. While tackling these issues, all of us involved in the Code of Practice should do our best to protect freedom of expression in AI.

In particular, those involved in the working group focusing on “risk identification and assessment” should ensure that freedom of expression is adequately considered. So far, the Commission’s actions have not been encouraging. While its public consultation included references to “fundamental rights,” it made no explicit reference to freedom of expression. Similarly, unlike the DSA, the AI Act’s definition of systemic risk does not reference this fundamental right.

As I have argued, human rights standards can serve as guidance for protecting freedom of expression in generative AI. These standards are not perfect, but at a minimum they would require companies to have public, clear, and detailed general usage policies. They would also require that speech restrictions in AI comply with the principles of legitimacy and proportionality, ensuring that content restrictions rest on solid justifications and do not go beyond what is necessary.

In addition, the experts in the working group focusing on “technical risk mitigation” should ensure that restrictions at the model level do not go beyond what is necessary. Prebunking, debunking, and counterspeech should be preferred to censorship when dealing with misinformation and hate speech. Researchers and journalists should be able to use AI models to study controversial topics. Users should be able to explore issues, including political topics, freely. Europe should not emulate some aspects of Chinese regulation, where the leading internet regulator reportedly tests AI models’ responses on sensitive topics to ensure that they “embody core socialist values.”

The Code of Practice chairs will share the first draft of this crucial document in mid-November. This is a pivotal moment for safeguarding freedom of expression within AI governance. As this draft takes shape, advocates must press for clear protections that ensure AI models reflect a diversity of viewpoints and that the AI Act protects the fundamental right to freedom of expression.
