
The Bronner report on disinformation hints at Macron’s political priorities

Rachel Griffin / Jan 27, 2022

Rachel Griffin is a PhD candidate at the Sciences Po School of Law and a research assistant at the Digital Governance & Sovereignty Chair.

On the 11th of January, a landmark report on social media was submitted to France’s President Emmanuel Macron. Written by a 14-member commission of academics, journalists and civil society representatives, led by sociology professor Gérald Bronner, it primarily addresses mis- and disinformation on social media, alongside other harmful content such as hate speech and more abstract questions like the promotion of healthy democratic debate. The report, and Macron’s response to it, are worth paying attention to: it includes 30 policy recommendations, many of which would represent quite far-reaching reforms if implemented, and it could influence France’s positions in the ongoing negotiations over the EU’s major platform regulation project, the Digital Services Act.

The report’s key motif is Enlightenment (Lumières). It opens with a quote from philosopher Immanuel Kant’s 1784 essay ‘What is Enlightenment?’ on the need for individuals to ‘dare to know’, question established wisdom, and develop their own understanding of the world. This choice may seem ironic, since the report then dedicates more than 100 pages to the dangers that arise when people refuse to take experts at their word and instead make up their own minds about things like the benefits of Covid-19 vaccines.

French President Emmanuel Macron with Gérald Bronner.

However, the report’s overall vision seems rather consistent with that of Kant’s essay, which praised the supposedly enlightened despotism of Prussia’s King Frederick II and recommended that authoritarian leaders permit a degree of free debate while maintaining tight political control. The report envisions a flourishing online public sphere that brings individuals together in democratic debate, yet is closely supervised by a state which exercises tight control over ‘false news’ and guards against foreign security threats – a vision that resonates quite well with such a philosophy.

First, the report reviews literature on the psychosocial mechanisms that make people vulnerable to believing false information. It argues that strengthening individuals’ capabilities to evaluate the credibility and reliability of information is ‘without a doubt the best response’ to disinformation. In particular, it calls for better promotion of critical thinking, media literacy and the values of the Enlightenment at all levels of the education system, recommending the creation of a new expert body to put together standardised curricula for classes in these areas.

Noting that empirical research in many of the areas discussed is lacking, ambiguous or contradictory, and that much of it skews towards the US context, the report also calls for more research on how platforms affect people’s information environments, beliefs and behaviours. While this should soon be facilitated by the Digital Services Act’s requirements for platforms to share more data with academic researchers, the report also recommends more structured dialogue and cooperation between platforms and researchers.

The report also highlights the tendency to believe information more strongly after repeated exposure – a particular problem in the context of social media, since many content recommendation systems work on the principle that if someone engages with content on a particular topic, they should see more of it. Accordingly, the second section investigates the effects of algorithmic curation and recommendation. While the report notes that evidence on this subject is uncertain and contradictory, it particularly criticises ‘popularity bias’ – the principle that content with which users are already engaging should be promoted to others – which often enables sensationalist, extreme or controversial content to go viral.
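To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-based feed ranker exhibiting popularity bias. It illustrates only the general principle the report criticises; it is not any platform’s actual algorithm, and all names, weights and the scoring formula are invented for this example.

```python
# Minimal, hypothetical sketch of a popularity-biased feed ranker.
# Not any platform's actual algorithm; all names and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    engagements: int   # likes, shares, comments received so far
    impressions: int   # times the post has been shown

def engagement_rate(post: Post) -> float:
    """Fraction of viewers who engaged; unseen posts default to zero."""
    return post.engagements / post.impressions if post.impressions else 0.0

def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    """Score each post by (user's topic affinity) x (engagement rate).

    Because past engagement feeds directly into future reach, content that
    is already popular gets shown to ever more users - the feedback loop
    behind the 'popularity bias' described above.
    """
    def score(post: Post) -> float:
        return topic_affinity.get(post.topic, 0.1) * engagement_rate(post)
    return sorted(posts, key=score, reverse=True)

# A heavily engaged-with post outranks a closer topical match:
feed = rank_feed(
    [Post("a", "politics", 900, 1000), Post("b", "science", 30, 1000)],
    topic_affinity={"politics": 0.5, "science": 0.9},
)
print([p.post_id for p in feed])  # ['a', 'b']
```

Under this invented scoring rule, post ‘a’ wins (0.5 × 0.9 = 0.45) despite the user’s stronger stated interest in science (0.9 × 0.03 = 0.027) – the kind of dynamic that lets already-viral content crowd out everything else.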

In this context, the report recommends stricter regulation of platforms’ technical design. While the details are left very vague, it suggests that regulatory bodies need more industry-specific and interdisciplinary expertise to adequately oversee design decisions. For example, users should easily be able to opt out of algorithms which favour the most popular content. ‘Dark patterns’ – design features which manipulate or mislead users into making certain choices, familiar to anyone who has encountered website cookie banners that make it unfeasibly time-consuming to reject tracking – should also be discouraged or banned. A provision banning many forms of dark pattern was included in the European Parliament’s proposed amendments to the Digital Services Act, so such a rule might soon be introduced at European level.

Next, the report discusses the strategic dissemination of disinformation by foreign actors. Foreign security threats are a major point of emphasis: the report even has a section titled ‘The Militarisation of Informational Space’. While the report makes limited concrete recommendations in this area – other than forming an OECD committee to work with platforms on disinformation-related security threats – this section is notable for promoting a highly securitised discourse, in which foreign disinformation actors are presented as a critical threat requiring a strong state response.

The next section evaluates the legal regulation of disinformation. Its conclusions and recommendations are perhaps the most concerning part of the report; if implemented by French and EU institutions, they would permit a worrying level of state censorship. In particular, the report praises and recommends retaining Article 27 of the 1881 press regulation law, which criminalises the publication of false news if it is published in bad faith and is of a nature that could disturb the public order. This provision has been heavily criticised and is highly dubious from the perspective of international human rights law. Under human rights treaties including the European Convention on Human Rights, the right to freedom of expression unambiguously includes the right to make false statements. While this right can be proportionately restricted to serve other legitimate aims, human rights law generally does not accept prohibitions that are as broadly and vaguely defined as Article 27, or restrictions that are not justified by a clear social harm. As the report acknowledges, Article 27 does not even require that the information concerned disturbs public order, only that it in principle could – making the provision’s scope very flexible, and potentially broad enough to target large amounts of legitimate journalism and commentary.

The report also suggests adding corresponding civil liability provisions, specifying that anyone who digitally distributes news that they know to be false and harmful to the interests of others could be civilly liable – which would also apply to platforms, once such content is brought to their attention. This is also an extremely wide definition which would incentivise platforms to implement broad proactive censorship of content to minimise liability risks, as well as giving state authorities an easy way to have content they dislike removed by reporting it to platforms. Finally, the report even suggests that France should push to amend the proposed Digital Services Act, so that platforms would be required to delete content meeting the conditions of Article 27 across the EU.

Notably, although the report claims that ‘criminal sanctions are an essential instrument in the fight against disinformation phenomena’, it does not back this up with any examples or evidence as to how such sanctions have been used or had positive effects so far. Given the vague and malleable definition of ‘false content that disturbs the public order’, retaining the criminalisation of false news and extending civil liability to cover almost all false information raise serious human rights concerns. Such provisions could become a powerful instrument of state censorship. Despite the report’s rhetorical emphasis on democracy, its recommendations in this area display a markedly authoritarian character.

Interestingly, the report also makes heavy use of the term infox – a rough French equivalent of the term ‘fake news’, which has been largely discredited in English-language research and commentary on disinformation because its vagueness and accusatory tone make it such a useful rhetorical tool for authoritarian and far-right politicians. Connoting a blend of information and intoxication, the term infox presents false information as something sinister, poisonous and powerful. Like the language of militarisation, it plays into the often hyperbolic tone of public discussions around disinformation and the broader rise of ‘internet threat’ discourse used to justify greater state regulation of online speech.

Although the report’s introduction notes that disinformation and conspiracy theories thrive in a broader socioeconomic context of precarity and destabilisation – for example, in areas with higher unemployment rates – this is never mentioned again and is not addressed in the recommendations. In general, there is a disconnect between the report’s discussion of the ‘information environment’ people encounter online, and the broader political and social environment of which digital media are just one aspect.

For example, the report emphasises the government’s desire to crack down on online hate speech, but does not consider how online hate speech might be influenced by the broader political discourse, and by the government itself. Racially charged rhetoric and policies that stigmatise Muslims are ubiquitous in French politics, and have played a prominent role in the campaign for April’s presidential elections. For example, Macron’s interior minister has called Islamism a ‘gangrene’ infecting French society, and famously told far-right leader Marine Le Pen in a TV debate that she was too soft on Islamism. Evidence suggests that the use of such rhetoric by politicians encourages hate speech by citizens, both on- and offline. An approach like the Bronner report’s, which appears to treat harmful speech on social media as a technical problem isolated from broader social and political divisions, is unlikely to adequately address the issue.

More generally, the report arguably betrays what public policy scholar Ben Green recently conceptualised as a solutionist approach to tech regulation. As famously described by Evgeny Morozov, technosolutionist attitudes insist that social problems can be solved through technological innovation, without addressing their underlying social and political causes. Green suggests that much of the debate around tech ethics and regulation follows a similar pattern: here it is not technology itself, but superficial tech ethics initiatives such as consultations and codes of conduct, that provide too-easy solutions and avoid engagement with difficult social and political problems.

In this case, except for the parts dealing with media literacy, the report mostly focuses on solutions involving the platforms themselves, or how they are regulated, rather than the broader social context. Even media literacy education is arguably treated as a silver bullet. There is little discussion of the concrete mechanisms by which it would address disinformation, and no engagement with arguments that people are motivated to share information not only by their understanding of truth or falsity, but also by powerful emotional dynamics. Many recommendations also follow a ‘create a new expert body to oversee X’ template, with few concrete ideas for what such bodies could actually do – arguably simply kicking the can down the road and making it someone else’s problem.

Macron responded to the report’s publication with a speech before France’s press associations. He highlighted the dangers posed by foreign ‘propaganda media’, and placed particular emphasis on the role of the press in combating disinformation and foreign influence operations. As well as calling for stronger press self-regulation with the aim of promoting reliable information, Macron positioned himself as a defender of the press more generally: for example, he promised strict enforcement of the new ‘neighbouring rights’, introduced in the EU’s 2019 Copyright Directive, for press publishers to demand more revenue from platforms. With a tense election campaign underway, Macron’s response to the report and its proposals will likely be influenced by the need to stay on the good side of the press and to position himself as a strong leader who can stand up to powerful American companies. He has welcomed the report’s recommendations in general terms, but has not yet indicated concretely which might be prioritised and implemented. Which of its proposals will have a lasting impact beyond the election remains to be seen.

Note: the Bronner report is currently only available in French. Direct quotes from the report are approximate translations by the author.
