Making Media Pluralism Work in the Age of Algorithms
Urbano Reviglio / Jul 17, 2025
Last September, over 60 civil society organizations and academics signed a call published in Le Monde advocating for “algorithmic pluralism” on social networks. Drawing on Francis Fukuyama’s 2021 “middleware” idea, the main proposal is to create the conditions for a consumer-facing market of algorithmic systems in social media, allowing users to choose which one to employ and thus shape their online experiences according to their tastes, interests, and moods. By choosing their customized feeds, users could outsource content curation to third-party providers, be they companies, newspapers, or even individual users and communities. In theory, such a market could spur competition, prompting platforms to refine their current engagement-driven algorithms into more meaningful, interest-based curation systems.
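The middleware arrangement described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not any real platform's API: every name here (`Post`, `RankingProvider`, the two example rankers) is hypothetical, and it deliberately ignores the data-access and privacy questions such a market would raise.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Post:
    id: str
    source: str
    engagement_score: float
    topic: str


class RankingProvider(Protocol):
    """A third-party 'middleware' service that re-ranks a platform's candidate feed."""
    def rank(self, candidates: list[Post]) -> list[Post]: ...


class EngagementRanker:
    """The platform's default: most-engaging items first."""
    def rank(self, candidates: list[Post]) -> list[Post]:
        return sorted(candidates, key=lambda p: p.engagement_score, reverse=True)


class TopicInterestRanker:
    """A user-chosen provider that puts declared interests ahead of raw engagement."""
    def __init__(self, interests: set[str]):
        self.interests = interests

    def rank(self, candidates: list[Post]) -> list[Post]:
        # Interest match sorts first; engagement breaks ties within each group.
        return sorted(
            candidates,
            key=lambda p: (p.topic in self.interests, p.engagement_score),
            reverse=True,
        )


def build_feed(provider: RankingProvider, candidates: list[Post], n: int = 3) -> list[str]:
    """The platform supplies candidates; the user's chosen middleware decides the order."""
    return [p.id for p in provider.rank(candidates)[:n]]
```

The key design point is the separation of concerns: the platform only exposes candidate content, while the ordering logic lives in an interchangeable provider the user selects.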
This idea is not new and has been openly debated for several years. Though it represents a promising path, it is widely acknowledged as a challenging one. Even in the best-case scenario, in which most users enjoy more numerous and more personalized experiences, it seems unlikely this would directly ensure a more pluralistic media environment. Traditional news consumption and algorithm-driven news consumption are two fundamentally different experiences. Algorithmic systems behave differently for different users and change across platforms and over time, influencing not only what users are exposed to (and thus news discoverability and users’ worldviews) but also how users consume news in the first place.
These outcomes are not determined by algorithms alone, but emerge from the sociotechnical systems they are embedded in: interface design, content moderation, and governance practices all interact with algorithmic systems to shape how users engage with information. Simply multiplying personalization algorithms wouldn’t automatically lead to more pluralistic outcomes. Amid increasing media fragmentation, it may even deepen concerns about echo chambers and filter bubbles. Some have also raised concerns that middleware could increase privacy risks by exposing more user data to third parties. And even if coupled with interoperable, decentralized social media, where users could switch platforms while bringing their data and social graphs (i.e., their followers and friends) with them, it is unclear how this vision would concretely unfold in practice, and whether it would ultimately promote media pluralism.
At first sight, the middleware proposal seems to place too much trust in self-regulation and market forces to deliver diversity. Rather than algorithmic pluralism, what is being proposed is more of an “algorithmic plurality.” As with legacy media, plurality typically denotes market diversity and anti-monopoly safeguards, while pluralism generally refers to ensuring broad access and visibility of diverse voices and perspectives. While algorithmic plurality focuses on increasing provider options, algorithmic pluralism concerns the actual diversity of content and viewpoints that users encounter and, eventually, experience. These tensions should come as no surprise. In media policy, a conflict has long existed between market-oriented and cultural or democratic political rationalities, as well as their differing conceptions of media pluralism. Accordingly, such concepts are often subject to political contestation, with different stakeholders seeking to frame the problem in ways that align with their interests.
In the European Union, where media pluralism is a longstanding policy objective, several concerns arise. Should we really welcome a digital sphere in which users fine-tune personalization algorithms without their providers having to actively support specific objectives and values such as promoting algorithmic awareness, quality media, or public-interest content? Is it truly desirable that users could eventually choose their community-managed and possibly ideologically-driven content moderation systems? There are also more pragmatic concerns: How can alternative personalization algorithms compete with the highly optimized, engagement-driven ones of dominant platforms? Would the average user embrace such a model? These questions are open-ended, but one thing is clear: algorithmic plurality is a welcome yet insufficient condition for media pluralism. So, what should algorithmic pluralism really entail?
Towards a more nuanced view of algorithmic pluralism
To move forward, we need to unpack what algorithmic pluralism could truly mean in practice. Despite its growing relevance, this concept has been only marginally explored in academic literature, let alone in media policy. Existing definitions tend to mirror the traditional concepts of media governance in digital environments, referring either to media plurality, to media pluralism, or to both. A comprehensive conceptualization of algorithmic pluralism, however, has yet to be systematically developed.
One of the key challenges that algorithmic plurality fails to address is exposure diversity—the idea that users should be exposed to a wide range of content and perspectives. Diversity is a multidimensional and multivalent concept that can be interpreted in many different ways. What kind of diversity shall be promoted—whether content, source, topic, or other dimensions—and, importantly, how and how much of these would be desirable, is a normative question that defies definitive answers. Although the new European regulations, such as the Digital Services Act (DSA), contain some basis for promoting diversity in social media, it remains unclear how this would or even should be substantiated, if at all. In practice, this could mean, amongst others, to algorithmically amplify public-interest content, to give prominence to authoritative and/or public service media content, or even ensuring a balanced exposure to political issues, especially during electoral periods —much like how traditional must-carry rules ensure that public broadcasters are included in TV offerings. While a market of recommender systems would likely give rise to similar diversity-oriented personalization algorithms, a problem would persist: if users do not enjoy granular options and tools for diversity exposure—or even if they are not even aware of the “lost diversity” and its value—how can we expect them to proactively seek for it?
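To make exposure diversity concrete along just one of its dimensions (source diversity), a measure often used in the recommender-systems literature is the normalized Shannon entropy of the source distribution in a feed. The sketch below is purely illustrative; which metric to use, and over which dimension, remains part of the normative question raised above.

```python
import math
from collections import Counter


def source_diversity(feed_sources: list[str]) -> float:
    """Normalized Shannon entropy of the sources appearing in a feed.

    Returns 0.0 when every item comes from a single source and 1.0 when
    items are spread evenly across all sources present.
    """
    counts = Counter(feed_sources)
    if len(counts) <= 1:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # Divide by the maximum possible entropy for this number of sources.
    return entropy / math.log2(len(counts))
```

A feed of ten items from one outlet scores 0.0; a feed spread evenly across several outlets scores 1.0. A regulator or middleware provider could, in principle, track such a score over time, though a single number inevitably flattens the richer notion of pluralism discussed here.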
A broader conception of algorithmic pluralism should adopt a more comprehensive approach, considering the entire process of algorithmic development and deployment. First, we cannot disentangle algorithmic pluralism from the broader set of values that media pluralism upholds. Other intertwined values—such as tolerance, factuality, and civic engagement—can also be promoted by design. For instance, this could involve proactively promoting “authoritative and professional media content,” something EU regulation could mandate. Beyond values, algorithmic systems can also be designed with specific objectives or normative goals. These include promoting ‘pro-social’ behaviors that encourage healthier user engagement, or favoring exposure to diverse perspectives in order to foster societal cohesion (so-called ‘bridging systems’). Ensuring a plurality of the values and objectives that algorithmic systems promote could be another precondition for algorithmic pluralism.
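The bridging idea mentioned above can be illustrated with a toy scoring rule. One heuristic discussed in the literature on bridging systems is to rank content by its lowest approval across groups rather than by total engagement, so that only content resonating across divides ranks highly. The group labels and numbers below are invented for illustration.

```python
def engagement_score(approvals_by_group: dict[str, float]) -> float:
    """Baseline: total approval, which a polarizing item can maximize."""
    return sum(approvals_by_group.values())


def bridging_score(approvals_by_group: dict[str, float]) -> float:
    """Score an item by its *lowest* approval across groups, so divisive
    content that thrills one side and repels the other ranks poorly."""
    return min(approvals_by_group.values())


def rank_items(items: dict[str, dict[str, float]], score_fn) -> list[str]:
    """Order item names by the chosen scoring rule, best first."""
    return sorted(items, key=lambda name: score_fn(items[name]), reverse=True)


# Hypothetical approval rates per group for two items:
items = {
    "partisan": {"left": 0.95, "right": 0.05},
    "bridging": {"left": 0.45, "right": 0.45},
}
```

Under the engagement rule the partisan item wins (0.95 + 0.05 beats 0.45 + 0.45), while under the bridging rule the cross-cutting item wins (0.45 beats 0.05): the same inventory, ordered by a different objective.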
Second, we cannot disentangle algorithmic systems from interface design: the interface is where users meet and interact with algorithms. Yet today’s design choices often prioritize simplicity over diversity. Tools and features that allow users to modulate exposure, express interests, or explore alternative viewpoints are rare. It would therefore be essential to develop minimal standards for diversity-oriented options and tools. At the same time, the heterogeneity of users needs to be considered. Algorithmic pluralism should also mean interface pluralism: options for novice and expert users alike to shape their own media diets. Different users have different needs in achieving their own optimal pluralism, and systems can accommodate these needs especially by leveraging user feedback. Offering a range of interface options catering to diverse user preferences and skill levels—an aspect often overlooked yet crucial—should be part of algorithmic pluralism.
Third, we cannot disentangle algorithmic curation from algorithmic development and content moderation. There are indeed several influential processes in which pluralism could be fruitfully embedded. First of all, the diversity of datasets for algorithmic training and user modeling. Existing datasets and the consequent user modeling (e.g., how a user profile is constructed) often lack diversity along various dimensions. A prominent example is the underrepresentation of different languages, an issue that has prompted the European Commission to launch the Alliance for Language Technologies European Digital Infrastructure Consortium. Moreover, the algorithmic systems of large platforms interpret user behavioral data narrowly—yet effectively—eventually clustering users in homogenizing ways.
For example, when they conduct A/B testing to fine-tune their algorithms, platforms tend to assume a “universal user,” ignoring the multiplicity of contexts and experiences that shape how news content is consumed. At the same time, implicitly inferred preferences weigh more than the ones users make explicit, further contributing to homogenization. With less diverse datasets and less explicitly personalized experiences, users may be exposed to less diverse and less relevant information. Benchmarks for datasets and user modeling that better capture the complexity of users would represent another decisive step for ensuring pluralism.
The diversity, in terms of background and culture, of the people involved in algorithmic development, deployment, and governance also represents a fundamental aspect. Take, for example, the tech workers employed in algorithmic development, who are predominantly white and male. The literature shows that different backgrounds could help reduce bias and potentially create the conditions for more inclusive, pluralistic outcomes. To some extent, this echoes the traditional policy challenge of ensuring diversity among newspaper editors, who rarely come from minorities. A comparable issue arises with content moderators, who can be rather influential for content availability and diversity of exposure. More moderators from a wider variety of backgrounds, and in particular linguistic ones, can substantially improve the accuracy of content moderation decisions. The same applies to fact-checkers: when fact-checking organizations come from diverse backgrounds, they may be better equipped to identify and challenge biases in selecting the news and political issues to fact-check.
Even the policymakers involved in algorithmic governance, especially AI practitioners such as auditors, could be more effective when they are more diverse in terms of cultural background and academic discipline. At present, existing AI governance frameworks fail to incorporate ethically and culturally diverse voices into decision-making processes. At the same time, it is equally essential to integrate all stakeholders’ voices, especially those of users, who are usually the least heard. These voices could be integrated not only through explicit feedback but also through participatory governance, such as X’s (formerly Twitter’s) community-based fact-checking initiative Community Notes, as well as through participatory design. All in all, there are several processes in which diversity could be mandated and cultivated to ensure that a market of social media algorithmic systems would truly promote pluralism at all levels.
Conclusion
Safeguarding media pluralism in the digital sphere requires fundamentally different strategies than in traditional media environments. Simply multiplying platforms or personalization options won’t automatically lead to more pluralistic outcomes. Algorithmic plurality alone cannot preserve the public infrastructures we need in order to sustain democratic discourse in today’s interconnected societies. We need not only decentralized and community-driven social media, but also large social media that serve as global public spaces with shared norms and a common basis of facts. The challenge ahead is not only to diversify algorithms or platforms, but to embed pluralism in the very fabric of these systems and of social media themselves.
More robust and systemic policies are needed to ensure that algorithmic plurality is socially, and not only individually, beneficial. A genuine commitment to pluralism must go beyond user choice. It requires attention to how algorithms are developed, deployed, and fine-tuned, as well as to the cultural, legal, and institutional contexts in which they operate. This calls for a systemic, holistic approach that moves beyond market-driven incentives and sets standards for promoting pluralism by design.
With its distinctive values and regulatory frameworks, the European Union is well placed to lead this shift, moving beyond viewing media merely as a marketplace of (algorithmically mediated) ideas toward one that prioritizes inclusion, dignity, and pluralism throughout algorithmic development. We must therefore rethink algorithmic pluralism as part of a broader structural effort to uphold media pluralism in the digital age.