
Moving Toward More Responsible Recommendations

Maximilian Gahntz / Feb 15, 2023

Maximilian Gahntz is a senior policy researcher at Mozilla, where he works on issues related to AI policy and platform regulation.

It’s become a truism that online platforms have an outsized effect on our information ecosystem and society at large. They contribute to the spread of hate and violence. They can help undermine the integrity of democratic processes; in the past year alone, social media has been used to interfere with elections in a number of countries, from the Philippines to Kenya, the United States, and most recently Brazil. And platforms play their part in changing the social fabric of society.

Still, many questions remain: For example, how exactly do these platforms proliferate such harms? And what can be done to effectively rein them in and limit these effects?

For years now, debates have focused on content moderation in a narrow sense: the rules and processes determining what content stays up and what gets taken down. Arguably less attention has been paid to platforms’ recommendation systems that determine what content finds an audience on the platform with no (or minimal) human intervention; these are the algorithmic engines that decide whose friends’ photos we see on Instagram, which hot take we see when we open Twitter, or what we might watch next on YouTube.

That is starting to change now. Last year, the EU passed its landmark platform regulation, the Digital Services Act (DSA), which contains dedicated provisions for recommender systems (although it maintains a focus on content removal and ‘notice-and-action’ mechanisms). For instance, the DSA mandates that online platforms disclose information about how their recommender systems work in their terms and conditions, and that the very largest platforms offer a feed that doesn’t rely on users’ personal data. Meanwhile, on the other side of the Atlantic, the United States Supreme Court is hearing a case next week, Gonzalez v. Google, that could upend how online content is curated and recommended. Despite these developments, discussions of recommender systems often revolve around quick fixes and piecemeal solutions that fall short of meaningfully addressing the problems we face.

Systemic solutions, not quick fixes

In a recent paper we published at Mozilla on this topic, we lay out a comprehensive approach to creating the right conditions for a healthier, more responsible recommending ecosystem. It builds on two key pillars: layered oversight and scrutiny, and empowered and informed users.

To start with, platforms need to enable greater transparency into how their recommender systems work. This includes publicly releasing detailed information about these systems, such as what metrics they optimize for, what signals are used to generate recommendations, or whether certain types of content (for example, from “authoritative” news sources or on specific issues like vaccines) are promoted. Transparency reporting has long since become routine due diligence for the largest platforms, but when it comes to information about the recommendation engines at the heart of their services, we’re still mostly looking at blank space.
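To make the idea of “detailed information” concrete, the sketch below models what a structured transparency record for a ranking system might contain. It is a minimal illustration in Python; every field name, metric, and signal is an assumption invented for this example, not a description of any platform’s actual documentation.

```python
from dataclasses import dataclass, field


@dataclass
class RecommenderDisclosure:
    """Hypothetical transparency record for a ranking system (illustrative only)."""
    system_name: str
    # Objectives the ranking model is trained or tuned to optimize.
    optimization_metrics: list[str] = field(default_factory=list)
    # Input signals used to score and order candidate content.
    ranking_signals: list[str] = field(default_factory=list)
    # Content categories deliberately boosted or demoted by policy.
    promoted_content: list[str] = field(default_factory=list)
    demoted_content: list[str] = field(default_factory=list)


# All values below are invented for illustration.
example = RecommenderDisclosure(
    system_name="home_feed_ranker",
    optimization_metrics=["predicted watch time", "predicted likes"],
    ranking_signals=["watch history", "follows", "device locale"],
    promoted_content=["authoritative news sources"],
    demoted_content=["borderline health misinformation"],
)
```

Even a simple, regularly published record along these lines would answer questions that, as noted above, platforms currently leave blank.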

Similarly, platforms should disclose policies and aggregate data around what content they demote (often also referred to as “shadowbanning”, “borderline” content, de-amplification, or reduction) and how. While such policies are usually made public when it comes to content takedowns, companies mostly stay silent about content demotion. The reasons for this opacity aren’t entirely clear. After all, this is content that is within the boundaries of platforms’ policies and free expression but still considered too harmful to deserve a large audience. In short, it is considered acceptable but undesirable. More light deserves to be shed on this gray area.

Additionally, public interest researchers should be granted access to fine-grained data and documentation in order to better study recommender systems and their effects. They should further have the ability to run tests and simulations on platforms’ systems. This can be an important lever to uncover potential harms and risks, further our understanding of the information ecosystem, and help regulators, civil society, and the public hold platforms to account when they misstep. The EU’s Digital Services Act marks important progress in this area, but critical details remain to be specified in secondary legislation.

Still, platforms can act as information bottlenecks, be it on purpose or unwittingly. Twitter’s recent move to restrict access to its third-party application programming interface (API) is a case in point. It’s no coincidence that much of what we have learned about harms caused by platforms has come from whistleblowers or independent researchers scrutinizing them from the outside. That’s why we need to ensure that researchers who act in good faith and adhere to research ethics and strong data protection standards can use other ways of collecting data and their own tools to study platforms without having to fear being sued or prosecuted. This requires strong legal protections for public interest research.

But platforms also need to conduct their own due diligence. This is why they should commission systematic and rigorous third-party audits of their core algorithmic systems and subsequently publish the auditors’ findings to allow for more public scrutiny. While the field of algorithmic auditing is still nascent, such audits can help shed light on harms caused by these systems as well as potential weaknesses, and prompt platforms to take mitigating measures.

Make this button work

In addition to platforms’ fundamental lack of transparency and oversight, users are too often treated as passive consumers of a service rather than self-determined agents. Instead, platform design should enable users to exercise genuine control over their experiences on the service. As it stands, controls are either hidden away or fail to actually empower users. For example, my colleagues at Mozilla found in their crowdsourced study “Does this Button Work?” that YouTube’s user controls are both confusing and often fail to produce the purported effect, with unwanted content still making its way into study participants’ recommendations.

An array of measures can be deployed to counter this and put users in the driver’s seat. First, platforms should provide users with the means to better tailor their feeds or recommendations to their preferences. This includes, for example, safety features that allow them to block certain keywords, hashtags, or creators; platforms are starting to make headway in this regard. It also includes providing functional user control mechanisms that do as they claim (a minimal sketch of such a filter follows below). Second, users should be able to exert more, and more fine-grained, control over what data, including personal and inferred data, is used to inform their recommendations. Conversely, platforms should not deploy deceptive design techniques that steer users away from making such choices and from limiting the flow of data from user to platform. This way, users could also better influence the degree of platform-driven personalization in their user experience, that is, tailored recommendations based on users’ personal data or observed behavioral profile (instead of, for example, deliberate choices to follow a person or channel), or opt out altogether.
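As a minimal sketch of the first point, the Python below shows what a user control that “does as it claims” could amount to: candidate recommendations are filtered against a user’s blocklists before anything reaches the feed. The data structures and field names are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class UserControls:
    """Hypothetical per-user preferences for filtering recommendations."""
    blocked_keywords: set[str] = field(default_factory=set)
    blocked_hashtags: set[str] = field(default_factory=set)
    blocked_creators: set[str] = field(default_factory=set)


@dataclass
class Item:
    """A candidate recommendation (fields are illustrative)."""
    creator: str
    text: str
    hashtags: set[str]


def apply_user_controls(candidates: list[Item], controls: UserControls) -> list[Item]:
    """Drop any candidate the user has asked not to see, before it reaches the feed."""
    kept = []
    for item in candidates:
        text = item.text.lower()
        if item.creator in controls.blocked_creators:
            continue  # user blocked this creator
        if item.hashtags & controls.blocked_hashtags:
            continue  # item carries at least one blocked hashtag
        if any(keyword in text for keyword in controls.blocked_keywords):
            continue  # a blocked keyword appears in the text
        kept.append(item)
    return kept


# Example: a user who never wants "miracle cure" content in their feed.
feed = apply_user_controls(
    [Item(creator="channel_a", text="Miracle cure revealed!", hashtags={"health"})],
    UserControls(blocked_keywords={"miracle cure"}),
)
assert feed == []
```

The design choice that matters here is that the filter is enforced in the serving path itself rather than being a preference the ranking pipeline can ignore, which is precisely the gap the “Does this Button Work?” study points to.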

Additionally, a precondition for users to exercise control and make informed choices is that the information they need is available to them: meaningful explanations of why and how content is recommended to them, intuitive descriptions of user controls, and insight into why platforms demote or otherwise take action on the content they share.

In it for the long run

None of the measures we propose in our recent paper will fix all of our problems. They can, however, create the foundation for more targeted interventions and long-term change in order to build a healthier information ecosystem.

We’re not fated to the logic currently underlying the design and operation of our digital public sphere. More far-reaching insights into how recommender systems work, as well as access to platform data, will catalyze public-interest research and enable stricter oversight of platforms by regulators and the public. This would both create a better understanding of platforms’ impact on people’s lives and facilitate the development of alternatives. For example, work is underway to explore new optimization metrics that aim to align recommendations with users’ interests and the public interest, and to bridge social divides.
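To illustrate what an alternative optimization metric could look like, the sketch below blends a predicted-engagement score with a “bridging” term that rewards content approved across otherwise divided audiences. The weights, fields, and scoring functions are assumptions made for illustration and do not represent any specific research project’s method.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """A scored candidate recommendation (all fields are illustrative)."""
    item_id: str
    predicted_engagement: float   # e.g., modeled probability of a click or like
    cross_group_approval: float   # e.g., approval across audiences that usually disagree


def public_interest_score(c: Candidate, bridging_weight: float = 0.5) -> float:
    """Blend predicted engagement with a 'bridging' term; the weight is a policy choice."""
    return (1 - bridging_weight) * c.predicted_engagement + bridging_weight * c.cross_group_approval


def rerank(candidates: list[Candidate], bridging_weight: float = 0.5) -> list[Candidate]:
    """Order candidates by the blended score instead of engagement alone."""
    return sorted(
        candidates,
        key=lambda c: public_interest_score(c, bridging_weight),
        reverse=True,
    )
```

The bridging_weight parameter makes the trade-off explicit: how much weight the public interest gets relative to predicted engagement is a deliberate choice rather than a byproduct of engagement optimization.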

Social media and content sharing platforms aren’t going to disappear. If anything, the role of recommender systems is only going to grow more important across domains. Fixing their flaws takes time, but we’re overdue to get started.
