
Advancing Trust & Safety in the Majority World

Nadah Feteih, Diletta Milana, Jenn Louie / May 1, 2024

On April 3-4, 2024, over 60 experts, including researchers, academics, and practitioners in online trust & safety, were selected to join a convening co-hosted by the Institute for Rebooting Social Media (RSM) and the Integrity Institute. This two-day, closed-door workshop focused on how trust & safety at tech companies plays a powerful role in, and is increasingly responsible for, addressing global inequities that perpetuate conflict and violence in the majority world. The participants gathered to assess the current state of trust & safety and to prioritize future work that accounts for the global reach of platforms while reconciling their business incentives.

From battling techno-authoritarians to advancing inclusion

We began the event by setting intentions to cultivate a brave and inclusive space. Individuals who contributed to the discussions combined their expertise in this field with their lived experiences. The goal of this workshop was to share diverse perspectives on the existing gaps in trust & safety and on how we can rethink and reconstruct industry practices. We covered a range of topics in parallel sessions, from collaborating against techno-authoritarianism to incorporating safety by design in social media products and features. Attendees actively championed inclusion and representation of marginalized communities who don't typically have access to influence corporate decisions. Many platforms remain Western-centric in the way they are governed, and incident responses are reactive and often an afterthought rather than a strategic effort to address the most pronounced failures.

To ensure local contexts and cultural nuances are understood in the product and policy development process, we found resounding agreement among participants on the importance of multi-stakeholder engagement. Many individuals from grassroots organizations expressed frustration about contributing to the discourse on these topics: there is generally unequal representation of people from the regions affected by crises in decision-making rooms. We discussed the potential of creating a streamlined process for companies and policymakers to consult with a wide range of stakeholders on nuanced issues.

As most platforms still optimize for engagement, viral content often sits very close to violating policies, leading to both visible and invisible harm to the most vulnerable communities affected, including censorship, silencing, trolling, abuse, and algorithmic discrimination. Automated components in the content moderation pipeline remain heavily flawed by bias and a lack of representation in their training datasets, and these flaws are hard to eradicate. Transparency in policies, data, or algorithms, especially as they interact in a multi-platform landscape, does not necessarily guarantee user awareness or prevent harm, and may in fact provide advantages to malicious actors willing to exploit vulnerabilities in platform design.

One of the main challenges in understanding and addressing inequities in products is the disconnect in communication and the lack of a shared language to discuss issues that arise; terminology creates barriers and misunderstandings between tech workers, individuals in civil society, and researchers. An example is the term "shadowbanning," which is used to describe the perceived censorship of content shared by individuals raising awareness during crisis situations. While platforms like Meta claim they do not deliberately suppress voices, algorithmic decisions to rank and demote content have that effect (though these decisions are described in opaque, difficult-to-understand wording). During times of crisis, tech companies often adopt a risk management approach to minimize harm, while civil society and advocacy groups push for fairness and justice. However, the definitions of harm employed by these companies often fall short and perpetuate Western-centric constructs.

The misalignment of incentives

Across many sessions, we dissected instances of misalignment and errors by tech platforms that led to catastrophic consequences, and examined how they intersect with broader societal issues and numerous other apparent and persistent failures. The categories and definitions of harm and abuse are usually defined within Western constructs and are difficult to apply to cases from the majority world. Various product interventions, such as fact-checkers, are developed based on research from the Global North and don't account for differences in communication norms across majority world cultures. Centralized, US-based platform teams struggle to grasp the nuances in language and the cultural significance behind specific facts, topics, and ideas, and consequently fail to design safe products for different geographies.

Platforms' reliance on internationally recognized fact-checkers, who tend to focus mostly on international sources for their verification, together with differences in media freedom and general digital literacy, further exacerbates this issue of cultural and geographical representation. Limited resources dedicated to human and automated moderation become even more constrained as stricter regulatory frameworks enforced in the global West draw resources away from efforts to guarantee safe experiences for users in the majority world.

The conversation will continue

We are in a moment when technologies are implicated and weaponized in global conflicts and crises. But more than anything, participants left the event hopeful: in this space lies a commitment to global equity and the collaborative power to grow awareness and innovate toward a future that centers justice and equity. This convening serves as a starting point to elevate and empower diverse perspectives in the field of trust & safety. These gatherings are essential for speaking candidly and sharing recommendations on how to influence decision-making within tech companies so that the majority world is weighed equally rather than treated as a liability. Our emphasis on knowledge-sharing and fostering connections underscores our belief that positive change toward equity is possible within tech companies.

We look forward to continuing to work with the experts who attended this convening, and to collaborating with organizations that share our mission: the TSF Global Majority Research Coalition, Tech Global Institute, and the Integrity Institute.

Authors

Nadah Feteih
Nadah Feteih is currently an Employee Fellow with the Institute for Rebooting Social Media at the Berkman Klein Center and a Tech Policy Fellow with the Goldman School of Public Policy at UC Berkeley. She holds B.S. and M.S. degrees from UC San Diego in Computer Science with a focus on systems and sec...
Diletta Milana
Diletta Milana is a dual-degree MBA/MPA candidate at Stanford GSB and Harvard Kennedy School. Originally from Italy, she graduated from Politecnico di Milano in Computer Science and Engineering, and worked as a Data Scientist and AI Open Innovation Lead at Eni. Most recently, she worked as an AIML P...
Jenn Louie
Jenn Louie is the founder of the Moral Innovation Lab, based on her research initiated at Harvard Divinity School, and works as a Product Manager at the Berkman Klein Center's Applied Social Media Lab. Her research is a compassionate interrogation into how technology is shaping our moral futures and...
