
Centering Community Voices: How Tech Companies Can Better Engage with Civil Society Organizations

Nadah Feteih, Elodie Vialle / Oct 5, 2023

Involving individuals who have expertise working with at-risk communities in the design process will allow companies to address online harms in a scalable and sustainable way, say Nadah Feteih and Elodie Vialle.

Social media platforms have helped a virtual world flourish where people are more connected than ever. But these platforms have also been abused and misused to amplify hate and harassment, harming marginalized and at-risk communities.

Activists are threatened when speaking up against oppressive governments. Human rights defenders and journalists are targeted ahead of elections. Social media has been used to influence the outcome of democratic processes, and with over 75 elections coming in 2024, there is more pressure to tackle issues like misinformation, online abuse, and microtargeting.

We are two individuals from different backgrounds: a software engineer from Egypt who has worked in Big Tech in the US, and a journalist from France working with civil society groups internationally. Yet we see the same problems. As human rights defenders, over the last few years we have documented examples of abuse in which social media platforms bear direct responsibility for spreading hate and suppressing the voices of marginalized individuals (e.g., people of color and women). As tech workers and advocates, we have noticed how Big Tech's business model often incentivizes growth, profit, and “innovation” over addressing human rights and ethical concerns.

There is an urgent need to increase collaboration between companies and communities. Tech companies must center marginalized individuals from ideating features through launching them into production, along the lines of the framework proposed in “Design from the Margins”. We have been involved in efforts with the Berkman Klein Center at Harvard to codesign solutions with tech platform representatives through a multistakeholder approach intended to better protect journalists online and center vulnerable groups in both product design and development. We believe that instead of endlessly triaging products and features that are flawed and historically broken, it is time to focus on reforming product design processes.

Profit vs. Trust and Safety: Tech companies have fallen short on prioritization

Whistleblower and former Facebook (now Meta) employee Frances Haugen claims the company still prioritizes profit and growth over tackling the major harms its platforms have caused. We argue that action is needed to fix biased enforcement systems, address algorithmic bias, invest engineering resources in tools that help users affected by online abuse escalate their concerns, and improve content moderation in various languages (for example, basing content moderation teams in more countries would help with understanding local context and dialects).

Over the past few years, tech companies have been forced to mitigate harms by establishing Trust & Safety policies and teams focused on content moderation, privacy, and security. Independent and semi-independent external entities (such as the Oversight Board) have been created to hold platforms like Meta accountable by reviewing their policy and content decisions. While the ability to reverse biased and unfair decisions is valuable, it is not, in and of itself, a scalable or adequate solution. Restoring a post that was incorrectly removed (whether by a human or an automated review system) does not fix the underlying algorithms that produced the decision or fundamentally address the mistakes made during human review. We cannot conflate short-term band-aid fixes with comprehensive solutions that rebuild systems to center accuracy and fairness.

Unfortunately, many teams at these companies find themselves trying to patch infrastructure that was not initially built or designed with privacy and safety in mind. The communities most affected by bias are often those at the margins who do not have a seat at the table to influence product decisions. In recent years, many individuals have taken a stand and established external organizations, such as the Integrity Institute and All Tech is Human, to bring together professionals at the forefront of public interest tech to develop ideas, write papers, and imagine solutions to the most pressing issues that threaten safety, online and off. It is important to include a diverse set of users in the design process to ensure fairness and minimize bias from the outset, not only when responding to a disaster, crisis, or appeal.

Interventions should focus more on process than on product decisions

In 2021, a change to the Instagram ranking algorithm demoted reshared content on Stories. The change was aligned with the goal of prioritizing original content, but it unfortunately stifled a large audience of users (e.g., activists and community organizers) who use Stories to raise awareness of causes and crises. In an ideal scenario, before an algorithmic or ranking change like this is rolled out, it would be tested with a wide audience, and management would weigh use cases from diverse groups by ensuring that the stakeholders affected by the change are consulted.

Community-driven design is how we can focus on responsible innovation. As these social media platforms (Meta, TikTok, X, YouTube) feel increased pressure to address tension and reconcile with their audiences, much can be gained by adopting a different approach. At the Global Gathering, which convened digital rights organizations in Portugal in mid-September, several organizations expressed frustration that tech platforms do not consult countries in the Global South before launching new products. These individuals from the Majority World need to be more involved in, and consulted on, decisions that affect them in areas such as content moderation, data privacy, product design, and policy.

Indeed, these companies face even more pressure, not only from individual users but from governments, as they must comply with EU regulation such as the Digital Services Act (DSA). Under Article 26, very large online platforms are required to conduct risk assessments covering, among other things, negative effects on freedom of expression and inauthentic use or exploitation of the service. It is important for the at-risk individuals and civil society groups most familiar with the harms perpetuated by these platforms to be consulted during these assessments so that potential negative effects can be mitigated from the outset. Adopting community-driven design would set a precedent and encourage companies to deploy more responsible design processes and work more closely with civil society organizations.

Teams have started working with communities on policies, yet there is more work to be done in the product development process. Partnering with affected individuals, these teams can brainstorm scenarios and simulate the implications of product and feature decisions through a multistakeholder design process in order to preemptively mitigate potential harms. As this codesign process is implemented and scaled, major questions must be considered: How are these individuals and communities chosen? How is the agenda of the design process set? And how will the diversity of voices (ethnic, geographic, and professional background) be represented? It is paramount to acknowledge the power dynamics and knowledge asymmetry between organizations representing at-risk users and the platforms in order to ensure fair and equitable participation in these processes.

Providing a voice to those affected by platform decisions

Individual advocates and civil society organizations have banded together to push for change: pulling together global coalitions such as the Coalition Against Online Violence, organizing campaigns and open letters, and blowing the whistle. Many of the individuals who have been on the front lines of online abuse and attacks had no choice but to develop their own expertise to mitigate the risks, with many survivors becoming pioneers and experts themselves. When platforms fail to answer the needs of these communities, people resort to building their own solutions, such as Block Party, a browser extension that reduced exposure to harassment and online attacks on Twitter (now known as X). Unfortunately, with the launch of an expensive Twitter API subscription tier, many smaller developers were left with no choice but to shut down their apps, including Block Party. Decisions like this further illustrate the glaring divide between large tech platforms and individuals, and they strain already under-resourced organizations. The onus of solving the problems these platforms have created should not fall on these individuals; instead, companies should allocate the time, energy, and people needed to address these issues.

Meta’s Trusted Partner program was created as a communication channel between the platform and civil society organizations. However, there have been many points of frustration in the process: trusted partners, often under-resourced nonprofits in the Global Majority, expend time and resources to track, report, and escalate harms, yet do not receive timely, accurate, or relevant responses, as highlighted in a recent Internews report. These platforms need to invest more time and money in building better crisis protocols (as required by Article 37 of the DSA), including tooling for escalation channels and dashboards to track user and nonprofit reporting. While some social media platforms, like Facebook and TikTok, have rudimentary support inboxes for user reports of hate, harassment, and harm, recent research by the nonprofits PEN America and Meedan found that user reporting mechanisms need to be overhauled, including the creation of dashboards to “track outcomes and understand why decisions were made.” We propose that platform engineering teams collaborate with vulnerable communities to make the reporting and escalation processes easier for individuals and partners to use.

Earlier this month, we reconvened at Harvard University for the 25th anniversary of the Berkman Klein Center for Internet & Society. A common observation in our conversations was the unique perspective that each person brought and the power of inclusivity in brainstorming solutions: involving individuals from marginalized communities and the Global South instead of solely pushing Western and techno-solutionist perspectives. We see ways existing social media platforms can improve, and beyond that, we believe it is time to reimagine the digital public sphere, develop digital public infrastructure and social platforms, and build new solutions from the ground up.

The future of tech, democracy, and human rights is intertwined now more than ever, and it is the responsibility of our generation to address these challenges together.

The authors would like to thank James Mickens and Viktorya Vilk for their support.

Authors

Nadah Feteih
Nadah Feteih is currently an Employee Fellow with the Institute for Rebooting Social Media at the Berkman Klein Center and a Tech Policy Fellow with the Goldman School of Public Policy at UC Berkeley. She holds B.S. and M.S. degrees in Computer Science from UC San Diego with a focus on systems and sec...
Elodie Vialle
Elodie Vialle is a journalist and an Affiliate at the Berkman Klein Center at Harvard, focusing on escalation channels for journalists and human rights defenders facing attacks on social media. She is also a Senior Advisor on Digital Safety and Free Expression at PEN America. Prior to that, she was ...
