A Digital Crisis: Solutions to Online Abuse
Wafa Ben-Hassine / Oct 22, 2024

Wafa Ben-Hassine is Director of Responsible Technology at Omidyar Network.
California Governor Gavin Newsom recently signed into law new legislation aimed at protecting kids and teens from the harms of social media. The Protecting Our Kids from Social Media Addiction Act, which will go into effect in 2025, will put new constraints on how tech companies deploy algorithmic feeds in minors’ social media accounts, including limiting “addictive feeds” by default. The law’s findings state that the algorithmic delivery of content is linked to harmful mental health outcomes and digital addiction for young users.
This law represents an important step forward — and not just for its impact on kids and teens. In passing this legislation (along with a host of other technology safeguards) California is showing that it is indeed possible to implement meaningful guardrails online — and that the public sector plays a key role in getting it done.
But the scope of dangers online is vast. From child sexual abuse material (CSAM) and non-consensual intimate images (NCII) to fraud, malicious targeting of children, misinformation, and political influence operations, the spectrum of online harm is complex and expansive, making it virtually impossible for any single actor or sector to tackle alone. And children are not the only vulnerable population at risk — research conducted by DFRLab, Dr. Samuel L. Woolley, and Dr. Inga Kristina Trauthig found that immigrant communities are targeted with digital propaganda. Legislative safeguards are an important starting point, but they can’t represent the end of our work. To ensure online safety, we need a multifaceted, coordinated, and nuanced approach.
Compounding this challenge, the national narrative surrounding online safety is often reductive, focusing on binaries such as “preserving encrypted communications” versus “introducing back doors to safety.” The recent arrest of Telegram cofounder and CEO Pavel Durov reignited a critical public dialogue about how we balance privacy and security. Law enforcement frequently frames its concerns in stark terms, but such dichotomies risk underestimating the vast challenge at hand, and the comprehensive solutions needed to successfully address online harm while preserving civil liberties.
To start, we need buy-in from tech companies, civil society, and policymakers. Too often the national conversation about online safety remains stuck in black-and-white terms — largely due to the rooms in which these debates are occurring. Conversations about online safety are generally held in legislative or corporate spaces, and are therefore too focused on singular potential solutions, such as encryption, increased government regulation, or the use of law enforcement.
Each of these strategies has merit, but none is sufficient on its own. While end-to-end encryption is critical to ensuring privacy, it alone cannot protect against the many forms of online abuse. And in fact, a narrow focus on encryption risks diverting attention from more comprehensive strategies that are needed to counter online harm at its roots. While increased government regulation is essential, we need industry support as technology develops (often faster than regulation can keep up). And while law enforcement can crack down on criminal abusers, we know that many online harms don’t rise to the level of criminal prosecution.
Instead, a multidimensional, transparent approach can inspire new (safer) product designs and hold companies accountable for unmitigated online harms. As suggested by Dalberg Design, we must enhance platforms’ responsiveness to public agencies and researchers working in the public interest. Transparency should not be an afterthought; it is essential that platforms clearly communicate their operational mechanisms and how they address various harms.
As we work together, we cannot ignore disproportionately impacted groups. Vulnerable and marginalized communities such as young people and new immigrants face unique risks in the online ecosystem, from being targeted by propaganda and misinformation to experiencing more direct forms of exploitation and harassment. Civil society organizations involved in digital rights must engage these communities in dialogue without infantilizing their experiences, and should leverage the expertise of health professionals and social workers trained to address such sensitive topics. All of us — policymakers, companies, and advocates — must center the most vulnerable in our response to online harm.
Coordination is no longer a “nice to have” — it is essential. Tech companies must increase transparency about their operations and design safer products. Policymakers at all levels need to continue advancing legislation that protects vulnerable populations. Civil society must advocate for those who are most at risk, ensuring that the digital environment becomes safer for everyone.
This is not a task we can afford to delay any further. In today’s digital ecosystem, no family is untouched by the risks that lurk on messaging platforms and social media. As we become more reliant on these platforms for communication, work, and school, the threats they pose must be directly addressed. Ignoring the growing dangers of online abuse — or worse, focusing on strict binaries — will only perpetuate a system in which vulnerable demographics are left to operate in an increasingly hostile digital environment.
The scope of online harm is vast, but so is our opportunity and ability to take action. Together, by recognizing and acting upon the multidimensional aspects of this digital crisis, we can create a safer digital ecosystem for all users. It is time we think critically and creatively about the tools at our disposal, across all sectors. We have the power to navigate the complexities of our online lives and shape a digital future that is safe, respectful, and inclusive for everyone.
Tech Policy Press was a recipient of a grant from Omidyar Network in 2022.