
Human Rights

Online speech and communication are vital components of human rights in the digital era. As a consequence, the decisions social media platforms make to curb hate speech, disinformation, and other harmful content and behavior can either foster meaningful discourse or undermine human rights worldwide. Striking the right balance requires nuanced content moderation policies that account for cultural contexts and diverse perspectives. Equally crucial is how platforms navigate government requests: compliance can protect the public's well-being, but it can also be leveraged by authoritarian governments to quash political dissent. Robust collaboration among tech firms, international institutions, and civil society is imperative to establish global standards that protect human rights, ensuring online spaces remain platforms for dialogue, not vehicles for oppression.

In-House Data Harvest: How Social Platforms' AI Ambitions Threaten User Rights