Human Rights Can Be the Spark of AI Innovation—Not Stifle It
shirin anlen / Feb 20, 2025
French President Emmanuel Macron speaks at the Paris AI Action Summit, hosted on February 10-11 at the Grand Palais.
Last week’s Paris AI Action Summit painted a concerning and disappointing picture. Rather than using this moment to foster global collaboration, leaders fixated on national self-interest and commercial expansion. The EU AI Act was framed as a "burden," the concept of AI safety was downplayed in favor of a narrative that pits regulation against innovation, and discussions failed to meaningfully address the harms and risks associated with new technology. Countries—including France, the EU, and the US—positioned themselves in an AI arms race, emphasizing market dominance and rapid development over responsible governance, accountability, and public trust. US Vice President JD Vance’s claim that the AI future will be "won by building," not by "hand-wringing about safety," exemplifies this dangerous soft-law mindset. Notably, Vance’s rhetoric echoes the sentiments of leaders like Emmanuel Macron, who advocated for a "simplified" regulatory approach in Europe.
Yet recent history tells a different story. Just this month, the release of DeepSeek, an AI model developed by a Chinese startup, demonstrated how limitations can actually fuel innovation. While the potential for censorship and surveillance embedded in the DeepSeek platform is incredibly concerning and disqualifies it as a model for responsible AI development, its success under market restrictions and financial and technical constraints highlights an important reality. Despite limited access to high-powered AI chips, DeepSeek built a powerful language model with significantly less capital and computing power than its American counterparts. This achievement reinforces a critical point: constraints can be catalysts for creativity. Regulation and ethical considerations can challenge technological development to be smarter, more efficient, more impactful, and even more sustainable.
Recent surveys show public trust in AI remains low, with concerns about safety, security, reliability, and respect for rights. Yet paradoxically, it is precisely this trust—earned through robust safeguards—that will drive meaningful innovation and adoption. While financial incentives often drive breakthroughs, history shows that prioritizing human rights has consistently fueled transformative, widely adopted innovations.
The notion that a human rights-centered approach hinders progress is a dangerous fallacy. In reality, it has done the opposite. Some of the most groundbreaking technological advancements were born out of the fight for fundamental rights with applications far beyond their initial purpose:
- Privacy and Freedom of Expression: Concerns about surveillance and data collection have driven the development of encryption technologies like the Signal Protocol, powering secure communication apps like Signal and WhatsApp. These tools, often developed in response to crackdowns on activists and journalists, protect our privacy, data, and freedom of speech, especially in times of crisis and under repressive regimes. Similarly, efforts to counter censorship have led to the creation of VPNs, the Tor browser, and secure platforms, enabling vital communication in the face of internet shutdowns and government control. Even projects like Starlink, deployed in Ukraine and Iran, demonstrate how the right to access information can drive technological innovation.
- Inclusion and Identity: The fight for disability rights has been a powerful catalyst for technological innovation, driving advancements that benefit everyone. Screen readers and text-to-speech technology, originally developed for blind users, now power AI assistants like Siri and Alexa. At the same time, voice recognition software designed for people with mobility impairments laid the foundation for hands-free computing and transcription services. AI-generated captions, initially created for the deaf and hard of hearing, have become essential for online communication, language learning, and media accessibility. Digital accessibility standards, such as the Web Content Accessibility Guidelines, have improved usability across the web, benefiting not just people with disabilities but also aging populations and mobile users. The right to identity has also spurred innovation in digital ID systems, particularly for stateless and undocumented individuals. The World Food Programme's pilot of blockchain-based aid demonstrates how technology can enhance access, security, and efficiency in humanitarian efforts.
- Freedom of Assembly and Information: The right to assemble has driven the development of real-time video broadcasting tools like Facebook Live, which became prominent during movements like Black Lives Matter and the Arab Spring. Organizations like WITNESS have advocated for better documentation tools, leading to platform improvements in archiving and tamper-proof metadata. The right to information has spurred innovation in authentication and verification systems like C2PA, ProofMode, and open-source investigation tools. These advancements combat misinformation and protect vulnerable communities. For instance, PhotoDNA, developed to detect child sexual abuse material, demonstrates how protecting vulnerable populations can drive technological solutions, as does innovation in AI detection tools that combat deceptive AI content and preserve trust in an increasingly AI-driven digital world.
These examples are not exceptions; they illustrate a clear pattern. Human rights advocacy has consistently driven technological innovation, demanding accountability, fostering public trust, encouraging wider adoption, and creating new tools to protect vulnerable communities. We must reject the false narrative that prioritizing human rights harms AI development. A human rights-centered approach is not a burden; it is an opportunity. It is the only way to ensure that AI innovation is sustainable, just, and truly beneficial to humanity.
The real burden lies in ignoring our fundamental rights to equality, information, freedom of expression, privacy, data control, freedom of assembly, and dignity in the pursuit of financial gain and national competition that benefits the few. As AI development progresses, human rights must be a core benchmark of what makes technology viable. For too long, “innovation” has been used as an excuse to sideline trust, safety, and user rights. We have already seen how the tide turns: smart regulations, clear liability frameworks, and appropriate guardrails have made technological development safer and more predictable. We cannot afford to go backward.