AI Policy Should Put Privacy Front and Center

Nicholas Piachaud / Feb 20, 2024

The two issues aren’t just independently important — they’re inextricably linked, says Nicholas Piachaud.

Linus Zoll & Google DeepMind / Better Images of AI / Generative Image models / CC-BY 4.0

At Davos last month and in legislative chambers around the world right now, public policy discussions about technology are focused on AI: its benefits, its harms, and whether and how to regulate it.

This is a critical discussion. AI is the defining technology of our time, and we need watershed laws that match the moment. Laws that protect individual rights, promote openness and transparency, stop big tech from cornering the market, and hold AI companies accountable when things go wrong.

But in some ways, discussions about AI policy have muscled out another crucial policy topic: privacy. Deliberations about the EU AI Act command far more attention than the European Commission’s current General Data Protection Regulation (GDPR) consultation. Yet since it came into force six years ago, the GDPR has helped safeguard the privacy of millions of people across the EU. Its upcoming evaluation (May 2024) could leave the GDPR vulnerable to interests that want to see its provisions watered down. Big tech companies, after all, will certainly not miss their chance to try to roll back its most stringent protections.

It’s essential that we pay both issues the attention they deserve, not just because they’re independently important, but because the two are deeply entwined. Indeed, sound data privacy regulations are a necessary precondition for sound AI regulation. At their core, both debates come down to data: how it is collected, who controls it, and how it is used.

Data powers the large language models dominating policy agendas and news headlines. Having more proprietary data is a major competitive advantage when companies train their AI models, which incentivizes a race to the bottom in privacy standards. Mozilla’s *Privacy Not Included guide crystallizes this trend: as more consumer products and services integrate AI technology, they also become more aggressive about collecting (and sharing, selling, and leaking) users’ data. This pattern extends to mental and reproductive health apps, cars, and even kids’ toys. In 2022 alone, there were over 1,800 data breaches in the US, affecting an astonishing 422 million people around the world. Meanwhile, major AI companies like Microsoft are cagey about how they use our data to train AI.

AI and online privacy overlap on many more fronts, too, like the need for transparency and notice for consumers, and the dangers of deceptive design. So where are the most strategic opportunities to put privacy principles at the forefront of the AI regulatory framework?

In the US, passing legislation such as the previously introduced version of the American Data Privacy and Protection Act (ADPPA) — endorsed by Mozilla — would be a significant step toward providing the necessary privacy guarantees that underpin responsible AI. Legal obligations to minimize data collection can prevent companies from scraping and amassing as much data as possible to maintain a competitive advantage.

But a federal privacy law has proved elusive for years. Until Congress passes such a law, there are tactical solutions. The Federal Trade Commission can push forward its critical Commercial Surveillance and Data Security rulemaking, and existing rules protecting consumers and competition can be aggressively enforced. Further, individual states can follow in California’s footsteps, passing laws akin to the California Consumer Privacy Act. In Maine, a proposed data privacy law has rankled big tech companies — which is usually a good indication of impactful privacy legislation.

There are also opportunities across the Atlantic. In the EU, as the GDPR approaches its six-year evaluation, it’s crucial to keep the pressure on enforcement and to ensure the landmark law is not watered down. The GDPR’s protection of personal data is the bedrock of much European digital policy, and weakening its rules could have downstream effects. The latest version of the AI Act references the GDPR 34 times. Meanwhile, European Data Protection Authorities are already using the GDPR to crack down on invasive data processing by OpenAI in the training of ChatGPT.

As these policy discussions unfold and intersect, policymakers should also understand that open source solutions are key to trustworthy AI, and that openness is not at odds with privacy. For example, open source can accelerate privacy-preserving techniques in AI by enabling people to run models on their own devices with their own data. Open source unlocks many other opportunities, too, like a greater ability to scrutinize AI models and more competition in the marketplace.
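
For readers curious what that looks like in practice, here is a minimal sketch, assuming a small openly licensed model and the Hugging Face transformers library; the model name and prompt are illustrative choices, not drawn from this article. The point is simply that inference can happen entirely on a person’s own machine, so the text they feed the model never has to be sent to a third-party service.

```python
# Minimal sketch: running an openly licensed model locally, so prompts
# containing personal data are not sent to a remote API.
# The model name and prompt below are illustrative assumptions.
from transformers import pipeline

# Small open-weights model that can run on an ordinary laptop.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# The prompt, including any sensitive details, is processed on the local machine.
prompt = "Summarize these private notes: follow-up appointment moved to Tuesday at 3pm."
output = generator(prompt, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])
```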

Privacy is the backbone of robust AI policy. As we look forward to new rules for AI systems, we should also look to our existing privacy rules and strengthen them — or, in the case of the US, make them a reality to begin with. In a world where AI systems increasingly make decisions for us and about us, our data must be protected.

Authors

Nicholas Piachaud
Nicholas Piachaud is the director of campaigns for the Mozilla Foundation. He is a policy and advocacy expert who has led work for international organizations across the technology, education and human rights sectors. Today, he leads the Mozilla Foundation's campaigning for open and transparent AI. ...
