Beware the Privacy Panacea

Joseph Jerome / Apr 20, 2021

Worries about the negative impacts of technology are one area of bipartisan agreement. At one congressional hearing after another, lawmakers in both parties have expressed concern about problematic online content, manipulative technologies and filter bubbles, and privacy invasions and algorithmic bias, all amplified by perceived anti-competitive behavior amongst the biggest tech companies. While everyone has identified the disease, the cure has remained elusive.

There is no agreement about what legislative remedy to advance first. Part of the challenge is that issues affecting the digital ecosystem are interrelated. Efforts to increase competition amongst tech and data-driven companies implicate privacy proposals, while ongoing debates about online content moderation may well be better addressed by privacy rules. Because we’re ultimately fighting over data governance, privacy and data protection sit at the center of the techlash Venn diagram. I’m hardly the first person to make this claim. Brookings’ Cam Kerry, a leading figure in the Obama-era Consumer Privacy Bill of Rights, has argued that federal baseline privacy legislation could help “end the anything-goes information system that has aided platforms’ growth [and] limit their ability to exploit their power with manipulative misinformation or marketing and mitigate anticompetitive or antidemocratic effects.”

But we should be careful not to place national privacy legislation on a pedestal. Industry voices posit privacy law as a way to legislate longstanding fair information practices that seem to boil down to making it clear(er) how companies collect data and giving people a modicum of control over their information, but this limited vision is far from how privacy advocates and academics think about the potential for privacy rules. Some groups have put privacy at the center of a new set of digital civil and human rights, while others see privacy rules through the lens of social and racial justice. To some, a privacy law is incomplete unless it addresses algorithmic amplification, bias, and discrimination.

Better individual privacy is not a panacea for our fraught relationship with technology, and we should consider whether too many hopes and dreams have been placed on the enactment of privacy rules.

Recently, Zachey Kliger suggested on Tech Policy Press that federal privacy rules could curb the impact of online disinformation. Privacy rules may discourage some of the ways disinformation gets amplified through filter bubbles and micro-targeting of ads and political content, but they cannot resolve the organic spread of lies. We should also take care not to overstate the impact of privacy-invasive data mining like that conducted by Cambridge Analytica, which arguably wasn’t that effective in the end. And we should be skeptical that any current proposal will combat disinformation campaigns waged by committed actors or resolve how synthetic media like deep fakes can undermine social trust.

Kliger acknowledges that privacy is no silver bullet, but he also doesn’t explain how privacy rules would achieve what he claims. Instead, he points to the EU’s General Data Protection Regulation, the so-called gold standard of privacy protection. Perhaps, but the EU is separately advancing proposals like the Digital Services Act and a leaked A.I. regulation that more directly address these issues. My concern is that Kliger suggests that issues of consent, purpose limitations, and accessibility should be at the heart of data protection rules, but misunderstands how these provisions could be deployed to fix fake news.

Consent is a problematic place to begin. Not only is it inaccurate to describe the GDPR as a consent-based framework, but “privacy self-management” concepts like control, informed consent, and choice have been dismissed as impractical and ineffective. Any assumption that individuals can make informed decisions about how they share data ignores the time it takes to read privacy policies and the basic challenge of understanding what they even mean.

Focusing on purpose specification -- or requiring entities to clearly delineate somewhere how they intend to use information -- is better, but raises big questions of its own. Kliger then calls for a prohibition on Google and Facebook selling data, which ignores the reality that neither company makes money by selling data directly -- it’s too valuable to them. One of Google’s core privacy principles, in fact, is to never sell data. This also misses the forest for the trees. For years now, advocates in the United States have been fighting over where and when companies “sell” data, but the real challenge is trying to put firm limits on secondary uses of information. This is compounded by the fact that problematic secondary uses come from both rampant data sharing and companies’ own internal collection and processing of information. Under either scenario, however, one has to grapple with the fact that social media companies are operating as designed.

The goal seems to be to use privacy rules simply to make platforms less effective, and much is expected of affirmative rights for individuals to access and delete information about them. These mechanisms are important; data access can be a powerful transparency tool for watchdogs, researchers, and would-be competitors. But there’s not much evidence that individuals would take advantage of their privacy rights vis-a-vis social media platforms, and access rights are rarely made available in a form that is truly useful to individuals. An even bigger issue is that traditional data subject access tools may be ill-equipped to handle the types of data collection on the horizon. Aggressive collection of biometric and neural data will only expand the ability of bad actors, political campaigns, and commercial entities to prey on individuals’ preconceived notions, desires, and confirmation biases -- and meaningful access will be difficult. Any legislative solution will need to include robust public education about data access tools and to ensure that companies cannot overwhelm individuals with useless controls. Otherwise, we cannot expect individuals to use expanded access and deletion rights to empower themselves outside of narrow areas where data is actively used against them.

Ultimately, none of these components of a privacy law will stop conspiracies from percolating on social media. I worry about calling on Congress to pass a targeted privacy law based largely on a high-level examination of EU data protection rules while ignoring the serious political disinformation challenges still facing Europe despite the GDPR.

There are good reasons for Congress to pass a national privacy law. Ever more information about us is being generated and used in ways that are surprising, inappropriate, and harmful, but fueling disinformation campaigns is just one of a million ways data can be used today. If our goal is to combat disinformation, there are better tools that address dis- and misinformation directly. Many solutions may well lie outside the realm of legislative action and focus instead on platform design and functionality. Congress could also invest in civics education, digital citizenship efforts, and media literacy. Lawmakers could fix a lot of problems simply by restricting their own campaigning activities and toning down their own conspiratorial rhetoric when it comes to technology.

