A Federal Data Privacy Law May Be the Best Tool to Combat Online Disinformation

Zachey Kliger / Apr 16, 2021

The COVID-19 pandemic relegated data privacy to the back burner of the public policy agenda. But discussions of a federal data privacy law – which, to date, the U.S. lacks – have resumed in Congress. Proponents of a federal bill argue that it would provide much-needed protections for citizens whose personal information continues to be excessively, and often unknowingly, harvested by internet service providers like Comcast and Verizon and by internet platforms like Google and Facebook. Indeed, a well-enforced privacy law could do just that.

But a federal data privacy law would likely accomplish another goal: curbing the impact of online disinformation – false or misleading information spread with the intent to deceive.

Events of the past four years have crystallized the dangers of online disinformation. Ahead of the 2016 election, the political consulting firm Cambridge Analytica used data harvested from as many as 87 million Facebook profiles to target individual U.S. voters with political advertisements. Some of the ads were designed to suppress Democratic turnout in states that swung the election to Donald Trump.

In 2017, members of the Myanmar military relied on Facebook to identify and target the country’s Muslim Rohingya minority, fueling a campaign of violence that United Nations investigators have described as carried out with genocidal intent.

And throughout the pandemic, foreign actors have sought to spread disinformation about COVID-19, sowing doubt about vaccines and fanning the flames of partisan division.

A data privacy law wouldn’t rid the digital universe of these bad actors, but it would render them less effective: without detailed data on users’ political beliefs, search history, consumption habits, and location, disinformation campaigns become weapons without a target. Weakening their precision lessens their potency.

Europe’s privacy law, the General Data Protection Regulation (GDPR), went into effect in May 2018 and is widely considered the global gold standard in privacy regulation. The reception from businesses and consumers alike has been mixed, but early returns indicate the law’s impact has been largely positive. For starters, EU regulators have issued over $57 million in fines against Google alone for violations. Furthermore, GDPR has put a spotlight on data protection and prompted many OECD countries, including the U.S., to consider privacy legislation of their own.

A report published in Internet Policy Review, a Europe-based journal covering internet regulation, found that GDPR has reduced unlawful political micro-targeting and proven an effective tool for limiting disinformation and political manipulation. A well-crafted federal privacy bill modeled on GDPR would allow the U.S. to reap similar benefits.

The key now for federal policymakers is to produce a bill that includes the provisions that have made GDPR successful, while excluding the overly ambitious regulations that have stalled privacy bills in New York, Maryland, and Massachusetts and would surely slow a bill’s progress in a divided Congress.

Specifically, a federal data privacy bill in the U.S. must address the three core issues at the heart of the GDPR: consent, purpose limitation, and data access.

First, the law should require the biggest technology companies to obtain explicit consent before collecting user data. Currently, Facebook and Google opt users in to data collection automatically, leaving it to users to comb through onerous privacy settings to opt out. Federal law should flip that default, so that these companies collect data only from users who explicitly opt in. This would allow users to make an informed decision about how much personal information they want to share.

Second, the bill should include a purpose limitation provision that restricts the information the largest tech platforms may sell or share with third parties. For example, Facebook and Google should be barred from sharing users’ political views, private messages, photos, and facial recognition data. Establishing commonsense guardrails around such intimate personal details is unlikely to jeopardize ad sales. More importantly, it would make it harder for bad actors to access sensitive information.

Third, the big tech companies should be required to maintain documentation of the user data they collect, including the metadata that drives their advertising businesses. Users should be able to access this information at any time and request to have it erased.

Without strong enforcement, however, a federal data privacy law is unlikely to be effective. A 2011 consent decree with the Federal Trade Commission (FTC) bars Facebook from sharing user data without obtaining explicit consent; yet the company has repeatedly continued the practice with little consequence.

Therefore, in addition to the House drafting a new data privacy bill, the Senate should pass the Data Protection Act, a bill introduced by Senator Kirsten Gillibrand (D-NY) in February 2020 that would create a federal data protection agency. If this sounds like a pipe dream, consider recent history: Elizabeth Warren’s proposal for a Consumer Financial Protection Bureau, dismissed by Washington insiders in 2007, was signed into law within two years of Obama taking office. To date, the CFPB has returned $12 billion to 29 million Americans who have fallen victim to financial wrongdoing.

To be sure, a federal privacy law would carry costs. The Information Technology and Innovation Foundation (ITIF) estimates that a privacy bill mirroring Europe’s GDPR could cost the U.S. economy approximately $122 billion per year. The lion’s share of this figure reflects the projected impact on business productivity and economic value: ITIF estimates that reduced access to data and lower advertising effectiveness could cost U.S. businesses more than $100 billion in annual revenue.

Mark Zuckerberg, Sundar Pichai, and other Silicon Valley executives have also argued against a comprehensive federal privacy bill in the United States. They contend that unchecked data collection, and the behavioral targeting it enables, helps small- and medium-sized enterprises (SMEs) as much as it helps the largest tech platforms.

But such arguments don’t hold up to scrutiny.

First, if consumers find targeted advertising as useful as Facebook and Google say they do, many will choose to opt in to data collection. In fact, in the two years since GDPR took effect, 90% of EU citizens have opted in to online data collection. If a similar trend held in the United States, ITIF’s estimated cost would prove overstated.

Second, a targeted bill that avoids overly ambitious provisions, like requiring businesses to act as “data fiduciaries,” would reduce compliance costs and lessen the burden on smaller companies.

A federal data privacy bill has notable limitations. Facebook and Google have done their best to sidestep compliance with GDPR. Larger companies will always be better equipped than smaller ones to respond quickly to new regulations. And strengthening data privacy doesn’t directly penalize bad actors who launch disinformation campaigns.

But unlike other proposals to curb online disinformation, a data privacy bill sidesteps thornier issues like censorship and political bias, and could realistically make it through a divided Congress.

The disease of online disinformation defies any one cure. Content moderation, fact checking, and other efforts to curb disinformation are necessary but reactive: these strategies police the internet after the fact. Expanding data privacy would begin to treat the underlying condition. Absent a silver bullet, federal policymakers should prioritize data privacy.
