The First Amendment Should Protect Us from Facial Recognition Technologies – Not the Other Way Around

Jake Karr, Talya Whyte / Aug 15, 2023

Talya Whyte is a 3L at NYU Law. Jake Karr is the Deputy Director of NYU’s Technology Law & Policy Clinic and a fellow at the Engelberg Center on Innovation Law & Policy.

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0

Should the First Amendment allow facial recognition companies to scrape photos of you off the internet and train AI surveillance systems that law enforcement then uses against peaceful protestors? Clearview AI thinks so. In recent years, Clearview has faced numerous lawsuits seeking to hold the company accountable for its invasive facial recognition app. And in every one of the suits, the company has attempted to immunize itself from liability by arguing that its business model for developing and deploying its facial recognition technology is “speech” protected by the First Amendment.

But courts should not accept this argument; doing so would be dangerous. Not only would it allow Clearview’s activities to go unchecked, it would also create a major obstacle to regulating emerging AI technologies and protecting the public from their harms and risks.

The absurdity of Clearview’s First Amendment defense is laid bare in litigation that is currently working its way through the California state courts. In Renderos v. Clearview, immigrants’ rights activists and community organizations sued Clearview, along with local governments and law enforcement agencies that have purportedly used the company’s facial recognition app. The plaintiffs are particularly concerned because they regularly engage in peaceful protest activity. Their lawsuit alleges, among other things, that Clearview has illegally appropriated their likenesses and violated their reasonable expectation of privacy in their biometric information, making it possible for the government to exploit Clearview’s technology to chill their rights to free speech and association.

To build its facial recognition database, Clearview has scraped over 30 billion images (and counting) from the internet—most prominently, from social media sites like Facebook, Instagram, and Twitter. It does so by using algorithms to automatically and indiscriminately gather large amounts of information without getting individuals’ or platforms’ consent.

After scraping the images, Clearview uses them to create a faceprint: a unique biometric identifier of each face. It then sells access to its technology through a public-facing platform and app. Clearview users can upload a “probe” image of an unknown individual, and Clearview creates a new faceprint from the probe image. Based on a proprietary algorithm with a secretive set of metrics, the app then spits back images from its database whose faceprints “match” the one created from the probe image. As of March 2023, law enforcement had reportedly used Clearview nearly a million times, including to surveil protestors across the country.

If you’re wondering by now where Clearview’s “speech” is in this process, you’re not alone. In fact, Clearview argues that the entire process of making and selling its app—from scraping to making faceprints to providing the app to customers—constitutes expression protected by the First Amendment. To stake out this claim, the company principally points to a dozen words from a 2011 U.S. Supreme Court case. That case, Sorrell v. IMS Health, dealt with a Vermont law that sought to protect medical privacy by limiting the disclosure, sale, and use of pharmacy records revealing the prescribing practices of individual doctors.

A group of data miners and drug manufacturers sued to block the law, arguing that they had a First Amendment right to share and sell this prescriber-identifying information, in part because the pharmacies who sell it to them are lawfully allowed to collect it for their own internal business purposes. Vermont argued that its law primarily regulated commercial conduct (the nonconsensual commercial sale and use of nonpublic information) rather than speech. Rejecting this argument, the Court explained that “the creation and dissemination of information are speech for First Amendment purposes.”

Thus, comparing itself to the data miners and drug manufacturers in Sorrell, Clearview argues that scraping our images from the internet and feeding them into its secret machine learning algorithm to create unique biometric faceprints is “creating” information. And when it sells its app to users, it is “disseminating” information. An open-and-shut First Amendment case, Clearview urges.

But Clearview should be held accountable for any harmful, unlawful step it takes in building, maintaining, and selling its app. It should not evade liability simply because that step is one part of a larger process of “creation and dissemination of information.” After all, as the Supreme Court has explained in the context of journalism, “the First Amendment, in the interest of securing news,” does not “confer[] a license” on journalists to break the law in the process of investigating a story, even though illegal activity like “stealing documents or private wiretapping could provide newsworthy information.” This is a key distinction from Sorrell, where the underlying collection and creation of information did not involve any allegations of illegality. It does not matter whether you are a journalist or a secretive surveillance company: if you violate people’s rights in order to create and disseminate information, you should not be able to invoke “free speech” to cover your tracks. Clearview’s distorted reliance on a few stray words in Sorrell is inconsistent with this common-sense application of the First Amendment.

Fortunately, Clearview’s argument has received no more than tepid support in the courts so far. In Renderos, a trial judge allowed the case to move forward after finding that Clearview’s “biometric analysis and maintenance of the database is not ‘speech,’” even if other steps in Clearview’s process might themselves encompass First Amendment-protected activity. This is the right approach, and it is consistent with a decision reached by a court in Illinois in a separate lawsuit brought against Clearview by the ACLU.

Embracing Clearview’s framing would hand it a First Amendment get-out-of-jail-free card for almost any violation of law, leaving its secret, commercially motivated facial recognition business insulated from most government regulation and from most consumer protection or civil rights lawsuits. And this is not just about Clearview. Any tech company that uses AI and machine learning to scrape data without consent and turn those inputs into outputs is arguably in the business of creating and disseminating information. Extending Sorrell to cover AI writ large, the path down which Clearview’s logic inevitably leads, would create nearly insurmountable barriers to regulating this growing industry and to protecting our personal information from being exploited for profit, a protection that a majority of us want. Such a judicial interpretation would have grave repercussions for our ability to control our online lives.

Ironically, it would also be “bad for free speech,” as the Renderos case makes clear. Clearview’s invocation of the First Amendment is especially dizzying here, because the plaintiffs highlight the ways in which Clearview’s practices chill the plaintiffs’ own speech. Mass data collection and surveillance of the kind that Clearview perpetrates and perpetuates directly threaten the rights of speech and association protected by the First Amendment. As the plaintiffs explain it, “[t]he ability to control their likenesses and biometric identifiers—and to continue to engage in political speech critical of the police and immigration policy, free from the threat of clandestine and invasive surveillance—is vital to [the] [p]laintiffs, their members, and their missions.”

Clearview gathered billions of images without people’s consent and built a highly invasive facial recognition app based on our biometrics, structuring its entire profit model around privacy invasions and corporate secrecy. The company should not now get to invoke the First Amendment to shield itself from public oversight and accountability. Creating and maintaining a mass surveillance tool used by the state to chill dissent is wholly antithetical to the values enshrined in the First Amendment. When courts are called upon to decide whose rights to prioritize under the Constitution, they should bolster our individual and collective rights to speak, protest, and control our images and data-driven identities, not a tech company’s “right” to build an Orwellian money-making machine.
