
Clearview AI Is Deploying a California Law Meant to Protect Activists From Bogus Lawsuits

Melodi Dincer, Nicola Morrow / Aug 15, 2023

Nicola Morrow is a graduate of NYU Law and a recipient of a 2023–2024 Justice Catalyst Fellowship, and Melodi Dincer is a Supervising Attorney with the Technology Law and Policy Clinic at NYU Law and a Fellow with NYU Law’s Engelberg Center on Innovation Law & Policy.

Clearview AI is a lucrative facial recognition company that has spent the past few years boiling in legal hot water. To the company’s chagrin, several European countries have fined Clearview millions for violating their data protection laws by scraping billions of images of people’s faces from popular social media websites, without their knowledge or consent, to build its facial recognition app. So far, these moves have failed to stop Clearview from profiting from our data. But as lawsuits against Clearview continue to mount, the company has invoked a dangerous legal argument that, if successful, could provide a playbook for numerous AI companies to evade accountability.

Today, Clearview’s main U.S. customers are law enforcement agencies, which use the app to identify people from photos and surveillance footage. The police have a long history of surveilling activists with powerful new technologies, chilling their fundamental rights to assemble and protest. In an ongoing lawsuit in California, immigrant rights activists and organizations are suing Clearview for misappropriating images of their faces, invading their privacy both when it built its app and when it licensed the app to local police departments to use during protests. The complaint alleges that the company’s actions deter protestors from exercising their First Amendment rights out of fear that they will be harmed if police feed images of their faces into Clearview’s AI-run surveillance machine and identify them from Clearview’s scraped sea of faces. That fear is grounded in reality: police departments in the U.S. consistently respond to lawful protests by deploying Clearview’s app on protestors.

Clearview, no stranger to the courtroom, has responded to the activists’ lawsuit by deploying a dangerous practice of its own. The company is testing a legal strategy to nip the case in the bud and avoid the “discovery” phase, in which it would have to turn over information that could include the development and inner workings of its facial recognition app. Once exchanged, this valuable information could help prove Clearview violated the activists’ rights and, later, through court filings, become publicly accessible. The legal process would make Clearview’s technology and business practices more transparent to lawmakers, who could then craft regulations reining it in. To avoid that outcome, Clearview hopes to get the case tossed out of court early on.

The stakes are high, so Clearview is aggressively pursuing various legal arguments. But at the heart of its legal strategy is a law that should not even apply to Clearview in the first place.

The strategy involves what’s called an “anti-SLAPP” law, designed to protect defendants from “Strategic Lawsuits Against Public Participation” (SLAPP). In SLAPP lawsuits, powerful entities sue their opponents in order to intimidate, censor, or silence them from engaging in protests, petitioning the government, or public discourse. The suits are strategic–not substantive–designed to bury innocent defendants in costly litigation until they give up or go bankrupt. California’s anti-SLAPP law protects defendants in these cases by allowing them to file a motion early on in the case, describing how the lawsuit targets their lawful activities. If successful, the burden then shifts to the party suing to prove their lawsuit has merit and is not an attempt to squelch speech. If the judge finds the lawsuit is a SLAPP, they will dismiss the case and can order plaintiffs to pay defendants’ attorneys fees and court costs.

Anti-SLAPP laws are designed to protect free-speech-minded Davids from powerful and often secretive Goliaths that exploit our highly litigious society. In California, the ideal case is when a large land developer sues environmental activists for protesting projects that pollute the local area by picketing construction sites or speaking out at community meetings. Rather than endure these lengthy, expensive, and punishing lawsuits, the activists can file an anti-SLAPP motion and get the case kicked out before it costs them too much.

In a strategic twist, Clearview filed an anti-SLAPP motion against the community activists challenging its facial recognition app in court. Goliath has turned David’s shield into a cudgel, attempting to insulate Clearview’s surveillance empire from legal scrutiny. Cloaking itself in activists’ clothing, Clearview argues that scraping billions of photos from the internet without consent, using them to train its AI systems, and then selling its app to police is akin to a public participation activity such as petitioning the government, publishing news articles, or organizing public protests. It demands that the suit be dismissed quickly, without allowing the activists any chance to gather evidence, depose witnesses, or have experts issue reports to the court on how Clearview’s technology might violate their rights.

Clearview is wrong, both legally and logically. Legally, Clearview has to show that through building and selling its app, the company is attempting to participate in an important public conversation. To do so, it tries to argue that its business is a central part of the public conversation about crime and the identification of criminals because some of its customers have, at times, used the app during isolated criminal investigations.

But even in these scenarios, Clearview is not part of the public conversation – it merely provides the product that some of its customers use to match photographs to faces. In those cases, if anyone is part of a “conversation,” it is the user, not Clearview. Once it sells its app, what police do with it is out of Clearview’s control. Moreover, Clearview is a notoriously secretive company that repeatedly attempts to distance itself from the downstream effects of its surveillance tool – in essence, removing itself from the public conversation as often as it can. Fortunately, the California trial court rejected Clearview’s argument along these lines. The judge found that the activists were not suing to silence some concerned citizen showing a police officer an image she took on her phone at the crime scene; instead, they were challenging a business profiting from illegally appropriating images to sell a product to customers.

Clearview is now appealing the trial court’s decision, and a lot hangs in the balance. If Clearview succeeds, not only will it thwart the activists’ opportunity to vindicate their rights, but it could also provide a playbook for other AI companies seeking to avoid liability for misusing our data–especially data about our biological features–for their own profit. With anti-SLAPP laws on the books in over 32 states, tech companies could soon use anti-SLAPP motions to undermine our chances of regulating AI products built from the exploitation of our data.

Because of its significance, Clearview may fight this battle up to California’s Supreme Court and beyond. Facial recognition systems that promise to identify any person, at any time and place, affect each of us individually and all of us collectively. By rejecting Clearview’s contortion of anti-SLAPP law, courts will ensure tech companies view us as more than mere data points to aggregate and train algorithms on. The law would empower us to control our dataflows and preserve our innate value from the commodifying gaze of the machine.

* As a Legal Fellow with the Knowing Machines Research Project, co-author Melodi Dincer filed an amici curiae brief on behalf of science, legal, and technology scholars supporting the plaintiffs before the trial court in this lawsuit.

Authors

Melodi Dincer
Melodi Dincer (she/her/ella) is a technology privacy lawyer with expertise in biometric surveillance, AI policy, and data justice lawyering. Her work focuses on how the law can entrench power disparities in the development, adoption, and legitimation of new technologies. She is a Legal Research Fell...
Nicola Morrow
Nicola Morrow is a graduate of NYU Law and a recipient of a 2023–2024 Justice Catalyst Fellowship. She is invested in strengthening civil liberties, with particular interests in defending individual speech and privacy rights and fighting government use of carceral surveillance technologies. As a Jus...