Your Face Belongs to Us: A Conversation with Kashmir Hill

Justin Hendrix / Sep 24, 2023

Audio of this conversation is available via your favorite podcast service.

In 2019, journalist Kashmir Hill had just joined The New York Times when she got a tip about the existence of a company called Clearview AI that claimed it could identify almost anyone from a photo. But the company was hard to contact, and people who knew about it didn't want to talk. Hill resorted to old-fashioned shoe-leather reporting, trying to track down the company and its executives. By January of 2020, the Times was ready to report what she had learned in a piece titled “The Secretive Company That Might End Privacy as We Know It.”

Three years later, Hill has published a book that tells the story of Clearview AI, but with the benefit of three more years of reporting and study on the social, political, and technological forces behind it. It's called Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy As We Know It, just out from Penguin Random House.

What follows is a lightly edited transcript of the discussion.

Kashmir Hill:

I'm Kashmir Hill. I'm a technology reporter at The New York Times and the author of the new book, Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It.

Justin Hendrix:

Kashmir, I'm so pleased that you can speak to me today about your book, which is of course about Clearview AI, but it's about many other things.

I think the most interesting thing about this book is that it seems to have many starting points. How did you arrive at this structure for the book?

Kashmir Hill:

Well, I have two different timelines in the book. They're both chronological, but they're unfurling in two different moments. One is the now: a few years ago, I discovered this company that no one had ever heard of, when a tipster sent me something they had come across in a public records request to the Atlanta Police Department.

And this company called Clearview AI claimed it had scraped billions of photos from the public web and social media sites to develop a facial recognition app with something like 99 percent accuracy, one that could identify people who hadn't given their consent to be in that database.

And the second timeline is how we got here. When I first reported the existence of Clearview AI, it was really shocking. Part of it was that facial recognition technology had come that far, because I think generally people thought that facial recognition technology was really flawed, that it didn't work that well, that it was biased. So it was pretty shocking to be in that moment.

And so I wanted to explain to people not just how Clearview came to develop that radical technology, but to set the stage for how they were able to access the tools they used.

Justin Hendrix:

Perhaps it's impossible to do, but at one point, in one of the chapters, you go all the way back to Aristotle. Can you talk to me very generally about the various ideas about the face that are at work here?

Kashmir Hill:

Yeah. So Aristotle believed that only humans have a true face. Other animals obviously have something on their head with eyes and ears and a nose, but only the human face conveys the soul, and everything that makes us so unique is actually reflected in that face.

He said that if people had eyebrows of a certain shape, they were this kind of person; if they had a big nose, they were slovenly. I mean, it was just crazy palm reading for the face. But this strain of thought took hold.

And during the Victorian age especially, there were a lot of big thinkers who believed this. Charles Darwin actually wrote that he almost wasn't able to be the chief scientist on the Beagle because the ship's captain was a physiognomist and thought that his large nose meant he wouldn't do the job well.

And it's funny, because Darwin's cousin, Francis Galton, was one of the big proponents of physiognomy, and really believed that you could tell if somebody might be a criminal based just on their facial features. He would do things like make photo composites, and we're talking about the 1800s, of lots of different criminals or lots of people with mental illness, and say, oh, you can just read it in their face.

Like, here are the features. And that belief has persisted, honestly, through the ages, even if it's now considered, I would say, by most of the academic community to be discredited. There are still AI researchers today who keep returning to this idea. There were a couple of teams of researchers in recent years who took criminal mug shots and took photos of normal people and kind of tried to make them look the same.

And then they trained computers to say, can you find the criminal face? And they would come out with these studies saying, yes, AI can predict who's a criminal. But what was likely happening is that the AI was very good at predicting which of those photos was originally a mug shot and which wasn't.
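
The methodological flaw Hill describes can be made concrete with a small sketch. What follows is purely illustrative, using synthetic data and invented feature names, not the code from any of the studies she mentions: a classifier that appears to "predict criminality" can simply be learning a photo-source confound, such as mug-shot lighting or framing, that correlates with the label.

```python
# Illustrative sketch only: synthetic data showing how a classifier can
# "predict criminality" by latching onto a dataset confound rather than
# anything about the face itself. All feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Pretend each photo is reduced to two features:
#   face_feature - random noise, carries no signal about the label
#   photo_style  - e.g., mug-shot lighting/framing, correlated with the label
face_feature = rng.normal(size=n)
labels = rng.integers(0, 2, size=n)                    # 1 = "mug shot", 0 = "normal photo"
photo_style = labels + rng.normal(scale=0.5, size=n)   # style leaks the label

X = np.column_stack([face_feature, photo_style])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))  # high (~0.84), though face_feature is pure noise
print("weights:", clf.coef_)                   # nearly all weight lands on photo_style
```

The point of the toy example is that high accuracy alone proves nothing about faces: the model scores well while ignoring the face feature entirely.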

Justin Hendrix:

It seemed to me that race is sort of a through line, sometimes because folks are actively looking for these differences and suggesting that they are present and exist, and in some cases because they don't recognize race as a concept that's important to the work they're doing.

Would you say that's a sort of fair way to assess it?

Kashmir Hill:

Yeah, I kind of trace facial recognition technology back to its early days, back to the early 1960s, before Silicon Valley was even called Silicon Valley. The CIA was funding scientists there to try to get these very early computers to be able to recognize somebody. And through this time period, these early decades they were working on it, every time they tried to get the machine to recognize somebody, it was a white man.

They just kept giving machines photos of white men and saying, learn these faces, learn to tell the difference between these faces. And so the early facial recognition programs and algorithms were good at telling the difference between white men's faces, and not so much anybody else's.

And so there's this great anecdote about one of the big breakthroughs, made by this guy Matthew Turk, who was at the MIT Media Lab. This is in the '90s, and a documentary crew sent a team to go talk to Turk about facial recognition technology and what it would mean. At the time, the desire was to put the technology into a TV so that the TV could watch who was watching it.

So you could know what age, what gender was watching a particular TV show. And he had this kind of working technology, and they had him test it by having somebody sit down in front of the television, and it would put a name to a face. They did it with a few men, and then there was one woman.

And then the documentary team sent a dog in. Matthew Turk hadn't been expecting this, and the dog comes in, the camera sees it, and it labels it with the name of the one woman who was part of the testing group, because to this technology, both the woman and the dog looked so different from everyone else that they must be the same.

And I just kept seeing this over and over again, where the technology just did not work well on non-white people, for a very long time, even while it was getting used in the real world.
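
The dog anecdote reflects a classic failure mode of closed-set recognition: a matcher that always returns the nearest enrolled identity, with no threshold for saying "unknown." Here is a minimal sketch under that assumption; the embedding function, the names, and the toy two-dimensional vectors are hypothetical stand-ins, not Turk's actual system.

```python
# Illustrative sketch only (not Turk's actual system): a closed-set matcher
# that always returns the nearest enrolled identity. With no "unknown"
# threshold, any sufficiently unusual input - a dog, say - gets the label
# of whichever enrolled face it happens to be least unlike.
import numpy as np

def embed(image):
    """Stand-in for a real face-embedding model; hypothetical."""
    return np.asarray(image, dtype=float)

# Enrolled gallery: mostly men, one woman (toy 2-D "embeddings")
gallery = {
    "man_1": embed([0.9, 0.1]),
    "man_2": embed([0.8, 0.2]),
    "woman": embed([0.1, 0.9]),
}

def identify(probe):
    """Return the closest enrolled name - always, even for a dog."""
    vec = embed(probe)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - vec))

print(identify([0.85, 0.15]))  # close to the enrolled men -> a plausible match
print(identify([0.0, 1.5]))    # a "dog," far from everyone -> "woman"
```

A system meant for the open world would reject a probe whose nearest distance exceeds some threshold; this one, like the early systems Hill describes, has no such escape hatch.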

Justin Hendrix:

So we've got the development of facial recognition technology, the advance of computing, and then along come these big social media platforms.

I suppose one of the other interesting things this book made me think about is just the long legacy of problems that have emerged from quizzes on Facebook.

Kashmir Hill:

Well, with Hoan Ton-That, who was kind of the technological mastermind of Clearview AI: he's a young guy, 19 years old, when he drops out of college in Australia and moves to the United States, to Silicon Valley. It's 2007. It's kind of when the dream is forming there. Everyone wants to make the next big app.

He loves computers, and he kind of gets his start on Facebook, which had just opened its doors to third-party developers for the first time. And so he's making these quizzes, like, would you rather do this or this? I don't know if people remember this time, but it was kind of when FarmVille was blowing up, and every time you logged in, you would get these notifications: your friend did this in FarmVille, your friend took this quiz, you should take it.

And so he was able to get millions of users really quickly, and it was exciting. I don't think he was really passionate about the quizzes themselves. It was just about getting a bunch of users and making money off of them, making money off of advertisements. And so it was kind of wild to see Hoan Ton-That's journey specifically, going from Facebook quizzes, to making kind of silly iPhone games, to making an app called Trump Hair that would look for a face in a photo and then put Trump's hair on it.

And then, looking for the next big app, he starts looking at facial recognition technology. Meanwhile, in the time he's been on this journey from Silicon Valley, eventually to New York, from 2007 to around 2017, facial recognition technology has gotten so much better, in part because of us putting a whole bunch of photos online, giving researchers our faces over and over again, on Facebook, on Google, from lots of different angles, and oftentimes tagging ourselves in them. So it made it easier to train computers, because they could say, here are thousands of photos of one person. And they had more diverse faces, because we have a diverse group of people on the internet.

And so the technology has just gotten much better over time.
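
The search problem Hill is describing, matching one face against billions of scraped photos, is typically framed as nearest-neighbor lookup over face embeddings. The sketch below is only a hedged illustration of that idea: the random "embeddings," the example.com URLs, and the brute-force dot-product search are all invented for the example, and a real system would use a trained embedding model and an approximate-nearest-neighbor index rather than scanning every vector.

```python
# Illustrative sketch only, not Clearview's code: why billions of tagged,
# multi-angle photos matter. Each photo becomes a vector; a search is just
# "which stored vectors are most similar to the probe?"
import numpy as np

rng = np.random.default_rng(1)
EMBED_DIM = 128

# A scraped index: one (random, stand-in) embedding per photo, tied to a URL.
db_vectors = rng.normal(size=(100_000, EMBED_DIM))
db_vectors /= np.linalg.norm(db_vectors, axis=1, keepdims=True)
db_urls = [f"https://example.com/photo/{i}" for i in range(len(db_vectors))]

def search(probe_vec, top_k=5):
    """Cosine similarity against every stored photo; return the best matches."""
    probe_vec = probe_vec / np.linalg.norm(probe_vec)
    scores = db_vectors @ probe_vec          # one dot product per stored photo
    best = np.argsort(scores)[::-1][:top_k]
    return [(db_urls[i], float(scores[i])) for i in best]

# A probe face that sits close to photo 42 in embedding space:
probe = db_vectors[42] + rng.normal(scale=0.05, size=EMBED_DIM)
for url, score in search(probe):
    print(f"{score:.3f}  {url}")
```

The output lists the source pages for the closest matches, which is what makes a face search function like a "Google for faces": the hard part is the embedding model, and that is exactly what years of tagged, user-uploaded photos helped train.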

Justin Hendrix:

So it's kind of the technology, and then a set of other forces at work, I suppose, to create the market demand. Because it sort of feels like when Ton-That goes into this business and connects with these business partners, they're knocking on doors that open easily on some level.

A lot of curiosity, a lot of appetite ultimately from police and security agencies about how to take advantage of these technologies.

Kashmir Hill:

Yeah. And I mean, police have been using this technology for some time, right? Police have been using facial recognition technology since really the early 2000s, which is troubling in that we now know that the technology was quite flawed in terms of how well it worked on different groups of people.

But yeah, it had been around, but they had been using it to go through databases of criminal mug shots, maybe state driver's license photos. And so what Clearview came along and offered them was, for the first time, a database that had everybody in it. And we're talking children, we're talking people who've never had any kind of encounter with law enforcement, people outside the United States who aren't in government databases.

And what police officers told me is also that the technology Clearview was selling, this kind of corporate version of facial recognition, worked better than what they'd been working with before. It worked on photos where people were looking away from the camera, where they're wearing a hat or glasses. What they had been using before was a little bit more of the legacy, flawed version.

And so they were really excited to get this. Clearview did not set out, though, planning to sell the technology to police. They really just wanted to sell it to whoever would pay for it. And so the first people they approached were hotels, real estate buildings. Lots of billionaires.

Billionaires were basically the first people who got to try out the app.

Justin Hendrix:

How did they arrive at the idea that police and security firms would be the best market to go forward with?

Kashmir Hill:

It was kind of coincidental. They were trying to sell the app to a real estate company that had a lot of buildings in New York, and they met with a security executive who was going to vet the app and make sure it worked well.

And he used to work for the NYPD, and he said, wow, this app is great, I know my colleagues would really like it. And then he introduced them to the NYPD, and that's how they got started with law enforcement for the first time.

Justin Hendrix:

I had a chance to go through a bunch of emails that came out in a FOIL request a journalist called Rachel Richards had made, and that the New York Legal Aid Society had pursued to get the request fulfilled. And it's kind of extraordinary to look at the approach. I mean, often it's Ton-That making the approach to individual police officers, having this sort of back and forth.

But it feels very much like this kind of, I don't know, Dropbox approach to sales: let's just try to get individual officers to try this, and individual parts of the police department to give it a go, and see how easy it is. And so you have all of these various kinds of officers giving it a try and uploading photos, et cetera.

The thing that stuck out to me the most was the fact that there didn't seem to be any rules. Anybody could just give it a try with any photo they might have lying around. And that was considered okay at the time.

Kashmir Hill:

Yeah. It was like the Dropbox model, or the Pringles model: once you pop, you can't stop, with facial recognition technology. This is apparently pretty common; a lot of vendors offer free trials to police officers. One of the officers told me that. But yeah, it was wild. The general population did not know about the existence of Clearview AI, and meanwhile it was spreading like wildfire among police officers. The NYPD is very connected with other agencies around the country and around the world. And so lots of officers within the NYPD started using it.

And it was so easy. All you had to do was go to Clearview's site and enter your email address, or just email Hoan Ton-That and say, hey, I want to try it out. And they're just giving it to everybody. And there's nobody vetting it. There's no one testing it to make sure it's accurate. Does it actually work on surveillance tapes?

They're just out there downloading this free app and using it in actual cases, without necessarily letting anybody know that this is how somebody became a suspect in a case, that there was a face recognition match. And they're telling other law enforcement agencies. I got to watch it through the email traffic you're talking about: it just was spreading from one department to another within the NYPD, and then it starts going to other agencies and going international.

It wasn't until years later that Clearview AI actually sent its algorithm to the National Institute of Standards and Technology, NIST, and got it tested to see how accurate it might be. That part of it was really surprising to me, that there really did not seem to be any rules or vetting beyond: is this helping in my investigation or is it not?

Justin Hendrix:

I suppose this does bring us to the policy aspect of this. You start the book out by pointing out that there are still no federal privacy protections in the United States. But there are so many different layers of policy where perhaps someone could stop and say, hey, we need to ask ourselves some substantial questions here.

I understand that, in the two or three years since these various emails came along, the NYPD has got at least some policy, though it's unclear exactly how things work inside the department, I suppose, about the use of tools like Clearview. But more broadly, we haven't seen really any forward progress on the use of facial recognition at the federal level.

We've seen, of course, some state biometric laws passed. What's your take right now on the lack of movement in this country? Why wasn't Clearview seen as a kind of catalyzing event?

Kashmir Hill:

I really think Clearview was a catalyzing event, and I think it seems very hard for the federal government to make new laws to govern technology.

But with Clearview AI, I wrote that first exposé about the company in January 2020, at the beginning of the month. And I would say in the weeks afterwards, it felt like there was so much momentum to do something about facial recognition technology, to make some laws at the federal level.

And then the pandemic came along, and COVID just took over our lives. And I saw this happen again and again as I was tracking the history of this technology. Something would happen, like the so-called Snooper Bowl in 2001, where facial recognition was rolled out on the fans at the Super Bowl in Tampa, unbeknownst to them.

And it caused this uproar. One of the facial recognition vendors told me that he thought he was going to go out of business, that everybody was calling for his head. And then September 11th happened, and all of a sudden everybody was calling him saying, can you put facial recognition technology in my school, in the mall, in my community?

Can you put it in the airports? It's just this push and pull between privacy and security that seems to happen over and over again.

Justin Hendrix:

You spend a bit of time contemplating the way things are perhaps a bit different in Europe. You tell the story of an individual who does a subject access request via the Hamburg Data Protection Authority. How has Clearview been received in Europe? How has it affected the policy conversation there?

Kashmir Hill:

Yeah, the story of Clearview AI has rolled out so differently in Europe versus the U.S. In the U.S., there was a lot of concern. There were lawsuits in a few of the states where there's a bit more protection of our private information and of our faces, namely Illinois, Vermont, and California.

In Europe, all of the privacy regulators, and also in Canada and Australia, said, we're launching investigations into this company. And they all found that what Clearview AI did was illegal, that the company needed to get their citizens' consent to put them in this big database that now has 30 billion faces. A few of the regulators issued fines, which Clearview is fighting at this point; it's not paid any of the fines. But it effectively pushed Clearview AI entirely out of Europe.

I mean, they're not doing business there anymore, whereas before they had been doing trials with a number of international agencies. So the privacy regulators haven't been able to get their citizens out of the Clearview database yet, but they have gotten Clearview AI out of their countries.

Justin Hendrix:

Do you have a sense of the company's international reach at this point?

Kashmir Hill:

They say that they're focused basically just on the U.S. right now. Every once in a while a story pops up about them courting police agencies in other countries. Clearview AI does have a version of its technology that it can sell that doesn't have the big database behind it, the database of 30 billion photos.

That's just the algorithm itself, which you can deploy on your own database that you might develop. But right now they're really focused, as I understand it, mainly on the U.S. market.

Justin Hendrix:

I want to ask you about the role of the First Amendment in thinking about Clearview and facial recognition.

Clearview, on the one hand, has sought to use the First Amendment as a kind of protection for what it's doing. Others argue that the First Amendment should not in fact be a cover for facial recognition, or surveillance generally. How does the First Amendment play into the story?

Kashmir Hill:

So Clearview AI often describes what it does as just being a Google for faces. That's literally the copy they used in the advertisements they put out for police officers. And so they're saying, we just went on the internet, we took information that's public: photos of people.

And now we've just made it easier to find. You can find information about a person on Google by searching their name; you can find information about a person via Clearview by searching their face, and it'll show you all the websites where it appears. And then they started getting sued in the U.S.

I'm going to focus on Illinois, just because it has a law that is directly applicable, a very prescient law (I have a chapter in the book about it), passed in 2008, called the Biometric Information Privacy Act. It says that a company must get an individual's consent to use their biometrics, like their face print or their fingerprints or even their voice print, or pay a $5,000 fine.

And this has been just a crippling law for a lot of technology companies. Facebook paid $650 million to settle a lawsuit brought in Illinois over its rollout of photo tagging for all of its users there. So when Clearview inevitably got sued in Illinois, one of its defenses was the First Amendment.

It said, we have a First Amendment right to collect public information and to disseminate it as we please. All we're doing is looking at what's already public and making it easier to find. And they hired one of the country's preeminent First Amendment lawyers, Floyd Abrams, who actually represented The New York Times in the Pentagon Papers case, one of the most famous lawyers in the country, to defend them there.

Justin Hendrix:

And what do you make of that defense? How's that going? At the moment, there are points of view on both sides, I assume. But there are those who argue very much against that perspective on the First Amendment.

Kashmir Hill:

Well, they've deployed this argument in all of their court cases in the U.S., in the three different states, and it hasn't worked yet to get a judge to dismiss a case. In all the cases, the judges have said, sorry, but we're still going to trial. And one of the big cases was when the ACLU sued Clearview AI in Illinois. They ultimately decided to settle that case, with Clearview agreeing not to sell its database of faces to private companies or individuals, to only deploy it to law enforcement.

There's said to be a settlement coming in a class action lawsuit in Illinois that's looking for money, $5,000 for each Illinoisan whose face is in the database. So the First Amendment defense hasn't really worked, at least not to make the cases go away. But I mean, this is one of those central tensions in the United States: people's right to privacy versus people's right to disseminate information and to have free speech.

Justin Hendrix:

You talk about the idea of the "rickety surveillance state." Is that what we have in the United States at this point? There's not, perhaps, something along the lines of what you might expect in China or another more authoritarian nation. But the pieces of it are kind of coming together.

Kashmir Hill:

Yeah, the pieces are there. I was thinking about this with the Pennsylvania fugitive manhunt recently; he was on the run for weeks before he was caught. We're not quite in a place where all the cameras are connected, all the cameras are running facial recognition technology, where you can press a button and know where anyone is in the United States. But the pieces are there; that could happen.

We could live in that world where it's just very easy: protesters come and they're outside of your building, and you just say, okay, I want to know the names of every single person in that protest. We're not quite there yet. I think we can decide what we want it to look like.

But the technology is getting so powerful. It's not trivial to build a surveillance state like that, but it's certainly getting easier. So the decisions we make now are really important.

One of the things I often think about, when people say it's hopeless, that there will be no privacy, that the technology is just taking us there, is the wiretapping act, and the fact that Congress did get its act together and decided to pass laws against secretly recording people's conversations. That's why all of these cameras that kind of blanket American spaces only record our images and don't record our conversations.

Justin Hendrix:

In chapter 24 of this book, you chronicle groups and people who are fighting back against facial recognition, the kind of activist push against it. I sort of think of myself as certainly more aligned with those individuals on the deployment of these technologies in surveillance contexts.

But I look around my neighborhood in Brooklyn, and a lot of folks are installing Ring cameras everywhere they can, right? On their front door, their back door. And I suspect that if the NYPD came around and said, hey, can we connect your camera to our central database, there might be a lot of folks here who would say yes, that would be something they would like, which is what people have said in multiple jurisdictions across this country. How do you think about that tension between the public appetite for more cameras, more surveillance, more facial recognition, and the few voices that seem to really recognize it as a threat to civil liberty?

Which one do you think will win out in the long run?

Kashmir Hill:

Gosh, that's a complicated question. I could spend another hour talking about that. I do think there is a human tendency that we want privacy for ourselves, and we really want to invade the privacy of other people. I mean, it's honestly part of why I've spent the last 10 years writing about privacy, kind of sorting that out.

I do think the idea of friction is important. We do live in a world now where more information is collected than ever before, and I think people are going to just keep putting cameras on their doorbells, because it's convenient to be able to know who's coming up to the door. It's protecting the packages that are sent to them by Amazon, the company that makes their Ring camera.

It all goes hand in hand. But having the friction of the police officer needing to come to the door and ask for that footage is important, as opposed to just having an infrastructure where all the Ring cameras are connected for easy access by the police, and they can just pull up anything they want at any time.

So I think making sure you have a responsible deployment, that you have some friction, will help maintain some sense of privacy. And I also think people don't realize the harm until, honestly, it's too late. We all put our faces on the internet over the last 20 years, not thinking, oh, one day something's going to come along and make all these images findable; we were helping to train the AI to find us. And it's not until a Clearview AI comes along and says, oh, look what we can do, look at this new superpower, that you realize, oh, I wish I'd been more careful about what I put on the public internet.

Justin Hendrix:

I suppose that no matter what happens to Clearview AI, your book makes clear that all the pieces are there. There'll be some other iteration of it, some other formulation of it.

Maybe even in a public context, some agency will build its own system, what have you. Is there a jurisdiction in the world that represents to you the best approach, in terms of maybe taking advantage occasionally of facial recognition toward good ends, solving a crime or stopping a terrorist or what have you, but that really seems to have all the right sorts of protections in place to prevent its misuse?

Kashmir Hill:

I don't think I could point to a particular jurisdiction that's doing it exactly right. I mean, I think what is difficult about this is that there are so many different types of facial recognition technology. So we have to ask: do we want a database like Clearview AI's, with everybody in it, everybody on the internet who has a public photo? Or do we want facial recognition technology that is watch-list based: here are the bad guys that we know of?

And that's kind of how facial recognition technology originally gained acceptance within the policy environment: you had people testifying at these hearings saying, we're only going to do facial recognition technology with criminal mug shots. We only want to keep track of the bad guys.

We're not interested in just normal people going about their lives. And then there's the creep. They start putting in driver's license photos, and then Clearview AI comes along and puts everybody in. So I think the creep makes it really difficult. I think there are clearly good use cases for facial recognition technology in criminal investigations, but you have to do it right.

I have written about a number of people now who have been wrongfully arrested for the crime of looking like someone else, because the police get that hit from the facial recognition system and then don't do enough investigating to make sure it's the actual person. And I think that could be one of the great harms of the technology: people who just have a doppelganger get flagged and have a lot of difficulty brought into their lives.

And then when it does work well, that raises questions too. I mean, I can't help but think about Madison Square Garden, the iconic events venue, introducing facial recognition technology a few years ago to address security threats, people who are violent in the stadium, and then deciding, actually, let's use it against lawyers who work at law firms that have sued us, because it's really annoying when we get sued. So let's put thousands of lawyers on the ban list.

I just think that the use of the technology can spiral out of control very quickly. And you can't really trust the technology unless you trust the person operating it.

Justin Hendrix:

That has to be the pettiest use of facial recognition technology that we've seen in the world to date. This is an extraordinary book. I feel like there aren't that many books that deliver when they take this approach of both telling an individual story, a narrative of a founder and a company, and also all the forces and ideas that are swirling around that individual.

Everything from Trumpist populism on through to, of course, the technological developments that come together. Do you think that if it weren't Clearview, it would have been someone else? That there was a moment in time when simply all the pieces were there, and if it hadn't been a Hoan Ton-That, it would have been someone else?

Kashmir Hill:

I mean, it's hard to say, right? One thing that surprised me: when I first heard about Clearview AI, I did think they had made this kind of technological breakthrough. And then, as I looked more closely, I realized that both Facebook and Google had gotten there first, that they had developed the ability internally to identify a stranger.

And these companies are not necessarily thought of as privacy paragons, but they decided this is too dangerous to release; we don't want to be the ones to put this superpower out there. And there is something particular about Hoan Ton-That and Clearview AI, that they were willing to break that taboo, to have the kind of ethical breakthrough of doing something others weren't willing to do.

So I imagine that probably someone else would have done it, but it is part of the reason why I wanted to tell the story of this company: to really understand why they were the ones to do it.

Justin Hendrix:

This book is coming out at the beginning, I guess, of another technological cycle, where we're going to focus on generative AI.

You don't touch too much on those technologies, the relationship between them. I suppose there's definitely a relationship with the applications around pornography and nonconsensual searches for pornographic content and things of that nature. Are you thinking about that now?

Are you beginning to think about, how this sort of feeds into this generative AI context?

Kashmir Hill:

Yeah. I mean, it's some of the same questions, right? The use of the public commons, the sucking up, hoovering up, of data that's online that people kind of felt they had a property right over, or a privacy right over.

I thought about this so much interviewing artists who are so upset about these generative image technologies that have hoovered up their artwork and can now make art in their style. They feel the same kind of outrage and violation as people I've talked to who have had their face taken without their consent and put in these databases.

And I do think it's some of the same questions that policymakers are asking right now about generative AI: should these companies be able to gather whatever they want from the internet, whatever is public? And then also the same questions of accuracy and bias in the tools they're creating, and who's testing for that.

It's very similar. And I certainly think about the overlap a lot. The generative AI tools were really just starting to come out as I was finishing this book.

Justin Hendrix:

Well, perhaps that will be the subject of the next book, if in fact there is one. I would recommend this book to everyone, especially anybody who's an aspiring journalist. I love the anecdotes at the beginning about the actual shoe leather, and searching for the door that didn't exist.

So I'll commend folks to the book to read about that, but Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy As We Know It, Kashmir Hill, thank you very much.

Kashmir Hill:

Thank you, Justin.
