Daniel Solove on Privacy, Technology, and the Rule of Law
Justin Hendrix / Aug 10, 2025
Audio of this conversation is available via your favorite podcast service.
Daniel J. Solove is the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School. The project of his latest book, On Privacy and Technology, is to synthesize twenty-five years of thinking about privacy into a “succinct and accessible” volume and to help the reader understand “the relationship between law, technology, and privacy” in a rapidly changing world. I spoke to him about the book and how recent events in the United States relate to his areas of concern.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
Good morning. I'm Justin Hendrix, editor of Tech Policy Press, a non-profit media venture intended to provoke new ideas, debate, and discussion at the intersection of technology and democracy. In the introduction to On Privacy and Technology, recently published by Oxford University Press, George Washington University Law School Professor Daniel Solove says, "The book's project is to synthesize 25 years of thinking about privacy into a succinct and accessible volume, and to help the reader understand the relationship between law, technology, and privacy in a rapidly changing world."
I recently had the chance to speak to Solove about the book and about how he thinks about recent events, particularly in the United States, that relate to the issues he's concerned with. Let's jump right in.
Daniel Solove:
I'm Daniel Solove. I'm a professor of law at the George Washington University Law School. I recently wrote a book called On Privacy and Technology.
Justin Hendrix:
And how many books in are you now?
Daniel Solove:
I've got quite a few. I've got about six regular books and then a bunch of textbooks.
Justin Hendrix:
This is a slim volume. And it seems like it's very much written to be an accessible volume. You talk a little bit about Ludwig Wittgenstein at the beginning, about his idea of a family of ideas. Why Wittgenstein? How does that help us understand conceptions of privacy?
Daniel Solove:
So I think one of the big struggles when it comes to privacy for a very long time is defining it. People throw around the word, but no one really knows what it means. And this has led to a lot of problems because policymakers trying to protect privacy don't know exactly what they're protecting. And what often has happened is that privacy is defined in a very traditional way. And that is to try to look for the common denominator for all things privacy. And the result has been that a lot of the definitions of privacy are too broad. And they're so broad that they're meaningless because they encompass pretty much anything, or they're too narrow and they exclude a lot of things. And what I find in a lot of the cases and the laws is that the definition of privacy is often really constrained and leaves out a lot of things.
And that's a problem. That means that a lot of things go unprotected. So what I tried to do is draw on an idea from a philosopher, Ludwig Wittgenstein. And he argued that not all definitions have to work in this traditional way, by looking for a common denominator among all those things. Instead, things can draw from a pool of common characteristics. They're related in a family-resemblance way. With family members, you might have certain features of different family members in common. The same nose as your brother, the same hair as your sister, the same eyes as your father. But you might not all have the same thing in common, and you're still related. And I think privacy is similar in that there isn't a common denominator. These are related things, and there are many things. So privacy is many different things.
And if we can see it this way, we liberate ourselves from this quest to find the common denominator. And then we can better protect more things that are privacy. And we should focus on, "Is there a problem here? Is there a harm here? Is something wrong?" And then we should jump in to protect it. And I find that oftentimes in the existing policymaking that's not happening. There's a clear harm and then a court or a legislator will say, "Well, it doesn't fit into this definition of privacy, so we won't do anything about it." And I'm hoping that with my view, that will change.
Justin Hendrix:
So you talk about different ways that privacy relates to our lives and to why it's important. You mentioned things like respect for individuals, reputation management, maintaining social boundaries, trust, fair decisions about one's life, freedom of thought and speech, freedom of social and political activities, et cetera. We'll kind of hopefully go through a couple of those as we have this discussion. But I wanted to maybe just hone in on one other phrase that had a sort of Wittgensteinian quality, at least to me. I think of Wittgenstein, at least early Wittgenstein, short, declarative sentences. You repeat again and again, "Privacy is about power." Can you say what you mean by that?
Daniel Solove:
Privacy really is a very important thing and a very important value that we must protect, because if you look at what almost every authoritarian government and almost every type of authoritarian entity has done, they invade privacy and try to create a world with low or minimal privacy. And they do that because privacy is power. Because knowing information about people, having personal data, is having power over them. And it's power over them because this information can be used to blackmail them. It could be used to manipulate their behavior or control them in certain ways. It can be used to do a whole host of things to people, or used to prosecute them, arrest them, round them up. All sorts of bad things can be done when this personal data is collected about people and more information is known about people. We've heard the saying, "Knowledge is power." Privacy is essentially knowledge about an individual's personal life. And that is power.
Justin Hendrix:
This isn't hypothetical. Right now, in this country, we're very concerned about power. Many are very concerned about power. Concerned about corporate power, and the tech economy, dominated as it is by the exploitation and acquisition of data into AI systems. We're concerned about government power, particularly over the most vulnerable. And we're, or at least I am, particularly concerned about the combination of the two. How are you thinking about this moment in time with that idea, "Privacy is power," in mind?
Daniel Solove:
I think we really are in a rather dark time right now. We have, I think, the most authoritarian government we've ever had in this country. We already see attempts to gather massive amounts of personal data through DOGE, through what Elon Musk was doing and the government is continuing to do. We're seeing this data being used already to round people up and throw them into prisons and camps. And you see also this information being used for a whole host of undefined and unspecified purposes; we don't even fully know what they all are. It's a very troubling time. And I think it's also a time that's illustrating that our privacy laws are really not up to the task. They really aren't protecting us well enough. And we're really seeing that right now, that the government, through DOGE, has just done massive amounts of exfiltration of data, improper sharing of data. And we've really yet to see anything put a hard stop to it.
The data is already gone. And I think that's an enormous problem. And I'm not even getting into what the private sector is doing, which I think is also a very big problem as well. We've long seen massive amounts of personal data collected about people by a whole host of different companies, used for a lot of different purposes, and sometimes shared with the government. So the problem with the private sector relates to the government. And we're also seeing now with AI massive amounts of personal data being scooped off the internet. It's a practice called scraping. Companies just take it. And then they're using it for all sorts of training for various AI models. One example is facial recognition, where a company called Clearview AI scraped billions of photos off the internet to make a facial recognition tool that then is used to identify people based on their face.
And all this data is being collected without consent, often without people knowing about it. In a kind of very shadowy legal context, I think a lot of the collection isn't legal. The problem is that the laws are very weakly enforced. They're not particularly strong. And even when they're enforced, the enforcement is often weak. And companies know the lesson. They can just go and break the law. And the worst case scenario is they get a slap on the wrist if they are caught.
Justin Hendrix:
Throughout this book, you try to distinguish between who's the villain here? Who should we really be focused on? And you have a bunch of great lines I really like. You write that technology poses grave threats to privacy, but technology per se isn't the enemy. Later, you write, "Technology allows us to sweeten the bitter plan the universe has for us, but the universe can be a sadistic comedian. It rarely gives without taking." So how do you think about the law as the primary, quote, unquote, "Enemy," here?
Daniel Solove:
I'm reacting to a lot of what I see in commentary. And so typically, when you hear about technology and the problems that it causes, often one of two things are blamed. Either the companies are blamed as evil. "Oh, look at all these evil companies doing horrible things. If only the companies weren't evil. The companies are the villains." Or technology is seen as the villain. It's evil. It's bad. And I actually think that it's not productive to see either as the villain, to see either as evil. I don't think technology is evil, but technology is also not good. It's a force. It's a power. It's something that exists. And it's not going to just go away. We're not going to just all throw our iPhones in the trash. We're not going to go back to a pre-computer world. We are where we are. I think that technology is going to keep marching on.
We can claim it's evil and ruinous, but ultimately that is not going to really move the needle at all in terms of giving us more protection or addressing our problems. And I think companies too, you can find all sorts of instances of companies doing some really dicey things and downright evil things. And then you go throughout history, and companies do these awful things, but I don't think they're the villain. And I think that's the wrong lesson to learn. The villain is the law. The problem is that the law is what regulates companies. It's what regulates technology. And the problem is that the law isn't doing its job. A lot of times the law tends to ... And we're seeing now the same thing happen again, the same story, the same arguments. "Oh, no. We can't regulate AI. We have to be in a deregulatory mode. If we regulate technology or AI, we're going to stifle innovation and then other countries are going to leap ahead of us and we're done. Oh, no. We can't possibly regulate them."
That's the problem: companies are not good and they're not evil. And I think far too often we somehow expect companies just to act good. We think, "Oh, wow, this is a respectable company. They have beautiful logos. And they have all this money. They have to be good." No, they're not good. Companies are like sharks. Sharks aren't good or bad. You don't go to a shark and say, "Please, just don't eat seals." They eat seals. That's what they do. Companies are machines built to make a profit. And they are great at responding to incentives. That's what they do. That's economics 101.
There's a line by Charlie Munger: "Show me the incentive, and I'll show you the outcome." And so ultimately that's what companies respond to. So the law needs to set the right incentives. If the law incentivizes doing something safely and creating technology that doesn't invade privacy, guess what? If the law sets the right incentives, companies will do it. If the law says, "No, we just hope that somehow companies will do the right thing, even if it's less profit," then the law's being naive and foolish, which it often is. Companies won't do that.
Justin Hendrix:
You talk about the stories of milk and cars being instructive here.
Daniel Solove:
Yeah. When you go to the supermarket and you buy milk, you can pretty much trust that it's probably not going to kill you. You can just focus on price and the brand, but you don't have to research the farms that the milk came from. You don't have to become an expert on pasteurization. I say all this now because I think we're heading into a world where we might go back to the world where milk kills you. We have somebody at HHS that believes in unpasteurized milk. It's scary. But generally we can trust that our food is going to be relatively safe. And we don't have to become experts on food safety. And it wasn't always this way. Milk used to be mixed with formaldehyde because milk would go rotten and the formaldehyde sweetened it. And so they mixed the formaldehyde in. And babies would die, lots of babies.
And why are the babies dying? Maybe because you're mixing poison in the milk. All of that we now take for granted, that things are safe because someone has our back. We don't have to become experts on food. The same thing with cars. Cars used to be very unsafe. And the car manufacturers said, "You know what? People don't really want safe cars. It's all the drivers' fault. If only the drivers were better. And people don't ... The market won't support that. No one cares. And we can't do it. It's just too expensive. Not possible." And guess what? Now we have a lot safer cars. We have car safety laws. You can buy safer cars or less safe cars, but generally there's at least a minimum level of safety that's there. And we really don't have to ... worry about that. And I think the same thing should be true with all technology.
We've seen the story time and again where companies say, "Oh my gosh, you can't regulate us. It'll kill us. We'll go out of business. No way we can do it." It's a temper tantrum. It's like when I tell my kid to clean up his room. "I can't do it." "It's possible," you tell them. And when the law says, "Look, you're going to make safer cars," guess what? They make safer cars. Tell them, "Hey, you're going to make safer drugs," and they make safer drugs. Tell them they're going to make safer food. You create the right incentives, and guess what? Miracle. Companies can actually do it. And I think the same thing goes with technology. And I don't think that's a ... It's not a threat to innovation. It's just steering innovation. It's not saying, "Oh, yeah. You just can't create AI."
Just say, "No. When you make AI, why don't you make an AI that's safer? Why don't you make an AI that is less privacy invasive? Why don't you make an AI that is better?" It's steering innovation. A seatbelt is innovation. A airbag is innovation. It's just that these are innovations that the law is steering to say, "Look, we think safety is a important value. And we want to incentivize more safety in the cars." And guess what? If we incentivize it, we get it. And it's a good thing. And that's also innovation. So I really have a lot of issues with the way the current debate is unfolding, the way the law is going, because I think it's going in the wrong direction, learning the wrong lessons. And if we look throughout history, we see that regulation can be a friend to innovation. I don't think people look and say, "Wow, fuel efficient cars today, cars with airbags and seatbelts that are safer and all the car safety today is somehow bad for the car market."
Justin Hendrix:
There may be certain politicians that regard fuel efficiency standards as negative, but we'll perhaps move on from there. I want to pick up on one of the things that feels related, to me at least, here where you talk about the idea that the law must become bolder and not shrink from the challenge of regulating design. This has become a bit of a contentious area when it comes to thinking about the design of internet platforms, especially platforms that host speech. I think there are also some of these paradoxes and incongruities that come up in some of the discussion around things like Section 230, the First Amendment, et cetera. And you get into that stuff a little bit in the book, but can you talk a little bit about why you feel the law needs to get over its issues with regulating design in the US and how that relates to privacy?
Daniel Solove:
Design is power. And ultimately design is the ballgame. You really can't protect privacy or regulate technology without regulating design. And regulating design is often twisted by its opponents to say, "Wow, you're going to tell the tech companies how to build their products?" And the answer is no. You're just providing goals and saying, "We want to eliminate certain types of designs that we think are problematic." We're not going to give them the blueprints for exactly what they're doing; we're just going to say certain things. You have building codes, and they do specify certain things, but they don't tell you exactly how to design your building. They just say design a building that's safe, that's not going to blow away in a hurricane or collapse in an earthquake. But when it comes to design, the law is in a very weird place, because politicians have been very nervous.
Anytime you said, "Regulate design," they would freak out. "Oh my gosh, we can't do it." But then if you say, "How about regulating a dark pattern?" then they're like, "Oh, yeah. Of course we can regulate dark patterns. That's different. That's bad." A dark pattern is basically a term to describe a deceptive design. And these types of deceptive designs are designs. And the law is starting to regulate dark patterns. So it's just a change of terminology, and suddenly it's totally fine. "We don't like it. We should stop deceptive designs." And I think law has a big role to play in this. And I think it can and should regulate design, but obviously the devil's in the details. I don't think the law should micromanage what companies do, but I think there are a lot of things with design that we know are deceptive, that we know are harmful and can be restricted or steered in the right direction. And that still leaves a gigantic sandbox with 80% of the space to do what they want.
It's just that we're going to put some limits on that. So you mentioned free speech and the First Amendment and platforms. And what do we do with that? I think it's an incredibly complicated set of issues, how we regulate what goes on on platforms. And it does involve free speech, but it also involves more than what we think of as just pure speech, because what we see on platforms is not just pure speech. It is speech that is architected. What we see on these platforms is influenced by algorithms behind the scenes that are designed to show us certain things and make other things harder to see. They're designed to skew conversations in certain ways and to shape them. And so we think, "Oh, social media is our speech." But is it purely our speech? It's really the speech of the companies, who are actually taking what we're saying and then using their algorithms to repackage it and push it out in ways that change the message and direct the speech and shape the speech.
In fact, the companies will admit to this. They will say that this is what they do. And so when they want First Amendment protection, they say, "We are speakers." So they run to the Supreme Court and say, "Hey, any type of regulation here is a violation of our right to free speech. We are speaking with these algorithms. The way we present stuff on social media and how we do it is our speech." They write this. This is their argument. Then, though, when it comes to instances where the algorithms do things that cause harm to people, they turn around and say, "Oh, no, it's not our speech. We should be immune because it's the speech of other people and it's not us. Someone else said it."
And so there's a really interesting case in the Third Circuit fairly recently where a TikTok algorithm recommended choking videos to young children. And a young girl saw in her feed, "Recommended for you," and it was this video of choking. And she imitated the choking in the video and died. And the company said, "Oh, that's someone else's video. It's not us. We're not speaking. We're not responsible. We didn't say it." And I think the court correctly rejected that argument. The court said, "Look, your same company has made the argument that you're speaking, that you should be protected by the First Amendment for what you say through the algorithms that are delivering and choosing what's presented." So the companies want it both ways. They want to be speakers when it helps them. And they don't want to be speakers when it doesn't help them. So that's a problem.
And I think we need to have a coherent approach where I think companies need to be held accountable. That when things on the platforms that they can control are harming other people, I think they should absorb the harm and be responsive to the harm. That's the right incentive. And if you tell them, "Hey, if there are harmful things on the platform, you can be sued, you can be held accountable for that harm," guess what? They will reduce the harm. They will improve the algorithms. They will do the things necessary to stop that risk from happening. But it doesn't happen if you don't create the incentives.
Justin Hendrix:
In the book, you talk about a variety of different problems of our current moment, the problem of automated decisions, the problem of the panopticon. You talk about the problem of identification, which I just wanted to hit on quickly because we're starting to see some of the same executives that are building AI firms investing in novel ways to do identification. I'm thinking in particular of World or Worldcoin, the Sam Altman-backed firm that's going around scooping up essentially iris scans in order to ascribe an identity marker to individuals. Can you talk a little bit about this problem of identification and where you think we are at the moment on it? Because it seems like a lot of tech magnates are beginning to scratch the surface of this issue. Basically saying, "This is fundamental. This is something we're going to have to sort out in the age of AI." Never mind the fact that in many cases they're also creating the technologies that are giving rise to concern that we won't be able to determine people's identities in a lot of contexts. But how do you think about this problem of identification?
Daniel Solove:
Well, a big component of privacy is the ability to be anonymous, the ability to be obscure. This is a term, obscurity, that Professor Woodrow Hartzog uses a lot in his work. And he talks really compellingly about the value of obscurity. That in most of our lives we go about living in obscurity. It's not complete privacy. We go to the store, we go about our daily business, but we don't expect to be tracked everywhere we go. We don't expect everything that we do to be recorded and linked back to us. We depend on this obscurity. And that's increasingly being lost with these new technologies. Every single thing that we do, everywhere we go, we are identified and tracked. And then that becomes a tool of power over us.
And so in one anecdote, for example, Madison Square Garden used a facial recognition tool to identify people as they were coming into that facility. And there were lawyers that were representing people against Madison Square Garden. And they just went in their own private capacities just to go for entertainment and they got kicked out. They got identified and kicked out. And so is this the world we're going to be in where every time you walk into a store, you can be kicked out?
So if I write something critical of a department store, I walk into the store, they see my face, and they're like, "Oh, there he is. There's Dan Solove. We don't like Dan because we've just linked him up to something he wrote that we didn't like. Out you go." And do we want to be in a world like that? Do we want to be in a world where wherever we might go, the government can pick us out? "Oh, I see Dan's in the crowd of this protest here. And we don't like that." And now they're going to use that to retaliate against me for something else. These are the problems that get created. Identification is a tool of power. We look at identification systems throughout history, and their uses are very frightening. This is what the Nazis used to round up people and exterminate them in the Holocaust. This is what various types of genocidal governments have done. Round up people and kill them. And it's done through identification.
So it's something we need to control. It's something we need to better regulate. And I'm not saying that you have no identification and everyone runs around anonymously. But right now, the law really just is not up to the task in addressing any of these problems or any of these issues. Meanwhile, we have the companies racing, sprinting to come up with identification and put it into a world that's not ready for it. That's the problem. That's the lesson that we learned with the Nazis. You don't want these tools to get into their hands. The reaction of tech companies seems to me a very tone-deaf reaction. Let's rush into it, knowing that it's probably going to go into some really scary hands and be used in ways that will probably result in a lot of harm and possibly a lot of death. I think that the law needs to catch up. And it's just so far off. It's scary.
Justin Hendrix:
I don't know when you put this book in with the publisher, if it was before or after the election. I know it was published just shortly after Donald Trump was inaugurated, if I have my dates correct. There are substantial questions in this country, I think, about the rule of law and about the extent to which perhaps we can utilize the normal channels for redress we might otherwise have available to us. I guess there are real questions about Congress and whether it will have any interest really in pursuing any of the types of reforms that you're ... I don't know if you're advocating for a specific reform here, but you're certainly advocating that something is done, I think, legislatively. I don't know. How do you think about this whole way of thinking? We're targeting the law, but what if the law is no longer essentially a reliable surface on which to project our concerns?
Daniel Solove:
That's a great question. And the book was mostly written prior to the election. Some of it was wrapped up afterwards. A lot of the book is a response to problems in the law that, for the last 25 years, I've been arguing we need to address. It's not just me. A number of scholars on privacy and technology have been saying, "There's some real problems here. We need to address them." And basically, no one did anything. This bridge is collapsing. We should do something about it. And everyone's like, "Let's ignore it and put it off. Oh, there's no problem." Here we are, and the bridge is now collapsing. You raise a point: a lot of my recommendations involve what the law should do, but what do we do in a world where the rule of law is breaking down, in a lot of cases has broken down, and we really don't have much of a rule of law? And that's tricky.
Because I also have to come to grips with the fact that I'm a law professor. And ultimately, I like teaching law. And teaching law is teaching a set of norms and practices that have developed for centuries. And if the rule of law breaks down, then all I can talk about is power. And quite honestly, I'm not interested in that, because power is basically, okay, I can write the next Machiavelli's The Prince. I can write books about, "Okay. Here's the bribe you want to do. Here's the right ideology. Here are the people that you want to be friends with. Here's how you can use tricks and might to get what you want." But then I'd be some kind of dark political scientist. And that's not of interest to me. I'm interested in how we make legal arguments. How do we make arguments about rules? How do we interpret the law? And how do we apply the law?
How do we create good policy with the law? That's what I know. That's what I do. And so I do write for good-faith courts that actually care about those rules. I do write for legislators that want to do the right thing. And I really don't have much to say to people who don't. If I'm writing a book about how to play chess, I can't write it for a monkey, I can't write it for a dog, I can't write it for you if you're not going to play by the rules of chess. A chess book is meaningless. And I get it. But ultimately, I write chess books. You have to play. There are rules you have to follow. And if you don't follow those, you're not being a judge, you're not being a legislator. You're really a hack. And I can't say much to you other than to give you some money and hope that whatever my position is ... And it's also that I can't write a book of logic.
If someone will write an opinion and say A one day and then not-A the next day and doesn't care about logical inconsistency, I can't have a logical conversation with an illogical person. I just can't really go into a debate with a dog. So that's the problem. So ultimately, I come to the conclusion: look, I'm going to keep being a law professor. That's what I do. I'm going to keep writing about what we do. If you want to play by the rule of law, if we care about the rule of law, if we care about creating laws that actually are trying to solve problems and help people and that are fair and properly enacted, then a good-faith legislator, a good-faith judge who's really trying to get it right, they don't have to agree with me. That's fine.
But at least I want someone who actually is playing the same game. Then I hope that my book will be persuasive. If they're not into that, then it's just power. And I don't really have much to say other than someone else can talk about how much the bribe should be or how we reconcile illogical nonsense. I just can't have a debate with someone who will not ... You catch them in a logical inconsistency, and they should resolve the inconsistency. And if they won't, then I don't know what there is to say anymore.
Justin Hendrix:
Throughout the book, though, I think you do still address this question in some ways, maybe not directly. You write things like, "Regulating technology involves imagining the future and understanding power." And you write, "In reality, power rarely yields to anything except power." So perhaps that's one of these short, declarative sentences that we can think about going forward. But maybe I'll appeal to your professorial instincts. And knowing that Tech Policy Press podcast listeners are good faith actors, I think, for most of them, I haven't met every single person who may listen to this podcast, but I think that's most of them. What would you tell them to go and work on at this intersection of privacy and technology? Perhaps if you were starting your career now? You talk a lot in the book about how you got your start, the early days of the internet, the kind of opportunity that folks saw then. Feels like we're in another early moment around artificial intelligence. What are the key questions that you would dispatch your younger self to go and work on?
Daniel Solove:
I would definitely say I do appreciate the importance of framing the debate. I've always appreciated it, but I appreciate it even more now. And dispelling the myths that stand in the way, like the idea that regulation stifles innovation. I think it's very important that people demand of their policymakers real solutions to these problems, better privacy laws. Congress really, for this entire century, has been largely inept and not very capable of doing much. It's gotten so partisan. Fewer and fewer people really are interested in compromise. Fewer are really legislating in good faith anymore. It's not a really well-functioning body anymore. And we've seen a lot of attempts to come up with a federal privacy law, which I don't think is happening. I just don't think Congress is up to the task of pulling this off. And I don't think anything they could produce would be that good. The states are jumping in to regulate privacy and they're passing laws. And I think that's great.
Unfortunately, the laws they're passing are not very good. They are weak laws based on approaches to regulating privacy that hardly any commentator thinks are working very well. But they jumped in, and they all do cut-and-paste jobs of what other states are doing. And as a result, I think the laws are weak. But I think what's important is people need to stand up and tell them, "Look, I'm sending this food back. You think you've appeased me and protected my privacy with this hunk of junk. I'm sending it back to the kitchen. Recook it." I think if people really start demanding more of the legislatures in their states, then hopefully some states will respond. I think we are seeing some states. There are a few good laws and good parts of laws that are coming out. And I'm hoping that will keep happening. I think a lot of the law ... The place where the laws go wrong is that they put a lot of onus on individuals.
They basically say, "Okay. We're going to protect privacy by giving you a bunch of rights. You can access your data. You can correct your data. You can find out the data they have. You can opt out of certain things. You can delete your data." Look, I don't think there's anything bad about getting these rights, but the laws just typically stop there. They just say, "Okay. Here are the rights. And now, if the consumer doesn't use them, well, consumers don't care." The problem is that these rights are not a good way to protect privacy. They really aren't. Consumers really can't know enough to exercise these rights in any meaningful way. So we need a lot more, and a lot more than the onus on the consumer. But I can't count how many times I've been interviewed in various media stories and asked the question at the end, which is, "What can people do to protect their privacy? What should people do?"
Justin Hendrix:
I must admit, I read your warning in the book to interviewers not to ask that question.
Daniel Solove:
That's the question that drives me crazy because they want to end on something optimistic. They want to make people feel empowered. "Oh, if I just take my social security number and lock it in three safes, if I just make data deletion requests to all these companies, if I go through all this time and effort and read every privacy notice, that somehow I'm going to be okay, I'm going to be safe." And the answer is no, you're not. This is all just trying to give you the illusion that individuals have control. All these things are ... It's the illusion that somehow if you do these things, you're going to be okay. And the answer is no, you're not. The only way that we're going to really make a meaningful difference in protecting privacy is if we have really strong, good, effective laws that create the right incentives.
That's it. And if we don't do that, I think we're wasting our time. The consumers are just doing all this. The companies are happy because, "Hey, we don't have to change anything. We'll just give the people rights." Two people will ask for deletion, and nothing is going to change. They do what they do. And the problem persists. But I do think in today's landscape, the states are interested in regulating privacy. We just have to get them to do it better. Congress is likely not worth a lot of attention at this point because I think it's like Charlie Brown and the football. Lucy holds the football, and every time he goes to kick it, she pulls it away. But Congress goes and starts talking about a privacy law, and everyone perks up. I do it too, because you get a lot of attention. When you say federal privacy law, everyone wakes up.
"Oh my gosh. Yay. A federal privacy law." But really, come on, it's not going to happen. It's the football. The states, though, are doing stuff. And now is a great moment to get them to do better. Now is a great moment to try to buck this trend on regulating AI. And also, I think there's also an important thing that is very important that people understand because I think the real power is that people need to understand these issues a little bit better so they can demand more of legislators, meaningful policy. And that is that pretty much AI is everything. And so the idea, "Oh, regulate AI. Should we regulate AI? Or shouldn't we regulate AI?" To me, that is a ridiculous question. Saying, "Should we regulate the internet? Or should we not regulate the internet?" That was the debate in the early days of the internet. But the fact is we see now the internet's everywhere.
Everything is the internet. You don't have anything without the internet. So if you want to regulate anything, you've got to regulate the internet. Same thing with AI. AI is everything, because of how it's defined ... AI is a rebrand. AI used to be an attempt to try to create sentient machines. And it was a certain type of technology that was developed 50 to 70 years ago that went through a lot of droughts, what they called AI winters. And only in the last eight to 10 years did it really start to take off, with the computing power and the data necessary to make these technologies work a little bit better. But it's not sentient machines. It's just that these technologies have finally become a lot more useful and workable. Companies jumped on the label. "Let's all call it AI." But it's been around for a very long time.
These are algorithms that are integrated into a lot of things and have been used for a very long time. And so AI has already been with us. It's already in a lot of things. The way AI is defined in the laws is very broad. And I think that people think, "Oh, AI is this kind of small little thing that's ..." It's everywhere already. And people label anything with an algorithm as somehow AI these days. The idea that we debate the question, do we regulate AI or not regulate AI, is absurd. Of course we regulate AI. It's everywhere. I don't think there's a special regulation for AI. I think all the different areas need to regulate it, just like with the internet. Cars are connected to the internet, but it's part of the car, and we regulate it with car safety. The internet is everywhere. And the different areas of law absorb the technological change that the internet brings.
The same thing with AI. All the different areas of law regulate it. So privacy law needs to address AI and privacy. I don't think you need some brand new separate law. The problem is that the laws that exist regulating privacy are not good enough. And AI shows us just how bad those laws are and how much those laws need to be fixed. So instead of asking whether we should regulate AI, we already are regulating AI. We're just regulating it badly. And we need to look back at those privacy laws, look back at the laws in these other areas and say, "How do we take these laws and make them actually work in light of AI?"
Justin Hendrix:
One of the things I appreciate about this book is a way that you frame up your work as a kind of practice. You write late in the book that, "As technology evolves, privacy will always be in danger. We must constantly work to keep it alive, like emergency room doctors desperately trying to save a critical patient. We can never rest." I hope that my listeners will go and pick up a copy of this book, On Privacy and Technology by Professor Daniel Solove. And that is available from Oxford University Press. Sir, I thank you very much for speaking to me today.
Daniel Solove:
Yeah. Thank you so much for having me.