Carissa Véliz: Privacy is Power

Justin Hendrix / Sep 3, 2021

For the Tech Policy Press Sunday Show podcast, I spoke to author Carissa Véliz about her recently published book, Privacy is Power: Why and How You Should Take Back Control of Your Data. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, and a Tutorial Fellow at Hertford College, at the University of Oxford. She works on digital ethics, with an emphasis on privacy and AI ethics, as well as on practical ethics more generally, political philosophy, and public policy.

I caught up with Carissa about the book, and how it relates to some current issues in the world, from the pandemic to climate change. Below is a lightly edited transcript of our discussion.


Justin Hendrix:

Before we get started talking about your book and the issue of privacy more generally, can you just tell me a little bit about yourself and how you came to look at these issues?

Carissa Véliz:

I'm a philosopher. I was writing my dissertation on a different topic in ethics, and I started researching the history of my family as a side thing. And I learned a lot about them that they hadn't told us. They were refugees from the Spanish Civil War, and they went to Mexico. And there was a lot about what they did before going to Mexico and who they were that was completely unknown to us. So I went to the archives, I dug out this information, and it made me wonder whether I had a right to know these things and whether I had a right to write about them because I found them very interesting.

That summer, Edward Snowden came out with his revelations that we were being surveilled at a mass scale. Being a philosopher, I looked to philosophy to see what was there about privacy, and I realized there was a huge gap in the literature. There was some work on privacy, but there was very little. And the work that was there was quite outdated. So I decided to change the topic of my dissertation, and I ended up writing about the ethics and politics of privacy.

Justin Hendrix:

So tell me about this book and give the reader a sense of how it came together. You published it in the fall of 2020.

Carissa Véliz:

Yes. Originally, I had the idea of publishing an academic book, but the more I researched privacy, the more I realized this is a very important topic. We're at a crucial stage, a crucial moment in history, and there is a need for people to be better informed about this. And there are very good reasons why we're not better informed: there are a lot of interests, primarily from companies but also from the government, in our not knowing too much about this, or about how much we're being surveilled. And I thought that this was too important a topic and too important a moment to just write an academic book. So I decided to write the book that I wish I had been able to read when I got worried about this issue.

Justin Hendrix:

The first three chapters are context. You spend chapter one walking through a day in the life of someone and talking about how privacy and the collection of data might impact them at different points in the day. In chapter two you get into how the data economy developed and the two or three key things that drive it. Can you, for the listener, quickly sketch out what those three key drivers are?

Carissa Véliz:

Sure. I wanted to write a book that was totally accessible for somebody who hasn't thought about privacy at all, but also interesting for experts and philosophers. And so I hypothesized that the three key elements that drove the data economy were, first, Google realizing that they could make a lot of money out of personal data. They were a startup and they were very successful in the sense that they had a fantastic search engine and a lot of people wanted to use it, but they didn't have a way to fund themselves. They were keenly aware that getting into ads could compromise their loyalty towards users, but they couldn't figure out a different way. And they realized that they could target ads in an incredibly personalized way and have a competitive advantage. And that made them soar incredibly high after they decided to use data that way. And the Federal Trade Commission realized that this was a danger. In 2001 they published a report recommending to Congress that it should regulate the data economy. And many of the suggestions they made were along the lines of the GDPR.

So the second element that was really important for the data economy to take off was 9/11, because a few months after the Federal Trade Commission published this report, 9/11 happened, and suddenly the government realized that it could make a copy of all that data, literally make a copy, and try to use it to prevent terrorism. I think this was an intuitive approach. I think it was well intentioned. It just so happens that big data is not the right kind of analytics to prevent terrorism. Big data is fantastic at knowing what you will buy tomorrow, because billions of people buy things every single day, so we have tons and tons of data. But terrorism is always going to be a rare event, and that makes it a poor fit for this kind of big data analysis.

So it's kind of a shame because we lost privacy for something that was promised and that just wasn't possible. And then the third element that was very important is once the data economy was half entrenched, the technology companies realized that that was their future. So they wanted to peddle narratives that convinced us that we didn't need privacy anymore.

Famously, Mark Zuckerberg said in 2010 that social norms around privacy had evolved, that we thought about data in a different way, and that people were happy to share their things. And there's a big irony in somebody saying this who has bought the four houses around his own house to protect his privacy. But back then a lot of people bought it. A lot of people thought, "Yeah, maybe privacy was relevant in the past, but not anymore." And what we've seen recently is that privacy is more relevant than ever, that the same reasons we had for protecting privacy we still have today, and in many ways they're even more salient. In the past you protected your personal information so it would not get stolen. And today we're seeing identity theft really rise, especially as a result of the pandemic, with more people doing things online or spending more time online.

And so these are the three factors: first, using personal data as a tool to personalize ads; then 9/11, and governments having an interest in making a copy of that data and in that data getting collected; and third, the narratives that tech companies have peddled. And one of the most important of those has been that privacy is obsolete.

Justin Hendrix:

So before we move on to that, I want to focus in on 9/11. We're about to have the 20th anniversary of 9/11, and of course that will be an important day here in New York City, where I live, and elsewhere in the country. But at the same time, the investigation into another event, one that many describe as terrorism, January 6th, is just getting underway in Congress with the select committee. And in fact, in the other tab I have open right now, I'm looking at a host of letters of request that the committee has sent to social media companies requesting information that may be relevant to what happened that day.

And I'm also struck by the fact that after January 6th, the FBI Director, Christopher Wray, testified that there was simply so much information on social media and coming over the transom that they had a very difficult time finding a signal in the noise. And we see DHS actually putting out a call for proposals to possibly buy other services that will help it sort through massive amounts of information on social media. There are literally lines in your book that are about this phenomenon. What do you make of what's going on in the United States, and perhaps elsewhere, today when it comes to social media and surveillance?

Carissa Véliz:

Yes, that's very interesting. And one of the reasons why big data is not good at preventing terrorism is because terrorism is like trying to find a needle in a haystack. And adding all the data you can possibly collect and saving it for as long as you can just means you're adding a whole lot of hay to that haystack and making it all the more difficult to find the needle. So this is one example in which there's so much information out there that it might be counterproductive. It's intuitive to think that the more data we have, the more we'll know, and the more we can improve society and predict the future and prevent bad things from happening. But there's a lot of evidence that shows that often too much information is counterproductive: we just get lost in the noise, in correlations that turn out to be spurious, in pretty much chaos, and it would have been better to collect just the data that we need.

In the case of January 6th, it's an interesting case because if you're somebody who's worried about democracy, that was a scary day and a scary moment. And people might think, "Well, see, it's fantastic that we have this data and that they can find the people who acted in illegal ways." The problem with technology is that you shouldn't think of it as something that will always be used for good ends and in the best way possible, even if it sometimes is. Technology can be used in different ways. And you have to consider the very likely fact that it will be used in the worst possible way, because human beings are imperfect and there are a lot of interests and it's chaotic and you can't control technology. Once you invent something, you can't un-invent it. So when we think about whether it's a good idea to collect so much data, we should try to think about the cases in which it goes terribly wrong, not the cases in which you think it may be justified.

So talking about recent things, and this is not about social media, but it is about collecting data: we're seeing now in Afghanistan that one of the horrendous things happening is that there's a lot of data that might incriminate people. And I've seen a post from a teacher who's trying to delete or burn her students' records, and women's records, because if the Taliban realized that women were studying they might be in serious trouble. And if those records were digital, it would be almost impossible to delete them. You just can't burn them. One of the qualities of digital data is that it gets copied onto so many servers that it's very hard to delete. Another example is that the US apparently left behind biometric data of people who helped Americans. And there's a big worry that the Taliban might get that data. The UK embassy, it turns out, also left personal data behind. And so we can see how personal data is a huge danger. And we should think about what can go wrong, not only what can go right.

Justin Hendrix:

So you also put that in the context of what's happened here in the United States. You mentioned Edward Snowden, but also multiple other aspects of what the state here has done to create a sort of surveillance ecosystem. Are you also monitoring what's going on in China on a day-to-day basis? This may be something that you haven't seen yet, but I just noticed today that part of the Chinese government has put out a new set of recommendations around data privacy. How do you think about China in the context of these questions?

Carissa Véliz:

China is a fascinating case, of course, because it's a country that probably surveils its people more than any other in the world. And as liberal democracies, we say we don't want to go that way. And yet we are building many technologies that are going in that direction; we're not walking away from it, we're working towards it. And that's something that really worries me. China is also very interesting because it has implemented a social credit system in which, depending on what people do, they lose or gain points. And those points are used to limit people's opportunities or to give them more opportunities. So say you get caught jaywalking: that loses you points, and that might be used to prohibit you from using airplanes or high-speed trains, or from staying in exclusive hotels. If you don't have enough points, you might be less visible on dating apps. And if you have a lot of points, you might, for instance, not need to give a deposit when you rent a car. There are all kinds of perks.

And what makes the system scary is that it's very totalitarian, in the sense that something you do in one aspect of your life can influence another. So say you listen to loud music at home. If you live in the United States, that might get your neighbors angry and they might even call the police and ask you to turn it down. But that's not going to have an impact on the loan that you're going to get, or your visibility on a dating app, or a job that you're going to apply for. In China, it does, which makes it really scary. Now, something really interesting is that up until now, one of the arguments of tech companies in the West for not being regulated was that they needed to be competitive with China. So the idea is, "Don't regulate tech companies, because we need all that data to keep up with China. And if you regulate us, China is not going to regulate its tech companies, and it's going to have more data, and therefore it's going to develop AI faster."

There are many reasons why that's very questionable. But something fascinating now is that China has just passed one of the strictest privacy laws in the world. And there's a lot of speculation about why they did that. Their tech companies' stock dropped, so why would they do that? And one possible reason, among others, is that they realized that having so much personal data stored is a ticking bomb, and in particular that it is a danger to national security. Their rivals will get to that data sooner or later, and they will use it against China. So one reason why they're regulating in favor of privacy is to protect themselves. And that gives the United States a big motivation to come up with a federal privacy law, because it's one of the few advanced countries that doesn't have one, and that's very worrisome.

Justin Hendrix:

So you do get into this idea of privacy as power in chapter three. You talk about hard power, soft power, and this idea of privacy being collective. Can you talk a little bit more about the different forms of power you see connected to privacy?

Carissa Véliz:

Sure. Up until now, we have been aware that data is important because it's valuable: people can sell it for money. But I argue that even more important than that is that data gives people and institutions power. So not only can they sell it and earn money, but they can also get all kinds of things in exchange for it. For instance, if you have enough power, you can buy politicians and you can lobby the government. If you have enough power, you can avoid paying taxes, and you have much more clout than money alone gives you. One of the insights of Bertrand Russell was that we should think about power like energy, in that it transforms from one form into another. So if you have enough political power, you can buy economic power. If you have enough economic power, you can get military power, and so on. And one of the jobs of good regulation is to stop that from happening, so that even if you have a lot of economic power, that doesn't necessarily get you political power.

So at the moment, data is a kind of new power, in a sense. It's always been there; there has always been a connection between knowledge and power. Francis Bacon argued that the more you know about somebody, the more power you have over them. And that's kind of an intuitive thought. And Michel Foucault argued that the more power you have, the more you get to decide what counts as knowledge about someone. So for instance, Google has a lot of power, and that means it gets to decide how it classifies us. When Google tells advertisers and other companies that you are such and such an age and such and such a gender and that these are your interests, it gets to decide how you are treated and how you are seen.

And part of the power of data is related to the power to predict what's going to happen next, and in particular the power to predict what we're going to do next and try to influence our behavior such that we act differently than we would have otherwise. And so even though this power has always existed, in the digital age it becomes much more important, because we have a lot more data than we used to have and we have new methods to analyze that data that weren't there before. So it kind of crept up on us, because we weren't used to data being this powerful. And instead of just thinking about it as money, thinking about it as power will lead us to be more mindful of the kind of asymmetries that we are seeing, how we address those, and how we regulate them.

Justin Hendrix:

One of the things that I think of us as doing at Tech Policy Press is being part of a pro-democracy movement around the intersection of tech and democracy. You actually pause in the book and make a case for liberal democracy. Why did you think that was important to do in this context?

Carissa Véliz:

When I was writing the book, or maybe a few months before, I had seen a few polls, one by The Economist and a few others, that seemed to suggest that people weren't that enamored with democracy anymore, in particular in the United States, but also elsewhere. There are a lot of people who think that democracy is not working, so maybe it wouldn't be such a bad idea to have something else. And this worries me, because I agree that many times democracy doesn't work, but the alternatives are so much worse. And what we have to do when democracy doesn't work is figure out what's going on and change it, as opposed to thinking, "Well, maybe if we had a dictator, they would sort things out." Because history shows that more often than not, that leads to a lot of unnecessary suffering and a lot of injustice.

So I make a case for why liberal democracy is something important. And it's not just any kind of democracy, but liberal democracy. The idea is that liberal democracy wants to make people as free as possible, as long as their freedom doesn't harm others. And it also wants to put limits on what people can do, such that nobody's rights get trampled, not even a minority's. So one of the worries about democracy is what John Stuart Mill called the tyranny of the majority, which means that the majority, if they dislike a minority, can be just as bad a tyrant as a single dictator. So liberal democracy tries to limit that power. And one of the authors I cite who I think is very insightful is George Orwell. And George Orwell said, "You know, I get it, democracy sucks. It's slow. It's chaotic. If you get too many rich people, they're going to co-opt it. It's just really not ideal. But if you compare it to dictatorships, there's a huge difference."

And some of his detractors used to say, "Well, but there isn't. In democracies you see injustice, and people who go to jail who shouldn't go to jail, and people who commit crimes and don't go to jail because they're rich," and so on and so forth. And George Orwell said, "Yes, that's true. But the amount of injustice that you get is very small in comparison to what you get in a dictatorship." And that matters. So what is a difference in degree becomes a difference in kind. And most people in liberal democracies can go onto the street and protest and speak their minds and buy what they want and so on without fear of being repressed or of facing negative consequences. That is what matters in a liberal democracy, and that's what we should be very worried about protecting.

Justin Hendrix:

So in chapter four, you get into questions around what exactly we should be doing, as individuals and as a movement, to take on the question of privacy. There's a lot here. You want folks to step up and help stop personalized advertising and stop the trade in personal data. You have recommendations like implementing fiduciary duties around data and data collection, improving cybersecurity standards, and a ban on surveillance equipment, plus some proactive things like funding privacy regulation and bodies that would look after privacy, getting involved in antitrust, and doing more to protect our children with regard to privacy. What do you see as the key things that listeners of Tech Policy Press should be doing?

Carissa Véliz:

The key thing is to realize how important privacy is, and then everything follows from that, so do what you can, and you don't have to be perfect. History shows that we only need 5 or 10% of people to make an effort for things to change quite radically. So we need regulation, and there's no way around it. This is a collective action problem, and collective action problems don't get solved by individuals alone. But individuals have a big role to play in making that happen. So if we have 5 to 10% of people who care about privacy and realize it and protect it, that can motivate regulation. And not only regulation, but also companies realizing that privacy can be a competitive advantage, that it can actually sell, that people care about it, and that we are willing to pay for it. Because we realize that if you don't pay for it, you pay even more further down the line. So if you don't pay for privacy, okay, it might seem free right now, but five years down the line you get your identity stolen and you lose money because somebody stole your credit card number. Or you apply for a job and you get discriminated against because the personal data shows that maybe you have a disease, or you're trying to get pregnant, or whatever else doesn't make you attractive to an employer, and you don't get the job.

Ultimately, it's much more expensive not to protect our privacy. So what we can do is first understand that privacy is collective. A lot of people say, "Well, I don't care. I don't have anything to hide. I'm not a criminal. I'm not shy. And I have a permanent job. So I have no reason to protect my privacy." But one of the narratives that the tech companies have sold us that is incorrect is that privacy is an individual preference and something very personal. In fact, there's a collective side that's just as important. So when I expose data about myself, I expose others. If I expose my location data, I'm exposing my neighbors and my coworkers. If I expose data about my psychology, I'm exposing people who share those traits. If I expose my genetic data, I'm exposing not only my parents and siblings and kids and so on, but also very distant kin who I'm never going to meet but who can suffer very bad consequences from it.

So if we think about privacy as collective, suddenly it gives us a reason to work as a team to protect our privacy not only for ourselves, but for our community, family, and friends, and also for our society. And we should try to choose privacy-friendly options. So instead of using Google search, use DuckDuckGo; it's very good and it doesn't collect your data. Instead of using WhatsApp, use Signal. Instead of using something like Gmail, you can use ProtonMail. There are many alternatives out there, and they're typically free and work very well. Or if they're not free, they're not very expensive. Contact your political representatives. Tell them that you care about privacy. There's not one thing that's going to solve it all, so it's the accumulation of efforts that's going to make a difference.

Justin Hendrix:

You have some specific recommendations for tech workers. I don't know every person that will listen to this, but I do know that a lot of the folks in the Tech Policy Press community either work in tech or around tech. What do you think they should be doing in particular?

Carissa Véliz:

They have a very special responsibility, because they make the magic happen. Without them, there wouldn't be any apps or websites or platforms and so on. So here at Oxford, I get to teach people who are studying sometimes mathematics or sometimes computer science, and I like to talk to my students about inventors in the past who have regretted their inventions. There are many examples, but one is Kalashnikov, the person who developed the rifle, who thought that his weapon was only going to be used in a just war. And of course it turns out that it has been used in all kinds of wars, and many times in very unfair and horrific ways. And near the end of his life, he wrote a letter to his priest asking if he was responsible for those deaths. And you don't want to be that person. You don't want to be the person who developed something that gets used to harm people, because you're going to carry that for the rest of your life.

And so in a perfect world, and I hope in the near future, we will have regulated technology in a way that doesn't put such a heavy burden on the shoulders of the people who create the tech. Data analysts and programmers and computer scientists should be able to go to an ethics committee to get advice, to ask about a particular project, to solve problems. But at the moment we don't have that, and so all the responsibility is on their shoulders. And it's a tall order, because the idea is that when you design something, you try to imagine how it can be misused. Imagine a dictator ending up with this tool and how they would use it. And by design, try to make it impossible to abuse. That's really, really hard, and in some cases it will be impossible. But designers have to keep in mind that they will lose control of their inventions.

Given that we don't have regulation, something that I advise is to try to seek out advice from ethics committees, or from people who work in ethics or in tech and society. In the US, I'm not that familiar with what kinds of committees there are, but in the UK, for instance, Digital Catapult is an institution that helps startups take off, and one of the services it offers is ethics committees. And I think that is a really important thing. Also, for people who want to invest in tech: first make sure that whatever project you're interested in has had some kind of ethical vetting, because it takes a lot of imagination and experience to come up with what can go wrong. And as a designer, you might not be used to thinking about that. And ideally it shouldn't all be on one person's shoulders to carry that burden.

Justin Hendrix:

You see two roads ahead, two possible worlds. Of course, there are probably more, but you offer a 'two roads diverged in a yellow wood' sort of conclusion to the book. Can you describe what you see as possible in the future, and which direction you think we might be going?

Carissa Véliz:

Yes, basically we have two options. Either we have a society in which we regulate for privacy and we get our act together and protect liberal democracy and make sure that our personal data won't be used against us, or we get a society in which we have what we currently have, but with more surveillance. Because that's the tendency: to collect more and more data and to have more and more tools that are able to transform experience into data. And this is quite a scary scenario, because it's kind of like China, but more high-tech. It's a scenario in which you can't do anything that doesn't get recorded, and everything you do matters, and you get judged for it.

So it's a world in which, for example, I worry about children and what it means for them to grow up in an environment in which everything they do can be recorded and potentially used against them, potentially used to humiliate them or to discriminate against them in the future. And we really have to think carefully about what kind of society we want to live in in 10 or 20 years' time. Particularly now with the pandemic crisis, it's easy to give up civil liberties in an emergency without thinking of the kind of world we want to have once that emergency is over.

On the one hand I'm optimistic. I think that more and more people are aware that privacy is important, among other reasons because they have had some bad experiences with privacy online. In that sense, we are maturing as the digital age evolves. But at the same time, the tendency currently is still to collect more data, to have more surveillance, and to be very uncritical about surveillance. We're getting used to cameras and microphones being on all the time, being everywhere. And that really worries me.

So ultimately I'm optimistic in the sense that I think this is so bad and so unsustainable that we are going to regulate it sooner or later; we're going to get on top of it, just like we regulated other industries that came before, like railroads and cars and drugs and food. But the question is, are we going to regulate it before something bad happens? Or are we going to wait for something like personal data being used for the purposes of genocide in the West? Is that what it's going to take for us to wake up? Every alarm call that we get is more and more worrisome. So what are we waiting for? And that's my concern.

Justin Hendrix:

You deal with the potential pushback that someone might bring, which is, "Don't we need all this data to solve our problems?" And you go into a specific look at medical technology and medical information, and the extent to which personal data may be valuable in solving diseases or medical problems. But I might also throw in climate: in the big infrastructure bill that's making its way through Congress in the United States, there is a lot of focus on digital solutions for climate change and what might be possible there. And lots of interest, obviously, in data collection. I don't know, how do you square those things? On some level, our capitalist economy seems to have mostly bet that our only hope is more tech and more information and more machine learning and more big data sets. So what do we do?

Carissa Véliz:

The ideal answer is longer than I can give right now, so I encourage people to read the book, because there I give a more nuanced answer. But the short of it is: with regard to medicine, yes, of course we need personal data for medicine. If you go to your doctor and you don't want to tell them what's wrong with you, they're not going to be able to help you. But that doesn't mean that we should sell that data. So what I'm arguing for is that personal data shouldn't be the kind of thing that you can buy or sell. Just as we don't allow votes to be bought or sold because it would totally deform democracy, for the same reasons we shouldn't allow personal data to be bought or sold.

Furthermore, we have to be critical in the sense that it's not automatic that the more data we have, the better medicine we will get. One example: during the coronavirus pandemic there have been many attempts to use AI to help fight the pandemic. A recent MIT Technology Review article covered two meta-analyses that have been published recently about all the AI tools that have been implemented in hospitals to fight COVID. And it turns out that out of the hundreds and hundreds of AI tools that have been developed, if I remember correctly, maybe one or two are clinically viable. These are tools that have been used on patients, and that in some cases may have harmed patients. So we need to be a lot smarter about AI and not just assume that because it's cutting-edge tech, surely it's better, and because it's AI, surely it's better. That's just not the case. So that's part of it.

Another issue is whether AI is really going to need as much data as it needs now. What we want from AI is for it to be authentically intelligent. And if you have had a conversation with your digital assistant recently, you will have noticed that they're not very bright. The difference between a digital assistant or an AI and a child is that you only have to tell a child something a few times and they get it. They remember it, they can generalize it, they can use that information in many different ways very flexibly, and they don't need millions of cases to do that. So there's an argument to be made for why AI in the future, really intelligent AI, won't need the amounts of data that it needs today.

A third part of the answer is that there's a lot of data we can use that's not personal data. And I admit that it's very hard to distinguish personal data from non-personal data, because what we thought was not personal suddenly becomes personal data when it turns out that there's a new tool that can re-identify people or use that data to identify them. But for the purposes of climate change, a lot of the data that will be beneficial will not be personal data: things like air quality, where and how the temperature is going up, how the glaciers are melting, and all kinds of things that are not personal data. So the short of it is that we can do everything we want to do without having the data economy that we currently have. There's no reason why we should be buying and selling personal data.

Justin Hendrix:

Is there anything I didn't ask you about that you want to get across about the book or about this topic generally? I mean, I guess I could put it to you like this: what's next for you on the topic of privacy? Where are you going to take your concern about this issue next?

Carissa Véliz:

I want to focus more on how algorithms are using data and what we can do to preserve autonomy. Right now there can be hundreds of algorithms making decisions about you, whether you get a loan, whether you get a job, whether you get an apartment, how long you wait in line, what price you get for a particular product, and you have no idea. And that seems wrong to me. I also want to think about how we regulate algorithms. At the moment, you can produce pretty much any kind of algorithm to do whatever you want and let it loose on the world without any kind of supervision whatsoever. And that seems absolutely crazy. So one of the things I'm thinking about is how we implement randomized controlled trials with algorithms, just like we do with medicines. We would never allow a medicine to go onto the market without being tested, and yet we do that with algorithms all the time. And algorithms can be just as harmful as any powerful drug. So I'm currently thinking more about that and veering in that direction. So more about power than privacy.

But maybe to end: a lot of people think that it's very radical to say that we should end the data economy. But really, if you come to think about it, first, we have banned certain kinds of economic activity in the past because it was just too toxic for society, just too dangerous. And second, if you think about it, what seems to me really extreme is to have a business model that depends on the systematic and mass violation of rights. That's what's crazy. It's not banning it that's crazy. So I think we have gotten used to a very unfair situation, and we have to reassess our society with a view to protecting our democracy and the kind of life we want to lead and that we want our kids to be able to lead.

Justin Hendrix:

The book is Privacy is Power: Why and How You Should Take Back Control of Your Data. Carissa, thank you very much for talking to me today.

Carissa Véliz:

Thank you so much for having me, Justin.
