Privacy is one of the fundamental issues in tech policy. And yet, in the United States, progress on this issue has been elusive at the federal level, even as Europe has forged ahead with its General Data Protection Regulation (GDPR) and now the Digital Markets Act, which will reinforce the privacy protections afforded EU citizens under GDPR with new provisions.
Still, there are bills before Congress that could change things in the U.S., such as the Banning Surveillance Advertising Act, which was introduced earlier this year by Democrats. At the time, Senator Cory Booker, a Democrat from New Jersey, said that “The hoarding of people’s personal data not only abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence.”
To talk more about the history of how we ended up with an internet bought and paid for by surveillance advertising and what might drive reform, I spoke to two experts in the field:
- Dr. Nathalie Maréchal, Policy Director at Ranking Digital Rights.
- Dr. Matthew Crain, an Associate Professor in Media and Communication at Miami University and author of Profit Over Privacy: How Surveillance Advertising Conquered the Internet, from the University of Minnesota Press.
What follows is a lightly edited transcript of our discussion.
So Matt, you offer a history, or even a prehistory, of our current internet ecosystem and how it is financed by advertising. Where do you think that story begins?
Right. So I think one of the major challenges of any kind of historical work is to put a line in the sand and say, “This is where this begins.” Because someone will inevitably come and say, “Actually, there are roots to this that go back much further.” So you have to pick a place, and I really looked at the 1990s and the dotcom era, when not only does surveillance advertising begin to take shape in a proto-form, but really when the nation decided, or I should say the elite policy makers and the folks they spoke with to make decisions, decided what kind of policy and legal framework we were going to have in the United States for this brand new technology, that no one was quite sure what it was going to be, called the internet.
So we start in the 1990s with the Clinton administration, which is where the book picks up, when they decided to privatize the net and turn it from a nebulous government-sponsored, university-sponsored scientific resource for sharing information into a new domain for commercial development that was going to spread throughout the economy, and not only revitalize the flagging United States economy, we were in a recession when Clinton first got elected, but also raise the country’s standing in global capitalism and renew America’s relationship with computers and the cutting edge of technology. So that’s really where the story begins.
Now, are there a couple of key policy decisions that you think really set us on that trajectory?
So public policy in the 1990s around the internet was fast and furious, and it’s very important to think about what was done, and also what was not done, by our legislators and our elected officials. The actual privatization of the pipes, the internet infrastructure, started with George H.W. Bush, but was carried forward by Clinton. That’s really the first moment when we basically said, “Okay, we’re going to turn this over to the private sector. It’s not going to be run by the Education Department, it’s not going to be brought into any kind of public service. The idea here is we’re going to privatize, and we’re going to let the markets and our telecommunications companies, and whatever companies could possibly be built on this thing, run the show.”
This commitment to private sector leadership over the popularization and dissemination of the internet, that’s really the first key inflection point. Then from there, you get further policy debates around particularly is advertising going to become a fundamental funding mechanism for media and communications on the internet as it starts to take shape later on in the 1990s. So there’s a lot of particular policy battles and public debates around those questions that we could dive into in more detail, if we want to.
Nathalie, you looked at the political economy around the internet; what do you make of it, how it looked in the 1990s and into the early aughts, both here and perhaps around the globe?
I think what’s really interesting about much of the internet economy is that you’re really talking about companies that have two different sets of products and two different sets of markets, and the one that they actually make money from, which is largely advertising, surveillance advertising specifically, they try to talk about as little as possible. For a very long time, until really the past few years, they would just wave their hands and say, “Oh, well, we make money from advertising, but don’t worry about it.” And really focus their emphasis on all these amazing end user facing consumer products that we were able to use for “free,” meaning without paying them money from our bank accounts, but instead engaging in a transaction that we didn’t really realize was happening, of our data, our attention and so on, that they then used to transform into revenue from advertising.
Now, of course, we know this, we know this actively in a way that maybe before we just knew it subconsciously, but didn’t realize the meaning of it. That’s due in large part to the work of scholars like Matt Crain, but also Shoshana Zuboff and a raft of other authors out there. So what’s been really rewarding to me, as someone who somewhat awkwardly inhabits both the scholar and activist arenas, is to see how this analysis and first this empirical fact finding and then the analysis has made its way from the academy into civil society circles, and then into policy making circles.
We’re now at a point where policy makers in the US, in the EU, and beyond are poised to finally take legislative and regulatory action. This is something that’s really been a long time coming. Matt, I think in your book you talk about how we almost got privacy legislation at the end of the ’90s; there were a lot of conversations happening in Congress and all sorts of hearings. We did end up with COPPA, which protects, to an extent, data privacy for children under 13. There was a promise of, “Oh, well, we’ll do children and we’ll get to the rest of y’all later.” Of course, it’s 25 years later and we still haven’t gotten there.
But that policy conversation was also derailed by 9/11. We were having all these conversations about fair information practice and what data collection should be permitted versus not permitted, and data minimization and use limitation and all kinds of things like that, and then 9/11 happens and there’s a complete paradigm shift away from privacy protection and towards, “No, we actually need total information awareness. We need to collect it all, surveil it all, know it all.” As we learned was an internal NSA motto, thanks to Edward Snowden. At the same time, Google and then other Silicon Valley firms were developing the same ethos, but not for the purpose of national security, rather for the purpose of, again, generating revenue through targeted advertising.
So the fact that it’s taken this long to get here is really amazing, and it’s really important to think about it in the context of all the geopolitical goings-on of the past 25 years.
My head’s about to roll off my shoulders, I’m nodding so hard over here, Nathalie, listening to you explain all that. Because, if I may, to pick up a theme of what you were just talking about, one of the big takeaways from all the historical research that I did to write this history of the 1990s is that we didn’t suddenly wake up in 2018, find the Cambridge Analytica scandal plastered all over the news, and all at once decide we cared about privacy in the United States. It wasn’t a sudden awakening to the dangers of potential negative social outcomes from an entire internet economy based around advertising and indiscriminate data collection of internet users.
There was a lot of awareness and a lot of conversation, perhaps not at the mass public level, but if you go back and read Pew Research polling data that asked people at the dawn of the internet, “What are your concerns?”, people were uncomfortable putting their credit card information online, people were uncomfortable about seeing advertisements that were increasingly tailored to them, even in the 1990s. So it wasn’t a question of whether we were aware that privacy harms were in the offing if we developed a surveillance-based advertising economy; it was a question of what we were going to do about it, and what institutions and structures we were going to put in place to sort this out.
The decision that was made by policy makers in close conversation with marketers and advertising agencies and the earliest ad-tech platforms was that we’re going to let the market sort it out. We are going to adopt a policy that we still are living with today called notice and choice, where essentially privacy was going to be negotiated among individual users and the companies that they interact with online, and if you didn’t like a company’s privacy practices, you would simply not engage with that company and you would go to a different company. You would make choices based on your privacy preferences, and as long as companies disclosed their data practices to you, then the free market and competition would sort out a market for privacy, and those who valued it would be able to achieve interactions without surveillance, and those who were fine with it, they could find a solution that met their needs.
So in retrospect, it’s pretty silly to think that’s how privacy markets are going to sort out and actually produce benefits, and there were folks on the ground early, early on in these discussions that saw this very well. So this idea that we’re going to let the private sector lead, we’re going to let markets sort this out and we’re just going to notify people and let them make decisions in the marketplace really was a facile understanding of how information markets work and the history of telecommunications and media, which is all about concentration of power and the biggest players trying to avoid competition. So now we’re left in a situation today where we have choices to make, we can choose between using Google or using Bing, but they both operate on the exact same business model.
Then it really dovetails into this idea of, is competition a necessary component of a political solution to all of this? Absolutely, but is it the only thing, or is it sufficient? Probably not. There was a great deal of competition in the 1990s over who could surveil us the best. A competitive market unhindered by other policy goals doesn’t help. If we had valued other social norms, besides making as much money as possible and vaulting the United States to the height of the world stage of some new informationalized capitalism, and brought those values into the policy discussion, we could potentially be having a very different type of conversation today.
I don’t want to take too big a detour into this, but one thing that you talked about in the book that seems to have roared back a bit is the concern about kids and privacy, children and privacy particularly. You have a section called Spy Kids, where you talk about concerns that the Center for Media Education (CME) and the FTC were raising around children and privacy back in the late ’90s. It really seems like we’re seeing a resurgence of that particular point of view, not least because some of the same lawmakers that were talking about these issues then are the same people taking another crack at this today, I suppose. But what do you make of this dialog around children and privacy, and its role in the broader history?
I’m very interested to hear what Nathalie has to say about this also. The COPPA law only exists because there was a very small and dogged group of policy activists; the Center for Media Education started researching this in 1995, and they decided that this was the low-hanging fruit to get people in Washington to really care: save the children. So this focus around children is absolutely warranted, absolutely justified, but like Nathalie said, it never should have stopped with them. The implementation here, going back to this idea of notice and choice, was about obtaining parental consent, and as long as parental consent was somehow given, then the data collection practices were fine. Yeah, I think that’s inadequate for a variety of reasons.
Yeah, I agree with that. First of all, as we’ve been saying, putting the onus on the individual, or in the case of children, on their parents, to protect them from harms arising from commercial activity is something that we don’t do in other areas. We don’t review car seats for safety ourselves, and we’re not responsible for thinking about drug interactions before we take medication or give it to our kids. Individuals just aren’t qualified to do that; you need regulatory agencies to protect people.
But beyond that, what’s really worrying to me is how often focusing on children’s rights serves as a way to end the conversation, as you were saying, and that’s why I’m very concerned about the idea that President Biden put forward in the State of the Union, and that is being batted around in Brussels in the context of the DSA trilogue, of banning surveillance advertising for kids, because the idea is we’re going to keep it for everybody else. So it’s a way to offer a fake compromise that actually just serves corporate interests and doesn’t protect people’s rights more broadly.
What’s also really worrying is how often children’s rights is actually code for parents’ rights, and we see this in the conversation around online harms and content regulation more than around privacy. It’s really about giving parents the ability to control what their children can see and do online. I understand where that impetus comes from; you want to protect children from seeing content that’s too mature for them. But what about all the kids who need, for their mental health and their own safety, to have access to content about sexual health, or content that affirms that they’re not evil because they’re attracted to someone of a different gender than their parents think they should be attracted to? So it’s really important not to conflate children’s rights with parents’ rights to control their children.
So I do want to stay with the current political environment at the moment and some of the policy options that appear to be on the table. We still don’t have any kind of fundamental privacy legislation here in the United States; we’re seeing some of the individual state legislatures move forward with a patchwork of ideas. Europe seems to be full steam ahead on the Digital Services Act, and of course they’ve already got GDPR in place. Now there are new legislative proposals like the Banning Surveillance Advertising Act here in the States. Is there any chance that anything will happen in the United States in the near term? It doesn’t seem like this particular Congress will advance anything that’s privacy specific.
Yeah. So what’s really interesting about privacy and big tech regulation more broadly is that it’s one of the few policy areas right now that is not actually a strictly partisan issue, where you have Democrats on one side and Republicans on the other side and there’s such a wide gulf in between, that the only way you’re going to get anything through is if one side just overwhelmingly has the numbers. In a 50/50 Senate and with what we know about the filibuster and certain Democratic senators’ attachment to that tradition, that’s very unlikely in a lot of areas.
But with big tech regulation, including privacy, that’s not really the case. The parties are actually much closer to each other than big tech and big tech lobbyists wish they were, and Meta in particular is putting a lot of lobbying muscle into convincing Republicans and Democrats that they’re much further apart than they actually are. On privacy, for the past couple years, the big bones of contention have been the private right of action, whether individuals can sue companies for violating privacy laws or whether suits can only be brought by state attorneys general or the DOJ and so on, and state preemption, with Democrats overall wanting a private right of action and wanting the federal privacy law not to preempt state laws. The reason for that is that a number of states have stronger civil rights protections than the federal government does.
Whereas, on the other hand, big tech, its lobbyists, and the legislators aligned with them do want to see a preemption clause, because they want to preempt the California statutes, the CCPA and CPRA specifically. Now, what’s going on at the state level, with the exception of California and a really strong biometric privacy law in Illinois, is that the big tech lobby is pushing aggressively for some really, really weak privacy bills: there’s one in Utah, there was one in Tennessee recently, and there have been a bunch of others that are just as weak. I think the strategy behind that is to set those as the ceiling for what should be replicated in a federal law.
So I really hope that congressional leaders on both sides of the aisle can get their act together as soon as possible and pass strong privacy legislation at the federal level. I do think there’s room to negotiate if the bill is strong enough on both preemption and a private right of action. But the clock is ticking for this Congress, and I know both parties want to be seen as the party that did big tech accountability, so there’s a really perverse incentive to stop the other side from getting the win, because you want to save the win for yourself. But I really think there’s a way to characterize this as a win-win, and I’m hopeful that folks on Capitol Hill can find it.
Yeah, I can pick up on just one thread there, which is that Nathalie was saying it’s really important to pay close attention to what the tech companies themselves are saying in public, and then, to the extent that we can, to understand the rhetorical strategies being used on the non-public, lobbying side. The legal scholar Chris Hoofnagle has this amazing paper called The Denialists’ Deck of Cards, where he outlines a bunch of different rhetorical strategies that tech companies, and companies more broadly, cycle through when they are faced with different types of regulatory scrutiny.
Most of the cards that get played are things like: regulation is going to stall innovation; how can you regulate the internet when it moves too quickly and by the time you get a law passed everything will be different; it will kill the economy; it will kill jobs. Then at a certain point you reach an inflection, and this happened with Mark Zuckerberg and some other big tech folks a couple years ago now, where the conversation changed from “Regulation is antithetical to free market capitalism and will cause all of these harms” to “We’re very interested in sitting down at the table and supporting regulation, as long as the regulation is the right regulation.”
Not that this needs to be underscored for your audience, Justin, but we need to be very careful about any regulation that the tech companies themselves are publicly supporting, because what they will not do is undermine their fundamental business model, and I believe Nathalie and I are both in agreement that that is exactly what must be changed. So yeah, just a little bit of context there.
Totally. Though at the same time, you have to be careful not to assume that everything big tech companies like is bad and everything they hate is good, because that’s how you end up with nonsense like when Senator Blumenthal said that everybody who opposes the EARN IT Act is a big tech lobbyist, and Justin was kind enough to allow me to publish a piece on Tech Policy Press highlighting that I am not in fact a big tech lobbyist, and I think EARN IT is really, really, really bad.
Yes, that’s a good point.
So let me step back again and talk a little bit about what the real concerns are here about privacy at this stage, it being 2022. We know that we need data to power systems that will hopefully help us address any number of human challenges, and that we have a lot of data that is in the public domain, or perhaps should be, to help us think through the challenges we face. But it seems to me we’re concerned about political manipulation, which Matt, you’ve written quite a lot about; we’re concerned about technology abetting authoritarianism; and we’re concerned about technology companies and other firms exploiting us in the economy, taking advantage of asymmetries of knowledge about us as individuals or as groups that give them an economic advantage over us. Are there other concerns or fears that the two of you see as fundamental to this question?
Yeah. I’ll add another big concern about opportunity costs. I think it was Zeynep Tufekci who said a number of years ago that, “The best minds of an entire generation are dedicated to finding ways to make us click on ads more.” I was reading a paper the other day that argued that online targeted advertising has a profit margin of over 60%, and that’s why even companies like Apple and Amazon, which already make gobs and gobs of money, are investing and growing in that area, because it’s basically free money. For a relatively small investment, you get huge returns. As long as it continues to be that lucrative, people are going to keep working on this, rather than working on climate change, for example. So I think that’s a really important thing to keep in mind too.
I have a couple thoughts on that. I agree with what Nathalie’s saying totally, and I also think it’s important to move the conversation beyond individual harms and think about this in a couple of ways. So much of our lives are filtered through these communication technologies, where surveillance is effectively inescapable unless you’re going to go live in a cave somewhere, which is not a viable solution for most of us. So what are the negative social externalities that such a system creates?
There are a couple things I just want to mention here. One thing I’m very concerned about, as many others are as well, is what has been happening with the accelerating decimation of journalism in the United States. Newspapers in particular are closing at an unprecedented rate; we have half the number of journalists in this country that we did 15 years or so ago. That is extremely worrying, and it’s not something you can map directly to a privacy harm to an individual person. But the surveillance advertising business model, which is so heavily concentrated in the hands of big companies, is in some sense a zero-sum game with many, many publishers that are really struggling, because now, to be online and to be an ad-supported publication, you’re essentially contracting out with large ad-tech platforms who are taking a giant slice of an already diminishing advertising pie.
So that is something to be very concerned about: a negative social externality that spins out of this business model, especially in a very concentrated market. The second thing is, what kind of society are we creating in terms of our relationship to institutions? One of the harms here is that this data is collected, combined with other data, and has inferences made upon it, and it enters into contexts entirely divorced from the original site of collection. So we can be in situations where credit scores, credit opportunities, job opportunities, or insurance decisions are made based on a bunch of data that we really have no way to trace back to an original source.
Daniel Solove is another legal scholar who writes about this very eloquently, and he talks about how it is becoming the norm that our relationships to the institutions and organizations that make decisions about us are byzantine and impenetrable, which is a death sentence for any kind of accountability. So I think we have to focus on individual harms, and we have to focus on opportunity costs, but we also have to think more broadly, and this is a hard discussion to move into concrete policy decisions, but it’s nonetheless important: what kind of society is a surveillance society?
So I want to move away from history and the present to speculate on the future, and I have to say, having just read the new V-Dem Institute report on democracy and how it’s performing around the world, it’s once again another set of bad figures: now 70% of the world’s population living under autocracy, up I think 21% from 10 years ago, just a continued decline. If you look at the statistics in the Freedom on the Net report from Freedom House, digital authoritarianism is on the rise, with tech firms, even Western tech firms, essentially abetting authoritarianism.
I’m struck by this New York Times story from earlier this week about Nokia, the Finnish telecom company, respecting the sanctions on Russia and pulling out of business there, but leaving behind an enormous surveillance apparatus that it had helped the Russian state build. I’m finding it slightly hard to be a long-term optimist at the moment about where we’re headed on these issues, and I’m wondering if either of you can give me any optimism, or help me speculate more specifically about where we might get to.
Well, the trends are not looking great in many areas, as you say. It would be foolish to disregard that. From a historical perspective, I think one of the lessons that comes out is that the system we have now is not a natural system that was created by any kind of deity or process of evolution, it was a result of human beings making choices in particular contexts. So there’s always an element of agency here and we can never count out the ability to intervene in processes, even those that seem like they’re well underway. There’s always the hope and the responsibility to build, or at least try to build the future that we want.
Yeah, I think that’s right. I think of optimism as a radical choice, in the same way that you have to choose to love someone every day, I think you have to choose to be optimistic. If you don’t choose to be optimistic, there’s absolutely no way that you’re going to win. I have conversations at least once a week with someone, whether it’s someone who works in civil society or in policy or someone who uses the internet and is part of society, and they’ll say, “Oh, well, the work that you’re doing is really interesting, but that’s never going to happen. None of that is ever going to happen.” My answer is always, “Well, not with that attitude it’s not.”
I don’t spend a lot of time trying to persuade people that they’re wrong to be pessimistic, but for me and for my team, we choose to be optimists. Which doesn’t mean that we are ignorant of the data that we see and the trends in front of us, but the arc of history is long and I like to remind myself of Ursula K. Le Guin’s quote that, I’m going to mangle it, but it’s something about how we live under capitalism, and in our case under surveillance capitalism, and it’s hard to imagine life without it, or life outside of it, but that was true of the divine right of kings too. You have to choose to be optimistic and to keep fighting.
So let me ask you both a last thing then, which is what would you tell the listener to do at the moment? What would you leave them with as an action that they could take, which might lead towards that more optimistic future that you see?
Think critically about your own use of social media and of the various services that are supported by surveillance advertising, both in your personal life and in your job. Which isn’t to say that everybody needs to get off all social media platforms overnight, or that we should all quit our jobs if they have anything to do with using those platforms, but think critically about it. Why are you using this or that platform? What is the benefit that you get out of it? Are you really as trapped as you think you might be? Maybe take a break from it for a week or two and see what you miss about it, if anything. Or resolve that you don’t need to have it on your phone, so you’re not tempted to check it every time you’re standing in line somewhere, but you’ll keep an account because maybe you need it to receive information about some neighborhood group you’re in, or something like that.
Take control of all that. It’s pretty much impossible to completely protect yourself from being surveilled by the surveillance advertising apparatus; I spent a long time trying, and it’s not actually doable, which is why we need legislation and regulation here. But there are steps you can take, and I would encourage people to take them, reclaim what agency we have in the status quo, and be assured that there are plenty of people like me, like Matt, and like a lot of the people we work with, who are fighting in different ways to make the vision of a privacy-respecting internet a reality.
Matt, you end your book with a call for an alternative political vision for the internet, is there something you think the listener could do to get there?
Well, that’s a pretty tall order. Try to think about areas of life that we’ve decided markets aren’t the only, or the most efficient, way to structure; I think privacy might be one of those areas. As for what you can do on your own level, everything Nathalie said was spot on. Also, maybe support your local journalists. If there’s an outlet that you like, give them some money. Part of the reason journalism is struggling so much is that we have the expectation that it’s ad-supported and that it shouldn’t be up to individuals to pay for it. If you have the capacity to support alternative business models, then I think that’s a nice, concrete way to take a small bit of action in your daily life toward making the alternative world that you want to see a reality.
Well, I thank you both for speaking with me today.
Thank you, it was wonderful.
Thanks, always great talking to both of you.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.