Containing Big Tech

Justin Hendrix, Rebecca Rand / Aug 20, 2023

Audio of this conversation is available via your favorite podcast service.

This episode features two segments. In the first, Rebecca Rand considers the social consequences of "machine allocation behavior" with researchers Houston Claure and Malte Jung, authors of a recent paper on the topic with coauthors Seyun Kim and René Kizilcec.

In the second segment, I speak with Tom Kemp, author of a new book out August 22 from Fast Company Press titled Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy.

What follows is a lightly edited transcript of the episode.

Justin Hendrix:

So this week we're introducing Evidence Base, a segment where we highlight new research on the relationships between technology and people, politics, and power. This week we've got an interesting story about fairness and machines. And Rebecca, our Tech Policy Press audio intern, is here to talk about it. Hello, Rebecca.

Rebecca Rand:

Hi, Justin.

Justin Hendrix:

So what do you have for us today?

Rebecca Rand:

Well, I think the best way to start is actually on a little tangent. I have a video to show you. It's an experiment that was run by a team of researchers on Capuchin monkeys.

Justin Hendrix:

Okay, so now I'm supposed to watch this thing.

Rebecca Rand:

Yes. So you can see there are two monkeys in separate cages side by side. They can see each other, and there's a researcher in scrubs. She has two containers, one with pieces of cucumber and the other with grapes. Now, the thing you have to know about these monkeys is they fricking love sugary stuff. Juice is like heroin to them. So obviously between cucumbers and grapes, the grapes are the real jackpot. So the monkeys have been taught to give the researcher a rock in exchange for a treat.

So see, the first monkey's giving her a rock, she gives it a piece of cucumber, monkey eats it, no problem. Then she turns to the other monkey, who does the same thing, gives her a rock, but this monkey gets paid with a grape. Now the first monkey is watching this happen. So it gives the researcher a rock, and she hands it a piece of cucumber. Now watch what it does. The first monkey takes the cucumber, looks at it, and throws it right back at the researcher. Then it starts banging on the table with its hand and shaking the bars of its cage like, "Lady, where the F is my grape?"

Justin Hendrix:

That is not a happy monkey.

Rebecca Rand:

No, it is not. So what we see here is that fairness is this sort of deep innate sense that we have as primates. And when things are so obviously unfair, we have these really strong reactions, we flip out. But more and more as we barrel into the future, it's not a lady in blue scrubs making these decisions about who gets what resources. It's actually machines, it's algorithms deciding which resumes recruiters see, how prominent your dating profile is on the dating app, which gig workers get which jobs.

Justin Hendrix:

So what does science have to tell us about how people feel when machines are making such decisions?

Rebecca Rand:

Right. So I talked to a couple of researchers who ran this other study.

Malte Jung:

I can start here. I'm Malte Jung. I'm a professor in the information science department at Cornell. I study human-robot interaction. I've been doing that for the past 10 years.

Houston Claure:

I'm Houston Claure. I'm a postdoc at Yale, and my research also focuses on human-robot interaction and how we can build robots that can understand fairness and behave in a way that we consider fair.

Rebecca Rand:

So Dr. Claure and Dr. Jung, they were looking at something they call machine allocation behavior. That's basically when machines are making decisions about doling out resources, and they wanted to know how humans feel and behave differently when they know it's a machine making these decisions versus a human. Here's Dr. Claure.

Houston Claure:

One of the interesting things about our field has been how much there is to learn about how humans respond to the way robots behave. For me, the focus has always been on fairness and whether people respond in the same way to a machine when it's fair or not.

Malte Jung:

If I jump in, for me, what's so fascinating about machine behavior is this kind of growing understanding that some of the behaviors that machines exhibit are unique to machines. We don't see them otherwise out in the world. What we were particularly interested in has been how the behavior of a machine might impact how we interact and relate to other people.

Rebecca Rand:

So to study that, they used, as all good behavioral researchers do, a game of Tetris. It's like a thing in the behavior world. Researchers love Tetris.

Justin Hendrix:

I love Tetris too. So how did this experiment work?

Rebecca Rand:

Right. So they ran this study looking at people playing a collaborative game of Tetris. Only one person could control the falling blocks at a time. The other person just had to sit there and watch. But on some teams, one player got way more time in charge than the other.

Malte Jung:

You might see that's sort of where the fairness aspect comes in. I mean, playing Tetris is fun. And watching someone play Tetris, not so much.

Rebecca Rand:

And sometimes they'd tell the team that a person was deciding whose turn it was, and other times they'd say it was an algorithm choosing who got to go.

Houston Claure:

It was the appearance of an algorithm that was making a decision because in the backend we actually had control over who was getting more resources.

Rebecca Rand:

And what they found is a few things. First, people knew right away when the turns were unfair, and they didn't care who was making that decision. Being sidelined by a machine felt just as bad to them as being sidelined by a person. The next thing they found is that when a person was favored by a machine, it kind of went to their head.

Houston Claure:

When it was an AI that made this decision, people who received more resources actually saw their partner who received fewer resources as less dominant. So essentially, we found that receiving more resources from an AI actually leads to a feeling of empowerment, which was very interesting. People who received more resources from a human, by contrast, showed no difference in how they perceived their group member as dominant or less dominant.

Justin Hendrix:

Interesting. So it's like people putting more stock in how much an algorithm seems to value them than other people.

Rebecca Rand:

Totally. Dr. Claure actually had a funny story about that.

Houston Claure:

Yeah. A couple of our friends decided, "Oh, let's check our Uber ratings and see who has a higher Uber rating." And then we started comparing these values with one another, and the person who had the lowest Uber rating, we started making fun of them, even though we had no idea how this rating came about, attaching their personality to this value and changing the way we perceived this person.

Rebecca Rand:

And this is their central point, that how machines treat us will perhaps cue us on how we view each other.

Malte Jung:

Machine allocation behavior really changes how we perceive other people, how we relate to them. And I think that's hugely important because anything we do as people, we do with others. We live with others, we build our families with others, we work with others. There's not much we do alone or can accomplish alone. And so kind of understanding how machines mess with this fundamental aspect of what it means to be human is really crucial.

Rebecca Rand:

Dr. Claure told me that in the long term, fairness generally improves how well people work together. But something his colleague, Dr. Jung, pointed out is that a machine might not appreciate that if it's only focused on maximizing performance in the short term.

Malte Jung:

Because in this study we found that actually the groups in which the allocation was unequal, in which one person got a lot more, performed much better than the ones where it was evenly distributed. We really need to enable machines to reason about this stuff, and in that way, reason about trade-offs: how do I trade off the performance of the group versus their tendency to be friends afterward? So how do you make these trade-offs?

Justin Hendrix:

Those are questions I assume for future research. Fascinating, Rebecca, thank you so much for sharing.

Rebecca Rand:

My pleasure.

Justin Hendrix:

Thanks so much to Rebecca Rand for that segment, and wishing you all the best for the start of a new semester at the Newmark Graduate School of Journalism. You'll hear more segments from Rebecca this fall.

Next up, my conversation about privacy, AI, and competition with the author of a new book from Fast Company Press titled Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy.

Tom Kemp:

My name is Tom Kemp. I'm a Silicon Valley based entrepreneur, investor, policy advisor, and the author of the book Containing Big Tech.

Justin Hendrix:

Tom, maybe we can start off with just a brief history of your career in Silicon Valley, what you've gotten up to, what types of companies you've created, and perhaps why you've decided to turn to policy.

Tom Kemp:

Sure. So I graduated from the University of Michigan, grew up in the Midwest and had an opportunity to interview at Oracle back in 1988. And when I came out to Silicon Valley, it was like the land of fruit and honey, and it was just amazing with all the entrepreneurial spirit. And so I joined Oracle, worked there for a couple years and then just started doing startups.

The last startup I was at, I founded it and was CEO. It was a company called Centrify, and I was able to grow it into a hundred-million-dollar cybersecurity provider. I did the classic thing: go out, raise Silicon Valley money with venture capitalists, and get to the point where it got acquired. The startup that I had before that, which I also co-founded, went public. So I have a 25-plus-year track record of building companies in Silicon Valley. And lately it was very much in the cybersecurity area. And so, I saw what the bad guys were doing, and that got me really concerned about these large amounts of data being collected and how they can be exploited.

And then after my company got acquired, I took some time off and then really started digging into privacy and took some online courses, read a lot about it, and then eventually hooked up with a guy by the name of Alastair Mactaggart who was starting to get this proposition on the ballot. And I worked with him for six plus months as a full-time volunteer. Basically, I was the chief marketing officer to help get Proposition 24 passed. And then since then, been doing more and more policy work, working with candidates, and then also have written some of my own laws here in California. The most recent is Senate Bill 362, the California Delete Act.

Justin Hendrix:

So most folks probably don't know that it's possible for individuals and organizations to write laws and to have those co-sponsored by legislators in California. Can you talk a little bit about your experience with the Delete Act, and are there any other bills that you're working on?

Tom Kemp:

Yeah, I mean, obviously California has a direct-to-the-voter vehicle, which is the proposition system, but the reality is that the bar is so high that you really need to be very wealthy, because nowadays you have to collect at least a million signatures. The last time someone did it just with pure volunteers was maybe 30 years ago. And so nowadays you have to go out and pay $7 per signature. So right off the bat, getting something even close to being on the ballot will cost $7 to $10 million. But also, if you are able to catch the ear of a politician, here in California you can come to them and you can propose an idea.

And so I'm very fortunate that I've gotten to know my local state senator, Josh Becker (D-CA13). Over a year ago, I proposed a bill regarding better regulating data brokers, and he got excited about it, I wrote it, and he took it on. At that time, the bill actually got killed by the tech industry. They were able to do some backroom lobbying and get it killed. And so this year I decided to go bigger and better and built a much more comprehensive bill that was modeled heavily on Senator Ossoff's (D-GA) and Senator Cassidy's (R-LA) Delete Act, really taking the concept of the FTC's Do Not Call registry and applying it to data brokers: people in California could go to a single government website and put in their name and email address, and then any registered data broker would have to delete the data of the people who went through this clearinghouse.

And so the cool news is that, again, Senator Becker was very amenable to this. And then we also started working with an organization called Privacy Rights Clearinghouse. The folks there, specifically Emory Roane, also contributed and wrote this bill alongside me, and it passed the California State Senate. It's now in the Assembly, where its next stop is the Appropriations Committee.

Justin Hendrix:

Privacy is, I suppose, the major preoccupation or one of the major preoccupations of this book. You have chapters on digital surveillance, on data brokers, on data breaches, and then to some extent, the privacy concerns that run through the other areas, AI, persuasive technology, kids online safety, extremism and disinformation, competition. Let's talk about the kind of organizing idea behind this book for you. I mean, clearly you're concerned about the scale of technology firms, you're concerned about what's emerged out of Silicon Valley, and privacy seems to be at the core of it.

Tom Kemp:

Yeah, I would agree with that. I think, first and foremost, that these tech companies have largely been unregulated when it comes to the collection of information. It's interesting, the motivation behind writing this book was that most people, even people in tech in Silicon Valley, don't really fundamentally understand the business models of some of the largest providers: that they're focused on advertising, trying to hoover up as much information as possible. And then they're also not familiar with... And these guys, these are people in Silicon Valley, so they're probably a little bit more tech-savvy than the average American because they work in the tech industry and have been doing so. They may even work at Google or Facebook themselves. And by the way, I literally live like a mile away from Facebook's headquarters, and it's a 10-minute drive to Google, so I'm in the epicenter of all this.

So what I wanted to do was that given my background in cybersecurity, what I've done over the last couple of years in privacy, but also just living and breathing the Silicon Valley entrepreneurial journey in terms of starting companies, getting VC money, taking one company public, having to get acquired, I thought I also had a good mindset and viewpoint in terms of what these large tech companies are thinking about.

And so in the end, I wanted to write a book that connected all the dots and would be a book that you could hand your Uncle Larry or someone else who is an informed citizen and has some curiosity about this. And so they could get it, right? They could connect the dots themselves and have it be explained to them not in a deep academic way, but more at a higher level. And so awareness would be raised about what the issues were, but also I wanted to provide solutions. And I provide in the book solutions not only for consumers, but also for policymakers in terms of what type of laws and what type of things they could put in specific laws as it relates to privacy, data brokers, data breaches, et cetera.

Justin Hendrix:

You spend a bit of time talking about the downside of living in a world under surveillance, the downside of living in a world where the cameras are always on. I was struck by the fact that you refer multiple times to the fact that we're living, of course, in a post-Roe v. Wade America, and that to some extent that has made it clear to many people what the stakes are with regard to privacy, particularly with regard to the way we interact with information online and what might happen with that information. There have been headlines just in recent days about this: how in states where abortion has been criminalized, personal information that folks might trade over text messaging apps, or that might be acquired through web search history, et cetera, the extent to which that could create a possible danger for the individuals trading that information. Was that a moment for you in particular? Did you recognize that, I suppose, retraction of a right as a wake-up call in the writing of this book?

Tom Kemp:

Well, certainly as the CEO of a hundred-million-dollar-plus cybersecurity company that had 2,000-plus enterprise customers, I got to see firsthand what was happening with hackers. And oftentimes the enterprise customers would call us up after they'd been hacked, and it was like, "Oh boy, that's a lot of data that's been stolen from you, and it's going into the wrong hands." So yes, the fundamental thing is that in the past, the data was collected from an advertising perspective. I think people were fine with that trade-off: "Okay, I searched for toilets today looking at Home Depot or something like that," and then for the next three weeks I see toilet ads everywhere, or lawnmowers or whatever, red dress.

But what certainly has happened is that that data is increasingly being weaponized against us, and we're seeing more and more cases. And so I thought I had seen a lot of bad stuff, but when I was researching this, it was like, "Oh my God, that's really bad" in terms of how data's being weaponized against us. And one example is that after the Dobbs decision, Google, based on employee pressure, said, "Okay, okay, we're not going to collect any of these abortion-related searches. And then if you drive to an abortion clinic, et cetera, we won't display that." So what I did last August, as I was researching this book, I said, "Okay, let's see if that's really the case." And so for a couple days, I just did a lot of searches. I downloaded specific apps. I drove to a Planned Parenthood. Luckily there was a taco truck outside of it, so I parked the car and had tacos, but I was in the same [inaudible] of it and had actually put in the map that I was driving to it.

And all that data was collected. What Google had said, that they were deleting and discarding it, wasn't the case. And I waited another 30, 40 days. And then a reporter with the Guardian wrote about that in November. And then just recently, Geoffrey Fowler at the Washington Post also wrote about it, six or eight months after I had actually seen the same thing myself. And then that caused a letter from, I think, about 10 senators to be written to Google. So it was just eye-opening that these practices are still occurring even after these tech companies said they weren't.

And then I think one last thing I also want to add is that, look, we've had big monopolies in the past. Standard Oil was powerful, but it didn't know everything about us. When entities that have such concentrations of power have so much data, bad things can happen: in terms of my background, identity theft, or the weaponization of data as it relates to reproductive health, or how the data feeds into algorithms that make the technology more addictive and then eventually lead people down rabbit holes, et cetera. So there were a lot of eye-opening incidents as I was writing this book. I thought I had seen it all, but no, and I document those in the book.

Justin Hendrix:

You're clearly somebody who looks for points of leverage and wants to be effective. You actually lay out in an appendix your own set of requirements for what national federal privacy legislation should look like in the US. What do you make of the ADPPA, the American Data Privacy and Protection Act, at the moment? Do you see the possibility of its passage? Do you see some way to leverage perhaps the concern post-Dobbs, perhaps the concern about artificial intelligence? Can you see national privacy legislation happening in the United States?

Tom Kemp:

It's a good bill, and it actually meets, and in some cases exceeds, what California has, although California is evolving. For example, we came out with the Age-Appropriate Design Code, which is not reflected in the ADPPA. I know you interviewed the Baroness a couple months ago. And then there are other things that California has added. So I would say it's on par with the CPRA. I know some people say, "No, no, it's so much better." But there are areas and aspects where California does exceed it.

I think the fundamental issue that I have with it is that it preempts state laws. And historically, just to quote Brandeis, states have been the laboratories of democracy. I think it's so critical, being in California, being someone who sees something happening, to be able to work with your legislature, or even, if you're very wealthy, to go the direct democracy route, to be able to make a change and a difference. To have a federal privacy law acting as a ceiling, given the fact that it's very difficult for bills to come out, especially in the area of privacy, because we really haven't had major privacy legislation since the '90s, and those were just sector-specific with HIPAA and Gramm-Leach-Bliley, to me that's the big downside of the ADPPA: the fact that it actually preempts state laws. And if people say that the ADPPA is so much better than every state law out there, then they shouldn't have a problem with states being able to exceed it.

The other big issue, as we all know, is the private right of action. I think there could be some sort of compromise there: maybe you can limit or narrow it as it relates to identity theft-related privacy violations, which I think maybe is a way to proverbially split the baby, so to speak. But to me, it's a good bill. The fundamental issue is being a Californian and seeing that citizens can make improvements, especially citizens who are based, like myself, in Silicon Valley and see the rapid evolution of technology. I don't want to lose that, just like I don't want to lose the ability for California to help set automobile safety standards. So that to me is the big issue right there. But fundamentally, I think it's a really good bill; the preemption is just the big killer for me.

Justin Hendrix:

I want to ask you about another bill that you talk about in the book, the Fourth Amendment Is Not For Sale Act. That one just got reported out of the House Judiciary Committee 30 to zero, I think, with one member voting present, as of the moment we're recording this in late July. So this does seem to have a lot of bipartisan support. Is that another one that you've followed closely, and do you suspect that it has a chance?

Tom Kemp:

I like this bill a lot because obviously I've written two data broker-related bills in California, one that got killed by the tech industry last year, SB 1059, and this year SB 362, the California Delete Act. And so I was the one that proposed it. I co-wrote it with Emory at Privacy Rights Clearinghouse and have been spending half my time just bird-dogging it, working with different groups and state Senator Becker's staff to make this happen. And so, really, this Fourth Amendment Is Not For Sale Act addresses one of the key problems that we have with data brokers, which is that government agencies, instead of getting a court order to track the location of specific people, can actually go and just contact data brokers and buy the data from them.

And because the data brokers have integrated their capabilities with hundreds of different types of mobile apps that have their SDKs, all this location information about Americans is being fed in. And so ICE or someone else can just say, "Hey, why screw around with a court order? I'm just going to call up X, Y, Z data broker and I'm going to be able to track these people." And it's a complete subversion of the Fourth Amendment. And so I'm very pleased.

The nice thing is that there has been, and is, growing awareness of the issues with data brokers. In FTC Chair Khan's recent testimony, Rep. Matt Gaetz (R-FL) went on for half his questioning about creepy data brokers. And so I think now there's finally consensus. And I think maybe Republicans look at the threat more as this data being sold to government, not trusting the FBI and some government agencies. And in the case of the support for SB 362, we have Planned Parenthood, who look at location data tracking as a threat to people visiting abortion clinics.

And so I think there is now consensus, and so I'm actually quite encouraged that this could actually make its way through. It bans government agencies from buying the data, but the fundamental issue is: should they actually have the data in the first place? And that's why I want to empower consumers in California. And I really hope the Ossoff and Cassidy Delete Act also gets through at the federal level; it's incorporated in the ADPPA, which gets people to actually delete the data right off the bat. Because even if you ban governments from buying it, other people can buy it, right? The location information. So let's just get rid of the data to begin with, empower consumers to have some control of how they're being tracked, and make sure that people are not collecting and retaining this information.

Justin Hendrix:

Logically, your chapters on surveillance and data brokers lead into artificial intelligence and what you think can be done there. I want to ask you about the roadmap, because you mentioned, for instance, a variety of things that are happening in the United States, perhaps the Biden administration's efforts around the Blueprint for an AI Bill of Rights, the effort to some extent... Well, I'll slightly rephrase this. You mentioned the Blueprint for an AI Bill of Rights. Just today we've seen the Biden administration come out with some voluntary principles that certain AI firms have agreed to around safety and other considerations, about how they'll perhaps make their products more transparent and agree to certain measures that hopefully will defend against some of the worst possible abuses. Do you think we're close to getting past, I guess, some of these well-intentioned principles and speeches and blueprints in this country with regard to AI? Do you see a moment in the near term where we might catch up with the EU and actually put a few laws in place?

Tom Kemp:

Probably not, because of just how dysfunctional DC is. They can probably agree to something narrow, "Hey, government agencies can't buy location data from data brokers," but beyond that it's harder. But I also have a fundamental issue with how we're designing a lot of these laws, even from a privacy perspective, which is that what we've seen is that the tech industry, especially at the state level, after losing in California, has been able to heavily influence how state laws have been written. And the fundamental problem that we have, and I think this same problem is going to eventually apply to any AI laws that eventually come out, is that the laws make it hard for consumers to exercise their rights. So let's just take privacy. There are entities that you interact directly with, like a Walgreens, a Walmart, a Google, et cetera. And yeah, you may have the right to say, "Tell me what you have on me and don't sell my information." But you have to do that on a one-on-one basis, right?

What we really need, and I think it's really critical, and it'd be better from a consumer perspective, is universal support for a global opt-out signal that basically, as you visit these sites, just sends the signal, "Don't sell or share my information." And so you avoid the cookie fatigue that people have in Europe, or even what we have in the US. Similarly for data brokers: we don't even know who these organizations are, and if we do find them, we have to contact them on a one-on-one basis. And so that's why I think we need something like the Delete Act, where you can go one time, to one place, put your information in, hit the delete button, and it deletes all the information.

Specific to AI, I also think that from a transparency perspective we should look at, and implement, transparency from the consumer's perspective and make it easier for consumers. Because in the end, yeah, the companies that are creating the ChatGPTs, et cetera, may be more transparent, but it'll be buried in privacy policies. In today's announcement from the Biden administration about providing transparency, one of the things was, okay, they'll put watermarks on audio and video, but the big thing is text, right? And so we should have the right to be able to take chunks of text and actually ask the large AI providers, "Did you generate this?" And that would address a lot of the issues people are worried about, like students handing in essays. Like today, the University of Michigan Law School said it's not going to accept any applications that use ChatGPT. Well, let's not guess or speculate. Let's give the consumer the ability to actually ask the company if it generated the text.

So I think the fundamental issue, and I try to bring this forth in the book, is that even if we get some of this stuff, laws for privacy or better guardrails for AI, it should be approached from a consumer-centric perspective to make it easy. As opposed to: yeah, the tech companies will sign up for, okay, more transparency, but then their privacy notice, instead of being 30 pages, becomes 33 pages, and people will still just hit "accept all" because they just want to get on to the website, and it's not going to improve things. And so I think we need to make privacy and guardrails for AI simpler and easier, more consumer-centric.

Justin Hendrix:

Of course, there may be some technical challenges to the identification, or at least trustworthy identification, of AI-generated text. I think that's still a big technical problem, but perhaps we'll figure out a way to get past that one in the near term.

A big focus of the book is on competition. You've already talked about scale and the extent to which the scale of the big tech companies puts them out of the league even of past monopolies like Standard Oil or the railroads, et cetera. You detail the various impacts of the scale of the current tech firms on innovation, on our politics, and on the press. I want to ask about that in particular. You don't go in the book into some of the questions around the schemes that are popping up around the world to get big tech firms to remunerate news firms. But I thought I might ask what your view on those is, given the stance in the book, on things like the JCPA, the Journalism Competition and Preservation Act, which is in consideration in the Senate. There's a similar bill in California at the moment, there's just-passed legislation in Canada, and there are other jurisdictions around the world thinking about these bills that essentially would require the tech firms to remunerate journalism organizations for content.

Tom Kemp:

Yeah. The California bill was put forth by Assembly Member Buffy Wicks (D-CA15), who got on board with my SB 362, and she was the one that wrote or sponsored the AADC in California last year. But what happened with that one is that the opposition came forth, and so she agreed to pull the bill and make it a two-year bill. So the California bill is no longer active. But yeah, just stepping really far back: for the last 20-plus years there's been no regulation of the collection of data and the use of that data, which increasingly is being utilized by AI. At the same time, for the last 20, 30, 40 years, we haven't had strong antitrust. Really the last major antitrust activity was stopping Microsoft from requiring the bundling of Internet Explorer. And by the way, when that came through, it opened up the door for Google and a massive amount of innovation to occur, just like a massive amount of innovation occurred when AT&T was broken up and we had a telecommunications revolution because of that.

But yes, so what happens is that when you have these large tech monopolies, it actually exacerbates the problems that we talked about before as they relate to privacy and AI, and it also causes new problems such as what we're seeing with the press. I believe that you have to address the issues as they relate to behavioral advertising. I'm not saying ban behavioral advertising; I'm saying that you limit it so it's not aimed at children and doesn't rely on sensitive data. And then you also need to address the practices of the big tech companies of preferencing their own music and their own news applications, and also address the fact that a lot of newspapers get their subscriptions through mobile apps, and Apple and Google take 30% of that, which means fewer dollars flow through. So I think you need to address the privacy-related issues, and you also have to address the unfair charging for transactions as well as the application fees and the self-preferencing of big tech's own news apps and music apps, et cetera.

So those need to be addressed as well to release more money to be available for publishers. You also need to seriously look at something like Senator Mike Lee's America Act, where he proposes actually breaking up Google's ad tech business, because Google participates in all parts of the advertising ecosystem, and they're taking like 50%, and that also deprives publishers of money. So I think you first need to do all that stuff, because there's 30% here and 50% there not going to the news organizations. And then, yes, consider the JCPA, or the Canadian Bill C-18, et cetera; then they should also have some sharing right there as well. But if you don't address the preferencing, the 30%, the use of surveillance advertising, et cetera, then these types of bills are really not going to help that much, because huge chunks of money are not making it to the publishers right out of the gate.

Justin Hendrix:

In general, you conclude by sort of suggesting that the position of big tech is unassailable, that despite all the reforms perhaps on the table or suggested, which you chronicle in the book, the position of these companies seems set to only improve over the next few years: their profits, their scale, their control over so many aspects of the digital ecosystem. And certainly who knows what advantage artificial intelligence will bring to these large firms. What gives you hope at the moment? I mean, you point in the book to perhaps a generational shift?

Tom Kemp:

Yeah. So look, the book itself, I did want to write it so you could hand it to your Uncle Larry or a friend who may not be up-to-date, and so they can kind of get it. But I also didn't want to be a complete Debbie Downer and just say, "Oh God, everything sucks." I wanted to actually provide solutions. And I am hopeful, because in the past we did break up the robber barons and the railroads, we did break up Standard Oil, we did break up and better regulate telecommunications, and we had a telecommunications revolution. No one's using Internet Explorer today, but it was the antitrust action by the DOJ, requiring Microsoft not to force the use and embedding of it, that allowed companies like Google, et cetera, to proliferate.

And so, I do feel that there is better awareness out there. I do see a lot of positive activity, and maybe some of these bills get through. Last year, as it relates to data brokers, there was a federal bill limited just to judges, coming out of that incident in New Jersey where the child of a federal judge was killed. So, okay, if we can provide protections to judges, and we're also maybe considering banning the sale of sensitive data to governments, then why not take the next step? I'm hoping my book kind of pushes... I don't think my book's revolutionary. I think there are some interesting things that I've discovered, and I connect the dots and try to make it understandable to the lay person. But I'm also hoping that this gives further impetus to make the changes, that people can literally hand someone the chapter on what should be in a privacy bill and say, "Hey, why don't we have these things? Why don't we have the right to know? Why don't we have global opt-out signals, et cetera?" and just educate people and get them to push this further.

So my goal was to kind of push the ball forward and nudge things along in a small way while at the same time I'm putting my money where my mouth is and trying to contribute to the bills, at least here in California being written.

Justin Hendrix:

Well, I would agree that this is readable. And for anybody who wants a tour of the various legislative solutions that are at least on the table for a lot of the harms that you address here, it's a very useful resource. So I would commend it to Tech Policy Press readers: Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy by Tom Kemp. Tom, thank you so much.

Tom Kemp:

Thank you for having me on.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
Rebecca Rand
Rebecca Rand is a journalist and audio producer. She received her Master's degree from CUNY's Craig Newmark Graduate School of Journalism in June 2024. In the summer of 2023, she was an audio and reporting intern at Tech Policy Press.
