The Problem with the "Big" in Big Tech

Justin Hendrix, Rebecca Rand / Sep 17, 2023

Audio of this conversation is available via your favorite podcast service.

Today’s episode features two segments, both of which consider the scale of technology platforms and their power over markets and people.

In the first, Rebecca Rand brings us a conversation with University of Technology Sydney researcher Dr. Luis Lozano-Paredes about a community of drivers in Colombia who have hacked together a way to preserve their power alongside the adoption of ride-sharing apps.

And in the second, Justin Hendrix speaks with Columbia University Law School Professor of Law, Science and Technology Tim Wu, who recently spent two years on the National Economic Council in the White House as Special Assistant to the President for Competition and Technology.

The discussion touches on privacy legislation, ideas about competition and scale, and Wu's observations on the landmark antitrust trial between the Justice Department and Google, which wrapped up its first week of testimony on Friday. The conversation took place at the All Tech is Human Responsible Tech Summit, hosted with the Consulate General of Canada in New York, on September 14th.

A transcript of the full episode is forthcoming; what follows is a lightly edited transcript of the discussion with Professor Wu.

Justin Hendrix:

Good afternoon. How is everyone doing? We're having a good time. The All Tech is Human group puts on a great set of events; they've been very busy this week. David and the Canadian Consulate, I'm very grateful to you again for allowing me to be part of this. Tim Wu, I'm very privileged, very proud to be sitting on the stage next to you and to have the opportunity to ask you a bunch of questions. Tim Wu is now back at Columbia, where he has taught with some interruption since 2006, but he is most recently known, of course, as the architect of the Biden Administration's competition and antitrust policy. So this man spent a couple of years in the White House, close to the seat of power, making a bunch of decisions about some of the matters we've been talking about today. Since there has been a good amount of conversation, and you were here for, I think, a good bit of it, and we have that context in the room with us, I can ask you: why the hell hasn't Washington, D.C. fixed any of these problems?

Tim Wu:

This is a pretty cheery crowd for such a depressing topic, I've got to say that much. I also want to say a special thanks to the Canadians. Here is my secret: despite all this White House stuff, I am a Canadian dual citizen, and I'm very glad Canada's sponsoring this important event. So how did we fail? That's basically the question. On one of the things I worked on, antitrust and competition policy, I think we did a good job in the Biden Administration, and we have kicked things up. As you may know, we're trying Google at this exact moment, there's a case going on against Facebook, and another case against Google.

Maybe they'll get a case against Amazon. So there is a determined effort to try to rebalance the power that has aggregated in big tech, and I think that's important. Even if you work for big tech, you might have to admit you maybe have a little too much power compared to everybody else. That part, I think, is going well. What has not gone very well, I am hesitant to admit but have to admit, is the goal of getting at some of the core sources of power surrounding data and data collection, which are often referred to as the privacy debate and the child protection debate.

Justin Hendrix:

I will actually ask you about those things. I know that in the last part of your time in the White House you were, I suppose, dispatched to work on the push around privacy legislation, the American Data Privacy and Protection Act, and the child safety legislation that's been proposed in Congress.

Tim Wu:

Yeah.

Justin Hendrix:

Again, nothing's moved ahead. Some folks, perhaps in this room, would say that's good news on the child safety bit of it, but almost everyone, I assume, if we had a show of hands or a vote here, would mourn the fact that nothing's happened on privacy. Why not?

Tim Wu:

Yeah, I think we often said in the White House, this should be pushing on an open door. This seems easy. For one thing, when the American public is polled and asked, "How do you feel about your privacy online?" it's overwhelming: people feel they don't have enough privacy online. Other countries have better laws, although not really good enough; I don't think the GDPR goes anywhere near far enough for what we need. You had industry, which is dissatisfied with the prospect of all kinds of overlapping laws and differences, wanting something. They just wanted something, so they were behind it. Now, not all of industry; Facebook was not behind it, and obviously Google and other companies, but Apple was on our side, the Chamber of Commerce was on our side, the Software Alliance; we had a lot of big backers. You had the Republican Party willing to go along with us, and it all faltered on embarrassingly terrible turfy politics that I find a shame for American democracy.

Justin Hendrix:

Led by Democrats.

Tim Wu:

Unfortunately, I'm a Democrat, some of you may be Republicans; our party failed. I have to just lay it down there. We got into strange turf battles between Senator Cantwell and various other people. It's just boring to talk about. California, which at one point in the history of this nation liked to pioneer things and then pass them on to the whole country, came out very strongly against federal privacy laws for reasons that they thought were defensible, but that to me sound a lot like turf defense, a lot like "we have a new executive agency and we want to do all this stuff." So they took this to Pelosi, and Pelosi said she was thinking about whether to support it. So we had a pretty strong federal privacy bill. We had a lot of support for it.

It took a big step conceptually. I don't want to turn this into a law school lecture, but I think it's incredibly important that we go beyond the model where the people who hold your data have to be careful with it, or delete it, or do things if you ask them to, toward preventing the data from being collected in the first place. I think everyone knows that the notice-and-consent model is broken, and people have known that for years. What you have to do is ban data collection in certain forms, or limit it absolutely to what the thing is for. If someone has a dog walking app, it shouldn't be collecting everything about you and reading all your emails; it should be about dog walking.

You know who else was behind us? The intelligence agencies were behind us on this stuff, because this is a problem: we've created the spy architecture that other countries use to spy on us. So the forces you might think would be against this kind of stuff were on our side, and we still couldn't get it done. It was an indictment of American democracy, and a little bit an indictment of the left, I hate to say, but overall of our system of government. So we should have this; everybody wants it. There should be much less data collected. It should be that which is strictly and absolutely necessary for the functions being performed and nothing else. That is clearly what we need, and the question is making that happen.

Justin Hendrix:

Let me push you a little bit on that question. What in fact do you believe the left did to scuttle privacy legislation?

Tim Wu:

So I already mentioned California, and I think if they were here they would say, "Well, we thought that..." They have their side of this argument. Nobody thinks they're being turfy, but it happens. Some of the prominent senators on the left felt they needed to control the legislation, or it was a tit-for-tat for other kinds of battles, just stuff that's not very interesting. On what should be the easiest form of protection, child privacy protection, we also failed. If you asked the American public, "Do you think children should have stronger privacy protections than they have now?" I think you would get 99.9% support, I don't know, whatever. In a democracy, 60% of the people should get what they want.

If it's not an unconstitutional violation of people's fundamental rights, 99% of the people should be able to get what they want. Children's privacy is in that category. The fact that we can't pass it is an indictment of our current system. I blame the left slightly, because we get weirdly involved in battles on the left. I remember some strange episode, it was almost like a political parody: we had some crazy battle going on between some of the LGBT groups and some of the eating disorder groups, fighting with each other about all this kind of stuff. I know people have interests, I know they feel very strongly, but we are failing to do stuff. If you are on the left, it's partially our fault.

Justin Hendrix:

Yet, there are some odd-

Tim Wu:

Children's protection, the Republicans have not been that bad on that issue. I am ashamed to say we're worse on the issue of children's protection than the right.

Justin Hendrix:

Yet we do have these Trojan horse concerns, right? Senator Marsha Blackburn said some very wretched things lately about how KOSA, the Kids Online Safety Act, could be used (perhaps someone will find the exact quote), but essentially to police children's interaction with transgender issues and ideas. So perhaps some of these concerns are real?

Tim Wu:

Our view was not that view. We obviously didn't agree with Senator Blackburn's statements. It's very difficult to police what Republicans are going to say or do, but I think the basic goal of the children's protection law, actually related to the last panel, is that we wanted to force companies to spend more on trust and safety and on children's protection. That's all we wanted to do. The president felt that was very important. We didn't want Facebook to be firing 20,000 workers, or whatever they call that Twitter company to be firing all the staff who take care of this stuff; we wanted them too afraid of liability to do it. That's what we wanted to accomplish. I have friends in trust and safety at companies that I won't disclose. They're like, "The United States is not really putting much pressure on us; they're not really after us. The Europeans get after us."

All you do is occasionally have a hearing; we show up, you yell at us for a couple of hours, then we go home, and okay, we know we don't have to do anything. So they don't really have to spend money on this at all. Forget about these sessions where you yell at them: unless they're facing billions of dollars of liability, they will not hire and keep the people to really make this happen. Companies are for-profit institutions; they respond to financial incentives that look like billions of dollars, not like mosquito bites. We have failed to do this. We have failed to create the incentives to make companies invest seriously in trust and safety, and that's what we wanted to do. We've gotten distracted by a lot of stuff. We've let this pass, and I think it is a disservice to our children.

Justin Hendrix:

So I want to step back and talk about competition, antitrust, and the scale of tech firms. How many people here work for a large technology company, if you're willing to raise your hand, a company that measures its revenues in billions? Okay.

Tim Wu:

Well, I'm trying to get more money for your trust and safety. I'm on your side.

Justin Hendrix:

There you go, and they're probably in those departments. But let me mush a few ideas together and maybe give you a bad question that hopefully you'll give a good answer to. I saw your former White House colleague, Dr. Alondra Nelson, who was deputy director of the Office of Science and Technology Policy, speak recently at the Knight First Amendment Institute at Columbia. She talked about the relationship between technology and the idea of the polycrisis: all these complicated issues that we face in the world, whether it's climate change or interstate conflict or what have you.

She noted that in some ways of depicting the polycrisis, tech is often off to the side: frontier technologies and maybe some aspect of disinformation, misinformation, cybersecurity, et cetera. She was arguing that tech is actually related to all of these larger problems; in some cases it underlies them, and in some cases it is in conversation with and reflexively related to them. So if you accept that premise, I want to ask you about the problem of scale in the current tech ecosystem. How does it relate to the quote, unquote, "polycrisis"? Is it in there? Is it part of that complicated mash of ideas?

Tim Wu:

Well, I'll say first that Alondra Nelson is awesome, and everything she says must be correct. But moving on from that, the problem of scale was the greatest challenge of the 20th century, at some level. I think there was a moment in the early 21st century when people thought we had moved on from scale, that life was fundamentally different, that small businesses and small startups were going to rule the world. That hasn't happened, and we're back with scale. Scale is almost a wonder drug; it makes possible many things that seem magical. But I'm also reminded of the saying that where something has gone wrong, something is probably too big.

So I think that almost all the challenges we face in contemporary life, all these polycrises, at some level, not uniformly, but many of them are linked to the problem of scale and things getting beyond human size, beyond our easy capacity to deal with them, whether that's populations, systems, schools, airports, tech companies, you name it. In my heart, child protection and privacy are my side gig. At core, I'm a structural antitrust economic person, and I think that getting a handle on the dangers and possibilities of scale is core to building a future that works for everyone. Now I've said a lot of abstract stuff. What does it mean a little more concretely? As I said, I think in the early 21st century people thought that the advantages of scale had disappeared.

That little startups like Google were beating big companies like Microsoft, that little bloggers were beating existing news organizations, that the advantages of being big had disappeared. That all seems like a bad joke 15 years later, or not a bad joke, but a passing moment. There are still times when people have individual success and so forth, but go try to launch an e-commerce site to compete with Amazon, and odds are you're going to find it a challenging thing. The main reason is that Amazon has extraordinary scale. Try to launch a search engine that competes with Google and their scale of daily usage, and so on. So in all these areas, scale has become this extraordinarily important competitive determinant. I think there are a lot of problems with scale, but mainly, from an economic equality standpoint, scale tends to concentrate wealth. When you look over the economies of the past, over the course of humanity, the most concentrated scale economies, the plantation model, have concentrated much wealth and created dangerously unstable systems. So that's what I'm worried about. I know it was very vague, but you said it was an open-ended question.

Justin Hendrix:

Absolutely. So let's make it a little more closed-ended and talk about the news of the day. Google is now on trial in the government's suit against it, the first of two related to competition and antitrust. I understand you took the train and joined the trial yesterday.

Tim Wu:

I was there. That's true.

Justin Hendrix:

How did it feel in the courtroom?

Tim Wu:

I thought it was fascinating, exhilarating, almost like going to a free TED Talk. Well, actually a mixture of Law & Order and TED somehow.

Justin Hendrix:

You had Hal Varian, the economist for Google, delivering his statements.

Tim Wu:

Well, the first thing they did was put Hal Varian on, and Hal Varian is the chief economist of Google. He has a little bit of a resemblance to Bill Gates, so it had a little bit of a '90s feel to it, and he certainly had the mannerisms. People here are probably too young to remember, but Bill Gates was deposed in the Microsoft trial back in the '90s, and it came off terribly. He kept arguing and being annoying; everyone had liked him, and then he came off as an evil nerd kind of figure. With Hal Varian, they'd ask him questions like, "Does a search on Google give you a broader set of results than a search on Amazon?" He'd say, "Well, I don't know if I can answer that; depends what you mean by the word broad." They'd say, "Well, more sources of information," and he'd say, "Well, really, it's still hard to say, can you be a little more..." It went on and on like that for hours. So I enjoyed that. I don't know why. But then the most-

Justin Hendrix:

You're a lawyer.

Tim Wu:

The art of good lawyering is something, and there's a reason people like legal dramas. But the most interesting part is the government's core case: that despite being a relatively cheery, friendly place, Google has been very aware of human behavioral tendencies, including habit and the effect of defaults on choice-making, and has used them. Its search engine rose to popularity because it was clearly better than what was around, but since then Google has been very thoughtful and careful about subtly maintaining itself and its market dominance, and shutting out potential competitors or upstarts, by using its money to make sure it has the defaults, taking advantage of habit, and doing a lot of subtle things to keep itself there.

They had a professor from Caltech who's a behavioral economist, and that's why I was saying it was like a TED Talk, because he was talking about all these various tendencies, and then proving with Google documents that they were aware of this and trying to manipulate people so that they generally ended up with Google. So that is the core of the interest. Obviously, I worked for the Biden Administration, so I hope the government wins. But even just as an effort to understand what power looks like in the 21st century, the subtler forms: I think 20th century power was much more industrial or military, and now it's more about controlling people's attention, subtly shaping people's decisions, scale in a different way. The trial is very good for trying to understand what economic power looks like in the 21st century.

Justin Hendrix:

So let me ask you: if the DOJ case fails, what's at stake? What does that mean for this project to address 21st century economic power, the network effects, all of the sorts of ideas that you're working on? How does it affect perhaps the other case about Google? We've got another one coming up around market dominance in digital advertising. And how does it affect the case against Facebook?

Tim Wu:

I think there are two things I'd say. First, while the Google trial is about the 2010s and the deals that Google made with Apple, the main indictment being that Google paid Apple off to stay out of search, looking at the past will matter for the future, specifically for the AI markets and the ongoing contest over what commercial AI looks like. Google feels threatened by what's happening in the AI market. They obviously have their own products rushing out, and the question is, in some ways, whether they can use the tactics that they used in the 2010s, the soft economic power we're talking about, for the next contest.

The second thing I'd say is that if the government loses all these cases, we enter an era of long-term monopolization where the big three, four, five tech companies are pretty entrenched, and the government will have effectively blessed the models that these companies use to keep competition at bay. In Facebook's case, it was buying its major competitors like Instagram and WhatsApp. In the case of Google, it's all money-related: paying to be the default, paying to keep Apple out of search, paying Samsung to make sure that it doesn't develop a different search alternative. Having blessed those techniques, I would predict that those companies stay in their seat of power for much longer than is natural or healthy.

Justin Hendrix:

If there is any example of an entity on the planet where, whenever you can point to problems with it, it seems to be because even the humans working there don't really understand how it works or how the system operates, it's Google, in my view. But let me ask you about AI. This was a big week for AI in Washington, D.C. We've had, I think, three AI hearings in the Senate this week. We had the Senate Majority Leader hosting the first of his AI forums, largely populated by men in suits who run large technology companies. Lots of discussion about what to do: should we have a standalone agency, an FDA for artificial intelligence? In your view, what's the best path forward if we do find these firms in this unassailable position? Is there any hope for AI regulation?

Tim Wu:

Well, there's certainly the potential for AI regulation going forward. By the way, the men in suits: I'd call them men who have put on suits for the occasion but feel very uncomfortable in suits and look strange in them. But that's my snarky take. I feel that I am speaking slightly against my former employers here, but I'm concerned with the direction of AI regulation, mainly because I approach these questions as a structural power issue. I'm mainly interested in power and its rebalancing. The fundamental reason I believe in the cases against Facebook, Google, and maybe Amazon is not because I think they're particularly evil companies; I think there have been, and are, much eviler companies in history. I just think any overly concentrated power is dangerous, the longer it stays there the worse it gets, and rebalancing is necessary. A constant rebalancing is necessary for the health of any long-term democracy or economy.

Actually, that's what my next book, which I am trying to write, is about: the constant cycle of rebalancing power in a democracy and in a long-term sustainable economy. So I'm worried that the current round of legislation is shaped by many of the big players, who frankly seem interested in it largely to insulate themselves and make themselves the players. Now, I think they have legitimate concerns too, and I think government doesn't want us to be conquered by evil robots and is concerned that it was behind the ball, and behind it it should be. But there is this question: is this leaving the market to three players? That's what I'm worried about.

Justin Hendrix:

Let's talk, just briefly, we only have a couple of minutes left, about labor issues and artificial intelligence. What are you concerned about at the moment? We've got everything from the Hollywood strike, with its focus on writers and the extent to which creatives may be replaced, through to headline after headline about the way people in global majority countries are being employed to look at the worst stuff and build the classifiers that go into these large language models for $1.50 an hour. How do you think about the labor issues around artificial intelligence broadly?

Tim Wu:

Well, let me start with labor first. I feel very strongly that it's about time labor got its due, and that we have neglected the interests of workers for far too long in this country. For 40 years we took labor as a cost and thought that was fine, when actually it was us. You know what I mean? We focused entirely, especially in management and all these areas, on directing all our efforts to make stuff cheap, with no concern for producers or workers. I think this White House, and now I'm going to sound like a partisan, but I really do believe it, the president wants to change that and has tried to change it. We had unions come in all the time, and we're very focused on trying to make things better for workers. So that's my political message, but I believe it. The AI and labor issue, I think, is complicated.

I really think the history of predicting technology's effect on work is filled with gross errors that are almost embarrassing. Anyone who studies this knows that in the '50s and '60s everybody predicted that, because of more advanced technologies, the workweek would fall to 30, 40, 20 hours. There's a Life Magazine issue from the '60s somewhere called "The Crisis of Leisure": what are we going to do with all this free time? Instead, even though we have these inventions, things that seem impossibly magical, we have telephones, we have email, we have baby monitors, I don't know, you have everything, people work more than ever for less money. Most families have two people working. I'm not against women being in the workforce, but it is striking that at one point in the '60s they thought, "We'll only have to have one member of every family work for 20 hours a week, and that'll be enough for everybody." Now most families have two people working, killing themselves.

Sherry Turkle was talking about children being neglected, and someone asked a great question about what this is. So this is the crisis: we've been wrong over and over about what technology will do, and I think we'll be wrong about AI. My fear about AI is not that we'll have less work, but that we'll all become middle managers. Once something becomes potentially more efficient, suddenly everyone's asked to do more of it. Say you're a lawyer and you're asked to file one complaint a month, and then AI makes it a lot easier to write complaints; suddenly you need to write 20 of them a month. We have this weird tendency when things get more efficient. Email made it more possible to communicate, so we communicate more. In some weird, perverse way, instead of there just being some amount of work we have to get done, humans have a weird capability to invent or create work, or something happens, or we need to be more efficient or do more work.

I don't fully understand the mathematics of it, but I have a weird feeling AI will make us busier. We'll be supervising 100 AIs, and we'll all be middle managers, not doing any of what you might call real work ourselves, actually doing the writing or the creation or the drawing, the things that are rewarding, as opposed to supervising the AIs that are doing it. That is what I fear. I fear for the human condition. This goes back to 19th century Ruskin: what is the quality of the work that we do, and how does that matter to the human experience? I think we forget that sometimes. It's not just the amount or type of work, it's the satisfaction. Marx had a little bit on this, the alienation of labor. You think it's alienating to be on a factory line? What about just being someone who sits there approving the products of others? So there you go.

Justin Hendrix:

I think we are pretty much out of time. Can I ask one more question, David? Okay. I've got three more on my list, and then we'll finish up here. Okay.

Tim Wu:

Three more questions.

Justin Hendrix:

They were: why do we need a robot penal code, which I know is something you've written about? Second, is there a decoupling looming where, to some extent, outside of maybe these three big tech firms, we can't even have the technological imagination for where AI is going, and is the federal government investing enough in public sector and university research to avoid that circumstance? And I suppose the last one, you can choose which of these you want to answer: what should the folks in this room do about any of this?

Tim Wu:

Okay, I'll take the first question because-

Justin Hendrix:

Robot penal code it is.

Tim Wu:

Yeah, robot penal code, because I think our approach to AI regulation right now is wrongheaded in the sense that there are a lot of slightly more abstract concerns about cataclysmic events and things, not all of which are unjustified, but, at least in government, a lack of attention, and I know a lot of people in this room are concerned about this, to more obvious, real, visceral harms like advanced fraud. I know people here are worried about electoral fraud and so forth, and I think in some ways we're not being tough enough in those areas.

In some ways we're over-regulatory with AI, potentially over-regulatory in a more abstract sense when you talk about licensing to use AI, and much under-regulatory when it comes to the clear, concrete harms that are going on. A good example of this is what we're pressuring tech companies to do. I think we in some ways put pressure on the hardest thing, which is misinformation, and not enough pressure on things that are not easy to deal with but are more obviously concrete in their harm, like the sexual abuse of children and other very visceral, real-world harms. So I think we should have a robot penal code. I think it needs to be strong and include the death penalty for robots.

Justin Hendrix:

You heard it here first. Tim Wu, thank you very much.

Tim Wu:

It's been a pleasure.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
Rebecca Rand
Rebecca Rand is a journalist and audio producer. She received her Master's degree from CUNY's Craig Newmark Graduate School of Journalism in June 2024. In the summer of 2023, she was an audio and reporting intern at Tech Policy Press.
