Roger McNamee’s official bio at Elevation Partners, an investment firm, says that since 2017 he has been involved in “a campaign to trigger a national conversation about the dark side of social media.” He is the author of Zucked: Waking Up to the Facebook Catastrophe, published by HarperCollins in 2019, an account of his role as an investor in and advisor to the company, and his ultimate recognition of the harms it causes at scale. To those in the tech policy world, his voice on these issues is well known: he is a frequent commentator on technology on shows such as CNBC’s “Squawk Alley” and Ali Velshi’s show on MSNBC, and he is a regular at conferences that delve into topics at the intersection of tech and democracy.
Roger knows tech bro culture well enough to have served as a technical advisor for multiple seasons of HBO’s “Silicon Valley,” a show that lampoons the people and the industry there. He is also a philanthropist and a musician: he plays bass and guitar in the bands Moonalice and Doobie Decibel System. If you follow him on Twitter, you will get to hear his frequent livestreams, sometimes solo and sometimes with the band.
I caught up with Roger last week on the Tech Policy Press podcast to get his sense of where the movement to regulate and hold technology firms to account is at the moment, particularly in the U.S. You can listen below, or read the edited transcript.
I’m Roger McNamee. I’m an American citizen who’s really concerned about the future of democracy, public health, privacy, and competition in the era of internet platforms.
Thank you, Roger. I want to use this conversation to look back a bit at the last few years and where we’ve got to with regard to a movement to regulate big tech, and to think about the intersection of technology and democracy: where we’re at today, and where we’re headed in the future. But just for the sake of my listeners, for the two or three who may not know who you are, can you give folks a bit of context on your personal journey? Why is Roger McNamee involved in these issues?
I first decided I needed to go to Silicon Valley in 1978, when I was going back to college. My brother gave me a Speak & Spell, which was a toy to teach kids how to spell words. He said, “If you can make this thing talk with a two-line text display and a keyboard, pretty soon you’re going to be able to make a handheld product that holds all your personal information.” This was in 1978, one year after the Apple II first shipped, three years before the IBM PC. My brother was describing the Palm Pilot, which came out roughly 18 years later.
I went back to college determined to be the guy who invented the Palm Pilot, but I was a terrible engineer. That didn’t work out. In 1982, I got a job at a mutual fund company called T. Rowe Price Associates in Baltimore, Maryland as a technology analyst. Keep in mind, this was an era in which the dominant form of technology was aerospace and defense.
I was the defense analyst, but also did software. By 1985, the personal computer industry was a real deal. I convinced the firm to change its strategy in tech, which worked out well. When they started a technology fund, I became the portfolio manager in 1988. I was lucky enough to have a great record. In 1991, I got invited to join Kleiner Perkins and create the first crossover fund. It was called Integral Capital Partners.
That means I was there when Netscape started, when Larry and Sergey showed up with the original idea for Google, when Jeff Bezos brought Amazon. By just pure dumb luck, I was in exactly the right place as the internet took off. In 2006, a senior exec at Facebook contacted me and said, “My boss is facing an existential crisis. He needs to talk to somebody who’s been around a long time who can keep their mouth shut, and can give him good perspective.”
Mark Zuckerberg came into my office. He had just turned 22. The company had nine million users. It was still only high school students and college students. They didn’t even have News Feed yet. Mark came into my office, and before he said anything, I said, “Look, in order for you to understand where I’m coming from, I need to tell you what worries me.” I told him in two minutes my big fear, which was that his investors would try to sell the company out from under him.
I told him that because he was the first person to require authenticated identity and to give people control of their privacy, Facebook would be the first successful social media platform. It could be a really good thing for humanity. I told him that either Microsoft or Yahoo would offer a billion dollars for Facebook, and I hoped he would not sell it.
Long story short, it turned out the reason Mark was coming to see me was that Yahoo had offered a billion dollars. He didn’t want to sell it. I helped him craft a way to not sell the company. That led to my being an advisor for the ensuing three years, because his management team all wanted to sell the company. He needed to rebuild the management team with people who would be good allies.
One of the people I suggested and then helped to negotiate in was a woman at Google named Sheryl Sandberg. My engagement with Facebook began in 2006. About a year later, I got a chance to become an investor. I helped to bring Sheryl in. I really believed that privacy and authenticated identity would allow Facebook to be a really great firm and they might get up to, who knows, maybe 100 million users in North America and Europe! They could do all that without harming anybody.
By 2009, Mark’s vision was much, much bigger than that. He was talking about a billion users. He was talking about going into places and doing business under circumstances that, frankly, really bothered me. It was pretty obvious he no longer needed the advice I was giving him. He had rebuilt his management team. After I stopped advising Mark I stopped paying close attention. The flaw in my reasoning after that was confirmation bias.
I wanted to believe that as Mark grew older he would mature, and that he and Sheryl would guide the company in a way that would produce great outcomes. Things would happen between 2009 and 2016 that didn’t fit that pattern. In January 2016, it really hit me between the eyes. That was when I started to see hate speech on Facebook targeting Hillary Clinton that was notionally coming from Facebook groups associated with the Bernie Sanders campaign.
Then two months later, there was an episode related to Black Lives Matter where a company was using Facebook’s ad tools to scrape and then sell data about people interested in Black Lives Matter to police departments, which then harassed those people. The pièce de résistance for me was the Brexit referendum in the UK.
In October 2016, I went to Mark and Sheryl with my concerns, in written form, and tried to persuade them before the US election that it was really important to recognize that Facebook’s platform was being used by bad actors to harm innocent people. Knowing their style, I talked to them privately for months. But they didn’t budge at all.
I realized I had to become an activist. So I did. Since early 2017, I’ve been trying to raise the alarm. I joined forces initially with Tristan Harris, and went to Washington, DC. Soon thereafter, Tristan formed the Center for Humane Technology. Renee DiResta, who had joined us in the fall of 2017, later joined the Stanford Internet Observatory. I remained focused on DC. But the three of us, plus Sandy Parakilas, spent the second half of 2017 in Washington trying to raise the alarm. This was the first year of Trump.
Congress was just becoming aware, but people did listen. We built a lot of great relationships, and I’ve worked with government officials in D.C. ever since.
2017, of course, the beginning of not only the Trump years, but really scandal after scandal for Facebook, in particular, and for social media more generally. I don’t know if you want to quickly sum up that period so we can set the stage for today?
As an activist it was hard to get people to recognize that the issues were not an accident. They were not a byproduct of well-intentioned strategy. They were actually the predictable result of a business model built around human attention. If you want to get people’s attention in a very crowded media marketplace, the surest way to do it is to either scare them, or outrage them.
There are three kinds of content that do that for most people: hate speech, disinformation, and conspiracy theories. The reason they work so well is that our self-preservation instinct, fight or flight, kicks in. Even if we’re not drawn to that content, we have to pay attention to it just as a matter of self-preservation. When you drive by an accident, you look at the accident. You can’t help but rubberneck, because your most basic human psychology is drawn there.
To build a business model around that concept, without regard for the certainty that it would lead to bad outcomes was irresponsible. To do it at nation-scale with billions of active monthly users was guaranteed to produce a terrible outcome. In sitting down with members of Congress, I discovered a hurdle. Like me, they liked the people at internet platforms. The hardest part was to get them to recognize that the folks who run Google and Facebook are not bad people, but they have a different value system.
They really believe in engineering concepts like efficiency. They believe in scale, in speed. Our country is based on enlightenment values like democracy and self-determination, which are inefficient by design, because they have deliberation built into them. In a competition at nation-scale between efficiency and democracy, democracy doesn’t stand a prayer. That’s really what the conflict was about. The ethnic cleansing in Myanmar, in 2017, should have stopped the whole thing.
I can understand that Facebook employees don’t buy into the issues in the US election or Brexit. Okay. Fine. Let’s look at Myanmar. There, you got ethnic cleansing. Then, early 2018, we had the Cambridge Analytica scandal. It was from 2016, but now we’ve got data. It’s really obvious. Then you have the terrorist act in New Zealand. It’s like, “Wait a minute. That entire thing was orchestrated to exploit internet platforms.”
Then there were mass killings, where the people were radicalized on Facebook or on other internet platforms. Each one would generate a little bit of a press focus, but always lacking context. Employees would respond, “Well, this was an unfortunate thing.” The problem is that’s how the company’s positioned. They view their mission as so important that a failure is … Well, it’s a step forward in any form of invention.
Edison used to talk about the fact that when something didn’t work, it was not a failure. It was just an experiment that didn’t work, on the path to something better. Inside Facebook in particular, they train their employees to view issues like Myanmar and Christchurch as simply learning experiences on the way to a more perfect Facebook.
We saw a little bit of that this week. Adam Mosseri, who runs Instagram, made some comments just yesterday that hit that theme. He made the statement in a conversation with Ryan Mac that flowed out of an announcement Instagram had made about the way it deals with racism on the site. He said, “Technology isn’t good or bad, it just is,” which immediately sparked blowback from folks in my corner of the world who maybe look at things slightly more like you do: there are layers of not only technology, but social circumstances, profit motives and incentives, and various other things on top of the technology that aren’t quite so simple.
Well, Mr. Mosseri is saying this as though technology is inevitable, and that there’s nothing we can do about it. That’s ridiculous. Technology is a choice. In my book I have a quote from Melvin Kranzberg, who said, “Technology is neither good nor bad; nor is it neutral.” I think this is the core point. Technology embodies the values of the people who create it, with an additional factor for incompetence.
You’re going to get things that are unintended. What’s going on here is Mr. Mosseri is trying to use the claim that it’s neutral as an excuse. The answer is, “I’m sorry, no. There was an ethnic cleansing in Myanmar that happened because you failed to have enough Burmese-speaking people monitoring the situation. You did not have people on the ground. Yet you created the default communication system in this country that was then used by the military to instigate an ethnic cleansing. You are an accessory. You’re not innocent here.”
When this happens over and over again, you have to start asking the question, “What should be the legal remedy here?” Because this is not inevitable. These are choices made in pursuit of profit by people who should know better.
I think it’s fair to say you summed up a lot of your critique, and put an underline under your activism, with your 2019 book, Zucked, which I’m sure many of my listeners have read or are at least familiar with, or have certainly seen you expand on its themes. On some level, I feel like there are certain parts of the book, Congress gets serious, et cetera, where you seem to think that change was more imminent than it turned out to be, looking back a couple of years later. We’re still maybe slightly more stuck than you might have imagined we’d be. Where do you think we are at this moment, reasonably well into the Biden administration at this point, and the 117th Congress? Not a lot of progress, really, since you published the book in February of 2019.
You’re correct that I was more optimistic about Washington than I should have been. The thing that made me optimistic was that the Trump administration, for all of its faults, was the first in many presidencies to take antitrust seriously. In the House of Representatives, there was a really interesting core group of members, some who focused on safety, some on privacy, and some on antitrust.
In working with these people, I developed enthusiasm that turned out to be misplaced, because the structural flaws in Congress were more extreme than I realized. We see this playing out in the response to COVID. I was naive about that. I am more optimistic today because the Biden administration has appointed extraordinary people into the key regulatory positions that surround this problem.
Lina Khan has been appointed to be the head of the Federal Trade Commission. Tim Wu is part of the President’s Economic Council. Rohit Chopra, who is a member of the Federal Trade Commission, is going to run the Consumer Financial Protection Bureau. Now Jonathan Kanter has been appointed, but not yet confirmed, to be the head of the Antitrust Division of the Justice Department.
I mean, these are the four best people that Biden could have appointed to those positions. The administration got it exactly right, and has clearly indicated that tech reform is a priority. That gives me hope.
Now, the challenge is that deliberation is needed to move forward on any issue, including regulation of internet platforms, and Google and Facebook control the core communications media on which that deliberation is going to take place. In retrospect, I was too hopeful about our ability to transcend that. I expected that increased attention in the press, beginning in March 2018 with the Cambridge Analytica scandal, and more energy in the academic community, particularly with respect to issues of safety and privacy, would translate into action.
What I underestimated was the willingness and ability of internet platforms to subvert the democratic process, not just with their platforms, but also with their money. They have been able to exert huge influence by funding academic programs, by funding NGOs, by funding congressional campaigns, by saying the same thing over and over again.
Internet platforms know more about human attention than anybody. They are really good at crafting a message that buys them time. “Oh, we’re so sorry this went wrong. We promise to do better the next time. We’ll study this thing and come back to you with a report in eight months and one day,” knowing that the attention span of the politicians and the journalists is probably 30 days. Maybe they put the report out, maybe they don’t. You have to admire those people. They are really, really, really good at what they do.
Our side is underfunded. We have too few people. The country suffers from a lack of trust in government that’s been exacerbated by COVID. Let’s face it. These platforms have played a huge role in making this pandemic into the disaster that it has become, and making it impossible for our government to develop muscle tone to respond under this really extreme test.
Mosseri almost said these things in his couple of tweets yesterday. Basically, Kevin Roose queried whether it’s even a falsifiable statement, whether social media has made the world slightly better than it was, or worse. But I think one of the things that you’ve pointed out is that there are harms that are obvious, that are right in our faces, that for some reason get lost behind those arguments about the totality of social media’s impact on the world. Criminality on the platforms, for instance, or other phenomena.
Let’s say that you’re comfortable excusing election interference, and let us say that you do not get exercised over an ethnic cleansing in Myanmar because you can’t even find it on a map. Let us say that you look at what happened in Christchurch as a fluke. That is where Facebook’s employees have been. But how do you excuse the level of criminality that takes place every day on these platforms?
Here, I’m talking about trade in antiquities, trade in exotic animals that are protected by global conventions, the promotion and sale of illegal drugs, medical scams, financial scams. These things are pervasive on these platforms. They are all to one degree or another illegal.
But isn’t Mark Zuckerberg’s argument something along the lines of we’re doing our best. We’re investing a lot. You’ve got to kind of look away from the prevalence of these things while we work out our AI?
I’m sorry. I’m sorry. We do not excuse the crimes of people with dark skin complexion on that basis. No, it doesn’t work that way. Let’s say you’re willing to excuse all the illegal activity that takes place on the platforms. You still have to look at COVID and what happened with it.
In May of 2019, the Federal Bureau of Investigation declared that QAnon is a dangerous extremist group. Facebook completely ignored this. Meanwhile, Facebook groups associated with QAnon are growing like crazy. They do nothing about it until June of 2020. More than a year has passed. Now what’s happened in between? When queried in June of 2020 by NBC, Facebook admitted that there were at least 3 million users on Facebook Groups and pages devoted to QAnon.
The prior year, Facebook had released a report with an analysis of Facebook Groups indicating that 64% of the time when a person joined an extremist Group on Facebook, they did so explicitly because of a Facebook recommendation. Point six-four times three million means Facebook radicalized approximately two million people into QAnon. They did most of that during the window after the FBI warned everyone about QAnon.
Now, it’s June of 2020. They make some hand-waving moves. They close off a few pages. They do a bunch of things. But those people are radicalized. Again, the fundamental thing here is they simply repot to a different part of the platform. Coming up to the election, Trump starts the Stop the Steal Movement. Who does he appeal to? He appeals to the QAnon gang.
QAnon was literally designed like a video game, and it embraced and absorbed every other conspiracy theory. It absorbed Pizzagate. It absorbed the anti-vax groups. Facebook became a core part of Trump’s community. Stop the Steal naturally got hosted there. What did Facebook do? It knowingly allowed Stop the Steal to organize what became the insurrection on January 6th. There is documented proof of this. How is that not a crime?
We’ll see if it’s considered by the select committee in any more detail. I certainly hope that it is.
I’m sorry. I mean, Congress can do what it likes. I’m talking about something that the Justice Department needs to be on top of. I mean, these are crimes. The fact that these are rich people is beside the point.
Let’s take another example of a felony. One of the six or seven antitrust cases against Google and Facebook was filed by the Attorney General of the state of Texas and other states. It relates to price fixing in the digital advertising market. It targets Google with Facebook as the co-conspirator. The evidence on this is really powerful. The basic notion was Google was monopolizing a thing called header bidding advertising. It was a core part of the digital advertising market. Facebook pretended to create a competitor explicitly so that Google would divide the market with it.
Apparently, there’s an email trail that even suggests an attempt by Facebook to initiate a shared defense if they get caught. Price fixing is a felony at the federal level. It’s a felony that also applies to the executives. The CEO of Bumble Bee Tuna was sentenced to federal prison in October. The standard remedy is three-plus years in prison for each count for the executives of the affected companies.
The Justice Department has a chance to take over this case and prosecute it. The question is, “Do we have the will to pursue people for criminal violations, which are all over the place around here?” We are always having these conversations using frames suggested by Google and Facebook, as opposed to having them in the frame of the Constitution of the United States.
I think that’s a mistake. Again, I’m just one voice. I don’t make the rules. We’ll see what happens.
You mentioned Wu, Kanter, Khan, the new faces in the Biden administration on antitrust. What are you hoping will happen in the near term, either on the regulatory front or in Congress? Now, you’re also excited about some legislation proposed by Representatives Eshoo and Malinowski, for instance. Are there things that you think might happen in the next six months to a year that may change the game a bit? Do you think the window is closing on some level?
I believe that we should have a sense of urgency. I think I’m now very realistic about the challenge of getting laws passed by Congress; it takes time to implement changes in legislation and behavior. What we need to do is use the tools available to us. The only tools available in the short run are antitrust laws. If you want to protect the 2022 election, which I think should be our first consideration here, we have to use the tools available.
Antitrust law has been emasculated over 40 years. But there are things like the Texas case that clearly can be brought under existing law and which have real teeth. I’m not looking to put anybody in prison. What I’m looking to do is have the government use every tool in its power to change the nature of the power relationship between internet platforms, and the United States of America.
I do believe that a felony indictment of executives is one of the things that might give the government leverage… Appealing to the moral fiber of platforms has not worked at all. It’s not just that they haven’t been cooperative. They have been disingenuous at every opportunity. I think that’s sad. Because I sit down and go, “Gosh, where do you plan to live? What kind of country do you want to live in?”
The thought experiment I want to ask everybody to run right now is this: if you knew that you were going to live in a world where you had no right of appeal, where these companies are totally in control of everything, would you wish that you’d done something different today to prevent that from happening? Wouldn’t it make more sense to live in a world where people who have new ideas can actually try them without fear of predation from internet giants?
Wouldn’t it be nice to live in a world where if you have a restaurant, some guy who is funded by Silicon Valley can’t disintermediate you from your customer and take 30% of your revenue for doing something you’re already doing, which is delivering food? We’re allowing a very small number of people to control aspects of our lives that historically were ours to choose.
I think we need to recognize that a lot of these products are unsafe.
I was going to ask you about that actually. That might be another, I don’t know, angle that seems to be opening up. Maybe even on a bipartisan level, you’ve got Republicans and Democrats who are raising safety and mental health issues, particularly around children.
Yeah. Let’s look at that. I usually start with artificial intelligence, which is a term applied to machine learning and a bunch of other things. AI grossly overstates the capability of the underlying products, even when they work well. It also elides a core issue. When you train a piece of software with data from the real world, where bias exists, and don’t make a special effort, then the system you create is going to carry that bias forward. It will be even harder to fight back against bias from a black box than it was in the real world.
We’ve seen this with predictive policing. We’ve seen this with resume reviews, which discriminate against people of color, against women, and against older people. We’ve seen this in mortgage applications, where AI enables digital redlining. I think we must require safety from AI. Same with facial recognition. Misidentification has put people in prison. These are not outliers. This stuff happens with incredible frequency. It’s easy to predict.
Now you’re seeing school systems putting in these facial recognition-based proctoring systems. They just don’t work. They have massive bias against people of color. It’s insane. We’re basically surrendering our autonomy to surveillance. That is an unsafe thing to do.
We have to stop trusting that each new generation of technology will somehow be the one that’s safe. Because we’ve gone from a world where every technology was more or less safe – up through about 2000 – to a world where more or less every new technology is either predatory or dangerous today.
I think that’s fixable, but you need laws that change incentives. We need to tell engineers that they have a responsibility to anticipate and mitigate harm before shipping a product. If there is harm, they should be legally and financially responsible for it. I don’t know whether accountability should be just at the company level or whether it should go all the way down to engineer level. Perhaps we need to have certification of engineers, as well as lots of training in this stuff.
Safety really matters. You also have to look at personal autonomy. The code word for that is privacy. In their head, people think that the issue is Facebook wants data to target them with ads. If that’s all they were doing, that would be no different than other media. But that’s not what Facebook is doing at all. Facebook has converted all human experience into data that allows them to predict human behavior in order to sell those predictions for advertising and apply them to recommendation engines that manipulate behavior.
This problem becomes clear in the context of QAnon, and Stop the Steal, and the insurrection. For the police officers at the Capitol, the problem with Facebook was not that their data was taken. It’s that they were attacked by people who were manipulated into believing that they were patriots for attacking. They killed and maimed a bunch of police officers. Those terrorists were manipulated.
If you’re one of those police officers, your whole life was turned upside down by that manipulation. This is not an issue that’s going to go away by itself. These companies are not going to fix this. I mean, they’ve had plenty of incentive. They have chosen not to do it. I think we need to stop giving them the benefit of the doubt.
We’ve now covered questions around antitrust. We’ve talked about safety. We’ve talked about privacy. We’ve talked about the trajectory we may be on in the United States with regard to legislation. Let me ask you maybe a bigger, step-back question on some level. You’ve now been part of this movement at the intersection of tech and democracy for the last five years in earnest. It’s changed. It’s grown. It’s morphed. There are a lot of new voices, different voices, that are involved now. What can you say about it, just from your vantage of what you’ve observed? You move in a lot of circles: journalists, academics, activists, and also, I assume, concerned people in industry. What does this movement look like right now? Where is it headed?
Well, when I got involved in it, there were a lot of academics who’ve been studying this problem for a long time. My hero is Shoshana Zuboff, who wrote The Age of Surveillance Capitalism. It came out in the UK in 2018, in the US in 2019. Shoshana does for surveillance capitalism what Adam Smith did for capitalism in 1776, defining and naming all of the working elements of the system that is literally transforming our lives without our being aware of it.
Shoshana is one of many people I look up to. Safiya Noble at UCLA was a pioneer on the issues of algorithmic bias. Danielle Citron and Mary Anne Franks have done the same for privacy law. I’m leaving out a lot of great names. The really telling thing is that 80% of the people doing the heavy lifting in this are women. Many of them are women of color. It’s no surprise. They’ve been the harmed parties. They’re really focused on it.
When I showed up, there was little connection between the people doing the great work and the folks who could do something about it, namely the folks in Washington. For whatever reason, the academic work just wasn’t getting out of academic environments. What I perceived was that there was an opportunity for me, somebody who had worked in Washington, somebody who had an undeserved advantage as a white male that I could use for good, and so I chose to do that. In the beginning, it was just about trying to get press people to cover the story. Then Cambridge Analytica happened. All of a sudden, it went from Tristan, Renee, Sandy, and me trying to feed stories to the press to a madhouse, with amazing journalism in Europe and the US. All of a sudden, we had to read the paper every day, because the press guys were way out in front. It was really cool.
When that happened, a beautiful thing took place, because the academics who had been ignored suddenly became sources. They were brought into the sunlight. A bunch of them wrote really great books. There’s an unbelievable book on privacy that everyone should read called Privacy is Power by Carissa Veliz, who is a professor at Oxford. It is truly extraordinary. It’s a short book that tells you everything you need to know.
If you can, please read The Age of Surveillance Capitalism. Read Siva Vaidhyanathan’s book, Antisocial Media. These are really important books. If you want to really get into nitty-gritty, read Mindf*ck by Chris Wylie. Those amazing books really give you context. There has also been a lot of great journalism which covers the interpersonal stuff, but getting the context right is the critical thing.
When I talk to people in Washington today, everyone knows the basics of what I’m talking about. They may not understand the details, but there’s a core group of people in the right positions who really get it. David Cicilline, chair of the House Antitrust Subcommittee, Jan Schakowsky, chair of the Consumer Protection Subcommittee, and Frank Pallone, who runs Energy and Commerce, all understand what’s going on.
Representatives Anna Eshoo and Tom Malinowski are the experts on algorithmic amplification, which is the core element of the business model that’s causing so much harm. Speaker Pelosi has been up to speed for several years. She has been a victim of some bad stuff.
Inside the Biden administration, we’ve got the right people in key positions. In the Senate, you have Senators Warren, Warner, Blumenthal, Klobuchar and others who are deeply involved in all this stuff.
Right now, the coordination among all these people is less than I would like to see. The public pressure for change is much less than I’d like to see. We’re all still stuck on the fact that most people like Instagram, YouTube, and Facebook, despite all the harm they do. They do not realize that they can live without them, or that alternatives that do not do harm are possible. I tell people, “I’ve been living a life without Google for almost four years now.” It’s awkward, as you discovered when we began this podcast and I couldn’t open the Google doc you sent me. But life without these products is not complicated. Alternatives that are safe are easily made.
You can’t get at alternatives today, because the platforms can choke off anything they don’t like. I believe that either in Europe or the United States, antitrust laws will eventually wear these guys down. They’ll slow them down enough to create an opportunity for alternatives.
My goal is bigger. I carry an iPhone. I want the server for every application I use to reside on my phone, not in the cloud. I do not want cloud. Cloud is a huge problem for privacy, security, and autonomy. It’s a national security problem.
The national security focus has changed from a focus on the Middle East and Russia to worrying about China. Right now, the country is hopelessly dependent on China. Most of our semiconductors come from China or places that are at risk of a Chinese takeover. We have way too many essential products and systems that are dependent on supply from China.
All this cloud stuff is immensely vulnerable, as we saw with SolarWinds, as we saw with the Colonial Pipeline, as we see with all of this ransomware. We have to make a decision. Are we serious about security or not? Do we want to be an independent country? Do we want to have democracy? Because all of these things are interlinked in the cloud and web architectures. The business model that Google, Facebook, and Amazon espouse is at the heart of it.
What’s going on? Google, Facebook, and Amazon are selling their vision into education, into healthcare, into the military. It’s insane. We need to just stop it all. We need a timeout. We would be better off without it. There are better alternatives that cannot emerge in this monopolized environment. We need to follow the guidance of Nicole Perlroth in her incredible book, This Is How They Tell Me the World Ends. She makes the point that our intelligence agencies are focused on playing offense. When they discover a vulnerability in a piece of software, they don’t tell anybody.
What they fail to recognize is that there’s an asymmetry. A disproportionate percentage of the systems that are vulnerable to those exploits are inside the United States. Our intelligence agencies need to prioritize defensive cyber instead of offensive cyber for long enough to rebuild the institutions of our country.
Our side is still undermanned. Fortunately, we have wonderful people like yourself on our side.
I think about what Just Security and Tech Policy Press are doing. They didn’t exist a few years ago. They make us smarter. That really matters. I’m just one person and every day I do my best. There are way better people out there doing this than I. But we need thousands more people doing this. If everybody listening to this podcast called their member of Congress right now and said, “You must pass the Eshoo-Malinowski algorithmic amplification bill. You must pass Cicilline’s antitrust bills,” we would be much better off.
The problem is that people are busy. I understand that. I’m really sympathetic with it. I’m an old guy who had some free time. I chose to apply my time here. My great hope is that everybody will take one day, get really pissed off, and make their voices heard. Maybe a miracle will occur.
What’s next for Roger McNamee?
People like me are individually not important. What matters is that there be voices. I look around and there are so many great voices today that weren’t out there a few years ago. That gives me great hope. I’m really, really happy about how President Biden has taken up this cause. It’s going to take all of the administration’s strength to make any progress at all.
But ask yourself, “What has to happen to make progress?” We’re on that path. We have a long way to go, but we’re on the right path. That’s really important.
Sounds like some of that optimism that was there in 1978 is still with you?
Oh, for sure. This country has faced some really hard times. We went through a civil war. We went through the Depression. It’s not crazy to imagine the country coming together to fix this. But I will say that COVID has exposed how deep the hole is that we have dug for ourselves. People are going to school board meetings to try to ensure that children die of a disease unnecessarily. That is insane. When I was a little kid, the country came together to fight polio.
If these people had been around then, one in five kids would be in a wheelchair. These anti-vax people think of themselves as patriots. That level of delusion is tragic. Most of them are not bad people. They have been manipulated. That’s why the January 6th commission is so important. That’s why the Department of Justice pursuing everyone who was involved is so important.
These people, for their own personal reasons, have chosen to undermine the safety of the rest of us. They’ve screamed fire in a crowded theater. That has never been protected by the First Amendment.
If you listen to internet platforms, they want you to believe that the responsibility for all this is on the users. That is utter nonsense. The harms have happened because it was profitable to create systems based on fear and outrage. This was done to people by platforms like Facebook and YouTube.
We have lived in a world for 40 years where we have trusted businesses to make all decisions related to our economy. It was based on an economic theory, espoused by Robert Bork and others, that proved to be deeply flawed. The effect of it was to allow a very small number of people to profit disproportionately, and to impose their will and their views on everyone else.
It’s allowed corporations in every sector of our economy to gain levels of economic power that make them the equivalent of governments. We haven’t talked yet in this conversation about climate change. But the reason that climate change is a seemingly intractable problem is because energy companies have been able to control the conversation through all media, but especially through internet platforms, in a way that prevents us from doing the obvious, which is to rebuild our economy on renewable sources of energy.
It’s possible. It would be economically extraordinary. It would be the best full employment plan you could come up with, creating amazingly good jobs. What are the arguments against it? I live in California, where parts of our state are on fire, again. For the third or fourth consecutive year, we’re having unprecedented levels of wildfire.
It’s no longer unprecedented. It’s now annual. Some states have had flooding. Texas had ice storms! This is not a coincidence. These things are the result of human action. We have a path out of it. The path out of it for the United States of America will be the greatest economic opportunity of our lifetimes. If we continue to allow internet platforms to control our democratic conversation, and if we continue to allow them to give a political advantage to the people who spread fear and outrage, we’re doomed.
It’s totally unnecessary. It’s on us. Let’s get this right. I’m a capitalist, but I don’t think we have capitalism today. Today we live in a world of monopoly. It’s nuts. The notion that all this stuff has become politicized, that was a choice. We can undo that choice.
Roger, I thank you for talking to me about these issues today.
It’s entirely my pleasure. Justin, keep up the good work, man.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.