Gus Hurwitz on Technology and the Law
Justin Hendrix / May 3, 2023
Audio of this conversation is available via your favorite podcast service.
Recently I caught up with Gus Hurwitz, a professor of law at the University of Nebraska and the director of the Governance and Technology Center. He’s also the Director of Law and Economics Programs at the International Center for Law and Economics, a Portland-based think tank that focuses on antitrust law and economics policy issues. Gus told me he’s leaving Nebraska at the end of the semester for a new position that is soon to be announced.
Our conversation covered a range of topics: how to think about the relationship between technology and the law, how to get engineers to engage with ethical and legal concepts, the view of the coastal tech policy discourse from Gus’s vantage in the middle of the country, the role and politics of the Federal Trade Commission, and why Gus finds some inspiration in Frank Herbert’s Dune.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
I wanted to start big picture. What do you see as your intellectual project, as your area of research?
Gus Hurwitz:
That's a great question. I think I can answer it a couple of ways. The immediate answer is that it's changing and in flux, which is the opposite of what we drill into academics. Young academics are told: you need to have your methodology and your research agenda, you have to develop your identity around those and have a long-term trajectory so that people can understand who and what you are. When you're getting into academia, that's all incredibly important. You have to have a methodology so you have a unique toolkit, but you should change as your interests change, as you learn more, as you add to your toolkit, as you learn about and embrace new methodologies, and as the problems that you're interested in and that are important to society change.
I'm kind of in flux, and I'll say a little bit more about that in a moment, but I view myself as a law and economics scholar focusing primarily on new institutional transaction cost economics and public choice economics. Those are the lenses through which I tend to look at most problems, and they're not perfect lenses, they're not complete lenses; no lenses are. I'm always trying to think about things differently and understand how other people look at problems so that we can communicate better.
I'd go on to say that historically, and this is what I started the Governance and Technology Center at the University of Nebraska to do, I'm a law professor, and I've believed in the importance of introducing law students and legal scholars to better understandings of technology and incorporating interdisciplinary groups into that discussion. I still believe in that very much, but increasingly my focus is shifting more to the engineering side, where I think there's greater benefit to society to be derived. I actually have a differential equation that I use in some presentations I give on this topic, that I put up on a slide.
The benefits to society from giving engineers an incremental bit of knowledge about how law and politics work significantly outweigh the benefits to society from giving lawyers and law students an incremental bit more knowledge about how technology works. That's kind of my locus right now. I want to teach more people, especially the people who are building the world of today and the world of tomorrow, more about how law, society, policy, and politics work.
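(Editor's note: Hurwitz's slide did not come up on the recording, so the notation here is our own hedged reconstruction of the inequality he describes, with $B$ for social benefit, $k_{\text{law}}$ for legal and political knowledge, and $k_{\text{tech}}$ for technical knowledge:

$$\left.\frac{\partial B}{\partial k_{\text{law}}}\right|_{\text{engineers}} \gg \left.\frac{\partial B}{\partial k_{\text{tech}}}\right|_{\text{lawyers}}$$

That is, the marginal social return on teaching engineers a bit of law and policy exceeds the marginal return on teaching lawyers a bit more technology.)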
Justin Hendrix:
What do you think that engineering students, or engineers, are missing right now about policy and the law? I am also engaged in that project. I work with computer science, engineering, and information science students, teaching at NYU and lately also at Cornell Tech, and I see great benefit in trying to raise some of the bigger ethical, policy, and legal problems that we face.
Gus Hurwitz:
I'd start by saying it's the low-hanging fruit. It isn't the really complex stuff. Every discipline has its 'how we think about the world.' You could go to any grad school in any discipline and ask what the first-year curriculum is like, what they're trying to do, and the faculty will tell you, "Oh, we're trying to train the students to think like a lawyer, to think like a doctor, to think like an economist, to think like an engineer, to think like someone in business." Those are the high-output, high-value, low-hanging-fruit concepts: introducing those ideas and demystifying them.
Lawyers, and law and policy generally, are very frequently viewed antagonistically by lots of people in lots of fields, I think especially on the engineering side, so some demystification helps. Also, there's a mythology about the law, and apologies to a lot of people for what I'm about to say, that it's hard and it's complicated and you need to be smart to be a great lawyer or legal thinker. I actually think that the law is pretty simple. The basics of learning to think like a lawyer, this isn't revolutionary stuff. It's foundational concepts, counterintuitive concepts that you need to be introduced to. In order to do it well, you need to be indoctrinated into it and you need to practice it. It's like any skillset; it takes practice.
When you're trying to, or starting to, see the world differently, it takes a long time before you consistently have those glasses on, or remove other glasses, and see with the eyes of a lawyer. But learning the basics needed to understand what lawyers, what politicians, what the political process is doing, why lawyers think about the world so differently, or why politicians keep making the same stupid decisions, all of that stuff, there's some pretty low-hanging fruit that can really increase your ability to engage with other disciplines and can change how you approach your own worldview. Now, there's something else going on with engineers, and Justin, I'd love to hear your take on this. While I'm talking to folks, I'll usually get folks agreeing, "That makes sense," or, "That's my experience too." But every now and then, folks will push back on this.
Engineers tend to view the world as a built world, systems as things that we design that operate according to some set of instructions. We're not necessarily talking about the natural world here, but about the things that humans imagine they can design, where there's some logic behind them. Our legal system is something that we've designed. We have a written constitution, it's got rules, it's got enforcement mechanisms, we've got statutes, we've got regulations. When you start a job or you buy a house or a car, there's a contract. This is a set of specifications that you've laid out within the four corners of a piece of paper. This should make sense and be logically consistent and coherent.
Lawyers will immediately tell you, "Oh, no. No way. That's not what the law is. That's not how it works." Most people generally think that there is some logic and some rigorous specificity that you can find in statutes. Engineers in particular tend to view the law kind of as code, as outlining: this is how the system should work, it should be logical, it should make sense, I should be able to open the regulation or the statute book and get specifications for what I need to do in order for whatever I'm designing to be good, at least legally speaking, and then we're good.
It turns out, A, the law can change. And, B, there's a lot of ambiguity and flexibility in what things mean, what words mean, how we interpret those words, how those words have been previously interpreted by courts in ways that might not make any sense. There are statutes that still have text on the books but that courts have found to be unconstitutional. We just leave the text dangling around there because everyone knows, read that as every lawyer who follows that area of the law knows, that the court said this text is unenforceable and the legislature just hasn't taken the time to change the language to remove it. Getting engineers to see that little bit is really powerful.
Justin Hendrix:
I think that's right. I think sometimes we talk about technosolutionism as being a part of this phenomenon that you're describing, the thought that we can build something in order to fix something that we built before. Sometimes, that's not the case. Sometimes, it's better to build nothing, or perhaps to undo the thing that we built. That is sometimes a hard thing to communicate in an engineering context.
Gus Hurwitz:
One of the things I've been thinking about a fair bit lately, and this has to do with some courses that I've been thinking about developing, is the absolute uselessness of everyone's favorite problem, the trolley problem. I don't mean to throw the trolley problem under the bus, or under the trolley, because I actually think it's really useful if you use it right. What am I saying here? First, the trolley problem: I'll generally assume most folks know about it. It's this famous thought experiment. You've got a trolley going down the track. It's going to run over five people. You're at the switch. You have the ability to throw the switch, diverting it to another track where it's going to run over one person. Do you save the five people? But in so doing deliberately choose to kill the one person? That's just one version of the trolley problem, and the powerful thing about it is that that iteration alone gives you a whole lot that you can talk about: utilitarianism, morality, making a decision versus letting something happen.
But whatever answers folks think they come up with, you can then change it around and put them in a situation where suddenly, if they apply the answer they thought was the morally right one to the first iteration of the trolley problem under the new scenario, they're going to think that they're a monster. And, wow, that's powerful. But why am I talking about this from a talking-to-engineers perspective? This has become a very popular technology ethics problem to give to engineers. Cueing up the idea, we talk about this in the context of driverless cars all the time. You're designing a driverless car, and you need to design it so that when something happens, a person jumps out in front of the car, does the car decide to steer onto the sidewalk, running over five people, or stay in the road, hitting the one person, or flip that around, or whatever.
We talk about this a lot when talking to engineers, and the reality is it's a great way to introduce normative ethics and questions of normative ethics, but engineers don't make these decisions. These are not realistic examples that folks designing driverless cars, autonomous vehicles, are thinking about. They're not currently thinking about them, and they're probably never going to be thinking in these terms. It's a very different set of calculations and decisions that they're dealing with, but these are questions that we as a society need to answer.
Getting engineers thinking about these really hard questions of values and normative ethics, how should we make these decisions, should we be deontological or utilitarian, all this stuff is a great introduction to the fact that, as a society, we do decide these questions. What are the ways that we decide them? Because as an engineer, you don't get to decide these questions, but you are designing systems that are going to be evaluated by society. What is the political process? What is the purpose of law? How do we as a democratic society aggregate our values and turn those into judgments that are going to decide whether your company is going to be fined $100 million based upon a design decision? The value of things like the trolley problem in talking to engineers about technology ethics early on isn't, in my view, to get them to be more ethical designers; it's to get them to understand the role of the engineer in society as part of this much larger system of expressing and capturing the values of our society.
Justin Hendrix:
One of the people, of course, who likes to invoke or comment on the trolley problem frequently these days is Elon Musk, who has tweeted various versions of it even lately, I believe, and it's often at the core of the discourse around artificial intelligence and the extent to which we can balance long-term gains with short-term harms. How do you think the thinking that you're applying to this particular thought experiment may extend to AI?
Gus Hurwitz:
Oh, that's a fascinating example. First off, hello Elon, and hello future AI overlords. I think you are wonderful. I am a supporter of AI. I am not someone who thinks that we should be afraid of AI. When you take over the world, I am not one of the humans that you're going to need to lock up. Yes, that is [inaudible 00:15:00] right there. I actually think that what Elon and others on the tech side are generally arguing isn't so much about the trolley problem; it's more about the precautionary principle and static versus dynamic equilibrium and how we think about risk taking in society and innovation.
The more interesting invocation of the point I'm making about the trolley problem here is actually, I think, on the legal side. There's a massive discussion in the legal community, in the academic legal community, the policy legal community, and the armchair lawyer legal community on Twitter, about the legality of generative AI: whether it's capable of causing defamation, whether Section 230 applies to it, whether it's a massive copyright violation machine or not. And it makes sense. This is what we lawyers do. We apply old ideas analogically to new settings.
But every time I get drawn into any of these discussions, I'll say with the possible exception of Section 230, I feel like we are putting the cart before the horse. I think there's a very real possibility that generative AI is a truly transformational technology that we don't truly understand at this point in time. Yes, we can figure out how existing law can or should apply to this new technology, but ultimately, it's not up to us lawyers, pouring old wine into these new bottles, to figure out what these technologies are and how we should regulate them or embrace them. These are social questions, these are societal questions, and we as a society need to come to an understanding of: are these good, are these dangerous, are they useful, what are the guardrails that we need? Once we figure that out, then we can have the question about how we use the law to make sure that those guardrails are being respected or not.
As lawyers applying existing law, I think that we are chasing our tails in a pretty non-productive sort of way. It's intellectually fun and curious. Now for the engineers, the engineers designing these technologies: they should be aware that, hey, if we design these in ways that society isn't comfortable with, society's going to outlaw them, or it's going to impose rules that are going to restrict their use and make them un-useful or non-functional or whatever. From an engineering perspective, it's really important to understand that you are not designing these technologies in a vacuum. You don't have carte blanche; they're not coming out of whole cloth. If you design the world's greatest technology and society's scared of it, society's going to lock it back up.
Justin Hendrix:
To some extent, the tech firms also have their lawyers, who are in some ways serving the overall commercial interests of the company and perhaps could be considered to be serving the interests, in some cases, of the engineers who want to propagate certain systems. And so they, I suppose, also do a bit of shoehorning into old laws where it benefits them.
Gus Hurwitz:
Yes, and I'd say the role of in-house counsel, or any lawyers working for industry, is fascinating and somewhat different. Their first interest is going to be understanding what current laws apply to what their company is doing, and that's kind of the lowercase-l lawyering that they need to do. If you design a technology that defamation law is going to apply to, and it produces text that is harmful to the reputation of someone and could amount to defamation, you need to know that, because that is potential liability that you're going to face today.
Understanding how existing law applies to these technologies is nuts-and-bolts lawyering. Now, the role of the lawyer is also bigger-picture, capital-L lawyering: advising, and thinking about, okay, whoever the CEO is, whoever the engineers are, let's sit down and talk about the future of this technology. This is a dynamic equilibrium. The law can change, and the law will change. This technology, how you approach it, how you communicate it, how people perceive it, how its early applications play out, all of that is going to affect how the future law is going to constrain or empower this technology.
Justin Hendrix:
Bear with me as I try to phrase this question, but one of the things I've been thinking about a lot, and talking with my students about on some level, is that we're at a place where, especially with social media and some other related digital communications technologies, we're beginning to see a lot of science pile up that may give us pause about the effect of some of these technologies on a variety of aspects of human experience, from our politics to kids' mental health. Some of that evidence brought from science, some of that empirical observation, is being brought into courts now and serving as the basis for various challenges to the practices of large technology firms. Sometimes, those challenges are kind of at odds with some of the principles that these technology firms may have set out with or started out with.
Is that part of your thinking, that as we go forward now, we're at a new phase where there's both a sort of technological progress that's happening, but also a kind of observational progress that's happening in society, where we're able to, in a more fine-grained way, observe, describe, and provide an evidentiary basis for some of the social problems that these technologies create?
Gus Hurwitz:
There's so much there, and unsurprisingly, no easy answers. We are at some level in a new phase of society. We're now some sort of interconnected brain of humanity. We're operating at a scale that we've never operated at before. Which is to say, go back 50 years, and the United States was 50 states, and the United States was one country among many countries. Even in the post-World War II era, we had the two global superpowers, but we didn't have this NATO bloc of the West versus the East. Nowadays, as a country, the role of the states is much more attenuated, and I'm not just talking politically. I'm talking about how individuals are much less likely to think, oh, I'm a Virginian, I'm a Pennsylvanian, I'm a Nebraskan. We live in the United States, we're American, and our economy, our polity, our democracies, our discussions, our political discourse, they share a whole lot of commonalities and effects.
As a society, we've just never operated at scale in the way that we do now, and from a legal and policy perspective, that's really damn scary, because the laboratory of the states, the ability for one country to do one thing because it just has different values and still participate in a global economy, that's really important. We don't have that nearly so much today as we did 50 years ago. That's one area of change.
Now, technologies are so frequently dual-use, having good and bad, and it's really hard, usually impossible, to just get the good without the bad. There is, I think, plenty of evidence that social media technologies in particular aren't great for kids, and that they're changing, both socially and cognitively, how we develop as humans in really scary ways. They're not the only technology to have done that. Certainly, there are lots of things out there that we don't let kids participate in, for a range of reasons. Some good, some bad; sometimes different countries, different societies have different values around them. I'm thinking, for instance, of alcohol: very different alcohol cultures in different countries around the world, with very different effects.
At the same time, I also need to put on my libertarian street cred hat here and just say that these technologies, as much as they can be dangerous, can also be hugely beneficial for so many children and so many disadvantaged and marginalized groups, and figuring out the magnitudes of those effects and how they're affecting society is really important. Also, go back 10 years, 50 years, 100 years: with every new technology, there has been a technology scare around it, very frequently focusing on how it's going to affect the kids. That's fascinating.
You alluded to, and I'll just mention, that there are now I think three different lawsuits around the country that are all styled as public nuisance suits challenging social media companies, basically saying, "Hey, social media companies, we're a school system. Your technology has had these effects on children that we as the school system are now needing to bear the cost of." This is effectively being treated as similar to the opioid epidemic. In fact, these suits are based upon recent precedents, recent cases that have upheld public nuisance claims against the opioid manufacturers.
So really, really complicated stuff there. My big picture, galaxy brain view of this is that equilibria change, and as a society, again, we need to figure this stuff out. Maybe the answer, and I personally don't like this answer, I don't think it's a good answer, is to ban these technologies: to ban these technologies for children, to ban them in certain social settings, perhaps in schools, no phones in schools, or something like that. Perhaps the answer is greater education, perhaps more parental involvement and control over the technologies that their kids use. Who knows? We are in the midst of a lot of discussion about public education in this country; school choice movements, especially post-pandemic, are getting more traction, without saying anything about the merits of those.
If we end up five years from now in a world where some states have a much-reduced role for large-scale public school systems, hey, that's a fascinating natural experiment. Maybe in those environments, the effects of these technologies on kids will be greatly diminished, and that would tell us something about these technologies, but also about the overall macro structure of how we educate children and the role of education in society. And, boy oh boy, that is 20-, 30-, 40-year, generational-scale thinking.
Justin Hendrix:
One of the things I think about is this issue of magnitudes: how to tell what one harm is versus another, the degree to which we should be concerned about one issue versus another. You see this play out across technology questions all the time. There are bad things happening on the internet in all directions and at great scale no matter where you look, and how do you balance the response to one without creating more problems in the other? You see this, for instance, in thinking about privacy versus perhaps looking for harms like child sexual exploitation material, things of that nature. The encryption debate, for instance, I think kind of hinges on this question a little bit, around the potential harms of end-to-end encryption to law enforcement or security interests versus the great benefit to individuals who are perhaps operating in authoritarian contexts or who have other reasons, of course, to secure their privacy. Do you think we're going to, I don't know, in a couple of decades' time, get to a place where we're able to more accurately understand those magnitudes?
Gus Hurwitz:
First, a comment about magnitudes, because this is an incredibly important point to make, not just about technology. I'd actually say it's an even more important point to make in terms of media and media coverage of everything in society. I don't have any tattoos. If I were to have tattoos, there are two that I would get. The first is Shannon's channel capacity theorem, so I'd have an integral, and the second is the question: what's the denominator? This is a question that I am constantly shouting, usually internally but sometimes externally, at headlines and reporters. What's the denominator?
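(Editor's note: the theorem Hurwitz mentions is Shannon's result that a channel's capacity is the maximum mutual information between input and output over all input distributions; one standard rendering of the integral he alludes to, for a continuous channel, is

$$C = \max_{p(x)} \int\!\!\int p(x)\,p(y\mid x)\,\log_2\frac{p(y\mid x)}{p(y)}\,dx\,dy.$$

This formula is our reconstruction, not one Hurwitz supplied.)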
You tell me that such and such has happened. You tell me that 800 events happened over the last year. You tell me whatever. Those are numerators. They're meaningless without the context of the denominator. If one thing happens and it's a terrible, terrible thing, but it's never going to happen again, then we shouldn't change policy around it. If something happens and it's a moderately bad thing, but it's happening 100,000 times a day, affecting 90% of the population, well, that's something we should really be thinking about. Having that denominator, understanding the scale, what the magnitude of the effect is, is so important to public policy decisions, and we are really bad at focusing on what the denominator is.
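(Editor's note: to make the denominator point concrete with hypothetical numbers of our own: 800 events per year reads very differently against a base of 10,000 users than against a base of 250 million users:

$$\frac{800}{10{,}000} = 8\% \qquad \text{versus} \qquad \frac{800}{250{,}000{,}000} \approx 0.00032\%.$$

Same numerator, wildly different policy implications.)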
It's such a pervasive issue, and it does endogenize into the tech policy discussion, because it's about media and reporting and press coverage, and our modern media ecosystem is so clickbait- and headline-driven. You need to attract attention with the numerator. It's just a much more attention-economy sort of tool to use when you're competing for eyeballs. That's part of the problem there. The bigger picture question about whether we will reach an equilibrium? No, and I actually think that that's a good thing. Society changes, law changes, our values change, who we are changes, the circumstances of the planet change. As all of that happens, what our concerns are as a society is going to change. Everything moves in cycles.
Our polity, here in the United States and in most of the world, is largely a dialectic, a dynamic equilibrium between more progressive values and more conservative values. That's good, because where the right answer lies is probably always going to be shifting a little bit, and we need that churn in order to keep progress for society while maintaining stability in society. That's the dynamo that makes sure that we are always adapting to our changing environment.
It's also frustrating. It's also concerning, especially in that media environment where media is competing for attention. There's always conflict. If we're always in a time of change, always in dynamic equilibrium, wow, there's always opportunity to say one side is bad, one side is taking us in the right direction, one side's taking us in the wrong direction. And boy oh boy, as just a citizen, just a human trying to live my life, I wish that I didn't need to care about those minutiae nearly to the extent that, in our modern society, we are really compelled to. I'm really going to try not to jump into a discussion about Hobbes and Benjamin Constant and Pericles and different understandings of what it is to be a citizen in a democracy, but there's always this tension. Too much democracy doesn't allow us to live. Too little doesn't allow us to live either. It makes it hard to just be a person.
Justin Hendrix:
I want to ask you a little bit about your vantage point. You've been in Nebraska for some years now. You have a perspective perhaps that is different from that of some of your colleagues on the West or East Coast. What has being there in the interior of the country taught you about tech policy debates?
Gus Hurwitz:
Really interesting question. I would describe much of the debate, the non-coastal debate, as some might phrase it, as high beta, high variability. In my time in Nebraska, I would say folks here, folks in state and local government, folks on the ground trying to solve problems, generally are very sophisticated. They understand problems, and especially when we're talking about things like rural broadband and internet access, they understand things a whole lot better than most folks I talk to in DC. So, a lot of sophisticated knowledge.
At the same time, crazy stuff can happen, because very frequently you don't have that level of sophistication, or you have someone who has an idea at the local level or even the state level, they put forth some politically compelling policy proposal, and it passes. Sometimes, this is just a complete surprise. Sometimes, it's just the local politics. When we're talking about the modern at-scale economy that I was talking about before, it's really hard to live in a country, or to have an internet, where one state says you need to have age verification, another state says that biometrics are effectively illegal, another state has a right to respond, a right to reply to political criticism, and another state has a right to have political criticism, or potentially but not definitely defamatory speech about you, taken down. That's not how the internet works. Lots of crazy stuff can happen, and does happen.
Justin Hendrix:
Are there particular scenarios that you're watching play out right now? We're seeing lots of legislation come out of the states focused on kids' safety, for instance: these age-appropriate design codes, various versions and iterations of perhaps that first version we saw come out of the UK and recently passed in California. There's, of course, privacy legislation. There's other legislation concerned with different aspects of AI, deepfakes, things of that nature.
Gus Hurwitz:
Oh, God. There's so much going on right now. I'm just going to kind of say no, I'm not following it, because there's too much to follow. I'm trying to keep generally abreast of the macro-level trends, and I know folks who are working on specific states or issues in specific states. The big picture stuff: there's just so much uncertainty right now, and that's not good for anyone. I'd say, big picture, it's no way to run an economy, no way to run a society. It's a blip, I think. We are at a particularly chaotic time, more for political reasons, I think, than for technological ones.
As we are trying to figure out the social meaning of many of these technologies, and as we're coming into what's going to be another contentious election cycle, there's just a lot of ideas out there. Ultimately, that's good. If states do start passing laws that are truly problematic for the operation of the internet, hey, that might get Congress to do some stuff, because that can all be preempted. At the same time, Congress is thinking about plenty of its own things. I'll say I don't expect anything on the tech front to happen in Congress this year with the current Republican House majority. I think the signaling has been pretty clear. There isn't much appetite for tech-focused legislation. That doesn't mean there isn't stuff going on federally, but I don't think we'll see much legislatively.
Justin Hendrix:
Well, let's talk about one area that I know you do follow very closely, which is antitrust, and perhaps that's one of the areas where we can expect some federal action?
Gus Hurwitz:
Yes: federal, state, and international action. I think there might be more going on in the antitrust world right now than in the state legislative world. Every country has lots of activity going on, and it's just fascinating to be working in this area. At the federal level, the big dynamic to be watching right now, I think, is the Federal Trade Commission versus Congress. The Federal Trade Commission is doing a lot. They have the non-compete rulemaking in progress. They still have the commercial surveillance privacy advance notice of proposed rulemaking. They have various RFIs about technology and antitrust-related issues. They have internal stuff going on. The Illumina-GRAIL transaction was just blocked. Lots going on there, which isn't unexpected with Democratic leadership in the White House, and especially with Lina Khan as the chair of the FTC. She was made chair to do this sort of stuff.
The brewing brouhaha, I think, is House oversight of the FTC. About two months ago, I guess, Congress put in a request for the FTC to provide a bunch of documents and share information about its current activities. Historically, the FTC has complied with those sorts of requests. It didn't this time. Jim Jordan just suggested the other day that subpoenas might be coming the FTC's way (Editor's note: Rep. Jim Jordan, R-OH, did ultimately subpoena the FTC).
If the FTC doesn't produce these materials by next week, House oversight could be very painful for the Federal Trade Commission. When Congress decides, when the House decides, that it wants to slow down an agency, it makes life very, very difficult for everyone at that agency. It sucks all the energy out of the agency, both psychically and just in terms of the time and resources that they have to do what they want to be doing.
Justin Hendrix:
What's your assessment of Lina Khan's FTC? I think a fair-minded person would look at the scenario and say, "It's embattled."
Gus Hurwitz:
Embattled is definitely a word for it. I'm not going to lose any friends at the FTC by calling it a train wreck, because they already know that I think it's a train wreck. I'd actually start by saying it's sad. The FTC, for as long as I have been a lawyer, and going back a generation previously, has basically been one of the most functional, bipartisan, good-to-work-at agencies in the federal government. It was an agency that was built on consensus and got things done, and it's not that anymore.
There are a few things that you could say. I'd say the big picture stuff to be watching with the FTC, outside of specific cases or specific matters or initiatives that the agency has on its docket, is this: the commission should be really worried about losing substantive legal authority and also losing reputational legitimacy before the courts. There are a few ways that this could happen. First, we have the House oversight. It is possible, if the FTC overreaches, that there could be legislation put in place that restricts agency activity. I think that that's pretty unlikely.
One of Chair Khan's basic points, and I'm actually somewhat sympathetic to this, is that the antitrust agencies, the FTC and DOJ, don't litigate enough. As a result, there isn't much case law out there involving antitrust. There's a lot of folk knowledge, folk understanding, within the antitrust bar about what is and isn't anti-competitive, about what the antitrust laws actually mean. I think that there's a lot of value to litigating more cases.
Chair Khan's approach to this is, let's bring big cases. I think that she's going to lose big cases, and that's going to be a big judicial check on agency authority. If you want to shape precedent, you want to be winning cases, not losing cases, unless the goal is to lose politically valuable cases that you can then take to Congress and say, "Hey, Congress, the courts just said that antitrust law doesn't cover this. Obviously, it should. You should pass a law." I don't think that those laws will be passed.
Justin Hendrix:
Bear with me as I again try to phrase a question that hopefully hits the mark on this. But to tie this back to the conversation about magnitudes... I'm not an expert on antitrust or the FTC, but from my limited vantage point, I understand the project that Lina Khan and others have engaged in as trying to bring a sense of the magnitude of certain types of problems that we haven't typically thought of as being in the realm of antitrust or competition into the conversation around antitrust and competition. There is a good deal of evidence that those connections are there, that some of those harms are real. But there's a question really about the precedent and the scope of whether the law allows them to be addressed, just as you're saying. Is there a happy middle on this? Is it possible to imagine a scenario in the future where we are looking at things like networked harms, or broader-scale impacts of companies that are operating on network or platform economics, in this context?
Gus Hurwitz:
Yeah, it's a really good question. The danger is the unintended consequences. First, I think that there is a way to get there, and I'll come to that, but antitrust law is a scalpel. It's intended to do one set of things, to address one set of economically defined issues, and it does that well. It doesn't solve all problems. There are absolutely market situations, market activities, market structures that we as a society might look at and say, "Oh, we're not comfortable with that. We think that it locks out entry, or we prefer to live in a world where small firms have greater ability to operate, and antitrust law tends to reward efficiency and high output and low prices, and those firms aren't going to be able to operate in this environment."
How do we get to a world where we can have those socially defined market structures that we as a society think are useful? My answer is: not through antitrust law. Let's not break antitrust law to go after all of these other things that we as a society might find to be really important. As a more specific, very contemporary example, take the FTC's non-compete rule. Generally, one of the big antitrust values that the neo-Brandeisian community and Chair Khan focus on is using antitrust as a tool for labor, to improve the market power of labor in the economy.
We have a Department of Labor. For labor relations, we have an NLRB. We have a whole lot of regulation and statutes regulating the relationship between labor and firms. With non-competes, most states have enacted legislation regulating non-compete agreements. Congress considers these sorts of issues regularly. We're, as a society, using our democratic process and legal tools to think about labor relations quite a bit, so why in the world are we going to walk in and say, "Oh, we've got this other tool, antitrust law, that we're now going to use as a cudgel to reshape massive areas of the American economy, despite all of this state- and federal-level regulatory and legislative attention"?
As a first matter, it doesn't make sense affirmatively to do this, because we have all these other actors already working on these issues. Second, it's going to distort antitrust law. It's going to change the precedents if successful, or it's going to limit the precedents if ultimately unsuccessful, in ways that might limit the effectiveness of antitrust law as the narrow scalpel that it has been over the last 40 years.
Justin Hendrix:
It sounds like you don't disagree that there are harms being caused, perhaps, by the economic power of some of the companies that are under that scalpel in this context. You simply disagree with the path forward that the FTC, and perhaps other proponents of looking at antitrust in a different way, are taking.
Gus Hurwitz:
My answer will always be yes and no. Certainly, looking at non-competes as an example: there are absolutely examples where non-competes are almost certainly harmful. There are also examples, circumstances, where non-competes make sense and are useful. And there are a ton of circumstances where non-competes are kind of weird. Why are they here? What are they doing? Are they harmful? Are they beneficial? Are they meh? I don't know. Given circumstances like that, a broad ban on non-competes could be harmful. An attempt to broadly ban non-competes that results in the FTC being told, either by Congress or by the courts, that you don't have legal authority to touch this area of economic activity, that could also be harmful, because it could limit the FTC's ability in the future to bring a narrower case addressing specific actions relating to these concerns.
Justin Hendrix:
How do we finish this up? We've been through a lot. How do we sum this up for the listener? Bring it back perhaps to your worldview, your shifting worldview?
Gus Hurwitz:
My shifting worldview. I'm not going to go to Thomas Hobbes and Benjamin Constant; I'm going to go to Frank Herbert instead, and Dune. There are two things that I would invoke from Dune. For those who have not read the books, you should read at least the first three. You need to read the second book in order to read the third book, unfortunately. But one of the defining events in the series, the Dune universe, is what's known as the Butlerian Jihad. It is when humankind turns against thinking machines and deems them illegal. At some level, I wonder whether we're flirting with that sort of path forward in our political discourse around technology. No one, I think, would willingly say, "Yeah, I think we should outlaw thinking machines. We should outlaw technology." But we're talking about outlawing so much technology generally that it's really hard to have a scalpel and say, okay, we're going to outlaw all the bad uses but allow the good uses. You frequently can't do that.
The other thing that I'd invoke from Dune, and unfortunately for this you need to read the entire series, and that's a total slog, I don't expect most folks to do that, but one of the arcs that defines the story of Dune is something called the Scattering. In the books, the defining conflict is that humanity's empire is ultimately going to implode and destroy humanity, and humanity's going to go extinct. The only way to survive this is to spread humanity out into the stars in a way that cannot be regulated by a single empire. So we're going to scatter to the stars, and everyone's going to be kind of out in their own unique solar systems or worlds that just can't be governed.
I prefer to live in the world that's ungovernable, with lots of different ideas that you can't understand. There's more to life and society and politics and technology and everything going on than we have the ability to understand, and bad things are going to happen there, but the safety net is that it's un-regulable. This is the tension, okay, I lied, I'm going to invoke Hobbes. This is the tension in modern conceptions of government. You need to have government, you need to have some strong governing figure, some strong central governing figure, in order to have a modern society. But that figure needs to have checks on its power, because otherwise its power is going to be captured by elements of society that want to use it for their own gain. The greatest challenge in our society, if we don't want to go extinct or get scattered to the winds, is how do we have a strong governing figure that also has limited power.
Justin Hendrix:
Well, Gus Hurwitz, that's the view from Nebraska at the moment. Perhaps it'll be a view from someplace else the next time we talk. Thank you very much.
Gus Hurwitz:
Absolutely. My pleasure.