In an Age of Disruption, a Defense of Incrementalism
Justin Hendrix / Mar 1, 2026

The Tech Policy Press podcast is available via your favorite podcast service.
In their new book, Move Slow and Upgrade: The Power of Incremental Innovation, Evan Selinger, a professor in the Department of Philosophy at Rochester Institute of Technology, and Albert Fox Cahn, founder in residence of the Surveillance Technology Oversight Project (STOP), argue that society is over-fixated on disruptive innovation at the expense of the kind of steady incrementalism that can deliver sustainable returns over longer time frames. They argue in favor of more careful deliberation and adopting what they call the “upgrader’s mindset,” which should be applied whenever “disruptive changes would pose the greatest social risk.”
Gradual innovation, or what we are calling “upgrades,” is like incrementally improving water quality through decades of environmental regulation. In contrast, discontinuous innovation is like trying to clean up a lake by adding a new, invasive species.
I had the chance to read this book well before it was published, and to write a foreword to it. It’s a slim volume that challenges the mantra of Silicon Valley, inviting us to ask whether in some circumstances “there is a less disruptive, evidence-based alternative to innovation.”
What follows is a lightly edited transcript of the discussion.
Evan Selinger:
I'm Evan Selinger. I'm a professor of philosophy at Rochester Institute of Technology.
Albert Fox Cahn:
My name's Albert Fox Cahn. I'm founder of the Surveillance Technology Oversight Project, a New York civil rights group. I'm also a visiting scholar here at Cambridge University.
Justin Hendrix:
I'm excited to speak to the two of you about this book, Move Slow and Upgrade: The Power of Incremental Innovation, which I got the chance to see early. I actually had the opportunity to write the foreword for this, which was exciting for me. I'll just step back for any listener who's not familiar with the two of you, your curiosities. Albert, S.T.O.P. is a well-known entity; you and others from your organization have appeared in Tech Policy Press, or we have certainly cited and relied on your work in the past. And Evan, the same with you, and you've been on this podcast in the past. Just briefly, your intellectual curiosities, what brought you together to this topic? Evan, do you want to take that to start?
Evan Selinger:
Sure. So philosophy comes in many shapes and sizes, and I guess I'm not your grandfather's philosopher. Philosophers focus on different things, and my bag is technology. So I think a lot about technology. I think a lot about the ethical and legal dimensions of it. And I get to work with really amazing people like Albert, who are practitioners doing their thing on the ground. And since both of us had previously done a number of short-form things together, we wrote a number of op-eds because our interests in tech and ethics and the legality of tech align, we thought it was really important to do a more long-form thing. Both of us have been concerned about the excessive power of Silicon Valley and the way in which it is commandeering so much of our lives. And we wanted to find an opportunity to not just say the same old, same old thing, but really deepen our thoughts and think of some positive ways forward.
Albert Fox Cahn:
And for me, as a kind-hearted nerd of all trades who just likes to take on bullies, and sometimes I do that in the courtroom by suing them. Sometimes I do that as a lobbyist by trying to pass new bills. But what Evan and I have really focused on is how to use these platforms as a way to give people the tools to not just understand that they're not alone in feeling frustrated with the usual game plan for move fast, break things technology innovation, but to also realize that there are all these warning signs that these sorts of products, these apps, these government programs that promise to come in and magically fix our lives and give us this fantastical future, we can see that they're going to fail us and we can see the better alternatives. And with the book, it creates this nice handbook for how you can spot those patterns and spot the alternatives in your life and in government policy.
Justin Hendrix:
You wrote this book, I suppose, probably finished it more than a year ago at this point, and that's just the pace of publishing. It feels like a lot has changed, both on the political scene and in tech. I was reading this morning a piece in The Washington Post about the breakneck pace at which the federal government, for instance, is installing artificial intelligence applications across many different agencies. All those federal inventories of AI are now out from OMB, and people are looking through them to determine precisely what the federal government's putting to work in different contexts. Things are moving very quickly, it feels to me. Your book comes along at a time when the pace seems to only be hastening, but I want to ask you to explain why you chose Zuckerberg's mantra to kind of invert. What are you hoping to accomplish? Who are you hoping to reach with this book? I mean, clearly the guys in Silicon Valley aren't going to slow down anytime soon.
Evan Selinger:
So this is a more recent story, but it captures, I think, a big motivation for both of us. So last semester, I was asked to give a talk at a business conference, and that's not normally my crowd. And it was a conference on business and ethics in the age of AI. And I wanted to give the kind of talk that I thought would be helpful, just like in writing this book, we wanted to write a book that would be helpful. And so I did my due diligence, and I was so excited to give this talk, and it had two parts. The first part was, "Hey, I'm speaking to small business owners and I know you're all buying into this message that if you're not making your companies AI first, you're going to be left out and you're hearing that message and you're responding. So let me lay out for you in very clear language, some of the problems you're going to encounter if you do so recklessly." And then the second part of the talk was like, "And here's how we could do things better."
And I'm giving the talk and I'm looking out into the audience and there is a major bit of dissonance between how well I think I'm giving the presentation and the forlorn faces in the crowd. And I'm thinking, "What is going on here?" Because I'm literally speaking to, "If you do this, here's the risk. If you do that, here's the risk." To be specific, I was saying things like, "If you're not really careful with what you're handing off to automation, the results show you're going to make things less efficient, not more efficient. One of the dangers of AI slop is you're going to create bad products or bad workflow, and on the backend it's going to take a tremendous amount of time to fix the errors that didn't need to be there in the first place. And on top of that, studies are showing it can do damage to your workplace morale. Workers get annoyed when they're forced to do extra work because they're given AI slop."
And so I'm pointing out these very concrete things. And in the end, I sort of asked, I'm like, "What's going on, guys?" I know I'm giving a good talk. I'm in the zone. And it turned out it's because many people in the audience had already made the mistakes that I was trying to warn about. So in hoping to say, "Here are some warning signs to be on the lookout for," it felt as if they had already dived into the deep end and almost didn't want to hear it because it felt too late for them, which is the very opposite of why we wrote this book and why I was giving the talk that way. We wanted to write a book that looked retrospectively into case studies that we thought were important in order to identify warning signs so that going forward, people could make some better decisions, which is hard.
Albert Fox Cahn:
And yes, Justin, you're completely right. Things feel like they're moving faster than ever. It feels like we're in this new moment of technological failure, where we see government agencies adopting AI at an unprecedented speed. But what we see is that it's actually nothing new, because it's the same types of failures we've seen in all these earlier iterations, just coming at a faster pace and at a bigger scale. We start off the book with a chapter where we go into the Metaverse, this multi-billion dollar misadventure to bring us a vision of the future that never panned out, to show how we knew at the time that Meta was changing its name and investing huge sums that this wasn't going to work, how when Microsoft was trying to pivot into the Metaverse, it was a move to nowhere.
And we look at all the ways that even companies like Apple have been caught up in the just unfounded optimism that somehow something unproven, something without evidence, something without a sensible business plan or a believable use case would be the new future. And whether it's the Metaverse, whether it's cryptocurrency, whether it's surveillance capitalism, or whether it's this new AI bubble, which feels like it's constantly on the brink of bursting, we see the same broken pattern of just moving fast and really breaking far more things than are built. And I think that yes, the people who are making billions off of this in the Valley, the venture capitalists, the startup founders, they will reject us and fight us to the bitter end. But there's so many other people in business, in government, in civil society, in everyday life who keep getting told that we have to trust the techno wizards, we have to follow their lead. And really, I think it's for everyone else to have this toolkit to then say, "Oh, no, the Emperor really isn't wearing any clothes."
Justin Hendrix:
So I want to dig into this idea that we should care about social, institutional goods, the things that you say have been built up with a lot of care and thought over the years. But I also want to challenge you because there might already be a listener who is thinking to themselves, "This is what? An anti kind of innovation argument? This is an anti-reform argument. Are you sort of defending institutions? Is that the goal here? Are we trying to move slower and maintain a kind of stasis?" A lot of folks out there, I mean, certainly we see this in the election of Donald Trump, we see it in populist movements left and right who feel that the status quo is broken and they very much regard technology as one of the means to potentially shake it up.
Albert Fox Cahn:
Well, we're trying to push back on that false binary, that idea that we have to settle for a broken status quo or invest in this unhinged lottery ticket scheme. There's an alternative. It doesn't have to be get rich quick schemes. It doesn't have to be one unproven app after another, one invasive algorithm after another, one really problematic tech vendor after another. We show how actually the thing that has far more often been the more powerful driver of change and helped us address the most important issues we face are these really boring incremental upgrades. And the reason why we keep seeing them overshadowed by the Silicon Valley innovation era, it just is a boring story. People don't want to hear about turning the wrench a little bit. They don't want to hear about making the patch here and there. They want to think that there's some eureka moment that fixes it all. But what we've seen is that people keep telling us that they're shouting eureka and at the end of the day, they're just really shouting a sales pitch.
Justin Hendrix:
Yet you do acknowledge that some advances will require great leaps forward. We'll have to perhaps change the way that we do things fundamentally. And I assume you think tech will be part of that. How do we do both that, for instance, potentially take advantage of AI technologies that might potentially change the way we work and live, but also, I don't know, have our cake and eat it too, preserve some of these sociotechnical systems that you talk about here, the care that's gone into the human connection, the kind of ways of operating that are baked into how so many of our institutions operate?
Evan Selinger:
I'll give an example that's not in the book, but I think it kind of illustrates some of what we were talking about. But to backtrack for just a teeny bit, and Albert was really spot on in saying this, this book is not a screed against disruptive moonshots. We're not saying don't ever do that. What we're arguing is they've been overvalued, tremendously overvalued and overvalued to our detriment, where the risks get made invisible, where, if you point the risks out, you're considered out of touch. And so this is part of what we're talking about. So if I were to think about something happening right now and how I would do it differently through more of an upgrader's mindset, here's an example, and this is something that Silicon Valley would hate and I think could be incredibly useful. So obviously the big technology of the moment are chatbots and LLMs, and we talk about them in the book.
In the last year or two, one of the things that we've seen, which I don't think is surprising, I think is entirely foreseeable, is that the use of LLMs has changed. So if you ask what people are turning to LLMs for, it's shifted from more of an information retrieval role, something like a fancy browser, to looking for advice. So it is a quantum leap forward. We're no longer talking about anything like finding information on the internet. Sam Altman and others are describing this as like having a team of PhDs in your pocket and so on. And so people are feeling like, "Oh, I have access to the greatest legal minds. I have access to the greatest scientific minds." And I think the companies that are producing these are very disingenuous about them. So they'll write in their terms of service, they'll say things like, "Well, don't ask for advice about things like this unless you're also going to consult a professional. Unless you're also going to talk to your lawyer, unless you're also going to talk to your doctor."
But they know at the same time that they're doing that, they're also pitching these as democratizing knowledge for people who don't have access. They know it's expensive to have appointments with lawyers and doctors. It can take months upon months to get an appointment with a doctor. So they know that in reality, the way that people are actually going to be using these technologies are not the way that they're saying, "Well, we're warning them against that." I mean, you have people like Sam Altman appearing on late night saying, to raise his kid, the most intimate and personal of things, he's asking ChatGPT for advice.
So to bring this back to this idea of upgrading, here's something we could have done. Here's something we could still do. Here's something I think upgraders should maybe get behind. Let's bracket to the side all of the environmental costs of LLMs, and that's a considerable topic, and push to the side just for a moment the discussion about intellectual property, which is also considerable. If we were just talking about what it would mean to have this kind of a technology be more useful, more relevant to problem solving, I would say this. What if our LLMs didn't have the capacity to give us advice, to tell me, "This is what you should do"? I don't think LLMs should be giving any advice at all. They're not built to do so responsibly. They lack metacognition. They don't know what they don't know.
So I wrote this in a Boston Globe piece. If I were to ask an LLM, "Hey, I'm thinking of going on the job market, what are my options?" It's just going to fill in some blanks and brainstorm them. It's not going to ask the very basic question of, "You have a stable tenured job. Are you sure in this day and age you even want to be on the job market?" It doesn't know what it doesn't know. I think we should have designed these to be pared-down brainstorming devices that help us retrieve some information and help us make our own decisions for ourselves better, without telling us, "This is a good idea, I would recommend you do that." And unfortunately, this is what people are more ... They're turning to them for life advice, they're turning to them for psychological advice, they're turning to them for legal advice. And you constantly hear the professionals who have worked very hard in these fields saying, "You're going to be getting awful advice." But the over-hype, I think, is bleeding into the typical user, who just isn't aware of how glitchy and how poorly designed for advice these technologies are.
Albert Fox Cahn:
Yeah. And I think that it's emblematic of what we show in the book that with innovations, with moonshots, you tend to start with a solution, then you reverse engineer what problem you're trying to solve. So people basically spent all this money developing these LLMs and then went through this exercise of being, "Well, what do we do that's actually useful? What do we do?" And no one thought, "Oh, let's invest billions of dollars in improving the power of machine learning so we can get AI generated pornography that simulates real life photos." No one set out to create the sort of stuff that Grok has brought us, but this is what happens when you start with building the technical capacity first, whereas with the types of solutions that we are enamored with, it's things like the mRNA vaccine where you had people tinkering on improving this vaccine over decades, creating this vaccine platform, looking at how you can establish these real improvements in how we manufacture vaccines.
And then you did actually have a bit of a moonshot idea. At the height of the pandemic, we had a change in how we invested and we invested in getting those vaccines rapidly to market. But unlike the Silicon Valley approaches that we tend to see where they're building first and justifying later, this was departing from the normal rules to invest a huge amount of money to meet a moment of intense catastrophe. And in the end, it saved a lot of lives. But I think that what people tend to overlook is how often the most valuable contributions, not just from Silicon Valley, but from any of the areas of sciences, social sciences, urban planning, engineering where we're trying to improve things, how often those improvements are slow, incremental, and above all, they're evidence-based at the time people are actually trying to invest in them.
Evan Selinger:
To just add one very quick thing, because that's exactly it, that is the theme of the book. To use a kind of meme language, it's an eff around and find out as opposed to ... The idea is that these are general purpose. And so if you convince people that if you're not getting maximum efficiencies, to keep using it until you figure out what that is, you're encouraging recklessness. That is the ethos that's being encouraged.
Justin Hendrix:
Evan, you're teaching this semester. You're dealing with students who are encountering these tools. They're also in a context, in a job market where they're expected to stay on top. And I assume that among your students, adoption is probably at or near 100%. I'm teaching this semester as well. It does feel like even more than last year, these technologies are in the classroom all the time. It's very difficult to separate out what students' workloads even look like. Very difficult to know precisely how they're using these tools, all the different ways. I don't know. I mean, what do you tell your students right now when it comes to thinking about adopting these tools into their educational practices? What are you telling them?
Evan Selinger:
I think it's important, and we talk about this a lot, to think about it structurally and to think about it structurally, really, frankly, in the terms that Albert and I kind of lay out in the book, which is basically this. They're hearing because Silicon Valley has created the template and this template is being echoed over and over. And one of the strategies of normalization is you just repeat something until it seems like there's no other message you can hear. And the message is coming loud and clear, which is that hiring managers want AI first employees. The companies that are going to succeed are going to be AI first companies, that you can't get out of college, you can't start a business, you can't enter a business unless you're AI first. And when you hear all of that, that creates a massive amount of panic because students look at the entry level job market, which is not great.
And of course, one of the interesting things is we're finding out more and more of this through investigative reporting. A lot of companies are saying they're getting leaner because they're getting more efficient through AI, and that's actually not true. They're using that as a kind of pretext because either they over-hired during the pandemic or there are other structural things going on, but this is a good way of broadcasting that you're a lean mean machine and that you're going to be highly profitable. So students keep hearing, "You've got to be AI first." Administrators keep hearing, "You've got to be AI first." But here's the rub. At the same time, the very people who are saying you need to be AI first are also saying, "Don't worry, this isn't going to create massive loss of jobs. This isn't going to automate people into unemployment." And then you say, "Well, how are both possible? How is it going to be so disruptive, but also not so disruptive?"
And then they sneak in the premise, "Well, all you need to do is adjust, upgrade a little bit, learn how to be educated in such a way that you can pick up critical thinking, you can pick up foundational knowledge, and you can also learn how to be AI savvy." And what they're not paying attention to, or don't care about, is that this is not like an Oreo cookie where the top and the bottom go together. There's massive tension between those two ideals. Because if you're afraid of not getting a job, and in fact you don't even believe there's a stable future, you believe things are going to be so disrupted that all you can do is be a short-term thinker, you're going to be incentivized to want the highest GPA that you can have. You're going to be incentivized to show you're as AI-forward as possible.
And so that means it becomes rational for students to not want to learn how to spend time developing an argument, thinking critically. Why read a book when you can just ask for a five-point AI summary? They're not looking to cheat, as some people say. They're not looking for a shortcut. They are so afraid of the disruption. And if you were to say to them, "But what about the long-term impact of this? Don't you think that eventually things are going to burst and having good foundational knowledge that you can pair with these skills is going to be really important?" I'm afraid I keep hearing and other professors keep hearing over and over, students are worried they don't have the luxury of doing that. It's not that they don't want that, they don't have the luxury. That's a recipe for disaster, I think.
Justin Hendrix:
It does feel like everyone's in this kind of catch-22. Either you figure out how to use the tools and perform at a faster pace, do more with less, or you may lose out entirely. So it kind of puts everyone in that same situation. This kind of zero-sum mentality.
Albert Fox Cahn:
Completely. Though I think the AI FOMO is starting to fade as people start to realize just how short these apps are coming up when it comes to actually solving a lot of the problems they need to take on. And look, there is this sense of ennui bordering on despair that I sense with some of the students that I'm having the pleasure of working with. And I do feel like the technological churn has really gutted their sense of real predictability, agency, all the things that Evan was talking about.
But I just think that what I tend to see with a lot of these AI apps, I'm thinking of the senior product managers, I know from FAANG companies who will tell me how they have this cutting edge model that they've invested billions in developing, and then they are pitching it to potential clients only to realize they can't actually find a profitable use case for it. Because in the rare cases, they find something that AI is well positioned to replace in terms of things that are relying on manual labor, when you take the cost of how much these models are to run, suddenly paying people minimum wage is a lot less.
And so I do think that in some ways, as omnipresent as AI is in the discourse right now, what I really am telling people is, what are we going to do to prepare now, collectively as a society, through our political apparatus, in a procurement context, for the next hype cycle? Because I guarantee you that as soon as the gleam comes off of AI or frontier AI or agentic AI or whatever other sort of sales pitch we hear, there's going to be some new buzzword that takes its place.
Justin Hendrix:
So the book takes us through multiple cautionary tales. You've already mentioned the Metaverse, crypto is another. But I want to pause a little bit on the segment around surveillance, the Ring doorbell problem. This is obviously a topic that's close to both of your work. And I think just in the last couple of days, there's been some discussion about this following a Super Bowl ad from the firm Flock that has prompted lots of folks to discuss the trade-offs that we're making, the lost dog ad as it's referred to. If my listeners haven't seen it, they can Google it or I'll put a link in the show notes so you can go and find that. But let's talk a little bit about this, about the Ring doorbell problem and the extent to which that kind of serves the thesis of the book.
Albert Fox Cahn:
I think this is a moment where the failure of the whole surveillance solutionism market is coming into the foreground. We detail how Ring sold people this myth for years that if we simply ringed our houses with cameras, that it would bring us safety. And what we've seen instead is it brings us surveillance and not always surveillance under our control, that we have these platforms which are allowing police to track us through these same camera systems that are making it ever easier for police and companies to work together to weaponize the hardware we buy against us. And in the case of Flock, we see how these camera systems that were being sold to homeowner associations and police departments in the name of preventing crime are being misused for everything from immigration enforcement to abortion prosecutions.
And it really, to me, highlights how just one of the core failures of the innovation landscape, that when these cameras came on the market, they were able to come up with this really simple sales pitch, but they never had the evidence to actually justify their claims. They never actually had the data to show that they reduced crime. And when you look at what the evidence-based measures are to protect your home, they don't actually raise any of these same concerns. It's bars, it's better locks, it's having lights on automatic timers, it's using them all in combination. And they don't sound like life-changing innovations, but it's very easy to understand the benefits they provide. It's also easy to understand the ways they fail us. And that's the hallmark of an upgrade. It's something where you understand the benefits and you understand the cost, you understand what you're getting, and you don't have the magical thinking of surveillance solutionism.
And yet with Ring, with Flock, they promised a world they never delivered and instead they created this tool that we see being weaponized against our most vulnerable neighbors every day. And I really think the rage we see building against these companies is emblematic of the backlash that we see against surveillance solutionism more broadly.
Evan Selinger:
The other thing I would just add really quickly that we point out in the book, and of course we're not the first to point it out is, and the warning signs were there from the start. Ring was never going to be just a camera. You can't be sufficiently high tech and disruptive if you're just a camera. You have to have a bunch of add-ons and a bunch of other things.
And so it's pretty clear what the dynamics of social media are. I mean, we know by now that whatever good social media can provide, it also leads to people being quick and hot tempered and not having impulse control. And so if you create a networked surveillance technology like Ring, and then you incentivize people to share things online, and you tell them that we're not just offering you a product, we're offering you a way to be a good neighbor. And that's vigilantism. That's about being hypervigilant, that's about reporting suspicion, and so on and so forth. You're going to be inflaming the very sensibilities that make people nervous in the first place without having the kind of due diligence and protocols in place that allow this to happen justly and with care.
So on top of the fact that as Albert pointed out, we don't have compelling evidence that it's going to do the thing that homeowners and renters are looking for, which is, "I want things to be safe." Well, that's certainly not proven. We seem to have a lot of evidence that it's done the opposite. It's made people very anxious, which is a pretty horrible thing to do.
Albert Fox Cahn:
Yeah. I mean, for people who don't know, Ring has this Neighbors app associated with it, and it bombards people with notifications. And I've had to have these conversations with my own family, where people are on these apps thinking that it's empowering them to be safe. But when you look at the daily reality of it, it's just constantly bombarding you with trauma, constantly putting you in this fight-or-flight mode, constantly making you more fearful of the place you live. So it doesn't do anything to provide you actual security, but it provides you a constant mental state of insecurity.
Justin Hendrix:
I want to ask you just a little bit about why you talk about cybersecurity perhaps a little differently than you talk about the other examples here. What separates folks concerned with cybersecurity typically? Why are they an example that deserves calling out in the book?
Albert Fox Cahn:
Cybersecurity has often been pretty boring. It has not gotten the same flash and glam that other startup-culture tech companies have. It's been the place where geeks spend hours tinkering with the status quo, seeing what are the ways we can mildly improve security, what are the ways that things can fall apart. And we see all of these things in cybersecurity that structurally prime it to be a good space for upgraders. Because in cybersecurity, if you have any one point of failure, that's going to be your downfall. So people in cybersecurity tend not to look for the breakthrough change in terms of upside, but how to mitigate all the potential areas of downside. Because in cybersecurity, you know that just one out-of-date piece of software, just one out-of-date piece of hardware, just one untrained employee can be the crack that lets in the attack.
There are also things like defense in depth, privacy by design, compartmentalization. All of these frameworks for cybersecurity are built around assuming the worst will happen and then building in redundancy, building in ways to mitigate the harm, ways to contain the damage. And these are all things that are hard to translate into the sort of companies that venture capital firms are always craving. And yeah, you have seen AI moving into the cybersecurity space. You've seen people trying to market it there, like everywhere else, but you don't see whole cybersecurity teams being laid off like you do on the development side. You see it being layered on in addition. And that sort of additive approach, instead of the simplistic side of tech development that we've all come to know and despise, really separates it.
Justin Hendrix:
What's on the checklist for someone who wants to work differently for the upgrader, somebody who wants to slow down as you suggest?
Evan Selinger:
I think we try to point out a number of different things to focus on. And so I mean, one of the first things that obviously springs to mind is, are you actually providing a solution to a problem or are you offering solutionism in search of a problem? I mean, so many of the things that we talk about in here, I mean, the Metaverse was a prime example. An ill-defined concept that no one could ever really figure out what it meant. And I read recently, I mean, Meta's been hemorrhaging, hemorrhaging money in that department. I mean, not only did it never pick up, it's been a massive loss. And if you were to ask people ... Wendy's was doing the Wendyverse. Everybody felt it. Professors were like, "I'm teaching in the Metaverse." Everybody couldn't wait to get on the Metaverse bandwagon. And if you were to ask, "What is the problem that you're trying to solve?" No one had an answer for that.
It was just like, "But I know this is going to be the next great thing." And I feel like that's kind of happening right now with AI. People are saying, "Well, I know if I'm not AI first, I know if I'm not enhancing efficiencies." And you're like, "Well, have you looked really carefully about what specific thing you're trying to make more efficient? And do you have a very well worked out plan of how some form of AI is going to do it?" They often say, "No, but I know we'll get there." So I feel like that is a massive, massive warning sign just from the jump. Is there a fear of missing out? Are you afraid that if you're not jumping on this bandwagon that you're going to be left behind? And we've seen this over and over again. I mean, Albert before mentioned AI won't be the last fad, but it hasn't been that long since if you weren't mentioning big data, you were afraid that somehow you were just not going to be part of the crowd that was getting things. Everything had to be big data.
Before that, it was somehow just the internet. Then it became the Metaverse. Now it's AI. So one of the things we see over and over is this jumping on. And with that jumping on, there's a bit of magical thinking, which is that there's going to be outsized returns without taking the risk. So people end up being surprised over and over again that jumping on this bandwagon without having a very clearly evidence-based understanding of where this is going to lead could end up being incredibly, incredibly risky.
One of the other things that we talk about is not actually consulting the beneficiaries of your product. So many people are like, "This is going to be my solution to whatever." How many people did Ring consult in asking, will this actually make your neighborhood better? We don't find a lot of that. We find a lot of companies projecting what the ideal audience is without actually checking whether the real people using their products conform to that.
And I would also say one other thing that we really haven't had much of a chance to talk about is certain things get left out of the conversation entirely. There are certain values that you can't even talk about that we should be talking about because they're considered to be sort of like so outdated or archaic they're not even worth consulting. So quick example from the book, and then Albert, I'll turn this over to you, but you asked about students. So obviously the job market isn't great. And students will literally come back and say to me that getting ghosted is pervasive, that a badge of honor is actually getting a reply email from an employer saying like, "We're really sorry." Even any acknowledgement whatsoever. And so it's created this massive fear that you've got to be sending out a billion different resumes and you've got to optimize them for a resume screener and you're probably not going to get a person on the other end.
And so one of the things we even talk about in the book is quite apart from whether these things will work, whether they're going to actually help your company get the best qualified employee, which is a very big question in many cases and whether there are issues of unfairness, which there are, there is a question of like, what are you trying to do? What message are you sending when you're having an AI avatar interview a person? You're basically sending them the message that the company doesn't even care enough to have an actual chat with you. There's something inherently dehumanizing about it. I mean, we talked about in the book that some of these things should be off the table. So one of the things I think we need to think about is, yes, you could look at all of these from a very abstract spreadsheet perspective of, will this make the process feel more efficient? Or you could ask some on the ground questions and go, "What kind of message is using this technology sending?" And I think a lot of them are sending horribly anti-human alienating messages.
Albert Fox Cahn:
This is just a call for us to reaffirm some basic common sense. You don't have to be a technology expert. You don't need to be a tech ethics expert. You don't need to be a historian of the way technology has failed us to look at the cases we walk people through and see the patterns we're pointing out, and then to just see them in the technology debates we're having every day. It's really a reaffirmation of those simple principles that we normally apply reflexively in everyday life, but have been so willing to jettison for the last 20 years when it comes to the ways that we think about technological change. And I think that when people really have this chance to linger in the ways that innovation has failed and to see the ways that upgrades have succeeded, it becomes just a new muscle memory that you can apply to so many areas of life.
Justin Hendrix:
What gives you hope that this upgrade mentality can gain traction? I mean, it does seem like we're in many ways careening towards a world even more driven by disruption. I mean, all of government policy in the U.S. and beyond seems designed to encourage more disruption. Countries around the world are trying to build their own Silicon Valleys. They're talking to AI firms about how to bring those firms there, both in terms of the software, but also to build the data centers. We are building out effectively an infrastructure for the opposite of what you're calling for. What gives you hope that the upgrader's mentality can gain traction?
Evan Selinger:
I'll give an example on a micro level, and then Albert, over to you, and this brings us back to teaching. So as we were discussing before, I think it's unfortunate and tragic, but a lot of students right now are experiencing the idea that they'll have assignments like post something on a discussion board. And so someone posts something written by a chatbot and then they feel obligated to respond. So it's chatbots speaking to chatbots and nobody's speaking to each other. But that's also created a hunger among some students who don't want that, who want to actually be understood and have a meeting of the minds. Some people are just caught up in that rat race and some people are going, "That's not the world I want to live in. I do want to live in a world where I can be heard and I can think."
And so those students are writing things and you can see a mind at work. This isn't about running something through some glitchy AI detection software that then tells you, "Okay, this is highly likely written by a human being." You have conversations with people, they follow it up in writing, and I'll spend more time with those. I can't do that with everything. Again, I can't do this at scale. And then I get students responding and they're responding in a way where it's not about just getting a grade. They're like, "Thank you for listening. Thank you for providing some feedback. Hearing from you is very different than me entering something into a chatbot and asking it like, 'What are the good things about this paper? What are the bad things about this paper?'"
So I guess I'm picking up a little bit on what Albert was saying before about some fatigue. I haven't totally seen this at scale, but I am seeing it on a smaller level. Some people are like, "This is just not the world I want to live in." And they will go above and beyond what is asked of them because there is meaning in doing that and they can see the value in doing that.
Albert Fox Cahn:
Look, I'm a technologist, I'm a lawyer, but I'm a giant history nerd. I spend a lot of time thinking about the way that America has responded to crises over the generations. And I am very convinced that our country will often do the right thing, but only at the last possible moment. And I think that we have such growing resentment of the ways that these technologies have harmed our communities, degraded our quality of life and impacted our daily psychology that we are at a moment when people are ready for something different. And I talk every day to politicians, to journalists, to civil servants, to teachers, to students who recognize that the choice they've been offered on technology is a bad one. They don't want the broken status quo and they don't want to keep having their hopes shattered by one faulty gadget after another.
And so I think this is a moment where people are really hungering for a third way. And that's why I'm hopeful that we will be able to reach so many more on top of the upgraders who never stopped tinkering. And they're the ones who've been keeping the lights on this whole time.
Justin Hendrix:
You say at the end of the book that, "It's just one part of a larger effort to reframe how we all think about change and progress." I kind of always think that that's what we're trying to do here at Tech Policy Press as well, just be part of that larger effort. I really appreciate you all taking the time to speak to me about the book, appreciate the book, and hope to have you both back soon.
Albert Fox Cahn:
And appreciate you writing the foreword. Thank you so much, Justin.