What Are the Implications if the AI Boom Turns to Bust?

Justin Hendrix / Nov 13, 2025

Audio of this conversation is available via your favorite podcast service.

This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. 

I'm joined by three experts: Ryan Cummings, chief of staff at the Stanford Institute for Economic Policy Research; Sarah West, co-director of the AI Now Institute; and Brian Merchant, journalist in residence at the AI Now Institute and author of the newsletter Blood in the Machine.

What follows is a lightly edited transcript of the discussion.

Media montage:

Well, it's been a choppy week for tech stocks, and that often gives rise to the question, "Are we in or are we approaching an AI bubble?" … The health of the US stock market on any given day depends on a number of variables, but The New York Times reports that, lately, it almost entirely hinges on the success of artificial intelligence and the companies behind this technology … It's bringing back memories of the dot-com bust in 2001, or worse, the housing crisis in 2008.

Justin Hendrix:

In today's episode, we're going to consider whether the massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and consider how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. I'm joined by three experts. Let's get right into it.

Ryan Cummings:

Ryan Cummings, chief of staff at the Stanford Institute for Economic Policy Research.

Sarah West:

Sarah West, I'm the co-director of the AI Now Institute.

Brian Merchant:

I'm Brian Merchant, I write the newsletter Blood in the Machine, and I am a journalist in residence at the AI Now Institute with our compatriot Sarah here, and I've also written for a number of places and I imagine we'll be discussing the article I wrote for Wired today.

Justin Hendrix:

Pleased to have you for this conversation. We're going to talk about a topic that I feel like is in my newsfeed every day: the possibility of an AI bubble, and what it might mean for the economy and the industry, but also for tech policy more generally. I want to start just by putting to you, Ryan, what I understand as the argument that we're hearing about the possibility of a bubble. I'm going to come to your piece in The New York Times, but I want to first start with something I read a little more recently by Mark McDonald in the Financial Times. He talks about the basic narratives that we hear around the AI bubble. He says, "The narrative we hear is that trillions are being committed to AI investment, but little is showing up in revenues or productivity gains. It's a neat narrative, but it's also increasingly wrong."

He points to some work from researchers at Columbia University and Zhejiang University in China, and suggests that we're only now beginning to see generative AI productivity gains. How confident are you at the moment that we have enough evidence to say an AI bubble is real, versus the argument that we're simply at the early stages of the, quote-unquote, "AI revolution" and the productivity gains just haven't shown up yet?

Ryan Cummings:

I appreciate that you say there's evidence to suggest there's a bubble, because we can't definitively say whether or not we're in one. I think the market would react pretty quickly if everybody agreed and understood that we were in a bubble. Jared Bernstein and I argue in our New York Times op-ed that the evidence points towards it being much more likely than not that we're in a bubble. Now, there is, of course, revenue being generated from AI. I think OpenAI just announced the other day that they're heading towards $20 billion in revenue, which is a very real amount of money, and you can obviously see productivity gains from different types of firms, from different individuals. I, myself, have experienced productivity gains. I do coding a lot of the time as an economist, and I use AI a lot for that. The question isn't, "Are there real revenues there or is there real productivity there?"

The question is, "How much is that revenue and productivity gains relative to what's being priced into the market right now?" There's four firms, Microsoft, Google, Meta, and Amazon that have collectively spent $335 billion in CapEx and probably 50 to $100 billion in R&D. That includes compute costs and paying research, and all these things in the past year alone. We're looking at, let's say, $400 billion of expenses over the next year. I went through and tried to understand, what are the revenues from these four firms from AI-related products? It turns out they're closer to 15 to $20 billion. Again, this is excluding OpenAI, which has $20 billion in its own revenues. You're looking at, next year, they're going to spend close to $400 billion and they're making off of actual AI stuff $20 billion. That's a pretty big mismatch. Now, of course, a lot of the gains to AI, which I agree with, are going to occur in the future, not necessarily right now, but then the question is, how much longer in the future are they going to accrue?

For example, if I said, "Justin, I have a great investment for you, it's going to pay off $100,000. How much are you willing to pay for that?" You might be really excited and say, "Okay, I'll pay $90,000 or $95,000 for that and get $5,000 in return." But then if I said, "Justin, this investment's going to pay you $100,000, but 50 years from now is when you're going to get it," your valuation, how much you'd be willing to pay to get that $100,000, is probably going to be quite a bit lower. What we know about productivity-enhancing technologies, particularly revolutionary ones, whether that's the printing press, going all the way back to the 15th century, or the electrification of the factory floor, or the internet, or railroads, any revolutionary technology we've seen in the past, is that it takes a long time for them to diffuse their benefits throughout society.
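To put rough numbers on Cummings' illustration: under standard discounting, a payoff F received t years from now is worth F/(1+r)^t today. Assuming, purely for illustration, a 5% annual discount rate:

\[
PV = \frac{F}{(1+r)^{t}} = \frac{\$100{,}000}{(1.05)^{50}} \approx \$8{,}700
\]

The same $100,000 promise is worth roughly $95,000 if paid a year from now but under $10,000 if paid in 50 years, which is why the timing of AI's payoff matters so much to today's valuations.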

Right now, we're seeing adoption numbers that are good, but it's still going to take time to diffuse, not only throughout the economy, but within firms figuring out how to use it best. You might adopt it, but you still don't really know how to use AI. Are your HR people using it well to do recruiting? Are your finance people using it well for their forecasting? These things take time and tinkering even if you have the adoption. Right now, you look at the stock price valuations, and they're as high as they've been for the technology sector since the dot-com bubble. You look at the amount of investment that's going in, and then you look at the revenues and the productivity increases that you're seeing. Again, they're not zero, $20 billion is not nothing, for example, in the case of OpenAI, but they say they want to spend $1 trillion before the end of the decade on building out data centers and getting more compute and R&D, and all these things.

Unless their revenue catches up to that, and $20 billion to $1 trillion in five years is a long way to go, then we might say it's a bubble, because things are just priced in excess of the actual profits that they're going to accumulate.

Justin Hendrix:

Sarah, I'll bring you in there. The other thing you often hear about what distinguishes this investment period from the dot-com bubble is that the handful of incumbent firms Ryan mentioned, which are making the lion's share of this massive build out of data centers, et cetera, are incredibly profitable, and that they, for the most part, are spending against those profits and not wagering on the future in quite the way that firms were in the dot-com period. Although even that narrative seems to be breaking down a little bit. We seem to be seeing more reporting about the degree to which players are assuming debt in order to build out data centers, perhaps not funded directly by those incumbents, in the more general march to build out the infrastructure and be part of the big cycle. We're seeing lots of other players that maybe aren't in quite the same position as a Google or a Meta take on debt to do that. I don't know, what do you make of that argument at this point, Sarah?

Sarah West:

Yeah, I think it's a really important question to be asking, because you're right to state that the first wave of capital expenditures for this version of building AI has been largely funded from cash on the books of the hyperscalers. That's Google, Amazon, Microsoft, companies that run their own cloud businesses and are also building and deploying AI tools within their existing ecosystems and striking deals with AI development firms like OpenAI. There has been a transition in the last few months. I really think the Oracle deal was one critical juncture, where you saw a shift from cash funding data center investment to debt. Oracle struck a huge deal with OpenAI, but it doesn't have the same cash that the hyperscalers have. Oracle is going to have to turn to credit and take on debt in order to finance the contract that it's struck with OpenAI.

With that shift to taking on debt, we've also started to see the emergence of these circular financing structures, like OpenAI's deal with Nvidia, where Nvidia is writing OpenAI a check for each gigawatt of capacity it brings online, but OpenAI is purchasing Nvidia chips in order to do that. OpenAI has built out similarly structured deals with AMD, and there's a circular deal with CoreWeave. We're starting to see that almost conspiracy-theory string-board structure becoming more endemic in the market, and that's where it starts to feel much more speculative. It's where you start to worry, "Is this a bubble that, when it bursts, if it bursts, leaves behind a bunch of infrastructure that can then be used for other purposes?" This is what really tips the scale into a bubble that could have more widespread fallout that really impacts people at the end of the day, because a lot of these deals are also structured in ways that leave local utilities and taxpayers holding the bag.

I think the last point worth making here is that the bubble conversation has not led to a meaningful interrogation of, "Why do we need to build out these infrastructures at such high levels of scale?" Building at scale makes a whole lot of sense if you're a cloud company that's raking in profits from the deployment of technology and software that runs on your ecosystem, because you're going to be bringing in income from the deployment of AI at scale that needs the resources you own and control. It's a whole different value proposition to be building out AI at scale that's financed through these very precarious structures. There's, I think, a lot of evidence that small, narrow models just work better, and they have fewer consequences in terms of environmental harms and the rampant collection of data to train them. There are all of these reasons why I think we need to be asking that question more. Do we need this in the first place? And if we do, does it need to be built out at this large, bloated scale? Who benefits from that model and structure, and who's harmed by it?

Ryan Cummings:

Just on this point, as an economist, I find it almost triggering whenever people make this argument that it's financed out of earnings, so it's different. There's a very famous paper from 1958, which economists generally think of as almost the beginning of corporate finance, and it's the Modigliani–Miller theorem. What it says is that it doesn't really matter how a firm finances investments, whether that's equity or debt or earnings. What ultimately matters is the amount of profits that come back to shareholders. Now, we know this is wrong in a lot of ways, and I say it's the beginning of corporate finance because the theory breaks in a lot of ways. It's preferable for firms to finance things out of earnings, and then debt, and then equity, and things like this, so there are a lot of important caveats. But as for the general idea, whenever you hear at this scale that "it's less of a bubble because of the way they finance it," it's crazy, because at the end of the day, the ultimate thing that matters is the profits that accrue back to investors.
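For reference, Modigliani–Miller Proposition I, from that 1958 paper, states that under its idealized assumptions (no taxes, no bankruptcy costs, no information asymmetries), the market value of a firm is independent of its capital structure:

\[
V_{L} = V_{U}
\]

where V_L is the value of a levered (debt-financed) firm and V_U is the value of an otherwise identical all-equity firm. The caveats Cummings notes are exactly the real-world frictions under which the proposition breaks down.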

If they're financing out of earnings, it doesn't necessarily mean that if the bubble bursts and their share prices collapse, they're going to go bankrupt, right? But if, at the end of the day, you spent $400 billion in a year and you only got back $50 billion, that means there's $350 billion that did not come back to shareholders, so the prices of those shares have to adjust downwards, because that money was vaporized. Whether that came out of debt or out of earnings matters for the solvency of the firm, but for an investor sitting there, if you took on debt or spent earnings, or whatever, and you spent all this money and none of it came back in profits, it's not really going to be all that important whether it was financed out of equity or debt.

Justin Hendrix:

I want to bring you in, Brian. You just spent some time with two other economists who have recently written a book about bubbles, Brent Goldfarb and David Kirsch from the University of Maryland, who published a book called Bubbles and Crashes: The Boom and Bust of Technological Innovation. You spoke to them in particular about their framework for understanding whether a bubble exists. There are four principal factors, you say: the presence of uncertainty, pure plays, novice investors, and narratives around commercial innovations. Maybe just give us a brief capsule on what talking to Goldfarb and Kirsch told you about the nature of this potential bubble. I might start to push the conversation towards what Ryan was getting at a bit too, which is the implications for investors and what it might mean even for regular people if this bubble bursts.

Brian Merchant:

Honestly, I had started looking at this framework that Goldfarb and Kirsch put forward a year-plus ago. I was surprised that it hadn't been brought up more frequently as a means of trying to understand what's actually going on here. To clarify, what they aim to do is create a historical framework, so they go back and look at 58 different tech bubbles and tech booms and try to assess them on the grounds of whether each became a bust, whether the bubble was inflated to a large degree or whether it was just a boom that didn't have any catastrophic economic impacts. Yeah, they come up with these four different indicators that you mentioned: the pure plays, the novice investors, a coordination or alignment of narrative, and then the uncertainty that is present in tech innovation.

Obviously, we went through each one of these. I did it independently, and then I reached out to Goldfarb, to Brent, and we went through them together, and it was pretty conclusive. It was pretty overwhelmingly evident that, in each of these cases, some a little more than others, we at least have the elements necessary for a bubble. The conditions for a bubble are ripe, and the two biggest ones, of course, are the uncertainty... The uncertainty with AI is off the charts. As I quote Goldfarb in my piece, he says that, "Usually by now, a few years into tech innovation, you have some idea what it's going to be useful for in terms of generating revenues and business models, and right now, it's still relatively unclear." As Ryan was saying, there are definitely some things that it's useful for, right?

It's pretty clear that some coders like to use it to generate the rote parts of code, and that it can create some productivity gains in some spheres, but again, there's the uncertainty that shrouds the question of turning that into a viable business model, and as Goldfarb points out, some of the key things still aren't being priced in, like energy costs. Looking at the balance sheets, these firms, when they're talking to partners or investors or consumers, or God knows who, are very likely still not looking at the price of the energy and the resources and the compute being used. There's very little hard analysis going into, what's going to be the business model 10 years down the line? What does the robust business model look like? It's still a lot of just gesturing towards, "Well, we're going to have AGI. Well, it's going to do this. Well, it's going to be this immense generator of productivity and profits in the future," and very little actually drilling down into the numbers for a lot of the leading firms.

You have this immense level of uncertainty, in other words, that's shrouding everything. Historically, when you have both uncertainty and a long timeline, that's a bubble red flag right there too, for sure. The two bubbles that wound up being the closest corollaries through our investigation were broadcast radio and aviation, both of which were big booms in the '20s, and we all know how the boom in the '20s ended. But you had technologies where it was like, "Whoa, this is amazing. This is going to change the world somehow." Especially with radio, it wasn't clear. It was like, "Is this going to be marketing for department stores? Is it going to be live plays that people can listen to?" There were so many questions about the business model, what does a business model look like? There was uncertainty for years and years and years.

Aviation too. You had this big event where Charles Lindbergh makes his famous flight, and investors all over the world say, "Wow, aviation is great. We need to invest," so they invest in tons of aviation companies, and at least then it's clear. It's like, "Well, eventually, we're going to move people from point A to point B quickly, and maybe some cargo or whatever." But again, it's still unclear what the business model is, how that's going to work out, how that's going to provide a return to investors, so that took a while too, and that fed a big boom. There was a ton of uncertainty and a ton of the other ingredient that is off the charts here, which is the coordinating narrative. That's what Goldfarb and Kirsch say: when you have a coordinating narrative that fires a starting pistol and gets investors from all different walks on board saying, "This is the future," then you're in dangerous territory, and we saw this in droves, right?

With the launch of ChatGPT, here you had a tech demo, something the industry hadn't had for a number of years, a tech demo that actually works, that excites the general public, that gets everybody on board after years in the wilderness with the metaverse and NFTs and crypto and things that worked for niche audiences. Now you have something that everybody agrees upon: "This looks like the future." You have the huge amounts of uncertainty, you have this coordinating event that launched with ChatGPT, and it quickly becomes AGI: "We're going to build this everything automator, we're going to build this superintelligence," and that is enough for investors. Even if they don't necessarily believe it, they at least don't want to be the one holding the bag if it happens and they haven't invested in it. It coordinates further and further investment.

You have pure plays, companies like Nvidia and OpenAI. If OpenAI becomes a public company, that'll be a massive pure-play investment. Nvidia has, for all intents and purposes, become an AI company. It's where people invest if they want to invest in an AI company. There are other ones, like CoreWeave, coming online too that have had IPOs. A pure play is basically a company or firm whose fate is tied to the technology. Then, yeah, novice investors. Everybody's got their Robinhood apps now; everybody can get in on the boom if they want to. Maybe the pure plays and novice investors are not quite at the gargantuan scale that the uncertainty and coordinating narratives are, but those two are off the charts and the other two are present, as far as their framework is concerned, as Goldfarb said, on a scale of zero to eight.

Justin Hendrix:

I want to come back to this idea of the coordinating narrative in particular and bring in the policy dimension of it as well. But first, Ryan, I just want to ask you: Brian's conversation with these economists suggests a big bubble, not only against these different dimensions that they're measuring, but also potentially in its overall size. In your New York Times piece, you suggest, and I don't know how comforted to be by this, that if it were to burst, it wouldn't be as big as the burst of the housing bubble, the financial crisis. Think about that: that's not saying much; we're still living with the political and economic consequences of the financial crisis in many ways today. What gives you some, I don't want to say "hope," but maybe reason to believe that, if this bubble bursts, it wouldn't be as severe as what we saw in '08?

Ryan Cummings:

Yeah, the plain and simple of it is that most Americans' wealth is their home, so when the housing bubble occurred, you had the value of people's homes decline by 10, 20, in some areas, 50 or 60%. Now, people's livelihoods are tied up in the stock market as well, with their 401(k)s. For example, since the launch of ChatGPT, 50% of the growth in the stock market, in the S&P 500, has come from the Mag 7, which are the AI-exposed firms. If there's a bubble and those firms' share prices decline, that's certainly going to have a negative wealth effect. We talk about this wealth effect, meaning you now have to save more for a given level of retirement, so you're going to pull back your consumption, and that has downstream consequences throughout the rest of the economy, because you're not buying cars, you're not going out buying clothes, you're not taking vacations, things like this.

There definitely will be reverberations throughout the economy, but that housing bubble was particularly unique, deep, and long for the following reasons. One, everybody's wealth was tied up in their home, and two, all of this was jammed into the financial system through all of these complicated, arcane financial products. That had the result that, when people's home values declined, the assets on the balance sheets of every single bank in the world declined dramatically as well. Then that had the added effect that, at the same time people's wealth was declining, if you needed to go out and borrow money, the provision of credit had completely frozen, not only in the US but obviously in Europe as well, pretty much globally. The impact of the great financial crisis, or the GFC as economists call it, was such that it took almost a decade for us to get back to the pre-crisis trend level of growth.

We really didn't see that until we came out of COVID; it took that long because of those consequences. Now, with the AI bubble, there's certainly going to be a lot of wealth loss. As I just mentioned, if you're financing things out of earnings, those firms are going to become less profitable; they're going to have less profit to give to shareholders, who would then spend it on different things or put it into another company, another form of investment. I think it is likely, if we see a correction, as they say, in the stock market that's pretty enduring, that we'll see prices decline by 20, 30, 40%. 40% would be really bad, but if we see these large drops, I think the corresponding wealth effects would lead to a recession. But again, it's not going to take a decade to figure out, "Okay, how do we re-scramble all this investment?" Households are going to be harmed, but then, ideally, the stock market tends to go up over time.

That's the history since 1930, pretty much. The stock market will rebound and figure out, "Okay, maybe we were off on the level of profits from AI, or on which firms are actually going to benefit from AI," and then prices will eventually recover, I would think. I don't think it'll be as widespread and as bad. I know it's almost a trite analogy at this point, but the dot-com bubble bursting in March of 2000, and the corresponding recession in 2001, 2002, I think it's closer to that, maybe a little worse. Obviously, it depends on a lot of factors, including the administration's ability to direct the economy in a time of crisis, which I personally do not have a lot of confidence in, and that can make things a lot worse. Yeah, I don't think it'll be as catastrophic, but every recession is, in its own way, catastrophic.

People lose their jobs, and we know that when people lose jobs, some fraction of those people are going to die. It is, quite literally, life and death. But if you're thinking about the relative scale, I don't think it'll be as bad as the financial crisis.

Justin Hendrix:

Before we move on to this idea of the coordinating narrative, and I'm thinking about that not just in terms of the investment, but also the broader political narrative around AI, and that's where I want to bring Sarah in: do you want to say anything in response to what Ryan's just said about the potential scale? It occurs to me, one thing you didn't talk about there, Ryan, is the implications for energy markets and other things that feel very exposed in this AI infrastructure build out.

Sarah West:

Yeah, I think if you look at a wider array of factors around the decision making that's shaping this push to roll out AI at scale, there are other dimensions worth underlining here. For example, we're seeing a real push to build out energy infrastructure to power these data centers that's extending the life of coal plants. We're ramping up reliance on gas turbines; for example, the Colossus data center in Memphis is powered by gas turbines, and similarly the new one that's being built out in Texas. Some of the decision making around energy is pushing back the transition to renewable forms of energy. There's a constellation of factors there that I think are really important to take into account, and there are other dimensions too. Some of this discussion treats AI as though it's a market that emerged out of healthy competition between players.

But I think the reality is that there already was a high level of concentration among the tech firms that have been powering forward AI development. The market is structured in ways that are going to lead to their own benefit, and one of the areas that I'm most worried about is that we're seeing deepened concentration within the sector and deepened ties between AI firms and government in ways that are becoming really worrisome. I think that came to the fore over this last week, when OpenAI's CFO, Sarah Friar, was in a conversation hosted by The Wall Street Journal, and she made a passing reference to government providing a backstop for OpenAI's financing of its data centers, which they tried to very quickly walk back, but they'd already published letters calling for federal guarantees for loans. Across a number of dimensions, we're already seeing the government stepping in to backstop this industry and to treat it as a strategically significant technology, and deepening the closeness between government leaders and the heads of these companies creates deepened not just economic power, but also political power.

Justin Hendrix:

I think this gets at the question that I wanted to get to next, which is, to use the language Brian's been using, this coordinating narrative idea. This is the idea of the future that, literally, almost every government has adopted, almost every company has adopted. This is the thing we're all pointing towards: artificial intelligence. In your last Artificial Power landscape report, Sarah, you pointed to this idea of the AGI mythology, AI's false gods, the arms race with China. These are the narratives that we're all more or less operating within on some level, even though some folks on this call might spend some time trying to raise skepticism about them. If the bubble bursts, to what extent does the narrative fray or come apart? Or do governments demand that it stay intact? Do leaders like Donald Trump, who have staked so much on it and whose political supporters have staked so much on it, demand the narrative stay intact?

Sarah West:

Yeah, we've been in this moment where there's been this collective blessing of the AI industry as deserving of particular treatment, even as, in the market, we haven't seen clarity on how adoption is going to become integrated, in a deep way, into how firms behave. We don't have decent substantiation of what productivity looks like for firms that are adopting. There are some really trenchant challenges around security, around the propensity for hallucinations, these models just making stuff up, some really big hurdles that could prevent the technology from ever fulfilling those dreams. I think what's especially worrisome, in terms of the dimensions of the fallout that wouldn't be captured by a standard treatment of the AI bubble, is: what are we not investing in as we focus wholesale on this industry?

We're spending a lot of time talking about AI's ability to cure cancer, even as funding is being pulled from the NIH, from the NSF, from places that we would be able to... We may be trading off real moonshots, real things that are ultimately going to lead us on a path toward developing innovation that pays dividends for the public at large, in the name of going all in on this one... It seems like a really risky bet for all of us. Honestly, even for Mark Zuckerberg and Sundar Pichai, the folks who are investing because they want to protect their moats, it's maybe a little bit less risky for them, but it's really risky for the rest of us to be making these trade-offs.

Justin Hendrix:

Brian, do you want to jump in there? Because you also bring up, for instance, the AI will cure cancer or AI will automate all jobs narratives, and the implications of those potentially coming apart.

Brian Merchant:

I'll say that one thing that's maybe worth layering in here is that we've reached a point that is unprecedented on a number of levels in terms of the scale of the argument that the AI companies in Silicon Valley are making and the narrative they have coalesced around. Silicon Valley has talked a big game since the beginning; it's used grandiose metaphors and terminology to sell everybody: "It's going to change the world, it's going to do..." This may yet be the peak of all of that, right? "AGI is the technology that's going to automate all jobs. It's going to replace human interaction, it is going to be..." They're selling everything. It is the everything machine. AGI is the promise of doing everything. Yeah, governments that are susceptible to wanting a piece of that, of course, are going to find ways to prop it up, and so will ones that already find it useful.

We already know that the Trump administration, especially, has found it quite useful on many fronts, as much as anything to produce content, some might call it that, others might call it propaganda, for its Twitter feeds and for its public-facing social media tools. It has found an immense propensity for using this stuff in the political arena. It also, I think, fully buys into the narrative that you're talking about, where it's a geopolitical struggle that it needs to win. Again, this is another arena where it's unclear how much actual buy-in there is, when Eric Schmidt goes out there and says, "We can't lose the race to China," right? Or JD Vance says, "AI should be a tool of American dominance." They may really believe it, or it may be a useful vector through which to continue to concentrate investment and interest and consolidation in the AI industry and in the tech industry.

The sheer bigness of this, I think they need that framework to continue if they want to continue this trajectory to the extent that they can. They've already had SoftBank sinking tens of billions of dollars into this, they've already had some of the most enormous deals on record for the tech sector, but if they're hoping to continue the build out of data centers, of compute, of infrastructure at this rate, then they kind of have to continue with this narrative, because you can't really... There's nothing else, right? You're already at the top, promising that AGI will do all this job automation and all this other stuff. This moment is so unusual, because we did just see the Trump administration take a 10% stake in Intel. It's willing to do this to an extent that feels a little bit different. It's more of a wheeling-dealing sense, and certainly, I think if OpenAI were threatened financially or economically, it's not hard to imagine the state taking some stake in the company or, if not, backstopping it outright.

Trump loves his deals, so maybe it would be more complex than that. But that is to say that there is a lot resting on this framework continuing. I think if you start to pull the Jenga blocks out, that could be something that contributes to a deflation at least, or an unraveling or a crash at worst, because you have been promising for years and years the everything machine. It can do it all, the AGI, the superintelligence. If that's not going to emerge, if that's going to fall short of the levels of investment that we're seeing and the levels of return that people are expecting, then yeah, at this point, one of the most important things is finding ways to carry on that narrative, or at least to reconstruct it, re-construe it, and try to find vectors to continue to promote it.

I do think that's one thing to watch. We've had Sam Altman trying to say, "Well, it's not really that important whether or not it's AGI," and, "Yeah, it's a bubble, but..." There are some of these deflections going on, and meanwhile, you have Nvidia and Jensen Huang going full steam with the same narrative. I think that's just one of the key things to watch, yeah.

Justin Hendrix:

I want to talk about the potential policy implications if this thing bursts, and I might start close to home, Ryan, with economic policy implications. Brian made mention of some of the interesting industrial policy of the Trump administration to try to encourage the AI industry, or to get the government somehow involved in the potential upside. We've even seen Trump himself essentially go on sales calls in the Middle East with AI executives and variously promote the industry in different ways, or at least do its bidding. I don't know, what do you think? If the bubble bursts, how does it change the economic policy landscape in the United States and perhaps abroad?

Ryan Cummings:

Yeah, so I think it goes back to what I was saying: I don't think the recession will be as bad, but given how the administration seems to coordinate economic policy, I don't know how well they'll be able to manage it. The thing I particularly worry about is the cause... This is something our director here at SIEPR, Neale Mahoney, says: "With AI, in a sense, we're in a Rawlsian moment," where we don't know what exact problems AI is going to create. There are obviously a lot of potential benefits, potentially curing cancer and the stuff we've talked about, but in terms of, is it going to displace people, is it going to complement people, what are going to be the downsides for people's economic futures? We don't know exactly what those problems are, but we know the solution, which is to have a broad social safety net that will catch people as they fall through the cracks.

With the recession, if there's a bubble and it causes a recession, it's the same idea. You really don't need some custom AI bubble-bursting policy. What you need is a really robust social safety net to ensure that people who are out of work temporarily, particularly through no fault of their own, get robust unemployment insurance, that people who are hungry get food assistance through SNAP, and that people have access to healthcare coverage. Those things are important morally, one, because we shouldn't have people starving or going bankrupt because of healthcare, but they're also important economically, because they support consumption. They effectively put a floor under how low the economy can go, because if you have all those basic necessities covered, you might be able to consume a little more, which is going to help stimulate demand and get us back on an expansion path.

Unfortunately, what we've seen with the Trump administration is that, even in good times, they're trying to roll this back. Obviously, with the shutdown, they were illegally withholding SNAP funds, and the whole point of the shutdown was that people on the Affordable Care Act who are receiving premium subsidies are seeing their healthcare premiums double or, in some cases, triple. On these key measures, which help sustain the economy even during a downturn, I think their policy has shown they're more likely to accelerate the rollback rather than decelerate it. What I would anticipate, to Brian's point, is that they'll double down on this. If AI prices fall, they'll try to start up the hype cycle again with the idea of, "We just got to get these stock prices to recover." But whenever the market's going through a correction, this is what economists think of as share prices being efficient, right?

"We thought there would be a lot of profits and now we're starting to see there's not as many profits, so the share prices need to lower." That's not necessarily something you want to stop. You want to stop is you want to stop the suffering that occurs from that. It would be great if the market could adjust in the share prices and then people were nowhere off. That's the ideal. But with the Trump administration, I think their approach would be, "The first order thing is getting the share prices to go back up, and the second order thing is taking care of people's basic needs." That's a dour response, but I just don't have a lot of confidence in their ability to coordinate an effective response. That's the other thing, I was in the Biden administration for two years, particularly whenever Russia invaded Ukraine, and that required a whole of government response.

Everybody from every agency, at all hours of day and night, coordinating, thinking, "What are we going to do? We're in a full-on crisis. We have to carefully coordinate. We have to be rowing in the same direction. We have to be speaking in one voice. We have to give people confidence and hope that we're able to manage this situation." The Trump administration broadly rules by tweet, or Truth Social, or whatever platform the president's using, and the agencies don't agree with each other. If we're in a moment of economic crisis where people are rapidly losing their jobs and their incomes are going lower, is Trump going to say, "Well, great news. The ballroom is completed"? These are not things that are going to help. Unfortunately, I think we're in a situation where it's more likely than not the administration will accentuate the degree of the downturn rather than reverse it.

Then one final point on this: even if they wanted to do what Democrats would typically do, which is, "Hey, let's issue more debt, let's have an expansionary fiscal policy, we'll do some investments," some of this Keynesian economics for people who are familiar, even if they wanted to do what I would say is the right solution, they're more limited in their fiscal space. We have a lot more debt than we had coming in, and interest rates are higher. Now, obviously, they're trying to get the Fed to cut interest rates, but it's just going to be mechanically more difficult to do.

Justin Hendrix:

Sarah, I want to bring you in on the potential tech policy implications. This whole area of tech policy is also dominated by this uber narrative of "AI is the future, AI is the next thing." Of course, your institute focuses primarily on artificial intelligence, but so does much of the rest of the tech policy world, civil society, every government agency around the world that focuses on these questions. There are so many global confabs now around AI and AI governance. People are just now gearing up for the next big summit in Delhi in February. There are things happening in the G20, things happening in the UN. If this bubble bursts and the narrative frays or comes apart, does it create any opening in the tech policy space? Does it create perhaps an opportunity to introduce new protections, new regulations? Can you see the general coordination across all of that activity across the globe changing?

Sarah West:

I think the realistic picture is that we're not well placed to execute the moves that are needed to really meaningfully change the picture, but where I'm optimistic is that there is an opportunity to lay the groundwork for a more innovative economy, an economy that's ultimately going to put serving the broader public benefit ahead of protecting the positions of the large incumbent firms. Just to step back: what got us here is that we've landed on a version of AI where the benefits for the broader public are still in the speculative realm. We don't have clear use cases commensurate with the value being placed on this technology, and what's most clear is the benefit to a few firms, cementing the position of the cloud infrastructure companies, which are already very large companies.

We landed here because of policy moves that have helped the position of those firms. What would get us out of it is really strong and robust antitrust enforcement, particularly at the cloud layer, that gets rid of the incentive structures for deep vertical integration and for deepening the concentration of power in a few firms, and that sets the groundwork for a more broadly competitive, innovative economy. Then, thinking through industrial policy moves that are going to correct the unhealthy dimensions of this market and incentivize broad public benefit rather than just pure shareholder value. If we're already tipping the scales, let's tip the scales in ways that are going to have broad benefits for the public at large. Let's not invest heavily in technologies whose primary use case is likely to be justifying job displacement, devaluing work, and eliminating the value placed on creativity and craft, and instead deeply invest in those areas of basic research and innovation that are going to pay dividends for the public first. I don't know that we have the political climate that's going to galvanize around that, but on my more hopeful days, that's where I think we need to be heading.

Justin Hendrix:

Sarah, are you encouraging others who are reform-minded or reform-oriented in the space, others in civil society, others in academia, others in policy, to make a bubble burst plan? Is that something that folks should be doing right now?

Sarah West:

Yeah, I think very much so, and I think we can be putting a finer point on, what is a version of AI in the public interest that really puts the public interest at the center instead of having it be like a gloss on a market that's really incentivized elsewhere?

Justin Hendrix:

We just have a couple of minutes left. I want to come to each of you and just ask you about some signal you're watching that will tell you how things might play out. Brian started by saying, "Of course, we can't predict, we have no idea what might happen," but what are the signals that you're watching that might tell us how things are playing out? Brian, I'll start with you.

Brian Merchant:

I'm not an economist and I'm not a financial expert either, so I'm not the best person to ask about crystal ball stuff here. But I will say that the point is taken that the bubble may not burst as violently or do as much damage as the financial crisis, which a lot of us have already lived through and are still living with the legacy of in many ways. But there is a different dimension here that's worth pointing out, and that is that we have had AI and tech CEOs stepping up to the microphone saying, "Our technology is going to replace 50% of jobs. This is what we want to do, we want to automate. We want to rewrite the social contract, and we want to do it all very clearly for our own benefit. We've gotten immensely wealthy and powerful." And guess what? People hate them.

People are angry at them to a degree that is, again, unprecedented in my tenure as a tech journalist. I've been doing this for 15-plus years or so, and yeah, people weren't crazy about Mark Zuckerberg or Elon Musk before, but now there's a level of anxiety and anger building towards these figures and also towards the technology they are producing, AI, right? Some people find it productive, some people find it useful, other people want to burn it to the ground, and there is a broad public sentiment that is, at the very least, anxious or skeptical about it. There are many more people in the working class who harbor a specific anger towards AI, and when you catalyze all of these things, if you do in fact have a bubble burst that is caused by the tech industry, at least in part, or that the tech industry is at the lead of, that's where the narrative is going to be.

I do think that there is going to be a difference in the way the public processes and reacts to that bubble bursting, so there could be a lot more anger. There was certainly some anger at the financial crisis, at the investment banks that got bailouts when nobody else did, but it was a little bit different. We live in angrier times, more polarized times, more politically activated times in some instances, and as Ryan was saying, we have no reason to believe that this administration is going to ameliorate the problem in any way. If anything, I might expect it to heighten political tensions and potential conflicts, so I would also be preparing for that, both for people who are interested in seeing AI serve the public more, or a tech industry that serves the public instead of serving as an excuse to concentrate profits, but also for organizing. Labor might be interested in preparing for that moment as well, to try to push back when and if this bubble bursts and leaves these firms more vulnerable. That's what I would say.

Justin Hendrix:

One signal is, watch the people and watch their anger, and the extent to which it might reach that boiling point. Ryan, what about you? What are you watching?

Ryan Cummings:

Yeah, so I'm actually going to be looking at the earnings calls, and less at what the companies themselves are saying than at what the analysts are asking, particularly if they start asking more and more about the revenues and profitability. So far, what we've seen is they've given these softball questions: "Hey, great quarter, guys. You're doing all this AI investment, that's great. We want you to do more investment." But when analysts from the banks and different funds start going on these earnings calls and saying, "Okay, you guys spent hundreds of billions of dollars, when is it actually going to accrue to us, the investors?" that's when I think things might start to tip a little, because that says to me the investors are getting less patient. They've been promised a huge payoff.

They said, "Hey, we're willing to wait some amount of time," but once they start asking more aggressively, "Hey, this seems like this actually isn't accruing as fast or as big as you're saying," I think that might be pretty indicative that we're at tipping point, where people are no longer believing this hype narrative that Brian exposed on very well, and they're starting to actually think, "Actually, maybe we've overdone it."

Justin Hendrix:

Sarah, last word to you, what's the signal you're paying attention to?

Sarah West:

We just had a big mayoral election in New York, and gubernatorial elections in Virginia and New Jersey. We're starting to see places where AI is emerging as an electoral issue. The data center pushback was potentially significant in New Jersey and Virginia, and in Georgia as well. So in addition to looking for more scrutiny from the financial sector, I think scrutiny from voters, and AI emerging as something that political candidates are engaged and activated on, is the other dimension, because that's one place where we're most likely to get real, meaningful policy change.

Justin Hendrix:

We are out of time. This has been a fantastic conversation and I appreciate the three of you joining me, and hopefully, we'll have the opportunity to come back and talk about these dynamics when we have more data as time unfolds. Sarah, Brian, Ryan, thank you very much for joining me.

Ryan Cummings:

Thanks, Justin.

Brian Merchant:

Thanks for having us. I look forward to doing the follow-up in our barrel suspenders after the-

Ryan Cummings:

Yeah, from our tents in the background where there's an oil drum fire. Yeah.

Justin Hendrix:

If that comes to be, I will certainly bring the microphone and talk to you all soon.
