Future Fatigue: How Hype has Replaced Hope in the 21st Century
Eryk Salvaggio / Mar 26, 2025
Eryk Salvaggio is a fellow at Tech Policy Press.

SUN VALLEY, IDAHO - JULY 11, 2023: Sam Altman, CEO of OpenAI, speaks to the media as he arrives at the Sun Valley Lodge for the Allen & Company Sun Valley Conference. (Photo by Kevin Dietsch/Getty Images)
If you type "Artificial Intelligence" online, somebody will get angry. Perhaps you have dared to describe AI in contrast to one of its many bloated definitions ("So you hate medical research?") or countered the belief that tech is apolitical. That's social media, after all. Social media encourages us to reduce complex human beings to whatever recent utterance the algorithm asks us to evaluate.
The pro-AI influencer relies on anecdotes ("a programmer colleague," "a friend of mine who teaches," "people I've spoken to in the industry") to create hype and generate excitement about the technology and the future it is said to promise: a world where, as OpenAI CEO Sam Altman puts it, "we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone's lives can be better than anyone's life is now." The anti-AI brigade, meanwhile, too often collapses every AI system into the admittedly grotesque techno-capitalism of the current generative AI regime, at worst becoming hostile even to suggestions that it could be done differently.
It's tempting to look at AI hype as simply an inflated valuation of a tech sector driven by opportunists. But that valuation is not just for investors riding a bubble. It rests upon a network dedicated to convincing us that AI is essential infrastructure. Some degree of hucksterism is involved, promising vast transformations of society on an imminent but unknowable time horizon.
Yet a resistance to AI that cultivates ill-informed positions and limits our technosocial imagination is not much better. Some suggest that the market for AI will fail, curing the fever (investor, heal thyself!). This underestimates the extent to which hype catalyzes not merely a set of investments but also a set of priorities and imaginations that justify those investments. Markets will rise and fall, but the ideological hype of AI can remain stubbornly entrenched. Driven by a reasonable cynicism about policy failures, extremists of the anti-AI movement view reform as submission and the abolition of all AI as the most likely path to preservation.
Hype of both forms may share a common root: a loss of faith that democratic politics can achieve anything worthwhile. Because democracy is made of people, extreme anti-AI hype declares reform impossible at precisely the moment the value of the human (and of human politics) is under attack.
Utopian or dystopian, both extremes of AI hype are united in one tragic sense: they are responding to the only vision of a future anybody is offering at all. Casting these technologies chiefly in terms of future endpoints and trajectories, whether faith in market crashes, in AGI, or in dystopian or utopian narratives, is itself part of the hype, and it muddies critical work in the present. It is a form of wishful thinking, a deus ex machina, that should not displace the real work of addressing immediate and concrete harms.
GPT-3 and COVID-19
Hype surrounds most of today's venture capital-backed technologies, not least cryptocurrency. But AI hype is especially potent, I suspect, owing to the social and political landscape into which it emerged. Large language models surfaced for the mainstream with GPT-3 in 2020 amidst global social turmoil, with divided politics, a continent on fire, and a global pandemic driving isolation and mistrust in institutions, expertise, policy, and other people. The launch of ChatGPT in November 2022 came at the end of the acute COVID-19 period.
For 48% of Americans, COVID-19 changed their relationship with technology in ways that persisted into 2025. As people were forced to engage with tech to reach basic human services — such as education and health appointments — information literacy (evaluating sources) was reduced to interface literacy (knowing which buttons to push). The pandemic was paired with a reckoning over social justice, with the Black Lives Matter movement spreading across the US. That election year, voters signaled an enormous unease with existing power, with changes taking place across the globe, regardless of the ideologies of ruling parties.
GPT-3 entered this anti-democratic stew, a perfect vessel for undermining the already diminished power of language. This undermining of a shared understanding of the origin and uses of language went side by side with the distrust of authoritative institutions, constant reinterpretation of law (an institution built on language), politicized pandemics, and polarized politics.
But for many, generative AI offered something else: a way of imagining a better future. Having lost all hope for a better future, we turned to hype. And unlike hope, hype can make you rich.
Hype Studies
On a recent visit to Spain, I attended a presentation by political scientist Jascha Bareis, who, with Andreu Belsunces Gonçalves of Tecnopolitica, Vassilis Galanos, Wenzel Mehnert, Dani Shanley, Ola Michalec, Pierre Depaz, and Isa Luiten, is setting forward a much-needed agenda for "hype studies" through a conference set for September 2025 in Barcelona (read the CfP). Hype studies as a field of research offers an overdue examination of the political theory of hype, including its role in AI as a substitute for political imagination.
AI hype asserts an apolitical posture, which its sycophants radically defend. However, it offers not so much a technological solution as an ideological vessel through which to imagine a world in which politics is no longer needed. Paired with hype, AI becomes a technological bypass around building issue-based coalitions with people we disagree with, rebuilding decimated institutions, or harnessing the political will to stop the planet from burning. AI's mythology promises to return the world to a fantasized state of stability and abundance without demanding any form of individual or collective compromise.
Sam Altman is a master manipulator of this particular story. His promises for AI are delivered with the utmost confidence — as long as we trust him to get us there. He is a soothsayer offering solutions to liberal and progressive concerns in ways that free-market capitalists can embrace, too. AI casts education, health, the environment, and even the arts as areas that are failing to rise to the future and must be radically reimagined through technology, so long as we invest in Microsoft and NVIDIA and wait for the training to finish.
Algorithms cannot deliberate on behalf of a democracy, and no technology is a replacement for institutions. But when the rejection of hype leads to a wholesale rejection of any related technology, it can also affirm, dangerously, the belief that public policy and deliberation have no role in shaping technology. Building movements against AI that pursue outright abandonment of technology rather than coalition building for radical regulatory oversight and participatory design is a distraction. It is a variation of the same symptom: a lack of faith that we can set the course of our collective future.
Public involvement in AI's development and implementation, through the work of elected representatives, is essential. Surrendering the tech to industry while denying the public the ability to control it through effective policy leaves AI to those for whom it is a tool for amassing power. Machine learning and automated systems can and do offer value to progressive causes; not everything is generative AI or predictive policing. Data analytics can be used to identify hate speech and create safer online spaces, optimize renewable energy distribution, predict wildfires, track endangered species, and prevent boats from crashing into whales. None of this requires the displacement of labor, nor does it affirm the anti-humanist tendencies for which generative AI is designed. Saying AI can be good is not apologism: it is a recognition that we must resist the most primal uses of the tech to consolidate power and think instead about our collective future.
Abandoning AI research to its worst uses leaves the space open to reactionary opportunists such as Elon Musk, with his vision of human subjugation to an autonomous government. What compels people to believe in AI hype is that it offers a vision of the future, albeit a political vision that should be rejected. This hype is often masked in utopian, humanist dreams of individualized education and universal healthcare, all delivered without taxing the hypothetically booming private sector to pay for it. I understand why people want to believe in it. I also understand why those who see through this hype may be skeptical of any claim about AI's genuine potential.
AI and Politics
This AI divide is less about partisanship and more about who believes in the industry hype and who believes in the reality of AI's risks and benefits. Musk and President Donald Trump offer support for workforce automation, pitching the government as a model sure to be embraced by the private sector. Senate Minority Leader Chuck Schumer (D-NY) has been a leading advocate for AI hype in the guise of flimsy AI regulation that harvests the fear that someone else (China) will build it first rather than asking what exactly we're building it for. UK Prime Minister Keir Starmer has seen in DOGE a model for the British state as an automated bureaucracy, even as he rejects Musk's flashier spectacle.
Yet a fascinating paper from Beatrice Magistro et al. suggests that AI and globalization hold a similar salience for American voters, with leaders of both parties embracing AI even as they shed their attachment to offshoring manufacturing jobs. Magistro writes: “Democrats’ strong support for AI reflects an alignment with constituencies that typically benefit from technological advancements—younger, more educated, urban voters.” The Trumpian, protectionist branch of the GOP had taken a more hostile approach to tech and labor up until now, which makes DOGE’s embrace of AI for dismantling government a powerful opportunity for a Democratic Party pivot.
While the hype-sustaining vision of Silicon Valley’s AI CEOs has penetrated much of the Democratic Party in ways that isolate it from the public it serves, these political lines are still in flux. Many on the right have had AI forced down their throats, and it does drive divisions; see Steve Bannon's complaints about AI and the rejection of Musk’s "techno feudalism." On the other hand, President Joe Biden’s Federal Trade Commission Chair, Lina Khan, created meaningful pro-labor, anti-tech regulations while cultivating a following of conservative fans.
There is an opportunity to dismantle AI hype and to focus on what AI is: a method of processing information, capable of creating statistically likely arrangements of text from a large corpus of language. The question is which human behaviors and data are appropriate to sort this way. AI can repair noise into approximations of the images in a dataset or predict words that are likely to appear beside one another. But mistaking these predictions for information, or suggesting the benefits outweigh the costs (or someday might), carries real risks of amplifying bad data science. AI tools are capable of following orders but incapable of replacing collective decision-making, and they cannot be allowed to do so.
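To make concrete what "statistically likely arrangements of text" means in the simplest possible case, here is a minimal, hypothetical sketch of my own (not drawn from any particular system): it counts which word most often follows another in a tiny invented corpus and "predicts" accordingly. Real language models are vastly larger and rely on learned neural representations rather than raw counts, but the underlying move, continuing text according to patterns in the data, is the same.

```python
from collections import Counter, defaultdict

# A tiny, invented corpus. Real models train on vastly larger collections of text.
corpus = (
    "the future is automated the future is uncertain "
    "the market is booming the market is a bubble"
).split()

# Count which word follows which (a bigram table, the simplest possible "model").
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict(word: str) -> str:
    """Return the most frequently observed next word, or a placeholder if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict("the"))   # -> "future" ("future" and "market" tie; the first seen wins)
print(predict("is"))    # -> "automated" (all continuations here appear exactly once)
print(predict("hope"))  # -> "<unknown>" (never appears in the corpus)
```

The sketch underscores the point above: the output is a frequency pattern drawn from whatever text the system was given, not knowledge, and nothing in the procedure deliberates about whether acting on the prediction is wise.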
There is no AI utopia. But dystopia might still emerge: not from the technology, but from a failure to acknowledge its limits. This dystopia emerges from the hype we use to frame bland mathematical procedures: myths of automated futures, myths of abolishing the labor force, and myths of unbiased statistical models replacing political deliberation. When hype structures our use of these calculations, it shapes the goals we set for them, imperils the tasks we choose to automate, and empowers those who control them.
Hype is not Hope
Hype is a tool for recruiting investors and limiting regulatory interference. But hype works for a reason, and this reason merits scrutiny: many Americans find democracy exhausting, a distraction from the work of their individualistic pursuits.
In Barcelona, Bareis presented a translation of the work of the German political scientist Ingolfur Blühdorn, who suggests that today's democracy demands belief, against all evidence to the contrary, that it functions. In its place has come a cult of the individual, in which citizens see the social safety net largely as an obstacle to individual wealth and, therefore, to personal liberty: a kind of “democratic fatigue.” Bareis suggests that AI hype has emerged in the context of such fatigue. Promising to automate a workforce for each of us, this hype, like crypto hype before it, reinforces a future of individualized economic power with a sexier allure than attending a town hall.
AI hype promises a workforce without pay and democratic fruits without the effort of planting. Where we have lost hope for coalition building, hype declares that AI can deploy rational solutions to irrational debates. Where we have lost hope in institutions, hype declares that AI can do the work of those institutions based on their data. Where we have lost faith in experts, the hype declares that machines can simulate expertise without the pesky intervention of informed moral or ethical positions. The hype has promised, in an era where we cannot rely upon each other, machines that do the work of other people.
The work of challenging this shared future is daunting. To resist AI hype, we must not reassert the fiction that we could “return” to a functional democracy. Functional democracy has never existed. It is a flawed, dysfunctional system that must constantly be steered, and we have to keep human hands on the wheel. Democracy does not function; it responds. We must cultivate a collective faith in a future in which we do the work of making democracy respond.