Perspective

We Aren’t Ready to Live in a World of Algorithmic Enchantment

José Marichal / Oct 16, 2025

A mural by an unknown artist in Brooklyn, New York, photographed in 2021.

Six years ago, I was a relatively content college professor at a small liberal arts college. I did (and do) find joy in using knowledge to help students enter, as philosopher Michael Oakeshott once said, “the conversation of (human)kind.” I read some. I wrote some. I graded some. I was on too many committees. But I also groused over the lack of tomato production in my garden. I struggled to form a B chord on the guitar. I tried in vain not to get injured in my Saturday morning “legends” pick-up soccer game. I built an unaesthetic but functional potting bench. I regularly lost to my wife and kid at Boggle.

Fast forward to today and I’m often suspicious and edgy. The world seems to be moving faster than I can cognitively grasp. I can sometimes feel myself getting a bit manic trying to make sense of arcane concepts like the “dark enlightenment,” the latest iteration of “mixture of experts” architectures, or the newest corporate tech merger or sale further consolidating an already consolidated industry.

As a political scientist trained primarily in public policy analysis, I don’t feel equipped to understand the gathering “AI storm.” When I was in graduate school, rational-choice theory was the primary approach to doing policy analysis. The assumption in these models was that humans were rational actors who could formulate preferences, conduct full information searches, evaluate options and arrive at an optimal outcome. If humans could do this, so could those who work in government. Policy analysis was hence the study of maximizing benefits and minimizing costs.

Fast forward a few decades, and most policymaking (including tech policy) is still rooted in the assumption of rationality. But increasingly, these assumptions seem absurd. How is anyone supposed to “conduct information searches” in the era of AI slop, or independently formulate preferences in the era of intrusive recommendation algorithms? Something deeper and more far-reaching is going on in society. Collectively, we are becoming more irrational. Charlie Warzel wrote a great piece recently in The Atlantic arguing that we’re drifting into an AI-fueled “world of weird” for which no one is fully prepared, especially policymakers. He describes it as a period in which we must constantly be:

...contending with MechaHitler Grok and drinking from a fire hose of fascist-propaganda slop. It means Grandpa leaving confused Facebook comments under rendered images of Shrimp Jesus or, worse, falling for a flirty AI chatbot…. AI revenge porn and “nudify” apps that use AI to undress women and children, and large language models that have devoured the total creative output of humankind.

Much of the media discussion about AI and mental health focuses on an emerging phenomenon of “AI psychosis.” Psychiatrist Marlynn Wei noted in a recent Psychology Today article an increase in such cases resulting from engagement with AI chat apps, in which users report becoming messianic, experiencing godlike visions of AI, or developing romantic delusions of attachment to the chatbots. These are often cases of underlying mental illness or emotional challenges exacerbated by a new, often sycophantically flattering technology that encourages delusional behavior.

But less reflected upon is the low-simmering unease, vertigo and fear that many of us are experiencing. Anyone who has played with OpenAI’s new Sora AI video app might feel this vertigo at the sudden scrambling of what is real. It is as if we are unleashing forces we don’t quite understand. Couple this with reports that TikTok will soon be substantially controlled by the second-richest man in the world, and with the increasingly strange declarations about the “Antichrist” by the billionaire cofounder of one of the world's most powerful surveillance platforms, and our assumptions about rational actors conducting cost-benefit analyses seem inadequate to explain our moment.

These sensations are reminiscent of what some scholars refer to as the pre-modern enchanted world (as opposed to the modern dis-enchanted world). In an influential 2007 book, A Secular Age, the Canadian political philosopher Charles Taylor wrote about the tension between the modern age of scientific, bureaucratic rationality and the pre-scientific-revolution age of awe, mystery and wonder. He described a pre-modern world in which people, lacking the language of scientific rationality, were more open to receiving the world as filled with inexplicable forces beyond their control. He called those in this world “porous” — they were open to a spiritual world of forces that could only be controlled through ritual and prayer.

Modernity, Taylor argued, gives us a language for understanding the inexplicable and encourages us to be skeptical of claims that cannot be verified empirically. While he isn’t anti-science, Taylor contends that a language of scientific explanation and an insistence on proof also close us off to the mystery and wonder of the world. People in this world, Taylor says, become “buffered” – filled with self-doubt and skeptical of experiences of the divine because they can’t empirically verify them. Taylor argued that the “buffered” didn’t replace the porous because humans still sought and gained brief glimpses of transcendence – or what he calls “fullness.” Anyone who has ever seen a beautiful sunset or can remember the early stages of falling in love knows about fullness. Other writers have referred to it as the sublime.

Many scholars challenge this picture of an enchanted pre-modernity and a dis-enchanted modernity as a false binary. The most famous challenge came from Bruno Latour, whose pronouncement that “we have never been modern” held that there has never been a clean distinction between nature and scientific reason; the two have always been interconnected. Taylor himself argues that modern life is a push and pull between the dis-enchanted buffered and the enchanted porous. I’d argue that engagement algorithms, smartphones and AI have tipped the balance toward the enchanted, but not always in productive ways.

Whereas two decades ago we might have relied on scientific advancement, university knowledge, capitalist production and the stability of liberal democratic institutions to ease our anxieties about the world, today those institutions are all experiencing a legitimacy crisis.

More and more, we are turning to algorithmic systems to serve this purpose. We have adopted the equivalent of pre-modern rituals with our smartphones. Rather than a Hail Mary for each bead of the rosary or a Muslim call to prayer, we’ve adopted the swipe. Instead of regular Torah study, we take selfies without makeup. Instead of the doctrinal authority of the church, synagogue or mosque, we’ve given the algorithm doctrinal power without the sense of ritual that, for some, served to ease the anxiety of an uncertain world.

As these rituals have become less and less a part of our lives, we’ve replaced them with technology. We’ve been on a steady turn away from rationality. We’ve turned our phones into portals — a curated “otherworld” that promises novelty, constant affirmation, frictionless relationships, and entertainment our everyday reality often fails to provide. We’ve eschewed hard-earned scientific knowledge for the breezy persuasion of the TikTok influencer. Think of the person walking obliviously through a noisy street, eyes fixed on a glowing screen: that absorption gestures toward another realm, something to help us negotiate an uncontrollable world.

In the search for fullness, we’ve adopted “habits of purification” intended to protect us from an uncertain world. Today, instead of “burning witches,” we monitor Ring cameras, engage in public displays of solidarity on social platforms, and signal group membership by shaming or trolling dissenters. For some professions, especially those that must cultivate audiences, curating a digital persona is work; for many others it becomes a strategy to soothe the existential anxiety of “being in the world” by solidifying group identity and policing boundaries. While these rituals often come from a good place, protecting the vulnerable or preserving a comprehensible traditional life, it becomes all too easy for these practices to turn into ways of shutting out dissent.

AI apps promise to intensify algorithmic rituals. AI brings the promise of endless affirmation. Where social feeds still involve some human interaction (albeit abstracted), AI dispenses with the need to engage with another self entirely. Chatbots give us the promise of “synthetic relationships” without the friction and contingency of other people who have their own needs and desires. NBC News documented a spike in ads for AI chatbots and “AI girlfriends,” often sexualized and synthetic, that invite users to feel connection without human contingency. These interactions preserve the sensation of connection without the painful task of having to ponder questions like: “Does this person really love me?” or “How do I need to change to become a better partner for my spouse?”

Charles Taylor understood that delusion and fullness are two sides of the same coin. A relationship with a chatbot is not relational – there is no one on the other end. The synthetic agent that affirms every impulse doesn’t represent the divine; it is another instance of surveillance capitalism. Simone Weil had a useful insight into the question of fullness. For her, the divine couldn’t be attained through grasping for it — “we do not obtain the most precious gifts by going in search of them but by waiting for them.” Fullness, she suggests, cannot be manufactured; it must be cultivated, and it often arrives quietly. AI, by contrast, is not interested in helping us foster the patient, receptive habits that lead to genuine awe — quiet waiting cannot be tokenized.

New, creative thinking is needed to address the tech policy challenges of the coming decade. An assumption of rational actors engaged in deliberative, reasonable debate won't get us far. Tech policymakers need to broaden their views of policymaking, moving away from pure rationality towards storytelling. If we are to harness these tools in an increasingly irrational, enchanted age, we need to recognize the power of narrative to drive thought and behavior. If people are searching for “fullness,” how can we use AI tools in ways that encourage ‘waiting’ and ‘listening’ rather than cruelty and violence? Can Sora videos be used to promote empathy?

While Sora is fraught with challenges, we need to think about how these tools can be used to promote universal human dignity. How can AI videos help tell the story of the damage caused by ICE detentions or the pain resulting from a lack of access to medication? Many people I respect insist that refusing to use these corporate tools is the only valid option. That is a reasonable position, but we do not live in reasonable times.

Authors

José Marichal
José Marichal is a professor of political science at California Lutheran University. His research specializes in the role that algorithms and AI play in restructuring social and political institutions. He is currently writing a book entitled You Must Become an Algorithmic Problem, scheduled for publ...
