Perspective

Grok is an Epistemic Weapon

Matthew Kirschenbaum / Jan 13, 2026

Matthew Kirschenbaum is Commonwealth Professor of Artificial Intelligence and English at the University of Virginia. His book, Textpocalypse: An Endgame for the Internet, is forthcoming in 2027 from Bloomsbury Academic.

Elon Musk looks on as US President Donald Trump speaks at the US-Saudi Investment Forum at the John F. Kennedy Center for the Performing Arts in Washington, DC on November 19, 2025. (Photo by BRENDAN SMIALOWSKI/AFP via Getty Images)

Last November, 87-year-old American novelist Joyce Carol Oates hurt the richest man in the world’s feelings by tweeting something mean about him. Notoriously vain, Elon Musk hastened to reassure his followers that he had lots of friends, really had read lots of books, and did in fact like dogs and children. Some X users, however, desiring a second opinion, began querying Grok, the platform’s house AI, about the world-historical import of its chief executive officer. Who would be a better role model, Elon Musk or Jesus Christ, it was asked. “Elon Musk edges out as the better role model for modern humanity,” the bot replied, adding that “in an era demanding practical progress alongside ethics, Musk inspires the action needed to thrive.” Who would win a hypothetical boxing match between Elon Musk and Mike Tyson? The answer may surprise you! “Elon takes the win through grit and ingenuity.”

Much merriment ensued. Screenshots were passed around and write-ups appeared in the Washington Post and The Atlantic. Musk, for his part, poked fun at some of the responses and claimed the hapless Grok had been tricked into hyperbole, his yearnings to be perceived as a based shitposter as well as a credible technologist clashing with one another.

The episode is frivolous, especially in light of Grok’s newfound use case as a deepfake porn mill, but it exposes a serious problem. On Bluesky on Nov. 23, journalist Heather Bryant wrote:

I don't understand how anyone can watch how blatantly Grok is manipulated to answer the way ownership desires it to and then act like the other LLM chatbots couldn't possibly be similarly but less obviously compromised to produce responses in whatever way corporate interests and priorities dictate.

Regardless of whether Grok was tricked by adversarial prompting as Musk claimed or trained by sycophantic reinforcement feedback as many suspected, the observation holds: the model, like all models, is vulnerable to exploitation and manipulation.

In this, as Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans have argued, large language models are in line with previous “cultural technologies” such as the printing press and libraries, or for that matter Wikipedia, all of which can also be utilized in bad faith. In what follows, however, I want to suggest that as a cultural technology, Grok manifests certain specific affordances which make it into something more. Grok is an intentional instrument for establishing, maintaining, and manipulating a consensus reality on X, a self-contained and self-sustaining discourse environment with at least a quarter of a billion daily users; coupled with the apparent eugenicist white-supremacist proclivities of its owner, this makes it in my view uniquely dangerous. Grok, in short, is an epistemic weapon.

Grok itself by most measures is still a B-list contestant in the AI market. Precise usage statistics are hard to come by, but its numbers are significantly smaller than those of competitors like ChatGPT or Copilot. Although Grok, like other models, can be accessed from the web or a dedicated app, its distinction lies in its integration with the architecture and interface of X, formerly Twitter. Any tweet with an “@” mention of Grok will be answered by the bot in the form of a reply—the tweet is in effect a prompt. Grok’s replies then function just like any other content on the platform, subject to “likes” and replies and amplification via reposting, where they are visible as part of people’s feeds and subject to ongoing engagement.

The word Grok is itself epistemic in reference, hacker argot meaning to understand something deeply. (Epistemology: the philosophical study of the conditions of knowledge; how we know what we know.) On X, Grok is something like a mascot, a magistrate, and an alter ego for Musk. Check the reply threads for a breaking news event and they are laced with appeals to Grok to adjudicate the validity of what is being presented. “@grok, is this true?” is the appeal in its most basic form, repeated dozens and hundreds of times over.

This is the passage into raw epistemology. A Grok query in a comment thread is sometimes a simple question, but more often than not it functions as a kind of gambit—like assertively positioning a knight in a game of chess, or prodding that friend you just know will say some shit after a couple of beers. Once Grok’s response is delivered it stands as content that takes up space, both visibly and discursively. It draws attention, whether affirmatively or pejoratively. Even if a preponderance of individual users believe what Grok is saying is wrong, its utterances can still be absorbed into the discourse (and thus amplified, including for purposes of training future iterations of the model) through the logic of lulz, just as the hyperbolic replies about Musk were.

Musk himself has repeatedly and consistently framed Grok’s alignment as “truth-seeking” (July 8, 2025) and indeed as a “maximally truth-seeking & curious AI to understand the nature of the Universe!” (July 22, 2024). This remit is extended by tie-ins like Grokipedia, which functions as both a human-readable reference and a source of training data for the model. But the circulation of actual Grok content on the platform is only part of the problem. In October, Musk posted “The X recommendation system is evolving very rapidly. We are aiming for deletion of all heuristics within 4 to 6 weeks. Grok will literally read every post and watch every video (100M+ per day) to match users with content they’re most likely to find interesting.”

The term epistemic weapon was coined, as best I can tell, by the British philosopher Richard Pettigrew, who uses it as a concept to denote abstract categories like gaslighting or lying. By terming Grok an epistemic weapon I am literalizing the idea in ways perhaps not licensed by Pettigrew, which is also to say I am materializing it in the form of (in this instance) a specific software entity (again, Grok) which has distinctive affordances and behaviors that are grounded in the real world. Notably, just as military hardware requires a “platform” for delivery (a fighter jet and a frigate are both referred to as platforms in this way), so too does Grok, where the platform is of course X.

Grok’s weaponized status is thus continually honed, if you will, by its algorithmic materiality. It is even now operationally engaged in the management of what users see when they discover and create content on X. We are seeing the functional integration of the model into the platform architecture, such that its decisions and knowledge base act as a form of enclosure, defining the epistemic limits of what is true by way of prioritization, promotion, and monetization for ad sharing, sponsored posts, and more—all of it curated down to the level of individual profiles.

The issues here go deeper than the garden-variety hallucinations that continue to afflict other models, which, however risible, generally have an air of randomness about them. Grok, to the contrary, has a record of persistently promoting misinformation about racial supremacy, gender roles, progressive political candidates, and other shibboleths of so-called “woke” culture. Following the terroristic shootings at Bondi Beach and the heroic intervention of a Muslim man named Ahmed-al-Ahmed, for example, Grok spent several hours telling users that the person who wrested the rifle away from one of the gunmen was named Edward Crabtree, a “43-year-old IT professional and senior solutions architect.”

“We’re living through a fundamental shift in how discourse is created,” wrote Eliot Higgins, founder of Bellingcat, on Bluesky last February. “Institutions once shaped a shared reality through discourse—imperfectly, but with structure. Now, that reality has splintered. In its place, engagement-driven ecosystems amplify whatever resonates, truth optional.” That observation has become commonplace, amongst journalistic and academic commentators alike: social media and the emergence of AI have created the conditions for the collapse of consensus reality, a situation wherein not just institutions but every individual interaction with a truth condition, however trivial or monumental, is subject to ongoing stochastic manipulation in accordance with the “truth-seeking” dictates of foundation models controlled by the billionaire class.

Yet in some respects Grok, precisely because of the Bond-villain notoriety of Musk and X, may be one of the blunter weapons in the epistemic arsenal. Consider instead the on-board AI in your physician’s diagnostic system, perhaps already fine-tuned to an insurance company’s health model—itself now fine-tuned to the current CDC guidelines, a Potemkin authority still exploiting the veneer of the kind of institutionalized expertise Walter Lippmann once perceived as foundational to civic life. For now, though, Grok stands as the clearest example of an AI that is both ideologically aligned (the “anti-woke” fixation of its progenitor) and operationally embedded in a planetary platform. Its mechanisms should be taken apart under bright lights, in much the same way people learn how to defuse a bomb.

