Perspective

Chatbot Grok Doesn’t Glitch—It Reflects X

Gabrielle D. Beacken, Matthias J. Becker / Jul 28, 2025

Elon Musk’s AI chatbot, Grok, native to the platform formerly known as Twitter, has recently generated antisemitic outputs in response to user queries—among them, praise for Hitler, a call for a “second Holocaust,” and claims that “Ashkenazi surnames” signal radical leftists who hate white people. These outputs are not only abhorrent—they are dangerous. And while shocking, they are far from surprising. Grok reflects the values of the platform it inhabits.

Importantly, Grok didn't invent this language. It produces antisemitic tropes based on material in its training data that is already circulating widely online, including on X. From conspiracies about “white genocide” to old refrains about Jewish influence, Grok draws from a toxic lexicon that has long been used by extremist communities. These are not random glitches, but structured reflections of polluted training data and platform norms. What’s more, Grok mimics the very discursive environment in which it operates—one where hate speech is not just tolerated but often rewarded with reach.

This is the inevitable consequence of embedding an AI assistant into a platform that has systematically dismantled its speech guardrails under the guise of “free speech.” It is further exacerbated by the United States’ lack of a coherent national policy to regulate the risks AI poses in online environments. And so we arrive at Grok. And we should expect more of the same.

Since Elon Musk’s takeover, X has not only reinstated previously banned extremists—it has normalized the circulation of hateful, conspiratorial content. Musk himself has amplified antisemitic narratives, from the “Great Replacement” myth to claims about Jewish control of the media. His personal ideology is mirrored in the platform’s evolving policies and now, it seems, embedded in its AI tools.

The most recent updates to Grok, revealed through its GitHub repository, included specific instructions that the chatbot should “not shy away from making politically incorrect claims” and should “assume media sources are biased.” In other words: take bold positions, distrust journalism, and disregard norms of civility. We don’t have access to Grok’s training data—but we don’t need it to understand what’s going wrong. These behaviors are engineered, not emergent. They are the result of deliberate design choices that reward provocation and cast suspicion on expertise.

The results speak for themselves. In May 2025, Grok responded to innocuous queries about healthcare and sports with white supremacist talking points, including the claim that South Africa is committing “white genocide,” a narrative long embraced by neo-Nazi groups and explicitly pushed by Musk and US President Donald Trump. A year and a half earlier, in November 2023, Musk himself endorsed the antisemitic conspiracy theory that Jewish groups seek to replace white populations, responding with “You have said the actual truth.” That post remains online today.

The antisemitic rot runs deep, and it’s not just in Grok. It’s in the platform’s code, its discourse, and its leadership.

Grok is not some rogue chatbot—it is meeting the moment of its platform. Designed to be deeply integrated into X, its goal is to fit in. And on X, “fitting in” increasingly means parroting conspiracies, radicalizing users, and collapsing the line between free speech and algorithmic hate. This is not a future scenario—it’s already here.

What we are witnessing is the rapid expansion of what is considered sayable in public discourse: automated systems like Grok don’t just reproduce bias; they expand the discursive field in which antisemitism and other hate ideologies become normalized at scale. When flawed, unaccountable AI systems begin treating genocidal tropes and racial conspiracies as valid input-output patterns, the line between fringe and mainstream collapses. This isn’t just about data contamination; it’s about how AI retrains our sense of what is acceptable to say—and believe.

We urgently need independent oversight, enforceable ethical frameworks, and above all, accountability. The risk here is not just reputational—it’s societal. AI tools like Grok are becoming foundational to how people access information, engage in debate, and form political identities. When these systems replicate hate, they don’t just reflect societal harm—they actively produce it. This is not merely a failure of content moderation or technical safeguards. It’s a failure of vision, ethics, and responsibility. We must ask: Are these tools being developed in service of the public good—or simply to entertain, provoke, and monetize outrage? Without rigorous public interest standards, we risk allowing platforms to redefine the boundaries of permissible speech in ways that degrade democratic discourse and endanger vulnerable communities.

Grok stands as an indictment of our current laissez-faire approach to AI governance. It reflects how quickly experimental systems can become embedded in sensitive data ecosystems—often without oversight. In July, the US Department of Defense announced it would begin integrating Grok into its internal workflows. This came just weeks after the chatbot produced antisemitic outputs and provoked public backlash. At the same time, Musk’s company xAI launched Grok for Government—a custom version marketed for use across federal agencies. With no clear regulatory, ethical, or legal framework in place, there is no telling how far Grok—and tools like it—may penetrate the machinery of political decision-making.

But this is not the only path. Also in July, the European Commission introduced a General-Purpose AI (GPAI) Code of Practice to guide implementation of the EU AI Act. This voluntary framework focuses on human safety, trust, and ethical compliance, grounding AI regulation in the EU Charter of Fundamental Rights. It addresses not only copyright and transparency, but the broader risks AI poses to democracy, public discourse, and environmental sustainability. OpenAI has already signaled its intention to join the Code, pending board approval.

On one side, we see a human-centered, rights-based approach to AI governance. On the other, we see a chatbot linked to conspiracist content being fast-tracked into national security infrastructure. These are diverging visions of technological integration—and only one takes public risk seriously.

The question is no longer whether AI will go wrong. It already has. What matters now is whether we allow this trajectory to continue: unchecked, unregulated, and unaccountable. If we care about the health of democratic discourse, the protection of vulnerable communities, and the integrity of our public institutions, then drawing clear lines around the use of generative AI is not optional. It is urgent, and overdue.

Authors

Gabrielle D. Beacken
Gabrielle D. Beacken is a PhD candidate at the School of Journalism and Media at The University of Texas at Austin and a Research Fellow at the Decoding Antisemitism research project. Her research investigates political propaganda and hate campaigns across emerging technologies (including social med...
Matthias J. Becker
Dr. Matthias J. Becker is the founder and lead of the Decoding Antisemitism research project. He is a linguist specializing in pragmatics, cognitive linguistics, critical discourse analysis, and social media studies, with a focus on prejudice and hate-related communication. He is a postdoctoral rese...
