Silicon Valley's Moral Posturing Is an AI Power Play
Daniel Dobrygowski / Apr 14, 2026
Joining the Table by Yutong Liu / Better Images of AI / CC BY 4.0
To many, Anthropic founder and CEO Dario Amodei looks like he’s channeling Martin Luther in the face of a Pentagon MAGA inquisition, issuing statements defending the company’s principles and drawing firm ‘red lines.’ Even if the true story of Anthropic’s commitment to safety is more complicated, its stance in favor of AI safeguards against domestic surveillance and autonomous weapons has spurred an otherwise unlikely coalition of giant tech platforms, AI researchers, theologians, and tech advocates that supports the AI company in its litigation with the federal government.
Anthropic recently secured a temporary stay from a California court in the Pentagon’s attempt to destroy the company by labeling it a “supply chain risk,” but lost a similar motion seeking a stay on the Pentagon’s efforts to bar the company from providing goods and services to the military. The issue here isn’t whether Anthropic’s tech works or how secure it is; rather, the question is who gets to decide what is right and what is wrong when the military uses AI. Such substantial ethical issues, which include questions over human agency and free expression in the age of AI, are coming up with ever greater frequency.
Most of these high profile tech debates aren’t about what product works better or worse, or which application has a bigger market. They’re not about who has the best platform or who can scale the fastest. They aren’t even grand claims about “disruption” or how to create utopias. These new, increasingly rancorous, debates are over questions about what’s good or bad for society, what is right and what is wrong. Perhaps more so than past tech arguments, the current Anthropic-Pentagon dispute has fundamentally moral dimensions.
This trend toward moral claims and counterclaims tracks with the accelerating development of AI. A recent New Yorker profile of OpenAI head Sam Altman showcases how quickly Silicon Valley whips from moral panic to visionary certainty and back. The magazine’s 18-month investigation examines, in part, how OpenAI’s CEO invoked the noble mission of saving humanity from a catastrophe brought about by artificial general intelligence (AGI) to attract “other people’s money and technical talent,” and how a crisis of trust in Altman’s leadership led to a botched attempt by the company’s board to oust him. These stories reveal how moral reasoning, or its absence, shapes tech’s biggest fights.
Altman is not the only AI leader to raise moral quandaries. Palantir’s Alex Karp predicted that AI and data analytics are just the thing to save Western civilization (with the perhaps unintended side effect of diluting women’s votes). He treads alongside sudden moral crusaders Elon Musk and Marc Andreessen, who are attempting to portray themselves as champions of the fundamental right to free speech (or at least, their own free speech). And then there is Peter Thiel’s weird antichrist roadshow. From the deadly serious to the outlandishly bizarre, it’s clear that the way technology CEOs and investors are talking about their work has changed. The Silicon Valley elite now spend a lot less time making utopian claims for their tech and a lot more time making or responding to moral claims about AI. The shift from promising utopias to mimicking prophets is as stark as it is sudden. Morality is becoming the new founder mode obsession, but the question is why.
In some respects, this is about finding a new post-utopian marketing angle for themselves and their tech, but there are higher stakes in this fight. Unlike past squabbles, the power and the unpredictability of AI are fueling this intense, multisided battle over the virtues and future of tech. This open fight about morality and AI will determine questions about what new technologies will be allowed to do and, even more importantly, who will decide, possibly for generations to come.
It’s too easy to minimize these controversies as just a marketing exercise and all this public moralizing as mere posturing. After all, though they may seem like they’re on opposite poles to partisans on social media, Anthropic and Palantir still do business together. Most of the CEOs who seem to be leading this moral struggle about tech have the same financial backers and the same customers. It’s also true that part of this fight looks like an attempt to capture the moral high ground now that the public’s relationship to technology and the claims of technology owners has changed.
In the last two decades, public sentiment about most technologies, but especially about social media and AI, has soured. Trust in technology seems to be at an all-time low. But it's not just that the technology is losing the public’s trust in some abstract sense, it’s that there is profound distrust of the people and companies who own and operate it. For example, in 2025, the PR firm Edelman asked global respondents about AI and only 44% expected businesses to use AI in ways that they approved of. In the regions that have been dealing with advanced tech the longest, especially the US and Europe, less than a third reported trust in business uses of AI. To put the trust issue more sharply in perspective, the Pew Research Center found that, in the US, only 17% of adults believed that AI would have a positive effect on the country over the next two decades, with 35% expecting negative outcomes.
This sentiment means utopianism and technological determinism (the idea that technology’s advance is inevitable and will solve all problems) are out. Vague promises no longer ensure trust. It’s pretty easy to understand why. People can draw a line from all the recent technology governance failures, individual and societal harms, and security breaches back to the bad decisions and permissiveness that formed the basis for Silicon Valley’s technology culture. The idea that regulation and responsibility don’t matter as long as you’re making money no longer appeals to the majority of the population. At the same time, people have become more familiar with digital technologies and so have become more comfortable making moral judgements about them. Interesting new research suggests that these kinds of moral determinations also underlie public rejection of AI.
Given this movement from dreams of utopia to disappointment to moralizing, it’s not a surprise that VCs and tech billionaires see this as a trend they can capitalize on. Until recently, few tech leaders sought to publicly engage with moral questions about the technologies they were developing. Everything was going to be great and each new widget was cast in glowing terms. From social media to blockchain to AI, new tech would end world hunger, defeat oppression, and secure the blessings of liberty to posterity. Only, of course, after one more round of funding, or maybe right after the IPO or product launch. If tech CEOs and investors thought about morality at all, it was to assume they were on the right side. As it becomes obvious that this line of reasoning has failed, what we’re seeing today is a host of technology leaders who are unused to questions of morality wrestling with the concept and with each other over how to claim the moral high ground.
Sometimes, that looks like a creepy triple-feature starring technology, philosophy, and religion. Some technology leaders have turned back to religion (or at least mythology), even intensely searching for the harbingers of Armageddon among Swedish climate activists. These are outliers, however, since the majority of folks working on these technologies (especially in Silicon Valley) are more likely to be atheists, agnostics, or simply not to care too deeply about religion. But concerns about morality persist even where religion doesn’t. Engineers and designers still wrestle, often mightily, with the morality of the technologies they create, a struggle that has given rise to books like 2010’s Moral Machines and 2021’s The Alignment Problem. One can draw a line from moral qualms about the capabilities of new technology and inequality in how their benefits are distributed to the Effective Altruism (EA) movement. Indeed, EA’s spreadsheet ethics have grown popular (and controversial) as the moral discussion about technology has come to the fore and technologists discovered a whole host of global problems—from education to epidemics—that must surely benefit from adherents’ sense of their own intellectual superiority.
The moral concerns leading to the technology trust gap are real, and even if tech leaders’ attempts to grapple with them are only a cynical bid at market differentiation, the weight of the discussion of the morality of tech still demands attention. In fact, it is critical, because what is at stake concerns fundamental questions about human agency, the value of people’s lives and livelihoods, and the future of democracy. It's urgent because there’s a real chance that, just as business models get locked in as AI becomes the next general purpose technology and platform economics does its thing, so too will moral decisions.
The potential for lock-in makes the morality of tech too important to leave only to VCs and tech CEOs. Everyone’s future will be impacted by the determinations made now for AI and other new technologies, so it’s everyone's right and responsibility to decide. This is why technology governance, including regulation by democratically-elected governments, is so important. That’s also why some people in tech come out so strongly against regulation—they feel entitled to make the decisions of the future in small groups and highly curated chats in order to lock their choices in before anyone else gets a say.
However, unlike in most other areas of advanced tech, here’s where individuals, and the institutions and organizations that represent them, have the advantage over the self-appointed masters of AI. Regular people, advocacy groups, religious leaders, and even some government officials have been grappling with and contesting the moral questions of tech for decades. If tech CEOs want to barge into the morality conversation, for once they are going where others have the first-mover advantage. After all, people have been thinking about morality and technology in ways that are helpful to us now since at least the Industrial Revolution. The modern interpretation of the ancient Jewish concept of Tikkun Olam (“repairing the world”) and the Catholic Church’s teachings on work, technology, and human dignity (beginning with Rerum Novarum) were both developed in the midst of the immense technological changes of the late 19th and early 20th centuries. They both focus on the centrality of people and our moral rights in, and obligations to, a more just world.
This new battle reflects the long-simmering values conflicts that our focus on AI has brought to the surface: geopolitical and class struggles over who controls new technologies as well as legal and policy conflicts around privacy, intellectual property, power, and other issues. Those are important problems in their own right, with brilliant people working on them, and they certainly deserve to be called out, as they have been, across newspaper headlines and podcasts absorbed by millions. But at the heart of those concerns is the question of who gets to decide what our technology does as well as who determines the morality of technological progress itself.
The flashpoint of this conflict is most obvious in the recent acrimony between the Pentagon and Anthropic over who gets to set the safety red lines for emerging tech. But they’re not the only ones who get a say. From debates about data centers to surveillance boycotts, people are finally asserting their values against the companies that have been ignoring them for a generation. Between their political overreach and new moral posturing, tech CEOs may have inadvertently created an opening for regular people to take back control of our digital and physical lives. Moral bravado may be all the rage among the kings of tech, but right and wrong is intuitive to most people. This new opening is just the opportunity we’ve been waiting for to have a good fight about the values most people share—autonomy, fairness, humanity—and how to make sure our tech serves those righteous aims from now on.