Perspective

Tech Policy Is on the Front Line of Fascism vs. Democracy. Pick a Side.

Nathalie Maréchal / May 4, 2026

An upside-down US flag waves as people attend the 'People's State of the Union' rally near the US Capitol in Washington, D.C. on February 24, 2026. The event, hosted by the MoveOn organization, was billed as a boycott and livestream counterprogramming of President Donald Trump's State of the Union address. (Photo by Bryan Dozier/NurPhoto via AP)

In the United States, tech policy once seemed like the last bastion of bipartisanship. In 2026, it’s on the front line of fascism vs. democracy. But too many tech policy professionals are still acting like it’s business as usual, evaluating policy proposals independently of the broader context of President Donald Trump’s—and Silicon Valley’s—assault on democracy, the rule of law, and the Pax Americana of the post-war era. Continuing to do so is appeasement at best, and fascist collaboration at worst.

This is not exactly a novel insight. In March 2025, Techdirt founder and editor Mike Masnick published a blog post titled “Why Techdirt is now a democracy blog (whether we like it or not),” explaining that

…what’s happening in the US right now is some sort of weird hybrid of the kind of power grabs we’ve seen in the tech industry, combined with a more traditional collapse of democratic institutions… The story that matters most is how the dismantling of American institutions threatens everything else we cover. When the fundamental structures that enable innovation, protect civil liberties, and foster open dialogue are under attack, every other tech policy story becomes secondary (emphasis added).

Techdirt continues to do an excellent job scrutinizing how power and technology interact, along with other tech-focused publications like 404 Media, Wired, and Tech Policy Press.

Just as tech journalists bring essential perspectives to covering the organized destruction of American government and society, tech policy experts have a unique role to play in defending democracy. As Courtney Radsch, director of the Center for Journalism and Liberty at the Open Markets Institute and a Tech Policy Press board member, writes, “Tech policy has become integral to resisting authoritarian tendencies.” Many of us understand this well, of course: civil society is full of Cassandras rightly decrying the growth of surveillance technology, unaccountable automated decision-making tools, and attacks on free expression. This isn’t new, either: public interest advocates have been making these points for literal decades.

But these warnings have mostly fallen on willfully deaf ears. Companies regularly dismiss our warnings and launch risky products with inadequate safeguards amid marketing campaigns that hype the novelty and convenience of their wares. (The roll-out of the Meta AI Glasses stands out as particularly egregious.) Policymakers, too, tend to wave away our expertise as “Luddite” paranoia, instead parroting the Silicon Valley catechism of innovation and techno-optimism. (They are mostly using the word “Luddite” incorrectly, by the way, as Radsch explains.)

Much of the tech policy field—by which I mean the people and organizations involved in determining, advocating for, and enforcing laws, regulations, company rules, and social norms governing the use of technology—seems committed to the polite fiction that it’s still business as usual, and that the policy questions we’ve been grappling with for years can be approached today in the same way they were before Trump’s ongoing autogolpe. I’ll resist the temptation to speculate why so many experts and institutions act like they’re still living in a functional liberal democracy: the point is that positions that would be defensible in a different political context simply aren’t defensible now.

‘Papers, please.’

Take age assurance, for example. After years of the social media industry’s inability (or unwillingness, one might argue) to protect minors from harm on their platforms, we now see at least some policymakers and self-interested tech companies building consensus around using technical means to verify that users are of the proper age for the given content or service. Proponents of such schemes draw analogies to the very different practice of visually inspecting an ID at, say, a bar or liquor store, arguing that what is possible offline should also be possible online. And while technology has indeed evolved significantly in recent years, age verification continues to rely either on digitally collecting official ID data or on estimating someone’s age with facial analysis technology, which is not only inaccurate for many people but also forces users to hand over unchangeable biometric data: the image of their face. Technical safeguards are possible, for example by only sharing the binary yes/no answer to the question “Is this person over age 18?” rather than their precise birth date, and promptly deleting collected data. But without legal mandates to deploy such safeguards, many providers will default to the simplest and lowest-cost solution.

Moreover, without a national privacy framework grounded in data minimization and purpose limitation, consumers should assume that any data that can be collected about them will be used to benefit the service provider (and/or the age assurance vendor), sold to a data broker, or both. And once data enters the data broker ecosystem, it’s vulnerable to basically any unknown, non-consensual use, including acquisition by any number of government agencies known to circumvent the Fourth Amendment by simply buying information they would otherwise need a subpoena to obtain. As Jake Laperruque, deputy director of the Center for Democracy and Technology’s Security and Surveillance Project and a Tech Policy Press fellow, says, that makes about as much sense as saying that it’s okay for police to search your home without a warrant because they bribed your landlord. And everything we have seen since January 20, 2025 suggests that the federal government is intent on making this problem worse, and on using the data to persecute disfavored groups—including political adversaries.

We should be very, very wary of building a “papers, please” web where everyone’s online activity is tracked and linked to their official identity under any circumstance, and we should dismiss the idea completely in a context where all the safeguards that could, perhaps, counterbalance the scheme’s inherent threats to civil rights and liberties are either absent or under sustained attack. But to hear policy debates on the topic, you’d think you were living in a parallel timeline.

Parents and others fighting to protect kids from online harms are right to be worried, even outraged—and I understand the instinct to port concepts from the offline world, like checking people’s ages to segregate kids and adults into separate spaces that meet their distinct needs. If we could guarantee that every entity along the value chain would respect users’ privacy and other rights, and that the federal government would not only respect but affirmatively protect those rights, we could have a productive debate about age verification. But that is not the country those of us in the US live in now, and pretending it is will only harm us, starting with the most vulnerable.

Varnishing villainy

Similarly, the current enthusiasm for “AI for Good” initiatives ignores the political economy of Big Tech and AI industries, which is currently oriented in opposition to democracy and human rights. (Political economy is the study of how economics, policy, politics and power influence each other.) Industry leaders like Sam Altman, Marc Andreessen, Marc Benioff, Jeff Bezos, Greg Brockman, Alex Karp, Elon Musk, Peter Thiel, David Sacks, and Mark Zuckerberg have all been very clear, through their words and deeds, that they enthusiastically support the MAGA authoritarian project. Others, like Tim Cook, seem less enthusiastic but still bend the knee. (Dario Amodei, the CEO of Anthropic, stands out as an exception, although an imperfect one: it says a lot that the low bar he clears is ‘no killer robots or domestic mass surveillance.’) The political project to integrate “AI” (which, again, is not a distinct technology but an umbrella marketing term that obscures more than it reveals) into any and all semi-plausible domains of human life cannot be understood outside of the political economy of these industries.

And while I broadly agree with Arvind Narayanan and Sayash Kapoor that the tools we call “AI” are normal technologies, the political economy of the AI industry is anything but normal.

Like social media before it, “AI” promises a tech-mediated utopia. We are told that thanks to “AI,” language barriers will fade away, keyboard warriors will be freed of their drudgework by ever-more-capable machines, scientists will quickly discover cures for intractable diseases, and we’ll all enjoy new lives of leisure funded by universal basic income (UBI) schemes. That’s a pretty picture, but one that is completely divorced from the material reality of what these companies, and their leaders, are actually doing.

The central political project behind the AI hype is to eliminate the need for labor, and thereby abolish the economic argument for labor rights: the capital class needs the rest of us alive and marginally healthy so we can work for them. What do you think happens when they no longer need us to? Are we really supposed to believe that the same people who are cheerfully gutting our already-meager social safety net are suddenly going to decide to support UBI? (In fact, Altman is already backtracking.)

Naming the stakes

The present moment calls for both optimism that we can change the world for the better and for what George Washington University professor Dave Karpf calls technological pragmatism: an intellectual orientation that is distinct from both techno-optimism and techno-pessimism. We shouldn’t assume that new technologies are inherently good or bad. Technological pragmatism invites critical questions about technology as it actually exists: how it works and how it fails; what values, assumptions and ideologies it is imbued with; how it fits into current social practices; and how its development and adoption may be shaped by various actors.

Technological pragmatism, then, calls on us to look beyond the “AI” hype. We must probe the economic incentives and ideological commitments behind the techno-authoritarian project as a way to help us identify tech policy positions and arguments that are less obviously tied to the systematic dismantling of constitutional democracy—such as the techno-legal solutionist focus on age assurance, or the C-Suite obsession with replacing workers with LLM chatbots willy-nilly. (Techno-legal solutionism is “the belief that complex social problems can be solved through legally mandated technical fixes.”) While “AI” technologies may indeed be used in the public interest, an industry that is economically and ideologically oriented toward authoritarianism will overwhelmingly develop and roll out products that advance that authoritarian vision. “AI for Good” efforts that fail to address the political economy of “AI” are doomed to failure.

Let’s consider the motives of key industry leaders. At least some of the tech oligarchs explicitly tie their embrace of authoritarianism to tech policy developments earlier this decade, specifically efforts by the European Union and the Biden administration to regulate the development and use of “AI” technologies. Faced with a choice between either accepting that democracy, rule of law and public-interest governance would necessarily result in reduced profit margins, or joining forces with a corrupt convicted felon with overt autocratic aspirations, the titans of the tech industry chose the latter.

Days before Trump’s second inauguration, Marc Andreessen attributed his turn from “normie Democrat” to MAGA neoreactionary to the so-called techlash. He specifically described three components: tech workers’ vocal contestation of what they saw as their employers’ negative impacts on society, the Biden-era SEC’s crackdown on cryptocurrencies, and the Biden administration’s efforts to regulate AI. Mark Zuckerberg is plainly counting on the Trump administration to protect Meta from accountability under EU laws like the GDPR, Digital Services Act and Digital Markets Act. In this context, the industry push to “reopen” or “clarify” legal frameworks like the GDPR and the AI Act—notwithstanding the “won’t someone think of the small businesses” laments—makes both economic and ideological sense. Pesky concepts like privacy, transparency, and competition get in the way of both short-term advertising profits and ubiquitous adoption of “AI” throughout society in the medium term.

While economic interest offers the simplest explanation for the tech CEOs’ front-row seats at Trump’s inauguration, and much of their behavior since then, that explanation is incomplete without a serious consideration of the illiberal, antidemocratic ideologies that are now well established among the tech elite: the AI-centric ideologies that Timnit Gebru and Émile P. Torres have termed TESCREAL.

TESCREAL is an acronym for Transhumanism, Extropianism, Singularitarianism, (modern) Cosmism, Rationalism, Effective Altruism, and Longtermism. Gebru and Torres describe these as a “bundle” of “interconnected and overlapping ideologies” with roots in 20th-century eugenics. AI policy wonks are hopefully somewhat familiar with these ideas, but my (unscientific) polling of friends and colleagues suggests that many professionals working on AI policy do not yet grasp the threats that the various -isms that make up TESCREAL pose to democracy, human rights and the rule of law. Even more worrying, others do, but shy away from confronting these beliefs head-on. Yet it is urgent to do so, as Effective Altruists in particular flood the tech policy domain with cash, funding a rash of think tanks and advocacy organizations dedicated to the AI utopia that was promised. Several news outlets have reported that this year’s primaries are turning into a proxy war between two rival TESCREAL camps, the Trump-aligned accelerationists and the effective altruists focused on “existential risk” and other aspects of “AI safety.” TESCREAL ideas are increasingly influential in tech policy which, again, is central to the fight for democracy.

Without getting into the details of the TESCREAL bundle, some of the world’s richest and most powerful people believe they are on the verge of creating a sentient, inorganic being whose cognitive capacities will surpass any human’s by an order of magnitude; that this infallible computer deity should be trusted with complex, value-based decisions on humanity’s behalf; and that our descendants will fuse with machines and colonize space. They also believe that humanity has an “ethical” obligation to make this fever dream come true, and that anything that stands in the way—inconveniences like intellectual property, privacy, disparate impact laws, risk assessments, even democracy itself—represents a speed bump on the way to developing a god-like artificial general intelligence (AGI) answerable only to tech moguls themselves, and should be relegated to the dustbin of history. Hence, the MAGA/Silicon Valley rapprochement.

Some of these ideologies (notably effective accelerationism) are explicitly committed to tech deregulation, massive investments in developing AGI, and using AI and other technologies to replace human labor, expertise and judgment as much as possible. In particular, those who view the development of a god-like AGI as an ethical imperative dismiss concerns about the near-term negative impacts of AI as acceptable collateral damage. Concerns about civil rights, disparate impact and the like are further dismissed as “woke AI” by Dark Enlightenment types. Whatever the framing, we should read such arguments as callous disregard for the lives and wellbeing of others combined with hubris about the future.

The unimaginable wealth that allows men like Sam Altman, Marc Andreessen, Marc Benioff, Jeff Bezos, Greg Brockman, Larry and David Ellison, Alex Karp, Elon Musk, David Sacks, Peter Thiel, and Mark Zuckerberg to wreck the country’s government, institutions and economy, so far with impunity, flows from the tech industry. They are billionaires because of our collective failure to regulate commercial surveillance, break up monopolies, and impose a reasonable tax rate on breathtakingly profitable companies or on the super-rich themselves. And because they are wealthy enough to buy their way out of most of the consequences of their actions, they believe themselves to be wholly unaccountable to the law, to electoral politics, and even to their companies’ own shareholders. Some of them even think they should be immune from death.

The AI & Big Tech industries’ will to power is among the proximate causes of the US’s accelerating descent into autocracy. “AI” is both the motivation and the instrument of authoritarian consolidation. Astonishingly, the US foreign policy apparatus is currently functioning as an arm of Silicon Valley, combining PR (propaganda?), lawfare and straight-up bullying to protect corporate profits, promote a deeply bigoted worldview, and systematically dismantle the postwar international system that, for all its flaws, has been the cornerstone of American prosperity for the past 80 years.

If American democracy is to be reconstructed after the Trump regime is ejected, the American people must confront the concentration of state power, private wealth, asymmetrical access to data, and influential (albeit flawed) narratives about the role of “AI” technologies in society. The tech policy field, in particular, needs to reckon with the political economy of Big Tech and “AI”, and also with the dangerous ideologies motivating many tech billionaires (and their followers) to embrace authoritarianism. Tech policy professionals—regardless of whether we work in civil society, trade associations, or industry—especially need to reckon with the role these dynamics have played in bringing us to this point, imagine new kinds of relationships between people, institutions, data and power, and chart a decisive course for getting there. Only then should we get back to the technological and legal hair-splitting that we do so well.

So, which side are you on? Are you on the side of MAGA, Elon Musk, Sam Altman, and the rest of them? Or are you ready to fight for democracy, human rights and the rule of law? Regardless of where you sit in industry, government or civil society, the time to choose is now.

Authors

Nathalie Maréchal
Dr. Nathalie Maréchal is a writer, researcher and advocate fighting for democracy and human rights in the age of technofascism. After stints at Ranking Digital Rights and the Center for Democracy and Technology, she is currently the Managing Policy Director at Northeastern University’s Institute for...
