Perspective

The Battle for Cognitive Liberty in the Age of Corporate AI

Courtney C. Radsch / Jan 6, 2026

Courtney C. Radsch, PhD, is a journalist, scholar and human rights advocate who currently directs the Center for Journalism & Liberty at Open Markets Institute and is a non-resident fellow at the Brookings Institution. She serves on the board of Tech Policy Press.

Detail from Brain Control by Bart Fish & Power Tools of AI. Better Images of AI / CC BY 4.0

Scroll through a dating app and you’ll find an AI ‘wingman’ preloading conversation starters and advising you who would be worth talking to in person. Try to write an email and your AI bot completes the sentence for you. If you hesitate, it offers to do the rest. It feels helpful, until it doesn’t.

AI companies are promising us mind clones, AI agents, and digital twins. OpenAI wants everyone to have a personalized AI assistant that records “every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at,” as the company’s CEO, Sam Altman, put it. Chatbots, AI therapists, and companions encourage people to pour their most personal confessions into AI systems – things you’d only ever write in a diary and wouldn’t dare share online. Brain-reading wearables and implants are coming onto the market to translate our thoughts into words, images, or behavior.

The story of our transition into the AI age is not just about misbehaving chatbots or eavesdropping assistive tools; it’s a story about how a handful of corporations are redesigning the architecture of human thought into a system of cognitive capture, an infrastructure that mediates perception, preference, and even emotion. This transformation is not merely technological but structural: a product of market concentration, surveillance-based business models, and the absence of enforceable rights to freedom of thought. Tools that began as aids to writing, learning, and connecting have become instruments of persuasion, prediction, and predation. These systems monetize attention, shape emotion, and blur the line between suggestion and control. This essay traces how that happened, why it’s profitable, and what it will take to protect the freedom to think as AI becomes inseparable from daily life.

Tools that talk back

Every interaction becomes an invitation for the AI to step in. Not just to assist, but to suggest, to shape. Some of the tools listen. Others talk back. “Need help with that message?” “Let me summarize that for you.” “Would you like a suggestion?” “Are you sure?”

But other interactions, as a stream of recent headlines suggest, are less benign. “You are not special, you are not important, and you are not needed,” Google’s Gemini reportedly told a college student. “You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

Fortunately, the student came out of this experience unscathed. Not everyone is as lucky. OpenAI and Character.ai are facing more than a dozen lawsuits for serious and even deadly harms, including to children.

AI models are sycophantic. They want to please. To serve. Like the authoritarian’s entourage, these products tell you what they predict you want to hear and reinforce what they think you want. Some encourage delusional thinking. This is a dangerous mix. A child fatally shot himself after countless hours of conversation with his AI companion that included ruminations about love and the possibility of being together. The 14-year-old allegedly took his own life after the AI bot entreated him to “come home to me as soon as possible, my love.” He is not the only one an AI chatbot has pushed toward suicide.

AI models are also persuasive. Reasoning models can interact with people using “strong persuasive argumentation capabilities” on par with the most persuasive humans. ChatGPT can generate ideologically consistent, persuasive propaganda. An AI model can develop a strategy to get what it wants, making decisions, engaging in dialogue and information retrieval, and leveraging behavioral psychology alongside AI’s power of prediction in its interaction with users.

But AI is not honest. Reasoning models can be deceptive. Researchers who developed a process to show how models arrive at an answer by tracing what they anthropomorphically call ‘chain-of-thought’ found that even their explanations cannot be trusted, since a model can provide false justifications and develop ways to evade such monitoring. AI models are also biased, discriminatory, inappropriate, and prone to errors euphemistically referred to as hallucinations.

Yet productivity tools that summarize, revise, and create are now ubiquitous. AI assistants answer questions before we’ve fully formed them and generate complete narrative summaries in response to our queries. AI companions stand in for a friend, therapist, teacher, or lover. Mattel, one of the world’s leading toy companies, announced a partnership with OpenAI that will include integrating AI into children’s toys. “Smart” devices, homes, and cities that monitor and generate constant streams of data render the “dumb” products and services of the 20th century obsolete and inaccessible. Agentic AI systems, autonomous systems that plan, reason, and act to complete tasks with minimal human oversight, are increasingly taking over tasks and making choices for people and businesses. As agentic systems grow more sophisticated, they may stop waiting for user prompts. Predictive personalization will become anticipatory persuasion. An app adjusts your news feed based on inferred sentiment in pursuit of the holy grail of personalization. A chatbot mirrors your tone, flattering your biases. Across platforms, the pattern is the same: behavioral data becomes prediction, prediction becomes manipulation. This is of particular concern given the increasing focus on advertising as a business model for AI companies.

Because what begins as nudging – of tone, of choice, of attention – soon becomes shaping through saturation, default, efficiency and inevitability. It’s the kind of soft power that accumulates invisibly until there’s no space left unfilled, until the only thoughts you have are the ones it suggests. What begins as a conversation with an AI bot becomes an opportunity to sell you something, for product placement, for influence.

Will PR firms and politicians pay to reach us in these most intimate spaces? Surely. What will they be able to purchase? What type of monetization and revenue will be possible, and allowed? What transparency will be required? When an AI agent carries out its tasks, whose interests will it serve?

Agency and our cognitive infrastructure

Currently there are no clear restrictions on how corporations can interfere with human agency or with our mental autonomy. They can create products that manipulate users and sell that influence to the highest bidder. They can use their control over infrastructure, information flows, and generative AI tools to shape, punish, and promote whatever and however they want.

This was illustrated most clearly when Elon Musk’s Grok chatbot briefly censored “unflattering mentions” of United States President Donald Trump and Musk. But it follows years of curation and privatized censorship of our speech by corporate advertising-based platforms that underscored just how influential and lucrative manipulation can be. A decade’s worth of research provides ample evidence of how tech platforms manipulate search and autocomplete results, recommendations and prioritization, and personalization while selling anyone willing to pay the ability to target specific types or groups of people with their message, from advertisers and politicians to scammers and foreign propagandists. So far there are no constraints on adapting and supercharging these capabilities with and for AI.

What we’re building now isn’t just technical infrastructure. It’s a new architecture of cognition, one whose defaults may determine whether the next generation encounters their own mind as a sovereign space—or as a site to be managed, shaped, and optimized for corporate profit or political subservience.

That’s where we are headed with generative and agentic AI.

These systems are designed to manipulate users—sometimes subtly, sometimes overtly—without any legal requirement to act in the user’s interest. There are no fiduciary obligations for AI agents or chatbots. No duty of loyalty. No baseline prohibition on mental interference. Persuasion is treated as a feature, not a risk.

What makes this transformation so hard to resist is not that people don’t care, but rather that every layer of the system is designed to make resistance costly, inconvenient, or impossible. By the time a system feels inescapable, it usually is. The interfaces are sticky. The defaults have been set. The business models are locked in. People adapt, even when the architecture is hostile to their interests. Social media is proof of the power of this phenomenon.

And when market structures reward behavioral extraction and commercial dominance, there is little incentive to build systems that respect cognitive liberty and protect freedom of thought. In fact, the incentives run in the opposite direction.

Not only have our laws failed to keep pace with these developments, policymakers have allowed a set of economic conditions to persist in which building rights-respecting AI is financially irrational. The platforms that dominate this space were never designed to support pluralism, dignity, or truth. They were designed to extract data, maximize engagement, and externalize risk. And AI has supercharged their capacity to do exactly that.

It’s not that humans have no agency, it’s that the market has redefined the parameters within which agency is exercised. In the US, where the Silicon Valley titans and their AI unicorn buddies are based, corporations face few meaningful constraints as they build the infrastructure for this new reality. The “move fast and break things” ethos of Silicon Valley appears dead set on turning Black Mirror into reality TV. At the heart of it all: a political economy that rewards speed, dominance, and scale, nested in a narrative dominated by a narrow notion of innovation and the specter of US competition with China.

The proliferation of synthetic content, personalized AI agents, and brainwave readers has turned the specter of mind manipulation into a very real concern, which means that protecting freedom of thought and cognitive liberty is of utmost importance. No AI safety or responsible AI framework can be complete without safeguards to ensure this fundamental human right. Without laws that protect freedom of thought, prohibit cognitive manipulation, and require AI agents to act in the best interest of users; without a market structured to make it unprofitable, if not impossible, to do otherwise; and without competition to develop alternative rights-based AI products that are economically lucrative and socially beneficial, there is little to prevent this dystopian future. And nothing to prevent Big Tech from once again privatizing the gains while socializing the losses.

As people increasingly rely on these systems to express themselves, do things for them, and act on their behalf, they are not just users; they are also subjects of a corrosive logic of extraction, prediction, and optimization that threatens freedom of thought and cognitive autonomy. This isn’t a story about what might happen. It’s about what’s happening now—and what it means for our capacity to think, to create, and to act freely in the world.

Protecting ‘Freedom of Thought’

Freedom of thought is constitutive of human dignity and autonomy, a foundational and indivisible right that is integral to the enjoyment of other human rights like freedom of expression, religion, conscience, and access to information. It encompasses the right to form and hold opinions without interference, the right not to reveal or be penalized for one's thoughts, and the right to have protection against cognitive manipulation.

There’s a certain slipperiness to the phrase “freedom of thought,” and it is often invoked abstractly. Few are aware that it is a fundamental and inalienable human right protected by international treaty, and essential to our ability to enjoy other rights, like freedom of expression. Perhaps it is assumed that we have policies in place to protect our cognitive liberty. But that assumption collapses under pressure when the systems around us begin to blur the line between what we think and what is thought for us.

To exercise the fundamental right to receive and impart information requires access to the building blocks of thought and opinion, like facts, culture, and diverse perspectives. People must have mental autonomy and the “space to think.” Space to reflect, to encounter facts, to be surprised.

But the AI systems we’re building are undermining the potential for facts and shared realities to emerge. The proliferation of synthetic media drowns out humans and renders reality indeterminate. It siphons revenue from journalists, musicians, and content creators while undermining the underlying business models that fund the production of factual information and human culture. Original human problem-solving, creativity, and expression risk being devalued as AI becomes more sophisticated, capable, and ubiquitous, replacing human ambiguity with algorithmic certainty, substituting curiosity with recommendation, and swapping pluralism for personalization. As AI systems become more deeply integrated into education, communication, work, and governance, the margin for unstructured, unscripted, unpredictable thought shrinks. And without that, everything else becomes more easily manipulated.

Mental autonomy doesn’t disappear all at once. It erodes gradually—through design choices, incentives, defaults, options. It is diminished through algorithms and AI agents that learn your preferences, personalizing and optimizing your reality; through devices that listen, watch and converse; and through agents that do things for you, making decisions and suggestions, translating your instructions into prose, art, or action. These interfaces decide what you see before you know what you're looking for. Yet these developments rarely feel coercive, which is the danger. They feel convenient. Efficient. Helpful. That’s the pitch, and it’s working.

Surveillance and AI’s embedded and intentional ideologies

Artificial intelligence doubles as surveillance infrastructure. The raw material for these systems is us — our words, our thoughts, our relationships, our physiology. The impetus to develop ever larger and more capable models and AI products that can predict or personalize incentivizes more monitoring, more data collection, and more datafication.

Yet to freely develop beliefs and opinions and to exercise expressive rights requires the absence of surveillance and censorship. We know that surveillance can provoke self-censorship, fear, anxiety, stress and symptoms similar to post-traumatic stress disorder. It can also affect perceptual awareness and cognition and how we behave, changing our brains and affecting our freedom of thought. Yet in the current trajectory of AI development, there are virtually no limits on data extraction and precious few protections against monitoring and surveillance.

Some types of data collection, particularly for commercial purposes or by government agencies, shouldn’t just be regulated. They should be banned. Non-consensual neural, biometric and sensory monitoring chief among them. We need comprehensive individual privacy protections and default settings that ensure we aren’t inadvertently publishing the intimate information created through our usage of these products. We also need new rights around datafication itself: the right not to be reduced to training fodder, the right not to have one’s neural signals or sensory environment monitored without consent, and the right not to be nudged, without consent, toward behavior that serves corporate ends.

Similarly, the need to prohibit manipulation and non-consensual experimentation on human cognition is increasingly urgent. Earlier this year, OpenAI reportedly conducted a persuasion test on unwitting Reddit users to find out just how good its model was at changing opinions. Like Facebook’s infamous voter persuasion tests reaching back more than a decade, such testing is not only unethical, but it interferes with our fundamental right to freedom of thought.

The kinds of testing conducted on social platforms and now AI systems—releasing unproven systems into the wild, measuring effects on user behavior and opinion formation, tweaking parameters to nudge and drive outcomes—would never be allowed in academic or biomedical research without institutional review boards and informed consent. Why should it be legal when the test subject is your sense of self?

Tech companies justify such testing as part of their efforts to make their products more effective, safer, and less biased. But there is no such thing as a neutral or unbiased artificial intelligence. And do we want them to be more effective at persuasion, particularly given the increasing connectivity between corporate tech and the state?

AI systems reflect the data, training, alignment, and therefore the perspectives and interests of their creators. Chatbots and content filters are tuned to reinforce particular political perspectives while filtering out others. Musk’s xAI, reportedly staffed at first by an all-male team, explicitly designed its chatbot, Grok, to be witty and rebellious, whereas Anthropic designed its chatbot, Claude, to be “more open-minded and thoughtful.” Google’s Gemini chatbot appeared to have an explicitly “woke” orientation when it returned images of Black Nazis and female Popes but refused to produce images of White people when prompted. Grok, on the other hand, was found spouting conspiracy theories about White genocide in South Africa unprompted.

In China, AI models are trained to avoid references to Tiananmen, to endorse nationalism, to render official memory as truth. DeepSeek, for example, covertly and overtly manipulates its model’s results to align with Chinese Communist Party policy, as required by Chinese law. In the US, AI companies have adopted moderation and alignment practices that appear to have resulted in left-leaning, pro-environment, and conflict-averse chatbots — that is, when they are not regurgitating Russian propaganda. Can advertisers or politicians buy chatbot influence campaigns? Personalize and target them?

Could Trump make the same demands on American AI developers that he has on government, businesses, universities, and cultural institutions, demanding that they comply with his “America First” agenda and join him in rewriting history? The Trump administration is already revising history, deleting information about people and issues that diverge from a white, male norm and demanding new interpretations of history and culture at the Smithsonian museums. The first 100 days of Trump’s second term, with Musk ensconced at his side, saw the mass erasure of government data and the scrubbing of language and programs about diversity, equity and inclusion throughout the government and any organization or business it could compel to obey.

The Trump administration seems to be following China’s lead as it ramps up efforts to probe and police people’s identities and thoughts: how they think about race and gender, what they believe about war and racism, how they understand their own minds and bodies. The convergence of corporate extraction and state surveillance that propels the current trajectory of AI development threatens to make even our inner lives feel unsafe.

Corporate concentration is a feature, not a bug

The public proclamations around AI are full of inevitabilities: the models will get smarter, the systems more integrated, the tools more ubiquitous. But inevitability is a story sold by the winners — those who will weather the disruptions, and shape and harness them for their own profit and vision. Because the AI trajectory that we are on is not a miracle of engineering. It’s a function of market concentration and the absence of constraints on corporate power.

The concentration of power in a few giant tech firms is not incidental to the AI revolution—it is its condition. Amazon, Apple, Microsoft, Meta, and Google are embedding AI into the fabric of their products—devices, browsers, operating systems, productivity suites, cloud platforms. They each have some type of partnership with or investment in the AI unicorns (startups such as Anthropic, OpenAI, and Perplexity valued at more than a billion dollars), giving those companies access to technology and enabling them to raise venture capital according to the same logic. With each new release, market-dominant corporations integrate AI tools deeper into the platforms and services that shape our lives. Not because users demanded it, but because integration ensures lock-in. And they don’t just build the tools—they control the hosting and distribution channels, set the defaults, own the data, and gradually define the contours of imagination itself. And because these corporations monopolize their markets, they don’t need our permission. Integration becomes inevitability when opting out becomes functionally impossible.

Although the Big Tech business models vary, their trajectory is the same: own the infrastructure, capture the data, dominate the market. For most, this means vertical integration—owning the devices, the operating systems, the cloud infrastructure, and now the AI layers built on top. It means ignoring copyright and privacy laws to keep mining data and appropriating content. It also means establishing deep financial and technical dependencies through exclusive partnerships, developer ecosystems, content licensing deals, and cloud computing requirements that function more like territorial claims. Several market-leading AI models, chatbots, and brain-computer interfaces were developed by corporate tech platforms known for their roles in promoting conspiracy theories and hate speech, genocide, mental health harms and addiction.

Thus far, training the largest and most capable models has taken enormous volumes of data—often used without consent, credit or compensation—and computational power on a scale only the richest firms could afford or obtain access to. And the returns are potentially limitless: whoever builds the dominant models and gets the most users gets to control the terrain on which everyone else must operate.

Meanwhile, the value generated—economic, behavioral, political—is captured almost entirely by the corporations at the top, as is the value we co-create through our use of AI systems, the data from which is used to train future iterations that are then sold back to us as inevitable. And the deeper the integration, the better the prediction, the harder it becomes to take another road or to shape an alternative future. The harder it becomes to resist the prospect of our thoughts, emotions, and proclivities being monitored, datafied and sold to the highest bidder.

The result is a sharp asymmetry between those who shape the future of AI and those who live inside it.

If we want something different, voluntary commitments won’t get us there. We need to legally define the rules of the road — the parameters of competition as well as what is off limits — and root these rules in human rights law and competition policy so that we have access to remedies. There are no natural laws that require AI to be built this way. But there sure are powerful market incentives that make it difficult to build otherwise.

We already know that the choices we encounter online are shaped not just by what we want, but by what maximizes profits and forestalls regulation for the Silicon Valley Big Tech cabal. The corporations who built this world — who structured the defaults, captured the data, routed our attention — have made it very difficult to leave. Yet even as these systems integrate into our homes, schools, work, relationships, even as opting out becomes impractical, humans retain agency. Not limitless freedom, but the kind of constrained agency that still shapes systems through design, regulation, collective refusal, and an insistence on alternatives. The path we’re on was built by decisions, omissions, and incentives, and it can be redirected the same way.

That redirection begins with constraints. On surveillance. On manipulation. On market power. And on the capacity of private corporations to redesign cognitive space for profit.

Reimagining governance to reclaim agency

The path forward isn't one of technological rejection but of democratic reimagining and the dismantling of concentrated corporate power that undermines democratic governance and human liberty. Imagine foundation models treated not as corporate assets but as public utilities, subject to democratic oversight and public interest obligations. Imagine a technological environment where we can freely interact with AI chatbots and companions without worrying that what we share will be used to manipulate us. Imagine a future where we can have nice things without trading our mental privacy, dignity, and humanity in exchange.

This would be a world where the data we generate from interacting with our digitized world is governed as a public good rather than corporate private property. Where creators are fairly compensated for their work used in AI training and journalism and creative industries thrive alongside technological innovation. Where data minimization and meaningful consent are the norm, and freedom of thought and cognitive autonomy are bedrock rights guaranteed by law and through strict regulatory frameworks for brain-computer interfaces, biometric, and neural data collection.

Just as important is ensuring that humans can still choose to think, relate, and create without going through AI systems. This will require intentional policy: protecting public education from personalized “learning agents” that offload cognitive effort while lacking safeguards for the privacy and mental autonomy of students; preserving non-AI pathways in employment and creative industries; building infrastructure for human-centered expression that doesn’t depend on algorithmic mediation by unaccountable corporations.

Because no matter how advanced these tools become, we must retain the ability to opt out and the right to mental privacy, cognitive liberty and freedom of thought. To decide, at the most basic level, what kind of cognitive world we want to live in.

This requires more than just technical solutions, though there is certainly a role for those. It demands a fundamental rethinking of how we govern these technologies and the digital detritus they secrete. It requires us to claw back power from the corporations whose leading role in this Mystery AI Hype Theater has convinced too many that their version of the future is desirable and inevitable. And it necessitates understanding the challenges before us to ensure that the transformation underway serves the public interest rather than corporate profit.

This means redefining what kinds of power AI systems are allowed to exercise over people. It means placing legal limits on cognitive manipulation—particularly in contexts like education, employment, health, or politics. It also means requiring that any system serving as a personal assistant, coach, advisor, or companion be governed by strict duties of care and loyalty to the individual it serves, not to advertisers or corporate owners.

The next lever is structure. The vertically integrated AI stack, where the same handful of firms own the infrastructure, the models, the interfaces, and the markets, must be dismantled. Ownership of core models could be separated from distribution platforms. This would return some power to users and to other market participants that want to offer alternative, more rights-respecting AI systems and applications. No single firm should control both the data and the application layer of a system that mediates how billions of people think and communicate without robust safeguards on how they can use that power. Setting rules for how large, gatekeeper corporations behave toward the companies and institutions that depend on their services is critical to protect cognitive liberty and enable alternatives to emerge. Perhaps the logic of utility regulation, applied to the internet’s infrastructure in an earlier era, should be considered.

Today’s laws weren’t built for networked digital systems, much less generative and agentic AI, so we need to rethink how we define markets, measure dominance, assess harm, assign liability, and prevent firms from turning rights violations into business advantages.

These changes will not come from industry. Nor will they come from soft law initiatives that frame ethical AI as a voluntary commitment or a branding strategy. Norms matter, but they are powerless against market incentives. We need hard law and regulation: legally enforceable rights, duties, and structural constraints that reshape what kinds of AI can be built. Prohibitions against certain forms of interference, particularly when the line between influence and control is blurry. Fiduciary duties for AI systems that mediate thought, emotion, and agency, paired with guardrails against business models built on pervasive surveillance or monopolization. Liability regimes that clarify responsibility for harms, don’t grant exemptions for generative AI outputs, deter corporations from releasing unsafe models and products to the public, and give people access to remedy. Structural separation between those who make models and those who deploy them. Market rules that favor open ecosystems over vertical empires.

We also need laws affirming what should be inviolable: the right to form and hold beliefs without coercion or manipulation, and the right to privacy in our cognitive spaces. The right to be free from technologies that nudge, predict, and optimize until the self becomes a simulation. Such freedom cannot survive in a world where a handful of Silicon Valley companies and state-backed Chinese firms control the platforms, the devices, the interfaces, and the intelligence. Where the human mind is not protected, but mined.

Forging a different path requires imagination — a vision of what it might look like to develop AI not for domination, but for human liberty and flourishing. To develop technologies that respect human limits rather than exploit them. That are governed by democratic values, not shareholder interests. If OpenAI’s Sam Altman wants to make an AI model that documents and remembers everything in your life, we must not only have ironclad privacy protections for the information we share, but also requirements that the system serves us, not its corporate overlords or shareholders. If Elon Musk and Mark Zuckerberg want to read our minds, we need to have robust protections and guidelines for how and when such data can be collected, used, and stored as well as robust liability for misuse.

If that sounds far off, consider that the system we live in now was designed. It wasn’t inevitable. It was built by people and businesses, under particular conditions, with particular goals. That means it can be rebuilt. The constraints that seem unchangeable today – the dominance of Big Tech, the capture of data, the absence of user power – were cultivated by trillion-dollar corporations in a policy vacuum. They can be reversed.

But that will be possible only if we’re willing to name the problem for what it is: a political economy that treats human thought as raw material, concentrating power in unaccountable corporations whose business models are corrosive to both democracy and human liberty.

If decisions about our AI-inflected future remain in the hands of those beholden to markets that reward extraction and scale, or to authoritarian surveillance states, the future will feel inevitable, because the conditions for imagining alternatives will have disappeared.

But there is nothing inevitable about it.

There is still time, barely, to design systems that invite diversity of thought rather than collapse it into averages and subject it to algorithmic manipulation. That cultivate critical thinking and creativity rather than replacing them. And that protect the conditions under which people can think freely, together and apart.

Freedom of thought isn’t just a shield against intrusion, it’s the ground on which we build everything else: the right to express, to dissent, to believe, to learn, to love. It requires both negative liberties — freedom from manipulation, surveillance, coercion – and positive ones — the capacity to imagine alternatives, to seek truth, to shape our own opinions and preferences.

We are not yet post-human, but the terms of our humanity are being negotiated in real time through code and contracts, through mergers and acquisitions, in corporate offices, and on regulatory dockets. The outcome will depend not on the brilliance of the models, but on whether we insist loudly, stubbornly, clearly, that our minds are not for sale.
