Perspective

Mapping California’s AI Tribes, From Optimists to Alarmed Populists

Daniel Stone / Aug 21, 2025

Northwest view up to the pediment, rotunda, and dome of the California State Capitol in Sacramento. (Radomianin / Wikimedia Commons)

On a warm Tuesday evening in Fresno, a retired union organizer and a twenty-something software engineer found themselves in rare agreement: artificial intelligence was going to change everything, and not for the better. Their politics could not be more different — one a lifelong Democrat and the other a libertarian who has never voted blue — but during a focus group, both voiced distrust of the tech companies building the tools.

That session was part of a large research project carried out in May and June of this year, drawing on a new statewide survey, focus groups, and in-depth interviews conducted by Diffusion.Au, TechEquity, Lake Research Partners, and Voss Strategy.

This project marks the most significant attempt to date to capture how Californians are making sense of AI — at dinner tables, in workplace break rooms, and in town halls — and of the burgeoning industry in their backyard. Often, participants found unexpected common ground.

Understanding attitudes in California is crucial: when it comes to artificial intelligence, the state is both a proving ground and a political fault line. It’s home to the companies building the tools, the investors fueling them, and the legislators most actively trying to regulate them — all while its voters watch the AI debate unfold through the lens of their own hopes, fears, and lived experience.

According to the findings, voters are keenly aware of the tools: eight in ten have heard about AI in the past month, and two-thirds have tried one of the popular LLM tools like ChatGPT or Gemini. But beneath that familiarity lies an unsettled mood: a cautious optimism shadowed by deeper unease. People expressed hope that AI might lower grocery or energy bills, improve access to healthcare, or ease the morning commute, yet when asked a single question — “Who do you think will benefit most from these technologies?” — the mood in their responses shifted decisively.

Nearly six in ten Californians answered: corporations and ultra-elites, with Elon Musk, Mark Zuckerberg, and Sam Altman most often mentioned. Fewer than one in five said working- or middle-class families would benefit. As one Republican-leaning voter put it: “This is going to negatively impact the traditional upper classes, too. If you don’t own the corporation, you’re screwed.”

Many said they believed ultra-elite figures were going beyond mere profiteering and were quietly rewriting the rules of society in their own favor, skirting democratic processes and undermining rights and welfare in the process. Jobs and workers’ rights topped the list of their most immediate anxieties, with nearly half of Californians saying that AI is developing far too quickly, driven by corporate interests rushing products to market without oversight. In one focus group, a participant summed up the sentiment bluntly: “I just keep thinking… who is going to step in? Somebody has to do something before it goes too far.”

The Californian AI Compass: Tribes, not demographics

At first glance, one might expect attitudes about AI to map neatly onto party affiliation, age, or income, but our research shows they don’t.

That is why we applied a segmentation strategy that maps not just demographics, but mindsets: what people value, what they fear, and which stories they trust. Decades of behavioral research show these underlying worldviews are far more predictive of political behavior than age, income, or geography, especially when the issue is abstract, fast-moving, and hyped.

This first-of-its-kind analysis produced a detailed map of five distinct AI personas. Each constitutes a coherent “cultural tribe” in Gillian Tett’s sense: a community bound by shared stories, assumptions, and sources of trust. At one end, a group we are calling “Market Optimists” echoes the language and risk appetite of Silicon Valley venture circles, confident private innovation will deliver faster than public oversight. At the other, “Alarmed Populists” embody manufacturing-era anxieties about job loss, remote decision-making, and powerful machines they can neither see nor control. Between them lie groups whose attitudes hinge on concrete issues: economic fairness, workplace surveillance, and whether policymakers can demonstrate real independence from industry.

Understanding these tribes is strategically essential. Policymakers cannot persuade, mobilize, or build a durable consensus without knowing which communities are already aligned with them, which can be won over, and which may actively resist them. This approach makes those dynamics visible. It shows where common ground can be built across seemingly opposed constituencies, and where the fault lines may be too deep to bridge.

The five tribes

  • Market Optimists (14%): This group is predominantly made up of men in their career prime, often in tech or biotech. They saw AI through a lens of personal gain and free-market confidence, and were enthusiastic about innovation but skeptical of government oversight, fearing it might stifle progress. Yet their political clout may be limited: they voted less, organized less, and wielded more economic than cultural or electoral influence.
  • Hopeful Regulators (23%): On paper, this group looked demographically similar to Market Optimists: male, mid-career, educated, and highly enthusiastic about the potential of these new technologies. But they held a fundamentally different worldview and set of values. They were deeply Democratic and politically engaged, and saw government as essential to harnessing AI for the public good. They were optimistic about innovation but believed enforceable rules were necessary to prevent misuse. Legislators looking to rein in the tools have natural allies in this camp.
  • Pragmatic Skeptics (14%): Disproportionately female, younger, and less politically active, this group held one of the most sharply ideological views of power, with clearly defined heroes and villains. They were deeply concerned about AI’s impact on jobs, inequality, and privacy. Their skepticism was sharp but not fatalistic: they were open to being persuaded about the potential benefits of AI if offered tangible solutions, such as enforceable workplace protections or strict data privacy rights. They are a pivotal swing group, waiting for practical assurances before they can be mobilized.
  • Alarmed Populists (19%): Older, highly politically engaged, and deeply anxious, this tribe’s thinking was often emotional, shaped by a strong sense of powerlessness. For them, AI appeared as a mysterious, threatening force, capable of acting on its own, yet unleashed by unknown and unaccountable elites. They gravitated towards science-fiction metaphors and were vulnerable to reactionary narratives. Yet they are not unreachable. What resonated most were concrete stories that spelled out who is doing what and with what consequences — such as how AI affects the price of groceries, the quality of healthcare, or the security of jobs — paired with straightforward assurances of accountability.
  • Cautiously Disengaged (30%): This constituted the largest segment of participants: civically engaged in general, yet disengaged on AI. They are concerned about AI’s future but do not see it as relevant to their own lives or to the political debates that matter most to them. Their outlook was cautious and critical: they did not trust AI’s decisions, and when they did focus on the issue, they consistently called for stronger legal protections and tighter safeguards. They were open to persuasion but unlikely to mobilize until they saw clear, tangible reasons why AI matters to them.

A common thread: mistrust of industry

Despite their differences, most tribes converged on one point: they shared deep skepticism of industry-driven solutions. Even Market Optimists, while broadly supportive of innovation, hesitated to grant tech companies free rein. For Pragmatic Skeptics and Alarmed Populists, industry self-policing was anathema. They rejected voluntary standards outright, regarding them as fatally compromised by profit motives.

Seventy percent of Californians said they want strong, enforceable laws to govern AI, and the California government, far more than Washington, was seen as the institution capable of delivering them. Yet trust in lawmakers was conditional. Voters were blunt in their assessments during focus group sessions and interviews: “Are their campaigns funded by these companies? You can’t serve two masters.” “They’re completely captured by industry.”

This mistrust hardened into a policy agenda that cut across tribes. A majority of people said they want clear labeling of AI-generated content, strict rules against algorithmic bias in hiring, housing, and finance, and a voice in decisions about automation in their workplaces. They wanted guarantees that if jobs are displaced, safety nets would follow. And they wanted legislators who are demonstrably independent: no stock-trading, no capture, and visible accountability.

China as a prism: anxieties begin at home

This mistrust also shaped how Californians responded to stories about China. For some, AI was framed through global rivalry, but even here, interpretations diverged along tribal lines. For Market Optimists, personal gain and innovation were motivators, while China talk actually diminished their enthusiasm. For Alarmed Populists, China confirmed their existential fears of powerful, unaccountable outside forces. For Pragmatic Skeptics, China fears looked like a corporate smokescreen — an easy scapegoat to deflect attention from domestic inequality.

In this way, China is less a foreign-policy concern than a prism refracting voters’ deeper anxieties about power, fairness, and trust at home. Even when people responded strongly to narratives about competition with China, it was rarely about Beijing itself. Instead, it was chiefly about unease in their own workplaces, neighborhoods, and political institutions — conditions that made them more receptive to simple stories about external threats.

California’s moment

California is not just a major AI hub, but also the world’s regulatory testbed. Just as its emissions standards once forced automakers to redesign engines worldwide, the rules it sets today will shape not only who benefits from AI, but how these systems are built in the first place.

What our segmentation shows is that these choices will not land on a blank slate of voters. On this, Californians do not divide neatly by party or income; they sort into tribes defined by values, fears, and sources of trust. That map makes visible where consensus can be built. It is the difference between flying blind and having a compass.

And the message from across those tribes is clear: enforceable protections for jobs, rights, and democracy, and leaders visibly independent from industry. The question is not whether AI will transform the state, but whether the state’s people will have a say in how. The stakes are immense, and the world will take its cue.

Authors

Daniel Stone
Daniel Stone is the Executive Director of Diffusion.Au and a fellow with the Centre for Responsible Technology Australia. His research with the Centre for the Future of Intelligence at the University of Cambridge recently explored global AI policy narratives.
