The Public Doesn’t Care About Your Tech Policy, And That’s A Problem

Daniel Stone / Dec 4, 2024

The blame game over why Vice President Kamala Harris’ 2024 campaign failed to resonate with Americans’ urgent concerns is already well underway. But one fact is clear: President-elect Trump connected with people at a deep, emotional level that many elites do not understand. That connection now gives him the power to reach into America’s institutional core – potentially transforming the structures that define how Americans live and interact with one another.

This result is a wake-up call for anyone advocating policies aimed at improving human well-being. Across 15 years in politics, I’ve seen it repeatedly: brilliant minds converge on policy but miss the crucial step of grounding it in the real concerns of everyday people. Climate and public health advocates frequently fall into this trap, but technology policy advocates are the worst offenders. The abstract nature of tech policy allows a comforting distance from public pressure, often resulting in toothless, aloof proposals. However, no such buffer exists in health, education, or employment—the public won’t tolerate ideas that don’t tangibly deliver.

Over the past year, I've spoken to nearly 100 politicians around the world—from Sacramento to Whitehall to Brussels—and a consistent pattern emerged. Despite consensus among leaders on the need for greater transparency and accountability regarding AI and other new technologies, they often hesitate to act without clear signals of public pressure. A forthcoming study I conducted at Cambridge confirms a troubling trend: advocates for human well-being frequently use language that inadvertently undermines their own case, just when their voices are needed most.

When asked to identify the country’s most important problem today, fewer than 0.5% of Americans name new technology’s societal impacts. Instead, as of October 2024, 43% are preoccupied with immediate economic concerns like rising costs and inflation. In the UK, 50% see these same pressures as decisive to their vote, followed by health and migration. It’s the same story across the EU and in countries like Australia: people are focused on getting their kids to school, not debating algorithmic transparency.

It’s a strange disconnect. These tools are already shaping our lives in tangible and consequential ways: companies and governments use them to decide whether you get a job, a credit card, or a place to live. In the wrong hands, they can enable serious harm, including terrorism. In the right hands, they promise to cure diseases, invent cheaper medicines, accelerate clean energy, and cut household costs and emissions.

How we regulate these tools will define the society we create. This isn’t just a tech issue – it’s a choice about the future we want to build together. Holding companies accountable and demanding transparency helps ensure people can afford groceries, secure a home, or see a doctor when they need to. Each decision shapes whether our systems advance fairness or entrench inequality, but advocates risk losing sight of this by hiding behind jargon, turning real challenges into academic debates. Doing so may impress colleagues at conferences, but it saps the community of the energy needed to drive change. To be clear, this doesn’t mean focusing solely on near-term harm or oversimplifying your message. Instead, it’s about enriching the emotional resonance of your argument and drawing clear links between today’s decisions about future systems and present-day concerns.

Without the public pushing for action, tech reform stalls, gets diluted, or stays symbolic. When critical moments come, efforts falter—just as they did in California, where Governor Gavin Newsom saw no political reward in signing the AI bills that reached his desk. The same story is unfolding in the UK, where the Starmer Government is delaying its pledged UK AI Act, and in the EU, where the AI Act’s strong stance is undermined by weak or compromised enforcement. Meanwhile, the US Congress failed to enshrine in law any of the regulatory ideas that arose from President Biden’s executive order – an order Trump has vowed to revoke.

As politicians waver, torn between advocates’ ideals and voter indifference, industry leaders are moving swiftly to shape the future on their terms. For them, winning over key politicians and setting the rules of the game is as crucial as the technology itself. They’re spending big to shift the fundamental incentives of politicians in their favor. In 2022, 158 organizations lobbied Congress on AI; by 2024, that number had surged to 462—a 192% jump. In 2023, 86% of high-level AI meetings with European Commission officials involved industry.

These companies grasp what socially minded advocates often miss: this fight won’t be won through insider lobbying alone—it’s about mobilizing public opinion. The crypto industry, for instance, spent $245 million on the 2024 US elections—nearly half of all corporate election spending—targeting key races with emotionally charged messaging to weaken tech accountability. In California, corporations spent over $5 million to defeat a bill that would have increased accountability for harm, making it the state’s costliest advocacy campaign this year. A wave of senior lobbying hires by AI firms suggests this is fast becoming the sector’s default way of doing business.

The Harris campaign's struggles and President-elect Trump’s success show that unless we connect our proposals to people’s real, urgent concerns, we will not win them over. If we believe these technologies will reshape our lives, then the stakes are too high for abstract language and jargon. We know it’s possible; the climate sector overcame the same challenge by emotionally connecting with the public. Regulators and tech advocates must offer a hopeful vision of how our ideas can create happier, healthier, and more prosperous lives. The window for change is narrowing—either we build momentum by speaking to people's realities, or we surrender the future to those prioritizing profit over public good.

Authors

Daniel Stone
Daniel Stone is the Executive Director of Diffusion.Au and a fellow with the Centre for Responsible Technology Australia. His research with the Centre for the Future of Intelligence at the University of Cambridge recently explored global AI policy narratives.
