AI at the Brink: Preventing the Subversion of Democracy
Paulo Carvão, Slavina Ancheva, Yam Atir / Mar 3, 2025
Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI / CC-BY 4.0
The year is 2028. The world’s leading economies are in turmoil as artificial intelligence systems, once hailed as engines of progress, have outpaced human governance. AI-driven financial markets operate beyond regulation, executing trades at speeds incomprehensible to human oversight. AI legal agents flood the courts with appeals and counterappeals, paralyzing the judicial system. Generative AI platforms tailor disinformation campaigns with surgical precision, dismantling electoral processes before governments can intervene. Meanwhile, a handful of oligarchs with exclusive control over the most advanced AI systems command unprecedented influence, bypassing legislatures and setting policies through proprietary governance mechanisms. Democracy, once thought resilient, is crumbling under the weight of unchecked artificial intelligence.
It’s hard not to see a trajectory toward such a world in today’s headlines. Yet this future is not inevitable.
The AI Dilemma: Power, Progress, and Democratic Resilience
Technological advancements have driven economic transformation, redefining industries and human capability. Today, artificial intelligence stands at the center of an unprecedented shift—not just in terms of economic productivity but in its ability to shape governance and power structures. What was once an innovation-driven pursuit of efficiency is now a struggle over control, as AI’s rapid deployment tests the resilience of democratic institutions.
AI’s potential is undeniable—it optimizes industries, automates complex decisions, and accelerates discovery. Yet, its unchecked expansion carries profound risks. The same tools that enhance productivity can also be leveraged to manipulate information, distort political discourse, and consolidate influence within an elite few. Armed with powerful AI systems, private entities are no longer mere stakeholders in governance; they are defining the rules of engagement, often outpacing legislative and regulatory oversight.
These dynamics pose critical challenges. AI-powered financial markets operate at speeds regulators struggle to monitor, threatening economic stability. Automated legal systems process vast amounts of data yet risk circumventing due process and overwhelming courts with algorithmically generated filings. Disinformation crafted by generative AI targets audiences with precision, eroding trust in democratic systems. Meanwhile, the computing resources necessary for leading AI models are concentrated in the hands of a few, reinforcing an imbalance where power and economic gains accrue to those with exclusive access to advanced technology.
Governance structures must adapt swiftly. The question is not whether AI should be regulated but how to design frameworks that foster innovation while upholding democratic principles. Without intervention, AI’s trajectory could shift from being a catalyst for progress to a force that entrenches inequality and undermines democratic institutions. The need for transparent, accountable, and adaptable AI governance has never been greater.
To understand the evolving AI policy landscape, we conducted 49 in-depth qualitative interviews—half with members of Congress and their staff and half with AI industry leaders. These interviews, held under confidentiality agreements, provided first-hand insights into the priorities, concerns, and strategic tensions shaping AI governance. Additionally, we analyzed the 150 AI-related bills introduced in the 118th Congress, applying machine learning techniques to identify patterns and legislative trends. We also reviewed public materials, policy documents, and congressional records, ensuring a comprehensive understanding of the competing forces in AI governance. The result is a research-backed framework that identifies industry fragmentation, maps areas of consensus, and proposes governance mechanisms that balance innovation with accountability. Details of this work can be found in the “Governance at a Crossroads: Artificial Intelligence and the Future of Innovation in America” report.
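For readers curious what such an analysis might look like in practice, the sketch below is a minimal illustration, not the report’s actual pipeline, which is not specified in detail. It assumes the goal is to surface thematic clusters across bill texts, using TF-IDF vectorization and k-means clustering from scikit-learn; the `bill_summaries` entries are hypothetical placeholders standing in for the 150 bills.

```python
# Minimal illustration only: the report does not specify its exact methods.
# One plausible approach to finding legislative patterns is TF-IDF
# vectorization plus k-means clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical placeholder summaries; a real analysis would load the texts
# of the 150 AI-related bills from the 118th Congress.
bill_summaries = [
    "Requires transparency reports for generative AI platforms.",
    "Directs NIST to develop AI risk-management standards.",
    "Restricts the use of generative AI in political advertising.",
    "Funds AI research for national defense applications.",
]

# Turn each summary into a weighted term vector, ignoring English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(bill_summaries)

# Group the bills into thematic clusters (k chosen purely for illustration).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Report the terms that most strongly characterize each cluster.
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[cluster].argsort()[::-1][:5]
    members = [i for i, label in enumerate(labels) if label == cluster]
    print(f"Cluster {cluster} (bills {members}): "
          + ", ".join(terms[i] for i in top))
```

In practice, an analyst would tune the number of clusters and validate the resulting groupings against human coding of the bills before drawing legislative conclusions.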
The Fragmented AI Landscape: Six Industry Segments and Their Competing Visions
Our deep-dive interviews with industry members revealed that referring to AI development as a singular "industry" is misleading. Instead, the AI landscape is shaped by distinct factions, each with its own vision and priorities.
- Accelerationists advocate for rapid AI development with minimal oversight, arguing that regulations stifle innovation. This group, comprising some of the major tech firms, startups, and venture capitalists, prioritizes speed, efficiency, and profitability, often at the expense of ethical considerations and security measures.
- Responsible AI Advocates push for ethical AI development, focusing on fairness, accountability, and bias mitigation. They include policymakers, academics, and corporate AI ethics teams who seek to align AI systems with societal values, though they often struggle to maintain influence in a competitive industry.
- Open AI Innovators promote transparency and accessibility by supporting open-source AI development. Research institutions, independent developers, and nonprofits in this space believe that democratizing AI fosters collective progress. However, they must navigate concerns about intellectual property protection and the potential misuse of open AI models.
- Safety Advocates emphasize AI risk mitigation and the long-term societal impacts of advanced systems. AI safety researchers, regulatory agencies, and policy experts within this group prioritize addressing alignment challenges and existential risks. However, their efforts are hindered by the difficulty of defining enforceable safety standards. Notably, the term “safety” has recently acquired a negative connotation in free-speech absolutist circles; in our interviews, it was used strictly in reference to long-term risks.
- Public Interest AI Proponents focus on ensuring that AI serves the public good and advances accessibility, inclusion, and social equity. This segment, represented by non-profits, public interest organizations, and some government bodies, seeks to develop AI applications that benefit marginalized communities. They often face funding constraints and competition with profit-driven industry players.
- National Security Hawks view AI as a strategic asset essential for national defense, economic stability, and global competitiveness. Government agencies, defense contractors, and security analysts prioritize AI deployment in critical infrastructure and military applications, but their emphasis on security sometimes clashes with concerns over civil liberties and responsible AI use.
Our interviews in Congress confirmed these distinctions, with multiple sources emphasizing that AI governance requires a nuanced approach that accounts for “industry from different parts of the ecosystem.” Some policymakers highlighted the ideological divides within AI development, from the debate over open source versus proprietary models to the aggressive tactics of venture capital firms shaping AI’s trajectory.
Moreover, many companies do not neatly fit into one category. A single corporation may house both accelerationists and responsible AI advocates across its internal divisions. As one researcher at a frontier AI lab noted, “There are people who truly believe AI should serve the public good, and then there are those who are driven by financial incentives to push development forward with little oversight.”
Recognizing these segments is critical to crafting effective AI governance policies. Without this differentiation, regulatory efforts risk being either so restrictive that they stifle innovation or so permissive that they compromise safety and security. Understanding these competing forces is the first step toward a governance model that fosters AI progress while ensuring accountability.
The Dynamic Governance Model: A New Approach to AI Regulation
To navigate these challenges, we propose a Dynamic Governance Model—a policy-agnostic, adaptive framework designed to ensure that AI innovation serves the public interest rather than concentrated private power. This model consists of three core components:
- Public-Private Partnerships for Evaluation Standards – Governments, industry, and civil society collaborate to establish ethical and security benchmarks that evolve with technological advancements.
- A Market-Based Ecosystem for Audits and Compliance – Independent entities conduct real-time AI audits, ensuring ongoing accountability and risk mitigation.
- Accountability and Liability Mechanisms – Clear legal structures define responsibility for AI-induced harms, ensuring transparency in algorithmic decision-making.
This approach ensures that AI regulation is neither overly restrictive nor laissez-faire but instead adapts to the speed of innovation while prioritizing democratic oversight. The report expands on the Dynamic Governance Model and its applications, but one key takeaway is that it represents an approach to policy stewardship that builds on existing institutions and frameworks rather than presupposing additional state bureaucracy.
Rewriting the Future: A Democratic AI Renaissance
Returning to a vision for 2028 and beyond, we see a different outcome. In this version of the future, the world does not succumb to AI-induced instability. Instead, proactive governance frameworks ensure that AI development remains aligned with public interests. The financial markets, once at risk of collapse, are subject to robust AI auditing mechanisms. Judicial systems integrate AI responsibly, using it to improve case efficiency rather than overwhelm courts with frivolous filings. AI-driven media platforms are held to transparency standards, mitigating the weaponization of information.
Most critically, democracy endures—not by resisting AI but by integrating it into governance to strengthen institutional resilience. The Dynamic Governance Model provides a blueprint for how societies can embrace AI’s potential while safeguarding democratic values.
The Path Forward
The trajectory of AI governance is not predetermined. As we stand at the crossroads, the choices made today will determine whether artificial intelligence becomes a tool for empowerment or a force for democratic erosion. By adopting an adaptable, transparent, and accountable governance framework, the US can lead the global AI policy agenda, ensuring that innovation flourishes without sacrificing fundamental democratic principles. The Dynamic Governance Model represents an alternative to both unfettered laissez-faire and more restrictive approaches like the European one.
Contrary to the view that unleashing AI's potential without safeguards is the only way to win the global race, establishing solid foundations of trust in these technologies can accelerate adoption while avoiding a damaging backlash. The future remains unwritten, but one thing is clear: AI must be governed not by the interests of a few but by the needs of all.