Why US States Are the Best Labs for Public AI
Alex Pascal, Nathan Sanders / Apr 3, 2025
January 26, 2024 — New York Governor Kathy Hochul makes announcements related to Empire AI, a consortium and public computing investment based at the University at Buffalo.
Since the arrival of generative AI a few years ago, there’s been a lot of lip service and mythologizing from Big Tech and tech investors about the unprecedented potential for AI to benefit humanity. We are now witnessing concerted, widespread efforts by corporate America to adopt AI into its businesses, as well as the attempted “AI-fication” of the federal government by Elon Musk and DOGE in the name of business slogans like operational efficiency, performance improvement, and resource optimization. However, despite the remarkable advancements of AI, we have yet to see it actually solve big societal problems, help struggling, vulnerable people in their everyday lives, or make government work better as promised. So far, the use of AI in government has been defined more by secrecy and fear than by enthusiasm and efficiency.
This should not be surprising. Private companies are driving advances in AI, and their core incentive is to deliver shareholder returns, not serve the public interest. Moreover, because of the corrupt, self-dealing conflicts of interest and general incompetence rife within the Trump Administration and Elon Musk’s DOGE, we have no faith that federal AI as currently construed will improve government functioning and deliver benefits for Americans—let alone protect our fundamental rights. Instead, it will serve those who already have power, and amplify that power.
AI in the US doesn’t have to be this way—and shouldn’t. But if AI has a chance of bettering Americans’ lives over the next four years, especially those without power and resources, it will come from responsible implementation at the state level.
AI has useful applications that can make people's lives better. But the sophistication of the technology itself is not what will make a difference. After all, AI is just a technology. It’s how AI is used, and by and for whom, that matters. Therefore, to benefit people and create public goods, AI must be developed deliberately, deployed carefully, and monitored rigorously, with the right incentives, safeguards, accountability, specific problem-driven use cases, and robust oversight. We believe that states willing to pursue this approach are the best testing grounds available in the United States for AI.
US states can and should lead the charge to develop and deploy AI in the public interest. States can be laboratories of twenty-first century democracy and examples for future US administrations and the world if they pursue public AI investments and applied projects that use AI to demonstrably improve their citizens’ lives in concrete ways: to provide superior services accessible to more constituents, to model fair treatment of workers, and to be more responsive to constituents, including about the settings where people do—and do not—want to see AI used.
The problem of incentives
Notwithstanding the good intentions of many employees of the AI labs, all the leading generative AI systems today are financed by investors and developed by corporations driven by return on investment and subject to severe economic pressures. The saga of OpenAI’s heel turn from idealistic non-profit to $300 billion shareholder asset is a case in point.
These private interests have every incentive to prioritize their shareholders over ordinary citizens, workers, and families. Like the search engines and social media platforms before them, AI developers are incentivized to bias their products and services in favor of advertisers, corporate affiliates, or authoritarian parties. (This is no coincidence; Big Tech dominates the AI industry.) For example, machine translation developers are incentivized to tune their models to serve communities with more market power, not greater accessibility needs. And no matter how proficient AI models may be at facilitating public debate, can we trust them to do so impartially? Consider the example of Musk-owned Twitter, now rebranded X and formally a subsidiary of his AI company with an explicitly ideological agenda. Irrespective of the integrity of their owners and leaders, the fundamental reason these companies will not and cannot serve the public interest is their business model: rapid hyperscaling throughout society and breeding user dependence on their systems en route to private profit.
The problematic incentives of the AI industry have driven many academics, commentators, and some policymakers to propose an alternative path: Public AI. This is not an alternative technology but rather an alternative approach, with different incentives, for developing AI technologies. In a Public AI framework, AI is developed or cultivated by government agencies to provide AI models and services as public goods and to perform a set of tasks for government and/or citizens, with democratic oversight and accountability. Either public sector entities and employees develop the AI systems themselves on public infrastructure, or they support and partner with a non-profit civic ecosystem to do so. Public AI should be driven by real-world problems experienced by citizens and shaped by affected communities in its design and deployment.
The US is a leading producer of open-source AI models, such as the Allen Institute’s OLMo 2 project. These freely reusable and transparently produced systems are a solid basis for public-oriented projects, but they are not Public AI. They are largely produced by private entities with limited public input and are not subject to public oversight or accountability.
Within the US, the closest thing to government-led Public AI has been the Biden administration’s National Artificial Intelligence Research Resource (NAIRR) Pilot program. However, its focus has been on supporting the academic research ecosystem, particularly by providing compute resources, and its future is uncertain under the Trump administration. While useful, NAIRR’s mission of basic research does not prioritize developing, let alone operating, trustworthy, applied AI models that might deliver concrete goods and services that matter in people’s lives: easing traffic congestion, renewing a driver’s license, accessing medical services and public benefits, bolstering rights-respecting public safety measures, enhancing and scaling targeted K-12 educational interventions, harmonizing, streamlining, and enforcing regulation, and other uses in public administration. Public AI needs to serve citizens, not just scientists.
In this regard, Public AI remains more an aspiration than an alternative, for now. Thus far we have only demonstrations of its potential. Singapore’s SEA-LION model, developed by a public agency at a relatively low cost using open source components and offered as free and open source software, aims to solve the problem of under-representation of Southeast Asian languages in corporate AI models. The government of Estonia has deployed AI systems to improve a range of government services, from health care to public transit. These projects show how even small governments and agencies with resources far less than the Big Tech-backed AI giants can build and deploy practical AI systems for everyday public benefit—under the right conditions. The key here is that these governments are using AI systems narrowly and deliberately to help solve discrete problems. AI is not seen as the solution to every problem but as a possible part of a solution set to a specific problem. By contrast, the one-size-fits-all AI solutionism of DOGE is a disaster in the making for the US.
Why states should lead the way in US Public AI
What makes Public AI enticing is 1) sidestepping market incentives that discourage the provision of public goods and services; and 2) the rapidly decreasing cost of developing fit-for-purpose AI systems.
Industry and government alike often talk about making AI “trustworthy,” but the larger question is how to make AI worthy of the public’s trust. We’ve already discussed why, absent significantly more public oversight, democratic accountability, and legal liability, the private sector is unlikely to produce AI worthy of that trust. Yes, others might say, but no one trusts the government either, so why would we trust AI from government? Indeed, trust in the US federal government is alarmingly low.
However, the closer government gets to the people, the more it is trusted. A 2023 Gallup survey reported much higher levels of trust in state and local governments, 59% and 67%, respectively. The reality is that states and localities provide the governance that matters most in our everyday lives, and their leaders are held accountable for delivering accordingly. This creates a much greater pragmatism, a more immediate and meaningful sense of accountability, and a better set of incentives for applying AI responsibly. What’s more, the majority of government spending in the US is done at the state and local level; even a large fraction of federal spending comes in the form of transfers to state and local governments.
Taken together, it makes perfect sense that states would lead the way in designing and applying AI to public services and benefits delivery.
Government provision of Public AI should take a federated approach similar to the provision of other public goods like healthcare and education. State governments should take the lead in experimenting with AI models and fitting them for purpose where and when appropriate to help address discrete problems and needs of their constituents. The federal government should support states through grantmaking and basic research, funding innovation through existing institutions like the National Science Foundation and state universities. (The Trump administration seems unlikely to fulfill even this limited role, given its aggressive slashing of funding for scientific research and its hostile orientation toward institutions of higher education, so philanthropy will likely have to intensify its support.)
Regardless of how AI development is funded, states will need to be at the forefront of practical applications of AI in the public sector—as well as instituting guardrails and accountability for them. State leadership in AI development allows public services to be locally optimized for the use cases that are most needed and politically and culturally accepted in each state. Leadership, however, does not mean full-steam-ahead cultivation and adoption of AI across the state and by state government. It entails a deliberate, problem-first, values-driven approach to AI that treats it as a tool in the policy toolkit and creates a governance regime with meaningful accountability (ideally based on the Blueprint for an AI Bill of Rights and March 2024 OMB memo on government use of AI) that centers people and the planet.
Many are champing at the bit. More than 10 governors from both Republican- and Democratic-controlled states have issued executive orders to study AI use in running government operations and providing government services and benefits. Numerous states have passed or considered AI legislation to explore and govern AI use in and by the state. New York has established the Empire AI consortium of public and private universities, using public funds to help establish an energy-efficient AI computing center to drive statewide innovation, research, and development of AI technologies. While NY’s initiative is largely focused on basic research, for now, Massachusetts is pursuing a bold vision of applied, values-driven, sustainable AI to serve the citizens of the Commonwealth. Backed by an authorized $100M public investment, a new public sector-led AI Hub will work to drive job-creating innovation and applied problem-solving, workforce development, responsible use and deployment, and improved public services.
And state government need not—and should not—go it alone. When there is mutual interest and common purpose, public partnership with philanthropic initiatives, civil society organizations, and the private sector can support the development of a responsible AI ecosystem that provides public goods for people, solves problems for communities, and creates new jobs. Moreover, regional partnerships among several states could work for AI investments that may have larger costs, require cross-jurisdictional collaboration, and/or could benefit multiple states. States have a history of working together on traditional infrastructure projects such as energy grids, water resources, and greenhouse gas emissions management. Similar state partnerships for problem-focused AI development could help smaller states access greater resources and distribute the costs and risks of AI development and implementation to generate shared public goods.
Costs and benefits
In every state where Public AI is proposed and pursued, there will be legitimate considerations about cost—as there always are with significant public investments. States should make these decisions mindful of the cost/benefit tradeoffs and solutions to prevent and mitigate adverse impacts on people and communities.
When considering the expense of state-level public AI, think of it more like running a healthcare exchange than curing cancer. Governments don’t have to invent new technologies, but they need to operate them effectively—which is not easy. Spending millions to recruit the top AI researchers and hundreds of millions to push forward cutting-edge model performance can remain the province of private AI developers. Sophisticated though they may be, many of the most capable components of AI have become commodified. You don’t need the resources of a superpower, or even a nation-state-sized corporation, to train them. For Public AI, relatively modest amounts of compute and open-source models and tools provide sufficient starting points, and small teams with access to the right data can tailor those capabilities to specialized applications relatively quickly. The resources required to develop and deploy AI are dropping with startling speed, which means that more organizations—including cash-strapped public sector agencies and non-profits—can design and apply it.
Furthermore, state investment in AI could net out in favor of the public purse, particularly through optimization of existing public resources or increased tax revenue from more entrepreneurship and better tax enforcement at the state level. We expect many states to pursue AI deployments in the hopes of automating work and saving costs, and that is possible. But such plans should consider carefully the long and challenging history of technology adoption in government, and be undertaken with a realistic understanding of their scope, costs, and potential impacts.
In particular, there will be vigorous debates over whether AI integrations in state government should displace humans—state employees. Cutting public sector headcount may be part of the appeal for certain states, as it has clearly motivated the Trump administration. We would rather see AI applied to empower civil servants and public institutions with new capabilities and to improve how the government operates and serves the public. We advocate a calibrated walk-before-run approach in which AI tools are tested, piloted, and then deployed with robust monitoring and oversight to augment the performance of public sector workers and processes. States should cautiously and incrementally increase AI use only after progressively demonstrating its capability on simpler tasks. Taking this approach may lengthen the time to achieve a positive return on investment, but it is less risky than a more aggressive approach that faces harsh opposition or fails catastrophically in the initial deployment.
The recent federal example of Elon Musk’s DOGE approach of slashing and burning the federal workforce and replacing it with automated algorithmic systems in the name of so-called efficiency and imagined cost-savings demonstrates the peril of the sprint-before-crawl path. As the harms and failures of this inhumane and short-sighted authoritarian approach mount, American taxpayers will be left footing the bill. With their expertise, experience, and empathy, people remain central and indispensable to good government and, vitally, to the effective and responsible use of AI by government.
When implementing AI, governments should also weigh the medium to long-term costs of procuring private sector alternatives to tools they could build themselves or partner with civil society to develop. In addition to its foundational benefits in trustworthiness and democratic control, Public AI, despite a potentially higher upfront investment, will likely be cheaper over the medium term than private offerings procured through vendors, saving taxpayer dollars and public resources for other uses. And developing systems in-house means that public agencies will understand how they work, be better at adapting and troubleshooting them, and avoid exploitable dependence on private vendors.
Government contracting for software services in the US has a well-deserved reputation for inefficiency. Software vendors are notorious for overcharging government agencies, delivering shoddy products, locking them into contracts, and substantially lagging behind consumer and industrial offerings in functionality. States must not fall prey to the AI snake oil salesmanship of companies with half-baked, ineffective “AI”-branded solutions looking to siphon public funds. Public AI is not a panacea for these problems—public sector software development projects may also overrun budgets and underdeliver on performance. But there are ways for states to avoid or at least mitigate these pitfalls, for example, by tapping non-profit civic tech communities (as in Taiwan) or bringing in tech talent for temporary stints of public service, as the US Digital Service did before DOGE took it over. For these reasons, the starkest opposition to Public AI is likely to come from the AI industry itself. AI companies have quickly become one of the most prominent lobbying forces at both the federal and state levels and will likely resist public investments that could curtail their future bottom lines.
A final factor that makes Public AI more appealing at the state level than at the federal level is political stability. Political polarization and volatility at the national level often create policy whiplash every two years, which makes longer-term technology projects exceedingly difficult to sustain. Most state governments, by contrast, have consistent single-party control of the executive and legislature. Deploying AI in and around the public sector to benefit citizens will necessarily be an iterative process as governments pilot systems and adjust them based on performance as well as employees’ and citizens’ experiences. The more political space and time state governments have to experiment and iterate with AI systems, the better they can ensure that the AI is helping and not harming people.
Contrary to the lofty hype of AI evangelists, AI is not and never will be a panacea. But state governments should consider Public AI development within a framework of democratic accountability to expand the toolkit for serving their constituents, providing public goods, addressing societal problems, and saving taxpayer dollars. State governments’ greater trustworthiness, pragmatism, proximity to people, and political stability make state-level Public AI a far more appealing option at this stage than corporate or federal AI for ensuring that AI is used responsibly to actually benefit citizens, workers, entrepreneurs, and grassroots communities.