Perspective

Want Accountable AI in Government? Start with Procurement

Nari Johnson, Elise Silva, Hoda Heidari / Jul 15, 2025

In 2018, the public learned that the New Orleans Police Department had been using predictive policing software from Palantir to decide where to send officers. Civil rights groups quickly raised alarms about the tool's potential for racial bias. But the deeper issue wasn't just how the technology worked; it was the process that shaped the city's adoption of it. Who approved its use? Why was it hidden from the public?

Like New Orleans, all US cities rely on established public procurement processes to contract with private vendors. These regulations, often written into law, typically apply to every government purchase, whether it's school buses, office supplies, or artificial intelligence systems. But this case exposed a major loophole in the city's procurement rules: because Palantir donated the software for free, the deal sidestepped the city's usual oversight processes. No money changed hands, so the agreement didn't trigger standard checks such as a requirement for city council debate and approval. The city didn't treat philanthropic gifts like traditional purchases, and as a result, key city officials and council members had no idea the partnership even existed.

Inspired by this story and several others across the US, our research team, made up of scholars from Carnegie Mellon University and the University of Pittsburgh, decided to investigate the purchasing processes that shape critical decisions about public sector AI. Through interviews with nineteen city employees across seven US cities (anonymized in our study), we found that procurement practices vary widely across localities, shaping what's possible when it comes to governing AI in the public sector.

Procurement plays a powerful role in shaping critical decisions about AI. In the absence of federal regulation of AI vendors, procurement remains one of the few levers governments have to push for public values, such as safety, non-discrimination, privacy, and accountability. But efforts to reform governments' procurement practices to address the novel risks of emerging AI technologies will fall short if they fail to acknowledge how purchasing decisions are actually made on the ground. The success of AI procurement reform interventions will hinge on reconciling responsible AI goals with legacy purchasing norms in the public sector.

When asked what procurement entails, many people think of a competitive solicitation process, which typically involves a structured review followed by an award decision. Once a use case for AI has been identified, a government initiates a solicitation process where it outlines its needs and invites vendors to submit proposals (a "Request for Proposal," or RFP). City employees then follow structured review processes to score vendors' proposed AI systems and select a winner. The city and the awarded vendor negotiate a contract that specifies obligations for each party, such as an agreed price for a specified time period. In some cities (but not others), all contracts must be approved in a public city council meeting.

Today, most efforts to improve AI procurement target steps in this conventional solicitation process. Groups like the World Economic Forum have published resources to help governments incorporate responsible AI considerations into RFPs and contract templates.

But as we've seen, many AI systems bypass the formal solicitation process altogether. Instead, cities often rely on alternative purchasing pathways. For example, procurement law typically allows small-dollar purchases to skip competitive bidding, so employees can buy low-cost AI tools using government-issued purchasing cards.

Other alternative pathways include AI donated by companies, acquired through university partnerships, or freely available to the public, like ChatGPT. Vendors are also increasingly rolling new AI features into products covered by existing contracts, without notifying the public or city staff. The result is that most resources designed to support responsible AI procurement do not apply to the majority of AI acquisitions today.

While competitive solicitations offer clear benefits for responsible AI governance, many city employees view them as inefficient and cumbersome, and instead turn to alternative purchasing pathways when acquiring AI. This raises a key question: how might local governments establish consistent AI governance norms when most tools are acquired outside the formal solicitation process? Answering it requires looking more closely at who is involved in each type of acquisition.

How local governments organize and staff their procurements

Across the interviewed cities, one of the clearest divides was in which city employees were brought in to oversee each AI acquisition. Some cities had established fully centralized oversight processes, where every software acquisition, AI included, must pass through IT staff who can vet it for quality and risk. In contrast, other cities were largely decentralized, giving individual departments like police, fire, and schools free rein to manage their own IT portfolios.

These governance arrangements have real implications for oversight capacity, and they suggest that a one-size-fits-all reform approach is unlikely to succeed. Some cities have started adopting centralized reviews that bring "AI experts" trained to assess AI risks into every acquisition, enabling more consistent oversight. In contrast, cities with histories of decentralized IT governance face two paths: either train individual departments to assess AI risks, or reconfigure existing procurement workflows to establish centralized reviews that ensure minimum ethical standards are met.

Open questions looking ahead

Advocates have long recognized the potential of public procurement to play a gatekeeping role in determining which technologies are acquired and deployed. The past year has been an especially exciting time for local governments, which have started to integrate responsible AI considerations into their existing public procurement practices through grassroots organizations such as the Government AI Coalition. Our team's research, published at the 2025 ACM Conference on Fairness, Accountability, and Transparency, adds a missing layer to the existing conversation on AI procurement by surfacing how it actually works in practice.

Our research raises key questions that local governments will need to grapple with to establish effective oversight for all AI acquisitions:

  1. How can local governments stand up oversight and review processes for AI proposals that may bypass the conventional solicitation process?
  2. Who within a government has the capacity and leverage to be responsible for identifying and managing the risks posed by procured AI technology?
  3. How might existing procurement workflows be restructured to ensure that the right people are brought in to conduct meaningful evaluation of proposed AI solutions?

We anticipate there is no one-size-fits-all model for how local governments should structure their purchasing processes to promote responsible procurement and governance of AI. But this moment offers a rare opportunity for policy experts, researchers, and advocates to come together to reshape AI procurement (for example, by centering residents' input and participation). Public procurement is where some of the most consequential decisions about public sector AI are made. If we want to understand why an AI system is adopted, and whose interests it serves, we must begin by looking at how it was acquired in the first place.

Acknowledgements: Co-authors Beth Schwanke, Ravit Dotan, Harrison Leon, and Motahhare Eslami contributed to this research.

Authors

Nari Johnson
Nari Johnson is a PhD student at Carnegie Mellon University, where she researches methods to evaluate and govern the risks posed by emerging AI technologies.
Elise Silva
Elise Silva, PhD, MLIS, is the Director of Policy Research at the University of Pittsburgh's Institute for Cyber Law, Policy, and Security, where she studies information ecosystems and conducts tech policy-related research.
Hoda Heidari
Hoda Heidari is the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies at Carnegie Mellon University, where she studies the social, ethical, and policy implications of AI.
