AI Accountability Starts with Government Transparency

Clara Langevin / Mar 20, 2025

Clara Langevin is the AI Policy Specialist at the Federation of American Scientists.

Imagine this: You’re a veteran who has served your country, and each month, your well-earned VA benefits help cover your rent, groceries, and medical bills. But one day, after updating your direct deposit information, your payment doesn’t arrive. You check your bank, call the VA, and after hours on hold, you’re told your benefits were flagged as potential fraud—by an artificial intelligence system. The Payment Redirect Fraud (PRF) model, designed to catch fraudulent direct deposit changes, mistakenly flags your legitimate update, triggering a false positive and a bureaucratic nightmare.

This software issue isn't merely a frustrating glitch; it's a failure of accountability in systems that increasingly govern our lives, risking severe consequences for those who rely most on government support. Luckily, we know of the PRF model and its possible impacts because the Department of Veterans Affairs reported it within its AI Use Case Inventory. Similar concerns extend across the government. For example, the Department of Homeland Security's AI Use Case Inventory flagged 29 high-risk AI systems and documented mitigation efforts for 24 of them, ensuring they wouldn't inadvertently harm the people they're meant to serve. Without such transparency, these risks could go unnoticed and unaddressed.

When AI makes decisions that impact people's lives, those affected deserve to know how these systems work, what risks they pose, and what safeguards exist to prevent harm. Without transparency in how AI models are used for government services, Americans have no way to challenge wrongful decisions, and agencies have no clear incentive to fix them. That's why maintaining substantial federal AI use case inventories is crucial: they shine a light on AI failures before they become widespread crises and provide the transparency needed for individuals to challenge flawed decisions.

These inventories also benefit the government, enabling agencies to identify AI implementations that build public trust while tracking systems that pose reputational, operational, or other adverse risks. By documenting mitigation efforts, agencies demonstrate their commitment to responsible AI deployment and can refine their AI strategies, creating a virtuous cycle of better governance and more trustworthy technology. Transparent AI inventories also go beyond government accountability: they clarify federal AI priorities, allowing businesses, universities, and technology providers within the federal innovation system to develop new products and services more effectively.

Government AI Inventories Established During Trump’s First Term

The agency inventories were established during President Trump’s first term by Executive Order 13960 and codified in the Advancing American AI Act as part of the National Defense Authorization Act (NDAA) for Fiscal Year 2023. Additional guidance from the Office of Management and Budget (OMB) (M-24-10) under President Biden further strengthened these inventories by standardizing AI definitions and requiring agencies to collect information on potential adverse impacts. By the end of 2024, federal agencies had dramatically improved their reporting on AI systems, with AI use case inventories capturing over 1,700 AI applications, a 200% increase from the previous year.

At their current level of detail, AI Use Case Inventories promote government transparency and accountability through annual documentation of key system information, including agency ownership, purpose, outputs, and commercial status. They also track whether systems impact rights or safety, whether agencies withhold them from public reporting, and procurement details, including whether they support High Impact Service Providers. These inventories also capture critical technical aspects of AI systems, documenting information about training and evaluation data, use of demographic information and Personally Identifiable Information (PII), custom code requirements, and infrastructure needs. This detailed technical documentation ensures agencies have a clearer understanding of their AI systems' functionality and dependencies.

For rights- or safety-impacting AI systems, agencies must provide enhanced documentation on risk management practices, impact assessments, test environments, human oversight in decision-making, and ongoing performance monitoring. Agencies must also specify whether they notify affected populations about AI use, offer mechanisms to contest AI-driven decisions, and allow individuals to opt out.
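
To make the reporting categories described above more concrete, below is a minimal, hypothetical sketch of what a single inventory record could look like if captured as structured data. The `AIUseCaseEntry` class, its field names, and the example values are illustrative assumptions based on the categories listed in this article, not the actual OMB reporting schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIUseCaseEntry:
    """Hypothetical record for one federal AI use case.

    Field names mirror the reporting categories described in this
    article; they are illustrative, not the official OMB schema.
    """
    agency: str                       # owning agency, e.g. "Department of Veterans Affairs"
    name: str                         # use case name
    purpose: str                      # what the system is intended to do
    outputs: str                      # what it produces (flags, scores, rankings)
    is_commercial: bool               # commercial product vs. custom-built
    rights_or_safety_impacting: bool  # triggers the enhanced documentation below
    withheld_from_public: bool        # excluded from the public inventory
    supports_hisp: bool               # supports a High Impact Service Provider
    training_data_description: str    # training and evaluation data sources
    uses_pii_or_demographics: bool    # demographic information or PII involved
    requires_custom_code: bool
    infrastructure_needs: str

    # Enhanced documentation, relevant only when rights_or_safety_impacting is True.
    risk_mitigations: Optional[str] = None    # documented risk management practices
    impact_assessment_done: bool = False
    human_oversight: Optional[str] = None     # how humans stay in the decision loop
    ongoing_monitoring: Optional[str] = None  # performance monitoring plan
    notifies_affected_people: bool = False
    contest_mechanism: Optional[str] = None   # how individuals can challenge decisions
    allows_opt_out: bool = False


# Example entry with invented values, loosely based on the VA scenario above.
prf_entry = AIUseCaseEntry(
    agency="Department of Veterans Affairs",
    name="Payment Redirect Fraud (PRF) model",
    purpose="Detect fraudulent direct deposit changes for benefit payments",
    outputs="Fraud risk flag on direct deposit updates",
    is_commercial=False,
    rights_or_safety_impacting=True,
    withheld_from_public=False,
    supports_hisp=True,
    training_data_description="Historical payment and account-change records",
    uses_pii_or_demographics=True,
    requires_custom_code=True,
    infrastructure_needs="Agency cloud environment",
    risk_mitigations="Human review of flagged transactions before payments are held",
    impact_assessment_done=True,
    human_oversight="Fraud analysts confirm each flag before benefits are paused",
    ongoing_monitoring="False-positive rate tracked and reported regularly",
    notifies_affected_people=True,
    contest_mechanism="Appeal through existing VA benefits channels",
    allows_opt_out=False,
)
```

A machine-readable structure along these lines would make it straightforward to publish inventories as open data and to filter for rights- or safety-impacting systems, though the official reporting format may differ.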

DOGE’s AI Push and the Continued Need for Transparency

As the new administration accelerates AI adoption across federal agencies, the importance of these accountability measures is becoming increasingly apparent. The Department of Government Efficiency (DOGE) has signaled a push toward AI-driven government, with potential applications including detecting fraud and waste in federal contracts. According to The New York Times, Thomas Shedd, a former Tesla engineer now heading technology efforts at the General Services Administration, intends to create a central database of all government contracts and employ AI to identify potential redundancies and budget-reduction opportunities.

This push for AI-driven oversight underscores why AI use case inventories will be increasingly vital as these technologies take on greater roles in government decision-making. Without clear documentation of how these new AI systems operate and what data they rely on, neither agencies nor the public can assess their accuracy or reliability. Poor oversight could result in contracts that serve essential public needs being erroneously flagged for cuts. With OMB required to update its AI guidance in mid-March, transparency must remain a top priority, and agencies must continue to provide the public with sufficiently detailed AI inventories.

Beyond the federal government, policymakers at all levels, including state and local leaders, should resist any efforts to dilute or weaken AI inventories and instead expand them, ensuring that every AI system used in government is adequately documented, evaluated for risks, and held to the highest standards of fairness and accountability. AI use case inventories are already serving as a model for state and local governments, as seen in California’s AB 302, signed by Governor Newsom in 2023, which mandates a comprehensive inventory of high-risk AI systems, and in the City of San Jose’s own AI inventory initiative.

The leadership shown by Newsom is a good reminder that, in the absence of federal action, states and even other governments should take action to improve transparency around AI use. Should the Trump administration roll back oversight mechanisms like AI inventories through its review of M-24-10 and other related federal AI policies, the responsibility will fall on state and local governments to demonstrate that public sector AI can be transparent and trustworthy.

Government transparency on AI use and systems is not just a bureaucratic exercise—it is a fundamental component of maintaining public trust, responsible governance, and continued AI innovation.
