Perspective

OSTP’s Misguided Effort to Deregulate AI

Ankit Khosla, Alice Fisher, Christabel Randolph / Dec 1, 2025

White House Office of Science and Technology Policy Director Michael Kratsios testifies during a Senate Commerce Subcommittee on Science, Manufacturing, and Competitiveness hearing titled “AI’ve Got a Plan: America’s AI Action Plan” on Wednesday, September 10, 2025.

Breaking with a long tradition of scientific advice to the president, this fall the Office of Science and Technology Policy (OSTP) requested information from the public about regulations that might hinder the adoption of artificial intelligence in the United States. With growing concern in the US about the unregulated deployment of AI, this is precisely the wrong question.

The response from the public was swift. The American Council on Education pointed to the need to ensure human oversight of administrative processes and to guarantee strong privacy protections. The Association for Computing Machinery highlighted the growing risks that AI could exacerbate online scams, create deepfake pornography, lead to the apprehension of innocent people, direct lethal weapons and violate personal privacy. It also proposed an enforceable tiered governance approach and a regulatory framework for AI, with mechanisms for collaboration with a broad range of stakeholders. Individual commenters urged the establishment of clear legal frameworks granting individuals ownership and control over their personal data, to ensure that AI policy upholds the dignity, safety, livelihoods and legal rights of all Americans. A letter from a coalition of civil rights groups warned that AI systems used in housing and lending must operate with fairness, transparency and accountability, principles meant to “prevent algorithmic redlining and digital discrimination.”

The essential problem with this request for information is that the OSTP lost sight of its mission. Congress established OSTP in 1976 with a broad mandate to advise the president on “the effects of science and technology on domestic and international affairs,” according to the Federal Register. The mission of the agency is to provide the president and senior staff with “accurate, relevant, and timely scientific and technical advice on all matters of consequence.” That mission requires gathering scientific evidence, consulting experts and providing objective analysis — not advancing predetermined deregulatory conclusions.

Over many administrations, OSTP has examined critical societal and public challenges while advancing scientific innovation. During former President George H.W. Bush’s administration, OSTP developed the country’s first national technology policy, laying the foundations for the US government’s current approach to innovation. The Clinton-era OSTP championed electric vehicles and nanotechnology. Former President George W. Bush’s OSTP spurred the creation of the “science of science policy” as a research discipline, leading to new knowledge about how science works and benefits the public.

The Obama Administration was the first to set out a national policy for AI. In 2016 the White House released a landmark report titled “Preparing for the Future of Artificial Intelligence,” developed by the OSTP and the National Science & Technology Council’s (NSTC) Subcommittee on Machine Learning and Artificial Intelligence. The report surveyed the state of AI, identified opportunities and challenges and made recommendations (e.g. for more coordination across agencies and for public engagement). The Obama-era AI report also called out economic and workforce impacts of AI-driven automation in a companion document: “Artificial Intelligence, Automation, and the Economy.”

These reports included sections on how AI affects jobs, inequality and skills, as well as the need for public participation in shaping how AI is used. The Obama White House framed AI as an area where the public needs to be involved, and where skills, education and inequality were part of the picture.

During the first Trump administration, OSTP built on these efforts and led the “American AI Initiative” that expanded AI research investments, established the first-ever national AI research institutes, released regulatory guidance to govern AI development in the private sector, established guidance for federal use of AI (including prohibitions on unsafe systems) and backed the Organisation for Economic Co-operation and Development (OECD) AI Principles, the first global framework for AI governance.

Much of that work was carried on and expanded by the Biden Administration. The Biden OSTP also held convenings to listen to the American people, to explore how AI can be safely deployed to improve health outcomes for all Americans and to bring together decision-makers deliberating how AI can be used to achieve a better future for all. The focus remained squarely on identifying challenges and developing solutions. Former OSTP acting director Alondra Nelson called for an AI Bill of Rights to ensure necessary safeguards as AI was more widely deployed.

Earlier this year, many science policy experts were alarmed at the administration’s move away from funding basic research and the best ideas across all fields, including computing research. It was therefore welcome to see the Trump OSTP seeking to anchor federal science policy in Gold Standard Science. The Gold Standard Science framework provides powerful guardrails for evaluating the development and deployment of AI systems. Most commercial AI products, especially large language models, would fail these standards. They lack reproducibility, operate as black boxes, obscure error rates and uncertainty and are developed by companies with obvious financial conflicts of interest. Studies show widely used models for credit scoring and risk prediction are built on “pseudo-science,” leading to increasing unfairness and bias.

Until this recent request for information (RFI), there was bipartisan consensus around OSTP’s mission and the need to promote innovation while also developing safeguards. How can OSTP simultaneously demand Gold Standard Science from government agencies while pushing to deregulate an industry whose products fundamentally violate those standards?

The urgency of this work is evident. Recent public surveys indicate widespread American support for AI regulation. A Gallup poll revealed that “80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly.” A Pew poll found that “more than half of U.S. adults (55%) and a similar share of AI experts (57%) say they want more control over how it is used in their lives. And those in both groups worry more that government regulation of AI will be too lax than overly excessive.” Americans fear AI-enabled fraud, workforce displacement, threats to children and privacy violations. They're concerned about deepfake pornography targeting minors, AI chatbots encouraging self-harm and voice cloning used for scams. Congressional hearings, often convened by Republican leaders, have documented these harms.

And the scientific experts that should be guiding the work of the OSTP have made clear the need for action. Leading AI researchers, including Nobel Prize and Turing Award recipients such as Geoffrey Hinton, Yoshua Bengio and Stuart Russell have emphasized the urgent need for enhanced AI safeguards, and Nobel Laureate Daron Acemoglu has explained that regulation is necessary to ensure that the benefits of AI innovation are broadly distributed.

Yet OSTP's RFI treats regulation as the problem rather than the solution, fundamentally misreading the American public's priorities and expert opinion. A remarkable consensus has emerged among Americans, leading scientists and many bipartisan lawmakers that AI requires more oversight, not less. There have been collaborations among US senators on many AI topics, the enactment of AI bills in both red and blue states, the work of state attorneys general (both Republicans and Democrats) to establish guardrails for AI services and widely shared concerns across the US public about unregulated AI.

To further American leadership in AI, OSTP is best placed to promote Gold Standard Science in AI and to identify “accurate, relevant, and timely scientific and technical advice” from relevant scientific experts. First, OSTP should withdraw this RFI, issue a new one and arrange participatory convenings focused on the need to update US law and regulations to address the challenges AI presents. Second, OSTP should gather evidence from leading scientific experts and associations regarding the risks arising from the continued deployment of unregulated AI in the US. Finally, OSTP should work with Congress to update laws to address the challenges of AI, based on Gold Standard Science and scientific evidence, in alignment with the mission of the Office of Science and Technology Policy.

To be clear, there are many efforts underway to explore beneficial applications of AI, from novel organ transplants to making algorithmic decisions more fair. Experts at the National Academies have also produced recommendations for making machine learning applications safer.

Instead of asking how to eliminate AI safeguards, OSTP should ask how to ensure AI development serves the American people. That's the question scientific evidence, democratic consensus and OSTP's own mandate all demand be answered.

Authors

Ankit Khosla
Ankit Khosla is currently an extern with the Center for AI and Digital Policy (CAIDP). He is pursuing a Master of Laws in National Security and Cybersecurity Law at The George Washington University Law School. Ankit is a licensed attorney in Washington DC and Maryland. In 2024, Ankit graduated with ...
Alice Fisher
Alice Fisher is currently an extern with the Center for AI and Digital Policy (CAIDP). She is majoring in Theology with a concentration in Religion, Politics, and the Common Good at Georgetown University. She is also pursuing minors in Tech Ethics and Sociology. Alice has experience working on digit...
Christabel Randolph
Christabel Randolph is Associate Director at the Center for AI and Digital Policy, overseeing the US law and policy group, and coordinating CAIDP statements to Congressional committees and federal agencies. She led CAIDP’s efforts with the Federal Trade Commission to establish guardrails for AI servi...
