Perspective

OpenAI’s New ‘Industrial Policy for the Intelligence Age’ is a Policymercial

Eryk Salvaggio / Apr 8, 2026

WASHINGTON, DC: OpenAI CEO Sam Altman speaks during the BlackRock Infrastructure Summit on March 11, 2026 in Washington, DC. (Photo by Anna Moneymaker/Getty Images)

OpenAI’s latest policy document, “Industrial Policy for the Intelligence Age: Ideas to Keep People First,” debuts as the company is caught between rising competition, the pressures of an impending IPO, and the challenge of living up to its own hype. In this new document, the company, as always, positions itself as a capable, responsible steward guiding humanity toward the long-promised arrival of artificial superintelligence.

The document fixates on possibility, promising a future that only the most enthusiastic boosters of the technology can see in what is on sale today. While the document is framed as the start of a conversation, OpenAI commits to funding research projects that advance its proposals with up to $100,000 and up to $1 million in API credits.

A close reading of the document reveals the contradiction between OpenAI’s purported commitment to the public interest and its rapacious corporate appetite for capital and global domination. (Perhaps, in some way, the paper extends a pattern described in a New Yorker profile of OpenAI co-founder and CEO Sam Altman, in which public pronouncements are paired with private contradictions.) Here are five considerations.

The ‘industrial policy’ proposes concepts OpenAI helped kill in California

This isn’t OpenAI’s first policy document in the Trump era, and it should be considered not just in terms of what it suggests but also in the context of what the company has spent its resources to oppose. For instance, OpenAI lobbied to weaken aspects of the EU AI Act that would have created greater oversight of AI companies building high-risk systems. Closer to home, when California’s SB1047 proposed risk-management strategies similar to those Altman had called for in congressional testimony, OpenAI opposed it. When it passed the state legislature, OpenAI lobbied Governor Gavin Newsom to veto it, which he did.

OpenAI’s ‘Industrial Policy’ reveals itself when compared to the bill it helped kill. SB1047 called for third-party audits of so-called frontier models, incident reporting requirements, safety protocols before deployment, whistleblower protections, and a public compute cluster for researchers and startups without access to frontier infrastructure. The OpenAI ‘Industrial Policy’ proposes “auditing regimes,” “incident reporting,” “mechanisms for public input,” and broader access to AI. SB1047 even included CalCompute, a public cloud computing cluster for startups, researchers, and community groups that anticipates the “Right to AI” proposals in OpenAI’s industrial agenda.

A sympathetic reading would be that OpenAI opposed SB1047 because it rejected the state as the appropriate venue to regulate such matters, or that it opposed the details of the implementation more than its general goals. Nevertheless, its current proposal sidesteps critique of its present systems by shifting the conversation to imaginary problems that will only arrive alongside civilizationally transformational abundance, which conveniently requires building the very infrastructure it is selling. Absent technical evidence that such abundance is coming, OpenAI has ultimately co-opted the idealism of public infrastructure while actively undermining concrete steps toward it. So what are these policy papers even for?

Empty abundance

Perhaps the best way to read the ‘Industrial Policy’ is as a policymercial: marketing copy dressed as policy proposals. OpenAI makes bold promises of a future economy that requires radical action, but the document has little to do with traditional US policymaking. For example, there is no stated role for state or federal government in the “AI-first entrepreneur” section, which asks policymakers to “help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship.” That’s a product pitch, not a policy. We still don’t have solid data on how LLM productivity gains manifest or are distributed, or who absorbs the time-debt of rapidly produced text and code that requires increased scrutiny and oversight. To that end, the text is surprisingly out of step with the recent push toward “agentic AI” and does not grapple with the problems of a world saturated in slopware and agentic code.

The superabundance frame dissolves questions of accountability in a present-day era of increasing scarcity. OpenAI can focus on hyping what it actually sells: the distant future. OpenAI calls for workers to be included in management decisions over AI—and “to respect labor rights.” (Meanwhile, the company is facing active lawsuits from journalists and authors, who are themselves workers.) The paper assumes that strong worker rights are a given, even as the technology undermines stability for workers and threatens their replacement across a wide variety of sectors.

Public costs, private gains

The second half of the ‘Industrial Policy’ turns to resilience—OpenAI’s phrase is “new vulnerabilities alongside new abundance.” Its proposals integrate the company more deeply into economic and physical infrastructure while asking governments to pay for the expansion.

In one section of the document, “The Right to AI,” policymakers are asked to view access to LLMs the way they once viewed access to electricity and the internet, even subsidizing access. The document invokes “schools, libraries, and underserved communities” that are left behind as they are denied access to tokens. It then pivots directly to a call for governments to “accelerate the expansion of energy infrastructure required to power AI,” before suggesting investment credits for energy, subsidies, and less regulation of “advanced conductors.” OpenAI is linking underserved communities to the promised benefits of AI infrastructure expansion, despite evidence that those communities bear the most direct costs of data centers and energy development.

While the document calls for increased capital gains taxes and other policies to fund the social safety net in the wake of a disrupted labor pool, it also calls for a “Public Wealth Fund” that would, through vague mechanisms, provide “every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.” The fund would work through government-directed investments “that capture growth in both AI companies and the broader set of firms adopting and deploying AI.” A program like this would, in effect, enroll every citizen in a generative AI mutual fund, linking social programs to the industry’s success.

Nationalizing the alignment team

The section marked “building a resilient society” hands what was once in-house safety research to governments. Consider the fate of OpenAI’s superalignment team, co-led by OpenAI co-founder Ilya Sutskever and Jan Leike. The team was promised 20% of OpenAI’s compute to address safety issues. It received just 1-2% on aging hardware before Altman oversaw its dissolution. Leike resigned publicly, citing Altman’s abandonment of the alignment mission; Sutskever left amid other staff shake-ups in 2024. The “Mission Alignment” team that replaced it lasted just 16 months. Now OpenAI apparently wants the public sector to fund this work on its behalf: it calls for strengthening existing institutions like the US Center for AI Standards and Innovation (CAISI), developing auditing standards in coordination with national security agencies, and building a global network of AI Institutes with shared protocols.

Additionally, several of the recommendations in the same “building a resilient society” section offload responsibility for the company’s products onto the public. The document assigns AI companies responsibility for upstream risk—testing, red-teaming, pre-deployment evaluation—then hands everything over to the public after deployment. For example, one point calls for governments to “[r]esearch and develop tools that protect models, detect risks, and prevent misuse.”

OpenAI also holds up its much-criticized pivot away from its nonprofit structure as the template for its competitors: “frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations with mission-aligned governance.” This is the same structure OpenAI adopted when it converted from nonprofit to for-profit under legal challenge. Another recommendation is “[c]reate structured ways for public input so that alignment isn’t defined only by engineers or executives behind closed doors.” OpenAI could implement this tomorrow. Instead, it asks governments to compel the industry to comply.

The five who decide

It could be said that OpenAI is a shrewd operator and that it’s natural for a company to look out for its own best interests. If, along the way, it advocates for something resembling public interest technology or socially responsible policy, who does it hurt? But the pattern shows that the company and its leaders have little commitment to meaningful investment in these approaches. OpenAI’s business model is to consolidate the policy imagination, tie it to a charismatic leader and technology, and then undermine and co-opt it. This leaves little room for real negotiation or progress, but gives Sam Altman maximum leverage to reposition as the moment demands.

In the early days of OpenAI, Altman proposed that the technology be used "for the good of the world," then added: "in cases where it's not obvious how that should be applied the five of us would decide." That’s likely a reference to the early OpenAI team, which then included Altman’s OpenAI co-founders Elon Musk, Greg Brockman, and Ilya Sutskever, and early research director Dario Amodei—most of whom have since left the company. The fallout of working with Altman’s OpenAI is clear in the career trajectories of everyone involved. Policymakers shouldn’t give this crew that kind of leverage.

Authors

Eryk Salvaggio
Eryk Salvaggio is a Gates Scholar researching AI and the humanities at the University of Cambridge and an Affiliated Researcher in the Machine Visual Culture Research Group at the Max Planck Institute, Rome. He was a 2025 Tech Policy Press fellow, and he writes regularly at mail.cyberneticforests.co...
