Perspective
The Doublespeak in OpenAI’s ‘Industrial Policy for the Intelligence Age’

Paul Nemitz / Apr 27, 2026

Paul M. Nemitz is the author, with Matthias Pfeffer and Jürgen Pfeffer, of The Open Future and its Enemies—How We Can Protect Free Society from AI Dictatorship, published April 2026 by Dietz Verlag.

Sam Altman, CEO of OpenAI, takes his seat before a meeting of the White House Task Force on Artificial Intelligence Education in the East Room of the White House, Thursday, Sept. 4, 2025, in Washington. (AP Photo/Alex Brandon)

Earlier this month, OpenAI published “Industrial Policy for the Intelligence Age,” a 13-page document that promises to “keep people first” as artificial intelligence purportedly advances toward superintelligence. The paper advocates for broad-based prosperity, risk mitigation, and democratic access to AI, while proposing ambitious policy ideas ranging from public wealth funds to portable benefits and accelerated grid expansion.

But a careful reading of this document in light of OpenAI's—and its close partner Microsoft's—actual lobbying record reveals a deep chasm between the company's public rhetoric and its political maneuvering around democratic legislation. What OpenAI presents as a visionary, democratically minded policy agenda is better understood as a sophisticated exercise in corporate reputation management: it supports the claim that the company takes democracy and public-interest corporate citizenship seriously, while the company actually preempts and shapes regulation to fight meaningful oversight where it counts.

The chasm between rhetoric and reality: AI safety regulation

The paper’s second section, “Building a Resilient Society,” contains some of the document’s most striking language. It calls for “building new institutions, technical safeguards, and governance frameworks” to ensure AI systems remain “safe, controllable, and aligned.” It advocates for “auditing regimes,” “incident reporting,” and “mission-aligned corporate governance.” These proposals, taken at face value, suggest a company eager to be held accountable.

Yet the record tells a very different story. As Eryk Salvaggio previously pointed out on Tech Policy Press, when California’s SB 1047—a bill that would have required developers of advanced AI models to submit safety plans and face liability for catastrophic harms—advanced through the legislature in 2024, OpenAI vigorously opposed it. The company argued the bill was a threat to AI’s growth and could drive entrepreneurs and engineers out of the state. Together with Meta and several Democratic congresspeople, OpenAI helped secure Governor Gavin Newsom’s veto, which the bill’s author, Senator Scott Wiener, called a setback for everyone who believes in oversight of massive corporations.

The pattern repeats in Europe. A joint investigation by Corporate Europe Observatory and LobbyControl found that OpenAI, alongside Google, Microsoft, Meta, and Amazon, shaped the EU’s General-Purpose AI Code of Practice. The result was a much weaker code that softened obligations around copyright and discrimination risks. OpenAI’s CEO Sam Altman went further, publicly threatening that OpenAI might stop operating in the EU if the company could not meet the AI Act’s requirements. In Washington, the story is the same: OpenAI, Meta, Alphabet, and Microsoft poured millions into federal lobbying in just the first nine months of 2025, part of what Senator Josh Hawley (R-Missouri) described as a “flood the zone with money” strategy designed to replicate Big Tech’s successful social media lobbying playbook and head off legislation.

The contradiction is stark. OpenAI’s paper speaks of “democratiz[ing] access and agency” and “broad participation.” Its actions speak of regulatory capture and the preservation of competitive advantage. When the company advocates for “auditing regimes” and “model-containment playbooks,” one must ask: are these genuine proposals, or are they designed to create a regulatory moat that only well-resourced incumbents like OpenAI can navigate? The paper itself hints at this dynamic when it notes that certain models may require “stronger controls” while preserving “a vibrant ecosystem of less powerful systems”—a formulation that conveniently aligns with OpenAI’s interest in controlling the frontier while allowing smaller players to operate below the threshold of serious regulatory scrutiny.

Energy transparency and data center secrecy

The paper’s section on “Accelerate grid expansion” calls for public-private partnerships to finance energy infrastructure while ensuring that data centers “pay their own way” so households “aren’t subsidizing them.” This is a reasonable and even welcome proposal. But again, it must be read against the company’s and Microsoft’s actual behavior.

In April 2026—the very month OpenAI published this paper—an investigation by Investigate Europe, The Guardian, and other media partners including Tech Policy Press revealed that Microsoft and the trade group DigitalEurope had secured a secrecy provision in EU law to block public access to critical information on the environmental impact of data centers. The final legislative text, which differs by just a couple of words from industry demands, classified individual data center metrics as confidential commercial information, shielding them from public scrutiny. The European Commission even instructed national authorities that they were obliged to keep confidential all information and key performance indicators for individual data centers. Ten leading legal scholars warned that the provision could violate the EU’s obligations on environmental transparency under the Aarhus Convention.

The impact is tangible. In the Netherlands, Microsoft and Google submitted blank forms or no data at all regarding their data centers’ energy consumption, citing business confidentiality. Meanwhile, Microsoft’s electricity consumption nearly tripled between 2020 and 2024, climbing from 10.8 million MWh to 29.8 million MWh.

OpenAI’s paper calls for “efficiency dividends” and “accelerated grid expansion,” but it is conspicuously silent on transparency. The document says nothing about mandatory disclosure of data center energy and water usage, nothing about independent verification of sustainability claims, and nothing about the accountability mechanisms that would allow communities to assess the true cost of AI infrastructure. This omission is probably not accidental. When the companies building AI refuse to disclose basic environmental metrics, promises about “paying their own way” ring hollow. How can the public know whether data centers are actually covering their energy costs if the underlying data remains secret? The paper’s energy proposals, in this light, appear less like a genuine policy commitment and more like a rhetorical shield against growing public concern over AI’s environmental footprint.

Industrial policy as strategic positioning of exceptionalism

The paper’s broader framing—“The Case for a New Industrial Policy”—is perhaps its most revealing section. It invokes the Progressive Era and the New Deal, suggesting that the transition to superintelligence requires “an even more ambitious form of industrial policy” that reflects “the ability of democratic societies to act collectively, at scale, to shape their economic future.” This historical analogy is both flattering and misleading. The New Deal involved a fundamental rebalancing of power between capital and labor, including the recognition of collective bargaining rights, the establishment of minimum wages and maximum hours, and the creation of social safety nets that were genuinely universal and publicly administered.

OpenAI’s proposals, by contrast, are largely market-friendly and non-binding. A “Public Wealth Fund” seeded by AI companies is an intriguing idea, but it is framed as a voluntary collaboration—“policymakers and AI companies should work together to determine how to best seed the Fund”—rather than a mandatory contribution. “Portable benefits” and “adaptive safety nets” are sensible, but they are also policies that many labor advocates have championed for decades without meaningful corporate support. The paper’s suggestion of “time-bound 32-hour/four-day workweek pilots” is similarly qualified: it merely “incentivize[s]” employers and unions to run such pilots. It is a far cry from, for example, the 35-hour week in the German car industry, which rests on binding agreements between the social partners, that is, unions and employers.

Notably absent from the paper is any mention of antitrust enforcement, data privacy protections, or the structural power asymmetries that AI companies themselves are creating. OpenAI’s own corporate structure has been the subject of intense criticism. The company’s plan to convert its non-profit arm into a Public Benefit Corporation while retaining nonprofit control has been challenged by co-founder Elon Musk and former employees, who argue it betrays the founding mission of developing AI for the benefit of humanity. Critics see the restructuring as a way to raise massive capital while maintaining a veneer of public-interest accountability—exactly the kind of regulatory capture and centralized control the paper claims to oppose.

A corporate vision paper contradicted by corporate lobby practices

OpenAI’s “Industrial Policy for the Intelligence Age” is a polished and rhetorically sophisticated document. It identifies genuine challenges and proposes ideas that, in a different context, could contribute to a more equitable AI future. But the paper cannot be evaluated in a vacuum. It is produced by a company that has systematically opposed binding AI safety legislation, helped water down the EU’s AI Code of Practice, and whose close partner Microsoft has lobbied to conceal data centers’ environmental impact. It is also a company that is aggressively expanding its lobbying presence and spending to shape federal, state, and EU policy.

The paper’s central weakness is that it treats industrial policy as something that governments and companies can design together in a spirit of collaborative problem-solving, while ignoring the fundamental conflict of interest at the heart of this arrangement: the companies that stand to profit most from the AI transition are the same companies being asked to help design the rules that govern it. History suggests that democratic societies cannot rely on the voluntary benevolence of corporate actors to ensure that technological change serves the public interest. The Progressive Era and New Deal reforms that OpenAI invokes were not gifts from industrialists; they were won through sustained political struggle, labor organizing, and public pressure against fierce corporate resistance. And they resulted in binding legal obligations on corporations.

If OpenAI genuinely believes in the vision it has laid out—if it truly wants to “keep people first”—it should begin by reversing its opposition to binding AI safety legislation, supporting mandatory transparency for data center energy and water usage, and endorsing the kinds of structural reforms that would give workers and communities real power in the AI economy, not just rhetorical invitations to “start a conversation.” Until then, the “Industrial Policy for the Intelligence Age” should be read for what it is: a carefully crafted piece of corporate diplomacy designed to shape the regulatory environment in ways that protect OpenAI’s interests while offering the public a comforting but largely unenforceable set of promises.
