What Regulators Should Do About The AI Industry's Hidden Financial Loop
Hera Hyeonseo Lee / Apr 13, 2026
OpenAI CEO Sam Altman (left) and Microsoft Vice Chair and President Brad Smith (right) arrive before testifying at a Senate hearing on artificial intelligence competition in Washington, DC, on May 8. (Tom Williams/CQ Roll Call via AP Images)
By last year, Microsoft had committed $13 billion to OpenAI — but much of that investment has never left the tech giant’s own balance sheet. A significant portion of the money reportedly has flowed in the form of credits for Microsoft’s Azure cloud infrastructure. OpenAI spent those credits training its artificial intelligence models, Microsoft booked the usage as cloud revenue and that revenue growth supported the stock price that justified the next round of investment.
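The circular flow can be sketched as a stylized toy model. All figures and the ten-times revenue multiple below are invented purely for illustration; they are not Microsoft's or OpenAI's actual numbers.

```python
# Stylized toy model of the cloud credit circuit.
# All quantities are hypothetical and chosen only to illustrate the mechanism.

def run_circuit(rounds: int, credit_investment: float, revenue_multiple: float):
    """Each round: the provider 'invests' cloud credits in the startup,
    the startup redeems them for compute, and the provider books the
    redemption as revenue, which the market capitalizes at a multiple."""
    reported_revenue = 0.0
    net_cash_out = 0.0  # actual cash leaving the provider
    for _ in range(rounds):
        reported_revenue += credit_investment  # credits return as "sales"
        # net_cash_out stays 0: the credits are redeemed on the
        # provider's own infrastructure, so no cash changes hands.
    implied_market_value = reported_revenue * revenue_multiple
    return reported_revenue, net_cash_out, implied_market_value

# Three rounds of $1B in credits at a 10x revenue multiple:
revenue, cash, value = run_circuit(rounds=3, credit_investment=1.0,
                                   revenue_multiple=10.0)
print(revenue, cash, value)  # -> 3.0 0.0 30.0
```

The point of the sketch: reported revenue and implied market value grow each round while net cash leaving the provider stays at zero, which is why usage statistics generated this way are real on the balance sheet without reflecting arm's-length demand.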
This is not a one-off arrangement. A January 2025 report by the Federal Trade Commission on AI partnerships documented similar patterns across the industry, including those between Google and Anthropic and between Amazon and Anthropic. Nvidia has played an analogous role. After profiting from the sale of graphics processing units (GPUs), it has reinvested in AI infrastructure startups like CoreWeave, which use the funds to buy more Nvidia chips, sending the money right back into Nvidia's reported revenue. Bloomberg has described these arrangements as “AI circular deals.”
I call this dynamic the cloud credit circuit. It matters for three reasons that current policy frameworks are not equipped to address.
The demand for compute is partly manufactured
The justification for massive AI infrastructure investment from the $500 billion Stargate project to CHIPS Act subsidies rests on the assumption that compute demand reflects genuine technological need. But within the cloud credit circuit, a significant portion of what registers as demand is financially engineered. When a startup's consumption of cloud resources is pre-funded by the cloud provider itself, the resulting usage statistics are real on the balance sheet but do not reflect arm's-length transactions.
How large is this effect? Microsoft does not disaggregate its Azure revenue by customer relationship type, but analysts estimate that OpenAI alone accounts for a substantial share of Azure's AI workload growth. Google and Amazon face similar opacity with their Anthropic partnerships. The numbers are not trivial.
This does not mean there is no genuine demand for AI computing. There is. But policymakers committing public funds to infrastructure that will lock in physical assets and energy commitments for decades should understand that the demand signals they rely on are partly artifacts of the financial structure. DeepSeek's demonstration that competitive AI capabilities can be achieved with dramatically less compute only reinforces the point: subsidizing a capital-intensive financial loop may undermine the very algorithmic efficiency that will determine the real technological winner.
The FTC is asking the right questions — but not all of them
The FTC's 2025 report identified genuine competitive concerns: rising switching costs, platform dominance and the potential for vertical integration to foreclose competition. Sens. Elizabeth Warren (D-Mass.) and Ron Wyden (D-Ore.) have argued that cloud-AI partnerships amount to functional mergers circumventing antitrust review.
These interventions share a common analytical horizon, however. The FTC report documents the financial relationships in detail, including equity stakes, revenue-sharing arrangements and commitments by AI developers to spend investment funds on their investors' cloud services. But its scope is explicitly limited to competitive structure: who controls which layer of the AI stack and whether dominance in one layer can be leveraged into adjacent ones.
What it does not address is how the cloud credit circuit generates the demand signal that justifies both private investment and public subsidy. The FTC identified the anatomy of the circuit; the question it was not designed to answer is what the circuit produces. The question is not only whether Big Tech controls AI — it is whether the financial architecture underwriting the industry is manufacturing the appearance of a market that may not exist at the scale being projected, and whether public resources are being committed on that basis.
Too entangled to fail
Each actor in the cloud credit circuit — from platform firms to AI startups to GPU manufacturers — pursues strategies that are individually rational: securing infrastructure access, inflating valuation and locking in customers. The aggregate result is a web of cross-equity stakes, exclusive computing contracts and revenue interdependencies so dense that no single node can be unwound without destabilizing the others.
If Microsoft falters, OpenAI's computing supply is severed. If OpenAI's growth stalls, Azure loses a core revenue driver, depressing a stock price that anchors global index funds and retirement portfolios. The Google-Anthropic and Amazon-Anthropic partnerships reproduce the same interdependence. As local governments lock in decade-long power purchase agreements for data centers and state agencies integrate these models into public services, the real-economy footprint of this entanglement only deepens.
This has direct consequences for policy. Export controls on AI chips, the centerpiece of the United States’ technology competition strategy, target supply. But if demand for those chips is partly generated by a self-referential financial loop rather than organic need, supply-side controls alone will prove insufficient. The financial architecture generating demand will route around supply constraints, which is arguably what we are already seeing with chip-smuggling through third countries.
What regulators should do
First, the Securities and Exchange Commission (SEC) should require cloud providers to disaggregate and publicly report the percentage of their cloud revenue originating from entities in which they hold equity, debt or convertible instruments. The SEC already has authority under Regulation S-K to mandate segment disclosures; extending this to related-party compute revenue should only require agency rulemaking, not legislation. If a market primarily consists of buying from oneself and using one’s own pre-funded credits, investors and policymakers deserve to see that figure before classifying the growth as organic.
Second, public investment in AI infrastructure, including CHIPS Act disbursements and proposed megaprojects like Stargate, should be conditioned on independent demand validation. The Department of Commerce, which administers CHIPS Act grants, should commission assessments verifying that projected compute demand is not substantially an artifact of the circuit's self-referential structure.
Federal law already requires this kind of due diligence. Under the National Environmental Policy Act, any major project that uses federal funds must assess its environmental consequences before construction begins. Requiring analogous demand validation for publicly subsidized AI infrastructure is no more radical.
Third, the financial oversight framework that emerged after the 2008 financial crisis should be extended to AI infrastructure. The Financial Stability Oversight Council (FSOC) has the mandate to identify systemic risks outside traditional banking. The interconnections among cloud providers, GPU manufacturers and foundation model companies present exactly the kind of cross-sectoral entanglement the FSOC was designed to monitor. Stress-testing these interdependencies should be a regulatory priority before the system becomes genuinely too entangled to fail.
The AI industry's boosters present infrastructure investment as an unambiguous good, a race the United States must win. But the financial structure underwriting that race deserves as much scrutiny as the technology itself. Understanding who profits from the circuit, and who bears the costs when it breaks, is not a technical question. It is a democratic one.