Taking AI Commoditization Seriously
Trent Kannegieter / Mar 21, 2025

As fears about DeepSeek consumed tech desks and tanked stock markets in January, Big Tech leaders sought to assure investors that AI incumbents were safe. Microsoft CEO (and OpenAI strategic partner) Satya Nadella argued that the new AI breakthrough would increase value for his company. He summarized this view in a recent post on X:
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning into a commodity we just can’t get enough of.
The Jevons paradox (the principle that making a resource cheaper to use can increase its total consumption rather than reduce it) quickly became a go-to analyst response to concerns about comparatively cheap-to-train models like DeepSeek.
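A stylized way to see when the paradox actually holds (a back-of-the-envelope sketch assuming constant-elasticity demand; the symbols are illustrative, not estimates of real market parameters):

$$Q(p) = A\,p^{-\varepsilon}, \qquad p = \frac{c}{\eta}, \qquad R = \frac{Q}{\eta} = A\,c^{-\varepsilon}\,\eta^{\varepsilon - 1}$$

Here $\eta$ is compute efficiency (useful output per unit of compute), $c$ the cost of compute, $p$ the effective price of AI output, $Q(p)$ demand for that output, and $R$ total compute consumed. Total compute use rises as efficiency improves exactly when the elasticity $\varepsilon$ exceeds 1: the paradox requires demand elastic enough that cheaper AI spurs disproportionately more use, which is precisely Nadella's bet.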
But the interesting part of Nadella’s tweet isn’t the Jevons paradox. It’s the idea of AI frontier models becoming a commodity. Commoditization of frontier models would fundamentally transform the AI landscape for commercial and policy actors alike. While commoditization hasn’t arrived yet, it is a real possibility, and it could reset the entire AI governance debate.
Ultimately, commoditization would shift the actors shaping AI governance, their incentives, and the key success metrics for industry and regulators alike. After introducing the concept of model commoditization and why DeepSeek might preview a commoditized world, this article explores (1) commoditization's consequences for industry, (2) the new challenges it poses for regulators, and (3) the windows of opportunity it might open.
Commoditization: A Primer
Commoditization is the process of products or services becoming “standardized, marketable objects.” Any given unit of a commodity, from corn to crude oil, is generally interchangeable with and sells for the same price as others.
Commoditization of frontier models could emerge in a few ways. Perhaps, as Yann LeCun predicts, open-source models could equal or surpass closed-source performance. Or perhaps competing firms continue finding ways to match each other's developments. Such competition has more above-board variants, like top-tier engineers at different firms keeping pace with one another, and less above-board ones. Consider, for instance, OpenAI's allegations of inappropriate copying against DeepSeek.
In any of these worlds, the underlying dynamic is the same: no firm can create and sustain a model performance advantage that keeps its product durably ahead of competitors.
Why DeepSeek Might Preview Model Commoditization
Recent innovations by DeepSeek show the importance of considering AI futures beyond those dominated by today’s leading labs.
DeepSeek's V3 base model and recently announced R1 reasoning models captured market attention because they found inexpensive new ways to train models that rival far more costly products from firms like OpenAI. Markets read this as a threat to tech incumbents. If top models could be trained with comparatively little hardware, would the seemingly limitless demand for Nvidia H100s evaporate? The valuations of model companies like OpenAI and Anthropic rely on recouping expensive training costs by selling access to their models.
But if new competitors can pop up after training for far less, they can also sell their products for less; they have lower costs to recoup. Such a landscape would force each competitor to cut prices as far as possible or risk losing market share. When one commodity provider drops its prices, others must follow. Why would customers pay more to build on one model than on a functionally identical alternative?
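That dynamic is the textbook logic of price competition over undifferentiated goods (a stylized two-provider Bertrand sketch; the prices $p_1, p_2$ and marginal cost $c$ are illustrative symbols, not market figures):

$$p_1 > p_2 > c \;\Longrightarrow\; \text{provider 2 takes the market, so provider 1 undercuts to } p_1' = p_2 - \delta$$

Undercutting only stops at $p_1 = p_2 = c$. With training costs already sunk, the marginal cost of serving a commoditized model is largely inference compute, so prices converge toward that floor and leave little margin for anyone to recoup a nine-figure training run.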
Commoditization concerns around DeepSeek specifically are likely overblown. For instance, leading AI ecosystem analysts suggest that the total training cost wasn't the reported ~$6 million, and that DeepSeek's hardware alone ran to many hundreds of millions of dollars. (The $6 million figure accounts only for "the GPU cost of the pre-training run," a small share of total costs.) But even with these caveats, DeepSeek still made impressive gains with much less capital than the US frontier labs.
And DeepSeek's very real innovations show that model commoditization is possible: having the most cash for compute won't fully determine who builds the best models. They seriously challenge what has at times been a prevailing industry refrain: that the "just add compute" scale-and-slog method of AI training is the only path to model development. DeepSeek suggests that smaller firms might match compute-intensive competitors without similar resources. There are still algorithmic innovations to be had.
Evidence for commoditization is mounting quickly. This week, Microsoft announced internally developed models it claims rival OpenAI's own.
With the possibility of commoditization established, let’s explore what it might mean:
Commoditization’s Consequences
1. Consumption May Rise, but Incumbents May Fall
AI commoditization will probably increase the total use of AI: similar capabilities from a host of vendors increase competition and decrease prices.

This could be great news for consumers. Lower prices mean more end users and broader access, so more people share in the technology's benefits. A broader (and thus more lucrative) potential user base also incentivizes new developers to set up shop and build for more specific markets and use cases.
With model commoditization, value in the AI ecosystem would move "up and down the stack." Differentiation would have to occur either in the user-facing applications built on top of models ("up the stack") or in the foundational hardware, infrastructure, and systems that models rely on to run ("down the stack"). Let's consider each in turn:
Up the Stack: Applications that Solve Real Customer Pain Points
Less differentiation on raw model performance compels firms to compete on solving fundamental customer pain points. Without a clear advantage in raw model capabilities, companies will be forced to differentiate their products by building features around models that improve the user experience.
Such a pivot toward applications, rather than model performance for its own sake, would increase value for users, whose specific needs would become far more likely to be met.
Down the Stack: Hardware and Tooling
Model commoditization doesn't mean the hardware models rely on will be used less. Instead, lower prices for frontier model use will likely increase aggregate consumption, and with it the need for the hardware and infrastructure that models run on. That's why stocks like Nvidia, ASML, and TSMC rebounded quickly after the DeepSeek shock.
The case for Microsoft's increased value under commoditization isn't necessarily a bull case for OpenAI's models. It's a bull case for infrastructure like Azure cloud computing and for new, premium AI-driven products in the Microsoft Office suite.
Tooling used to build and adjust models stands to win, too. In a commoditized market, the AI ecosystem's picks and shovels (the underlying tools and technology required to train, tune, and deploy models) will command a greater share of the pie.
The Threat to (and Response of) Incumbents
But what's good for the industry and consumers might not be good for incumbents. Nadella's tweet belies Microsoft's most significant vulnerability to commoditization: AI consumption might grow dramatically, but unless a firm builds the best next-generation products, the use of that firm's products might not. Even OpenAI isn't safe, as DeepSeek's January rise to #1 in the US App Store demonstrated. While some users will always remain "sticky" to the service they first used, there are few reasons not to switch between commoditized foundation models. Thus, despite Nadella's apparent optimism, commoditization could threaten OpenAI's core product and Microsoft's enormous investment in it.
Of course, OpenAI won't just throw up its hands and concede. Instead, it will seek out value up and down the stack. In fact, it already has. One could read OpenAI's largest recent initiatives as hedges against commoditization, from Project Stargate's $500 billion plan to build domestic data centers to vertical-specific efforts to build products for governments and educators. These efforts in applications and infrastructure capture value beyond the model layer.
2. Regulation Gets a Whole Lot Harder
Lab Policy & Product Teams Would Be Disempowered
Model commoditization would disempower actors key to today's production of safe AI products. Specifically, it would reduce the influence that policy and product teams at Big Tech companies have over the AI future.
These teams at OpenAI, Anthropic, and beyond work hard to prevent the misuse of their models. From extensive adversarial red teaming to aligning model behavior with collective input, these teams take preventing immediate and long-term societal harms from AI very seriously.
However, these firms' interventions matter only if users run their models. Commoditization would threaten this assumption. If one can use models hosted in various jurisdictions and achieve similar performance, then any given lab's safeguards do far less to constrain behavior.
When Soft Law Fails, State-Led Action Becomes Far More Necessary
This disempowerment of AI lab policy and product teams would heighten the need for state-sponsored regulation. Innovative firms in many technical domains regulate themselves through soft law mechanisms or internal company policies. (One commonly cited example is privacy policies.) But successful self-regulation requires effective coordination among producers, and such coordination grows much harder with every additional producer that must be wrangled into adherence to a standard. Commoditization challenges soft law by creating an environment where getting everyone who can build sophisticated systems into the same room is near-impossible.
Of course, the conversation shouldn't stop there. There are robust debates around whether legislation, executive regulation, or common law mechanisms like tort law would be the best vehicle for imposing state guardrails. States must also find feasible solutions to the problem of commoditized models originating outside their borders without creating an overly laborious, high-friction compliance regime. Finally, the need for state action doesn't imply support for any particular proposal: some policies will still fail a cost-benefit analysis, whether by burdening innovation excessively or by failing to achieve their goals. But model commoditization would seriously weaken calls for soft law alone as an AI safety strategy.
3. What’s Next: Windows of Opportunity
Commoditization Creates New Diplomatic Openings
The emergence of new, decentralized AI threat vectors could offer the powers that be a common enemy. This might present a unique opportunity for US-China collaboration.
Modern US-China collaboration has required tangible mutual interest to succeed. The most famous example, the Nixon/Kissinger and Mao/Zhou normalization of US-China relations, occurred in large part to counter a perceived common threat: the USSR.
When few companies control cutting-edge frontier models, preventing third-party model misuse is comparatively simple: fewer frontier developers mean fewer sites to monitor for malicious actors. But commoditization multiplies the places where malicious third parties could obtain models stripped of safeguards. These threats are most acute for weaponization scenarios like attacks on critical infrastructure.
Such concerns remain speculative today. However, they are top-of-mind for key national governments:
- China's 2024 Shanghai Declaration on Global AI Governance alluded to the need to prevent "terrorists, extremist forces, and transnational organized criminal groups from using AI technologies for illegal activities."
- During his tenure as President Biden's National Security Advisor, Jake Sullivan publicly discussed the "catastrophic" risk that AI poses, due in part to the potential "democratization of extremely powerful and lethal weapons."
- The United Kingdom’s Government Office for Science has begun scenario-planning around potential third-party AI threats, including “terrorist groups trying to develop bioweapons” and “AI-based cyber-attacks on infrastructure and public services.”
Such examples suggest that addressing third-party threats could represent an interest convergence for Washington, Beijing, and beyond. Ultimately, international AI governance requires US-China collaboration. Commoditization may help bring Beijing and Washington to the table.
Perhaps the Only Way Out Is Out-Innovating
Amid commoditization, perhaps the right strategy is simply to build faster. Each new player decreases the feasibility of universal regulation (let alone a "pause" on development) and the sway of leading labs. Commoditization thus shrinks an already-limited regulatory toolbox.
With few other tools available, the case for building faster, in hopes of breaking commoditization's deadlock, gains appeal. This world favors a "Manhattan Project for AI," as a bipartisan congressional commission proposed last November.
Any state serious about such a race should prioritize recruiting and retaining human capital, for instance by making it easy for international students at its top universities to stay after graduation. Talent recruitment is a nation's most controllable lever in a chaotic AI race.
Conclusion
Despite DeepSeek consuming the AI world’s headlines, frontier model commoditization hasn’t arrived yet. But leaders should prepare for it now. Commoditization would move value from the model layer into user-facing applications and into infrastructure. It would also disempower the frontier lab policy teams leading many efforts to prevent model misuse and change the regulatory paradigm. There’s still much more to learn about commoditization’s impacts and no time to waste. If the last few years have taught us anything, it’s that the technology landscape can change in a moment.