Considering Nvidia’s Partnerships Push in Bid for Dominance

Megan Kirkwood / Dec 16, 2025

Megan Kirkwood is a fellow at Tech Policy Press. This is the second in a series of three posts on Nvidia’s dominance in the AI industry.

NVIDIA CEO Jensen Huang delivers remarks as President Donald Trump looks on during an “Investing in America” event, Wednesday, April 30, 2025, in the Cross Hall of the White House. (Official White House photo by Joyce N. Boghosian)

At Nvidia’s 2025 annual GPU technology conference (GTC) in California, the company announced partnerships galore with firms including Google DeepMind, Disney Research, the Electric Power Research Institute, Oracle, Yum Brands, General Motors, and self-driving vehicle companies Torc and Gatik. The variety demonstrates the breadth of the company, and the degree to which its reach stretches across industries.

In addition to its aggressive partnership strategy across multiple sectors, Nvidia is also verticalizing across the tech stack, illustrating the depth of its services. From chips and compute to cloud, physical hardware, networking, the data layer, and model development and deployment, Nvidia ensures that it maintains a stake at every level. It then leverages that stake to shape these industries, whether through partnerships, investments, or research and development, controlling what technology is built and deepening industry and research reliance on its ever-expanding suite of services.

This article is the second in a three-part series delving into the rise of the company as an “AI factory”; the first article tracked the company’s rise from gaming chip maker to the de facto standard for AI chips. This article looks at how the company is vertically integrating while expanding its relationships across various industries, ultimately leading to partnerships with governments around the world in a bid to maintain its dominance. The final installment will consider how those partnerships may conflict with antitrust investigations into the company.

Vertical integration through the tech stack

Starting at the foundation of the AI stack, the infrastructure and hardware needed to power compute-intensive technologies like large language models and other AI applications is largely dominated by Nvidia. In 2023, Forbes reported on Nvidia’s Q3 earnings and found that its data center business “chalked-up $14.5 billion in revenue for the quarter” and that this segment “is also likely its highest margin business, where the company’s GPU accelerator technology has become the de facto standard for AI workload processing.” In the company’s latest financial report, data center revenue was “$39.1 billion, up 10% from Q4 and up 73% from a year ago.” The data center business is Nvidia’s clear cash cow, and its market dominance in GPUs is the most commonly cited reason why.

However, Nvidia supplies much more infrastructure than chips. In its 2025 Annual Review, the company noted that its revenue growth was led by demand for its GPU “Hopper architecture used for large language models, recommendation engines, and generative AI applications,” and that “Ethernet for AI was another key contributor, including strong uptake of our Spectrum-X end-to-end Ethernet platform.”

Ethernet is a wired networking standard that carries traffic between devices, standardized by the Institute of Electrical and Electronics Engineers (IEEE) in 1983, and is used to connect everything from PCs to the components inside data centers. However, to meet the growing network demands of high-end servers and supercomputers, the InfiniBand open standard was developed in the late 1990s as a networking solution with low latency and high reliability. Mellanox Technologies, which supplied Ethernet and InfiniBand network adapters, switches, and cables for data centers, was bought by Nvidia in 2019 for $6.9 billion. Nvidia described the significance of the deal this way: “[t]ogether, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker.” TechCrunch reported at the time that “[t]he deal underscores ongoing consolidation in the world of processors, and is a key move for NVIDIA to shore up its market share, specifically in high-performance computing and powering supercomputers.” The Financial Times reported that the acquisition was prompted by Nvidia president and CEO Jensen Huang’s wish to move “from a chip and system-level company to a data center-scale company” following a downturn in graphics card sales after the cryptocurrency peak and subsequent crash in 2018. Graphics cards had been crucial for cryptocurrency mining, and following the crash of Bitcoin, Nvidia needed to pivot.

By acquiring the main producer of InfiniBand, Nvidia asserted itself as the “de facto standard” in GPU networking, now offering Quantum-X800 InfiniBand networking, NVIDIA Spectrum-X Ethernet networking, and NVIDIA BlueField-3 DPUs (data processing units). Indeed, journalist Timothy Prickett Morgan writes that the acquisition means Nvidia is essentially able to offer “all of the hardware in the system,” particularly after the 2021 announcement of Grace, Nvidia’s first data center CPU. By expanding its hardware offerings, Nvidia has managed to pivot from mere chip maker to “AI factory” maker, with a range of partnerships manufacturing server racks, power delivery, and server cooling technologies.

The company’s rebrand as an AI factory does not stop at the hardware level. Building on CUDA’s success, Nvidia offers various AI model development and deployment platforms, including NVIDIA Omniverse, Drive, AI, HPC, RTX, and Enterprise. These platforms provide software libraries, tools, and frameworks to build industry-specific AI applications: the Enterprise platform, for instance, cites customer support tools, supply chain management, and content generation as use cases, while the Drive platform is advertised as an end-to-end solution for creating autonomous vehicles. Nvidia also offers its NIM and NeMo microservices, the former providing developers with “pre-optimized models and industry-standard APIs for building powerful AI agents, co-pilots, chatbots, and assistants,” and the latter allowing developers to create custom generative AI applications. According to the company’s 2024 annual report, among the first to access NIM were Adobe, Box, Cadence, Cloudera, Cohesity, CrowdStrike, Dropbox, Getty Images, NetApp, SAP, ServiceNow, Shutterstock, and Snowflake, illustrating its widespread adoption.

Nvidia is far from a chips-only company and is continually embedding its dominance through the entire vertical stack. This gives the company incredible leverage over the development of the industries that rely on the company.

Horizontal integration throughout industry

Nvidia maintains a significant number of partnerships as well as investments, including a full investment fund. These span various industries, the most significant being cloud computing, autonomous vehicles, and robotics. While Nvidia has been a smaller player in the venture capital space relative to the cloud giants, it is quickly expanding its investment footprint. Bloomberg’s Emily Forgash reported that as of October 2025, Nvidia had invested in 59 AI startups, which is “already more than the 55 investments that Nvidia made in all of 2024, and a significant leap from the 12 it made in 2022.”

Data centers make up the bulk of the company’s revenue, largely through its hardware offerings. Cloud giants like Microsoft Azure, Amazon Web Services, and Google Cloud secure access to Nvidia’s powerful chips and related services like GPU networking through partnerships. Relatively smaller cloud players and software companies like Oracle, Snowflake, Salesforce, and VMware have also entered into partnerships to access a wide range of Nvidia hardware and software. Beyond direct partnerships, there is a long tail of Nvidia-reliant customers, such as smaller start-ups, who access Nvidia hardware by buying access to GPUs through a cloud provider, usually Amazon, Oracle, Google, or Microsoft.

Alongside its increasing interest in generative AI, Nvidia maintains a vast array of partnerships in the autonomous vehicles market, including with BYD, Hyundai Motor Group, Uber, Toyota, Aurora, Continental, Jaguar Land Rover, Mercedes-Benz, NIO, and Volvo Cars. In its deal with Uber, for example, Uber shares some of its driving data with Nvidia to improve Nvidia’s autonomous driving AI models, which other car companies can then use. Nvidia is also active in robotics investment and development through its Omniverse simulation platform and Isaac GR00T, a “general-purpose foundation model for humanoid robots.” Indeed, Huang shocked attendees at the 2025 GTC conference by announcing a partnership with Google DeepMind and Disney Research to create robots of Disney characters to populate Disney theme parks.

This speaks to Huang’s vision of an AI-powered “industrial revolution” where the world is filled “with a billion humanoid robots, 10 million automated factories and 1.5 billion self-driving vehicles.” Importantly for Nvidia, such an automated world will be using its technology, ensuring that the company not only remains incredibly wealthy but strategically vital.

Cecilia Rikap, associate professor in economics and head of research at University College London’s Institute for Innovation and Public Purpose, points out that partnership structures and investment strategies are a way for the investor to steer the technology. For example, Nvidia’s startup accelerator offers startups free access to Nvidia resources, ensuring that companies become most comfortable with Nvidia software and hardware and drawing them into its ecosystem. She writes that investors “quickly integrate the start-up into their sphere of control” in other ways as well: preventing startups from becoming direct competitors, directing their research and development, and accessing their knowledge and talent pool. In a recent example, Nvidia bought $2 billion worth of shares in Synopsys, an electronic design automation company, with part of the agreement seeing Synopsys integrate “Nvidia’s tools into Synopsys’ chip-design applications.”

Rikap also writes that Nvidia and the main cloud giants, Amazon, Microsoft, and Google, “can double dip by getting the money back when the company spends on infrastructure, which means purchasing Nvidia GPUs or directly renting GPUs while using other services from Big Tech clouds.” Indeed, Nvidia is deeply entangled in what appears to many to be an AI financial bubble, pledging a historic $100 billion investment in OpenAI, which is in turn dependent on building AI data centers using Nvidia hardware. Similarly, Nvidia will invest $10 billion in Anthropic, which, according to The Information, “has primarily been using Amazon and Google chips,” so Nvidia’s investment will see the company “commit to using new Nvidia chips.” Such circular investments are potentially “obscuring the true nature of demand for the industry’s offerings” and “artificially propping up the trillion-dollar AI boom,” according to skeptics. However, Nvidia is moving ahead undeterred, signing more deals to ensure it “remains at the heart of the AI frenzy” and that startups are tied into its ecosystem.

Perhaps realizing that it can only grow so far in the private sector and that a large portion of its sales come from the biggest cloud providers, the company is aiming to onboard a new set of lucrative strategic partners in the public sector.

Selling sovereign AI or American dominance?

Huang is attempting to do two things at once: pitching to governments around the world that they need “sovereign” owned and controlled AI, and reiterating to the US and the world that US AI should continue to dominate. Last year, Huang went on a global tour to at least ten countries, getting country leaders on board with building more physical compute infrastructure and deploying applications throughout their national services and national private sectors, of course partnering with Nvidia to do so. Meanwhile, Huang has also reassured President Donald Trump that Nvidia supports “the American tech stack, which is really the world standard today, to continue to be the world standard like the US dollar. We want the world’s economy and industries to be built on top of American standards.” This contradictory narrative can also be found in campaigns by OpenAI and Google.

On one hand, if countries accept that the current AI hype cycle will produce the kind of economic growth often promised by those with a stake in this narrative’s success, it makes sense that they would want to invest in building the required compute capacity and resources. And given Nvidia’s near monopoly, there is little avenue other than choosing Nvidia to build that capacity. A completely sovereign supply chain is practically out of reach, with most advanced chips manufactured in Taiwan by TSMC and the cost of building new foundries enormous. Coupled with these realities, when Nvidia makes generous offers of free training programs for public sector workers and strategic partnerships to build compute capacity, it is tempting to choose this option.

However, the bargain may not be particularly fair. Indeed, as researchers Rui-Jie Yew, Kate Elizabeth Creasey, and Suresh Venkatasubramanian point out, the promise of sovereign AI:

…leaves those dependent on its infrastructure in a position to accept an attractive myth doused in technical language and the promise of national technological leadership, which buries a reality in which they may not be sovereign over their AI infrastructure — over how and the degree to which their territory and resources are used in the production of AI for their interests or for NVIDIA's.

The AI Now Institute’s Kate Brennan, Amba Kak, and Sarah West make the case that Nvidia is just expanding its market, and as:

…the provider of computing chips for the data center infrastructures central to sovereignty initiatives, the company stands to benefit from nation-states’ growing interest in building out their own homegrown industries and attracting AI investment. For chip manufacturers, the push toward sovereign AI can be seen as a way of diversifying their customer base away from the hyperscalers and hedging their business against the potential slump in the demand from these companies.

Beyond its growth imperative as a public company, Nvidia is also navigating an increasingly demanding geopolitical environment. The current Trump administration is using the company as an extra arm to embed American dominance through tech, a demand that likely aligns with Nvidia’s growth imperative and its current push to sell to governments across the globe. Whether it can do this under the guise of “sovereign AI,” though, is questionable.

As Nvidia embeds itself both vertically and horizontally across the public and private sectors, deepening dependencies on its technology, the final part of this series will explore what this could mean for potential regulatory action against the company.
