The Global AI Conundrum for Antitrust Agencies

Cristina Caffarra / Feb 7, 2024

Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0

With a new AI law in Europe coming into force several months from now at the earliest, and a drumbeat towards regulation in other jurisdictions, antitrust agencies are saying they will ‘remain vigilant’ and ‘use all existing enforcement tools’ to police adverse competition developments. The anxiety is motivated by clear signs that, for all its novelty, the AI field is shaping up to be dominated, at least in the medium term, by the “usual suspects” – the same digital giants that are the focus of digital regulation in Europe and antitrust actions in multiple jurisdictions: Microsoft, Google, Amazon and Meta.

Their respective “ecosystems” – each company’s constellation of computing power, data, GPUs and financial resources – provide them with the required critical inputs at scale, while “strategic partnerships” with successful AI startups give them privileged first-mover insight into forthcoming innovations, insurance in case their own efforts fail, and a way to keep tabs on and neutralize a future rival. On the other hand, these are nascent technologies with a few active players, and making a “monopolization” case (or an “abuse of dominance” in European terminology) seems ambitious. Of course, we do not want to undermine the development and uptake of a useful technology. So what can and will antitrust authorities do?

Defining the Narrative

True to its ambition as the world’s Regulator in Chief, Europe just managed to squeeze through the world’s first AI Act last December – a marathon arm-wrestle between Parliament, Commission and Council over a compromise document which just overcame opposition from France, Germany and Italy, is still being tinkered with, and will take a couple of years to kick into action. The Act takes a “risk approach”, i.e. it conditions intervention on the expected “risk” of the application – not a market power approach. Meanwhile, antitrust agencies that are carrying collective guilt for not doing enough to contain the growth of digital monopolies over the past 20 years are trying to appear more robust this time around: “we are vigilant, we will enforce the law”.

All of this is taking place against the backdrop of a vigorous debate over the potential risks and benefits of AI. The last few tumultuous months have seen at least six main intersecting narratives playing out, which have made the scene chaotic and the public discussion of AI a cacophony:

Narrative 1 (advanced by industry insiders and some of AI’s most prominent inventors):

Superintelligence is imminent. It’s all new, extremely powerful and very dangerous. There is existential risk! The technology could easily go rogue! Regulate us!

Narrative 2 (ethicists and academics):

What do you mean “it is arriving”? It’s not a UFO landing. You are inventing it and selling it. ChatGPT is just a giant ad; AI is just a marketing term for compute-intensive systems only a few companies can produce, and the framework for LLMs has existed for years – it’s not a sci-fi threat, it’s not intelligent. The real risks are much more mundane – though very serious: not making work more productive but rather increasing inequality, limiting wage growth, exacerbating discrimination, supercharging disinformation, amplifying toxic content, further injuring privacy, powering up the massive crisis of human rights online. These are the real and present dangers.

Narrative 3 (tech startups, VCs/funders and tech lobbyists):

Leave it alone, don’t intervene too early! It’s a nascent technology, don’t tamper with it! There will be untold damage to innovation if governments interfere too soon!

Narrative 4 (Big Tech executives):

Developing these models requires inputs (compute, chips, data, money) on a massive scale, we are the hyperscalers and we’ve got what it takes. We are the only ones who can do it. You need us. If you stand in the way, you are standing in the way of progress. And handing primacy to China! We are here to help Western AI pioneers scale up, they could never do it alone! Tie-ups are necessary for growth and completely benign! We don’t want control, we’re here to help, we’ll just commercialise the products a little bit.

Narrative 5 (AI company scientists and founders):

We’re non-profit (wink wink), we’re pure, our mission needs to remain pro-humanity – but the resources we have are simply not enough, we need a lot of money to scale up, thank you!

Narrative 6 (content owners, especially publishers and especially the New York Times):

Models are trained on vast amounts of information which we generate and produce at significant cost. Journalism has already been decimated by the ability of the public to consume content for free on various platforms; this is a further blow, in which our carefully curated, fact-checked, verified content is being appropriated to train models with no compensation. This is not “fair use”, it is an infringement of copyright. Journalism is ailing, content creators are weak, we need compensation!

Against this panoply of views (and apologies to those who may feel a little caricatured), antitrust regulators feel pressed not to just “wait and see” – because AI laws, when they finally come into force, will have a broader, different purpose than keeping market power in check. Yet the last two decades, in which a few digital giants have come to dominate the landscape, are a haunting reminder that monopolization did not just happen through competitive organic growth, but also through regulators failing to see how the eventual winners were shutting down challengers, violating data protection rules, and making multiple acquisitions – all waived through because they often involved nascent competitors or “acquihires.”

Mistakes, I’ve Made A Few

Antitrust agencies’ antennae are up because they know they missed all the signs in the past, and there are some early red flags: (a) the structure of the industry is essentially a “vertical stack” where the output of one “layer” is the input into another (for instance, chatbots that are or will be deployed on a massive scale in multiple businesses require a foundation LLM); and (b) the inputs required at the “upstream” level to create and train foundation models – including computing power (cloud), chips, and data – are extremely costly and scarce.

With this configuration – massive investments, huge scale economies, concentrated essential resources – agencies are naturally drawn to analogies with the kinds of issues that have typically emerged in vertical structures with an infrastructure layer on which services are built. The simplified layout of the AI “industry” appears to a regulator to include, at one level, a highly concentrated piece of “infrastructure” (the creation and training of general-purpose large foundation models), and then a very fragmented “productization” layer where these models support a myriad of specific/bespoke use cases and applications. Foundation model owners may furnish the required input to third parties that create their own “plug-ins,” while at the same time developing their own products or embedding themselves into partner services that compete with those third parties. In this stylized view, the structure of the market is a familiar “vertical stack”, or a “hybrid platform,” and the agencies will adopt this as the obvious lens through which to see potential problems.

The “downstream” (“productization”) level is where multiple consumer protection concerns will arise – e.g. issues around product safety, or discrimination – and one expects agencies with consumer protection mandates to be very active on this front. At this level the issues do not have to do with market power, which promises to be fragmented in any event. The more interesting and complex questions arise around market power being stacked upstream, and by many of the same actors which currently “own” the digital landscape essentially as a result of ecosystem advantages: extraordinary financial resources and ownership of key inputs at scale.

Intense competition and a myriad of startups at the productization level will not dissipate market power “upstream”. This is a fundamental point. So there is considerable anxiety about the extent to which the “usual suspects” (gatekeepers now dominating the digital scene) are going to be “grandfathering” their power into this new technology: by controlling the key inputs and/or making “strategic partnerships” (instead of outright acquisitions) with AI companies needing vast amounts of capital to scale. Indeed, the fact that investment into AI by the Big Four (Amazon, Microsoft, Google, and Meta) materially outstripped all other VC investments in AI in 2023 is consistent with entrenchment of these same actors.

Why is this a problem? Potentially for multiple reasons. First, and traditionally, in a “hybrid” structure in which a “platform owner” with market power (here, the owner of the foundation model on which services are built) makes an input available to downstream product competitors, terms may be extractive or exclusionary. This is like any utility in which retail suppliers need to procure access to an infrastructure to offer their services to final consumers. If the supply is highly concentrated at the infrastructure level, then competition to supply that input will be softer; and if the infrastructure provider is also competing with third parties to offer its own applications, there may be incentives to foreclose them – i.e. offer worse terms or generally distort competition.

More fundamentally, inflection points, when an important new technology emerges, are in principle opportunities for new actors to establish themselves. Yet what we have seen is that large digital companies have “hedged their bets” by both investing massively in developing their own models and entering into partnerships with the most successful early actors. This phenomenon is most significantly illustrated by Microsoft’s relationships with OpenAI (and others) starting in 2019; Google’s, early on with DeepMind and now also with Anthropic, Hugging Face, Runway and Inflection; and Amazon’s interests in Hugging Face and Anthropic. Meta has been playing catch-up with large investment in its own open source model, Llama. Apple is also in catch-up mode, while Elon Musk is funding his own Grok, and significant stakes have also been taken by Nvidia (dominant GPU supplier to the industry) and Salesforce.

If the largest successful start-ups are in major partnerships with the tech giants, ostensibly to benefit from their various contributions (in cash and kind), then these partnerships are also the vehicle through which to manage the technology transition and pre-empt disruption. “Partners” are in effect able to obtain early visibility into the direction of research and early access to future advances in the technology, and potentially to be involved in setting that direction, thus benefiting first from each breakthrough and innovation, even if not exclusively. These “partners” are not independent sovereign funds. They are the tech giants of today, and we are watching them project their power forward into a new, highly dominant technology – thus ensuring their legacy as they swing from monopoly to monopoly. We are in effect watching these same gatekeepers, whose recent past (and present) is the focus of multiple antitrust actions and regulatory interventions, investing to preserve their position and their hold over the direction of innovation, by being able to exploit the “ecosystem” – their constellation of assets and capabilities.

Do they have a good counter-story? What is the narrative coming the other way? Surprisingly, the companies themselves do not appear to have made real progress in coming up with a credible affirmative pitch for regulators. I have heard nothing much better than “AI needs assets and cash, we’ve got both, live with it.” But this is just asserting the “inevitability” of current power “persisting” into the future, for deterministic reasons to do with the technology. Is there substance to it? I have not seen any attempt to articulate it credibly. I assume consulting economists will no doubt be hired to argue that there are “indivisibilities” in the inputs (i.e. the need for “lumpy” amounts of cloud, data and GPUs which can only be procured in large batches in order to meet the requirements of big firms, and which can only come from a single source and cannot be multihomed); and that there are aspects of “the public good” in the technology which involve incomplete appropriation of the surplus generated by innovations, so that one needs either public investment or large enough ecosystems to make them viable. This is classic “consulting theorizing,” though I have not yet seen anything along these lines. Regulators will need a better story with a grip on reality, not just a theoretical exculpatory narrative.

Vigilance Does Not Necessarily Mean Proactive

With all of this, what can existing antitrust rules achieve? Agencies are promising to be vigilant, but what can they realistically do, and where would it get them? I am somewhat skeptical of the suggestion that the new antitrust-inspired European digital markets regulation (known as the Digital Markets Act, or DMA) could be a realistic answer to issues around AI. European regulators are struggling enough as it is to make material progress with existing gatekeepers and to induce some changes to their technology and business models today; we should not pretend the framework can be easily extended to AI with a few tweaks. Certain rules of the DMA are fungible and useful, e.g. on treatment of data, but (for instance) cloud computing has not even been designated as a core platform service in the current round, so cloud providers are not yet subject to specific obligations. The DMA is a fragile tool whose future success is uncertain, and it cannot be confidently predicted that it will be adequate to deal with competition concerns around AI. The UK equivalent, the DMCC (Digital Markets, Competition and Consumers Bill), is not even in force yet.

This leaves standard antitrust rules, and merger control, as the tools on the table. How could an antitrust concern be formulated? The threshold issue, certainly in Europe, is that classic enforcement needs to show significant market power as a precondition, and then conduct that illegally impedes competition. The hurdle will be hard to clear for a while – and antitrust tools have singularly failed to make a dent in the conduct and behaviour of gatekeepers in Europe even in egregious cases of monopolisation and exclusion (see the Google cases) after over a decade of enforcement effort. This has not been for want of trying, but cases have taken forever and some have been unclear in their theories of harm and policy objectives. A few have eventually closed with a painfully negotiated settlement and marginal commitments, others with an infringement decision but no real undertaking capable of restoring competition – yet more are still in limbo.

Another problem in European enforcement is the missing tool of “exploitative abuse.” While “exploitation” is on the books (Article 102), it does not get wheeled out as a theory of harm because there is no real case law – even though many concerns in this space are about exploitation and rent extraction (including unequal treatment, self-preferencing, and discrimination) rather than exclusion. More promising is Germany, where the Bundeskartellamt was just provided with a new market investigation tool through the 11th amendment to the Act against Restraints of Competition. Germany has a more flexible hybrid antitrust/regulatory regime which is not premised on dominance in a specific market but on sufficient market power – and it has shown it is not afraid of using it. In principle, the US appears better placed given the revitalised Section 5 of the FTC Act on unfair methods of competition, and the impetus against discrimination. The FTC has indeed issued several warnings that it is going to be paying close attention, and there is much that can be done under consumer protection.

A counter to the ability of existing large players to uniquely project their power into the future could come from ad hoc regulatory interventions that designate foundation models (or certain of their inputs) as “common carriers” subject to non-discrimination rules, and/or open up access to certain inputs. A large-scale investigation of highly concentrated cloud markets is already underway in multiple jurisdictions, and may eventually introduce remedies to address customer stickiness and facilitate multihoming. One might go beyond this and conceive of “common carrier” rules coming to bear, though that will be an uphill battle. One would also need to consider where the bottlenecks are in other inputs (data, GPUs) and whether intervention might be worthwhile in some form to improve access.

But will there be time? Given the time it takes to craft any rule, and even more to implement it, it seems entirely unrealistic to expect that one could disperse the power inherent in owning and coordinating the right assets and capabilities with regulatory instruments. Much of any such power is going to reflect precisely that ability to coordinate assets which is the essence of an ecosystem. Still, anti-discrimination or common carrier rules might help contain tech firms leveraging their power into downstream applications. Foundation models which are inputs into applications may need to remain neutral and not be at risk of favoring their own downstream operations.

More Scrutiny for Mergers and Dodgy Agreements

A prominent role must be played by the control of mergers and agreements. As mentioned above, much of the capital injected into the largest AI startups has taken the form of billions of dollars of investment from Microsoft, Google, Amazon and a few others. In parallel with their own R&D, these investments will help shore up the giants’ future position with alternatives should their own organic efforts not deliver. These “agreements” with AI firms formally fall short of acquisitions, and have been deliberately lawyered to remain minority stakes without formal control. They have thus so far required no formal notification – and as such have been waived through without scrutiny. The investment tranches of Microsoft into OpenAI ($1bn in 2019, $1bn in 2021, $10bn in 2023) have not formally given rise to a controlling influence, and even the latest large chunk did not trigger a formal filing requirement (something that the German authority reluctantly recognized after an investigation last November). All this, of course, was before the dramatic boardroom events at OpenAI late in 2023, with the CEO being fired and rehired in a matter of days. This event – which concluded with Microsoft in a much better position than where it started, with a seat on the Board though not yet a vote, and an expanding role since – should provide rich additional material for the agencies’ determinations.

Indeed the UK Competition and Markets Authority (CMA), having initially waived the agreement through in February 2023, opened a formal investigation last December. The European Commission is also informally investigating. The Federal Trade Commission announced on January 25 the opening of an inquiry under Section 6(b) of the FTC Act, specifically into the investments of Microsoft into OpenAI, and of Google and Amazon into Anthropic. This is formally a “study” intended to gather information, though in practice it will be an important step to lift the lid on these partnerships. As explained by Chair Lina Khan at the FTC Tech Summit at which the inquiry was announced, “The Commission today is launching a market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers. Through using the agency's 6(b) authority, we are scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack”. And, she noted, “we are squarely focused on aligning liability with capability and control. This requires looking upstream and across layers of the AI stack to pinpoint which actor is driving or enabling the law breaking and is best positioned to put a stop to it.” Assistant Attorney General Jonathan Kanter mentioned at a conference in Brussels last week that the DOJ is also looking at AI agreements.

There are two issues to focus on: first, does the arrangement in place confer any form of actual control, or decisive influence over the running of the AI partner? This is a factual/legal question which requires assessment of actual decision-making structures and processes. Overall it seems at least doubtful that these actors could claim their investment into large AI startups does not affect current and future competition at all. And second, even if formally there is no control, what are the effects on competition likely to be? I can see at least a few potential issues (as also set out in a January letter to the CMA and to the European Commission from multiple civil society organizations):

  • Leveraging power: Does the “senior partner” leverage control over key inputs and infrastructure into a stronger hold over the strategic direction of the AI business?
  • Competitive advantages: To what extent do these partnerships provide advantages to the investor, e.g. in the form of advance availability of products and features? To what extent are these advantages exclusive? It seems hard to argue that the competitive environment is entirely unaffected relative to an independent counterfactual. Competitive advantage is not an antitrust violation, but to the extent a company may establish a lead in the early stages of a market it may raise potential questions (as the agencies raised for cloud gaming in the Microsoft/Activision merger proceedings).
  • Discrimination: A variant of the above, if products are offered to rivals on unfavourable terms to extract excess rents and undermine their ability to compete.
  • Elimination of potential competition: With visibility into and control over the direction of travel, the “senior partner” can effectively ensure the AI partner never acquires the escape velocity to become a threat. The partnership can even justify the “senior partner” slowing down its own effort to catch up (a potential “reverse killer” phenomenon).
  • Direction of innovation: Steering innovation in a direction that benefits the “senior partner”, a persistent issue with tech giants determining the direction of complementary innovative investments.
  • Ecosystem advantages: Being able to exploit ecosystem advantages, i.e. leveraging complementary assets and capabilities in ways that rivals without an ecosystem cannot do.

Thus, while legally there will be major resistance to these agreements being characterized as conferring any influence over the AI partners, I support the initiative of regulators to request that companies provide full disclosure on all such deals – so that the agencies can understand the level of control. There should be regular information requirements on all such partnerships, so that a level of scrutiny is maintained over time. And where a deal is found to be a “merger in all but name,” it should be possible at the very least to put in place obligations – such as a requirement that firms “must deal” with others, or margin-squeeze and non-discrimination rules.

In addition to the body of current rules and regulations (including the General Data Protection Regulation in Europe), there is thus something that existing antitrust rules can and should do: both through consumer protection rules, at the level of greater product safety and less discrimination, and by dealing with the creation of market power upstream. Antitrust is absolutely just one tool, and it does not deal directly with the multitude of issues (toxicity, inequality, surveillance, repression) associated with AI. But it can play a part. The issue is stretching the rules, flexing the posture of regulators, and compressing timelines – which is itself a matter of posture. If reviews languish for months with no progress, we will be replicating the very issues which in Europe have motivated the pivot to regulation, while in the US we will be waiting for the uncertain outcome of monopolization trials in a distant future.

Time for a New Playbook

“Will this be a moment of opening up markets to fair and free competition, unleashing the full potential of emerging technologies, or will a handful of dominant firms concentrate control over these key tools locking us into a future of their choosing?” asked FTC Chair Lina Khan at the agency’s Tech Summit on January 25. That is the question before the world’s antitrust regulators today.

There is much that in principle they can do to answer it, if they are willing to stretch and re-animate existing rules. This is happening in the US as part of a major rethink of antitrust driven by a broad movement; it is occurring more selectively in Europe, at least for now. Regulators everywhere should advocate for the right to review conduct given the special circumstances of this industry. We are not facing extinction from AI, but a form of Groundhog Day: the corporate playbook we have seen unfold in the past decade is playing out again. This is not AI exceptionalism; rather, it reflects both the unique challenges AI poses and the experience of a decade of looking the other way in the horribly misguided belief that the market would sort itself out.
