
EU Competition Authorities Discuss Regulating Generative AI

Anya Schiffrin, Romy Ronen / Jun 27, 2024

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

When and how to regulate generative AI companies was top of mind at a June 17-18 conference held at the prestigious European University Institute on the topic of "The Digital Market Effect of Generative AI." Co-organized by the Organization for Economic Co-operation and Development (OECD) and the Center for a Digital Society, the meeting came on the heels of a June OECD gathering of competition authorities which also focused on generative AI.

Portugal’s competition authority was the first EU competition authority to publish a comprehensive report on competition and generative AI, and its president, Nuno Cunha Rodrigues, explained that nearly all aspects of the creation of generative AI have implications for competitive markets. Access to data is key, but it is disproportionately in the hands of a few powerful firms, which gives them a competitive advantage and enhances their already strong market power. The use of the massive amounts of data at their disposal can also collide with the GDPR and intellectual property rights, and Cunha Rodrigues pointed out the “legal risks” for AI developers when “holders of IP rights demand compensation.” Computing power requires access to affordable cloud services and hardware, but the cloud market is already highly concentrated, with significant portions in the hands of the same firms that dominate in data, providing another avenue for the amplification of market power. Having know-how and being able to experiment are essential for innovation, but lack of access to data, contracts that handcuff employees and prevent them from switching companies, and no-poach agreements between firms are all possible choke points that can strangle competition.

“AI is introducing a moment of contestability” that “creates new points of entry into existing markets, exactly like the Internet did,” said Cunha Rodrigues. As such, he added, “Competition authorities should help to remove barriers to firms (…) so they have the incentives and means to innovate” in the context of generative AI.

Also discussed at the conference was whether existing rules are enough, so that all that is needed is good enforcement, or whether new ones are required.

Sabine Zigelski, Senior Case Manager at the German Bundeskartellamt, said “the tool kit is not complete” and that far more can be done in the EU. Of the tools currently available to authorities, Zigelski said that merger control is the most important, since intervention against abuse of dominance and regulation only come after the harm has already taken place. “We may need to be more bold. The risk of over enforcement doesn’t do as much harm as the risk of under enforcement. Once market structures have deteriorated, they are hard to fix. But innovation will always find its way,” Zigelski said.

Worries about “partnerships”

As Politico chief technology correspondent Mark Scott and others have pointed out, the fact that so many of the current arrangements between AI firms are structured as “partnerships” and not classical mergers or acquisitions means that companies may be able to elude current merger regulations and that such partnerships can raise barriers to entry. Regulators need to look closely at these partnerships.

There was also some skepticism about the effectiveness of solutions to the competition/intellectual property problems, such as open source, with some suggesting that so-called “open source” LLMs are often not truly open.

Anu Bradford, a Columbia University law professor, believes that investigating competition among the Big Tech companies, and specifically their partnerships, is essential. One partnership that seems particularly worrisome is Microsoft’s major investment in AI start-ups.

Citing problems like lack of portability and interoperability, barriers to entry, and lack of transparency, regulators commented that many of the anticompetitive practices that have plagued the tech industry over decades are still in place. “All the tactics we’ve seen over the last 30 years are popping up again,” said Susan Athey, a Stanford University professor and antitrust expert.

Copyright: French Competition Authority Fines Google Again

Another conference speaker, Grégoire Colmet Daâge, a case handler with the French Competition Authority (FCA), provided background on the FCA’s March 2024 fine of €250 million levied on Alphabet. This was the latest of four decisions, issued by the Authority since 2019, regarding the company’s use of media content.

In France, as in many other countries around the world, publishers have been trying to get Google and Facebook to pay for the news they use. The EU’s April 2019 copyright directive (Article 15) requires such payments, and so in October 2019 two groups of French magazines and newspapers and a news agency lodged a case with the FCA against Google for not paying for content. In April 2020, the FCA ordered Google, in an emergency procedure, to negotiate in good faith with any publishers, news agencies or collective management bodies on the basis of transparent, objective and non-discriminatory criteria. It also ordered Google to provide publishers and news agencies with information on the use of their content, and to take the necessary measures to ensure that these negotiations could affect neither the indexing or presentation of publisher content, nor other economic relations between Google and publishers and news agencies. However, Google was fined €500 million in 2021 for not complying with these injunctions and told again, under threat of periodic penalty payments, that it had to negotiate in good faith with publishers and provide the information needed to determine appropriate remuneration.

In 2022, the FCA accepted the commitments proposed by Google to create a framework for negotiating remuneration for the use of publishers’ and news agencies’ content and for sharing the information necessary for assessing that remuneration. However, in 2024 the FCA found that Google did not comply with its commitments, particularly because the company did not notify publishers when their content had been used. Instead, Google went ahead and used French publisher content for training and grounding of its Bard (now Gemini) LLM. The FCA also found that press publishers were not given an option to opt out of Bard until recently. For these reasons, Google was fined again in March 2024. Apparently, the value of the content exceeded the expected value of the fines that would be imposed; given the size of the fines already levied, this suggests that the value of the information taken from these French publishers is enormous.

Law professor Giuseppe Mazziotti gave a presentation discussing whether AI data ingestion and processing technologies can infringe copyright, from both an input and output perspective. To this end, he discussed the unresolved issue of copyright exceptions, which determine whether certain uses are legitimate without the copyright holder's consent. He reminded the group that “Fair Use” is a US concept which doesn’t exist in quite the same way anywhere else in the world. Europe, instead, has a fragmented and narrowly defined approach in this domain. He also warned against the notion that generative AI can produce truly synthetic content with no implications for intellectual property. “Let’s look at ‘authorless works’ with caution. At some stage in the production process there is often human selection, creation or involvement,” Mazziotti noted.

Mazziotti said that the current absence of a binding definition of literary and artistic works at the global level makes it harder to hold Big Tech companies accountable for large-scale digital copyright infringement. Indeed, since the rise of social media, he says there have been multiple “acts of aggression.” Mazziotti proposed that, despite the uncertainties raised by the newly enacted EU AI Act and its interplay with EU copyright law, every tech company in Europe should “reveal how they train their machines” and be more transparent about the use of intellectual property.

Meta/Google warn against overregulating

Some representatives of the big tech firms in the room cautioned that too much regulation and enforcement would stifle innovation. None, however, made the case that simply sharing the profits they earn with those who generate the data they use would have a significant effect on innovation.

In a follow-up email, Google Public Policy Senior Manager Georgios Mavros said, “GenAI models require significant upfront and operational costs. Therefore, involvement of large companies – including through partnerships with startups – is important for market development. As the EU trails the US in ICT productivity growth, EU public policies should facilitate access to and diffusion of technology across the economy to improve EU competitiveness, a key political priority for the next five years.”

Authors

Anya Schiffrin
Anya Schiffrin is the director of the Technology, Media, and Communications specialization at Columbia University’s School of International and Public Affairs and a lecturer who teaches on global media, innovation and human rights. She writes on journalism and development, investigative reporting in the global sou...
Romy Ronen
Romy Ronen is currently pursuing her Ph.D. in Communications at Columbia's Journalism School and is focused on changes in the film and news industries since the domination of social media, Big Tech, and streaming services. Ronen graduated in 2022 from the Joint Program between Columbia University an...
