The Real “Brussels Effect” and Responsible Global Use of AI

William Burns / Nov 11, 2024

A recent article on Tech Policy Press by Kristina Fort made the case for the European Union to dedicate “more focus to the external dimensions of AI and its role in the international context” in order “to ensure that its values and opinions are reflected in the discussions surrounding AI more broadly.”

But what values and opinions will the EU project internationally? Already, unpromising signs are emerging from its vaunted AI Act.

The so-called “Brussels effect” – the global impact of EU regulatory approaches – could yet further damage democracy in the Global South via the export of dangerous AI.

Of course, everyone and their dog has found faults in the AI Act. But loopholes in how the Act handles surveillance technology triggered notable concern among progressive voices such as European Digital Rights, a collective of NGOs, experts, and advocates focused on digital rights.

For example, the Act does not ban biometric categorization systems, “real-time” and ex-post remote biometric identification in public spaces, predictive policing, and emotion recognition in high-risk areas (see analysis by Sandra Wachter). There is also no prohibition on selling AI to “third-party countries” (non-EU members), even when it includes features ostensibly banned in the EU itself.

This could, in principle, mean the export of a suite of new surveillance and manipulative communication tools that fall within the category of “technology of political control.” According to a preprint academic paper by Pratyusha Ria Kalluri and colleagues, there is a big pipeline in AI surveillance technology that reaches beyond the usual “rogue actors.”

While the US and China dominate, EU members such as Germany and France are also listed among the “top countries.”

It is impossible to make solid predictions about the future prospects for this technology, but there are obvious risks of it proliferating and being sold to governments. This would not be a completely new pattern. Europe already exports substantial volumes of dangerous technologies such as lethal weapons, as well as police equipment exempted from the EU Anti-Torture Regulation, like “flashbang” grenades and kinetic impact projectiles.

However, AI would theoretically give new, insidious capabilities to those in power – and democracy has already retreated globally.

Out of sight, out of mind

Much of the discussion of strategic capacity in AI focuses on hardware, i.e., advanced microchips. In this field, Europe is often seen as lacking. Yet Europe is exceptionally strong, if not the world champion, in digital services – and it is to this relatively intangible sector that we must first look.

Last year, a report from the Jacques Delors Institute found that the EU “exported digitally deliverable services valued at…770 billion USD in 2022. Throughout the 2010s, the EU-27 exported more digitally deliverable services than the US, and the gap has widened in recent years. The EU-27 also exported about…three times as much as India and China.”

The report did not offer a breakdown by industry sector, but the data suggest a substantial base from which AI-related exports – and their attendant risks – could proliferate.

However, any assessment must also factor in a European political environment that often favors the militarization of science and research, as well as the existence of a large European weapons industry.

Regarding the first factor, there have recently been calls to develop a “CERN for AI.” CERN, headquartered in Switzerland (not in the EU), is a civilian, non-profit organization researching fundamental particles.

However, what is often being proposed in the AI case is a public-private partnership with commercial motivations. The think tank ICFG put the price tag for a “CERN for AI” at €31.5 billion. The most recent commitment to creating civil R&D centers in Europe was a fraction of that figure – €875 million for the Extreme Light Infrastructure (ELI). ELI was beset with problems, and bottomless pits of cash for prestige civil R&D have become harder to sell politically.

Substantially expanded cyberwarfare capacity with added commercial elements seems more aligned with what senior officials intend. It is an agenda discussed within the EU bureaucracy and laid out bluntly by the EU’s outgoing foreign affairs supremo, Josep Borrell, in his blog last month. Borrell foresees AI, quantum technology, etc., as devices to project awesome power – in other words, military technology – but also ones that will earn stupendous profits for those who hold them.

The European computer giants of yesteryear, such as Olivetti, which grew out of office equipment like typewriters, are effectively defunct; their relics can now only be found on eBay. In the weapons industry, however, the situation is quite the contrary: the EU has some big firms, deeply anchored in electronics, that increasingly boast AI prowess. In light of the above, the risks of proliferation – some of it potentially dangerous – are not abstract.

Europe is out of control

Alignment with America is as close as we get to gravity in European science and technology policy.

US officials – as well as major American computer firms lobbying European policymakers – articulate the goal of invention, implementation, and trade of AI (and related tech) under American control.

The 2021 paper on Artificial Intelligence diplomacy, written by the European Council on Foreign Relations (at the request of the European Parliament), is crystal clear on this point.

“The US are [sic] Europe’s most important partner, and the EU should work closely with the US on AI as well as on other topics to face China…it should be clear that the transatlantic relationship is key to Europe’s external tech policy, and that there is and cannot be an equidistance between the US and China.”

However, outside these parameters – which surely carry certain restrictions on where European firms can conduct their business, namely in China – US policy has not been noted for ethical restraint.

Speaking in a meeting concerning “scientific developments” at the UN Security Council last month, the American representative, Dorothy Shea, reviewed the extent of American commitments on this point.

The maximum commitment thus far, in terms of international agreements, appears to be the US signature on the Council of Europe Framework Convention on Artificial Intelligence.

I am not a lawyer and cannot comment systematically on the strength of the convention. But it explicitly excludes “private actors,” “national security interests,” “defense,” and R&D from its strictures – huge gaps – and it offers no substantive remedies.

Hence, my conclusion is that we cannot expect external checks and balances from American leaders. This brings us to possible internal checks and balances within Europe. The EU – i.e., the Brussels-based institutions – has superficial similarities to a big federal bureaucracy like the US Department of Energy, with its array of legal, regulatory, and scientific functions. However, it is not as big, powerful, or expert as the American model. While European officials, when tasked appropriately, will, of course, know how to plan and execute policies, a lack of staff bites as responsibilities pile up.

Of course, the EU’s new AI Office, set to recruit 150 officials, cannot be judged before it has been established. But, as already noted, there are loopholes in the underpinning AI Act. It is implausible that the office would take on activist tasks outside the legislation, such as gathering data on harmful sales overseas. Perhaps it could dabble in lower-key horizon scanning activities that would meet some of these needs. But in the absence of export controls, there is no solid way to measure what is going on, let alone halt it.

The European External Action Service, ostensibly dealing with foreign affairs, musters very few dedicated science and technology officials – just 12 “science counselors,” for example – and is therefore practically moribund on such issues. Besides, it is not an accountability mechanism but a means to articulate the views of senior EU officials.

Furthermore, taxpayer-funded science and research is often configured as support for industry rather than as a public service, which opens the activity to powerful industry lobbies and, at least in principle, leaves officials with public service instincts cowed.

Beyond the Brussels institutions, the bulk of activities labeled as “European” is conducted by the member states (with the big countries, notably France and Germany, having outsized impacts). Assessing the scope of this category proves difficult. It would hinge not just on the political “temperature” in each country but also on a host of other factors, such as the military-industrial base, the scale of R&D, the thrust of overseas development aid, and diplomatic clout. In effect, we need an index to assess the risk of any given European country proliferating dangerous AI. To my knowledge, no such index currently exists.
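To make the idea concrete, below is a minimal sketch, in Python, of how such an index might combine the factors just mentioned into a single score. The factor names, weights, and example values are purely illustrative assumptions on my part, not an established methodology.

```python
# Illustrative sketch of a country-level "AI proliferation risk" index.
# Factor names, weights, and scores are hypothetical assumptions for
# demonstration only; a real index would need validated indicators.

# Weights for each normalized risk factor (they sum to 1.0).
WEIGHTS = {
    "military_industrial_base": 0.30,  # size of the weapons industry
    "rd_scale": 0.25,                  # scale of relevant R&D
    "development_aid_thrust": 0.20,    # thrust of overseas development aid
    "diplomatic_clout": 0.25,          # diplomatic reach
}

def proliferation_risk(scores: dict) -> float:
    """Return the weighted average of factor scores from 0.0 (low) to 1.0 (high)."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Made-up scores for a hypothetical country.
example = {
    "military_industrial_base": 0.8,
    "rd_scale": 0.7,
    "development_aid_thrust": 0.4,
    "diplomatic_clout": 0.6,
}
print(f"Risk score: {proliferation_risk(example):.2f}")
```

Any real version of such an index would, of course, stand or fall on the quality and comparability of the underlying indicators – which, in the absence of export controls and systematic data gathering, is exactly what is missing.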

Democracy and internationalization

Last year, the official European Ombudsman, Emily O'Reilly, tasked with checking the probity of EU administration, warned of a growing lack of accountability in European policymaking, which she connected to a tendency to view all policies through a “geopolitical” lens.

“As more and more of the core business of the European Commission and other institutions becomes internationalized, the more it may move beyond the full reach of the accountability measures embedded in ordinary rules and procedures, and of watchdog institutions,” O’Reilly was reported to have said.

Her comments could not be more relevant to how the EU will operate over the coming years. In Europe, internationalization means secrecy, and vice versa – two sides of the same coin. Few outside the EU bubble realize just how much the “geopolitical” agenda dominates policy discussion, leaving a trail of secrets in its wake. Policies dressed up in military garb, as it were, get a free pass.

Bureaucracies certainly have a natural tendency towards secrecy, and military-industrial complexes are not new bugbears. But science and technology policy has always been a niche area, which means it already suffers from an accountability deficit. Perhaps understandably, only a relatively small number of politicians dive into the details.

Christian Ehler, a prominent member of the center-right in the science policy caucus of the European Parliament, recently objected to attempts by the European Commission to develop joint scientific programs with Singapore. His reason was that the Commission did not publish its human rights assessment of the city-state. Ehler said the parliament would vote to block the Commission’s plans with Singapore unless the officials provided “proof to show that Singapore complies with our democratic norms.”

Singapore is evidently not a signatory to some international human rights agreements; its British colonial-era laws feature hanging and caning. Yet EU member states are up to their eyeballs in bilateral science and technology collaboration with Singapore. As such, the ship has already sailed on moral hazard.

Ehler had also objected to joint science programs with New Zealand and Canada on the grounds that the Commission had negotiated them without involving the parliament. This would seem, therefore, to be a tussle between the executive and the legislature over who sets international S&T strategy.

Overall, it was a minor ripple in the bigger picture of accountability – and no one, I would think, is going to get worked up if this is about ecology or cancer research.

But it highlights issues that really start to bite and, indeed, may already be biting in other, more controversial fields. The steady interlocking of “geopolitics,” lack of accountability, and technology like AI is concerning. Unfortunately, we have limited means to detect problems and even fewer means to stop them.

Authors

William Burns
William Burns is an advisor in science and technology strategy at Science Think Tank. His focus is on the European Union and emerging markets. He is a graduate of Imperial College London and currently lives in Barcelona.
