
Putting Speed Limits on Silicon Chips

William Burns / Jul 16, 2024

Catherine Breslin & Team and Adobe Firefly / Better Images of AI / Chipping Silicon / CC-BY 4.0

Governments often talk about restricting access to silicon chips, and some, such as the United States, have even enacted laws to do so. But very few people, besides industry experts, talk about the details of the chips. Chips are treated in policy circles as if they are black boxes, unfathomable to mortals.

The workings of chips are certainly microscopic, beyond the view of the naked eye. But they are also contingent on legible technology choices, accumulated over years in the computer industry, which it is entirely possible to understand. A deep dive into this topic reveals opportunities for democratic shaping of the industry in the context of AI that have not been properly explored.

More Moore

The fundamental driving force of the IT industry has been the quest for more computational power by cramming ever-greater numbers of transistors on chips. While Moore’s law no longer has the salience it once did, the underlying spirit of Moore is still with us.

If we could change that metric – put a limit on it, as I will argue – we might see dramatic, positive effects not only on energy consumption but also elsewhere, such as reduced costs and a market opened up to new players.

A chip industry liberated from the dogma of cramming transistors into ever smaller areas also points to a new and intriguing innovation paradigm which, so far, has been kept on the sidelines.

Incumbents of the industry are unlikely to move this way on their own. But I still believe the idea is not utopian. My optimism comes, in part, from the science and technology policy community gradually waking up to the fact that AI is regulatable. It is not an unprecedented phenomenon, but something more familiar and therefore guidable.

One line of evidence for this perspective lies in the discussion of what are seen as the excessive energy and resource requirements of IT. Such concerns are proof of an appetite to talk about AI in material terms.

The second line of evidence comes from a careful reading of some emerging AI regulations.

The European Union’s recent AI Act, for example, calls on developers to disclose the “estimated energy consumption” of AI models, while the European Commission, for its part, is required to report on “energy-efficient development of general-purpose AI models, and assess the need for further measures or actions, including binding measures or actions.”

We are still far from legally enforceable supervision backed by mandatory reporting of data, but it is the start of a process of change that could play out over the next decade.

Technology that matters to governments

Silicon chips are technologies that matter to governments, symbolic of national prestige as much as useful instruments. Like nuclear reactors, jet engines and space rockets – but unlike a lot of other important technology – these exquisite devices would not have been invented and disseminated without official elites wanting them.

Chips emerged from the American military-industrial complex in the last century and were more recently picked up by firms in East Asia that are either state-owned or have close historical ties to the state.

Manufacturing is concentrated in a small number of large companies that exhibit a political immovability almost synonymous with the states that produced them; equally, though, the structure of the sector is quite legible.

Rube Goldberg machines

Within this unusual industry, material concerns such as power and energy have always been front of mind. Over time, the key players developed chips that were more energy efficient at performing computations.

Koomey’s law tells us that the number of computations per kWh has increased exponentially since the advent of electronic computers, with an average doubling time of 1.57 years between 1946 and 2009 (slowing to 2.29 years in the period 2008-2023, per Prieto et al., 2024).
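To make those doubling times concrete, here is a minimal sketch, in Python, of how a doubling time translates into an overall efficiency multiplier over a given period; the function and printed figures are illustrative arithmetic, not data from the cited studies.

```python
# Koomey's law as a simple exponential with a doubling time.
# The doubling times (1.57 and 2.29 years) come from the text above;
# everything else is back-of-the-envelope illustration.

def efficiency_gain(years: float, doubling_time_years: float) -> float:
    """Factor by which computations-per-kWh grows over `years`."""
    return 2 ** (years / doubling_time_years)

print(f"1946-2009 at 1.57-year doubling: x{efficiency_gain(2009 - 1946, 1.57):.2e}")
print(f"2008-2023 at 2.29-year doubling: x{efficiency_gain(2023 - 2008, 2.29):.0f}")
```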

But this increase in efficiency was not due to a focus on energy; it came from an overwhelming obsession with packing transistors at ever greater density, which, coincidentally, gave efficiency gains.

By contrast, a hypothetical drive to reduce energy use would have had different and intriguing results (archives of the important Hot Chips symposium reveal a few paths not taken, such as slow computing).

Cramming transistors into an ever smaller space generated greater amounts of heat at junctions. A good part of the industry dedicated itself to fixing the problem with, I suspect, substantial but unknown opportunity costs.

Heat results when the chip is powered up: the movement of electrons generates lattice vibrations. This heat is significant – thermodynamically, almost all supplied power ends up as heat.

In a tightly packed integrated circuit, the heat cannot disperse. The more densely packed the chip, the more extreme the problem, not so much affecting the ceramic components as melting their surroundings, lowering reliability and driving systemic failure.
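To ground that intuition, here is a minimal sketch of the standard first-order relation for dynamic switching power in CMOS logic, P ≈ α·C·V²·f, and the power density that follows from it; the activity factor, capacitance, voltage, frequency and die area below are invented for illustration, not figures for any real chip.

```python
# First-order dynamic-power sketch: P ~ activity * switched capacitance * V^2 * frequency.
# All numbers below are invented for illustration only.

def dynamic_power_watts(activity: float, capacitance_farads: float,
                        voltage_volts: float, frequency_hz: float) -> float:
    """Classic first-order estimate of CMOS switching power."""
    return activity * capacitance_farads * voltage_volts ** 2 * frequency_hz

power = dynamic_power_watts(activity=0.1,             # fraction of gates switching per cycle
                            capacitance_farads=3e-7,  # total switched capacitance (~300 nF)
                            voltage_volts=1.0,
                            frequency_hz=3e9)         # 3 GHz clock
die_area_cm2 = 1.0
print(f"Power: {power:.0f} W over {die_area_cm2} cm^2 -> {power / die_area_cm2:.0f} W/cm^2")
```

Squeeze the same switched capacitance into a smaller die, or raise the clock frequency, and the watts per square centimetre climb accordingly.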

As the number of components in circuits increased, the chance of failure of individual components also started to bite, purely as a numbers game.
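That numbers game is simple probability. Here is a minimal sketch, assuming independent failures and an invented per-component failure probability, of how the chance that at least one of N components fails grows with N:

```python
# Probability that at least one of N components fails, assuming independent
# failures and an identical (invented, illustrative) per-component probability.

def prob_any_failure(n_components: int, p_single: float) -> float:
    return 1.0 - (1.0 - p_single) ** n_components

p = 1e-9  # hypothetical chance that a single component fails in some interval
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} components -> P(at least one failure) = {prob_any_failure(n, p):.4f}")
```

With a billion components, even a one-in-a-billion individual failure rate gives better-than-even odds that something goes wrong.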

A growing problem was recognized in the late 1970s and worsened through subsequent years, notably after the breakdown of Dennard scaling, the fortuitous relationship under which power density had stayed roughly constant as transistors shrank.
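Under classical Dennard scaling, shrinking linear dimensions by a factor k also scaled voltage and capacitance by 1/k, so per-transistor power fell by roughly 1/k² while transistor density rose by k², leaving power density flat; once supply voltage stopped scaling, that cancellation broke down. A minimal sketch of the arithmetic, with k as the only input and the usual textbook scaling assumptions baked in:

```python
# Back-of-the-envelope Dennard-scaling arithmetic. "Classical" scaling assumes
# supply voltage shrinks with feature size; the second case assumes it no longer does.

def power_density_factor(k: float, voltage_scales: bool) -> float:
    """Relative change in power density after shrinking linear dimensions by 1/k."""
    density = k ** 2                      # transistors per unit area rise by k^2
    capacitance = 1 / k                   # per-transistor capacitance falls by 1/k
    frequency = k                         # clock frequency rises by k
    voltage = (1 / k) if voltage_scales else 1.0
    per_transistor_power = capacitance * voltage ** 2 * frequency
    return density * per_transistor_power

k = 1.4  # roughly one process generation
print("Classical Dennard scaling:", round(power_density_factor(k, voltage_scales=True), 2))
print("Voltage no longer scaling:", round(power_density_factor(k, voltage_scales=False), 2))
```

The first case comes out at 1.0 (power density unchanged); the second at k², which is why each new generation now runs hotter unless something else gives.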

The authors of a popular textbook on thermal management of chips observed in 2006 that ‘it will not be long before microprocessors will have power densities comparable to that of nuclear reactors and rocket nozzles.’

The answer to the heat conundrum lay with Rube Goldberg combinations of fans, copper heat-pipes and thermal interface materials such as thermal paste (and, lately, liquid coolants).

The term Rube Goldberg is carefully chosen. A succinct article by one of the leading scientists in the field of thermal paste, Professor D.D.L. Chung, highlights the difficulties of evaluating such interface materials in a scientific way.

Professor Kunle Olukotun staved off the heat problem from another direction when he pioneered the multicore chip, increasing the number of cores per die and thereby, somewhat, unpacking the transistors.

Graphics processing units (GPUs), which were much less tightly packed than CPUs, had a particular advantage because they concentrated less heat. Bitcoin mining and then AI supplied them with new uses beyond their initial niche in games.

Political economy

You could say the quest for computational power has long since spiralled out of control. Chips are defined by heating minuscule ceramics to high temperatures by passing electricity through them. This causes a host of additional problems, which are patched with stop-gap efforts rather than any permanent fix.

In response, some in the policy community call for hubristic renovations of the entire energy infrastructure to accommodate this new energy demand, both to power chips and to run the cooling systems that support them.

This, in turn, implies vast central data centres reminiscent of the old mainframes, rather than distributed computational devices that are, in theory, more democratic.

The time has come to recognize the causes of the phenomenon and set a limit.

My suggestion is that we need to think about computational sufficiency – no more computation than is needed, and no less, in light of our computational needs and assessments of the resources available to meet them.

This means reconnecting the basic currency of computation, the silicon/silicon dioxide band gap (Carver Mead’s terminology), with problems we want, or need, to solve.

In the early days of electronic computing, the link was explicit. Computers, down to the last valve, were tied to particular problems such as breaking wartime codes.

However, the psychological connection between physical materiality and its use has mostly dropped into the background, and in complex ways that are very difficult to track.

This is as much a question of political economy as it is of technology. Or, rather, it is both of those things together. The possibilities, incomplete as they might be, will emerge in the rough and tumble of regulatory efforts, rather than being ones we can define in advance.

Authors

William Burns
William Burns is an advisor in science and technology strategy at Science Think Tank. His focus is on the European Union and emerging markets. He is a graduate of Imperial College in London but currently lives in Barcelona.
