London Tech Week: Showcase of Innovation—or a Signal of Growing Tech Dependence?
Megan Kirkwood / Jun 12, 2025
The London Tech Week expo at Kensington Olympia, London. Photo: Megan Kirkwood
June 9th marked the start of London Tech Week, the UK’s technology showcase and annual networking event. This year’s edition opened with a keynote from Prime Minister Keir Starmer, who urged the country to move beyond anxieties over AI. “Some people out there are sceptical,” he acknowledged. “They worry about AI taking their jobs. But I know from audiences like this, this debate has been had many times. We need to push past it.”
That message of optimism was reinforced by Nvidia CEO Jensen Huang, who joined Starmer on stage in a high-profile panel. “I make this prediction: because of AI, every industry in the UK will be a tech industry,” said Huang.
But beyond the enthusiasm, the event raised questions about the UK’s deepening reliance on US tech firms. Starmer’s keynote was followed by reports of a private gathering at his country residence with top American tech leaders—including Eric Schmidt, Demis Hassabis, Alex Wang, and Angie Ma—described as a bid to “reaffirm the UK’s position as a global tech leader.”
Big tech continues to feature heavily in UK initiatives
Starmer made several announcements in his keynote speech: Liquidity, an AI-driven private credit firm, opening a new London headquarters; a £1 billion government fund for additional compute power; faster data center building approval through the UK’s incoming Planning and Infrastructure Bill; the launch of a document data extraction tool called Extract, being trialed in a handful of local councils and built on Google’s Gemini model; and a partnership “with 11 major companies to train 7.5 million workers in AI by 2030.” Those companies include Google, Microsoft, IBM, SAS, Accenture, Sage, Barclays, BT, Amazon, Intuit, and Salesforce, and they “have committed to making high-quality training materials widely available to workers in businesses—large and small—up and down the country free of charge, over the next five years.”
Training and AI skills were a major focus of the speech, as Starmer announced “a new £187 million government ‘TechFirst’ programme to bring digital skills and AI learning into classrooms and communities,” backed by IBM, BAE Systems, QinetiQ, BT, Microsoft and the Careers and Enterprise Company—the national body for careers education. On top of this, Nvidia and the UK government signed two Memoranda of Understanding, “supporting the development of a nationwide AI talent pipeline and accelerating critical university-led research into the role of AI in advanced connectivity technologies.” In addition, Nvidia will expand its AI lab in Bristol to other areas of the UK to accelerate AI research.
What this flurry of announcements reveals is how Big Tech firms continue to embed themselves in the policy decisions of the UK government. From Google providing the model used to build government services to Nvidia providing AI training, such initiatives ensure that dependencies are formed. Google benefits from vendor lock-in on its model, while firms offering training programmes, such as the new Google AI Campus or Nvidia-funded training, ensure that a generation of workers is trained on their proprietary systems, or at least create “a pool of potential future customers and collaborators” familiar with their technology. Such initiatives will likely create a steady pipeline of workers for those companies, and they risk industry capture of AI research if companies like Nvidia fund higher education programmes.
AI inevitability
The panel discussion between Starmer and Huang made abundantly clear Starmer’s continued alignment with Silicon Valley. When discussing the roadblocks to AI adoption, Starmer essentially boiled them down to two problems: planning and infrastructure, meaning the ability to build and fund data centers and computing power, and cultural barriers to acceptance. Starmer expressed concern about “how to lead people on this path” because “it is going to happen,” leaning into the AI-inevitability narratives often pushed by tech leaders. Such narratives frame AI as an organic being, progressing and evolving, with tech leaders constantly urging governments and businesses to adopt it or risk being left behind. Indeed, Starmer told the audience that he had directed Peter Kyle, the UK’s Secretary of State for Science, Innovation, and Technology, to visit every government department to introduce AI.
Such a manic push risks the adoption of technology ill-suited to a specific context, as the Ada Lovelace Institute has found in its analysis of government procurement. Its research showed that government departments were unequipped to evaluate the technical information supplied in tenders and that procurers are unclear on how AI works, leading them to blindly adopt technology “because of the pressure to be innovative, save money, and not be left behind.”
Meanwhile, the government does not appear to consider the nuance of what each department does, how each context differs, and why some may therefore be more or less suited to different levels of automation. Part of the issue lies in how AI is framed as a technology. Huang describes it as “so broad and so transformative to every single industry,” claiming it can “reason and think and solve problems”—both anthropomorphizing the technology and portraying it as general-purpose. Starmer adopts a similar framing in his opening speech, calling AI “transformative” in health, defence, education, and social care—without acknowledging that each application is distinct and grouped under the AI umbrella largely for marketing purposes.
Dangerously, this leads to the conclusion that if certain use cases appear to save workers time in one context, those gains can simply be copied and pasted into another. For example, the government has just released guidance to educators, essentially encouraging the use of generative AI for classwork marking or generating letters to parents, because “(AI) is here to stay. It’s already having a significant impact across the public sector - from helping police identify criminals to improving cancer screening in the NHS.” Yet police are not using generative AI, or specifically the large language models that underpin many generative applications, to identify criminals. More likely, they are using facial recognition, which relies on a different model architecture built around face detection, feature extraction, and comparison and matching against a database.
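To make that architectural difference concrete, the sketch below shows the matching stage of a facial recognition pipeline in heavily simplified form. It is purely illustrative: the embedding function is a toy stand-in (real systems run a detected face crop through dedicated detection and embedding neural networks), and the enrolled database is invented for the example. The point is the shape of the system: it compares feature vectors against a watchlist and returns an identity, rather than generating text the way a large language model does.

```python
import numpy as np

# Toy stand-in for a face-embedding model. Real systems run a detected
# face crop through a dedicated neural network; here we just flatten
# pixels, centre them, and L2-normalize, purely for illustration.
def embed_face(image: np.ndarray) -> np.ndarray:
    vec = image.astype(np.float64).flatten()[:128]
    vec = np.resize(vec, 128)   # pad/truncate to a fixed length
    vec = vec - vec.mean()      # centre so unrelated images are near-orthogonal
    return vec / (np.linalg.norm(vec) + 1e-9)

def match_face(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return the enrolled identity whose embedding is most similar to the
    probe (cosine similarity), or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for person_id, enrolled in database.items():
        score = float(np.dot(probe, enrolled))  # vectors are unit-length
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical usage: enrol two "faces" (random images standing in for
# photos), then query with a noisy re-capture of the first.
rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64)), rng.random((64, 64))
database = {"person_a": embed_face(img_a), "person_b": embed_face(img_b)}
probe = embed_face(img_a + rng.normal(0.0, 0.01, img_a.shape))
print(match_face(probe, database))  # expected: person_a
```

Even this toy version shows why “AI” is doing a lot of work in the government’s framing: a watchlist-matching system of this shape shares almost nothing, architecturally, with a chatbot that predicts the next word.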
Talking about these technologies as if they were a single technology conflates the potential benefits of one type of application with all the other applications and contexts to which the label is applied.
Infrastructure push
Starmer stated on stage that business leaders had complained to him that the UK’s major roadblock to AI adoption was a lack of infrastructure—a concern Huang reiterated. The government therefore announced a £1 billion fund for additional computing power and faster approval for data center building. Curiously, there was no mention of previous efforts to publicly fund AI infrastructure, such as the University of Edinburgh’s Exascale supercomputer for research, which was scrapped due to a lack of budget.
The significant investment pledged by both government and industry hinges on the bet that AI will boost productivity and the UK economy. As I previously reported, such conviction stems from the outsized voices representing Big Tech, further exemplified by the announcements made and the company kept by the Prime Minister. Companies like Microsoft also push out reports touting impressive time-saving productivity gains from AI tools that conveniently rely on their services. Such reports tend to flash impressively big numbers, predicting that productivity gains will boost economic growth, but only on the premise that the government buys in and scales up infrastructure. Closer inspection of the methodology reveals that the headline-grabbing number in that report was produced by GPT-4 prompts. Indeed, Kate Brennan, Amba Kak, and Dr. Sarah Myers West give several examples of companies “circulat[ing] their own research as a PR tactic, leading to the mass circulation of unverified claims.”
From the announcements made on the opening day of London Tech Week, it is clear that the UK has bought wholesale into Big Tech’s AI promises. However, the UK runs the risk not only of buying into the false hope of productivity and efficiency gains, but also of dispersing AI across a variety of public services, which could lead to unintended consequences. These range from teachers inadvertently making mistakes when using generative AI for marking, owing to so-called hallucinations, to justifying, in the long run, larger class sizes and heavier workloads for teachers because of purported productivity gains from AI. Potentially more damaging still, incorrect AI-generated summaries of asylum applications could lead to increased deportations as the government attempts to rush through the backlog of applications and appeals while aiming to reduce migration.
In addition to these concerns, the UK risks aligning too closely with a handful of the world's wealthiest companies, thereby granting them outsized influence over policy and embedding their technology in public services. The lack of consideration for the role the tech oligarchs are playing in the breakdown of US democracy is staggering. By handing such companies more infrastructural power and aligning with their ideologies, we might be risking our democracy in exchange for some chatbots.