AI Facial Recognition Surveillance in the UK

Alex Wagner / Oct 22, 2024

Image by Comuzi / © BBC / Better Images of AI / Surveillance View B / CC-BY 4.0

On September 5, 2024, the UK became one of the first signatories of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, the first legally binding international treaty that aims to manage the dangers posed by AI. Crucially, the framework includes provisions to protect the public and their data, human rights, democracy, and the rule of law.

Despite this commitment, Prime Minister Keir Starmer has already announced his government’s plans to establish “a national capability across police forces to tackle violent disorder,” including a “wider deployment of facial recognition technology,” even though clear safeguards and regulatory oversight are lacking. The new government’s plan to roll out AI-powered facial recognition technology is just the latest attempt by successive UK governments to further entrench biometric surveillance in public spaces, threatening the fundamental right to privacy and emboldening what the human rights group Liberty has described as “the most intrusive mass surveillance regime of any democratic country.”

Lack of safeguards

The UK’s current pro-innovation approach to AI governance consists of a principles-based framework in which existing regulators, such as Ofcom and the Information Commissioner’s Office (ICO), are responsible for overseeing the development of AI within their domains. This non-statutory approach aims to offer the flexibility necessary for the UK to keep pace with rapid and uncertain advances in AI technology, though the Labour government has pledged to “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

With no binding legislation and no central regulatory body, the use of facial recognition software in the UK relies on a patchwork of overlapping laws, most of which focus on biometric data. Police forces are able, and indeed encouraged, to deploy the technology to locate missing persons or to find people on police watchlists. These watchlists include people wanted by the courts as well as individuals whom police have reasonable grounds to suspect are about to commit an offense.

This predictive use of policing algorithms raises concerns beyond privacy. Last year, the National Physical Laboratory, the official UK body for setting measurement standards, reported that facial recognition technology was more likely to return false positive identifications for black faces than for white or Asian ones, a disparity the report describes as "statistically significant." In July, the #SafetyNotSurveillance coalition wrote to the Home Secretary warning against the use of predictive policing systems, calling for a prohibition on AI algorithms that identify, profile, and target individuals, and for a legislative framework founded on transparency, accountability, and accessibility to underpin the use of all data-based automated AI systems in policing.

Instead of establishing the regulatory framework necessary to protect citizens’ data from new and intrusive technologies, the new Labour government has continued to actively encourage the technology’s use, seizing on recent civil unrest as an opportunity to further embed it in policing, with police forces happy to oblige. In London alone, Met Police statistics show that the force scanned 366,156 faces across 34 deployments in 2023. The technology has also recently been used by authorities in Essex, North Wales, and Hampshire.

Biometric data and the private sector

Research by the Alan Turing Institute found that more than half of the British public are concerned about the sharing of biometric data between the police and the private sector. Despite these concerns, the House of Lords Justice and Home Affairs Committee is currently conducting an inquiry into responses to shoplifting, including the Pegasus Initiative, a business and policing partnership, funded by the UK’s largest retailers, that uses facial recognition technology to identify and prosecute offenders.

Private companies across the UK are also increasingly gathering biometric data. In February of this year, Serco was ordered to stop using facial recognition technology and fingerprint scanning to monitor the attendance of its staff after the ICO found that the company had unlawfully processed the biometric data of more than 2,000 employees. The ICO also issued a reprimand to a school that failed to carry out a data protection impact assessment before implementing facial recognition technology for canteen payments.

But even the ICO is struggling to act. Last year, the American facial recognition company Clearview AI overturned a £7.5 million fine the regulator had imposed for unlawfully storing more than 20 billion images of people’s faces taken from publicly available information on the internet and social media platforms. The company won its appeal because its services were used only by law enforcement bodies outside the UK, leaving the ICO without jurisdiction to take enforcement action despite the use of UK citizens’ data. The case highlights the lack of adequate regulation, both domestically and internationally, and the extent to which governments and private companies alike can exploit this legal grey area to further increase surveillance in public, at work, and even in schools.

Following the Prime Minister’s speech in August, a coalition of human rights, racial justice, migrants’ rights, and civil liberties groups published an open letter citing serious concerns about the accuracy and bias of facial recognition technology, as well as the threat it poses to freedom of expression and freedom of assembly. The group notes “that there is no explicit legal basis for FRT use by the police and it has never been debated by Parliament.” The letter further argued that this form of public surveillance is incompatible with the European Convention on Human Rights, warning that its continued use risks making the UK an outlier in the democratic world. Despite this, the new government looks set to continue the expansion of biometric surveillance under the guise of public safety while simultaneously positioning itself on the world stage as a global leader in responsible AI development.

At the signing of the Council of Europe’s Framework Convention in Vilnius, Lord Chancellor and Justice Secretary Shabana Mahmood called the convention “a major step to ensuring that these new technologies can be harnessed without eroding our oldest values like human rights and the rule of law.” Back in the UK, those values are under threat from an outdated regulatory framework and a government that shows no intention of defending civil liberties against surveillance and profit.

Authors

Alex Wagner
Alex Wagner is a consultant in the fields of tech policy and geopolitics. His work explores the regulation of emerging technologies and their impact on people and geopolitics.