Names Today, Faces Tomorrow? UK Police Fail On Privacy

Manasa Narayanan / Aug 24, 2023

Manasa Narayanan is a journalist with the Citizens reporting on data, democracy and disinformation.

Police LFR watch tower in London's Trafalgar Square during the King’s coronation, May 2023.
Photo by Manasa Narayanan.

Recent events raise substantial concerns about whether UK police forces can be trusted with advanced surveillance technologies. In the last two months alone, multiple data breaches implicating UK police have come to light:

  • On 15th August, Norfolk and Suffolk police rather casually put out a statement announcing that the personal details of 1,230 people from their crime database had been mistakenly released as part of Freedom of Information (FOI) responses. These included the identities of victims, witnesses and suspects linked to serious crimes, including sexual offenses. Victims of sexual violence are usually protected by the right to anonymity given the gravity of their cases. But careless leaks like this put vulnerable individuals in an even more precarious situation.
  • A week before, the names, ranks and unit information of 10,000 police officers with the Police Service of Northern Ireland (PSNI) were also mistakenly uploaded in response to an FOI request. The list remained online for several hours and was accessed by Republican dissidents in Northern Ireland, who have since issued threats to the police. This happened in a region where hostility towards the police is high, and against the backdrop of a police officer being shot multiple times by dissidents in February.
  • A month earlier, there was another breach involving the PSNI, in which a police laptop and documents were stolen from a car, compromising information about 200 officers and police staff.
  • In March, Cumbria police accidentally published the names and salaries of more than 2,000 officers and staff online. This was only disclosed to the public in August.
  • More recently, South Yorkshire police also lost significant data from their systems, including bodycam footage. The force put out a statement describing an “unexplained reduction in data stored on its systems,” and at the time of writing it was unclear whether this meant the data was simply lost, or whether it had also been stolen. The loss itself is concerning, given such footage is used as evidence in court cases. But if the data was stolen, it means video footage of vulnerable individuals has been compromised.

As a journalist, I usually find myself at the receiving end of delayed, inadequate and incomplete responses to FOI requests. We at the Citizens – a news non-profit that works in the space of democracy, data and disinformation – have pushed particularly hard for greater transparency in the governance of public bodies through the use of FOIs. These recent events involving UK police, by contrast, highlight another troublesome trend: leaks of citizens’ private information. It is quite the irony: information in the public interest held by public bodies is kept hidden from us, while the personal information of citizens is terribly mishandled.

While police forces claim these incidents are just errors, they are more than that. They show how personal information stored in police systems is neither properly anonymized nor shielded from casual access – far from a privacy-first approach. These incidents highlight the insufficiency of data protection across UK law enforcement agencies. That is particularly concerning given that UK police are now extending their use of various surveillance tools – including pervasive facial recognition – and are handling ever larger troves of sensitive biometric data.

A carelessness issue

Corporations can be fined as much as £17.5 million (or 4% of annual global turnover) if they infringe on data protection principles under the UK’s data protection legislation. But that is not the case for the UK police, explains Dr. Eleanor Drage, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. “PSNI are floating in their own safety bubble because they're law enforcement,” she says. “They're not going to have any repercussions basically, which means that they can't be held accountable. They have no incentive to improve.”

Fraser Sampson, the UK’s former biometrics commissioner, addressed the UK Parliament earlier this year, highlighting the “culture of retention” of biometric information by police forces in the country. Having visited and scrutinized various police forces, he said that UK police unlawfully retain biometric information, even that of individuals never convicted of a crime.

In fact, a recent investigation by openDemocracy, based on reports from the UK’s biometrics watchdog, revealed that more than half the forces inspected still retained custody images of people who were never charged. Several forces were found to take a blanket approach to retaining DNA samples, not to mention regularly running fingerprint checks against asylum databases even when cases had no link to immigration.

Many of the problems with data retention were traced to outdated computer systems. For a sector that continually advertises its adoption of cutting-edge biometric technology, the basic infrastructure needed to ensure data protection has been greatly overlooked.

Cameras, cameras everywhere

While the recent data breaches concerned names and other personal details, as well as video footage, civil society organizations are worried about the repercussions of the routine collection and use of biometric information by law enforcement. If police forces lack the infrastructure and skills to protect basic information about their staff and citizens, should they be entrusted with sensitive biometric data?

In the last three years, there has been a pervasive rollout of facial recognition technology by UK police, including increased use of live facial recognition in public spaces. Reporting on a protest during the King’s coronation in London earlier this year, I found myself shadowed by a large watch tower fitted with numerous facial recognition cameras right in the middle of Trafalgar Square. It was like nothing I had ever seen – straight out of a dystopian novel. But the King’s coronation was not the only event to see this kind of surveillance deployed. Use of such technology is routine on Britain’s streets.

But while this kind of technology is being adopted hastily, its effectiveness has repeatedly been called into question. According to Big Brother Watch, which has tracked the Met Police’s results with live facial recognition (LFR) since 2016, 84.7% of matches made were false – that is, roughly 6 out of 7 matches were wrong. Another independent study from 2019, commissioned by the Met itself, found LFR to be incorrect almost 64% of the time.

Additionally, in a case brought by the UK civil rights organization Liberty, the Court of Appeal found that South Wales Police’s use of live facial recognition was unlawful, infringing on people’s privacy and data protection rights. While that particular trial of LFR was halted, police forces in the country have continued to deploy and experiment with the technology without oversight or scrutiny.

Given the track record of the police in safeguarding information, and particularly their lax attitude towards biometric data, the creation of faceprints at unprecedented speed is worrying. While most of the faceprints created are required to be deleted after a search is run, report after report has pointed out that this does not happen. And there is a legal loophole that lets the practice continue.

Unlike the European Union, where the new Artificial Intelligence Act is set to ban the use of live facial recognition in public spaces, the UK gives free rein to law enforcement and allows the private companies that supply these technologies unending opportunities for commercial exploitation.

Dr. Drage says that these companies are also part of the problem. “There are really pseudo-scientific claims that are being made by companies that are working with the police, that do not line up with what these products can do at all,” she says. “There is no proper regulation of the claims that AI companies can make when they pitch their product [to law enforcement],” she adds.

The Wild West, on an island in the North Atlantic

As it stands, the UK has a serious data crisis on its hands. Dr. Drage calls it “the Wild West.” “We know that this is happening in industry. There's this kind of narrative around it being a really unregulated environment. But that shouldn't be happening within the police. The police should… always [put] safety first.”

This very week in the UK, we are seeing headlines like ‘Cops share dozens of photos of dead bodies and crime scenes’. These issues will not go away with the quick-fix, band-aid approach the police currently take – one that is more about crisis management than about preventing privacy breaches in the first place.

As I see it, the police data protection problem is threefold:

(1) lack of education, training and infrastructure within police forces to properly process data or handle it with care;

(2) an unregulated private industry supplying ill-suited technology to law enforcement; and

(3) an over-reliance (and unrealistic confidence) by police forces on technological fixes to human problems.

For the UK to get anywhere, all three aspects need to be addressed urgently. Otherwise, the nation’s citizens will be forced to live with the consequences of an Orwellian surveillance state, no matter how poorly administered.

Authors

Manasa Narayanan
Manasa Narayanan works for the news non-profit the Citizens, reporting on data, democracy and disinformation. She is also a researcher and contributor to the Real Facebook Oversight Board, and has written for outlets including VICE World News and Byline Times.