Perspective

The High Stakes of Biometric Surveillance

Dia Kayyali / Jun 23, 2025

Dia Kayyali is a fellow at Tech Policy Press.

In 2025, identification through biometric data is nearly impossible to avoid. Once seen as futuristic and error-prone, biometric systems, including facial recognition, iris scans, voice recognition, and even gait analysis, are now widespread and increasingly accurate. But their expanding use poses a serious threat to political and human rights. From both a technological and a human rights perspective, the use of biometrics for identification and surveillance can infringe on privacy, given how invasive and non-consensual these systems are, and on the right to be free from discrimination, given their documented bias and their misuse for discriminatory purposes.

Amid a global shift toward the far right and authoritarianism, there are clear indications from oppressive governments around the world that biometrics will be used to harm human rights, regardless of their accuracy or fairness. At this point, biometrics are not simply an individual issue that can be avoided through personal decisions; they have become a political crisis that demands direct action and legislative intervention. Last month, reporting from Wired revealed that US Customs and Border Protection (CBP) is seeking to expand its existing, error-prone facial recognition system. This comes as the Transportation Security Administration rapidly expands its own use of face scans, while the Trump administration perfects the techno-surveillance state built by the Democratic and Republican presidents who came before. Meanwhile, airports across Europe are also using facial recognition to verify passengers’ identities, despite concerns from the European Data Protection Board.

These deployments of facial recognition, though, are just a glimpse of the problem. The term facial recognition is often used too narrowly; it is more accurate to talk about biometrics: “biological measurements — or physical characteristics — that can be used to identify individuals.” This includes faces, but also iris and voice patterns, and lesser-known traits such as gait. Biometric identification can be applied in a number of ways: in real time through surveillance cameras or retrospectively, and on a one-to-one basis to verify a single identity (for example, against a photo) or one-to-many to match a person against a database of many biometric profiles.
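To make that distinction concrete, the sketch below is a minimal, purely illustrative example in Python. The embedding vectors, the decision threshold, and the cosine-similarity comparison are all simplified stand-ins, not any real vendor's system: actual deployments use trained face-embedding models, calibrated thresholds, and far larger databases.

```python
# Illustrative sketch only: the templates below are made-up vectors standing in
# for biometric "templates" that a real system would extract with a trained model.
import math

def cosine_similarity(a, b):
    """Similarity between two biometric templates (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # hypothetical decision threshold

def verify_one_to_one(probe, claimed_template):
    """One-to-one: does this face match the single claimed identity (e.g., a document photo)?"""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD

def identify_one_to_many(probe, database):
    """One-to-many: compare against every enrolled template and return the best match, if any."""
    best_name, best_score = None, 0.0
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= THRESHOLD else (None, best_score)

# Example: a camera-derived template checked against one document vs. a whole watchlist.
probe = [0.12, 0.85, 0.51]              # template from a live camera image
passport_template = [0.11, 0.86, 0.50]  # template from the traveler's document photo
watchlist = {
    "person_a": [0.90, 0.10, 0.40],
    "person_b": [0.12, 0.84, 0.52],
}

print(verify_one_to_one(probe, passport_template))  # one-to-one verification
print(identify_one_to_many(probe, watchlist))       # one-to-many identification
```

The one-to-many case is the one that drives many of the surveillance concerns discussed below: every person captured by a camera can be compared against an entire database or watchlist, not just a single photo they chose to present.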

Some experimental uses go even further, such as sentiment analysis deployed to detect “nervous” or “angry” people in customer service and workplace surveillance. These applications have been criticized for invading privacy, producing false results, and reinforcing existing biases against people of color and other groups. Moreover, thoroughly discredited biometric uses, such as a 2018 claim that a “machine learning algorithm [could] identify faces as gay or straight,” could be weaponized by transphobic or homophobic state actors to further discrimination and oppressive surveillance.

Biases in biometric identification have also been well documented. For example, US police agencies often rely on biased facial recognition technology, trained on datasets that disproportionately feature white men, leading to misidentification and false arrests of Black individuals. Research has shown that this contributes to greater racial disparity in arrests. Although technical fixes, such as larger and more representative datasets, can improve performance, they are likely to be insufficient. Other groups, such as transgender people, may still be poorly represented in training data, leading to discriminatory outcomes. More importantly, the more accurate these tools become, the greater their potential for abuse by governments, including authoritarian regimes.

That said, facial recognition is not always accurate. In Wired’s reporting on CBP’s use of biometrics at the border, Dave Maass, director of investigations at the Electronic Frontier Foundation, reviewed public records from CBP’s recent facial recognition testing and found that the agency itself acknowledged that cameras aimed at cars did a poor job of capturing usable images of everyone in the vehicle.

Still, the accuracy and performance of biometric systems have improved. As Hany Farid, a UC Berkeley professor and technologist who has been called the “father of digital image forensics,” noted in an interview, “from a technical perspective, face recognition is very good.” He explained that while the “underlying tech has changed, really what has changed is data. We now have access to billions and billions of images of people because we have all been busy uploading images to the internet.” As Farid further pointed out, even if you yourself don’t have a social media account, your friends do.

At a minimum, our faces and fingerprints are known, meaning that for even the most privacy-conscious person, total avoidance is virtually impossible. Technical advances and limitations aside, the real danger lies in how these technologies are used.

In addition to the concerns about bias noted above, other human rights threats posed by biometric applications are not hypothetical. In China, biometric systems have been used to target the Uyghur population, including an effort by Huawei to test “facial recognition software that could send automated ‘Uyghur alarms’ to government authorities when its camera systems identify members of the oppressed minority group.” Biometric tools, including facial recognition, sentiment and ethnicity analysis, and more, have served as pretexts for China to place Uyghurs in detention centers “for reasons as simple as practicing their religion, having international contacts or communications, or attending a Western university.”

Authoritarian uses are not limited to China. Russia has used biometric identification in combination with city surveillance systems to track down draft evaders and protesters against the war in Ukraine. Biometric-fueled surveillance and identification are proving to be a cornerstone of increasingly authoritarian “Western” governments. Hungary recently announced that facial recognition would be used to support the enforcement of its ban on LGBTQI+ pride activities. Agencies in the Trump administration are touting their use of AI, including biometric data, to target political activists and immigrants.

There have been efforts to regulate biometrics, especially facial recognition, at various levels of government. Some governments, such as New Zealand’s, have adopted guidance specifically focused on biometrics. The EU’s General Data Protection Regulation broadly covers personally identifiable information, while the newer EU AI Act regulates, and in some cases bans, applications of biometric data. Earlier this year, the European Commission published two sets of guidelines under the EU AI Act, including one on prohibited uses of AI. The guidelines list eight prohibited uses, including “Untargeted scraping to develop facial recognition databases; Emotion recognition in workplace and education institutions; Biometric categorisation; and Real-time remote biometric identification (RBI).” However, there are troubling exceptions. For example, emotion recognition used “by a supermarket … to conclude that somebody is about to commit a robbery, is not prohibited under Article 5(1)(f) AI Act.”

In contrast, United States federal privacy laws are woefully outdated. States, and even some counties and cities, have stepped in with privacy laws, transparency laws governing the acquisition of certain surveillance technologies, and, in some places, outright bans on facial recognition. Pursuing these types of policies at the local level is one way communities in the US can resist the Trump administration. While such legislation would not apply to every instance of biometric identification (for example, the US Army’s biometric technology potentially being used by National Guard troops deployed to quell protests in LA), it can still have a meaningful impact, such as limiting the transfer of local data to federal agencies. Local policies can also address law enforcement's access to commercially available tools.

Such efforts matter because biometric harms are concrete, growing, and disproportionately affect already marginalized communities. Individuals alone cannot stop the biometric surveillance machine, but they can do less to enable it. Since the start of the year, many have begun reconsidering their relationship to large social media platforms. While much of our data is already out there, it is worth thinking more carefully about what we share and encouraging others to do the same. This can include small acts of resistance, such as refusing to consent to TSA facial scans and pushing for policies that restrict or ban the collection, use, or sharing of biometric information at the local level.

While human rights advocates and others face an uphill battle to enact more holistic policy interventions, such collective resistance is vital in resisting surveillance overreach and building support for more meaningful political action.

Authors

Dia Kayyali
Dia Kayyali (they/them) is a member of the Core Committee of the Christchurch Call Advisory Network, a technology and human rights consultant, and a community organizer. As a leader in the content moderation and platform accountability space, Dia’s work has focused on the real-life impact of policy ...
