Where AI Meets Racism at the Border
Tsion Gurmu, Hinako Sugiyama, Sobechukwu Uwajeh / Sep 30, 2025
Department of Homeland Security (DHS) Secretary Kristi Noem visits the Customs and Border Protection Unmanned Technology Operations Center for an Industry Day demonstration including DHS component heads and the CEOs of 22 vendors of counter-unmanned aircraft systems, in Summit Point, WV, on July 24, 2025. (DHS photo by Mikaela McGee)
Following the passage of President Donald Trump’s “Big, Beautiful Bill,” the United States is expected to spend billions more on technology to surveil its borders, track immigrants, and execute its mass detention and deportation program. Part of that money will go toward acquiring and deploying new AI systems, including surveillance towers that use facial recognition, social media monitoring, and database analytics. However, the US has already committed to international legal obligations that demand a closer look before these biased AI systems are deployed for a “smart border.”
In response to a meeting with the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, the Black Alliance for Just Immigration (BAJI) and the Immigrant Rights Clinic and International Justice Clinic at UC Irvine (UCI) School of Law recently submitted a report breaking down how AI affects Black migrants and migrants of color and offering recommendations for change drawn from international human rights law.
Because AI can implicate a wide range of fundamental human rights, many bodies of human rights law apply directly to how states may use AI on people. The most relevant of them, in the context of racism, is the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD). ICERD sets out a negative duty for states “to not engage in and prevent acts of racial discrimination” and, importantly, a positive duty to mitigate structural racism, specifically to “amend policies, laws, and regulations that perpetuate existing racial discrimination.” It also requires states to ensure equal treatment before the law, the right to an effective remedy, and private sector compliance. The US ratified ICERD in 1994, binding itself to these obligations under international law.
BAJI and the UCI Clinics detail how US policy on the use of AI in border enforcement violates these ICERD duties at every stage of the immigration process.
Many migrants are already affected by the US use of AI along the migration route, even before they arrive at the border. Customs and Border Protection (CBP) uses autonomous surveillance towers and small unmanned aerial systems, in place of monitoring by Border Patrol agents, to identify human movements and other “objects of interest.” The use of these devices can prove detrimental to migrants. First, they mark these individuals as lawbreakers rather than people seeking safety and security. Second, migrants often take more dangerous routes to the border to avoid detection, leading to a greater number of deaths that disproportionately affect Black migrants.
At ports of entry, AI systems make formal routes to entry even more difficult for Black migrants. For example, the previously used CBP One app required a selfie in a migrant’s application, verified as a “live person” through CBP’s Traveler Verification Service. However, the technology often failed to recognize migrants with darker skin tones. According to Priya Morley’s AI at the Border: Racialized Impacts and Implications, CBP One misidentified Black faces at a rate 10 to 100 times higher than white faces. Additionally, CBP One was often not available in the languages or dialects of many Black migrant populations, creating additional barriers to entry for these groups. The app is no longer in use, but the current administration could reinstate it.
Even if migrants get past this stage, they can still be harmed by the Automated Targeting System (ATS). Under ATS, the US draws on data from multiple domestic and international databases to determine which individuals are likely to overstay. Though risk assessments are commonplace in immigration systems, ATS perpetuates existing bias about who is likely to overstay in the US and amplifies harmful data points. For example, since Nigeria was added to a list of countries facing heightened travel restrictions in 2020, the AI systems now flag Nigerian and similarly situated applicants as higher risk.
Tools like ATS generally fly under the radar because their proponents describe them as preventative rather than punitive. However, their use directly contradicts the US commitment under ICERD to uphold both the negative obligation to avoid engaging in racial discrimination and the positive obligation to mitigate structural racism.
Next, when migrants make it into the US, they face further discrimination from ICE, both through interior enforcement and if they are detained. ICE uses predictive algorithms such as a “Hurricane Score” to determine who merits heightened surveillance. There is little transparency about the factors that affect one’s Hurricane Score: because the algorithm is provided by a private company, B.I. Incorporated, which has strong ties to the prison industry, the government has not had to disclose them. This opacity raises grave concerns about compliance with ICERD and with domestic regulations meant to ensure that discrimination does not persist and that migrants receive equal treatment under the law. It also leaves those labeled “high risk” without access to an effective remedy to challenge that determination.
ICE also uses the Repository for Analytics in a Virtualized Environment (RAVEn) platform to analyze trends and patterns across a range of data sources to further assess the risks migrants may pose in the US. These sources include domestic data from executive agencies and law enforcement (which is fraught with disproportionate bias) and international data from offices in more than 56 countries. RAVEn’s assessments can have a large impact on migrants’ lives, yet there is no opportunity to consent or to opt out of providing information to the system. And, as with the Hurricane Score, there is a lack of transparency.
Lastly, in the immigration relief system, US Citizenship and Immigration Services (USCIS) may use AI to sort evidence and detect fraud in applications. USCIS uses a tool called Asylum Text Analytics (ATA), a system responsible for identifying fraud by reading asylum application text. ATA can prejudice non-English-speaking applicants, especially speakers of less widely used languages who rely on the same translation providers, because it may weed out applicants with legitimate claims whose applications contain phrases or narratives similar to those in other applications.
Rather than simplifying its application process, USCIS also uses an AI-powered Evidence Classifier to “review” millions of pages of evidence, from birth certificates to medical records and photos, on behalf of USCIS adjudicators. These AI reviews can harm migrants with atypical documentation, often exacerbating racial discrimination.
The solution to decolonizing artificial intelligence is to ensure US immigration policy embodies a collectivist rather than an individualistic view. Recalling that the 2001 Durban Declaration and Programme of Action, adopted by the UN General Assembly in 2002, identified colonialism as a root cause of racism and racial discrimination, BAJI and the UCI Clinics call for applying the decolonial praxis of Cosmo uBuntu to AI, which involves voluntarily embracing the African concept of Ubuntu (personhood) as “a foundational value system in our participation in planetary conviviality, without forcing universality.”
In contrast to Western-centric, individualistic views of humanity, African cosmology embraces the humanity of all humans. Decolonizing AI, which is part of the obligation under ICERD, requires the African diaspora to play a significant role in conceptualizing, inventing, innovating, and operating AI. The AI systems currently used by DHS, by contrast, fail to incorporate decolonial perspectives, perpetuating and exacerbating racial biases rooted in colonialism, extraction, suffering, and death.
BAJI and the UCI School of Law Clinics also make clear recommendations in the report to the Department of Homeland Security, the White House, Congress, and state and local governments. These recommendations include:
- Ensure that individuals who may be negatively impacted by the use of AI are promptly notified about such decisions, and provide individuals an option to opt out of AI systems where appropriate;
- Enact federal laws governing DHS’s use of AI that:
  - Prohibit and prevent any AI use that would result in racially discriminatory results or exacerbate structural racial discrimination; and
  - Mandate (i) effective discrimination-prevention measures, (ii) independent oversight of implementation, (iii) robust public disclosures, (iv) stakeholder consultation with diverse populations, and (v) access to effective remedies for those who are negatively impacted by DHS’s use of AI;
- Adopt and revise city policies to include an explicit pledge not to share information with DHS if it is expected to be used for AI development or deployment by DHS or its vendors.
Embedded in each of these calls is one that resounds: we call for an immediate end to DHS’s use of AI systems until the government can ensure the systems it deploys are free of discrimination, and until diverse perspectives are meaningfully included in the development and use of AI.