Perspective

Five Findings from an Analysis of the US Department of Homeland Security’s AI Inventory

Paromita Shah / Apr 21, 2025

Paromita Shah is the co-founder and executive director of Just Futures Law.

WASHINGTON, DC - JUNE 28, 2020: A DHS Officer in formation outside the White House. Shutterstock

Starting in early 2024, Just Futures Law and Mijente researched the United States Department of Homeland Security’s (DHS) use of artificial intelligence (AI) amidst growing concerns—even before the start of the second Trump administration—about the lack of transparency and public information available on the inventory of AI tools DHS maintains. Our initial findings, presented in the 2024 report “Automating Deportation,” exposed details of the DHS AI armory—most of which had never been seen—and how Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) are using it to surveil the millions of migrants entering and residing in the United States.

In the course of our research, we also discovered that DHS was already violating existing policies and laws related to transparency, oversight, and its obligations to monitor its products for AI harm. Our team met with DHS to share our findings and organized letters demanding that DHS shutter these AI programs to mitigate further harm. The pressure exerted by national civil rights groups led to the termination of some AI programs and to DHS's review and assessment of its AI inventory, including direct responses to our inquiries and public pages that named AI tools and uses that had never previously been identified.

In the last days of the Biden Administration, DHS released its most complete inventory, revealing new AI uses that it had kept hidden. This was the only requirement DHS managed to meet out of a long list set by the Biden administration's Executive Orders on AI for federal agencies, many of which were fast-tracking AI without considering whether it would hurt the public or violate civil rights protections. Just Futures Law went through the most recent DHS inventory to share these insights with the public.

Top Five Findings

1: DHS disclosed almost 200 AI uses in its AI inventory, a figure unknown to the public before December 2024.

DHS shut down a handful of key AI programs by the end of 2024, including Asylum Text Analytics (ATA) and the I-539 approval prediction tool, two programs identified in our report. ATA attempted to assess fraud, similar to machine learning programs used by USCIS to flag applications "when stories don't align." Such a system lends itself to erroneous denials, bias, and discrimination against limited English proficiency speakers, who make up a large majority of asylum applicants. Unfortunately, DHS kept the detention scoring program that produces the "Hurricane Score" and the program that hosts it, the Risk Classification Assessment (RCA). ICE uses these seemingly AI-powered tools to decide whether to release a person from detention and to determine the terms of their electronic surveillance. According to the limited information released in the inventory, DHS will not notify the attorney or the impacted individual of the AI output (see DHS AI inventory, DHS-2408).

2: CBP has the most AI uses, and almost all of them are at the border. 

Some AI uses are deployed on mobile devices. Many AI tools deployed by CBP involve facial recognition, scanning, drones, and data analysis. This means CBP can collect and analyze personal data and images, including face scans, without individuals' permission or knowledge. For example, the CBP One app, now called CBP Home, utilizes facial recognition and data tracking. Additionally, many of these programs involve social media monitoring and tracking.

3: Contrary to requirements in the Biden AI Executive Order, DHS continued to extend and approve many powerful AI programs despite finding that they were “rights-impacting” and could potentially result in bias or error. 

DHS often elected to self-certify its own compliance rather than use a third party. DHS did not, and will not, notify people, including attorneys, of whether AI has been used, nor does it provide meaningful mechanisms to challenge its use. DHS asked for extensions on several surveillance AI tools, such as Babel, a social media monitoring AI program, and Fivecast ONYX. Many AI components of RAVEn, a massive data analytics platform used to track and analyze vast amounts of data, have also been allowed to continue. DHS leaned on its Office for Civil Rights and Civil Liberties (CRCL) for compliance, although that office lacks the enforcement powers needed to ensure monitoring. All in all, DHS failed to comply with key components of civil rights protections.

4: The DHS AI inventory was scattered, misleading, and incomplete.

DHS failed to fill in all the fields of the inventory spreadsheet, leaving out material information about how the AI is used, whether it was terminated, the companies that make the AI, and how the AI would be monitored. In short, key details of the AI uses were omitted. Procurement contract numbers were not shared consistently across the inventory, making it impossible to identify which companies were behind some of the programs or the DHS platforms that hosted them. Some procurement numbers were written in error, failing to match any procurement order. Moreover, some AI uses were listed as "retired," giving the impression that they were terminated; further review showed that some of these uses were in fact terminated, while others were simply removed from the inventory because DHS decided they were not AI.

5: Very few AI products were created in-house. DHS purchases most AI from companies and contractors, including multiple social media monitoring and facial recognition programs.

These purchased products are hosted on large systems used by DHS for visa vetting, border enforcement, immigration processing, and threat modeling. We learned that programs like Babel (AI inventory, DHS-185) and unnamed programs under the Investigative Prioritization Aggregator (AI inventory, DHS-125) are involved in social media monitoring. Another troubling social media program, GOST (made by Giant Oak), was "retired" in 2022. However, we have reason to believe that many, if not all, of these programs are still in use.

So, What’s Next?

Under the Trump Administration, AI use within DHS is swiftly escalating to dangerous and unprecedented levels for social media surveillance, visa revocation, and deportation operations. Since Trump took power, DHS has gutted its civil rights office, invoked antiquated national security laws for political targeting and mass deportation, and revoked the only executive order addressing civil rights protections in AI use, leaving these powerful tools in the hands of DHS agents who disappear people and access sensitive information in violation of privacy safeguards. AI requires data, and DHS sits on a vast repository of it. It will be imperative to monitor and uncover what data is being accessed by AI systems so that communities can discover ways to challenge its use.

