DHS AI Surveillance Arsenal Grows as Agency Defies Courts
Justin Hendrix / Feb 1, 2026
People participate in a protest in solidarity with Minneapolis and against US President Donald Trump and US Immigration and Customs Enforcement (ICE) in New York City on January 23, 2026. (Photo by Deccio Serrano/NurPhoto via AP)
Last week, a federal judge in Minnesota included a list of “96 court orders that [Immigration and Customs Enforcement] has violated in 74 cases” in an order concerning an individual who had been detained. “This list should give pause to anyone—no matter his or her political beliefs—who cares about the rule of law,” the judge wrote. “ICE has likely violated more court orders in January 2026 than some federal agencies have violated in their entire existence.”
Yet even as ICE is defying judicial authority and engaging in violence, it is also acquiring sophisticated surveillance tools powered by artificial intelligence technologies and rapidly deploying them in American cities. The latest Department of Homeland Security AI inventory, released on January 28, reveals more than 200 AI use cases that are deployed or in development at DHS and its component agencies—an almost 40% increase since the last disclosure in July 2025. ICE drives much of this growth, adding 24 new AI applications including tools to process tips, review social media and mobile device data, and deploy facial recognition to confirm identities. Among these additions are products from Palantir, the surveillance contractor whose technologies are powering ICE's targeting operations.
Evidence of alleged lawlessness by DHS component agencies is substantial, including on matters of digital rights and surveillance, even as the broader DHS comes under fire for spreading falsehoods about American citizens killed in Minnesota by ICE and Border Patrol agents. The ACLU, for instance, has filed suit documenting ICE and CBP's pattern of suspicionless stops, warrantless arrests, and racial profiling of Minnesotans, including with the use of facial recognition. The New York Times reported Friday on how tech companies are enabling these facial recognition applications, building for profit a surveillance infrastructure that can target anyone and that tests constitutional principles.
A growing AI arsenal
According to a FedScoop analysis, the newly disclosed inventory details several applications that have "raised concerns among experts." Five such examples Tech Policy Press reviewed include:
- ELITE, a Palantir tool that uses generative AI to help ICE officers extract information from records and warrants, is described in the inventory as a tool to “extract accurate addresses and build usable enforcement leads.” Per reporting from 404 Media, which obtained a copy of the user guide for the application, ELITE creates a map populated with potential deportation targets, providing dossiers on individuals complete with an address "confidence score." The tool pulls data from the Department of Health and Human Services and other government sources, allowing agents to identify neighborhoods for raids based on data density.
- Mobile Fortify is a facial recognition and fingerprint matching application, used by both CBP and ICE since May 2025, that compares biometric information against agency records. 404 Media documented a case where the app misidentified a woman twice during an immigration raid—despite ICE claiming the app's results are a "definitive" determination of someone's immigration status. Mobile Fortify has two entries in the inventory, indicating CBP operates the system and ICE accesses it.
- A set of tools for “AI-enhanced tip processing” relies on Palantir technology, using commercially available large language models to review, categorize, and even translate incoming tips—technology that helps ICE more efficiently process and act on reports, including from the public. Wired detailed what is known about this tool, including the contract that appears to describe it.
- A "Hurricane Score" is a predictive risk model that assesses the likelihood that non-citizens in Alternatives to Detention (ATD) programs will "fail to comply" with check-in requirements. Per the inventory, "Officers may then consider this score, along with many other factors, when determining whether current levels of case management or technology assignment remain appropriate or should be adjusted." This tool was developed under the Biden administration.
- A tool for "Open Source and Social Media Analysis" from the firm NexusXplore allows for fine-grained social media searches. "This tool utilizes AI modules for Text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries," according to the inventory. "The output is not solely used for action or decision making and are used to identify additional Open Source or Social Media of a person or identify additional selectors (such as phone and emails) that are previously unknown to CBP and compared by an analyst against Government systems to identify additional derogatory information." Last year, a former Biden administration official told NBC News that the Trump administration's use of social media surveillance tools was "different from what the previous administration intended."
At least 23 applications use some form of facial recognition, face matching, or related biometric identification. Some are mundane, such as a tool that determines whether "a user-uploaded ID photo is suitable for use on an EAD card" when "submitting an e-filed I-765 via myUSCIS to apply for employment authorization." But the suite of tools available to ICE includes applications that scour images drawn from the public internet for matches. DHS issued a $3.8 million contract to the facial recognition firm Clearview last year.
Of 238 use cases in the latest inventory, 55 are deemed "high-impact," 134 are classified as "not high-impact," and 49 are categorized as "presumed high-impact but determined not high-impact." ELITE falls into this last category—DHS says it's not high-impact because its outputs "do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effects on individuals." That means DHS determined a tool that helps ICE decide which neighborhoods to raid with military-style operations based on "AI-extracted addresses" doesn't have "significant effects" on rights or safety.
As Just Futures Law founding executive director Paromita Shah documented in an analysis of the 2024 inventory for Tech Policy Press — drawing on a substantial report her organization produced with Mijente — last year's inventory was "scattered, misleading, and incomplete," with missing procurement information making it impossible to identify which companies profit from the many programs. The latest installment is also missing material from a substantial number of fields; for instance, the risk management fields (justification, impact assessment, monitoring, etc.) appear to be more than 80% empty across all records.
In her analysis of the 2024 inventory, Shah found the department "failed to comply with key components of civil rights protections" and continued approving "rights-impacting" programs despite finding they could result in bias or error. She noted that DHS "leaned on its Office of Civil Rights and Civil Liberties" for compliance, but that office has been severely diminished under the Trump administration.
Automating authoritarianism
As Minneapolis resident and Kairos fellow Irna Landrum wrote in a recent perspective for Tech Policy Press, "How ICE Uses AI to Automate Authoritarianism," ICE and other DHS component agencies appear to be using cloud capabilities and AI to automate monitoring, expand repression, and centralize power. The agency has consolidated data from unprecedented sources—from license plate readers to utility hookups to social media posts—creating what amounts to a surveillance panopticon.
As Tech Policy Press fellow and author of The Walls Have Eyes Petra Molnar recently argued, this technological expansion represents "a win for Big Tech" and companies like Palantir. The firm—cofounded by Trump ally Peter Thiel—has spent years positioning itself to secure lucrative DHS contracts.
Together with willing vendors, DHS has built what Georgetown Law's Center on Privacy & Technology calls an "American Dragnet"—a surveillance infrastructure enabling the agency to pull detailed dossiers on nearly anyone, at any time. As the Georgetown researchers note in their 2025 update, "The struggle now is not to uncover the right information, but to rightly understand the meaning of the information we already have, and to face that meaning together."
What they found is that ICE surveillance deters people from accessing essential services—not just from political organizing, but from enrolling in healthcare, reporting crimes, or even connecting utilities to their homes, a finding that is playing out in real time in communities such as Minneapolis. When people need water, electricity, and heat to survive, but seeking those services means their information flows into ICE's databases via data brokers like Thomson Reuters, the surveillance state has made basic human needs a trap.
As 2025 Tech Policy Press fellow Dia Kayyali wrote in "AI Surveillance on the Rise in US, but Tactics of Repression Not New," the bipartisan post-9/11 expansion of surveillance—the point of origin for DHS, FBI Joint Terrorism Task Forces, fusion centers, the no-fly list, and social media monitoring—created the infrastructure Trump inherited. But Trump's "big, beautiful" budget bill, passed by a Republican-controlled Congress last summer, represents another massive step change, directing billions of dollars toward the expansion of state surveillance alongside a network of detention facilities some experts say resemble concentration camps.
What DHS will ultimately do with its new surveillance technologies and the ways they may challenge the US Constitution remains to be seen, as does the political reaction. After the recent violence across American cities, polling suggests "more Americans now say they would support abolishing ICE than say they would oppose the agency's elimination." The weeks ahead will likely see discussion of more substantial reforms to DHS, including curtailing its surveillance capabilities.
About the tracker
Tech Policy Press developed the tool below to help readers scrutinize the information provided in the latest 2025 inventory, released on January 28, and compare it to the revised 2024 inventory, released last July. DHS itself provides a tool to search this material, but it is not presently up to date with the latest inventory information.
The information is reproduced as provided in the DHS source files and has not been independently verified, including statements on safety and impact. The tracker does not include details from a separate inventory of more mundane “commercial” tools used for office tasks. It is released as a work in progress and may be updated or corrected. The underlying spreadsheets published by DHS are provided for comparison.
DHS says it “provides this inventory of unclassified and non-sensitive AI use cases within DHS in accordance with the Advancing American AI Act (December 2022), Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020), and Office of Management and Budget (OMB) Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (April 2025).” Earlier versions of the DHS AI Use Case Inventory (2022–2024) are available from the DHS AI Use Case Inventory Library.