Harnessing AI: Use It To Stabilize, Not Stigmatize Homelessness
Claudine Sipili / Jun 11, 2024

As major cities across the United States grapple with rising homelessness rates, some civic leaders are turning to artificial intelligence to detect encampments and aid in providing services.
But many dedicated professionals and advocates working to end homelessness fear that these AI detection practices could enable the criminalization of unhoused people: that AI may be used for surveillance and heavy-handed enforcement rather than for connecting people to housing and supportive services.
Having endured the dehumanizing experience of homelessness myself, I intimately understand the constant fear and instability that comes with not having a safe place to call home. Now, as the Director of Lived Experience and Innovation at Destination: Home in San Jose, California, I am committed to advocating for unhoused individuals and ensuring they are treated with the dignity and compassion they deserve.
Valid concerns arise over the grave risks of AI being wielded as a weapon of surveillance and criminalization against the poor. A 2022 report by the UN Special Rapporteur on extreme poverty and human rights warned that the increasing use of AI and other technologies in welfare systems could lead to "human rights violations on a massive scale." The report cautioned that without proper safeguards, these tools could be used to profile, police, and punish the poor.
In 2023, a state audit found that New York City agencies were using AI tools without standard rules for responsible use, transparency, or accountability. The audit revealed incomplete public reporting of AI systems, a lack of formal policies for tracking the impacts of these tools, and insufficient oversight of AI used by city contractors.
Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, agrees that the city has yet to address the threats posed by AI tools. These findings underscore the urgent need for robust governance frameworks and regulations to prevent the misuse of AI in ways that discriminate against marginalized communities and erode public trust.
Still, some civic leaders are already exploring innovative ways to leverage AI in addressing homelessness.
In Austin, Texas, a startup called Nomadik AI developed an app that allows users to document and map the locations of encampments across the city. The app not only provides a clearer picture of the extent of homelessness but also aims to connect people experiencing homelessness with vital resources. Users, including unhoused individuals themselves, can request assistance, such as shelter. For clients without a personal device, Nomadik's outreach team uses the app on their behalf.
Additionally, Austin has programs in place to provide free devices to clients, resulting in a relatively high rate of device ownership among the homeless population. In cases where individuals still lack access to a smartphone or tablet, they can visit public libraries and use the computers there to access the Nomadik AI platform and request assistance.
The app also helps nonprofit leaders locate and assist those in need. Nomadik AI's co-founder, Trevor Sorrells, emphasizes the company's commitment to privacy: "Our system is set up to be as privacy-focused as possible. All of the data is transformed into a format only the models understand and only delivers specific data to end users who need it, such as outreach workers and Continuums of Care."
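One way to picture what "delivering specific data only to end users who need it" can mean in practice is role-scoped filtering, where every field of a report is checked against the viewer's role before it is released. The sketch below is an illustration of that general pattern, not Nomadik AI's actual implementation; the field names, roles, and schema are all hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of role-scoped data delivery, loosely inspired by
# the privacy approach Sorrells describes. These field names and roles
# are illustrative assumptions, not Nomadik AI's actual schema.

@dataclass
class EncampmentReport:
    report_id: str
    location: tuple[float, float]   # latitude, longitude
    needs: list[str]                # e.g., ["shelter", "medical"]
    reporter_notes: str             # free text; the most privacy-sensitive field

# Each role sees only the fields it needs in order to act on a report.
ROLE_FIELDS = {
    "outreach_worker": {"report_id", "location", "needs"},
    "continuum_of_care": {"report_id", "needs"},
    "public": set(),  # raw reports are never exposed publicly
}

def scoped_view(report: EncampmentReport, role: str) -> dict:
    """Return only the fields the given role is authorized to see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in vars(report).items() if k in allowed}
```

Under this design, `scoped_view(report, "continuum_of_care")` would return only the report ID and the list of needs, so sensitive details like free-text notes never reach users who do not require them.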
In Los Angeles County and Chicago, officials are piloting promising approaches using predictive AI to identify individuals and families at risk of homelessness and proactively connect them with assistance. These programs analyze data like emergency room visits, jail stays, and use of food assistance to determine who is most likely to become homeless.
Social workers then reach out to offer critical financial aid, case management, and housing support to help people remain stably housed. With 86% of participants in the Los Angeles County program maintaining permanent housing, these data-driven early interventions demonstrate AI's potential to prevent homelessness when implemented with care and compassion.
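To make the mechanics of such risk scoring concrete, here is a minimal sketch in Python. It is emphatically not the counties' actual model: the features, synthetic data, and choice of logistic regression are all illustrative assumptions, and any real deployment would require bias audits, privacy safeguards, and far richer administrative records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: a toy risk model trained on synthetic data,
# sketching the general shape of predictive prioritization.
rng = np.random.default_rng(0)

# Assumed features per person: ER visits, jail days, and months on
# food assistance over the past year (all counts are synthetic).
X = rng.poisson(lam=[2, 5, 3], size=(500, 3))

# Synthetic label: becomes homeless within 12 months (demo only).
risk = 0.15 * X[:, 0] + 0.05 * X[:, 1] + 0.1 * X[:, 2]
y = (risk + rng.normal(0, 0.5, 500) > 1.2).astype(int)

model = LogisticRegression().fit(X, y)

# Rank a (synthetic) caseload so outreach workers can prioritize
# the highest-risk individuals for voluntary assistance.
caseload = rng.poisson(lam=[2, 5, 3], size=(10, 3))
scores = model.predict_proba(caseload)[:, 1]
for i in np.argsort(scores)[::-1][:3]:
    print(f"person {i}: risk score {scores[i]:.2f}")
```

The crucial design choice in the real programs is what the score triggers: not enforcement, but an offer of voluntary financial aid and case management from a social worker.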
However, without clear guidelines, transparency, and community oversight, even well-intentioned efforts can produce unintended consequences and further stigmatization. Taken together, these programs point to a crucial shift that must be managed carefully: from reacting to homelessness to preventing it.
Ultimately, the long-term solution to homelessness lies not in high-tech fixes but in bold policies that address its root causes. Civic leaders must thoughtfully embed AI as one component of a holistic strategy that invests in building deeply affordable homes, expanding access to health care and substance abuse treatment, ensuring living wages and strong social safety nets, and dismantling the structural inequities that perpetuate cycles of poverty and housing insecurity.
With the vital input of people with lived experience of homelessness, urban leaders have an opportunity to harness AI's vast capabilities to identify vacant properties for conversion to affordable housing, automate and expand rental assistance programs, coordinate entry into a comprehensive continuum of care, and equitably allocate limited resources for maximum community impact.
Clear guidelines on the ethical use of data, transparency about how information will be used, and robust community oversight are key to maintaining public trust. It is essential to ensure technology is used to affirm human dignity and improve well-being, not to stigmatize and punish the most vulnerable community members.
In a nation as prosperous as the US, homelessness is not an inevitability, but a policy choice. By encoding equity and human rights into AI systems, leaders and advocates can create a digital path to a brighter future in service of housing justice.