Cities Can Lead the Conversation on Responsible AI Use by Governments

Christopher Jordan, Jody Oetzel / Apr 1, 2024

National governments have been scrambling to get a handle on artificial intelligence, spurred by the launch of ChatGPT in November 2022. While some nations like Canada and Germany crafted national AI strategies to ensure the responsible development of AI as early as 2017, many of the world's largest centers of AI development, notably the US, lag in creating meaningful policy and governance mechanisms. The recent passage of the EU's AI Act creates further pressure for national direction on responsible AI use.

While the Office of Management and Budget's recent memorandum requiring each federal agency to appoint a Chief AI Officer signals the US' commitment to strengthening AI governance, the specific mechanisms are less clear. As the US considers AI's applications, ethics, and governance, it need look no further than its cities, where minimal federal direction has not stifled meaningful and experimental approaches to tackling AI.

AI has long been part of cities' toolkit, from digital twins for urban planning to algorithms for service-oriented tasks like locating potholes, as well as more contentious uses such as predictive policing and spotting homeless encampments. However, the rise of large language models (LLMs), and a parallel rise in public awareness of and concern about governments' use of AI tools, is leading cities to examine their use of these technologies.

And with good reason: Cities have stories to share of rapid technology deployment gone wrong. Sidewalk Labs' failed smart city project in Toronto offered an infamous parable for urbanists. Despite boasting futuristic features like robo-taxis, heated sidewalks, and autonomous garbage collection, the project was abandoned in May 2020 amid widespread resident protests over data access and ownership. In Rotterdam, an AI-powered fraud detection system was found to discriminate based on ethnicity and gender. These and countless other examples caution cities to consider the human face of technology governance. And this week, a report by the nonprofit news sites The Markup and Documented found that a chatbot deployed by New York City to help answer questions about operating a business is instructing people to break the law.

As generative AI hype made waves in the media, local governments responded in varied ways, from soft-touch, wait-and-see approaches to immediate, hands-on regulation. In Boston, the Department of Innovation and Technology (DoIT) was quick to release guidelines for “responsible experimentation,” which provided city employees with examples of how to experiment with generative AI prompts while warning against bias, hallucinations, and sharing private data with AI systems.

In Seattle, interim guidelines paved the way for codified policy that aligns with existing equity and privacy standards while establishing oversight for continuously assessing the risk of generative AI models. More recently, smaller cities have taken up the issue: Lebanon, New Hampshire, and Los Altos Hills, California, passed regulations through their city councils in the last few months. After initially placing a moratorium on generative AI use, Chattanooga, Tennessee, has since adopted a “crawl, walk, run” approach, working with key stakeholders to develop tools like prompt libraries to help staff interact with AI responsibly.

While still in the early stages, cities are finding that by working together, they can outpace a laggard federal government and lead action on a complex policy issue. San Jose's GovAI Coalition, which brings together more than 100 local governments to address AI use and ethics, has created tools like a template policy manual and vendor agreement forms to guide cities of all sizes as they consider how to approach AI. Through such partnerships, larger cities can lend their capacity for AI audits and reviews to smaller cities, which may not have the time or resources to work as carefully with vendors, while simultaneously putting demand-side pressure on LLM developers to be more transparent and accountable.

Similarly, MetroLab is convening policy-specific groups of city practitioners and university academics to create use cases and guidance for AI in the public sector. The National League of Cities’ AI Advisory Committee is also working with local officials across the country to develop a playbook to help governments demystify and harness AI in government operations and public services.

In the US, local governments often act as the testing grounds for federal policy. However, on the issue of responsible AI use, cities can lead, particularly when their officials share information with one another. By vetting a repository of use cases, developing standards for vendors, and collaborating on training and education initiatives, cities and local governments can build an infrastructure to create responsible AI use policies that serve the interests of their constituents and the country.

Authors

Christopher Jordan
Christopher Jordan is a Senior Specialist in Urban Innovation at the National League of Cities, where he researches emerging technologies and public sector innovation, advocating for local governments to build sustainable and people-centered solutions to urban challenges. He has worked with governme...
Jody Oetzel
Jody Oetzel (she/her) is a Research Associate at the Notre Dame – IBM Tech Ethics Lab. At the Lab, she researches local government usage of emerging technology and explores how academic research can inform local governments’ ethical technology usage. Jody previously worked with the National League o...