New York’s AI Policy Falls Short On Surveillance

Patrick Lin / Mar 28, 2024

New York Governor Kathy Hochul.

In January, Gov. Kathy Hochul and the New York State Office of Information Technology Services (ITS) announced a statewide AI policy, titled “Acceptable Use of Artificial Intelligence Technologies.” The policy took effect immediately upon publication. According to Gov. Hochul’s press release, the policy “establishes the principles and parameters by which state agencies can evaluate and adopt AI systems to better serve New Yorkers” and will “ensure agencies remain vigilant about evaluating any risks of using AI systems and protecting against unwanted outcomes.”

The NY ITS policy applies to state entities and agencies, including local governments, employees, and even contractors or other third parties that may access or manage AI systems on behalf of a state entity. It covers new and existing systems that deploy AI technologies (including machine learning, large language models, natural language processing, and generative AI) and that could “directly impact the public.” Examples provided in the policy include assessments or decisions about individuals in law enforcement, hiring and employment, or healthcare contexts, among others.

Unfortunately, the scope of the policy is limited. Gov. Hochul’s plan does not cover “authorities, boards, and other New York State governmental organizations” as defined by the New York State Technology Law. Significantly, the Metropolitan Transportation Authority (MTA), which has aggressively implemented AI to surveil subway riders, falls outside the scope of the NY ITS policy. In addition to using AI scanning software to track fare evasion, the MTA is still moving forward with its plan to install surveillance cameras in all 6,455 of New York City’s subway cars by 2025. While ITS “strongly encourages” out-of-scope organizations to adopt the policy, there is no requirement to do so.

Gov. Hochul’s policy shares some similarities, including shortcomings, with the federal guidelines set forth in President Joe Biden’s AI executive order issued late last year. For example, while Biden’s order requires covered companies to have their AI models undergo an independent audit, it fails to set forth audit standards or safety requirements. It also remains unclear what would happen if a company’s audit report showed its AI models were high risk or unsafe.

Similarly, under Gov. Hochul’s AI policy, covered agencies must conduct a risk assessment for any new or existing AI system to evaluate “all security, privacy, legal, reputational, and competency risks.” These assessments must follow the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF).

It is encouraging to see New York leverage the AI RMF, particularly because the framework explicitly recognizes the sociotechnical nature of AI systems and treats context, such as the sector in which an AI system is deployed and the purpose it serves, as a key factor in determining and managing AI risk. The AI RMF also emphasizes the importance of documentation, which is crucial to the NY ITS policy’s other requirements, such as creating and maintaining an inventory of AI systems and developing privacy policies and controls for those systems.

However, like the AI RMF, the NY ITS policy lacks clear guidance on surveillance applications. More specifically, the policy calls on state entities to assess, manage, and document AI risks, but it neither prescribes a risk tolerance nor outlines when it may be more appropriate to stop the development or deployment of an AI system rather than attempt a risk mitigation strategy. The AI RMF acknowledges that development and deployment of AI systems should stop where a system “presents unacceptable negative risk levels.” Still, the NY ITS policy is unclear about when an AI system’s risk levels become “unacceptable” or what actions to take once such a determination is made.

The NY ITS policy does make strides in other areas. For example, it requires state agencies to appoint supervisors to oversee AI systems and to ensure that decisions impacting the public are not made without human oversight and final human approval. It also recommends developing privacy policies and controls when state entities process personally identifiable, confidential, or sensitive information, though it does not require the adoption of standard data protection practices, such as data minimization or limited data retention. Finally, the policy outlines unacceptable uses of AI, such as using generative tools to deceive the public, creating content without determining whether the input data violates intellectual property protections, and deploying AI chatbots without disclosing that they are not human.

All in all, the NY ITS policy is a step in the right direction, particularly by requiring risk assessments as well as human oversight when an AI system is involved in decision making. Unfortunately, the NY ITS policy falls short in addressing AI’s surveillance applications, particularly by not establishing clear standards or defining risk tolerance for AI systems. Gov. Hochul’s approach, like many other AI policies, remains fixated on technical solutions without confronting an essential question: should an AI system be used in the first place?

Authors

Patrick Lin
Patrick K. Lin is an attorney and researcher focused on AI, privacy, and technology regulation. He is the author of Machine See, Machine Do, a book that explores the ways public institutions use technology to surveil, police, and make decisions about the public, as well as the historical biases that...
