Disclosures of NYPD Surveillance Technologies Raise More Questions Than Answers

Justin Hendrix, Joël Carter / Feb 23, 2021

This piece was written in collaboration with the Propaganda Research Team at the Center for Media Engagement at the University of Texas at Austin.

Surveillance technologies are reshaping society. Technical advances continue, and investment in tools ranging from facial recognition and social media analysis to cell-site simulators and drones increases each year in cities across the United States. “Because they pose an unprecedented risk to our civil liberties and privacy, we cannot passively accept them, as we have done too often with previous technologies,” concludes Jon Fasman in his book, We See It All: Liberty and Justice in an Age of Perpetual Surveillance.

New disclosures of surveillance technologies employed by the New York City Police Department (NYPD) make clear why the public must demand greater specificity, input and oversight on surveillance systems, and, in some instances, seek to ban technologies altogether. The disclosures, the first required by the POST Act, raise substantial questions about how police in New York acquire and maintain data across dozens of surveillance systems and about how NYPD thinks about safety and possible harms to society, and they reveal new details about the suite of technologies enabling covert police activities on social media networks.

I. Background: The POST Act Disclosures

In June 2020, at the height of the uprising against police violence following the murder of George Floyd, the New York City Council passed police reform measures. In an amendment to the administrative code, the City Council introduced a measure requiring “comprehensive reporting and oversight of New York city police department surveillance technologies.” Dubbed the Public Oversight of Surveillance Technology (POST) Act, the new law requires the NYPD to publicly disclose its entire complement of surveillance technologies. The required disclosures must be accompanied by “impact and use policies for all existing surveillance technologies describing how the technology will be used, the limitations in place to protect against abuse, and the oversight mechanisms governing use of the technology,” according to the Brennan Center for Justice at NYU Law School.

The product of years of effort by activists, academics and civil society groups to raise awareness of police surveillance in New York, the passage of the POST Act represented a victory over NYPD, which opposed it vehemently. NYPD Deputy Commissioner of Intelligence and Counterterrorism John Miller called the disclosure requirements “insane” and “an effective blueprint for those seeking to do us harm,” according to The Intercept. Despite NYPD opposition, the act commanded a veto-proof majority; the Mayor also broke from NYPD and announced his support of the bill before its passage by the Council.

NYPD made its first disclosures earlier this year, posting links to PDFs containing information on 36 technologies, listed alphabetically from “Audio-Only Recording Devices, Covert” to “WiFi Geolocation Devices.”

According to NYPD, the disclosures “provide details of: 1) the capabilities of the Department’s surveillance technologies, 2) the rules regulating the use of the technologies, 3) protections against unauthorized access of the technologies or related data, 4) surveillance technologies data retention policies, 5) public access to surveillance technologies data, 6) external entity access to surveillance technologies data, 7) Department trainings in the use of surveillance technologies, 8) internal audit and oversight mechanisms of surveillance technologies, 9) health and safety reporting on the surveillance technologies, and 10) potential disparate impacts of the impact and use policies for surveillance technologies.” The public is invited to comment, via email, on each technology by February 25th, 2021. After a period of review, “final impact and use policies will be published by April 11th, 2021.”

While there was widespread coverage of the passage of the POST Act, there has so far been relatively little reporting on the disclosures, even as concerns and lawsuits over NYPD surveillance mount. Gizmodo notes in a report that “a full and transparent accounting of the force’s spying power has largely been absent until now.” Despite the appearance of transparency, our review raises a multitude of questions, not only about the technologies themselves but also about the composition and contents of the disclosures and the policies that dictate their use. Our analysis suggests more specificity is required to understand exactly what capabilities NYPD employs, the scale at which each system is implemented, the ways in which these technologies link to one another and draw on common data sets, and how NYPD, judicial and public oversight work in practice.

II. Inspecting the Disclosures

While the POST Act establishes a general framework for disclosure, NYPD has latitude in how it drafts the briefs and how it makes them available to the public. For the purposes of this report, we consider one technology widely discussed in the news media, facial recognition, and two technologies disclosed separately that provide a useful way to examine how these various technologies fit together: social media network analysis tools and internet attribution management infrastructure.

In general, the disclosures follow the outline provided in the law, with headings for “Capabilities of the technology,” “Rules, processes and guidelines relating to the use of the technology,” “Policies relating to retention, access, & use of the data,” etc. Most are between four and eight pages and consist entirely of text; there are no technical diagrams or charts.

Curiously, across all 36 technologies, the remarks in the “Health & Safety Reporting” category reveal a possible discrepancy between the intent of the law and the way NYPD has interpreted it. The law requires disclosure of “any tests or reports regarding the health and safety effects of the surveillance technology,” but in nearly all of the disclosures this section is dismissed with boilerplate language. NYPD notes that drones may “interfere with other lawful aircraft communication systems,” assures the reader that its Manned Aircraft Systems meet “FAA safety standards,” and states that radiation exposure from “NYPD mobile x-ray technology is considered trivial.” For the 33 other technologies, there are “no known health and safety issues,” suggesting that the department is using a narrow concept of health and safety and may not have conducted any tests or produced any reports before implementation. Indeed, researchers are now studying the physical and psychological health impacts of policing, including surveillance; see, for example, research published by the American Public Health Association.

Facial recognition

Perhaps because NYPD anticipated it would draw the most scrutiny, the disclosure for facial recognition is the longest of the set. But on a close reading, significant questions arise. On the first page, the disclosure states: “Facial recognition technology does not use artificial intelligence, machine learning, or any additional biometric measuring technologies.” This is noteworthy, since nearly all state-of-the-art facial recognition systems rely on machine learning to train the models that produce face embeddings. Writing in Wired, Albert Fox Cahn, founder and Executive Director of the Surveillance Technology Oversight Project (S.T.O.P.), and Justin Sherman, Co-Founder and Senior Fellow at Ethical Tech at Duke University, note that such statements “directly contradict New York’s own report on artificial intelligence systems,” which was also recently published.
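To see why that claim strains credulity, consider how modern face matching generally works. The sketch below is purely illustrative and assumes nothing about NYPD's actual vendor or pipeline: the embedding model is stubbed out with a placeholder, but in deployed systems that component is a machine-learned neural network, which is precisely what the disclosure disclaims.

```python
import numpy as np

EMBEDDING_DIM = 512  # a typical face-embedding size

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained embedding model (the machine-learned part)."""
    # Placeholder seeded from the image bytes; a real system runs a
    # neural network here to produce a vector that encodes the face.
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)  # unit-normalize, as real systems do

def best_match(probe: np.ndarray, gallery: dict) -> tuple:
    """Return the gallery identity whose embedding is most similar to the probe."""
    # For unit vectors, the dot product equals cosine similarity.
    scores = {name: float(probe @ emb) for name, emb in gallery.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

# Usage: embed a small gallery of known faces, then score a probe image.
gallery = {f"person_{i}": embed_face(np.full((8, 8), i)) for i in range(3)}
probe = embed_face(np.full((8, 8), 1))
print(best_match(probe, gallery))  # matches person_1 with similarity 1.0
```

Matching a probe image against a gallery at modern accuracy levels requires a learned representation of this kind; a system using no machine learning at all would be an outlier among contemporary facial recognition products.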

Another distinction worth investigating concerns how the system functions. The disclosure states that “facial recognition technology is not integrated in any NYPD video cameras or systems” and that “NYPD video cameras or systems do not possess a capability for real-time facial recognition.” Presumably, this means that NYPD cameras do not contain embedded systems that process data on the device, but it does not rule out the possibility that images are transferred for near real-time processing. Such technologies are widely available to police. “The reality is that if you wait to take a live photo and then upstream it, it's technically not live facial recognition. It’s a kind of distinction without a difference in terms of how they can use surveillance footage and bodycam footage pretty close to [it being] live to try to identify people,” noted Ángel Díaz, counsel in the Liberty & National Security Program at the Brennan Center for Justice.
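What such a near-real-time pipeline could look like is easy to sketch. Everything below is hypothetical (the camera interface, the matching service, and the camera identifier are placeholders); the point is simply that no recognition needs to run on the camera itself for identifications to arrive within moments of capture.

```python
import time

def grab_frame(camera_id: str) -> bytes:
    """Stand-in for pulling the latest still image from a video feed."""
    return b"<jpeg bytes>"  # placeholder

def submit_for_matching(frame: bytes) -> list:
    """Stand-in for uploading a still to a back-end face-matching service."""
    return ["candidate_123"]  # placeholder result

def near_real_time_loop(camera_id: str, poll_seconds: float = 5.0) -> None:
    # The camera does no recognition, so this is not "real-time" in the
    # disclosure's sense; yet forwarding a still every few seconds yields
    # identifications moments after capture.
    for _ in range(3):  # bounded loop to keep the sketch finite
        frame = grab_frame(camera_id)
        print(camera_id, submit_for_matching(frame))
        time.sleep(poll_seconds)

near_real_time_loop("hypothetical_camera_14")
```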

A key question for each of the technologies is the scale of the human operational systems connected to their deployment. For instance, the facial recognition disclosure refers to the role of the “facial recognition investigator.” How many such investigators are employed by the NYPD? A 2020 BuzzFeed report on NYPD’s use of the controversial service Clearview AI states that “30 officers have Clearview accounts” and that, collectively, the officers conducted 11,000 searches on the Clearview system. The NYPD disclosure also suggests that use cases for facial recognition technology that fall outside of the categories “foreseen or described” in NYPD policies are referred to the “Chief of Detectives or Deputy Commissioner of Intelligence and Counterterrorism,” who apparently have sole responsibility for determining whether novel uses of the technology are “appropriate and lawful.” If facial recognition technologies are used to investigate political activity, such as protests, that responsibility lies with the Intelligence Bureau. It is unclear if this was the protocol last summer, when facial recognition was employed to identify a Black Lives Matter activist arrested after an hours-long “siege” on his apartment. This use case contradicted a past statement from NYPD that facial recognition is not used to identify protestors.

Social Network Analysis Tools

Another notable technology listed in the disclosures is a suite of software-as-a-service (SaaS) “social network analysis tools” that allow NYPD to assess social media profiles and the connections between individuals that are apparent on social platforms. The tools also equip NYPD to ingest social media content and permit police “to retain information on social networking platforms relevant to investigations and alert investigators to new activity on queried social media accounts.” The disclosure maintains that NYPD only has access to publicly available information, insofar as it is “viewable as a result of user privacy settings or practices.” User “practices” are not defined, but could refer to actions a user might take that would permit NYPD to see information even if it is not posted to a public channel. The disclosure provides very little information about how long content acquired from social media profiles, which may include “audio, video images, location, or similar information contained on social networking platforms,” is retained.
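The analytic technique behind such tools is generic and well documented, even though NYPD's vendors are not named. A minimal sketch, using the open-source networkx library and invented accounts and edges, shows the kind of analysis the category implies: build a graph from publicly visible interactions, then rank accounts by how central they are to the network.

```python
import networkx as nx  # widely used open-source graph-analysis library

# Invented accounts and interactions; the technique, not the data, is the point.
g = nx.Graph()
public_interactions = [
    ("account_a", "account_b"),  # e.g., a public reply, tag, or follow
    ("account_a", "account_c"),
    ("account_b", "account_d"),
    ("account_c", "account_d"),
    ("account_d", "account_e"),
]
g.add_edges_from(public_interactions)

# Degree centrality: the fraction of the network each account touches.
# Products in this category compute scores like this to flag
# well-connected accounts and alert investigators to new activity.
for account, score in sorted(nx.degree_centrality(g).items(),
                             key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")
```

Commercial products layer ingestion, alerting and long-term storage on top of analysis like this, which is why the unanswered retention questions matter so much.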

Like the disclosure for facial recognition, this disclosure does not state the number of police officers who have access to social media network analysis tools, how many individuals have been tracked using these tools, or the extent and duration of such tracking. As with facial recognition technology, social media network analysis tools employed in relation to political activity fall under the domain of the NYPD Intelligence Bureau. While the document states that “information is not shared in furtherance of immigration enforcement,” it does make clear that NYPD shares social media information with other law enforcement agencies and with third parties, none of whom are named.

Internet Attribution Management Infrastructure

The public must also consider the ways in which systems and databases interact across NYPD’s surveillance portfolio. The disclosure for “Internet Attribution Management Infrastructure” brings this question to the fore. This suite of technologies, which includes everything from servers and network infrastructure to SaaS software to laptops and smartphones, permits NYPD personnel to “engage in undercover activity to covertly obtain information” by visiting social media platforms and “chatrooms” and engaging individuals on messaging applications, all without creating a digital footprint traceable to the NYPD. The goal, per the disclosure, is to “allow its personnel to safely, securely, and covertly conduct investigations and detect possible criminal activity on the internet.” As with social media network analysis, the use of this technology requires no court authorization and “may be used in any situation the supervisory personnel responsible for oversight deems appropriate.”
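The core idea behind “managed attribution” is simple and generic: route traffic through infrastructure that is not registered to the agency, so the sites visited log someone else’s address. The sketch below illustrates only that general idea, not NYPD’s actual setup; the proxy address is a documentation placeholder.

```python
import requests  # real HTTP library; the proxy address below is a placeholder

# Route a request through a proxy so the visited site logs the proxy's
# address rather than the originating network's.
PROXIES = {"https": "http://203.0.113.10:8080"}  # placeholder (TEST-NET) address

resp = requests.get("https://example.com", proxies=PROXIES, timeout=10)
print(resp.status_code)  # the site sees the proxy, not the true origin
```

In practice such infrastructure also manages browser fingerprints, accounts and payment trails, which is what makes the unanswered questions about scale and oversight so pointed.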

As with the other technologies, the disclosure leaves the scale of use undefined. How many covert profiles does the NYPD operate? How many individuals in the department regularly participate in social media groups, or message members of the public (or indeed minors) under a false profile? What guidelines govern the behavior of a police officer in these situations?

Indeed, engaging with individuals on messaging apps may happen in intimate circumstances. Rachel Levinson-Waldman, a Senior Counsel to the Liberty and National Security Program at New York University School of Law’s Brennan Center for Justice, has noted that in past investigations NYPD detectives have targeted mostly Black and Hispanic teenage males “by using a fake avatar of a female teenager.” The disclosure notes that “allegations of misuse are internally investigated at the command level or by the Internal Affairs Bureau.” How many instances of misuse have been reported? How many relate to investigations of minors? Dr. Desmond Patton, founding Director of SAFElab at Columbia University, said, "One of the places where we saw this most is in doing interviews with legal aid attorneys that had young Black and brown clients that had been arrested because of their social media profile in some way." His lab looked specifically at how social media information is misused by police in a 2017 paper, “Stop and Frisk Online: Theorizing Everyday Racism in Digital Policing in the Use of Social Media for Identification of Criminal Conduct and Associations.”

III. Key questions for City Council and New York City citizens

With so many questions and so much potential for misuse, some advocates would prefer that the City ban some or all surveillance technologies. Albert Fox Cahn, a leader in advocating for the POST Act, noted that STOP’s advocacy starts from “the standpoint of calling for full abolition of the underlying technology.”

Indeed, other anti-surveillance activists question whether these types of disclosures and similar frameworks in other cities are worthwhile. Hamid Khan, the coordinator of the Stop LAPD Spying Coalition, has warned that laws like New York City's POST Act are "diametrically opposed to what abolition requires because they create a framework for police to claim that the public approves of surveillance, as well as for police to frame their use of surveillance tools on their own terms." The Stop LAPD Spying Coalition organizes against a "privacy-focused approach to surveillance," instead drawing attention to the history of “how race and poverty and suspect bodies are policed.” As Khan put it: “Ultimately the whole narrative about privacy needs to change because it's not just about invasion of privacy. That's a very narrow white privileged sort of assumption. It is about the police and their intent to cause harm.”

It is thus unclear to what extent seeking further disclosure of details, or clarification of the requirements of the POST Act, will meaningfully address underlying privacy and civil liberties concerns. Oakland, San Francisco, Boston, and most recently Minneapolis have banned the use of facial recognition technology altogether in response to concerns over civil liberties and racial bias. STOP has joined New York politicians, activists and civil society groups in calling for a ban on facial recognition as well.

That said, in the context of the existing law, the City Council and the public have some recourse to ask for further clarifications and make comments on NYPD’s complement of surveillance technologies. That may lead to other political outcomes or open the door to other forms of scrutiny by activists, journalists and the public. Based on our review of the disclosures, here are nine key sets of concerns lawmakers and the public should consider:

  1. Consider the big picture. To what extent is it incumbent on NYPD to explain how these various technologies combine to create compound capabilities that are not addressed by any individual disclosure? How is data federated across the systems, and how do analyses of that data translate into law enforcement activity and policy decisions? The comment mechanism requires a separate communication for each technology. How should citizens communicate concerns that span more than one technology, or a bundle of them?
  2. Context is key. What clarity can the NYPD provide about the application of these technologies in different contexts? Most of these disclosures purposefully include leeway so that their use can be “extended to any crime no matter how minor, [which is] quite problematic,” as Fox Cahn noted. Is facial recognition technology prioritized for more violent crimes, like murder or sexual assault, or is it also applied to low-level and non-violent offenses? If so, how?
  3. Novel situations should trigger external oversight. The disclosure for facial recognition states that NYPD does not seek court authorization because the images used are either publicly available or lawfully obtained. But, in the next sentence, the disclosure states that there are novel situations that fall outside of the policy. Should there be more stringent rules on the use of facial recognition generally, or should there be a requirement to seek court authorization for use cases outside the prescribed policy? More generally, when use cases fall outside its policies, should NYPD have free rein to make ad hoc decisions about how to employ surveillance technologies with no oversight? As Sherman mentioned, what "really makes this concerning in a policing context is that facial recognition is often used by police departments outside of traditional fourth amendment processes."
  4. Data retention is a key question. Across all of the disclosures, there are questions that should be raised about data retention. How long does NYPD need to keep information it gathers from social media sites, for instance, or even data from license plate readers? Brian Hofer, Chair of the Oakland Privacy Advisory Commission, suggests that these “more granular level details like retention limits and third-party restrictions need to be fleshed out more” in order to bring NYPD’s disclosures in line with the standards set by the city of Oakland, which has a longer history of public disclosure.
  5. More details are needed on training protocols. Very few details are given about how personnel are trained to use these tools aside from boilerplate language. The public may wish to see stringent restrictions placed on all users of the most consequential tools. For instance, per the disclosures, “NYPD personnel utilizing facial recognition technology will receive specialized training on the proper operation of the technology and associated equipment” and use must be “in compliance with NYPD policies and training.” What does the training consist of, and who leads it? Anti-bias training, certainly, should be included regardless of the technology. How many hours of training are needed for access to the technology? What assessments certify an individual to use it?
  6. Human systems matter as much as technologies. To understand the extent to which certain technologies are employed, it is important to understand not only the technologies but also the scale of the human operational systems that take advantage of them. To what extent should the POST Act require disclosure of the scale of NYPD units such as the Facial Recognition Section, how many covert profiles the NYPD operates on the web, and how many operators there are?
  7. NYPD should be required to disclose all instances of misuse. Another key area of concern is the number of infractions or disciplinary actions around the use of surveillance technologies. These numbers should be reported to the public, in order for citizens to adequately assess the cost-benefit tradeoff of such tools.
  8. Vendors need to be listed. Identifying vendors in the disclosures would be “helpful to understand some of the specifics about the particular technologies or companies,” as Chris Gilliard, a privacy expert and Shorenstein Center Research Fellow at the Harvard Kennedy School, noted. Understanding the vendors may also be useful to citizens defending themselves in court when evidence is presented from surveillance systems. The lack of vendor information “really frustrates the ability to meaningfully use this law in a way that was intended,” notes Díaz. It also means citizens may be unaware when NYPD systems rely on vendors with ethical issues, such as Clearview AI.
  9. Health and safety concerns need to be considered more broadly. NYPD should be required to conduct a more thorough review of health and safety concerns of technologies such as facial recognition, and indeed the use of other tools to collect covert information on citizens. How is the term “health” defined, and could assessing health impacts include the broader cognitive and physical health impacts of surveillance on citizens, including the often racialized trauma of over-policing?

IV. Conclusion

In the future, New York City could develop an oversight model that mirrors Oakland’s Privacy Advisory Commission. Oakland is known for its citizen-informed policies and its ability to shape how the police department conducts surveillance. The Commission “provides advice to the City of Oakland on best practices to protect Oaklanders' privacy rights in connection with the City's purchase and use of surveillance equipment and other technology that collects or stores [their] data,” according to its website. New York should have its own privacy commission, but the road to get there is winding. Under New York’s Municipal Home Rule Law, a voter referendum is needed to form a privacy commission; otherwise, “the commission would be curtailing the power of the executive under the charter to enter into contracts and make a lot of these decisions,” Fox Cahn says.

At present, the public can utilize the public comment period to request further clarifications on the policies required by the POST Act. In addition to seeking clarification, citizens and activists may consider creating their own impact statements. Jackie Zammuto, US Program Manager at Witness, an international human rights organization, says “stories of how these technologies are actually impacting people, especially in Black and Brown communities, is where effort will be most valuable.” The deadline for public comment on the POST Act disclosures is Thursday, February 25th.

“Digital technology is not destined to do harm,” wrote Carnegie Endowment fellows Steven Feldstein and David Wong in Just Security last year. “But a failure to establish clear and enforceable guidelines about how law enforcement agencies can operate powerful new surveillance tools will make it more difficult to protect citizens’ rights as these new technologies are increasingly deployed.” New York City has taken an important step towards opening a dialogue on surveillance technologies with its police department, the largest and best-equipped of any city in the country. Citizens must use their voice if they wish to define how they are policed. The outcome in New York will have implications far beyond the five boroughs.

Special thanks to the Center for Media Engagement at the University of Texas at Austin:

Katie Joseff, Senior Research Associate

Katlyn Glover, Research Assistant

Romi Geller, Undergraduate Research Assistant & Tech Policy Press Fellow

Jimena Pinzon, Undergraduate Research Assistant

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...
Joël Carter
Joël M. Carter is a Research Associate at the University of Texas at Austin Center for Media Engagement. He holds a Master’s in Global Policy from the Lyndon B. Johnson School of Public Affairs and investigates issues of data privacy and mass surveillance, with an interest in intellectual property.
