
NYC Law Regulating AI Hiring Systems Needs Work

Alisar Mustafa, Krystal A. Jackson, Michael Yang / Apr 11, 2022

Krystal A. Jackson is currently pursuing an MS in Information Security Policy & Management at Carnegie Mellon University; Alisar Mustafa is a Syrian-American Responsible Innovation Consultant who works on identifying and mitigating potential harms to society caused by emerging technologies; and Michael Yang is an artificial intelligence researcher transitioning into policy.

New York City Council, 2017. Felix Lipov/Shutterstock

Decades of work to create fair and equitable hiring practices risk being erased as artificial intelligence (AI) is introduced into the process. What's at stake are the values that we hold most dear: equality, dignity, and fairness. The challenge is apparent in the wake of New York City's passage of one of the first pieces of legislation in the country to address AI in hiring.

Passed into law last December, the NYC bill on “Automated Employment Decision Tools” requires employers to disclose the use of such tools, and requires the vendors that provide them to submit them to an annual bias audit. A product of the age of big data, these tools employ algorithmic decision-making systems (ADS) to crunch large amounts of personal data and, given some objective, derive relationships between data points. The aim is to use systems capable of processing more data than a human ever could to uncover hidden relationships and trends that then provide insights for people making all types of difficult decisions.
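To make that mechanism concrete, consider a minimal sketch of how such a system might derive relationships from historical hiring data. Everything here is hypothetical (the features, the data, and the model); real vendor systems are proprietary and far more elaborate, but the underlying logic of fitting patterns in past decisions is the same:

```python
# Illustrative only: a toy screening model fit to hypothetical past
# hiring decisions. Real ADS are proprietary and far more complex.
from sklearn.linear_model import LogisticRegression

# Each row describes a past applicant:
# [years_experience, attended_target_school, had_referral]
X = [[1, 0, 0], [3, 1, 0], [5, 1, 1], [2, 0, 0],
     [7, 1, 1], [4, 0, 1], [6, 1, 0], [2, 1, 0]]
# Label: whether that applicant was hired.
y = [0, 1, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# The model scores new applicants by replicating patterns in the old
# decisions: two otherwise identical candidates get different scores
# if past hiring favored a particular school or referral network.
print(model.predict_proba([[3, 0, 0], [3, 1, 0]])[:, 1])
```

If the historical decisions encoded bias, the fitted model inherits it, which is precisely the risk described below.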

Hiring managers across different industries use ADS every day to aid in the hiring decision-making process. For example, employers use ADS to screen and assess candidates during the recruitment process and to identify best-fit candidates based on publicly available information. Some systems even analyze facial expressions during interviews to, they claim, assess personalities. These systems promise organizations a faster, more efficient hiring process. ADS do theoretically have the potential to create a fairer, qualification-based hiring process that removes the effects of human bias. However, they possess just as much potential to codify new and existing prejudice across the job application and hiring process. Without clear standards for the creation and use of these tools, most employers utilizing them will never know whether they are doing more good than harm, both to their own interests and to the people who apply to work for them.

The NYC bill is an attempt to provide guidance towards fair and equitable ADS. However, it lacks a clear definition of a bias audit, which would function as a way to measure the computational and human bias that can get encoded into ADS. It requires only that the audit be “an impartial evaluation by an independent auditor,” but that independent auditor could be any number of consultants or technology firms. It also lacks enforcement mechanisms to ensure that any such audits, if conducted, are truly impartial. So the bill runs the risk of failing to protect those most at risk from these technologies, instead creating a cottage industry around rubber-stamping AI systems. Already, the dozen or so organizations currently providing such audits have drawn criticism over the effectiveness and thoroughness of their work.
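What might a bias audit actually measure? One long-standing baseline is the EEOC's "four-fifths rule" for adverse impact: a selection rate for any group below 80% of the highest group's rate is a conventional red flag. A minimal sketch of that check, using hypothetical candidate data, shows how little it would take for the law to be more concrete than it is:

```python
# Illustrative only: a minimal adverse-impact check in the spirit of
# the EEOC "four-fifths rule". The NYC bill defines no such metric;
# the group labels and screening data below are hypothetical.
from collections import defaultdict

def selection_rates(candidates):
    """Compute per-group selection rates from (group, selected) records."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in candidates:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's rate to the highest group's rate.
    A ratio below 0.8 is a conventional red flag for adverse impact."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes: (group, passed_screen)
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(records)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f} impact_ratio={ratio:.2f} {flag}")
```

Even a metric this simple forces the definitional questions the bill leaves open: which groups to compare, at which stage of the hiring pipeline, and against what threshold.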

Luckily, because the bill doesn’t go into effect until 2023, there is still time for the New York City Council, working with Mayor Eric Adams’ new administration, to make the changes necessary to fix these issues. In the meantime, other localities eager to regulate AI hiring should take note of this bill’s mistakes. There is strong reason to expect many more bills like this over the coming years, as advocacy and research groups sound the alarm on hiring ADS and raise the pressure on lawmakers to act.

In his article “Auditing employment algorithms for discrimination,” the Brookings Institution’s Alex Engler reported that various hiring ADS discriminate against women, people with disabilities, and African Americans. The hiring ADS industry suffers from a variety of problems. Some systems are built on data fraught with the legacy of discriminatory hiring practices, others are based on pseudoscience, and others make vague, buzzword-filled promises that lack clear backing or justification. Companies like HackerEarth claim to "Remove Bias Out of Tech Hiring," and Beamery promotes the positive impacts of its tools as "Bias-Free AI Boosts Diversity." However, such claims are difficult to test and therefore rarely tested. Definitions of diversity, equity, bias, and fairness vary from company to company, and there is currently no guidance on what standards those creating ADS, or the organizations using them, need to meet. Furthermore, the use of these systems is not isolated to a few problematic companies or even industries: a 2020 study reported that 55% of human resources leaders in the United States use predictive algorithms across their business practices, including hiring decisions.

The interconnected nature of ADS in the hiring process makes it difficult to pinpoint specific steps that require intervention. ADS can be used in search, outreach, screening, and interviewing, so the entire process requires careful evaluation of system usage at each stage. When many unvetted systems are combined, algorithmic harms can quickly compound. Recognizing that this is no simple challenge, policymakers must focus on an approach to legislation that takes future operationalization into account. With no clear path from a bill to meaningful regulatory enforcement, even those who desperately want to make changes lack the power to do so.

In addition to clarifying the scope, authority, and standards of bias audits, the NYC bill would benefit overall from delegating enforcement authority to an agency with the appropriate expertise and capability for the task. Currently, enforcement of the bill falls to the Department of Consumer and Worker Protection, which, as its name might suggest, is responsible for a vast array of issues. In contrast, the New York City Commission on Human Rights, which has specific and deep expertise in civil rights law, is one of the few entities capable of fleshing out the bill’s audit and notice requirements; by deleting the provisions that would have given it that role, the current bill creates ambiguity as to which entity, if any, has that authority. The current bill also leaves enforcement to New York City’s Corporation Counsel, a generalist office tasked with a wide range of legal responsibilities. As a result, the odds of aggressive enforcement of the law’s already weakened provisions are greatly reduced.

To truly prevent discrimination and protect against the slow erosion of our civil liberties, we need robust, timely, and well-executed legislation. The lessons learned from the NYC bill have started the national conversation; the hope is that it spreads and continues to advance from here.
