Senators Explore AI in Criminal Investigations and Prosecutions

Haajrah Gilani / Jan 26, 2024

Witnesses are sworn in at a hearing of the Senate Judiciary Subcommittee on Criminal Justice and Counterterrorism, Wednesday, January 24, 2024.

WASHINGTON — Four years ago, Detroit police used facial recognition technology to investigate a watch theft. The system matched a photograph from the store to the man they eventually arrested.

Except he didn’t do it.

The police department wrongfully arrested Robert Williams after the software misidentified him. As a Black man, Williams was at higher risk of being misidentified, according to academic research.

Williams took his case to the American Civil Liberties Union (ACLU) and sued the city over his wrongful arrest. The case is ongoing.

How to avoid discrimination in the use of facial recognition software was among the many topics US senators raised during a hearing Wednesday about the innovations and risks of AI in criminal investigations and prosecutions.

During the Senate Subcommittee on Criminal Justice and Counterterrorism hearing, Sen. Alex Padilla, D-CA, cited a Government Accountability Office (GAO) report that found several federal law enforcement agencies failed to require civil rights training when using facial recognition software.

“This lack of training is troubling given that the technology has been shown to produce biased results, particularly when it involves identifying Black and brown persons,” he said.

Other senators seemed more willing to embrace AI in criminal investigations.

“AI is consequential in our society as a whole in ways that many of us can't fully imagine, both in the possibility and the promise, as well as the potential problems,” said Subcommittee Chairman Sen. Cory Booker, D-NJ, in his opening remarks.

While the three expert witnesses differed on the use of AI in criminal investigations, they all agreed that the technology has the potential to increase law enforcement’s accuracy and efficacy.

Throughout the hearing, the witnesses also agreed that agencies relying on AI technologies should be transparent about how they use them.

One witness, Assistant Chief of the Miami Police Department Armando Aguilar, discussed how he helped integrate facial recognition into his department. Aguilar also currently serves as a member of the National Artificial Intelligence Advisory Committee (NAIAC), which provides AI-related advice to the President and the National AI Initiative Office.

After he read a news article raising concerns about police use of facial recognition technology, he said his department consulted with local privacy advocates and tried to address their worries about Miami Police policy.

“We're not the first law enforcement agency to use facial recognition or to develop [facial recognition] policy, but we were the first to be this transparent about it,” Aguilar said.

He said his department treats facial recognition matches like anonymous tips, which still need to be corroborated with testimony or other evidence.

“We laid out five allowable uses: criminal investigations, internal affairs investigations, identifying cognitively impaired persons, deceased persons, and lawfully detained persons,” he said.
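
In software terms, the policy Aguilar describes amounts to a gate between a recognition result and any arrest decision. The sketch below is a hypothetical illustration of that workflow, not the Miami Police Department's actual system; the class names, score, and corroboration handling are all assumptions for demonstration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "FR match = anonymous tip" policy gate.
# Names, fields and values are illustrative, not Miami PD's real system.

ALLOWED_USES = {
    "criminal investigation",
    "internal affairs investigation",
    "identifying a cognitively impaired person",
    "identifying a deceased person",
    "identifying a lawfully detained person",
}

@dataclass
class FacialRecognitionMatch:
    candidate_id: str
    similarity: float  # engine score; suggestive, never proof of identity
    use_case: str

@dataclass
class Lead:
    candidate_id: str
    corroborating_evidence: list[str] = field(default_factory=list)

    def supports_probable_cause(self) -> bool:
        # Policy rule: an FR match alone never establishes probable cause;
        # independent evidence must be gathered first.
        return len(self.corroborating_evidence) > 0

def triage(match: FacialRecognitionMatch) -> Lead:
    if match.use_case not in ALLOWED_USES:
        raise ValueError(f"{match.use_case!r} is not an allowable FR use")
    # The match opens a lead, exactly as an anonymous tip would; nothing more.
    return Lead(candidate_id=match.candidate_id)

lead = triage(FacialRecognitionMatch("cand-042", 0.91, "criminal investigation"))
assert not lead.supports_probable_cause()  # no arrest on the match alone
lead.corroborating_evidence.append("witness statement placing candidate at scene")
print(lead.supports_probable_cause())  # True only after independent corroboration
```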

Another witness at Wednesday's hearing, Rebecca Wexler, co-director of the Berkeley Center for Law and Technology and an assistant professor of law, raised more concerns about the future of AI than her fellow witnesses.

Sen. Booker told her, "You obviously had a lot of concerns in your testimony." During questioning, she responded by clarifying that she is neutral about AI technology.

“The technologies themselves are not the issues, [it's] the legal rules that we set up around them that help us ensure that they are the best, most accurate and effective tools and not flawed or fraudulent in some way,” she said.

Wexler cautioned that AI tools for criminal investigations are mostly bought by law enforcement trying to prove guilt, which could bias how the systems are developed.

“This may bias the development of technologies in favor of identifying evidence of guilt rather than identifying evidence of innocence,” she said. “So any support Congress could give to AI technologies designed to identify evidence of innocence would be very promising.”

Experts not present at the hearing also weighed the pros and cons of facial recognition technology.

Divyansh Kaushik, the associate director for Emerging Technologies and National Security at the Federation of American Scientists, referenced how the Federal Bureau of Investigation (FBI) has used the technology to identify Jan. 6 insurrectionists, who were mostly white Americans.

“The question becomes not just of an impact of a system, but of how the system is put in place and used,” he said.

Facial recognition systems more frequently fail when the person being identified is not a white male, according to research conducted by the National Institute of Standards and Technology.
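
NIST's evaluations quantify that disparity by computing error rates separately for each demographic group and comparing them at a fixed decision threshold. The sketch below illustrates the idea with invented scores and an invented threshold; it is not NIST's code or data.

```python
# Sketch of a demographic-differential check in the spirit of NIST's face
# recognition evaluations. All scores and the threshold are invented.

# Each trial: (demographic group, similarity score, truly the same person?)
trials = [
    ("group_a", 0.93, True), ("group_a", 0.41, False), ("group_a", 0.62, False),
    ("group_b", 0.88, True), ("group_b", 0.71, False), ("group_b", 0.66, False),
]
THRESHOLD = 0.6  # scores above this are declared a "match"

def false_match_rate(group: str) -> float:
    """Share of different-person comparisons the system wrongly calls a match."""
    impostors = [s for g, s, same in trials if g == group and not same]
    return sum(s > THRESHOLD for s in impostors) / len(impostors) if impostors else 0.0

for group in ("group_a", "group_b"):
    print(group, false_match_rate(group))
# A system is demographically skewed when these rates diverge across groups at
# the same threshold, the pattern NIST reported for faces other than white males.
```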

Nathan Freed Wessler, deputy director of the ACLU Speech, Privacy, and Technology Project and one of Williams' lawyers at the ACLU, said police shouldn't use this technology because of the high risk that the software is biased and misidentifies people of color.

He said that he has seen policymakers who take this issue seriously, but that he has also seen hundreds of police departments across the country “where police have been more or less free to just experiment at their whim.”

Wessler said policymakers shouldn’t just consider the perspective of law enforcement, but also need to hear about the experience of everyday citizens, such as his client Robert Williams, whose case was not explicitly mentioned at Wednesday’s hearing.

“These aren't just academic questions. These are questions that have extraordinarily serious effects on real people's lives,” he said.

What follows is a lightly edited transcript of the hearing.

Sen. Cory Booker (D-NJ):

Hello everybody. I'm excited to see such a crowd, almost standing room only. There's a whole bunch of people waiting outside. I'm not sure if it was because of the rumors that broke out on Twitter that you and I were going to have a cage match. But in all seriousness, I can't tell you how excited I am to be next to Tom Cotton for this hearing, because I think, as much as Tom Cotton and I have that differs us, we actually both have a lot of the same commitment to community safety. And I think this is a hearing that strikes right to the heart of keeping communities safe and strong, and I'm excited that the two of us are here for this inquiry, really to hear from folks about an issue that is really exciting. There is much made of technology and its influence on our society.

When I was mayor of the city of Newark, the number one issue of my constituents (the pollster that I had had never seen anything like it) was the safety and the security of our neighborhoods and communities. And what was exciting for me is that we introduced a lot of technology as a mechanism with which to keep people safe, technology that some of our witnesses actually can talk to, like ShotSpotter and cameras and license plate readers and more. And what excites me now is I was a mayor about 11 years ago, which is a long time when it comes to the advancements of technology and innovation. We know that technology in America has so much promise, and in this sphere it could do a lot. Many people would be stunned to know that our murder clearance rate in the United States is, Chief Aguilar, you can help me out, but somewhere around 50%.

Armando Aguilar:

Correct.

Sen. Cory Booker (D-NJ):

Yeah, it's astonishingly... I think that was a hallelujah that he said there. Amen, Senator Booker. It's astonishingly low. And so one of the things that a lot of people don't realize is that technology could help us on clearance rates, it could help us to create more community trust, it could help us on investigations, it could help us in so many positive ways. But we also know that technology presents a lot of possibilities to undermine our core values and our ideals as well, when it comes to certain constitutional protections, when it comes to privacy, when it comes to some of the most sacrosanct elements of our democracy. And so we know that AI is consequential in our society as a whole in ways that many of us can't fully imagine, both in the possibility and the promise as well as the potential problems, when it comes to our criminal justice system, where often a person's liberty and a person's life are at stake, where the safety of their neighborhoods is at stake.

And where their constitutional protections are at stake, it is particularly consequential. Law enforcement's use of artificial intelligence technology is not a recent development, again, as all of our experts can attest to. Its recent expansion raises a lot of questions, as well as a lot of excitement for me about the possibilities. And so I'm going to submit the rest of my opening statement for the record. But I do want to say to the witnesses that are here how excited I am. I know you all made sacrifices and took time out. I'm going to introduce you in a moment after my ranking member, Senator Cotton, does his opening statement. But we really are at a point in humanity where every generation has breakthrough technologies that shape and alter the course of humanity. AI is most certainly one of those things. I know that Senator Cotton and I both are committed to trying to find a way to capture the possibility as well as protect against the risks. With that, I'm very excited and very grateful to Senator Cotton and his entire team for helping to make this hearing possible, and I will turn the microphone over to him for his opening remarks.

Sen. Tom Cotton (R-AR):

Thank you, Senator Booker. I'll say that artificial intelligence has gained a lot of attention lately, over the last year and a half or so in particular, but the core technology has been around for some time. Because of how it's depicted, maybe we start with what it's not. Fortunately, most people don't have much personal interaction with law enforcement or with the criminal justice system. What is AI in law enforcement? It's not RoboCop, it's not the Terminator, it's not the Matrix. It's not even WALL-E. It is never used independent of human decision-making in our criminal justice system. That's why it's a tool in the law enforcement toolbox, and it can provide impressive time-saving, crime-solving and justice-serving tools. Just one example: facial recognition software, for instance, could take a crime scene sketch or a security camera image and go through thousands and thousands of pictures in a public database, say a driver's license database.

It can eliminate obvious non-matches. It might even be able to narrow it down, but then that is a tool that independent human judgment can use to pursue leads. The early stages of a criminal investigation are like looking for a needle in a haystack; artificial intelligence, if you will, can provide a magnet for some of our criminal investigators in the police forces around the country. That's why I want to enter into the record, with consent here, a statement from the National Sheriffs' Association, the Major County Sheriffs of America and the Major Cities Chiefs Association. They represent state and local law enforcement all across the country. I want to highlight one particular line from their statement: "It is essential to recognize that AI-powered technology serves as an investigative assistant to law enforcement rather than a replacement for the human element." Mr. Chairman, I ask consent to enter that statement into the record without objection.

I also want to give one concrete real-world example of how this can work. Last summer, British authorities contacted the Department of Homeland Security about a video of child sexual exploitation. They had reason to believe it was made in the United States. DHS ran the faces against a mass database of photos and they found a match: a college sports administrator in Missouri. Investigators then found the suspect's Facebook profile and were able to confirm not only that it was the same person, but even that the child victim was on his Facebook page. They eventually got a search warrant from a judge, and in July they were able to arrest and charge this predator. Without that facial recognition technology, he might be free today, and that child might still be subjected to abuse today as well. AI-powered law enforcement tools also improve efficiency and save lives in other ways.

For instance, by modeling where and when crimes happen, law enforcement agencies can better position limited patrol units to prevent crimes or to respond quickly. They can coordinate with other emergency services like ambulances for better staffing at times and places where they may be needed. In yet other cases, AI-powered products like those made by Motorola Solutions are used by 911 call centers to clean up, transcribe and even translate 911 calls in real time, or an AI product made by the company Axon is used by law enforcement to quickly blur faces in body cam footage so police departments can share the footage with the public faster. Other AI-powered tools can analyze financial records to help identify financial crime, and AI-powered cameras can recognize license plates of wanted individuals and alert law enforcement to investigate. Now, I understand there are some who have concerns about the use of artificial intelligence technology in law enforcement, and those concerns are in some cases valid and should be aired, which I think we will do today.

But I do want to point out that probably the people most uncomfortable with the use of artificial intelligence are criminals who would like to avoid being caught, because again, artificial intelligence is simply a tool that human investigators, police officers and prosecutors can use to help solve cases, convict criminals and put them behind bars. We need to remember that artificial intelligence-powered law enforcement tools are assistive technologies helping law enforcement officers be better at their jobs, and responsibly used, these technologies can help create a faster, cheaper, more accurate criminal justice system where the criminals are being caught and prosecuted, victims are being provided justice and the innocent are not being punished. I know our witnesses have thoughts and expertise to share on these questions and I look forward to this conversation. I do want to say, and apologize in advance, that since this hearing was scheduled, the Republican Conference has scheduled an all-Republican meeting about the national security supplemental bill that is currently under debate, so I may not make it to the end, but I assure you it has nothing to do with your testimony. Thank you all.
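
The face blurring Sen. Cotton mentions is a standard computer vision task. As a rough illustration only (not a description of Axon's product), a minimal version using OpenCV's bundled face detector might look like the sketch below; the file paths are placeholders.

```python
import cv2

# Minimal sketch of automated face blurring for body-cam footage. This is an
# illustration using OpenCV's stock Haar cascade, not any vendor's product.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred copy.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

cap = cv2.VideoCapture("bodycam.mp4")  # placeholder input path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blurred = blur_faces(frame)
    # ... write `blurred` to an output video or display it here ...
cap.release()
```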

Sen. Cory Booker (D-NJ):

Senator Cotton, thank you for those excellent remarks. You went through an array of movies there, from WALL-E to the Terminator. Were those recommendations, or did your staff sort of put them in? Alright. What I'm going to do now is introduce all the witnesses, and then I'm going to ask you to stand up, raise your right hand and swear an oath. We'd prefer you not to swear at us, but just swear the oath, and then we'll open it up for your testimony in alphabetical order. So I want to introduce really quickly the assistant chief of police, Armando Aguilar, from the City of Miami Police Department, and law enforcement subcommittee member of the National Artificial Intelligence Advisory Committee. Assistant Chief Aguilar is assistant chief of police for the City of Miami Police Department, which is having some extraordinary success in protecting citizens and keeping their community safe.

In 2022, you began serving as a member of the law enforcement subcommittee of the National Artificial Intelligence Advisory Committee, which advises the president of the United States on matters related to the use of AI in law enforcement. As assistant chief of police, you oversee hundreds of sworn and civilian employees, and you've held many senior management positions in all divisions of the Miami Police Department. During your time with the MPD, you have implemented offender-focused strategies that have contributed to significant reductions in violent crime and significant increases in clearance rates. You hold a Master of Public Administration from Barry University and a bachelor's in criminal justice from St. Leo University. It's really an honor to have you here and we're very, very grateful.

I want to next introduce Dr. Karen Howard. Dr. Howard works with the US Government Accountability Office (GAO) as acting chief scientist and a director of the GAO's Science, Technology Assessment, and Analytics team, STAA, or "STA-A," I'm not sure which one. You manage a broad portfolio of topics, frankly an impressive portfolio of topics, including forensic algorithms, biological threats, chemical weapons, national security implications of emerging technologies and other issues.

It's pretty extraordinary, the work you do. You've produced key GAO reports that you're going to testify about, including "Forensic Technology: Algorithms Used in Federal Law Enforcement" and "Forensic Technology: Algorithms Strengthen Forensic Analysis, but Several Factors Can Affect Outcomes." You earned your PhD in environmental chemistry and your master's degree in analytical chemistry. As my dad would say, you have more degrees than the month of July. It's great to have you here, and thank you again for flying up from Alabama as well.

Professor Wexler, you have so much Berkeley on your resume; as a Stanford man, I'm going to try to muscle my way through this. Okay. You are assistant professor of law at the UC Berkeley School of Law and you're the co-director of the Berkeley Center for Law and Technology.

Your teaching and research focus on data, technology and secrecy in the criminal legal system. Your scholarship has appeared in so many places: the Harvard Law Review, Stanford Law Review, Yale Law Journal Forum, NYU Law Review, UCLA Law Review, Texas Law Review, Vanderbilt Law Review, Berkeley Technology Law Journal. I would tease you, because there's nothing from Arkansas; you need to correct for that. Professor Wexler served as senior policy advisor for science and justice at the White House Office of Science and Technology Policy during the spring of 2023 and as the James C. Carpenter visiting professor at Columbia Law School. You've got your JD from Yale, an MPhil from Cambridge as a Gates Scholar, and you graduated summa cum laude from Harvard. Again, we thank you for flying in all the way from California.

Would the three witnesses please stand up and raise your right hand? Do you swear or affirm that the testimony you are about to give before the subcommittee will be the truth, the whole truth and nothing but the truth, so help you God? Let the record show that each witness has answered in the affirmative. You may sit down, unless you want to stand for the hearing, for which of course we have no preference. You each will now have five minutes for your opening statement. And again, we're going to start in alphabetical order with Chief Aguilar.

Armando Aguilar:

Good afternoon, Subcommittee Chair Booker, Ranking Member Cotton and subcommittee members. I'm Armando Aguilar, assistant chief of police at the Miami Police Department, also currently serving a three-year term as a member of the law enforcement subcommittee of the National Artificial Intelligence Advisory Committee, or NAIAC. I would, however, like to point out that I'm speaking here today on behalf of the Miami Police Department and not in my position on NAIAC. I'm proud to say that the Miami Police Department story is among the greatest turnaround stories in law enforcement. In 1980, Miami, with a murder rate comparable to that of Honduras, was America's murder capital. I became a Miami police officer in 2001 and a homicide detective in late 2004, a year when 69 people would be murdered in Miami and another 6,400 would fall victim to violent crime. By this time we had the audacity to high-five each other because we were no longer in the top five most violent cities in America.

We were, however, perennially on the top 25 list. Fast forward to 2023: Miami ended the year with 31 murders and 2,602 violent crimes. 31 and 2,602. Meanwhile, our murder clearance rate, the rate at which cases are solved, was 68%, or 97% under the FBI's legacy reporting system, which counts prior-year cases closed during 2023. Our violent crime clearance rate was 58%. Now, for most of my career, our murder clearance rate hovered below 45% and our violent crime clearance rate below 38%. So what changed? A great deal. I'll begin by stating that I've had the pleasure of leading the best generation of officers, detectives and professional staff to serve the people of Miami, and our success begins with community trust. Violent crime, especially unsolved violent crime, is among the greatest threats which serve to undermine that trust. A shooting takes place; a community member calls in an anonymous tip.

The police, without any other leads to corroborate the tip, will eventually see the case go cold. People stop reporting gunfire. The police, in turn, do not respond to gunfire that we don't know about. The perception quickly becomes that the police are at best unable to keep them safe, or at worst unwilling to. The Miami Police Department has successfully leveraged artificial intelligence over the past few years to great effect. We use gunshot detection systems, public safety cameras, facial recognition (FR), video analytics, license plate readers, social media threat monitoring and mobile data forensics. We use ballistic evidence to connect the dots between shootings and the violent actors that are victimizing our communities. A recent BJA-funded study by Florida International University found that violent crimes where one such resource was used by our detectives had a 66% greater likelihood of being solved when compared to similar cases where no such resource was used.

I'm happy to discuss any of the technologies that we employ, but I'm going to take this time to discuss how we came to develop our policy governing the use of FR in criminal investigations. It started in January 2020, when the New York Times ran an article by Kashmir Hill. The article was critical of the use of law enforcement facial recognition, and of one vendor in particular. Ms. Hill posed several questions which resonated with me, as I do spend my time out of uniform as a private citizen. Without proper safeguards, Ms. Hill asked, what would stop police from using FR to identify peaceful protest organizers, or cyberstalking an attractive stranger at a cafe? What about the public, whose biometric data, that is, our faces, would be analyzed by police? So my team and I set out to establish a facial recognition policy that would address these and other concerns. We're not the first law enforcement agency to use facial recognition or to develop FR policy, but we were the first to be this transparent about it.

We did not seek to impose our policy on the public. We asked them to help us write it. We started out by meeting with local privacy advocates. They absolutely hated it, but we wanted to know why they hated it. So they told us. We found that many of their critiques were thoughtful and reasonable, so we heard their objections, took it upon ourselves to treat them as recommendations and incorporated several of them into our policy. We highlighted successful arrests aided by FR through local media coverage, and that March we held two virtual town hall-style meetings, virtually because in-person meetings were not an option due to the pandemic. One session was conducted in English, the other in Spanish. Each session included public questions and comments, and each session had about 1,300 live views and 3,600 total views. The policy that resulted from our effort created a narrow framework within which we would use FR.

Most importantly, our policy emphasizes that FR matches do not constitute probable cause to arrest. Matches are treated like an anonymous tip, which must be corroborated by physical, testimonial or circumstantial evidence. We laid out five allowable uses: criminal investigations, internal affairs investigations, identifying cognitively impaired persons, deceased persons, and lawfully detained persons. We use FR retrospectively; that is, we don't use it on a live or real-time basis to identify persons going about their business in public spaces, and we do not use it to identify persons who are carrying out constitutionally protected activities. We established a policy limiting who has access to our FR platforms, and we disclose our FR use to defense counsel in criminal cases. We do not substantively manipulate or alter probe photographs, use composite sketches as probe photographs, or use any other technique which has not been scientifically validated. These efforts, along with many others, resulted in a Miami that is safer today than at any other time in our history. I thank you for inviting me to speak before the subcommittee and I'm happy to answer any questions that you may have.

Sen. Cory Booker (D-NJ):

Thank you. That's extraordinary testimony. I'll remind the witnesses that Senator Whitehouse and Senator Kennedy are here, so you should be on your best behavior. Dr. Howard.

Karen Howard:

Chair Booker, Ranking Member Cotton and members of the subcommittee, I am pleased to be here today to discuss technologies that can assist in criminal investigations. Criminal justice is an important governmental responsibility with a significant impact on the public. Such investigations should be conducted as quickly and accurately as possible. A number of algorithms can aid investigators in the search for links between individuals and evidence collected at a crime scene. Some of these tools are enabled by artificial intelligence, while others rely on statistical or automated methods rather than AI. My testimony today is based on two reports we issued describing the strengths and limitations of the most common algorithms used for criminal investigations. In our work, we found that federal law enforcement agencies primarily use three types of algorithms in their investigations: probabilistic genotyping, latent print analysis and facial recognition. These algorithms offer some common advantages over human analysts.

For example, they dramatically increase the speed of comparisons, allowing larger databases to be searched in significantly less time. In addition, they generally produce more consistent results, because they don't get tired or have an off day as human analysts can. The best of these tools are highly accurate and effective, often more so than a human analyst alone. However, this is not to say that such algorithms are perfectly accurate or objective. There are hundreds of algorithms in development and on the market, and performance can vary significantly among them, as shown in independent testing by the National Institute of Standards and Technology (NIST). Studies have shown that the best results may be obtained from the combination of a highly accurate algorithm and a well-trained analyst. Despite the advantages offered by high-quality algorithms, there are also several challenges to their use. For example, poor quality crime scene evidence can significantly reduce the effectiveness of the tools, experts told us.

There is considerable variability in the standards for what constitutes suitable evidence quality, especially across the many state and local law enforcement agencies. Other challenges include errors or improper use by human analysts; unknown and potentially biased training data for AI-based algorithms; the possibility that the correct person isn't in the database used for comparison; difficulties with interpreting and explaining the output; a lack of the necessary information for law enforcement agencies to identify the best-performing algorithms, and sometimes a lack of resources to purchase those; and a lack of public trust in the results for some algorithms. We proposed three policy options that may help address these challenges. First, increased training for analysts and investigators, which could improve their understanding of the importance of crime scene evidence quality. Increased training could also reduce errors and inconsistency by human analysts and increase proper interpretation of algorithm results. Second, policymakers could facilitate the development and adoption of standards for appropriate use, which could set standards for acceptable algorithm performance, reduce the use of low-quality evidence and increase public confidence in the results.

And third, increased transparency regarding the testing, performance and use of algorithms, which could make it easier for law enforcement entities to identify the best-performing technologies and improve understanding of the training data for AI-enabled tools, perhaps leading to the development of more robust and representative training data. Finally, I note that there are many other types of algorithms in development or use beyond the three we examined. Some of these depend on AI to power their comparisons and decision-making. In our judgment, the challenges we identified are likely to also be relevant to these other types of algorithms. In conclusion, we found that probabilistic genotyping, latent print analysis and facial recognition can be very useful to help investigate crimes, but several challenges may limit their effectiveness. By addressing these challenges, we can improve the accuracy and effectiveness of criminal investigations and enhance public trust. This concludes my prepared statement and I would be happy to respond to any questions.
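
The division of labor Dr. Howard describes, a fast algorithmic search over a large database followed by human verification of a short candidate list, can be sketched in a few lines. Everything below (the database, the scoring function, the cutoff) is invented for illustration.

```python
import heapq

# Invented reference database: record ID -> feature vector, a stand-in for
# fingerprint minutiae, face embeddings or DNA profiles.
database = {
    "rec-001": [0.12, 0.85, 0.33],
    "rec-002": [0.90, 0.10, 0.40],
    "rec-003": [0.11, 0.80, 0.35],
}

def similarity(a, b):
    # Toy score: negative squared distance. Real systems use trained comparators.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def top_candidates(query, k=2):
    """Fast algorithmic pass: rank the whole database and return the best k.

    The output is a ranked candidate list for a trained analyst to verify,
    never an identification by itself."""
    scored = ((similarity(query, vec), rec_id) for rec_id, vec in database.items())
    return heapq.nlargest(k, scored)

crime_scene_sample = [0.10, 0.82, 0.34]
for score, rec_id in top_candidates(crime_scene_sample):
    print(f"{rec_id}: score={score:.4f} -> forwarded to analyst for verification")
```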

Sen. Cory Booker (D-NJ):

Dr. Howard, I'm really grateful. Professor Wexler.

Rebecca Wexler:

Mr. Chairman and members of the subcommittee. My name is Rebecca Wexler and I am a co-director of the Berkeley Center for Law and Technology and an assistant professor of law at Berkeley Law School. I'm honored to testify here today about the need for fair and open proceedings to scrutinize AI tools used in the criminal legal system. Although AI tools present exciting opportunities to render the legal system more accurate and equitable in some respects, they also present troubling obstacles to fair and open proceedings. To help fix this problem, Congress should do two things. First, require that AI tools used in the criminal legal system be made available for auditing by independent researchers with no stake in the outcome. Second, prohibit either party in a criminal case from invoking the so-called trade secret privilege to block access to relevant evidence and instead require that such evidence be disclosed under a reasonable protective order.

Zooming out, the US criminal legal system is a model to the world in its commitment to fair and open proceedings to protect against wrongful convictions. That reputation rests on the safeguards we offer those accused of crime, including a fair opportunity to uncover weaknesses in the government's evidence of guilt and to expose unlawful and unconstitutional police conduct. This is why the Sixth Amendment guarantees the accused the right to confront witnesses against them and to compel witnesses in their favor. It's why due process and statutory discovery laws require prosecutors to disclose evidence to the defense. It's why the Supreme Court has directed judges deciding whether to admit scientific and technical evidence to consider whether the evidence has been subject to peer review. Fair and open proceedings, in which evidence is subject to robust and independent adversarial scrutiny, are necessary for accurate, accountable and legitimate criminal investigations and prosecutions.

These transparency commitments are all the more important in the emerging age of AI, because we are together facing a new form of evidence that by its nature cannot be scrutinized with the tools that parties have traditionally used, such as cross-examination and classic detective work. In turn, allowing AI to enter criminal courtrooms without sufficient scrutiny is dangerous. While AI has the potential to improve law enforcement efficacy, it can also cause harmful errors with high-stakes consequences for life and liberty. For instance, erroneous AI face recognition hits and gunshot alerts have led to multiple alleged wrongful arrests and imprisonment. A recent study found that a state-of-the-art AI photo forensic program performed worse than regular people with no special training, and two DNA analysis forensic software programs came to divergent results about whether a defendant's DNA was included in a crime scene sample in a homicide case. We need fair and open proceedings to expose these kinds of costly mistakes and discrepancies.

Unfortunately, some vendors block independent review that might expose flaws in their AI products. For instance, consider probabilistic genotyping software: the GAO, the PCAST report and NIST all expressed concern about the lack of independent review of these tools. Yet when fellow academic researchers and I sought to purchase a research license to study one of them, the vendor, a company called Cybergenetics, told us Cybergenetics does not provide research licenses. Representatives of this company have testified under oath in courts across the country that their product is subject to a peer review process and hence its output should be admissible in criminal cases. But when we actually tried to perform independent research into quality assurance and validation, the company stopped us from doing it. Congress should require vendors to allow independent audits by clarifying that AI tools used in the criminal legal system must be subject to independent review.

There's an important role for Congress here: to authorize federal grants for law enforcement agencies to purchase AI systems only if the vendors make them available for auditing by research groups with no stake in the outcome. In another obstacle to fair and open proceedings, some vendors have also relied on a so-called trade secret privilege to refuse to disclose details about how their technologies work to criminal defendants and expert witnesses, even under a protective order, and even in capital cases where the risk of error is wrongful death. This should not be happening. We need criminal defense counsel and experts, as well as judges and prosecutors, to scrutinize AI systems to ensure accuracy and fairness. There's no good reason for trade secret law to block this crucial process. Once again, there's an important role for Congress: to clarify that no trade secret privilege exists in federal criminal cases, and relevant evidence must be disclosed under a protective order. Thank you for the opportunity to testify here today.

Sen. Cory Booker (D-NJ):

Thank you for the strong testimony. I'm here for the duration, so I'm going to go to Senator Butler. Then we're going to come back and go to the Ranking Member Cotton. Then we're going to go to Whitehouse and, unless nobody else comes, I will follow up at the end. But we'll go to Senator Butler, representing the great state of California.

Sen. Laphonza Butler (D-CA):

Thank you, Senator Butler. You can be

Sen. Cory Booker (D-NJ):

Senator Butler too. Hey, Butler's a kick up.

Sen. Laphonza Butler (D-CA):

It is, Senator Booker. Thank you so much, and to our ranking member, for hosting such an important conversation. I will skip the preface and really jump right into what I think is a very important topic to examine. Dr. Howard, if it's okay, I'll start with you, and I'll try to land in my constituents' basket there. But Dr. Howard, you mentioned in the three recommendations that you would invite Congress to act in this space of increased transparency. I'd like to hear you talk a little bit about what kinds of transparency you think are needed. I am going to direct this question to Professor Wexler as well, but I'd love to start with you and just sort of hear your thoughts there.

Karen Howard:

Certainly. In making our policy options for policymakers, we try not to be too prescriptive; we want to leave room for the policymakers. But some of the things we heard about from experts we spoke with include being transparent about the training databases. For example, how representative are they? Where are they drawn from? Are they representative of the kind of evidence that might be collected at a crime scene, the sort of print quality or photo quality that might be expected? Also, increased transparency about whether they've been tested, and how, and what were the results of those tests, so that everybody understands how the algorithm performs under ideal conditions and under less than ideal conditions, which are often present in a criminal investigation, and can then make a determination about how much weight to give the evidence that is produced by the algorithm.

Sen. Laphonza Butler (D-CA):

And then Professor Wexler, what would you add?

I know there are a couple specific areas that you also recommend increased transparency. You ended your testimony in the space of auditing. What other recommendations would you add there?

Rebecca Wexler:

Thank you for the question. I would add that the right test is the baseline relevance test. So by default, criminal defense counsel is entitled to discover relevant evidence on a case-by-case basis. That's the threshold that they should have to show in order to get access to any details about evidence used against their client. The problem is when that threshold burden is raised to a necessity burden, for instance based on some so-called secrecy interest. And the secrecy interest, if it's intellectual property, is not a legitimate one to do that.

Sen. Laphonza Butler (D-CA):

So stay there for just one second, because I did want to talk just a little bit about public defenders. And we have all seen the data, particularly in metropolitan communities around the country, on the overworked nature and caseload management of public defenders. You talk a lot about the potential challenges, the need for transparency. Share just a little bit about how the failure to disclose the usage of AI might enter the consideration or the management of the case workload for public defenders.

Rebecca Wexler:

Sure. So I have two thoughts on this. One is that because AI systems are used across many different cases, even if you have a public defender, or even private defense counsel, who's particularly well-resourced, say in a centralized office with enough counsel to have specialized counsel focusing on DNA or other forensic technologies, if disclosure goes to everybody and those well-resourced counsel are able to identify flaws or weaknesses in the technology, those identifications benefit everyone as a whole. So even though not every counsel will have the bandwidth to address it, disclosure is still beneficial. And I can give a concrete example of where disclosure was beneficial in one case, if that would be helpful. The Office of the Chief Medical Examiner of New York City created a forensic software program called FST to analyze DNA evidence. And for years, they refused to disclose source code for that tool, claiming they had a trade secret interest in withholding it. In one case, they finally were ordered to disclose by Judge Valerie Caproni, former general counsel of the FBI, in the Southern District of New York. And the defense expert witness who reviewed it in that case uncovered an undisclosed function that discarded data in certain circumstances and had been added after the New York State Forensic Science Commission's regulatory approval for the tool. So that discovery happened in an individual case and was beneficial. It was useful information for many other defense counsel as well.

Sen. Laphonza Butler (D-CA):

Thank you so much and thank you Mr. Chair.

Sen. Cory Booker (D-NJ):

Thank you very much, Senator. Ranking Member Tom Cotton.

Sen. Tom Cotton (R-AR):

Mr. Aguilar, you mentioned tip lines like Crime Stoppers in your opening testimony. You also said last year in an interview that your department treats facial recognition technology like a tip that is called into Crime Stoppers. Is that correct? Do you recall that?

Armando Aguilar:

Yes, Senator.

Sen. Tom Cotton (R-AR):

So in your career as a police officer, have you ever gotten erroneous tips from Crime Stoppers hotlines?

Armando Aguilar:

At least tips that we couldn't corroborate with other evidence? Absolutely, Senator.

Sen. Tom Cotton (R-AR):

Okay. Have you ever considered eliminating Crime Stoppers hotlines because you get erroneous tips?

Armando Aguilar:

Absolutely not.

Sen. Tom Cotton (R-AR):

Okay. You're not testifying that, say, facial recognition technology or ballistic technology is flawless, are you?

Armando Aguilar:

I'm not.

Sen. Tom Cotton (R-AR):

Okay. Let's say that you have facial recognition technology and you run a security camera image through that technology, and it gives you back seven potential positives. What are the next steps you take? Go out and arrest all seven individuals?

Armando Aguilar:

Absolutely not, Senator. We would treat the matches as if we had just received seven Crime Stoppers tips, and our detectives would do their due diligence and either try to discount or corroborate it with other evidence that could tie that person to the crime scene.

Sen. Tom Cotton (R-AR):

Using other technology, or even artificial intelligence technology, or using traditional gumshoe investigative techniques?

Armando Aguilar:

All of the above, Senator. This is absolutely not a substitute for traditional investigative methods. It complements traditional investigative methods, but in no way can AI, at least in my view, substitute for traditional methods.

Sen. Tom Cotton (R-AR):

Okay. What would be the consequence if you did not have this kind of technology in your department?

Armando Aguilar:

I am very certain that we would have many more crimes that would go not only unsolved, but there's an additional consequence to having unsolved crimes. And that is that those criminal suspects are allowed to continue to victimize other people. And so it's not just a lower clearance rate, but it's also a higher victimization rate.

Sen. Tom Cotton (R-AR):

Okay. Ms. Wexler, one of your chief points seems to be that the vendors who provide such technology should be compelled to disclose their source code. Is that correct?

Rebecca Wexler:

If the source code is relevant evidence in a particular case, then that should be the baseline standard, and it should be disclosed under a protective order that protects the intellectual property interest to a reasonable degree.

Sen. Tom Cotton (R-AR):

Could you give examples again of when that would be relevant evidence? Because as Mr. Aguilar said, if you get seven positives from facial recognition technology, that itself is not going to be what cinches the case or gets a conviction, or perhaps is even admitted into court. It's going to be the follow-on investigative techniques that his detectives use.

Rebecca Wexler:

Sure. So relevance means any evidence that makes a fact at issue in a case more or less likely than it would be without the evidence. So it's actually a pretty low bar. I tell my students a brick is not a wall. So evidence doesn't have to disprove the whole case in order to be relevant; it just needs to weaken or strengthen one element of it. So any information about how a forensic technology works that could expose unreliability of a particular match would be relevant.

Sen. Tom Cotton (R-AR):

Let's use something a little more high-tech than, say, facial recognition, which we all engage in every single day. Let's say ballistic matching. If you have AI-assisted ballistic matching, is the source code of that technology what's relevant, or is it going to be the testimony, the human testimony, that follows on that, presumably from dueling witnesses that the state uses and that the defense uses? Why does the source code matter once you have those dueling witnesses, is the follow-on question.

Rebecca Wexler:

So various aspects of the technology might be relevant in a particular case. The source code might not always be relevant, but it could be in a particular case. Even if an AI technology's conclusions are then reviewed by a human witness, the AI results may have biased the human witness, so it'd be important to understand if they were flawed. AI results in face recognition technology may produce alternate candidate lists that could be exculpatory evidence that would need to be disclosed to the defense. So those are examples.

Sen. Tom Cotton (R-AR):

Why do you think these vendors don't want to disclose their source code to you and your fellow law professors, or to criminal defense attorneys?

Rebecca Wexler:

So to be clear, my fellow academic researchers,

Sen. Tom Cotton (R-AR):

And y'all wanted a license, why do they not want to give you a license to use it? And why do they not want to disclose it to criminal defense attorneys?

Rebecca Wexler:

My personal view, and of course I don't have insight into their own heads, is that they're concerned about subjecting their products to robust adversarial scrutiny because that scrutiny might expose weaknesses or flaws in their tools.

Sen. Tom Cotton (R-AR):

And these products have undergone testing by, say, our own Department of Justice, haven't they?

Rebecca Wexler:

Some of the products have undergone testing, and some of them less so. In terms of probabilistic genotyping software programs, again, the GAO, NIST and the PCAST report have all identified concerns that much of the testing was not performed by independent researchers. And most recently, NIST identified that some of the published validation studies don't have enough publicly available data to be independently verified.

Sen. Tom Cotton (R-AR):

Mr. Aguilar, do you have any thoughts on why these vendors might not want to disclose their source code to criminal defense attorneys or provide research licenses to professors?

Armando Aguilar:

Senator, I cannot speak for the vendors. I can only imagine that there's at least some concern over intellectual property, but I certainly cannot speak for the vendors.

Sen. Tom Cotton (R-AR):

Okay. And Ms. Wexler, you keep saying protective order there. You're talking about disclosure of source code in a criminal proceeding, right?

Rebecca Wexler:

It could be disclosure of any aspect of the technology. So these companies have actually alleged that trade secret privileges should apply to all sorts of aspects of these tools, including things like user manuals. So it's not limited to source code. But I will say that courts order disclosure of intellectual property under reasonable protective orders all the time. It's a common solution in civil cases, and if it's effective there, it should also be deemed sufficient in criminal cases, where the stakes are life and liberty. And I'll add that protective orders give intellectual property holders more safeguards than the non-disclosure agreements that companies usually disclose the same information under when they share it with their employees or with business negotiators. So with the protective order, if there were to be a leak, a rare instance (there's not evidence that protective orders are generally or commonly violated), but in the rare circumstance where that might happen, a trade secret holder would still be entitled to sue for misappropriation of their intellectual property. And on top of that substantive trade secret guarantee, there would be the additional guarantee of criminal or civil contempt of court, plus the possibility of disciplinary sanctions if it were an attorney who violated the protective order. So given those safeguards, trade secret law does not provide a good reason to withhold relevant evidence in a criminal case.

Sen. Tom Cotton (R-AR):

Mr. Aguilar, in your long history of criminal proceedings, has a protective order ever been violated?

Armando Aguilar:

No instances come to mind, Senator.

Sen. Tom Cotton (R-AR):

Okay, thank you.

Sen. Cory Booker (D-NJ):

Thank you for that really thorough questioning, Senator Whitehouse.

Sen. Sheldon Whitehouse (D-RI):

Could I ask each of you to consider, in the law enforcement arena, what you believe to be the most promising use of AI that we in Congress should consider encouraging, and what you consider to be the most dangerous use of AI that we should be more closely monitoring? Just go right across, if you don't mind, from Aguilar to Howard to Wexler.

Armando Aguilar:

Senator, it's very difficult to list any one that has the most promise, and I can cite several examples where we've successfully solved everything from violent crimes to property crimes using a number of AI tools. I think that the greatest threat comes in the absence of sound policy, and that applies to any tool that law enforcement uses. So again, sometimes we may think of AI as something new and a challenge that we've never faced, but the reality is that law enforcement has for many years had access to sensitive information through driver's license databases and criminal records, and it's the policy around those systems that really safeguards the public. The same can be said for other tools that we use, our weapons, our vehicles; we don't rely on manufacturers to write our deadly force policies or our driving policies. So I think that the absence of policy would pose the greatest threat, and Congress should be looking at a number of AI tools that are right now making our communities safer.

Sen. Sheldon Whitehouse (D-RI):

Dr. Howard.

Karen Howard:

The only AI-enabled tool that we examined in any detail was facial recognition. But I would say, broadly, the most promising AI tools are those that are trained on a reliable and representative dataset that is well understood, and that is...

Sen. Sheldon Whitehouse (D-RI):

Like ballistic information.

Karen Howard:

And we have not studied that one, so I can't affirm that. I'm sorry.

Sen. Sheldon Whitehouse (D-RI):

There's plenty of it though.

Karen Howard:

And that also have a well-trained analyst and investigator using them. Because, remember, every AI tool is the middle piece of an evidence process. Somebody collects crime scene evidence, prepares it for use, determines whether it's of suitable quality to be inserted into the algorithm, and then assesses the output that comes out the other end and decides what to do about it. So that well-trained analyst is key. The riskiest AI tool would be one where the training dataset is not understood, not representative, and it's being handled by somebody who really doesn't understand what the technology is and isn't telling them in the output.
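
Dr. Howard's point that the algorithm is the middle piece of a longer evidence process can be made concrete with a small sketch: quality gates sit on both sides of the algorithm. Every function name and threshold below is invented for illustration.

```python
# Hypothetical evidence pipeline with the algorithm as the middle piece.
# All names and thresholds are invented for illustration.

MIN_QUALITY = 0.7  # assumed minimum evidence-quality score

def assess_quality(evidence: dict) -> float:
    # Stand-in for an analyst's suitability determination on the way in.
    return evidence.get("quality", 0.0)

def run_algorithm(evidence: dict) -> list[str]:
    # Stand-in for the comparison algorithm; returns candidate record IDs.
    return ["rec-007", "rec-019"]

def analyst_confirms(candidate: str, evidence: dict) -> bool:
    # Stand-in for human verification of each algorithmic candidate on the way out.
    return candidate == "rec-007"

def process(evidence: dict) -> list[str]:
    if assess_quality(evidence) < MIN_QUALITY:
        # Gate 1: low-quality input never reaches the algorithm.
        raise ValueError("evidence unsuitable for algorithmic comparison")
    candidates = run_algorithm(evidence)
    # Gate 2: output goes to a trained analyst, not straight to a conclusion.
    return [c for c in candidates if analyst_confirms(c, evidence)]

print(process({"quality": 0.9}))  # ['rec-007']
```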

Sen. Sheldon Whitehouse (D-RI):

Thank you. Professor Wexler.

Rebecca Wexler:

I think that it would be promising for Congress to support AI that is aimed at uncovering exculpatory evidence. One of the concerns that I have about the markets for AI tools is that most of the paying customers are law enforcement, and defense experts and defense investigators don't have the same money to throw at the problem. And this may bias the development of technologies in favor of identifying evidence of guilt rather than identifying evidence of innocence. So any support Congress could give to AI technologies designed to identify evidence of innocence would be very promising. And my biggest concern is technologies that are kept secret and withheld from adversarial review by the vendors.

Sen. Sheldon Whitehouse (D-RI):

And Mr. Aguilar, you might've mentioned this, but with your experience in law enforcement, have you had occasion to use any kind of AI technologies with ballistics information, with cartridges, spent bullets?

Armando Aguilar:

Absolutely, Senator. So with our partnership with ATF, we do have an in-house ballistics program, a crime gun intelligence center, and it's one of many tools that we employ to investigate and prevent violent crimes. It's certainly been helpful.

Sen. Sheldon Whitehouse (D-RI):

Is there an AI component, though? Because before there was AI, there were still computer programs that compared, for instance, the markings on an ejected cartridge or the markings on a spent bullet to other markings, in the same way that fingerprint searches take place, not so much with modern AI as just computer databases.

Armando Aguilar:

So our definition of AI sometimes changes, right? As technology becomes more mainstream, we sometimes think of it as less AI than newer technologies. But if we look at ballistic evidence, it's compared through an automated system in a database, and it makes connections through the automated process that later have to be verified by a human analyst. So in that sense, I certainly consider it AI, as it has to be human-verified.

Sen. Sheldon Whitehouse (D-RI):

Thanks very much, chairman.

Sen. Cory Booker (D-NJ):

Thank you very much. I'm not sure who was here first. Padilla? Padilla. California is well-represented today. Senator Padilla.

Sen. Alex Padilla (D-CA):

Thank you, Mr. Chair. Speaking of California, it was recently reported that a local police department in my state used AI to render a 3D image of a suspect from crime scene DNA, and then attempted to run that image through facial recognition software in an effort to identify a suspect. Now, this is the first reported case of law enforcement attempting to use facial recognition technology, which has questions in and of itself, on an AI-generated image. The incident first occurred actually in 2020; it's been a couple of years, but the public did not become aware of it until it was reported this week. There appear to be transparency questions and concerns about how AI tools are being used by police and prosecutors during criminal investigations. And as I said, that's just one example. First question is for Dr. Howard. What steps should be taken to ensure that defendants and the public are aware of when AI is being used in criminal investigations?

Karen Howard:

Part of our policy option calling for transparency would envision a scenario where it is revealed when algorithms have been used in one form or another. And to speak to Mr. Aguilar's point, what we now think of as automated technologies was originally considered AI; what most people call AI now is machine learning, which is trained on training data. We believe that transparency step is important: people should be aware when an algorithm has been used in one form or another as part of the evidence collection and assessment process. We do believe, however, it's very important to validate the results of those methods. And in this case, the articles that I've read have not indicated whether any validity testing has been conducted on the reconstruction process for the face, and then whether those faces can be reliably run through even the best algorithms. We've not seen any data indicating that those have been tested.

Sen. Alex Padilla (D-CA):

Exactly. So it brings up a whole host of questions. Follow-up question for Professor Wexler, welcome from California. How critical is transparency for fostering a positive relationship between citizens and law enforcement, and for ensuring just results in criminal proceedings?

Rebecca Wexler:

Transparency is crucial for a positive relationship between citizens and law enforcement and just results in criminal proceedings. Transparency is one of the key functions of the adversary system. We have an adversary system to enable the criminally accused to scrutinize and contest the evidence against them so that we can ensure that law enforcement's use of that evidence is effective, accountable, and legitimate.

Sen. Alex Padilla (D-CA):

Now a question on a separate topic. Last September, the GAO released a report showing that several federal law enforcement agencies, including the FBI and DEA among others, did not require their agents to be trained on protecting civil liberties and rights when using facial recognition software. As I mentioned earlier, facial recognition software raises a whole host of questions in and of itself. But this lack of training is troubling given that the technology has been shown to produce biased results, particularly when it involves identifying Black and brown persons. Back to Dr. Howard: since the release of this publication, have steps been taken to ensure that federal agents are trained on how to responsibly use facial recognition technology?

Karen Howard:

We are not aware of any changes as a result of that report. That doesn't mean they haven't occurred; we just have not been able to follow up with the agency to see whether they have.

Sen. Alex Padilla (D-CA):

So then the bottom-line question is: what recommendations do you have for this committee, for the Senate, for Congress, to advance this?

Karen Howard:

One important feature of facial recognition is the need to test and verify the accuracy and the demographic bias, and this is one of the things that NIST does in its testing. But NIST does not exhaustively test every algorithm; it tests the algorithms that are submitted to it for testing. So one possibility would be to require algorithms to be tested through an independent third party such as NIST if they're going to be used for federal law enforcement purposes. That would add that measure of third-party, independent accountability. If you're using the best tools, can an error still occur? Of course; human beings make lots of errors as well. And many of the most highly publicized errors with forensic algorithm tools have actually been traced back to errors the human analysts made, for instance, submitting evidence that was not of high enough quality to the algorithm, which is then unable to give a reliable result.
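To make concrete what this kind of third-party testing measures, here is a minimal, hypothetical sketch in Python of a per-demographic error audit. The data fields, group names, and numbers are invented for illustration; NIST's actual facial recognition vendor testing methodology is far more extensive.

```python
# Hypothetical sketch: measuring false match rates per demographic group,
# the kind of disparity an independent tester looks for. All data invented.
from collections import defaultdict

def false_match_rates(trials):
    """trials: iterable of (group, is_same_person, algorithm_said_match)."""
    attempts = defaultdict(int)  # impostor comparisons seen per group
    errors = defaultdict(int)    # impostor comparisons wrongly called a match
    for group, same_person, said_match in trials:
        if not same_person:      # impostor pair: any declared match is an error
            attempts[group] += 1
            if said_match:
                errors[group] += 1
    return {g: errors[g] / attempts[g] for g in attempts}

# Invented example: group_b's false match rate is five times group_a's,
# exactly the demographic bias that independent testing would surface.
trials = ([("group_a", False, True)] * 10 + [("group_a", False, False)] * 990
          + [("group_b", False, True)] * 50 + [("group_b", False, False)] * 950)
print(false_match_rates(trials))  # {'group_a': 0.01, 'group_b': 0.05}
```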

Sen. Alex Padilla (D-CA):

Right. And do you have confidence that NIST, or whichever federal entity ends up with this responsibility, has the capacity to determine guidelines for testing and certification of algorithms?

Karen Howard:

We've not looked at that directly, but we do know that NIST is often tasked with the job of setting standards for such things, and it would seem to have that capacity.

Sen. Alex Padilla (D-CA):

That's the flip side, right? You can test all you want, but what are you testing for, and what determines what passes and what doesn't? Okay, thank you. Thank you, Mr. Chair.

Sen. Cory Booker (D-NJ):

I think when you take two parts California and one part Georgia, that is New Jersey. So with that, Senator, you have to bake at 350 for about 45 minutes. Senator Ossoff, thank you very much. Please proceed.

Sen. Jon Ossoff (D-GA):

Thank you for convening us, Mr. Chairman, and thank you to our witnesses for your expertise. Beginning with you, please, Professor Wexler: what technologies or practices are being developed, or might need to be developed, to make criminal investigations, law enforcement activity, and civil litigation resilient against synthetic, fabricated evidence? There's much discussion about voice cloning and the rapid advance in that technology. You can imagine that being provided by a malicious third party to law enforcement to prompt a criminal investigation of a targeted individual, or other cases where such technology could be used to frame or improperly implicate somebody. How do you guard against that risk, and do you think it's right to think of that as a risk?

Rebecca Wexler:

I think it's absolutely a legitimate concern, and it raises the question of whether our current legal system and current evidence rules are up to the task of maintaining accuracy and adversarial review with new technologies like deepfakes available. So when AI-produced evidence or AI-analyzed evidence comes into court, risks of error are present just as they are with human witnesses. And yet one of the new challenges is that some of the traditional safeguards we have for cross-examining human testimony don't apply to machine-generated outputs. It's very difficult to put an AI output on the stand under oath, cross-examine it, and observe its demeanor; that's not going to work. So the technology, I would say as a lawyer, that we need to develop to ensure proper safeguards is the legal technology of robust discovery and transparency rules, and perhaps more pretrial testing access to these technologies for validation and scrutiny, rather than simply relying on the current evidence rules as they exist.

Sen. Jon Ossoff (D-GA):

Where AI is used for purposes of investigative analysis, transparency and discovery through the adversarial process could be sufficient. But what about verification? The verification of the veracity or authenticity of evidence that's presented in court?

Rebecca Wexler:

So for evidence presented in court, or…

Sen. Jon Ossoff (D-GA):

Let's say it's presented as a predicate to open a criminal investigation, prior to even arriving, or perhaps never arriving, in court.

Rebecca Wexler:

Excellent. Right, as an evidence professor, I always have to tell my students the evidence rules don't even apply if you never get to court. So what are you going to do? There, I would say, we really need independent researchers with no stake in the outcome to be able to have access to these technologies to test and scrutinize and validate them. And so one of the things that Congress could do here is require, or incentivize through the federal grants program for law enforcement assistance, that AI technologies used in the criminal legal system be open to peer review or scrutiny by independent researchers with no stake in the outcome. And that will help with that verification piece even before we get to trial.

Sen. Jon Ossoff (D-GA):

And thinking about the rules of evidence and discovery: if synthetic, inauthentic evidence were used to open a criminal investigation, the subsequent progress of the investigation yielded for criminal investigators sufficient evidence to prosecute, and the initial underlying inauthentic, synthetic evidence that was used to justify the progress of the investigation is never presented in court, would a criminal defendant ever have the means of knowing that at the very beginning of the process that led them to this position was something artificial or synthetic?

Rebecca Wexler:

Under current law, I don't believe there would be a requirement to disclose. There's not a clear path for that, and that's a concern. There's a process called parallel construction, where law enforcement may undertake multiple investigations and present one path toward finding evidence in court without disclosing the other paths. So there may be an opportunity and a role here for Congress to mandate affirmative disclosures of investigative strategies.

Sen. Jon Ossoff (D-GA):

And that's a concern as well, isn't it, where foreign intelligence information may be the root of a criminal investigation, but through parallel construction, sufficient evidence is developed by investigators and prosecutors such that the investigation's origin in foreign intelligence is never disclosed to the court or to the defendant.

Rebecca Wexler:

There certainly are concerns about that. Yes.

Sen. Jon Ossoff (D-GA):

Dr. Howard, any thoughts on verification of evidence? Thinking about deepfakes, voice cloning.

Karen Howard:

So we've done limited work on deepfakes; it was not a part of the work I'm testifying about today. The current state of the technology is able to detect deepfakes, but it sometimes takes time; it's not necessarily a quick process. With deepfakes, that can be an issue if something is out in the public causing harm and we can't identify it quickly enough. In the cases you're talking about, criminal investigations can take a long time, and prosecution can add time to that as well. So one would hope the technologies would allow us to identify synthetic evidence before it were brought to court, for example in the case that Professor Wexler might be hypothetically talking about. But in any kind of fraud-type scenario, the tools are almost always a step behind the fraudsters: they're creating some new way to do something, and those who are trying to detect it are just a step behind them.

Sen. Jon Ossoff (D-GA):

One more question. Or, as Professor Wexler noted, the underlying synthetic evidence may never be brought into court, may never be scrutinized, may never be subject to verification. Dr. Howard, is it your impression, have you done any study, do you have a sense of whether the technology for verification, for detection of synthetic evidence such as voice cloning, is expected to stay ahead of the capacity for synthesis and production of fakes? Or are we expecting instead that the apparent authenticity of deepfakes will become so sophisticated, so advanced, that it defies verification?

Karen Howard:

We don't have any direct evidence either way, but historically speaking, the detection tools are normally just a step behind the new ways of perpetrating fraud. One might assume the same would occur here, but we have no evidence of that in either direction. We don't see any evidence at the current time of something being created that's totally undetectable; it just may take time.

Sen. Jon Ossoff (D-GA):

Thank you. Thank you Dr. Howard. Thank you all.

Sen. Cory Booker (D-NJ):

Thank you very much. Professor Wexler, I really do question whether Senator Cotton has seen WALL-E, but otherwise I actually think he's really intellectually rigorous. I don't always agree with his conclusions, but he's an intellectually honest person, and I think where he was going is really substantive, which is this idea about the AI technology companies we're talking about, of which you rightfully asked for adversarial scrutiny and peer review, and how urgent it is for us to get over this trade secret privilege. I guess the concern that I think he was getting at was that those companies do have a legitimate interest, and it might be cynical to say that they're just interested in being able to continue to sell their product. But you had a good retort, and I want to scratch at it a little bit more, that those companies get more protection by opening themselves up to that scrutiny. Could you just explain that to me a little bit more?

Rebecca Wexler:

Sure, absolutely. So substantive trade secret law is a body of intellectual property law that strikes a negotiation between the public interest in accessing information and the goal of incentivizing innovation by providing property rights to the owner. And so there are limitations all the time on substantive property rights: patent law has an expiration date, and trade secret law has some limitations as well that are part of that policy balance. The limitation of trade secret law is that you only get a right against misappropriators, which means wrongful acquisition, use, or disclosure. And my position is that cross-examination in court is not misappropriation. So the only time the intellectual property interest would kick in is if information is properly disclosed under a protective order, subject to cross-examination for purposes of a criminal case. That limited disclosure could be attorney's-eyes-only or experts-only. The only reason a company might have a legitimate concern is if that protective order might leak. And what I was trying to explain, and I'm happy to have another opportunity to elaborate, thank you, is that in that unlikely event, and there's no evidence that this happens regularly, in the unlikely event that a trade secret were to leak while subject to a protective order, all of the substantive trade secret protections are still available to the intellectual property owner. They can sue for misappropriation. And there are even more safeguards than normally exist outside of court, because you have criminal contempt of court charges.

Sen. Cory Booker (D-NJ):

Okay, so I understand that, and that's really, I think, a substantive point. So to the extent you can say: you don't see any real business or pecuniary jeopardy to a company exposing its algorithms or its processes to peer review. You really can't think of any other business threat to them, because clearly they're getting some additional safeguards. Is that what you're saying?

Rebecca Wexler:

Correct. There's no legitimate business concern with disclosing the information for cross-examination under a protective order for the limited purpose of ensuring accuracy in a criminal case.

Sen. Cory Booker (D-NJ):

And then what about a prosecutorial concern? In all sincerity, could there be a concern amongst prosecutors?

Rebecca Wexler:

I believe that the trade secret privilege should not be available to either party to claim in a criminal case so that this would be a parallel and equal requirement of disclosure.

Sen. Cory Booker (D-NJ):

Okay. Let me take off my Senator Cotton hat; he does a much better job at being Senator Cotton than I do, although I can ask the staff, maybe, whether I'm stepping up to the plate. And now let me ask one of my concerns, which is that 98% of criminal convictions are obtained through plea bargaining; it has nothing to do with trial. And I'm wondering about this element, and Senator Cotton did a good job, I think, of bringing out this fact as well, as Chief Aguilar said: well, the AI gets me to a point, but then there are my analysts and my experts and my good gumshoe police work. I need to know where that word gumshoe comes from. Do they really have gum on their shoes? Explain this to me someday, please. But the argument is: I'm not relying solely on the technology; this has now been reviewed by expert human witnesses. So that renders your desire to break open my algorithm and all of that moot, because now it only led me to knowing it's one of seven people. Why should I be subject to that kind of scrutiny now?

Rebecca Wexler:

Great. So I have two responses. The first is to say that defendants or prosecutors are only entitled to discover relevant evidence. So if information truly is moot, in other words, if there's no possible way that disclosing the information could render a fact at issue more or less likely than it would be without it, it's not discoverable. It's already protected; you don't get it. So we're only talking about relevant information. My second point is that there may be all sorts of circumstances where it could be relevant to discover information about the algorithm despite the fact that a human has reviewed it after the fact. I certainly can't anticipate all those circumstances on a case-by-case basis, but I can think of a few. An example would be if the algorithm, say a face recognition system, were known to the analyst who's reviewing it, and potentially biased that analyst toward overconfidence in their conclusion. You'd want to know then whether it was overconfidence: how good was that algorithm, were they right to trust it so much? Another example might be that the algorithm could have produced alternative exculpatory information that wasn't disclosed by the human analyst because it didn't fit with the analyst's own conclusions. And that could help as well.

Sen. Cory Booker (D-NJ):

Now, brilliant points. And so the last thing I would ask you is, you said you give your law students this: the brick is not the wall. Can you explain that to me again? I went to Yale Law School, which, as you know, is an inferior law school compared to Berkeley. But go ahead.

Rebecca Wexler:

Yes, thank you. I don't even know where the phrase comes from, but "a brick is not a wall" means relevance is a relatively low bar. It doesn't mean nothing, but it's not an onerous threshold like showing that this piece of evidence is going to prove your whole case. No, it just means you get information, and you get to introduce it in court, if it will make a fact at issue ever so slightly more or less probable. And then with all those individual pieces of information, all those little bricks that are each relevant on their own, you build a case.

Sen. Cory Booker (D-NJ):

Right. And then finally, there's a great book, Why Innocent People Plead Guilty, about how defendants face such an enormous stacking of criminal charges and the enormity of it all. And I guess we could imagine a situation where a prosecutor is sitting there saying: take this deal, we'll give you five years, or else you're going to face 50, because we've got this evidence based upon AI. It's a very intimidating moment for an overworked, as one of my colleagues was saying, defense attorney. Is that a concern of yours?

Rebecca Wexler:

Absolutely a concern; I absolutely agree. And I'll just add to this concern that many indigent defense counsel may not have the resources to challenge the AI. But also, for individual defendants, the cost of fighting the case even when they maintain their innocence, the risk of losing your job, of losing custody of your children, your home, could be so great that it is perfectly rational to plead guilty even when you haven't committed the crime.

Sen. Cory Booker (D-NJ):

Alright. Chief Aguilar, it's interesting: as an African American who lives in an African American community that's highly impacted by crime, you have these two dueling reactions. One is, you see how unfair the criminal justice system is. I grew up in a predominantly white area, went to Stanford and Yale, saw lots of drug use, the enforcement of which was nil to nothing. But in communities of color like mine, you see a tremendous amount of drug enforcement. And then on the flip side, you see all these crimes, as you were eloquently putting it, going on in your neighborhood that aren't being solved. So you have this almost doubled-down discontent with your public safety, and that's a problem. As one great criminal justice writer put it, Black communities have too much of the policing they don't want and not enough of the policing that they need, in terms of, as we said, the closeout rates of murders and shootings and the like.

But your story is extraordinary, so I was so excited that you came as a witness today, because you are working so hard to restore that trust within communities around that technology. And I saw this where the ACLU was dead set against me using camera technology and ShotSpotter; we invited them in to help us write the operating procedures. And then I saw how much my community wanted this technology on their streets: you don't live here, we want cameras on our streets, and the like. So I guess this tension that I've lived my whole life is something you struggle with every day, and technology brings a whole new era of challenges to something that's so important for law enforcement, which is police-community trust. And so I'm hoping, with that context, could you tell me what your hope is, amidst all this, for restoring the kind of trust needed for communities to be safe? And then, what is the excitement that you have, as you see the future of this technology, to even better help the most impacted communities have the kind of safety and security every community deserves?

Armando Aguilar:

So Senator, to your point, absolutely. At least through my experience, in the communities that are the most affected by gun violence, I have never heard anyone say: we need less of this. They want everything that we can throw at the problem so that their children stop dying, right? That's the overriding concern in the communities that are most affected by gun violence. And so what I've found is that these tools have helped us home in on those people who are the drivers of repeat gun violence. We know through numerous studies that it's usually about 5% of our gun offenders that are driving 50% of the shootings. So AI has given us the opportunity, along with traditional policing methods, where we employ micro-hotspot policing and focus on repeat offenders. We saw, for example, a disturbing trend in some domestic homicide cases where those incidents were preceded not by felony incidents of domestic violence, but by misdemeanors that perhaps we couldn't get to quickly enough.

And so we found a way to identify those people who were repeat offenders for domestic violence and were also carrying out other crimes, so that when we're talking about carrying out enforcement that's preventative in nature, we're not targeting entire communities so much as we're targeting those 5% of offenders and 5% of locations that drive 50% of our violent crime. Numerous studies have also shown that the more police officers we have out there, the more human resources, the more positive our impact on crime rates. But it goes beyond that: we also have to embrace a lot of these technologies that help us focus on the right people who are driving violence in our communities.

Sen. Cory Booker (D-NJ):

Technology is clearly a force multiplier. It allows one individual to do a lot more work a lot more quickly. And you've been a homicide investigator, so you understand that.

Armando Aguilar:

Yes. These technologies can cut out hours, even weeks' worth of time in certain instances. Where we're talking about video analytics, we can take several weeks' worth of footage and compress it to those points that are most relevant and of evidentiary value. Absolutely.

Sen. Cory Booker (D-NJ):

And then, is there anything on the frontiers that gets you really excited about where you think policing might be with these tools five years, 10 years from now?

Armando Aguilar:

I think that a lot of these technologies, just like I mentioned earlier with ballistic evidence, where we sometimes have a hard time even considering that artificial intelligence, are going to become more mainstream. With facial recognition, as an example, we use it in our daily lives just to get into our phones several times a day. And so I think that these technologies hold a great deal of promise, and I think they become smarter and more accurate with time. We just have to put behind them the responsible policy and the right amount of research. The NIST facial recognition vendor testing, for example, is an excellent resource. If Congress were to consider federal funding for some of these technologies, comparing them against resources that are available, like the NIST vendor testing, would be greatly helpful.

Sen. Cory Booker (D-NJ):

What about training and accountability of police officers? I mean, my police union resisted body cams, and then it shifted really quickly; they appreciated them, because for citizen complaints it was very good video to have. My former director now thinks all that body cam footage is great for training, because you can break down officer interactions. I'm imagining that could be done with AI, looking at the volumes, the hours and hours, looking for certain patterns that might help to create more accountable policing. Have you used it at all to hold your police to higher levels of accountability and transparency?

Armando Aguilar:

So Senator, to your first point, there's never enough training, and I am always a fan of more and better training. Right now we are looking into a partnership where we will use artificial intelligence to go through those hours and weeks' worth of body cam footage to highlight not only problematic behavior but also commendable behavior.

Sen. Cory Booker (D-NJ):

Yeah. I just had a sheriff in New Jersey commit suicide; I have seen officer mental health challenges. I was just talking to a reporter about President Obama's Task Force on 21st Century Policing, and they saw a lot of potential in predictive analytics for officer misconduct. One of the indicators, actually responding to suicide calls, seemed to be something that would pop up before an officer had a negative interaction with a citizen. And I don't know if that's something you're thinking about: how AI might be able to help in that space, to anticipate which officers might need more training?

Armando Aguilar:

Without a doubt, Senator. I think that any technology that will help us flag problematic behavior, we've seen the need for it. We've seen officers whose careers have come to a very ugly end, and those endings are often preceded by problematic behavior where that officer could have benefited from early intervention. We as a police department can't correct problematic behavior that we're not aware of. And so if this technology helps us get there, I'm all for it. Absolutely.

Sen. Cory Booker (D-NJ):

And so, one last question before I move on to Dr. Howard. I do still hear this in my community: my young kids are stopped and frisked more than in any other community; this doesn't happen in wealthier, whiter communities. Now we're under video surveillance in ways that other communities are not. Now there's facial recognition that we're exposed to more than other communities. You can see how that list can go on and on and on. I wonder if people start to feel like they're under a surveillance state, or feel like they're losing their basic privacies. Can you empathize with that feeling, or is that just not your experience of what you're seeing out there at all?

Armando Aguilar:

So Senator, I think that right now video cameras are ubiquitous. I read in one place, I can't remember the exact source, that the United States is second only to China in terms of the number of video cameras that the average citizen encounters every day. The difference, of course, being that most of the video cameras that we as Americans encounter are owned by individuals or by the private sector. And so perhaps in some communities that are more prone to gun violence, some of the cameras belong to law enforcement, versus other communities where they're held by individuals in the private sector. But I do think that right now, being under some form of camera surveillance is just an accepted part of life for all communities in the United States.

Sen. Cory Booker (D-NJ):

Thank you. Dr. Howard, I was really appreciative of your best practices, and they resonated with Professor Wexler in a significant way. And one of the things I think resonates with Chief Aguilar is just this idea of training. Could you go another level deeper on that? You said how important analyst training is. What does that mean?

Karen Howard:

So we know that in order for these tools to be used properly, the analyst decisions before the algorithm comes into play are critical, as are the analyst decisions afterwards. If we think about latent print analysis, for example, just to use a different one: somebody in the human chain collects a fingerprint at a crime scene or from a piece of evidence, processes that fingerprint, scans it, decides whether it's of sufficient quality to even be able to match it to something else or to attempt to link it to somebody else, marks it up, or has a computer program mark it up, on where the key features of this fingerprint are that might be worth comparing with a database, and puts it into the algorithm. All the algorithm is doing is comparing it to a database of prints. There's nothing magical about that; that's been done for decades on computers.

Before that, it was done on cards stored in files: they would pull out the cards and compare each fingerprint. So all the algorithm is doing is speeding up that comparison process. And then at the end, it puts out a candidate list. That candidate list is nothing more than the top 10, 20, 50, however many the program is designed for or however many the analyst has asked for, of the best-matching candidates, from the best match at the top to the lesser matches as you move down the list. But that doesn't say anything about the strength of the actual evidence. The person at the top of the list might actually be a lousy match, but it's the best match in the database, and that's all the algorithm is able to do. If you have a partial print, a smeared print, or one that's distorted, it may not be able to get a very good match.

And if you were putting a percent match on it, which is not what these algorithms do, but if you were, it might say something like 35% match, and that's your number one candidate. Analysts often do not understand that. They often think: this is my number one candidate, or these are my top 10; it's got to be one of these people who came up at the top of the list. In reality, it might be somebody completely different. All the algorithm can do is find the best matches to whatever quality of fingerprint it's given, and it has to be somebody who's already in the database; obviously, not everybody has their fingerprints in every database that could be used for comparison. So it takes a lot of human understanding at the beginning about what quality of print is good enough, and at the end about what these results really mean.
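For illustration, here is a minimal sketch in Python of the candidate-list behavior Dr. Howard describes. The names and similarity scores are invented; the point is that the top of the list is only the best available match, not necessarily a strong one.

```python
# Hypothetical sketch of a candidate list: the algorithm returns the k
# best-scoring database entries, best first, regardless of how weak the
# best score is. Names and similarity scores are invented.
def candidate_list(scores, k=10):
    """scores: dict mapping person -> similarity to the probe print (0 to 1)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

# A smeared or partial print might yield only weak similarities, yet the
# candidate list is still populated, and person_1 still "tops" it at 0.35.
scores = {"person_1": 0.35, "person_2": 0.33, "person_3": 0.12}
print(candidate_list(scores, k=2))  # ['person_1', 'person_2']
```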

Sen. Cory Booker (D-NJ):

But training for local police departments could vary widely and end up producing widely different results.

Karen Howard:

That's absolutely true.

Sen. Cory Booker (D-NJ):

And so that's why one of your recommendations is to have national standards, right?

Karen Howard:

We think national standards would help to drive the conversation. If there were standards at the federal level, or standards that addressed how much is good enough, what quality of print is good enough to put into a latent print algorithm, some of those kinds of standards could start to bring more consistency to this. They would help well-meaning human investigators, who are doing their best to solve a crime, to use the tools more effectively and interpret the results more accurately.

Sen. Cory Booker (D-NJ):

Right. And how would you feel, is that overly prescriptive for the federal government? Because I'm a big believer, having run a local police department, in how cash-strapped you are, how you're constantly battling for resources. And so things like COPS grants and other grants for technology are really critical. But how would you feel if those grants came with a requirement for certain standards for your analysts and the like? Is that overly burdensome?

Armando Aguilar:

Senator, I think it's a bit of a broad question, but just generally speaking, a grant that comes with a requirement that the agency receiving it properly train, to a particular standard, the people who are going to be carrying out the function, I don't think is overly burdensome.

Sen. Cory Booker (D-NJ):

Great. And so, Dr. Howard, last question. When you say increased transparency, are you getting at some of the things that Professor Wexler is talking about, in terms of the need to open up these algorithms to peer review and to a more adversarial analysis?

Karen Howard:

So we looked at the criminal investigation portion, which I realize feeds into the courtroom, but we did not look at the courtroom processes that might occur; I want to set the stage with that. But the experts we spoke with indicated that transparency is critical, both in terms of how the algorithm was built, if it's an AI-enabled algorithm, what training data were used, but also transparency about whether it has been tested: what accuracy was determined through something like the NIST facial recognition vendor testing, and what demographic bias did this algorithm exhibit through that kind of testing? That can be very useful information, and our experts were very much in favor of it. Opening the source code, though, our experts told us is not necessary. It is a path to figuring out how the algorithm works, but it is not the only path, in the view of the experts we spoke with. Independent third-party testing can accomplish many of the same things.

Sen. Cory Booker (D-NJ):

And you noted in your report that there was a lack of sufficient independent validation. Is that often true?

Karen Howard:

Yes. The NIST tests, which are top-of-the-line, gold-standard testing, are done only if a vendor chooses to submit its algorithm for testing. There's no requirement for a vendor to do that.

Sen. Cory Booker (D-NJ):

And you're saying that in the DOJ's engagement with this technology right now, there's a lack of sufficient independent oversight?

Karen Howard:

So we didn't look at the DOJ in depth in terms of how they validate their algorithms. We did talk to a number of officials at the FBI, and they told us, and I believe this is in our report, that they worked with NIST to develop a validation protocol that they then run in-house.

Sen. Cory Booker (D-NJ):

But you haven't evaluated that?

Karen Howard:

We have not evaluated it, no. That was not part of the scope of our work. It's work we could do if there were interest.

Sen. Cory Booker (D-NJ):

Well, I have interest, but I can't speak for the DOJ. But if you were to be called upon to do that analysis, one of the things you would be looking at is an independent, objective review, a sufficient review of the algorithms, of the technology itself?

Karen Howard:

Correct: of the algorithm's accuracy, demographic bias, features like that.

Sen. Cory Booker (D-NJ):

And what's the danger of them perhaps not having done that right now?

Karen Howard:

Then they may not know how effectively their algorithm works, and they may not know that the candidate list they get out could be skewed by, for example, biased training data that the algorithm was trained on. They just may not be aware of that if they haven't run that proper validation.

Sen. Cory Booker (D-NJ):

That sets off alarms. Should I be writing to the DOJ and saying: look, I have legitimate concerns that you have not tested this in a sufficient way?

Karen Howard:

I would say that any law enforcement entity using an algorithm that has not been validated in a form that would be considered defensible should set off alarms. Absolutely.

Sen. Cory Booker (D-NJ):

Okay. And then finally, Professor Wexler. Again, you traveled the farthest, and I'm grateful. But I do wonder: you obviously have a lot of concerns in your testimony, but there are also protections that AI tools could offer. I always think of these waves of technology as potentially democratizing, because you talked about the possible expense of a defense attorney in a situation, but AI might be a tool that defense attorneys could use to help out. You've done a lot of writing and obviously sit with the technology group at Berkeley. Are there some things that give you hope when you look at this technology that you might want to put into the record?

Rebecca Wexler:

Oh yeah, absolutely. To be clear, my concerns are neutral about the technology itself. I think AI technologies have the potential to increase the accuracy and efficacy of law enforcement investigations and prosecutions, and also have the potential to help criminal defense investigators identify evidence of innocence. So the technologies themselves are not the issue. It's the legal rules that we set up around them that help us ensure that they are the best, most accurate, and most effective tools, and not flawed or fraudulent in some way.

Sen. Cory Booker (D-NJ):

I just want to say, before I give my closing remarks: number one, I'm just so grateful to Senator Cotton. He has a lot of demands, given some of the international issues that are going on, and for him to be here and give what I thought was a really important line of questioning, his partnership is extraordinary. But I want to thank the three witnesses. This is a frontier where I think it's difficult to anticipate where we will be five years from now, with this technology really accelerating. The challenge for government, and I've seen this in waves of innovation, is that we have not moved as fast as the innovations around us. I've seen this in nuclear energy; I've seen this in drone technology. Government has not been able to keep up. Social media is another great example, where maybe there's a platform of substantive accord that we have in a bipartisan way on some of the challenges.

You all are on the cutting edge of looking at this, and it's exciting to me not only to hear your testimony, but to hear what some common-sense precautions could be on a federal level, as well as ways to try to figure out how to advance the technology toward the greatest aims of humanity, which is a democratic system that protects the rights of all individuals, including rights to privacy, as well as what I think is a fundamental human right or freedom that we should have: a freedom from fear, a freedom from the kind of depraved criminality that we often see manifesting in communities as well. So this testimony has been rich, both your written testimony and your verbal testimony, and it gives me a lot of gratitude. I know that Congress as a whole is looking at AI from every different perspective.

This is, from what I know, the first hearing in the Senate that really focuses on AI's effect on the criminal justice system. My hope is that you all will be available in the future to consult with us. There may be some potential for some great bipartisan ideas to come out of this, or at the very least, for opening up conversations with some of the key actors in law enforcement on the federal level. I cannot tell you, Chief Aguilar, as a guy who lives in a community that has struggled with a lot of the issues that yours has, how much the heroic work that you're doing every single day means to me, and I look forward to hopefully visiting. And the truth of the matter is, I'm a little concerned that we'll be confused for each other; we both have similar haircuts. But I'm hoping that won't be the case.

But I want to remind everyone that questions for the record are due a week from today, Wednesday, January 31st, at 5:30 PM. Should there be questions, because, again, the senators have been pulled in so many different directions, and my ranking member's team might have some questions for the record as well, I hope that you all, after all the sacrifice you made getting here, preparing written testimony, and testifying, will still be able to respond in a timely fashion to any questions that are sent to you. When you testify before the United States Senate, I know how important that is. It may seem like a small subcommittee hearing, but it is a service to your country, and the work that you all are doing stands, to me, in that tradition of patriotism, of giving to a nation, helping us to be better. Today was very, very meaningful, and I'm grateful to each of you individually for participating and being a part of it. With that, the subcommittee is adjourned.

Authors

Haajrah Gilani
Haajrah Gilani is a graduate student at Northwestern University’s Medill School of Journalism in the Investigative Lab. She cares about the intersection between crime and public policy, and she covers social justice.
