
AI and Legal Systems: Bridging Resource Gaps?

Maria Lungu / Apr 7, 2025

Public defenders operate within a system fraught with resource shortages, excessive caseloads, and widening justice disparities. According to the Department of Justice, public defenders handle approximately 80% of all criminal cases. Yet, many are responsible for workloads up to five times the recommended maximum set by the American Bar Association (ABA). In New Orleans, for example, a 2017 study found that public defenders were allotted seven minutes per case due to overwhelming demand.

As digital evidence such as body-worn camera footage, social media records, and surveillance feeds becomes a staple of court proceedings, the burden on public defenders has only intensified. AI-powered tools now promise efficiency gains in evidence review and case preparation, but their integration raises questions about the proper scope of their role and about how these digital tools should be classified and evaluated in adjudication.

One such tool is JusticeText, an evidence management platform developed by Devshi Mehrotra that generates automated transcripts of digital discovery, extracts key legal insights, and makes it easier for attorneys to create clips in preparation for trial. Currently used in 65 public defender offices, the tool represents a broader movement toward AI-assisted legal administration. By enabling attorneys to search within video footage and automatically flagging significant moments, JusticeText exemplifies how AI can improve efficiency and enhance the quality of representation, helping ensure that critical evidence is not overlooked because of administrative load.

The scale of the problem is stark. A survey found that 93% of public defenders in Virginia reported that the demands of audiovisual discovery review were overwhelming, underscoring the need for technology-driven interventions. Evidence review, already a time-intensive process, has ballooned in complexity as body-worn camera footage, digital communications, and video surveillance have become standard forms of evidence. The public defender's office in Cook County, Illinois, reported receiving 60,000 hours of body-camera footage annually, far more than current staffing can review thoroughly. AI's ability to categorize, summarize, and highlight critical evidence could therefore improve both efficiency and the quality of representation.

But at what cost? Integrating AI into public defense raises fundamental due process questions. AI-generated evidence summaries could lead to over-reliance on algorithmic outputs, reducing human scrutiny of case details. If an AI tool misinterprets or omits key evidence, case outcomes could be directly affected, raising serious concerns about accuracy and reliability. The use of AI in shaping case narratives may also complicate a defendant's right to confront the evidence against them, since attorneys would have to challenge AI-generated summaries and determine whether exculpatory details were overlooked.

AI in judicial decision-making: augmenting human judgment

Beyond public defense, the judiciary is also exploring AI's potential to support decision-making. US Circuit Judge Kevin Newsom, for instance, conducted a mini-experiment with AI tools, including ChatGPT, to interpret the legal term “physically restrained” in a federal sentencing context. Although the AI's responses initially varied, Newsom observed that the discrepancies mirrored the natural variability of ordinary language use, suggesting that AI could assist in interpreting legal terminology. He emphasized, however, that AI should augment, not replace, human judgment in legal proceedings.

This caution is warranted, as past AI-driven tools have exhibited bias both in their design and in their outcomes. The COMPAS algorithm, used in some US courts for risk assessment, has been shown to incorrectly label Black defendants as high-risk at nearly twice the rate of White defendants. Such bias is a systemic issue in AI-driven sentencing tools and has prompted calls for rigorous oversight and accountability mechanisms. As courts increasingly rely on digital tools, concerns remain about how these technologies will be classified, regulated, and assessed for fairness in legal proceedings. Despite such concerns, these tools continue to shape sentencing and probation decisions.

These questions extend internationally. Sir Geoffrey Vos, a senior judge in England and Wales, has previously advocated for a legal right to have court decisions made by humans rather than machines. His position echoes warnings from human rights groups that fully automated judicial rulings could erode due process protections and undermine procedural fairness. He argues that governments must establish clear AI governance frameworks to ensure that AI remains a tool for augmentation rather than a replacement for human discretion, and he suggests that safeguarding this right may require embedding it in human rights legislation that explicitly addresses contemporary technology.

Challenges and ethical considerations in AI integration

While AI's expansion into public defender offices and judicial processes is under scrutiny, its application in law enforcement has sparked its own debates about digital transformation in police agencies. On the one hand, in nascent research with police chiefs conducted through the University of Virginia's Digital Technology for Democracy Lab, predictive policing (a type of AI) was described as “a standard method,” “something many people do not understand despite deployment,” and “a necessary part of assigning resources in the present day.” On the other hand, many people fear how these models operate.

AI models trained on historical data have been widely criticized for reinforcing biases. A 2019 study by the National Institute of Standards and Technology (NIST) found that commercial facial recognition systems exhibited false positive rates up to 100 times higher for Black individuals compared to White individuals. Since AI systems often rely on historical records, they risk perpetuating and amplifying existing biases in automated decision-making. As these technologies become more embedded in law enforcement and judicial processes, examining their broader implications for fairness and accountability is crucial.

With this expansion in mind, experimentation with AI in administrative structures is becoming standard practice. A white paper from the ACLU warned that AI-generated police reports, such as those produced by Axon's AI-powered reporting tools, rely on large language models (LLMs) trained on vast datasets; because those datasets reflect societal biases, the resulting reports can distort narratives in subtle but harmful ways. Additionally, because the reports are generated from body camera footage, they risk shaping officers' recollections of events, a concern given how much recollections already vary.

The ACLU also raised concerns about transparency, emphasizing that the experimental nature of AI-generated reports makes it difficult for defendants and the public to scrutinize how evidence is recorded and used in prosecutions. Furthermore, shifting to AI-driven reporting could erode an existing safeguard in policing: officers have traditionally had to justify discretionary actions such as stops and searches in their own words, with supervisors reviewing those justifications for potential issues.

The opacity of AI decision-making is also a significant challenge across public administration. In 2025, for instance, the UK's Department for Work and Pensions (DWP) implemented an AI “white mail” system to process correspondence from benefit claimants. The system handles around 25,000 letters and emails daily, prioritizing cases for officials. However, the government did not inform claimants that an algorithm was involved in handling their claims, raising due process concerns. This lack of transparency is especially troubling in legal contexts, where understanding the rationale behind a decision is crucial. If AI is to play a role in administration, transparency mandates, such as explainability requirements, must be legally enforced.

These disparities and opacity concerns underscore the dangers of relying on AI-driven tools without addressing the structural biases embedded in their underlying data and in the systems they are part of. Without proper oversight, such tools risk entrenching disparities in the criminal justice system, exacerbating existing inequalities rather than mitigating them.

Policy considerations

Accountability and liability

Policymakers, state legislatures, courts, and federal regulatory agencies all face a difficult task when it comes to AI. Research often examines AI's legal-technical and socio-technical aspects, yet policy and law continue to lag behind rapid technological developments. The most recent AI Action Plan (2025) prioritizes enhancing America's position as an AI powerhouse without hindering private-sector innovation.

Many people favor AI expansion. However, determining responsibility for AI-driven errors is where things get complex. If a new AI tool incorrectly assesses a defendant's risk level, resulting in wrongful detention or release, who is responsible, and to what extent? If an AI system used in parole hearings assesses a prisoner as low-risk, leading to their release and subsequent reoffending, accountability becomes difficult to assign. Legal scholars argue that the black-box nature of AI models creates a liability vacuum, whatever benefits AI may offer, in which neither developers nor the justice system bears full responsibility for errors. Establishing a transparent chain of accountability is crucial before AI is deployed further in the legal system. This does not mean abandoning AI, but being mindful about how it is used.

More AI literacy

AI can potentially improve efficiency in the justice system, but its use must be grounded in understanding how these systems work and where they fall short. Judges, public defenders, and court administrators need to be able to critically assess AI tools rather than unquestioningly trusting or outright dismissing them. State governments should invest in AI literacy programs for legal professionals, ensuring they understand these technologies' strengths, limitations, and ethical risks. Without this foundation, AI could either be over-relied on in ways that reinforce systemic biases or ignored entirely, missing opportunities to improve administrative processes. The legal community must come together to determine how AI should be used in practice, where it can be helpful, where it poses risks, and what safeguards are needed to ensure fairness and accountability. AI should support the justice system, not undermine it, and that requires careful, informed integration rather than uncritical adoption.

Conclusion

AI holds significant potential to reduce administrative burdens in public defense and streamline legal processes. However, its integration into the justice system must be accompanied by stronger oversight, given how closely legal and technical questions are intertwined. A further challenge is how to incorporate ethical safeguards without hindering innovation. Without a human-centered, proactive policy framework, AI risks deepening existing disparities rather than resolving them.

Public administrators, legal scholars, and policymakers must work together to ensure that AI tools uphold due process and fairness at every stage of the legal system. Technological advancement must serve justice, not just efficiency, and state legislatures, judicial bodies, and regulatory agencies must act now to establish guardrails that keep technology in service of that goal rather than an obstacle to it.

Authors

Maria Lungu
Dr. Maria Lungu is a Postdoctoral Research Fellow in the Digital Technology for Democracy Lab at the University of Virginia. She holds a Bachelor of Science in Finance and Business Administration with a minor in Leadership from The University of Charleston. She earned her Juris Doctor from the Unive...
