How AI Could Help Wrongful Conviction Review Cases
Meekness Ikeh / May 23, 2025

In 2024, I served as a law clerk with the Los Angeles District Attorney’s (LADA) Justice Conviction Review Unit (JCRU), which investigates potential wrongful convictions. Since its founding in 2015, the JCRU has exonerated sixteen individuals, nearly all wrongfully convicted due to mistaken witness identification. In a few instances, false testimony or confessions contributed to these convictions.
The JCRU reviews cases prosecuted by the LADA’s office when applicants claim factual innocence or wrongful conviction. If a credible claim of innocence is presented and the Unit identifies potential investigative leads, the case is reopened for review and, where necessary, further investigation. Requests are submitted using a standardized conviction review request form where claimants can explain why their conviction deserves another look.
While the process may seem straightforward, my time at the JCRU revealed the immense workload placed on the Unit. With thousands of conviction review requests received and many still awaiting a formal denial response, the real challenge lies in sustaining fairness while striving for timely, scalable screening of claims, an undertaking the JCRU approaches with diligence despite limited resources and a consistently high volume of requests.
Advances in artificial intelligence may offer a way to help. What if a digital law clerk could assist with intake and triage, allowing requests to be screened faster and at scale? Enter MARTHA: the Machine-Assisted Review Tool for Honest Assessments. Still in development, MARTHA is a conceptual AI platform I am designing to support Conviction Review Units or Conviction Integrity Units* by expediting the initial screening of claims without sacrificing care or oversight.
The problem: backlogs in wrongful conviction reviews
As of January 22, 2024, the Los Angeles JCRU had received 2,262 conviction review requests. Of those, 1,743 were denied, an outcome that reflects the Unit’s rigorous intake and review standards. It’s important to note that denials are based solely on whether a claim meets the criteria for further investigation; cases are never declined due to limited resources. However, the ability to formally close a case by drafting a denial letter can be delayed by staffing constraints, creating a backlog in processing responses even after a determination has been made.
Amid rising caseloads and staffing constraints, even the most dedicated conviction review units and innocence projects face significant pressure to meet demand without compromising the integrity of their assessments.
As a law clerk, I witnessed the JCRU team’s deep commitment to justice. I also observed how limited resources hindered progress. The growing volume of requests strains the Unit’s ability to screen and assess each claim it receives and, in particular, to distinguish the meritorious claims that truly warrant further review and investigation from those that do not. As a result, initial screenings and eventual reviews can sometimes take longer than expected, even as the team remains deeply committed to giving each request the careful attention it deserves.
My role at the JCRU involved assisting with the assessment of conviction review requests: reviewing request forms, case files, and Court of Appeal opinions, and drafting detailed recommendations on claimants’ requests. A Deputy District Attorney (DDA) would then review my work and make the final decision on whether the case would be assigned to a designated unit attorney for further review and investigation or whether the claim would be denied.
The process was necessarily thorough (justice demands nothing less), but it was also time-consuming. Despite the dedication of staff, law clerks, and volunteers, the backlog continued to grow. The Unit faced the difficult task of balancing the steady influx of new requests with the careful reinvestigation of older cases that showed potential merit, an imbalance driven more by limited resources than by lack of commitment.
This experience, paired with my background in AI and legal tech, led me to imagine a digital assistant tailored for conviction review work. I called it MARTHA.
MARTHA: an AI assistant for Conviction Review Units
Still in the research phase, MARTHA is envisioned as an AI-powered assistant that would support, never replace, human judgment in conviction reviews. It would assist conviction review staff by taking on the initial screening of conviction review requests, analyzing case documents, and identifying the meritorious cases that warrant further review and investigation, essentially acting as a tireless digital law clerk. If developed, MARTHA could help streamline several key tasks:
- Document conversion: MARTHA would convert scanned applications and completed conviction review request forms into searchable text using Optical Character Recognition (OCR); a rough sketch of this step appears after this list.
- Legal summarization: Using advanced language models, MARTHA would scan legal documents, trial transcripts, witness statements, and Court of Appeal decisions to extract key facts and inconsistencies (a second sketch after the list gestures at this step).
- Evidence flagging: Pattern recognition tools would help flag inconsistencies in witness testimony or disputed forensic evidence, making them easier for conviction review staff to evaluate; the first sketch below includes a toy version of such a check.
- Drafting assistance: MARTHA would generate initial drafts, such as denial letters with citations supporting the denial of meritless claims or a summary brief for cases that warrant further review, to assist a DDA with assessing any potential merit to the claims. Final decisions would always rest with a human reviewer.
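To make two of these steps concrete, here is a minimal sketch of what the document-conversion stage, plus a crude consistency check, might look like. It is illustrative only: the library choices (pdf2image and pytesseract), the file name, and the date-conflict heuristic are my assumptions, not a description of any existing MARTHA code.

```python
# Illustrative intake sketch (not production code): OCR a scanned
# request into searchable text, then run a toy consistency check.
import re

import pytesseract                       # Tesseract OCR bindings (assumed choice)
from pdf2image import convert_from_path  # renders PDF pages as images (assumed choice)

DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def pdf_to_text(pdf_path: str) -> str:
    """Render each page of a scanned PDF and OCR it into plain text."""
    pages = convert_from_path(pdf_path, dpi=300)
    return "\n\n".join(pytesseract.image_to_string(page) for page in pages)

def flag_date_conflicts(statements: dict[str, str]) -> list[str]:
    """Toy heuristic: flag cases where witness statements cite differing
    sets of dates. A real system would need far more sophisticated NLP;
    this only shows the shape of the idea."""
    cited = {name: set(DATE_RE.findall(text)) for name, text in statements.items()}
    distinct = {frozenset(dates) for dates in cited.values() if dates}
    if len(distinct) <= 1:
        return []  # all statements agree, or none cite dates
    return [f"{name} cites {sorted(dates)}" for name, dates in cited.items() if dates]

if __name__ == "__main__":
    text = pdf_to_text("review_request_0001.pdf")  # hypothetical file name
    print(text[:300])                              # spot-check the OCR output
    print(flag_date_conflicts({
        "Witness A": "I saw him on 3/14/2003 near the store.",
        "Witness B": "No, it happened on 3/15/2003.",
    }))
```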
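The legal-summarization step could be prototyped on top of any capable language model. The sketch below uses the OpenAI Python client purely as an example; the model name and prompt are placeholders of mine, and a real deployment would call a model hosted inside the District Attorney Office’s secure environment rather than an external API.

```python
# Illustrative summarization sketch: ask a language model to pull key
# facts and inconsistencies out of OCR'd case text. Client, model, and
# prompt are placeholders, not part of any real MARTHA system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_case_text(case_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a conviction review unit. Summarize the key "
                    "facts, identifications, and testimony in this document, and "
                    "list any internal inconsistencies, quoting the passages each "
                    "point comes from."
                ),
            },
            {"role": "user", "content": case_text},
        ],
    )
    return response.choices[0].message.content
```

Asking the model to quote the passage behind every point, as this prompt does, is also what would make the explainability goal discussed below practical.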
Automating these early steps could significantly reduce the time spent on the front end of conviction review work. Faster reviews mean the innocent spend less time wrongfully incarcerated, while non-meritorious claims can be resolved and closed more efficiently. In both scenarios, speed and transparency would help reinforce public trust.
Privacy, fairness, and human oversight: designing MARTHA responsibly
Screening wrongful convictions involves sensitive legal and personal data. For MARTHA to be used responsibly, its development must prioritize privacy, transparency, and human oversight. Though still in the research phase, the platform is being designed with these principles in mind, drawing on legal standards and best practices in ethical AI.
- Secure architecture and compliance: MARTHA’s design aims to comply with the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and the FBI’s Criminal Justice Information Services (CJIS) security standards. It would use encryption, role-based access, and detailed audit logs, and it would avoid storing sensitive data by retrieving facts in real time. All operations would take place within the District Attorney Office’s secure cloud, keeping records internal. (A toy sketch of the logging and sign-off ideas follows this list.)
- Bias-aware training: MARTHA would be trained on redacted and synthetic legal data, using demographic balancing and regular audits to reduce bias.
- Transparency and explainability: MARTHA would include explainable AI features, citing the exact sections of case files or court records behind each output so that a DDA could trace every conclusion to its source.
- Human-in-the-loop by default: MARTHA would be built to support, not replace, human decision-making. In the envisioned workflow, a DDA would review all AI-generated materials, retain final judgment, and have full authority to revise or disregard MARTHA’s recommendations.
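As a gesture at what detailed audit logs and a human-in-the-loop default could mean in practice, the sketch below records every AI-generated draft in a tamper-evident log and refuses to finalize anything without a named DDA’s sign-off. The data shapes and function names are invented for illustration; they describe no real system.

```python
# Illustrative oversight sketch: every AI draft is logged, and nothing
# becomes final without a named human reviewer. All names and data
# shapes here are invented for illustration.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    """Append-only log; each entry hashes the previous entry so
    after-the-fact edits are detectable."""
    entries: list[dict] = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else ""
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": prev,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

log = AuditLog()

def submit_draft(case_id: str, draft: str) -> None:
    """MARTHA's output is only ever recorded as a draft."""
    log.record(actor="MARTHA", action="draft_generated",
               detail=f"{case_id}: {len(draft)} chars")

def finalize(case_id: str, reviewer: str, decision: str) -> None:
    """No decision becomes final without a named human reviewer."""
    if not reviewer:
        raise PermissionError("A named DDA must sign off on every decision.")
    log.record(actor=reviewer, action="human_decision",
               detail=f"{case_id}: {decision}")

submit_draft("CR-0001", "Draft denial letter ...")  # hypothetical case ID
finalize("CR-0001", reviewer="DDA Jane Roe", decision="assign for further review")
```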
These design principles reflect a commitment to aligning with public sector values of fairness, accountability, and public trust. If implemented carefully, MARTHA could enhance, not compromise, the integrity of the justice process.
A path forward
MARTHA reflects a broader point: innovation in justice can save time and sometimes lives. With the right tools, conviction review units could better tackle their backlogs and identify wrongful convictions faster. Early legal AI tools show that technology can augment human work. The same lessons apply to public sector applications like MARTHA.
Justice delayed is justice denied. For potentially innocent people still in prison, each year lost is a year that cannot be returned. Tools like MARTHA offer a way to shorten those delays, bringing speed to conviction review while maintaining fairness and transparency. If designed responsibly, such systems could help correct errors more quickly and reinforce trust in the rule of law.
Turning MARTHA from concept to reality will take more than technical design; it will take collaboration. Building a tool that truly supports conviction review work means drawing on the expertise of legal practitioners, technologists, ethicists, and privacy professionals. It also means listening closely to the needs of those working on the front lines of post-conviction review. As I continue researching MARTHA’s potential, I welcome the opportunity to connect with others interested in building responsible AI for justice. Together, we have a chance to create something that not only improves how cases are reviewed but also strengthens public trust in the system itself.
* The terms "Conviction Integrity Unit" and "Conviction Review Unit" are used interchangeably across the United States to describe internal divisions within District Attorney’s Offices investigating potential wrongful convictions.