
The future of courts: AI-powered and people-centric

When deployed responsibly and supported by a human-centered approach to change management, AI can transform how people experience the justice system.

State and local courts are where most people experience the justice system, through traffic tickets, family disputes, evictions, or small claims. Yet for many, the experience is frustrating: long waits, confusing paperwork, and subpar communication. 

At the same time, courts face historic challenges: backlogs worsened by the pandemic, staff retirements, budget constraints, and a sharp rise in self-represented litigants. According to national data, in roughly 75% of civil cases, at least one party goes without a lawyer.1 That reality forces courts to rethink not just their legal procedures, but their customer experience (CX). 

Artificial intelligence, combined with people-centric change management, offers a path forward. With robust guardrails in place and an emphasis on human judgment, AI can help courts shed their public image as bureaucratic mazes and become efficient, transparent, and accessible forums for fairer justice. Examining adoption trends, use cases, and emerging risk and governance practices brings that transformation into focus. 


 

Current levels of AI adoption in state and local courts 

Generative AI has rapidly moved from research labs into government offices, and courts are no exception. A 2025 national survey by the National Center for State Courts (NCSC) found that about 17% of courts were already experimenting with generative AI tools, while another 17% planned to adopt them in the near future2.  

Adoption is growing—cautiously. California recently became the first state to pass a statewide rule on generative AI in courts. Starting in September 2025, every court must either adopt strict safeguards for the use of generative AI or prohibit its use altogether.3 Those safeguards address four critical priorities: 

  • Confidentiality: Prohibiting sensitive case data from being entered into public AI tools 
  • Bias mitigation: Ensuring AI outputs are tested for fairness 
  • Accuracy: Requiring human review of AI-generated content 
  • Disclosure: Mandating that if an entire filing or decision is generated by AI, it must be clearly stated 

In parallel, the Conference of State Court Administrators and the NCSC have released model guidelines emphasizing transparency, human oversight, and staff training. Together, these developments mark a turning point.4

AI in courts is no longer theoretical. It is real, beginning to be regulated, and ready to scale if implemented responsibly. 


Why people-centric change management is essential 

Technology by itself doesn’t guarantee better justice. Courts have seen ambitious IT projects fail because leaders underestimated the human side of adoption. Judges fear that reliance on AI could erode judicial discretion. Clerks worry about being replaced. Litigants assume “black box” decisions may reinforce bias. Left unaddressed, these concerns become roadblocks to adoption.

This is where people-centric change management becomes essential. Successful transformation requires: 

  • Early engagement: Making judges and courthouse staff partners in design and testing, not just end-users 
  • Clear communication: Explaining how AI augments human judgment, as opposed to replacing it 
  • New performance metrics: Moving beyond throughput (cases processed) to measures like customer effort scores and user satisfaction 
  • Cross-sector lessons: Learning from other sectors and industries—such as healthcare (patient-centered AI adoption) and airlines (predictive service)—to better anticipate customer needs 

Put simply, AI adoption in courts is not an IT project. It is an organizational culture shift. 


Real-world AI use cases are already having an impact 

The best evidence for the value of AI in courts is not found in futuristic predictions but in real-world pilots already underway. 

1. Text message reminders  
Failure to appear (FTA) drives costs and delays. AI-enabled reminders are changing that. 

  • In New York City, redesigned summons forms and SMS reminders cut FTAs by up to 26%5
  • In Santa Clara County, California, reminders reduced arrest warrants by approximately 20%6

For courts, this use case offers a simple but transformative insight: Proactive nudges work. Instead of punishing missed deadlines, AI-driven reminders anticipate mistakes before they happen, helping litigants stay on track and saving courts time and resources. 
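
The reminder logic behind pilots like these can be sketched simply. The following is an illustrative example only: the field names (case_id, hearing_date, room) and the one-week/one-day reminder schedule are hypothetical assumptions, not any court system's actual design.

```python
from datetime import date, timedelta

# Illustrative reminder offsets: nudge litigants one week and one day before a hearing.
REMINDER_OFFSETS = [timedelta(days=7), timedelta(days=1)]

def reminders_due(hearings, today):
    """Return (case_id, message) pairs for reminders that should go out today."""
    due = []
    for h in hearings:
        for offset in REMINDER_OFFSETS:
            if h["hearing_date"] - offset == today:
                msg = (f"Reminder: your hearing for case {h['case_id']} is on "
                       f"{h['hearing_date']:%A, %B %d} at {h['time']} in {h['room']}.")
                due.append((h["case_id"], msg))
    return due

# Hypothetical docket data for demonstration.
hearings = [
    {"case_id": "TR-1042", "hearing_date": date(2025, 10, 16),
     "time": "10:00 a.m.", "room": "Courtroom 3"},
    {"case_id": "SC-0881", "hearing_date": date(2025, 10, 10),
     "time": "1:30 p.m.", "room": "Courtroom 1"},
]

for case_id, message in reminders_due(hearings, today=date(2025, 10, 9)):
    print(message)
```

In practice the messages would be handed off to an SMS gateway rather than printed, and the offsets tuned to what local pilots show actually reduces FTAs.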

2. Online ability-to-pay tools 
California’s MyCitations platform allows people to request reductions for fines and fees online. The results include: 

  • More than 66,000 requests processed 
  • Fines reduced by more than $20 million7

AI can enhance these tools by triaging requests, checking eligibility, and tailoring payment plans to individual circumstances.  
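
A triage layer of that kind could look like the sketch below. This is a hypothetical illustration loosely inspired by tools like MyCitations; the thresholds, reduction percentages, and field names are invented assumptions, not the program's actual criteria.

```python
# Illustrative rule-based triage for ability-to-pay requests.
# All thresholds below are hypothetical examples, not real court policy.
def triage_request(fine_amount, monthly_income, receives_public_benefits):
    """Route a fine-reduction request: auto-reduce, offer a plan, or send to a clerk."""
    if receives_public_benefits:
        # Benefits enrollment is a common proxy for financial hardship.
        return {"route": "auto_reduce", "suggested_fine": round(fine_amount * 0.5, 2)}
    if monthly_income and fine_amount > 0.25 * monthly_income:
        # Fine is large relative to income: offer an installment plan sized to income.
        installment = max(25.0, round(monthly_income * 0.05, 2))
        return {"route": "payment_plan", "monthly_installment": installment}
    # No automatic rule matched; keep a human in the loop.
    return {"route": "clerk_review", "reason": "no automatic rule matched"}

print(triage_request(fine_amount=500.0, monthly_income=1000.0,
                     receives_public_benefits=False))
```

The design point is the last branch: anything the rules cannot confidently handle falls through to a clerk, which mirrors the human-review guardrail discussed earlier.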

3. Online dispute resolution (ODR)  
Utah piloted ODR for small claims and debt collection cases, enabling parties to negotiate and settle online without appearing in court.8 Benefits included: 

  • Faster resolutions 
  • Reduced staff burden 

Participation patterns also revealed usability gaps—a reminder that UX design and change management are as important as the technology itself. 

4. Plain-language and AI-assisted forms 
Court forms are filled with jargon. Plain-language redesign, facilitated by AI, improves comprehension for self-represented litigants. AI can suggest alternative wording, automatically translate forms into multiple languages, and guide users step-by-step through completion. 

In Orange County, California, courts are enhancing accessibility by using AI-powered translation tools to help Limited English Proficient (LEP) court users with information and documents.9
 

Anticipatory justice—the next frontier 

If reminders, online portals, and redesigned forms represent the first wave of innovation, the next frontier is anticipatory justice. 

In the context of court modernization, anticipatory justice means shifting from reactive service (fixing problems after they occur) to proactive service (preventing problems before they occur). With AI, courts can: 

  • Predict risk: Flagging litigants likely to miss deadlines or fail to appear. For example, if a defendant hasn’t opened emails or text reminders, the system could trigger an automated call or connect them with a clerk for follow-up. 
  • Send personalized nudges: Automating text, email, or chatbot messages tailored to the user’s preferred channel. For example: “Your hearing is scheduled for next Thursday at 10 a.m. in Courtroom 3. Reply ‘Directions’ for a map or ‘Remote’ for instructions on how to attend online.”
  • Offer guided navigation: Using AI copilots to answer plain-language questions, generate draft documents, and provide reminders in multiple languages. For example, a user types, “What if I can’t attend my hearing?” and the AI responds, “You can request a continuance using this form. Here’s the deadline and where to submit.” For small claims, the system could walk a user through filing, showing progress much like a tax software interface: “Step 1: Enter names. Step 2: Upload evidence.”
  • Deliver personalized recommendations without bias or predetermined outcomes: For example, in traffic cases, AI could highlight multiple lawful options: “You may contest the ticket, request traffic school, or submit an ability-to-pay request.” For eviction cases, the system might say: “You may be eligible for mediation before trial. Here’s how to request it.” 

Collectively, AI-enabled capabilities like these remove barriers to justice by anticipating the needs of both those inside the court system and those seeking to access it. 
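
The "predict risk" idea above amounts to an escalation ladder: if low-touch digital nudges go unacknowledged, step up to higher-touch channels. A minimal sketch follows; the channel ordering and data shape are illustrative assumptions, not a deployed system's design.

```python
# Hypothetical escalation ladder: lowest-touch channels first,
# ending with a human clerk so outreach never dead-ends with a machine.
CHANNEL_LADDER = ["text", "email", "automated_call", "clerk_outreach"]

def next_outreach(attempts):
    """Choose the next outreach channel for a litigant.

    attempts: list of {'channel': str, 'acknowledged': bool}, oldest first.
    Returns None if the litigant has already engaged.
    """
    if any(a["acknowledged"] for a in attempts):
        return None  # litigant has engaged; no escalation needed
    tried = {a["channel"] for a in attempts}
    for channel in CHANNEL_LADDER:
        if channel not in tried:
            return channel
    return "clerk_outreach"  # ladder exhausted; keep a human in the loop

print(next_outreach([{"channel": "text", "acknowledged": False},
                     {"channel": "email", "acknowledged": False}]))
```

Note that the ladder ends with clerk outreach rather than another automated message, consistent with the human-oversight principle running through this article.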

 

Guardrails for responsible AI in courts 

With opportunity comes risk. Integrating AI into the justice system means mitigating risks by prioritizing trust over speed. The public will accept AI only if courts demonstrate responsibility and transparency. Establishing clear AI governance guardrails is the first step: 

 
[Graphic: Five guardrails for responsible AI in courts]

 

A playbook for change management 

How can courts build momentum with AI adoption without creating trust gaps and leaving critical stakeholders behind? Start with a phased, five-step approach to change management: 

Step 1: Build a change task force 

  • Create a cross-functional group of judges, clerks, IT staff, and external advisors 
  • Define the mission: identify opportunities, vet risks, and recommend pilots 

Step 2: Start with “no-regret” pilots 

  • Issue text reminders for hearings
  • Roll out ability-to-pay online forms 
  • Initiate plain-language redesign of forms 

Step 3: Train and engage staff 

  • Hold workshops stressing AI as augmentation, not replacement 
  • Create communication protocols for explaining AI-assisted outputs to the public 
  • Establish regular feedback loops from staff and court users 

Step 4: Redefine metrics 

  • Add customer effort score alongside case clearance rate 
  • Track failure-to-appear reduction, user satisfaction, and cycle time 
  • Share progress transparently with the public to build trust
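
Customer effort score, mentioned in Step 4, is conventionally the average of survey responses to a statement like “the court made it easy to resolve my matter,” rated 1 (strongly disagree) to 7 (strongly agree). The sketch below assumes that common convention; courts would adapt the wording and scale to their own survey programs.

```python
# Simple customer-effort-score (CES) calculation under the common
# 1-7 agreement-scale convention; not a mandated court standard.
def customer_effort_score(responses):
    """Average the valid 1-7 survey responses; higher means less effort for users."""
    valid = [r for r in responses if 1 <= r <= 7]
    if not valid:
        raise ValueError("no valid responses")
    return sum(valid) / len(valid)

print(customer_effort_score([7, 6, 5, 6]))
```

Tracked quarterly alongside case clearance rate, a rising CES gives courts a user-centered counterweight to pure throughput metrics.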

Step 5: Scale with guardrails 

  • As pilots succeed, expand to predictive analytics, AI copilots, and multilingual chatbots—always with governance in place 

 

What the future looks like 

The future of courts will be human-centered, anticipatory, and powered by AI. Early pilots are already delivering results, including: 

  • Reduced stress for litigants 
  • Fairer outcomes for low-income residents 
  • Staff freed to focus on high-value human interactions 

Courts that simultaneously embrace AI and people-centric change stand to do more than clear backlogs. They gain the potential to restore trust in justice. 


James Young, Partner


Christopher McConn, Partner

Mitch Lindstrom, Associate Director

1. National Center for State Courts, Trends in State Courts 2025
2. National Center for State Courts and Thomson Reuters Institute, Staffing, Operations, and Technology: A 2025 Survey of State Courts
3. Judicial Council of California, Report to the Judicial Council, “Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work”, July 2025
4. Conference of State Court Administrators, Generative AI and the Future of Courts, August 2024
5. University of Chicago Crime Lab, Using Behavioral Science to Improve Criminal Justice Outcomes, January 2018
6. Alex Chohlas-Wood et al., Automated Reminders Reduce Incarceration for Missed Court Dates: Evidence from a Text Message Experiment
7. Judicial Council of California, Report to the Legislature: Online Infraction Adjudication and Ability-to-Pay Determinations, February 2023
8. Pew Charitable Trusts, Online Dispute Resolution Offers a New Way to Access Local Courts, January 2019
9. National Center for State Courts, Navigating AI in Court Translation: Insights for Court Leaders, June 2025

