
Texas Appellate Law Podcast | AI in the Judiciary: Power, Limits, and the Social Contract | Judge Scott Schlegel
Mar 31, 2026
Judge Scott Schlegel, an appellate judge known for courtroom efficiency and co-founder of the Judicial AI Consortium, discusses cautious, practical judicial uses of generative AI. He covers early court tech experiments, building peer forums for judges to share experiences, on-prem AI plans, disclosure and policy advice, and why human accountability must stay central to judging.
Do Your Job First Before Using GenAI
- Do your own work first, and treat GenAI output like a first-year associate's memo that you must still verify.
- Schlegel's guideline: prefer enterprise tools, and never substitute an unchecked model output for human judgment.
Require Disclosure and Approve Enterprise Tools
- Set clear local policies that permit only approved AI tools and require staff to disclose when AI outputs are used in work for judges.
- Schlegel's recommendation: staff may use AI in defined situations with approved products, and they must tell their judge when they have used it.
Judicial AI Errors Have Systemic Consequences
- Judicial mistakes with AI risk creating precedent, so courts face higher stakes than lawyers, who at worst face individual sanctions.
- Schlegel contrasts sanctions against an individual lawyer with the systemic impact of a judge signing a hallucinated opinion.

