
Beyond The Pilot: Enterprise AI in Action – LexisNexis on Why Standard RAG Fails in Law
Feb 18, 2026
Min Chen, Chief AI Officer at LexisNexis and longtime leader in legal ML, explains why standard RAG breaks for law and how GraphRAG and point-of-law graphs provide authoritative grounding. She describes their 8-part Usefulness Score, agentic workflows like Planner and Reflection agents, and deterministic checks for hallucination detection. A practical, execution-focused breakdown of deploying AI in zero-error legal settings.
Min's Journey From Feature Engineering To Prodigy
- Min Chen recalled LexisNexis' evolution from feature engineering to deep learning to LLM-driven products like Lexis Plus AI and Prodigy.
- She emphasized the shift from deterministic outputs to probabilistic LLM systems that must still deliver a consistent degree of quality in legal AI.
GraphRAG Fixes Dangerous Semantic RAG
- Pure semantic RAG returns contextually relevant documents that can still be legally unusable when citations are overruled or from lower courts.
- LexisNexis built a Point-of-Law knowledge graph on top of vector search to filter for authoritative, citable sources before generating answers.
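The retrieve-then-filter pattern described above can be sketched roughly as follows. This is a minimal illustration, not the LexisNexis implementation: the field names (`overruled`, `court_level`) and the filtering rule are assumptions standing in for whatever authority metadata their point-of-law graph actually carries.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    case_id: str
    score: float          # semantic similarity from the vector-search stage
    overruled: bool       # authority flag, assumed to come from the knowledge graph
    court_level: int      # assumed rank, e.g. 3 = highest court, 1 = trial court

def authoritative(passages: list[Passage], min_court_level: int = 2) -> list[Passage]:
    """Keep only good-law, sufficiently authoritative passages before generation."""
    return [
        p for p in passages
        if not p.overruled and p.court_level >= min_court_level
    ]

# Contextually relevant hits from pure semantic search -- some legally unusable.
hits = [
    Passage("Rule stated...", "A v. B", 0.91, overruled=True,  court_level=3),
    Passage("Holding...",     "C v. D", 0.88, overruled=False, court_level=2),
    Passage("Dicta...",       "E v. F", 0.85, overruled=False, court_level=1),
]
citable = authoritative(hits)
print([p.case_id for p in citable])  # the overruled and lower-court hits drop out
```

The point of the sketch is ordering: semantic similarity ranks candidates, but the graph-derived metadata vetoes anything that cannot safely be cited, before an LLM ever sees it.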
Usefulness Score Replaces Accuracy In Law
- Standard accuracy metrics miss critical legal dimensions like comprehensiveness, authority, citation accuracy, and hallucination risk.
- LexisNexis combines 7–8 submetrics into a single Usefulness score to reflect legal practitioners' high bar for reliability.
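Combining several submetrics into one score could look like the sketch below. The submetric names and weights here are purely illustrative assumptions; the episode says only that 7–8 submetrics are combined, not which ones or how they are weighted.

```python
# Illustrative weights -- NOT the actual LexisNexis Usefulness formula.
WEIGHTS = {
    "comprehensiveness": 0.20,
    "authority": 0.20,
    "citation_accuracy": 0.25,
    "hallucination_free": 0.25,
    "relevance": 0.10,
}

def usefulness(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    assert set(scores) == set(WEIGHTS), "one score per submetric"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {
    "comprehensiveness": 0.9,
    "authority": 1.0,
    "citation_accuracy": 0.8,
    "hallucination_free": 1.0,
    "relevance": 0.95,
}
print(round(usefulness(example), 3))
```

A single scalar like this lets teams set one release bar while still tracing a low score back to the offending dimension, e.g. citation accuracy versus hallucination risk.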
