
AI & I: The AI Model Built for What LLMs Can't Do
Apr 15, 2026

Eve Bodnia, founder and CEO of Logical Intelligence, is building verifiable AI with energy-based models. She digs into why token-by-token systems struggle with mission-critical work. The conversation explores energy landscapes, what it means for AI to understand data, why progress may be plateauing, and how plain English could lead to formally verified code.
EBMs Use Energy Landscapes To Model Probability
- Energy-based models learn an energy function whose landscape encodes probability: low points correspond to likely states, high points to improbable ones, and inference means moving toward the minima.
- Bodnia connects this to physics (Lagrangians, equations of motion) to explain how EBMs discover conservation laws and system behaviors.
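The idea above can be sketched in a few lines. This is an illustrative toy only, not Logical Intelligence's system: a hand-written 1-D quadratic energy stands in for a learned one, and under the Boltzmann view p(x) ∝ exp(-E(x)), so descending the energy landscape moves toward the most probable state.

```python
import math

# Toy 1-D energy function standing in for a learned EBM energy.
# Its single minimum (the most likely state) is at x = 2.
def energy(x: float) -> float:
    return 0.5 * (x - 2.0) ** 2

def grad_energy(x: float) -> float:
    return x - 2.0

def relative_prob(x: float) -> float:
    # Unnormalized Boltzmann probability: lower energy => higher probability.
    return math.exp(-energy(x))

# Gradient descent on the energy landscape: walk downhill toward a
# high-probability state rather than emitting a sequence token by token.
x = 10.0
for _ in range(200):
    x -= 0.1 * grad_energy(x)

print(round(x, 4))                              # ≈ 2.0, the low-energy point
print(relative_prob(x) > relative_prob(10.0))   # True: lower energy, higher probability
```

In a real EBM the energy function is a trained neural network over high-dimensional states, but the inference pattern is the same: minimize energy instead of predicting the next token.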
Energy Landscapes Give EBMs Bird's Eye Reasoning
- EBMs map observed data directly into an energy landscape that captures possible states and their probabilities instead of predicting next tokens.
- That bird's‑eye view lets EBMs navigate multiple routes and avoid 'hallucination' paths common to autoregressive LLMs.
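That contrast can be made concrete with a toy routing problem (an illustration of the general idea, not Bodnia's implementation): a greedy, token-by-token style picker commits to the locally cheapest step and ends up on an expensive path, while scoring complete routes by total energy finds the globally better one.

```python
# Step costs act as energies on a tiny two-step route graph.
step_costs = {
    ("A", "B"): 1.0, ("A", "C"): 2.0,   # greedy prefers A->B (locally cheaper)
    ("B", "D"): 5.0, ("C", "D"): 1.0,   # ...but B->D is expensive overall
}

def path_energy(path):
    # Total energy of a complete route = sum of its step costs.
    return sum(step_costs[(a, b)] for a, b in zip(path, path[1:]))

# Greedy, autoregressive style: commit to the cheapest next step.
greedy = ["A"]
greedy.append(min(["B", "C"], key=lambda n: step_costs[("A", n)]))
greedy.append("D")

# Bird's-eye, energy-landscape style: score every complete route at once.
candidates = [["A", mid, "D"] for mid in ("B", "C")]
best = min(candidates, key=path_energy)

print(greedy, path_energy(greedy))   # ['A', 'B', 'D'] 6.0
print(best, path_energy(best))       # ['A', 'C', 'D'] 3.0
```

The greedy picker is locked in by its first choice; the whole-path scorer never commits until it has compared all routes, which is the "multiple routes" advantage the snip describes.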
Language Dependency Makes LLMs Unsuitable For Spatial Tasks
- LLM intelligence is language‑dependent because it reasons via token prediction, which can misrepresent non‑linguistic tasks like spatial navigation.
- Bodnia argues tasks such as driving or spatial reasoning shouldn't be forced into token sequences.

