AI Snips
Why LLMs Seem Humanlike
- Neural nets can generalize beyond the exact examples they saw on the web by learning deeper patterns of how language continues.
- That generalization is why networks built from nothing but simple numeric operations can produce surprisingly humanlike output (see the toy sketch below).
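To make "simple numeric operations" concrete, here is a toy next-token predictor: just matrix multiplies, a nonlinearity, and a softmax. All shapes, weights, and the single mixing layer are invented for illustration; real LLMs stack far more layers, but the ingredients are this simple.

```python
import numpy as np

# Toy next-token predictor. Everything here is illustrative and does
# not correspond to any real model's architecture or weights.
rng = np.random.default_rng(0)
vocab_size, embed_dim = 50, 16

E = rng.normal(size=(vocab_size, embed_dim))   # token embeddings
W = rng.normal(size=(embed_dim, embed_dim))    # one "layer" of weights
U = rng.normal(size=(embed_dim, vocab_size))   # output projection

def next_token_probs(token_ids):
    # Embed the context, mix it with a weighted sum + nonlinearity,
    # then turn scores into a probability over the whole vocabulary.
    h = np.tanh(E[token_ids].mean(axis=0) @ W)
    logits = h @ U
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = next_token_probs([3, 17, 42])
print(probs.argmax(), probs.max())  # most likely "next token" and its probability
```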
Separate Language From Factual Data
- Factor language ability apart from factual knowledge: use a thin LLM as the language interface and dedicated knowledge systems for the facts (sketched below).
- This enables private, local runs of the language layer and more reliable factual answers drawn from external data sources.
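A minimal sketch of that split, assuming hypothetical `local_llm` and `knowledge_base` callables (any locally run model and any structured data source could fill those roles):

```python
from typing import Callable

def answer(question: str,
           local_llm: Callable[[str], str],
           knowledge_base: Callable[[str], str]) -> str:
    # 1. Use the LLM only for language: turn the question into a query.
    query = local_llm(f"Rewrite as a lookup query: {question}")
    # 2. Get the facts from a dedicated, authoritative source.
    facts = knowledge_base(query)
    # 3. Use the LLM again only to phrase the factual result.
    return local_llm(f"Answer '{question}' using only these facts: {facts}")

# Demo with trivial stand-ins for the two components:
echo = lambda prompt: prompt.split(": ", 1)[-1]
lookup = lambda q: "Paris is the capital of France."
print(answer("What is the capital of France?", echo, lookup))
```

Because the factual layer is swappable, the language model itself can stay small enough to run privately on local hardware.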
Computational Irreducibility Limits Prediction
- Simple rules can produce complex behavior that cannot be foreseen in advance; this is computational irreducibility.
- Knowing a system's rules doesn't mean you can predict its long-term behavior without actually running the computation (see the Rule 30 example below).
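Wolfram's Rule 30 cellular automaton, his standard example of computational irreducibility (not named in the snip itself), makes this concrete: the update rule fits on one line, yet in general the only way to know row N is to compute every row before it.

```python
# Rule 30: each new cell is left XOR (center OR right).
def rule30_step(cells):
    n = len(cells)
    # Each cell's next state depends only on its three neighbors
    # (wrapping around at the edges).
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single black cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```

Running this prints the familiar chaotic triangle: despite the one-line rule, no shortcut formula predicts a distant cell without stepping through the computation.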


