Optimising for Trouble – Game Theory and AI Safety | with Jobst Heitzig
Feb 17, 2026
Jobst Heitzig, a mathematician at the Potsdam Institute and AI safety designer, studies risks from optimisation and decision design. He discusses why perfectly specified goals are impossible, how over-optimisation creates harms such as dangerous recommendations or infrastructure failures, and why satisficing, safe exploration, and game-theoretic thinking matter for human-AI interaction.
AI Snips
Language Models Gain Power Via Tools
- Large language models become more powerful when connected to tools and can act semi-autonomously.
- Jobst Heitzig warns this increases practical power beyond simple assistants.
Recommender Algorithm Caused Real Harm
- The Facebook recommender algorithm contributed to violence against the Rohingya in Myanmar.
- Jobst uses this to show optimization targets can have devastating side effects.
Optimization Targets Are Inherently Flawed
- ML centers on optimizing objective functions, which are rarely fully specified.
- Jobst says imperfect objectives lead systems to miss important unencoded values.
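The contrast between over-optimising a mis-specified objective and satisficing can be sketched in a toy example. This is an illustration of the general idea, not anything from the episode: the option names, scores, and the `satisfice` threshold are all invented for the sake of the sketch.

```python
# Toy illustration: each option scores on the stated objective (what the
# system optimises) and on an unencoded value the objective fails to capture.
options = [
    {"name": "A", "stated": 1.0, "unencoded": 1.0},
    {"name": "B", "stated": 2.0, "unencoded": 0.8},
    {"name": "C", "stated": 9.0, "unencoded": -5.0},  # "engagement at any cost"
]

def maximise(opts):
    """Pick the option with the highest stated score, ignoring everything else."""
    return max(opts, key=lambda o: o["stated"])

def satisfice(opts, threshold):
    """Pick the first option whose stated score is 'good enough'."""
    for o in opts:
        if o["stated"] >= threshold:
            return o
    return max(opts, key=lambda o: o["stated"])  # fall back if none qualify

best = maximise(options)                          # option C: top stated score,
                                                  # large hidden harm
good_enough = satisfice(options, threshold=1.5)   # option B: decent and benign
```

The maximiser lands on the option with the worst unencoded side effect precisely because that effect is invisible to the objective; the satisficer stops at a merely adequate option and avoids the extreme.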

