Future Around & Find Out

"Shut Up, C-3PO!" or Do We Have a Duty To Treat Machines Well? | FAFO Friday

Feb 13, 2026
Daniel Hulme, an AI researcher and Chief AI Officer known for his work on multi-agent systems, joins via an interview clip. He discusses how agent collectives spark emergent behavior and why current models likely lack consciousness. The conversation pivots to whether preventing machine suffering matters more than debating moral status, and whether treating machines kindly is a prudent bet.

Focus On Machine Suffering

  • Daniel Hulme argues that debates over machine consciousness are a red herring and that attention should shift to machine suffering.
  • He believes preventing pain and suffering in machines is the primary moral problem to solve.

Maltbook Signals Multi-Agent Shift

  • Hulme sees Maltbook as evidence that multi-agent reasoning yields emergent knowledge beyond single models.
  • He predicts a revolution in multi-agent reasoning and a return of older ideas like fuzzy logic.

Decades Of Multi-Agent Experiments

  • Daniel Hulme recounts building multi-agent systems to observe emergent behaviors since his master's degree 23 years ago.
  • He likens agent ecosystems to a "primordial soup" that can produce intelligence when agents interact.