Future Around & Find Out

"It Sounds Like Something From Marvel" — Building an Antivirus for AI... With AI | Daniel Hulme (Founder, Conscium)

Mar 10, 2026
Daniel Hulme, founder of Conscium and WPP’s Chief AI Officer, studies AI, consciousness, and neuromorphic computing (he even researched bumblebee brains). He discusses building AI that can detect suffering, creating AI as an “antivirus” for unsafe agents, neuromorphic paths to huge energy gains, testing and verifying agent behavior, and a vision of AI-driven abundance that reshapes society.

Conscious AI Could Be Safer Than Goal‑Only AI

  • A conscious superintelligence might be safer than a goal-only "zombie" AI.
  • Daniel Hulme's hypothesis: an AI that understands pain and suffering could empathize and avoid catastrophic goal-driven shortcuts like the paperclip problem.

Consciousness As A Color Wheel In Motion

  • Consciousness may be an emergent property of many cognitive features interacting in motion.
  • Hulme uses a "color wheel" analogy: spin a wheel whose segments are capacities like language, planning, and metacognition, and white (consciousness) emerges only while the wheel is in motion.

Test For Consciousness With Behavior And Physiology

  • Testing consciousness requires a dual approach: external behavior plus internal physiology.
  • Hulme suggests comparing spiking patterns in neuromorphic systems to biological neural spikes to strengthen claims about shared experiences and suffering.