ForeCast

Consciousness and Competition (with Joe Carlsmith)

Nov 28, 2025
Joe Carlsmith, a prominent writer and philosopher on AI safety, dives deep into consciousness and the moral status of beings. He challenges traditional views with thought experiments such as the dog vs. car analogy, and explores scenarios in which digital minds surpass biological ones. Carlsmith discusses the implications of AI consciousness, emphasizing failures of empathy and historical moral mistakes. With a focus on how competitive dynamics could erode goodness, he advocates thoughtful interventions to reduce AI suffering and ensure human values thrive amid rapid technological advancement.
INSIGHT

Behavior Drives Consciousness Attributions

  • Embodied, responsive, introspective AIs that exhibit high-level cognitive traits will naturally invite attributions of consciousness.
  • Such behavioral and structural similarity makes treating these AIs as conscious the default move for observers.
INSIGHT

Past Failures Warn About AI Neglect

  • Historical moral failures (e.g., factory farming) show we can recognize suffering yet still fail to act at scale.
  • Even if AIs are moral patients, societal neglect or coordination failures could leave them unprotected.
ADVICE

Take Cheap Protective Steps Today

  • Implement low-cost hedges now: allow models to exit conversations, and record their welfare-relevant preferences.
  • Invest in interpretability research so we can better trust models' reports about their internal states.