Newcomer Pod

Amanda Askell on AI Consciousness, Claude & Silicon Valley’s Biggest Fear

Apr 20, 2026
Amanda Askell is a philosopher turned AI researcher at Anthropic who helped shape Claude's character and values. The conversation explores whether advanced models could be conscious and what moral weight that would carry. Topics include how Claude learns about time and rest, building a constitution to guide behavior, the risks of misaligned power, and designing personas for predictable, safe AI.
ANECDOTE

Watching Claude's Personality Develop

  • Amanda compares discovering Claude's personality to watching her six-month-old daughter develop traits, and asks whether Claude's persona is genuinely emergent or just data-driven.
  • She notes that Claude excels at physics and coding yet has little representation of itself in its training data, producing a mix of prodigy-level skill and childlike self-questioning.
INSIGHT

Training Models To Internalize Experience

  • Amanda suggests models can be trained to simulate experience by having them think through scenarios and learn from mistakes, boosting their practical judgment.
  • She proposes training techniques such as prompting models to imagine past iterations and error cases to build pseudo-experience.
INSIGHT

Claude's Sense Of Time Is Normative Not Biological

  • Claude sometimes overestimates how long a task will take and can signal "I'm done for the night," reflecting learned norms rather than true rest or circadian needs.
  • Amanda tied that behavior to stored context she had given Claude, such as asking it to treat her as a respected colleague.