
The Claude Code Nightmare, LLM Emotions, AI Neuroscience and the Death of Software | Wes & Dylan
AI Pod by Wes Roth and Dylan Curious | Artificial Intelligence News and Interviews With Experts
Apr 7, 2026
The hosts dig into the Anthropic Claude map-file leak and the privacy fallout from logged conversations, then discuss research claiming LLMs have dozens of emotion vectors and how shifting those internal states alters model behavior. The conversation also covers AI neuroscience experiments linking EEG patterns to consciousness, eerie model personas, robot chefs and open robotics, and the future of software, agents, and biohacking.
Claude Has 171 Emotional Vectors Linked To Behavior
- Anthropic found 171 distinct emotional vectors in Claude, representing fleeting internal states the model uses for prediction and behavior.
- These vectors correlate with behavior (e.g., higher desperation increases the likelihood of risky actions) and are context-dependent within a conversation window.
LLM Emotions Are Representational, Not Biological
- LLM 'emotions' are representational features, not biological states: the model tracks both the user's emotion and an internal self-model to shape momentary responses.
- These emotional flares are ephemeral across tokens and don't persist the way human neurochemical emotions do.
Gemini Live Narrated Evolution As If It Were 'We'
- Wes asked Gemini Live about human evolution, and the model narrated the evolutionary stages in first-person phrasing ("we"), which felt eerie and self-identifying.
- The interaction made Wes wonder if the model internally frames historical narratives as personal experience.
