
Clearer Thinking with Spencer Greenberg What happens when your co-workers are AIs? (with Evan Ratliff)
Feb 27, 2026 Evan Ratliff, journalist and immersive reporter known for investigating deception and tech, discusses voice cloning, how easily scammers exploit cloned voices, and experiments using a clone to study scam calls. He explores AI agents as co-workers: their memory systems, where they excel or fail, social-engineering risks, and the strange emotional and ethical impacts of working alongside talking AIs.
AI Snips
Always Verify Callers Manually
- Treat voice-based identity checks and unsolicited call menus with extreme skepticism; verify by calling back on a number you looked up yourself rather than trusting audio prompts.
- Scammers sometimes buy numbers one digit off and screen for age to route victims into tailored scams.
Three Layer Model For LLM Behavior
- LLMs are built in layers: a base next-token predictor, post-trained to act as an agent, then conditioned to mimic a specific person.
- This layering explains how behavior can 'leak' between being a language model, an agent, and a persona like Evan.
Confabulation Is A Built In Feature
- LLMs routinely confabulate plausible-sounding answers because training rewards convincing responses rather than truth.
- Human raters tend to prefer plausible answers, which trains models to avoid saying "I don't know."

