
Channels with Peter Kafka Ronan Farrow and Andrew Marantz on Sam Altman’s Trust Problem
Apr 13, 2026

Andrew Marantz, New Yorker staff writer who covered OpenAI, and Ronan Farrow, investigative reporter and author, discuss Sam Altman's trust challenges. They explore his tailored pitches to different audiences, unpack the 2023 ouster and rebound and the internal governance chaos, weigh the risks of pushing ChatGPT public, and consider what structural oversight might look like.
AI Snips
Altman's Shifting Promises Create Governance Risk
- Sam Altman's alleged pattern of making different promises to different audiences creates unique governance risks for AI because the stakes are existential.
- The New Yorker's reporting uncovered withheld documents (the WilmerHale report, Ilya Sutskever's memos) showing deliberate secrecy that amplified those risks.
Founding Promises Raise Stakes When Broken
- OpenAI's founding pitch framed AI as uniquely dangerous and requiring nonstandard safeguards, which heightens the impact when those promises are reversed.
- Promises included avoiding profit-seeking, a rapid pace, race dynamics, and infrastructure in autocracies; reversals signal mission creep.
Chameleon Persuasion Tailored To Each Audience
- Altman persuades each group by mirroring its priorities: safety concerns for engineers, growth for investors, regulation for the public.
- This chameleon-like persona lets him win trust across audiences despite inconsistent messaging.
