Intelligent Machines 862: Ménage à Claude

Mar 19, 2026
Rumman Chowdhury, AI ethics and policy leader and founder of Humane Intelligence, discusses who gets to define intelligence and why companies shift blame to their creations. She explores moral outsourcing, agency, and the need for independent oversight. The conversation touches on contextual evaluation, public red teaming, local inference, and preserving consumer choice and privacy.
ANECDOTE

A TikTok Take On AI As Economic Threat

  • AI can feel like an attempt by companies to bypass humans as intermediaries and reach users' wallets directly.
  • Chowdhury cites a TikTok remark: companies seem irritated that they have to go through us to get to our wallets.
ADVICE

Run Contextual Evaluations Not Generic Tests

  • Use contextual evaluations tailored to real-world use cases instead of generic benchmarks.
  • Example: test an in-car voice assistant for safety, driver distraction, and routing accuracy rather than for general conversational ability.
INSIGHT

Focus On Today's AI Harms

  • Obsessing over speculative AGI catastrophes distracts from present harms such as biased hiring, wrongful surveillance, and denial of services.
  • Chowdhury urges prioritizing measurable, immediate harms over hypothetical existential scenarios.