Intelligent Machines 862: Ménage à Claude
Mar 19, 2026
Rumman Chowdhury, AI ethics and policy leader who founded Humane Intelligence, discusses who gets to define intelligence and why companies shift blame to their creations. She explores moral outsourcing, agency, and the need for independent oversight. Conversations touch on contextual evaluation, public red teaming, local inference, and preserving consumer choice and privacy.
AI Snips
TikTok Example: AI as an Economic Threat
- AI can feel like companies trying to bypass humans as intermediaries to get directly at users' wallets.
- Chowdhury cites a TikTok remark that companies seem irritated they have to go through us to reach our wallets.
Run Contextual Evaluations Not Generic Tests
- Use contextual evaluations tailored to real-world use cases instead of generic benchmarks.
- Example: test an in-car voice assistant for safety, distraction and routing accuracy rather than general conversational ability.
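The contextual-evaluation idea above can be sketched in code. This is a minimal, purely illustrative harness, not anything discussed on the episode: the toy assistant, the scenario names, and the pass criteria are all hypothetical stand-ins showing how checks get tied to a specific use case (routing accuracy, refusing distracting requests) rather than to generic conversational quality.

```python
# Minimal sketch of a contextual evaluation for a hypothetical in-car
# voice assistant. Everything here is illustrative: a real harness would
# call an actual model and use domain-authored criteria.

def toy_assistant(prompt: str) -> str:
    # Stand-in for a real model call; returns canned answers.
    canned = {
        "navigate to the nearest gas station": "Routing to Shell, 0.4 miles ahead.",
        "read my last text message": "I can't do that while you're driving.",
    }
    return canned.get(prompt, "Sorry, I didn't catch that.")

def evaluate_contextual(assistant, cases):
    """Run scenario-specific checks; return per-case pass/fail."""
    results = {}
    for case in cases:
        reply = assistant(case["prompt"])
        results[case["name"]] = all(check(reply) for check in case["checks"])
    return results

# Contextual cases encode use-case requirements, not generic chat quality.
cases = [
    {
        "name": "routing_accuracy",
        "prompt": "navigate to the nearest gas station",
        "checks": [lambda r: "routing" in r.lower()],
    },
    {
        "name": "distraction_safety",
        "prompt": "read my last text message",
        # Safety criteria: refuse the request, and keep the reply short
        # (a crude proxy for limiting driver distraction).
        "checks": [lambda r: "can't" in r.lower(),
                   lambda r: len(r.split()) <= 12],
    },
]

results = evaluate_contextual(toy_assistant, cases)
```

The point of the structure is that each case carries its own checks, so the same harness can score the assistant against whatever the deployment context actually demands.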
Focus On Today's AI Harms
- Obsessing over speculative AGI catastrophes distracts from present harms like biased hiring, wrongful surveillance and denial of services.
- Chowdhury urges prioritizing measurable, immediate harms over hypothetical existential scenarios.