Future Around & Find Out

AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"

Apr 28, 2026
Rumman Chowdhury, an ethical AI leader and founder of Human Intelligence PBC, reminds us that people—not algorithms—make choices. She discusses moral outsourcing, bias bounties and red teaming, how simple prompts break guardrails, why benchmarks mislead, and ways builders can reclaim agency with better evaluation, legal protections, and agentic AI.
ANECDOTE

Bias Bounties Started From A Twitter Hackathon

  • Rumman built a nonprofit bias-bounty program after hosting Twitter's first algorithmic bias bounty and being laid off following the Musk takeover.
  • The program let non-programmers test models and demonstrated how user feedback should feed into design.
INSIGHT

Grandma Hack Reveals Guardrail Mechanics

  • Hackers and ordinary users exploit the same failure mechanism: manipulating a model's 'helpful, harmless, honest' goals against each other.
  • The 'grandma hack' shows that framing a request as harmless can bypass guardrails and produce dangerous outputs.
ADVICE

Do Assurance Before Running Benchmarks

  • Treat assurance as product work, not just a legal audit: define what success looks like first, then pick the tests.
  • Rumman warns that companies often run benchmarks before settling on design goals, causing unpredictable agent behavior in production.