GreenPill

Season 10, Episode 1: Full Stack AI Alignment and Human Flourishing with Joe Edelman

Oct 3, 2025
Joe Edelman, founder of the Meaning Alignment Institute, discusses AI alignment and human flourishing. He critiques existing methods such as reinforcement learning from human feedback (RLHF), advocating instead for deeper 'thick models' of value to guide both AI systems and institutions. Joe shares lessons from social media's failures and proposes four ambitious moonshots: super negotiators, public resource regulators, market intermediaries, and value stewardship agents. He emphasizes the need for collaboration across disciplines to ensure that technological advances align with genuine human values and societal well-being.
INSIGHT

Values Versus Norms

  • Distinguish between norms (shared rules) and values (personal decision heuristics) when modeling people.
  • Both can be elicited in depth and represented with grammars or type systems for consistent use across systems (see the sketch below).
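
A minimal sketch of what a type system distinguishing norms from values could look like. All names here (Norm, Value, attentionPolicies, ThickModelEntry) are hypothetical illustrations of the idea, not the Meaning Alignment Institute's actual schema.

```typescript
// A norm: a shared rule that a group expects its members to follow.
interface Norm {
  kind: "norm";
  rule: string;      // e.g. "cite sources when making claims"
  community: string; // whose expectation this is
}

// A value: a personal heuristic describing what someone attends to
// when making a meaningful choice.
interface Value {
  kind: "value";
  title: string;               // e.g. "Honest self-expression"
  attentionPolicies: string[]; // criteria attended to while choosing
}

// A discriminated union lets downstream systems (elicitation, ranking,
// evaluation) handle both consistently while keeping them distinct.
type ThickModelEntry = Norm | Value;

function describe(entry: ThickModelEntry): string {
  switch (entry.kind) {
    case "norm":
      return `Norm of ${entry.community}: ${entry.rule}`;
    case "value":
      return `Value "${entry.title}", attending to: ${entry.attentionPolicies.join(", ")}`;
  }
}

// Example usage:
const honesty: Value = {
  kind: "value",
  title: "Honest self-expression",
  attentionPolicies: ["whether I'm saying what I actually believe"],
};
console.log(describe(honesty));
```

The discriminated union is the point: one representation that any system in the stack can consume, while the type checker prevents norms and values from being silently conflated.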
ADVICE

Propagate Values Across The Stack

  • Ensure thick models travel up and down the organizational stack so user goals align with business and regulatory metrics.
  • Embed these criteria in product success metrics and bonus structures to preserve systemic integrity (a sketch follows below).
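
To make "embedding criteria in success metrics" concrete, here is a small, hypothetical sketch in the same spirit: each metric must declare which elicited value criteria it operationalizes, so unaligned metrics are visible rather than implicit. The names (ValueCriterion, SuccessMetric, unalignedMetrics) are illustrative assumptions, not an established API.

```typescript
interface ValueCriterion {
  id: string;
  description: string; // e.g. "users leave conversations feeling heard"
}

interface SuccessMetric {
  name: string;                     // e.g. "weekly_active_users"
  servesCriteria: ValueCriterion[]; // which value criteria this metric serves
}

// A simple integrity check: flag metrics that are being optimized
// without declaring any value criterion they serve.
function unalignedMetrics(metrics: SuccessMetric[]): SuccessMetric[] {
  return metrics.filter((m) => m.servesCriteria.length === 0);
}

// Example usage:
const feelingHeard: ValueCriterion = {
  id: "v1",
  description: "users leave conversations feeling heard",
};

const metrics: SuccessMetric[] = [
  { name: "weekly_active_users", servesCriteria: [] },
  { name: "post_conversation_satisfaction", servesCriteria: [feelingHeard] },
];

console.log(unalignedMetrics(metrics).map((m) => m.name)); // ["weekly_active_users"]
```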
INSIGHT

Growth Depends On What We Reward

  • Full stack alignment isn't inherently anti-growth; it depends on what the economy rewards.
  • Without integrity, markets can grow into passive, extractive economies rather than flourishing innovation systems.