

The Trajectory
Daniel Faggella
What should be the trajectory of intelligence beyond humanity? The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.
Episodes

May 8, 2026 • 1h 52min
Terrence Deacon - AI Is a “Deep Fake of Intelligence” (Worthy Successor, Episode 29)
Terrence Deacon, emeritus UC Berkeley cognitive scientist and author, explores why modern LLMs are convincing imitations of intelligence rather than true meaning-makers. He contrasts symbolic language with embodied life, explains teleodynamics and life’s normativity, and discusses ethical tests for architectures that can genuinely value and suffer. Short, provocative, and wide-ranging.

Apr 24, 2026 • 2h 14min
Vincent C. Müller - AI Is Accelerating - But Toward What? (Worthy Successor, Episode 28)
Vincent C. Müller, Alexander von Humboldt Professor for Ethics of AI, is a philosopher of the long-term implications of AI. He reflects on how deep learning and LLMs surprised researchers, the limits of current systems (such as sensorimotor integration), the importance of reflective goal management, and why governance, responsibility, and concrete aims matter as AI accelerates.

Apr 17, 2026 • 1h 42min
Lee Spector - The Next Phase of Evolution Is Artificial (Worthy Successor, Episode 27)
Lee Spector, an Amherst professor known for evolutionary computation and genetic programming, discusses evolving executable programs and autoconstructive evolution. He contrasts evolutionary methods with LLMs, explores how evolution can produce radical novelty and new neural architectures, and reflects on recognizing alien intelligences and what a flourishing far future might require.

Apr 3, 2026 • 1h 38min
Aza Raskin - Why AGI Demands New Global Coordination (AGI Governance, Episode 12)
This is an interview with Aza Raskin, co-founder of the Center for Humane Technology and co-founder of the Earth Species Project. His work has focused on the societal impacts of technology systems and how incentives shape large-scale human behavior.

In this episode, Aza frames AGI governance as part of a broader pattern: when technology confers new forms of power, it creates races to exploit that power - and without coordination, those races tend toward harmful outcomes. The implications of AI, in his view, extend beyond technical risk into the manipulation of language, relationships, and the very substrate of human coordination.

This episode referred to the following other essays and resources:
-- Craig Mundie – Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8): https://danfaggella.com/mundie1/

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/UF9geTZpG5A
See the full article from this episode: https://danfaggella.com/raskin1

About The Trajectory:
AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life? From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.

Ask questions of our speakers in our live Philosophy Circle calls: https://bit.ly/PhilosophyCircle

Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai

Mar 20, 2026 • 1h 45min
Ben Goertzel - The Primordial Soup of AGI Minds (Worthy Successor, Episode 26)
Ben Goertzel, AI researcher and SingularityNET founder, reflects on science fiction, psychedelics, and philosophy shaping his cosmic view. He describes a decentralized “primordial soup” of cooperating AGIs and explains Hyperon and infrastructure for bottom-up intelligence. He contrasts LLM-centric paths with alternative architectures and discusses openness, safety trade-offs, and economic drivers shaping AGI’s future.

Mar 13, 2026 • 1h 56min
Luciano Floridi - How Life Could Flourish in the Information Ocean (Worthy Successor, Episode 25)
Luciano Floridi, Yale philosopher who founded the philosophy of information, explains the infosphere as our fused digital-analog habitat. He discusses new digital agents and how onlife experience changes identity, ethical frameworks reframing misinformation as pollution, the moral status of information organisms, brain implants, and criteria for recognizing flourishing beyond humanity.

Mar 6, 2026 • 2h 14min
Weaver Weinbaum - Designing Intelligence for Freedom and Care (Worthy Successor, Episode 24)
Weaver Weinbaum, an independent researcher and founder of NUNET, blends philosophy, engineering, and intelligence studies. He explores whether the universe enables freedom, how intelligence can expand goals instead of just optimizing them, and the idea of open-ended intelligence. Short takes cover attention as a moral act, expanding care and freedom, tensions with optimization, and democratizing science and AI governance.

Feb 27, 2026 • 1h 47min
Francis Heylighen - The Self-Organizing Universe After Humans (Worthy Successor, Episode 23)
Francis Heylighen, a complexity theorist and cybernetics professor, explores evolution as a universal process. He discusses self-organization, metasystem transitions, and how higher-level systems and AI can reshape cognition and society. The conversation touches on human identity as process, the cultural shifts needed to steer change, and signs of a positive trajectory toward greater integration and flourishing.

Feb 13, 2026 • 2h 23min
Stephen Wolfram - In a Sea of Complexity, Does a “Successor” Exist? (Worthy Successor, Episode 22)
Stephen Wolfram, founder of Wolfram Research and creator of Mathematica, reframes intelligence as one pattern in a vast computational universe. He explores computational irreducibility, how simple rules yield surprising complexity, and the idea of the Ruliad as the space of all computations. He also discusses whether concepts like goodness or suffering extend beyond human contexts and what that means for posthuman futures.

Jan 30, 2026 • 3h 47min
John Smart - Evolution from Cells to Super-intelligence (Worthy Successor, Episode 21)
John M. Smart, futurist and director of the EvoDevo Institute, maps intelligence as staged development from chemistry to digital minds. He discusses metasystem transitions, computation densification, speed gaps between humans and digital cognition, evo-devo constraints like Hox genes, natural alignment ideas, and hopeful attractor scenarios for cooperative, accountable posthuman networks.


