Ezra Chapman #Curious

Mar 25, 2026 • 1h 48min

AI Outgrows Humanity… Then This Happens | Rich Mulholland

Rich Mulholland, entrepreneur, author and global speaker on curiosity and the future of human value. He explores how abundant AI might soften human agency, the economics and geopolitics of AI power, and why curiosity, resilience and communication will matter more. They discuss brain-computer interfaces, digital twins, and how choosing what to think may become a core human skill.
Mar 17, 2026 • 1h 40min

AI Ethics Leader: How Humans Thrive In An AI-Driven World #0041

Ileana Grosse-Buening, a global AI ethics and digital well‑being leader, champions human and planetary flourishing. She explores how AI can erode attention, trust, memory, and relationships. Topics include diverse global AI narratives, designing for wellbeing over engagement, cognitive offloading risks, and practical steps for digital wellbeing and agency.
Mar 10, 2026 • 1h 38min

Inside Silicon Valley: A Billion-Dollar Strategist on AI, Power & Leadership

What happens when artificial intelligence starts reshaping careers, companies, and the culture of work itself?

In this deep-dive conversation with Julian Lighton — Silicon Valley strategist, executive coach, and former senior leader at some of the world’s largest technology companies — we explore the real impact of AI on the workforce, leadership, and the future of careers.

While many believe AI will instantly replace millions of jobs, Julian argues the reality is more complex. AI today is transforming tasks rather than entire professions — but that shift could still dramatically reshape entry-level careers, corporate structures, and how the next generation builds their future.

We discuss:
🔹 Why up to 25% of graduate jobs could disappear in the coming years
🔹 Why AI hasn’t yet delivered the productivity boom many expected
🔹 How automation is transforming professional and technical services
🔹 The growing challenge for graduates entering the workforce
🔹 Why Silicon Valley culture has shifted from long-term company building to short-term valuation
🔹 The hidden anxiety and pressure inside modern tech companies

In this episode, we also explore:
• Why telling everyone to “follow their passion” is often bad advice
• The six principles successful people consistently follow
• Why networking still determines long-term career success
• How to rethink career strategy in an AI-driven economy
• Why understanding your strengths matters more than chasing trends

Julian argues that the biggest shift AI will bring isn’t just technological — it’s how people define work, success, and identity in a rapidly changing world.

The question isn’t whether the economy will change. It’s whether we’re prepared for the careers that will exist on the other side.

⸻

🔗 Guest & Host Links
Julian Lighton
LinkedIn: https://www.linkedin.com/in/julianlighton1/
Ezra Chapman
LinkedIn: https://www.linkedin.com/in/ezrachapman/

#podcast #artificialintelligence #futureofwork #careers #leadership #AI
Mar 4, 2026 • 1h 35min

Futurist: What Happens To Justice When Machines Make The Decisions? #0039

Alastair Wilson-Gough, futurist and global tax and restructuring expert, maps AI’s seismic impact on justice, taxation, governance and work. He discusses taxing digital consumption, compute as sovereign infrastructure, robotic dexterity tipping points, risks of outsourcing judgment to machines, and how abundance and automation reshape purpose, resilience, and global power dynamics.
Feb 24, 2026 • 1h 33min

Tech insider: The Ugly Truth About Who Really Builds The Future #0038

What if AI isn’t as intelligent as we think — but still powerful enough to reshape everything?

In this deep-dive conversation with tech investor Dan Bowyer, we explore the uncomfortable truth behind the AI gold rush — from overhyped AGI claims to the looming bubble risk no one wants to talk about.

Dan openly admits his job is to back extreme founders — the kind willing to run through walls to build the next billion-dollar company. But he also argues that we’re massively overestimating large language models… and underestimating the real transformation happening at the application layer.

From Apple’s quiet AI strategy to the fragility of today’s venture capital system, this episode unpacks what happens when synthetic intelligence collides with capital markets, geopolitics, and human psychology.

We discuss:
🔹 Why LLMs are “not that smart” — and may never reach AGI
🔹 Whether we’re at peak AI hype
🔹 The AI bubble and the hidden debt risk inside Big Tech
🔹 Why Apple — not OpenAI — could dominate the AI agent economy
🔹 How AI is reshaping healthcare, law, and manufacturing
🔹 The coming wave of autonomous agents in business
🔹 Why venture capital may be broken in Europe
🔹 Whether more women in power would reshape tech entirely

In this episode, we also explore:
• The psychology of founders who win in AI
• Why 99% of AI corporate projects fail — and why that’s a good sign
• The geopolitical shifts accelerated by AI and Trump
• The future of personal AI agents controlling your digital life
• Whether productivity gains will outpace job displacement

This isn’t just about technology. It’s about capital, power, morality — and who really controls the future.

If AI is the Fourth Industrial Revolution, the real question becomes: are we building the future responsibly — or just inflating the biggest bubble in history?

⸻

🔗 Guest & Host Links
Dan Bowyer
LinkedIn: https://www.linkedin.com/in/danbowyer/
Ezra Chapman
LinkedIn: https://www.linkedin.com/in/ezrachapman/

#podcast #artificialintelligence #venturecapital #AI #technology #economy
Feb 17, 2026 • 1h 53min

The Immense Power Of Combining AI With Engineered Biology | Thomas Gorochowski #0037

What if we could reprogram living matter the same way we program software?

In this deep-dive conversation with Professor Thomas Gorochowski, biological engineer and former Turing Fellow, we explore the rapidly emerging field of engineering biology — and how AI is accelerating our ability to rewrite the code of life itself. From reprogramming immune cells to hunt down cancer, to designing entirely new biological machines, this episode dives into how computation and biology are merging in ways that once felt like science fiction.

We discuss:
🔹 How immune cells are already being reprogrammed to target cancer
🔹 Whether we could realistically cure the majority of diseases within 10 years
🔹 Why biology may be the most sustainable technology on Earth
🔹 The rise of biological “computation” and programmable cells
🔹 Why AI models like AlphaFold are transforming drug discovery
🔹 The economic and ethical bottlenecks slowing medical breakthroughs

In this episode, we also explore:
• The concept of biological systems as self-powering computers
• Why evolution is a self-improving loop
• The limits of “scaling” in medicine and AI
• The future of biological computing and silicon-biology hybrids
• Whether we’re approaching an exponential inflection point in human health

This isn’t just about medicine — it’s about understanding the underlying operating system of life. If biology is programmable, the question becomes: who writes the code?
Feb 11, 2026 • 1h 13min

The Big Difference Between Human & Artificial Minds With Gaurav Suri, Scientist at Stanford #0036

What if the current path to Artificial General Intelligence (AGI) is a dead end?

In this deep-dive conversation with Gaurav Suri, a neuroscientist at Stanford University and co-author of The Emergent Mind, we explore the biological limits of Artificial Intelligence and why the “scale” hypothesis might be wrong.

While tech giants are betting everything on adding more data and compute, Gaurav argues that we are hitting a “scale bottleneck”. He explains why true intelligence isn’t just about processing power; it’s about having biological “needs” like hunger, thirst, and survival that drive meaningful goals. Without a body, AI may never bridge the gap to true understanding.

We discuss the mechanistic view of the mind, why AI empathy is merely “pattern matching” rather than shared experience, and why being a “human chauvinist” is the only way to ensure AI remains a tool rather than a master.

🔹 Why “scaling” data is no longer enough to create intelligence
🔹 The “Hard Problem” of consciousness: Can electricity create experience?
🔹 Why AI can write a poem but cannot feel the “surprise” of poetry
🔹 The debate on AI relationships: Can you truly fall in love with a bot?
🔹 Jevons Paradox: Why efficient AI will actually consume more human resources

In this video, we explore:
• The “Ant Colony” metaphor: How intelligence emerges from simple units
• Why AI lacks the “goal directedness” required for AGI
• The difference between “simulated empathy” and biological connection
• Why AI is vanilla: The problem with averaging out human creativity
• How to view humanity as the “Consciousness of the Universe”

This isn’t just a tech debate; it’s a neuroscience masterclass on why being “biological” is still our greatest competitive advantage in an artificial world.

👉 Watch until the end for Gaurav’s reflection on why we must remain the “choice makers” in our own lives.

🔗 Guest & Host Links
Gaurav Suri
LinkedIn: https://www.linkedin.com/in/gaurav-suri-5a68738/
Ezra Chapman
LinkedIn: https://www.linkedin.com/in/ezrachapman/

Pre-order Gaurav’s new book “The Emergent Mind”
Macmillan: https://www.panmacmillan.com/authors/gaurav-suri/the-emergent-mind/9781035088348
Amazon: https://www.amazon.co.uk/dp/B0FBWG5KR4/?bestFormat=true&k=the%20emergent%20mind&ref_=nb_sb_ss_w_scx-ent-bk-ww_k0_1_14_de&crid=T5HFUZ2VS3K6&sprefix=the%20emergent%20m

#podcast #artificialintelligence #neuroscience #TheEmergentMind #AGI #philosophy
Feb 3, 2026 • 1h 47min

The Blueprint For AI Success In 2026 | Ray Eitel-Porter #0036

What if the biggest risk of AI isn’t that it destroys humanity, but that it makes us forget how to think?

In this eye-opening conversation with Ray Eitel-Porter, Global Responsible AI Lead and author of Governing the Machine, we explore the hidden dangers of our rapid shift from “using” technology to “relying” on it.

From the findings of a shocking MIT study on cognitive decline to the rise of “Agentic AI” in 2026, this episode challenges the narrative that AI is just a productivity tool. Ray argues that we are facing a crisis of “cognitive obesity”, where outsourcing our thinking to algorithms might leave us unable to function when the machine stops.

We discuss why 2026 will be the year of the “AI Agent”, why treating AI as your best friend is a dangerous trap, and how businesses can navigate the fine line between innovation and existential risk.

🔹 Why “cognitive obesity” is the next global health crisis
🔹 The “Machine Stops” scenario: What happens if we forget how to do the work?
🔹 Why 2026 is predicted to be the year of “Agentic AI”
🔹 The dangers of emotional attachment and AI “best friends”
🔹 How to govern the machine before it governs us

In this video, we explore:
• The MIT study revealing how AI lowers cognitive engagement
• Why “Agentic AI” changes everything (from advice to execution)
• The risk of hallucinations vs. human error
• Why we need “Universal High Income” to survive the job crisis
• Practical steps to future-proof your brain against AI reliance

This isn’t just a debate about regulation; it’s a guide on how to stay cognitively fit in an age of automated intelligence.

👉 Watch until the end for Ray’s prediction on the “AI Forensics” teams of the future.

🔗 Guest & Host Links
Ray Eitel-Porter
LinkedIn: https://www.linkedin.com/in/rayeitelporter
Ezra Chapman
LinkedIn: https://www.linkedin.com/in/ezrachapman/

#podcast #artificialintelligence #governingthemachine #futureofwork #agenticAI #technologyandsociety
Jan 27, 2026 • 1h 48min

Why You Should Stop Chasing Money | Rich Mulholland, Global Entrepreneur

Rich Mulholland, entrepreneur, author and global speaker known for talks on ambition and curiosity. Conversations range from why chasing money can hollow out authenticity to redefining success by finding your “enough.” They explore how AI, longevity and changing usefulness reshape purpose, plus the future value of curiosity, communication and choosing meaningful work over constant hustle.
Jan 21, 2026 • 2h 11min

2027: The Year Everything Changes | Smartphone Pioneer David Wood

In this episode, I sit down with David Wood, a leading global futurist and pioneer of the smartphone era, to discuss the concept of “Sustainable Superabundance.”

We explore his prediction for a “phase transition” in 2027, where AI adoption suddenly accelerates like water turning to steam, and how this shift will drive the cost of energy, food, and healthcare toward zero.

David explains why we must move beyond “Universal Basic Income” to “Universal Generous Income,” the risks of bio-tech in the wrong hands, and how humanity can transition to a post-scarcity world without collapsing into chaos.
