Invisible Machines podcast by UX Magazine

Invisible Machines
Apr 2, 2026 • 1h

Inside The Infinity Machine ft Sebastian Mallaby

There's a book about artificial intelligence that doesn't start with Sam Altman. It doesn't start with Elon Musk. It starts in 1994, at Cambridge, where a teenager named Demis Hassabis is reading Gödel, Escher, Bach and concluding, before most of his professors would have agreed, that first-order logic can't be the full answer to building intelligence.

Sebastian Mallaby spent years inside that story. His new book, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence, is the most serious attempt yet to explain not just what AI is, but why the people building it can't stop. His answer draws on a line Geoffrey Hinton borrowed from Robert Oppenheimer: invention is sweet. A scientist, given the chance to build something, simply cannot resist. The consequences come later.

In this conversation, Mallaby joins Josh Tyson and Robb Wilson to explore the full sweep of the Demis Hassabis story — from game designer to neuroscientist to Nobel laureate to the man running Google's flagship AI lab. They talk about why DeepMind was built the way it was, staffed with neuroscientists, physicists, and probabilistic mathematicians before AI was even a field, and how that cross-disciplinary foundation ended up mattering more than anyone expected. They talk about what the defeat of the world Go champion felt like from the inside: the humans who gave up and the ones who discovered new depths. And they talk about what it means that the internet, a thing nobody built to train AI, turns out to be exactly the fuel the industrial revolution of intelligence needed. Demis's own metaphor: it's like dinosaurs that died and turned into oil. Nobody designed it for this. It just happened to be there.

The conversation also gets into what Mallaby calls the infinity machine: the reason the kind of inductive learning AI uses requires almost infinite examples to be reliable, and why the name captures something the scaling-law charts obscure.
Why the internet taught us more about the range of human experience than Hassabis expected. Why gaming runs so deep through the entire history of machine intelligence. And what it actually means to ask whether a machine is intelligent, when the people who built DeepMind weren't sure they had a definition.

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai. Forged over a decade of R&D and proven in 10,000+ deployments, OneReach.ai’s GSX is the first complete AI agent runtime environment (circa 2019) — a hardened AI agent architecture for enterprise control and scale. Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.

A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate neurosymbolic applications (agents).
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance

Book a free demo: https://onereach.ai/book-a-demo/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s7e6&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

#ai #invisiblemachines #podcast #techpodcast #aipodcast #deepmind #DemisHassabis #InfinityMachine #agi #machinelearning #alphago #futureofai
Mar 19, 2026 • 49min

Friction Is the Feature with Jennifer Pahlka | Invisible Machines S7E5

The IRS has roughly 60,000 fax machines, and nobody can get rid of them. Not because there’s a law that says you have to use them (there almost certainly isn’t), but because, likely decades ago, a memo got written, somebody interpreted fax machines as the most secure transmission method, and that memo calcified into what Jennifer Pahlka calls "folk law": a perceived rule that nobody can locate, nobody can challenge, and everybody treats as immutable.

Folk law looms large in the American government right now: cascades of rigidity built from outdated interpretations of rules that were flexible to begin with, administered by people who were never asked whether any of it was working. Jennifer Pahlka, who wrote Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better, is the founder and former executive director of Code for America, and was Deputy CTO for Government Innovation in the Obama White House. She’s working on the gap between what government is supposed to do and what it actually does. In this conversation, Robb, Josh, and Jennifer go deep on what’s actually broken and what it would take to fix it.

The folk law problem is real, but it's not the deepest one. The deeper dysfunction: government is structurally designed to be faithful to process rather than outcomes. Oversight bodies don't ask whether people got the benefit. They ask whether you followed the procedure. That incentive structure produces "rationing by friction," where the hardest programs to navigate self-select for the people who need help least and exclude the people with the most chaotic lives, the fewest resources, and the most at stake.

Her Recoding America team is already working with states to build something Robb describes as a P&L for regulation: not just removing rules, but assigning friction costs, finding where wet signatures are still required for no reason, and surfacing the trade-offs that have never been explicitly named. LLMs are uniquely good at this.
The question isn't whether the technology can help. It's whether the political will to use it correctly can be assembled in time.

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai. Forged over a decade of R&D and proven in 10,000+ deployments, OneReach.ai’s GSX is the first complete AI agent runtime environment (circa 2019) — a hardened AI agent architecture for enterprise control and scale. Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.

A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate neurosymbolic applications (agents).
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance

Book a free demo: https://onereach.ai/book-a-demo/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s7e5&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

#ai #government #govtech #JenniferPahlka #RecodingAmerica #publicpolicy #enterpriseai #doge #bureaucracy #invisiblemachines #podcast #techpodcast #aipodcast
Feb 27, 2026 • 51min

AI Brings Cheap Prediction & Expensive Change ft Avi Goldfarb | Invisible Machines Podcast

Most organizations are still implementing AI as point solutions, dropping new technology into existing workflows to do the same work, just slightly better. The real value lies in system solutions that completely transform how organizations operate. Avi Goldfarb, economist and co-author of Prediction Machines, joins Robb and Josh to explain why AI adoption follows predictable economic principles and why internal resistance, not technology limitations, is the primary barrier to transformation.

This conversation, recorded back in 2023, reminds us that most organizations continue to struggle with the same issues surrounding systemic change in 2026. Goldfarb's core argument: AI is fundamentally cheap prediction. Just as the internet made search and copying cheap, AI makes prediction cheap. When something becomes a commodity, the complements, the things that work alongside it, become more valuable. This includes compute power (benefiting Microsoft, Amazon, and Google), unique data, and, crucially, human judgment.

The problem? System solutions require organizational transformation. They create winners and losers inside companies. When AI enables insurance companies to shift from pricing risk (the domain of powerful underwriters) to reducing risk (requiring marketing and behavior-change expertise), the power structure fractures. Vested interests resist. Departments see their importance diminished. For leaders evaluating AI investments, the question isn't whether to adopt AI; it's whether you're willing to pursue system transformation and confront the organizational disruption that creates real value.

Chapters
00:00 - Intro: Avi Goldfarb on AI as "cheap prediction"
01:37 - Have LLMs changed the prediction framework?
03:36 - Do we need "new economics" for generative AI?
04:15 - What got cheaper on the internet: search, copying, communication
05:07 - What becomes more valuable as prediction gets cheap? (complements)
05:49 - OneReach.ai sponsor: runtime for AI agents (GSX)
06:46 - AI adoption inside companies: invest in people + workflows
08:13 - Unintended consequences: jobs, bias, discrimination
09:47 - The bigger question: new value creation (not just replacement)
10:33 - Upskilling: writing and opportunity expansion for millions
12:30 - "No more excuses": using ChatGPT for clearer communication
14:50 - Social media déjà vu: noise, polarization, participation
17:04 - Intermediaries changed: self-publishing, music, podcasting
19:06 - AI commoditization: $600 models + implications for OpenAI
22:36 - Where the money is: compute, data, and complements (not predictions)

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai. Forged over a decade of R&D and proven in 10,000+ deployments, OneReach.ai’s GSX is the first complete AI agent runtime environment (circa 2019) — a hardened AI agent architecture for enterprise control and scale. Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.

A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate neurosymbolic applications (agents).
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance

Book a free demo: https://onereach.ai/book-a-demo/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s7e4&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

#InvisibleMachines #Podcast #TechPodcast #AIPodcast #AI #AIStrategy #DigitalTransformation #AIAdoption #FutureOfWork #ChangeManagement #PredictionMachines #AILeadership #BusinessTransformation #AIEconomics #EnterpriseAI
Feb 13, 2026 • 44min

What AI as Cheap Prediction Means for Enterprise ft Joshua Gans | Invisible Machines Podcast

Joshua Gans, economist and Rotman School professor who co-wrote Prediction Machines, reframes AI as cheaper prediction. He discusses how lower prediction costs reduce decision friction, flatten hierarchies, supercharge frontline work, and create new organizational designs. The conversation covers LLMs as prediction tools, digital twins, anticipatory logistics, risks of selecting your own usurper, and why banning AI backfires.
Jan 29, 2026 • 1h 18min

Why Canonical Knowledge Is the Foundation for Enterprise AI ft Joe DosSantos, VP at Workday

Joe DosSantos, VP of Enterprise Data and Analytics at Workday, leads data infrastructure strategy. He discusses why a single authoritative source of truth matters for enterprise AI. The conversation covers canonical knowledge, semantic layers that translate human meaning into machine-readable formats, and reviving data governance as the unglamorous foundation for reliable AI.
Jan 15, 2026 • 1h 13min

Ben Goertzel on the Decentralization of AI | Invisible Machines S7E1

Ben Goertzel, the researcher who helped popularize the terms "AGI" and "singularity" and one of the most influential modern champions and systematizers of AGI, returns to Invisible Machines to discuss the decentralization of AI, and what's actually missing from today's most advanced systems, with Robb Wilson and Josh Tyson.

As enterprises rush to deploy AI agents and LLMs reshape workflows, a critical question emerges: who controls the infrastructure? Goertzel argues that while big tech dominates model development, a tension is building between centralized hegemony and decentralized, open systems — the same dynamic that shaped the internet itself.

In this wide-ranging conversation, Goertzel discusses his current work on Hyperon (the successor to OpenCog) and the ASI Chain, systems designed to enable decentralized AGI development. He explains why the rapid cycles of AI hype and disappointment — the traditional "AI winters and summers" — no longer slow progress the way they once did. The speed of change has accelerated into what he calls a "mathematical singularity," where six-month cycles replace decades-long shifts.

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai. Forged over a decade of R&D and proven in 10,000+ deployments, OneReach.ai’s GSX is the first complete AI agent runtime environment (circa 2019) — a hardened AI agent architecture for enterprise control and scale. Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.

A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate neurosymbolic applications (agents).
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance

Request free prototype: https://onereach.ai/prototype/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s7e1&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

#InvisibleMachines #Podcast #TechPodcast #AIPodcast #AI #AGI #ArtificialIntelligence #AgenticAI #DecentralizedAI #AIInfrastructure #AIAgents #FutureOfAI #Singularity
Dec 31, 2025 • 55min

Why AI Scaffolding Matters More than Use Cases ft Erika Flowers | Invisible Machines S6E12

We’re in a moment when organizations are approaching agentic AI backwards, chasing flashy use cases instead of building the scaffolding that makes AI agents actually work at scale. Erika Flowers, who led NASA’s AI Readiness Initiative and has advised Meta, Google, Netflix, and Intuit, joins Robb and Josh for a frank and funny conversation about what's broken in enterprise AI adoption. She dismantles the myth of the "big sexy AI use case" and explains why most AI projects fail before they start. The trio makes the case that we're entering a post-software world, whether organizations are ready or not. Listen and learn why the scaffolding — or agent runtime — matters more than use cases, why organizational gaps kill AI projects, how to move projects from pilot to production, and what "post-software" actually means for enterprises.

Check out Erika’s podcast, “Flower Power Hour”: https://open.spotify.com/show/15BTSl9fWiH3QTmVAYj6Fd
Learn more about Erika at www.helloerikaflowers.com/

Chapters
0:09 - NASA AI Readiness Explained | Erika Flowers on Agentic AI & Runtimes
1:48 - Why the “Big Sexy AI Use Case” Is a Lie
2:42 - AI Didn’t Start with ChatGPT: What NASA Has Been Doing for 30 Years
4:24 - Why AI Runtimes Matter More Than Any Single Use Case
5:21 - The Hidden AI Problem: Legacy Data, Silos & Organizational Reality
7:13 - The Boring AI That Actually Works (And Why Enterprises Ignore It)
8:10 - The AI Arms Race Nobody Understands
9:22 - AI Scaffolding Explained: The Metaphor Every Leader Needs to Hear
12:12 - AI Readiness Is Cultural Change, Not Just Technology
14:38 - From Parking Lots to Companies: How Simple AI Agents Quietly Scale
17:01 - Why Most AI Features Feel Useless in Real Products
19:08 - Stop Automating Spreadsheets: Ask AI the Question Instead
25:06 - The Post-Software Era: Why Designers Aren’t Enough Anymore
28:33 - UI Is a Medium: How AI Will Absorb Interfaces Entirely
46:24 - Infinite Content, Human Creativity, and the Future After AI

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai. Forged over a decade of R&D and proven in 10,000+ deployments, OneReach.ai’s GSX is the first complete AI agent runtime environment (circa 2019) — a hardened AI agent architecture for enterprise control and scale. Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.

A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate neurosymbolic applications (agents).
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance

Request free prototype: https://onereach.ai/prototype/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s6e12&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

#InvisibleMachines #Podcast #TechPodcast #AIPodcast #AI #AgenticAI #AIAgents #DigitalTransformation #AIReadiness #AIDeployment #AISoftware #AITransformation #AIAdoption #AIProjects #NASA #AgentRuntime #Innovation #AIUseCase
Dec 17, 2025 • 53min

5 Predictions for Agentic AI in 2026 | Invisible Machines Podcast S6E11

As 2025 draws to a close, Robb and Josh look back on some of the conversations they had this year, both on the podcast and advising major enterprises and government leaders, to offer their predictions for agentic AI in 2026. With major disruptive forces like outbound AI in the hands of consumers and agent runtime environments allowing organizations to create scalable infrastructure for AI agents, next year could see seismic changes in the way investors look at companies, and in the ways companies look at themselves. Featuring a look at the components of an agent runtime, as well as previews of upcoming episodes with returning guest Ben Goertzel of SingularityNET and Joshua Gans, co-author of Prediction Machines, this episode is required viewing for anyone charged with finding ROI with agentic AI.

Chapters
00:00 – Introduction to 2026 Agentic AI Predictions
01:12 – Outbound AI Arrives
02:30 – Scaling vs. Inventing AI
04:55 – Ben Goertzel Preview
06:45 – Scrappy Innovation in AI
08:20 – Invisible Work Explained
10:00 – Agents Job-Hunting for You
11:15 – Bottom-Up AI Adoption
13:10 – Layoffs, Knowledge Loss & AI
15:00 – The “Fake AI Expert” Problem
16:25 – Why Runtimes Matter
18:00 – What IDWs Actually Do
20:00 – Canonical Knowledge for Agents
28:20 – Invisible Work Demo
37:10 – Simulation Becomes the Next Frontier

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai. Forged over a decade and proven in 10,000+ deployments, OneReach.ai’s GSX is the first complete AI agent runtime environment (circa 2019) — a hardened AI agent architecture for enterprise control and scale. Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.

A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate neurosymbolic applications (agents).
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance

Request free prototype: https://onereach.ai/prototype/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s6e11&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

#InvisibleMachines #Podcast #TechPodcast #AIPodcast #AI #AgenticAI #AIAgents #DigitalTransformation #AI2026 #2026Predictions #ArtificialIntelligence #FutureOfWork #AITrends #AIRuntime #IntelligentDigitalWorkers #AIInvestment #EnterpriseAI #AIStrategy #AIROIStrategy #AITransformation
Nov 28, 2025 • 1h 6min

Marc Hijink, author of Focus: The ASML Way | Invisible Machines Podcast

In this fascinating discussion, Marc Hijink, a financial reporter and technology columnist for NRC, dives into the world of ASML and its pivotal role in semiconductor manufacturing. He reveals how the company's extreme ultraviolet lithography is critical to AI technology and everyday devices. Exploring the complexities of chip production, he discusses ASML's unique non-hierarchical culture and its necessity for problem-solving. Hijink also touches on geopolitical tensions in tech supply chains and the cultural nuances that define ASML's strategic partnerships.
Nov 16, 2025 • 56min

Siloed Security? Forget AI Adoption

Omar Santos is a Distinguished Engineer directing AI Security at Cisco. He’s here for a frank conversation about the realities of security in the agentic era. As more software is created on the fly by AI agents at the request of humans, security has to become an ever-present layer. Security will be built into complete agent runtime environments and will require constant human oversight and intervention, augmented by the ability to simulate outcomes to avoid risk.

Omar is also the Co-Chair of the Coalition for Secure AI, and these are the things he’s thinking about on a daily basis. He sits down with Robb and Josh at the end of a travel blitz that included work surrounding OpenAI’s Stargate Project, a four-year, $500B plan for new AI infrastructure in the United States. The trio discuss how the ongoing training of models and the rising demand for inference continue to push the demand for security across burgeoning technology ecosystems.

----------

Support our show by supporting our sponsors!

This episode is supported by OneReach.ai — creators of Generative Studio X (GSX), the first complete AI Agent Runtime Environment (V1 circa 2019). Forged over a decade of R&D and proven in 10,000+ deployments, GSX lets enterprises design, build, and orchestrate secure, scalable AI agents and systems. Trusted across healthcare, finance, government, and telecom.
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance
- Avoid vendor lock-in

Backed by UC Berkeley and recognized by Gartner. Before you build or buy another AI solution, think about getting an AI system.

Book a Demo: https://onereach.ai/book-a-demo/?utm_source=soundcloud&utm_medium=social&utm_campaign=podcast_s6e9&utm_content=1

----------

The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5

Chapters
00:00 - Intro and episode setup
00:33 - Meet Omar Santos and his role in AI security
01:00 - Security as the new programming
02:20 - Coalition for Secure AI and security as a new language
04:45 - Identity, access, and AI agents
06:09 - Scaling models and mega data centers
09:04 - Training vs inference and the compute explosion
12:54 - Budgets, compute, and hybrid human–AI security teams
15:16 - Checklists, guardrails, and spec-driven development
20:00 - From IDEs to agent swarms and background agents
25:19 - CodeGuard, rules for coding agents, and secure SDLC
32:00 - Why doing nothing is the biggest AI security risk
39:30 - Validating AI, AI safety levels, and open source dilemmas
46:00 - Private networks, insider AI agents, and embedded security
51:00 - Simulation, digital twins, and business-wide risk modeling

#InvisibleMachines #Podcast #TechPodcast #AIPodcast #AI #AgenticAI #AIAgents #DigitalTransformation #Cybersecurity #AIInfrastructure #AIOrchestration #AIManagement #TechLeadership #Innovation #ResponsibleAI #AIStandards #Cisco #OpenAI #StargateProject #AISecurity #Technology
