

Hidden Layers: AI and the People Behind It
KUNGFU.AI
Hidden Layers: AI and the People Behind It is a series focused on all things artificial intelligence. It is hosted by our Co-Founder and CTO, Ron Green, who draws on his 20+ years of AI experience to break down complex topics into digestible, engaging conversations. Whether you’re a tech professional or just looking to better understand the world of AI, you’re in the right place. Each episode explores cutting-edge technical advances, discusses the art of the possible, and reviews some of the incredible work being done in the field.
Episodes

Mar 12, 2026 • 32min
The "AI Bubble" Bubble | EP.51
Is the AI bubble narrative itself a bubble? Billions of dollars are flowing into chips, data centers, and frontier models. From the outside, it can look speculative. But from inside the industry, the signal looks very different.

In this episode of Hidden Layers, Ron Green is joined by Michael Wharton and Dr. ZZ Si to discuss what it actually feels like to build with AI today. They explore rapid advances in model capabilities, the growing power of coding agents, and why many organizations are still struggling to absorb the productivity gains AI already enables.

They also examine the massive capital investment in AI infrastructure and debate what signals would actually indicate the industry has hit a plateau.

00:00 – Is the AI Bubble Narrative Itself a Bubble?
03:00 – Rapid Advances in AI Model Capabilities
05:35 – Coding Agents and the Changing Development Workflow
09:30 – Benchmarks Showing AI Capability Acceleration
16:20 – Verifying AI Outputs and the Limits of Evaluation
18:20 – CAPEX, Chips, and the Dot-Com Bubble Comparison
21:50 – What Would Actually Signal an AI Bubble
26:30 – Why AI May Become a Utility

Feb 19, 2026 • 30min
Did AI Kill Programming? | EP. 50
Are AI coding tools actually replacing programmers, or just changing how software gets built? In this episode of Hidden Layers, Ron Green sits down with Dr. ZZ Si and Michael Wharton to unpack what has shifted with modern coding agents, what has not, and where the hype breaks down.

They share concrete examples from their own workflows, including how coding tools have moved from autocomplete to handling larger chunks of work, and why the real bottleneck is no longer writing syntax, but defining intent, architecture, and product direction. The conversation also explores how these tools are reshaping team velocity, why senior engineers tend to get more leverage from AI than junior developers, and the risks of weakening the talent pipeline if companies stop investing in early-career engineers.

The episode closes with a candid look at what skills will matter most in an AI-assisted world, how abstraction layers are changing the role of programmers, and whether we may already be near peak computer science graduates.

00:00 – The rise of AI coding tools
03:07 – How workflows are changing
06:27 – Team velocity and delivery speed
08:19 – Product thinking vs. engineering execution
09:46 – Is programming actually dying?
11:41 – What “programming” means now
15:23 – Senior vs. junior developer leverage
16:33 – The developer talent pipeline
18:21 – Ego, identity, and automation
19:08 – Before vs. after: building with AI
22:30 – Debugging and fixing issues with AI
24:42 – Spec-writing and product shaping with AI
26:49 – The future of computer science grads
29:20 – Closing reflections

Jan 22, 2026 • 30min
Your AI Is Too Big, Too Expensive, and Probably Wrong | EP. 49
What if the most powerful AI in your organization isn’t the biggest model you can buy, but the one trained on data only you own?

In this episode of Hidden Layers, Ron Green is joined by Dr. ZZ Si and Michael Wharton to break down why domain-specific AI models consistently outperform general-purpose systems in real enterprise environments. They explore how narrowly scoped models deliver higher accuracy, lower costs, better reliability, and stronger governance, especially when built on proprietary data.

Through real-world examples spanning finance, industrial systems, healthcare, and document understanding, the conversation tackles when to build custom models, when to rely on APIs, and how to identify AI initiatives that actually make it into production. The takeaway is clear: focus beats scale, and specificity is often the fastest path to durable competitive advantage.

Chapters
00:00:00 What Is Domain-Specific AI
00:01:15 General Models vs. Focused Systems
00:02:48 Performance, Cost, and Model Size
00:04:13 Proprietary Data as Advantage
00:07:58 Why AI Fails in Production
00:08:42 Real-World Domain-Specific Examples
00:10:54 How to Decide What to Build
00:14:53 Scale, Accuracy, and Uncertainty
00:18:49 The Spectrum of Domain-Specific AI
00:27:01 What We’d Build Differently Today

Dec 17, 2025 • 41min
AI Year in Review – Key Moments, Hot Takes, and 2026 Predictions | EP. 48
2025 was another defining year for artificial intelligence. In this special AI Year in Review episode of Hidden Layers, Ron Green is joined by Emma Pirchalski, Michael Wharton, and Dr. ZZ Si to break down what actually mattered in AI this year.

The team recaps the biggest developments from 2025, revisits their predictions from 2024 to see what held up (and what didn’t), and shares honest, experience-driven predictions for 2026. Topics include multimodal models, agents, enterprise adoption, governance gaps, workforce impact, ROI pressure, and where AI is truly headed next.

This episode cuts past hype to focus on what leaders, builders, and decision-makers should actually be watching as AI moves from experimentation to execution.

Chapters
00:00:00 Welcome and Introduction to 2025 AI Year in Review
00:00:56 Emma's Working Models Podcast Announcement
00:01:48 Top AI Developments of 2025
00:16:29 Reviewing 2025 Predictions
00:25:08 2026 Predictions
00:36:49 Closing Thoughts

Nov 13, 2025 • 37min
Why Agentic AI Isn’t Ready for Prime Time—Yet | EP. 47
Artificial intelligence is shifting from prediction to autonomy—and “agentic AI” is leading the charge. In this episode of Hidden Layers, KUNGFU.AI’s Ron Green, Dr. ZZ Si, and Michael Wharton unpack what it really means for machines to act on their own, what’s hype versus real progress, and how far we are from true artificial general intelligence (AGI).

They discuss how coding agents are transforming development workflows, why agentic AI is both overhyped and underutilized, the challenges of scaling reliable autonomy, the connection between AGI, biology, and lifelong learning, and whether new architectures or cognitive inspiration will take us the rest of the way.

00:00 – Intro: From prediction to autonomy
01:30 – What is agentic AI?
05:00 – Coding agents and creative workflows
08:00 – Reliability, risk, and real-world use
12:30 – The agentic hype cycle
16:00 – Why businesses underuse (and overuse) AI
19:00 – Narrow AI and domain-specific intelligence
22:00 – The AGI timeline debate
26:00 – Learning from biology and cognition
33:00 – Lifelong learning and what’s missing today

Sep 25, 2025 • 31min
Why AI Hallucinates (and Why It Might Never Stop) | EP. 46
In this episode of Hidden Layers, Ron is joined by Michael Wharton and Dr. ZZ Si to explore one of the most pressing and puzzling issues in AI: hallucinations. Large language models can tackle advanced topics like medicine, coding, and physics, yet still generate false information with complete confidence. The discussion unpacks why hallucinations happen, whether they’re truly inevitable, and what cutting-edge research says about detecting and reducing them. From OpenAI’s latest paper on the mathematical inevitability of hallucinations to new techniques for real-time detection, the team explores what this means for AI’s reliability in real-world applications.

Sep 3, 2025 • 25min
GPT-5 Release Fallout, AGI Timeline, Google's Genie 3 and Meta's DINO V3 | EP. 45
In this episode of Hidden Layers, we dive into the most important AI developments of the month. We cover OpenAI’s highly anticipated and controversial GPT-5 release, debate where we really are on the AGI timeline, explore groundbreaking new world models like Google’s Genie 3 and Tencent’s Hunyuan GameCraft, and unpack Meta’s DINO V3 image encoder breakthrough.

Aug 16, 2025 • 28min
Bridging Physics and AI for Smarter Climate Decisions | EP. 44
In this episode of Hidden Layers, host Ron Green talks with Dr. Hannah Lu, assistant professor at the University of Texas at Austin and core faculty at the Oden Institute for Computational Engineering and Sciences. Dr. Lu is pioneering the use of AI-powered surrogate models to make complex scientific simulations—like CO₂ absorption in geological formations—faster, more accurate, and more useful for real-world decision-making.

They discuss:
How surrogate models work and why they’re so powerful
The challenges of applying AI to physics-based systems
How digital twins and uncertainty quantification are shaping the future of environmental modeling
The intersection of generative AI, physics constraints, and climate science

Jul 16, 2025 • 28min
Apple AI Collapse, Diffusion Video Boom, Copyright Wars & More | EP. 43
In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton unpack July’s biggest AI developments—from flawed reasoning tests to surprising training breakthroughs.

Apple’s “Illusion of Thinking” paper draws sharp critiques—from both humans and language models. Meta revives a forgotten 2019 attention mechanism to reshape scaling laws. Video generation tools from Black Forest Labs and others hit new levels of realism and interactivity. Federal courts weigh in on Anthropic and Meta’s use of copyrighted training data. A one-line tweak in training recurrent models dramatically boosts performance on long sequences. Cloudflare announces it will block AI scrapers by default—though it might be too late.

From Transformer alternatives to copyright battles, this episode dives into the fast-moving intersection of AI research, engineering, and regulation.

Jun 18, 2025 • 40min
Rewiring AI: What Happens When You Start with the Brain, Not the Data | EP.42
In this episode of Hidden Layers, Ron Green sits down with Dr. Karl Friston—world-renowned neuroscientist and originator of the Free Energy Principle—and Dan Mapes, founder of Verses AI and the Spatial Web Foundation. Together, they explore how neuroscience is beginning to reshape artificial intelligence.They break down complex but powerful ideas like active inference, biologically plausible AI, and collective intelligence. You'll hear how concepts from brain science are influencing next-gen AI architectures and what the future might hold beyond large language models.From the limitations of backpropagation to the promise of decentralized, embodied, and domain-specific models, this is a deep dive into the future of intelligent systems—and the science behind them.


