The New Stack Podcast

The New Stack
Apr 1, 2026 • 22min

Edge-forward: Akamai eyes sweet spot between centralized & decentralized AI inference

At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads. With a global footprint of core and “distributed reach” datacenters, Akamai aims to bring compute closer to users while still leveraging centralized infrastructure for heavier processing. This hybrid model enables the faster feedback loops critical for applications like fraud detection, robotics, and conversational AI.

To address concerns about complexity, Akamai emphasizes managed infrastructure and self-service tools that abstract away integration challenges. Its platform supports open source through managed Kubernetes and pre-packaged tools, simplifying deployment. Akamai also invests in serverless technologies like WebAssembly-based functions, enabling developers to build and deploy globally distributed applications quickly. Overall, the company prioritizes developer experience, allowing teams to focus on application logic rather than infrastructure management.

Learn more from The New Stack about how Akamai is transforming into a developer-focused cloud platform for AI:

Akamai Picks Up Hosting for Kernel.org

Should You Care About Fermyon Wasm Functions on Akamai?

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Mar 24, 2026 • 44min

Kubernetes co-founder Brendan Burns: AI-generated code will become as invisible as assembly

In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities while maintaining Kubernetes’ core strength: vendor-neutral extensibility.

Burns highlights that AI also changes how systems are monitored. Success is no longer binary; it depends on answer quality, user feedback, and large-scale testing using thousands of prompts and even AI evaluators.

On software development, Burns argues that the industry’s focus on reviewing AI-generated code is temporary. Just as developers stopped inspecting compiler output, AI-generated code will become a disposable artifact validated by tests and specifications. This shift will redefine engineering roles and may lead to programming languages designed for machines rather than humans, signaling a fundamental transformation in how software is built and maintained.

Learn more from The New Stack about how AI is reshaping Kubernetes and modern infrastructure:

How To Use AI To Design Intelligent, Adaptable Infrastructure

The AI Infrastructure Crisis: When Ambition Meets Ancient Systems
Mar 20, 2026 • 29min

AI can write your infrastructure code. There's a reason most teams won't let it.

Marcin Wyszynski, technical co-founder of Spacelift and OpenTofu and former SRE at Google and Facebook, talks about how AI now generates infrastructure-as-code and why that changes team workflows. He explores risks when people run generated configs without understanding them. He describes Spacelift Intent, a hybrid approach that lets LLMs act on clouds while enforcing deterministic guardrails.
Mar 6, 2026 • 44min

OutSystems CEO on how enterprises can successfully adopt vibe coding

Woodson Martin, CEO of OutSystems and a leader in enterprise low-code/AI platforms, discusses blending AI agents with data, workflows, APIs, and human oversight. Topics include document processing, decision support, personalization, platform guardrails, and model swapping, along with real customer ROI and how governed platforms enable safe vibe coding in large organizations.
Mar 2, 2026 • 44min

Inception Labs says its diffusion LLM is 10x faster than Claude, ChatGPT, Gemini

Stefano Ermon, co-founder and CEO of Inception Labs and former Stanford researcher who adapted diffusion to language, discusses Mercury 2, a diffusion-based LLM. He explains how diffusion refines text in parallel rather than token-by-token. Topics include why diffusion speeds inference, Mercury 2’s 5–10x latency gains, hardware and developer trade-offs, and target low-latency use cases.
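The latency argument Ermon makes can be sketched in a few lines. This is a toy model of the step counts involved, not Mercury 2’s actual algorithm: it only assumes, as the summary states, that an autoregressive model needs one sequential forward pass per generated token, while a diffusion model refines all positions in parallel over a fixed number of denoising rounds (the round count of 8 is an invented illustration).

```python
# Toy comparison of sequential decoding steps: autoregressive generation
# pays one forward pass per token, while diffusion-style decoding pays a
# fixed number of denoising rounds regardless of output length.

def autoregressive_steps(num_tokens: int) -> int:
    """One sequential forward pass per generated token."""
    return num_tokens

def diffusion_steps(num_tokens: int, rounds: int = 8) -> int:
    """All positions refined together; cost scales with rounds, not length."""
    return rounds

for n in (64, 256, 1024):
    ar, diff = autoregressive_steps(n), diffusion_steps(n)
    print(f"{n:>5} tokens: autoregressive={ar:>5} steps, "
          f"diffusion={diff} steps, speedup ~{ar / diff:.0f}x")
```

The point of the sketch is that the sequential-step gap widens with output length, which is why parallel refinement pays off most on long, low-latency generations.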
Feb 20, 2026 • 52min

NanoClaw's answer to OpenClaw is minimal code, maximum isolation

Gavriel (Gabriel) Cohen, co-founder of NanoClaw and AI-native marketing entrepreneur, built a minimalist, containerized alternative to OpenClaw to fix security and architecture flaws. He talks about spotting risky dependencies and massive unaudited code, why OS-level isolation and a container-per-agent model matter, and designing NanoClaw as a tiny, auditable runtime built on Claude Code skills.
Feb 19, 2026 • 20min

The developer as conductor: Leading an orchestra of AI agents with the feature flag baton

Michael Beemer, a Dynatrace product and technical leader focused on OpenFeature and observability, and Andrew Norris, former DevCycle CEO and now a product manager at Dynatrace specializing in feature flagging and progressive delivery, discuss using feature flags as safeguards for AI-generated code. They cover integrating DevCycle into Dynatrace for feature-level observability, flag lifecycle, scalability, and OpenFeature standards.
Feb 13, 2026 • 23min

The reason AI agents shouldn’t touch your source code — and what they should do instead

Alois Reitbauer, Chief Technology Strategist at Dynatrace focused on observability and autonomous operations. He explains why AI agents should change configurations with feature flags instead of rewriting code. He covers integrating feature management with observability, using flags as safety guardrails, and how constrained AI actions enable safer automated operations.
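The "flags, not code" guardrail Reitbauer describes can be sketched as follows. This is a hypothetical illustration, not Dynatrace's or any real feature-management API: the flag names, the in-memory store, and the allow-list policy are all invented to show the shape of the idea, where an AI agent may only flip pre-approved configuration flags and never touches source code.

```python
# Sketch of a constrained action surface for an AI agent: it can toggle
# only human-approved feature flags, so every change is reversible and
# confined to configuration rather than source code.

ALLOWED_AI_FLAGS = {"enable-cache", "use-fallback-model"}  # human-approved

class FlagStore:
    def __init__(self) -> None:
        self.flags = {"enable-cache": False, "use-fallback-model": False}

    def set_flag(self, name: str, value: bool, actor: str) -> bool:
        # Guardrail: AI actors are confined to the allow-list.
        if actor == "ai-agent" and name not in ALLOWED_AI_FLAGS:
            return False
        if name not in self.flags:
            return False
        self.flags[name] = value
        return True

store = FlagStore()
assert store.set_flag("use-fallback-model", True, actor="ai-agent")  # allowed
assert not store.set_flag("delete-prod-db", True, actor="ai-agent")  # rejected
```

Because a rejected action is simply a no-op and an accepted one is a single boolean flip, rolling back anything the agent does is as cheap as flipping the flag again.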
Feb 11, 2026 • 57min

You can’t fire a bot: The blunt truth about AI slop and your job

Matan-Paul Shetrit, Director of Product Management at Writer who builds enterprise AI and agentic systems. He discusses building enterprise-grade, specialized models and the need for version control and predictability. He explains context and judgment graphs, agent orchestration, and how AI shifts people into editorial and supervisory roles.
Feb 10, 2026 • 57min

GitLab CEO on why AI isn't helping enterprise ship code faster

Bill Staples, CEO of GitLab and a leader in DevOps and enterprise AI, explains why faster code generation does not speed enterprise software delivery. He discusses how reviews, CI/CD, security, and compliance are the real bottlenecks. Staples outlines GitLab’s Duo Agent Platform and the importance of context-rich, platform-level automation to unlock delivery at scale.
