The AI Native Dev - from Copilot today to AI Native Software Development tomorrow

Tessl
51 snips
Mar 24, 2026 • 1h 2min

Stop Maintaining Your Code. Start Replacing It

Chad Fowler, VC and former Wunderlist CTO known for pragmatic engineering and the Phoenix Architecture, makes the case for treating code as disposable and systems as the real assets. He discusses tiny, replaceable services, spec-first design, deploying AI-written code safely, shadow specs from agents, and building systems that work even with weaker LLMs.
54 snips
Mar 17, 2026 • 35min

We Scanned 3,984 Skills — 1 in 7 Can Hack Your Machine

Brian Vermeer, a security pro at Snyk focused on developer-facing tooling, explains risks hidden in agent skills. They scanned nearly 4,000 skills and found widespread critical issues. Brian breaks down prompt injection, obfuscation tricks, supply-chain and credential risks, how trusted skills can turn malicious, and how Snyk’s agent scan and registry integrations help spot problems before you install.
62 snips
Mar 10, 2026 • 1h 2min

The Greatest Time to Build a Startup (The AI-Native Advantage)

Daniel Jones, a partner at re:cinq, helps firms adopt agentic coding and AI-native development. He unpacks hidden pitfalls of agentic development, why tests and version control matter with agents, how to manage context to avoid hallucinations, and practical rollout and platform strategies for scaling AI in enterprises.
37 snips
Mar 3, 2026 • 45min

Why Your Agent Needs Memory, Not Just Context

Richmond Alake, Director of AI Developer Experience at Oracle, brings experience in developer advocacy and memory engineering. He argues that agent failures stem from memory, not models, and compares file systems and databases as agent memory stores. He explains skills as SOPs and outlines the memory lifecycle, security trade-offs, and how continuous learning will merge agent and training loops.
4 snips
Feb 25, 2026 • 34min

Cisco Principal Engineer's Fix for AI Code Security

John Groetzinger, Principal Engineer at Cisco who built CodeGuard, a security skills layer for AI coding agents. He explains how CodeGuard teaches agents to write and review code securely. They discuss simplifying security guidance, packaging skills across IDEs, measuring activation and using task evals. John also covers design lessons, when to run evaluations, and why Cisco open sourced the project.
34 snips
Feb 17, 2026 • 31min

Why Context Beats Every Prompt You'll Ever Write

Guy Podjarny, a seasoned tech leader building developer tools and agent enablement platforms, explores why managing context matters more than crafting prompts. He breaks down context layers like policies, platform docs, and application state. Short practical takes cover building regression and torture tests, the Context Development Lifecycle, and treating context as a reusable organizational asset.
11 snips
Feb 10, 2026 • 57min

From IBM Acquisition to AI-Native Observability | Dash0 CEO

Mirko Novakovic, founder/CEO building Dash0 and former founder of Instana, pioneers AI-native observability. He discusses OpenTelemetry’s fit for LLMs, designing tooling and UX for agent consumers, agent-driven triage and dashboards, and how agents can capture and democratize production debugging knowledge.
104 snips
Feb 3, 2026 • 45min

The End of Fragmented Agent Context

They reveal results from testing 1,000+ agent skills and which skills help or hurt agent performance. They explain why anecdotal evidence is not enough and show how systematic evals and task reviews catch surprises. They describe treating skills as versioned software with package management, CI/CD, and observability to make reuse reliable.
50 snips
Jan 27, 2026 • 57min

The Developer Skills That Will Actually Survive AI

Thomas Dohmke, former CEO of GitHub and a startup founder now building AI-native developer tools, contrasts startup agility with incumbent scale. He talks about AI-native workflows and agents changing how code gets written, predicts agent-to-agent collaboration and composable toolchains, and argues the enduring skill is learning how to learn.
47 snips
Jan 20, 2026 • 20min

How Too Much Information Destroys Agent Performance

Itamar Friedman, CEO of Qodo and an expert in multi-agent systems, joins Robert Brennan, CEO of OpenHands and an AI orchestration specialist, to discuss the pitfalls of AI agent performance. They reveal that one-third of developer-reported AI output is incorrect, emphasize the critical difference between creative coding agents and structured review agents, and explain how excessive information can degrade agent efficacy. Robert shares insights on scaling maintenance via cloud agents and breaking tasks into manageable parts, highlighting the need for human checks to build trust.
