AI Security Podcast

TechRiot.io
15 snips
Mar 18, 2026 • 51min

Questions Every CISO Must Ask AI Security Vendors

They cut through RSAC hype to examine the surge of AI agents and why definitions matter. The conversation highlights enterprise search as critical context for internal AI. They argue for a centralized AI platform within security teams and stress asking vendors about API access and observability. They warn about rapid zero-day exploitation and the push to consolidate vendors while building internal automation.
42 snips
Mar 5, 2026 • 60min

Will Foundation Models Kill Security Startups?

Did Anthropic just kill the AppSec industry? Following the announcement of Claude Code Security, a tool that finds, reasons about, and fixes code vulnerabilities, major security stocks dropped by 8%. In this episode of the AI Security Podcast, Ashish and Caleb break down the reality behind the hype. Caleb explains why using AI for SAST (Static Application Security Testing) is "a no-brainer," noting that many open-source projects and startups have already been doing exactly what Anthropic announced. We discuss why this actually validates the shift toward AI-automated remediation.

The conversation goes deeper into the future of the cybersecurity market: Will giant foundation models start acquiring security companies? Will they offer "premium gas" (cheaper tokens) for building on their platforms? And most importantly, what does this mean for AppSec engineers whose jobs involve triaging false positives?

Questions asked:
(00:00) Introduction: The Claude Code Security Announcement
(02:50) What is Claude Code Security? (Finding & Reasoning About Vulns)
(03:50) Market Overreaction: Why Security Stocks Dropped 8%
(05:10) Why AI-Powered SAST Is Not New (OpenAI & Open Source Doing It Already)
(07:20) Will AI Take AppSec Jobs? (Triaging False Positives)
(09:00) "Shift Left" on Steroids: Auto-Fixing and PR Submission
(11:30) The Threat to Legacy Vendors: Why CrowdStrike's Moat Is Safe
(14:30) Historical Context: AI Is the New Calculator/Typewriter
(18:20) The "Gasoline" Theory: Foundation Models as Fuel
(21:00) Will Anthropic Acquire Security Startups?
(26:30) Anthropic's Go-To-Market Strategy: Building AI SOCs
(33:30) Startup Survival: Can Innovation Outpace Big Tech?
(41:30) The Future of Threat Intel: Is the Legacy Moat Disappearing?
(48:20) Negotiating with Vendors Using AI Leverage
(53:30) Using Evals for Organizational Anomaly Detection
38 snips
Feb 11, 2026 • 47min

How to Build Your Own AI Chief of Staff with Claude Code

Caleb Sima, a venture investor and AI builder, created Pepper, a multi-agent AI chief of staff. He explains how he built Pepper with Claude Code, and how it spins up specialist agents and even builds its own tools. They cover rapid prototyping, auto-generating branding and websites, black-box testing that files GitHub bugs, and security risks from app sprawl and shared memory.
24 snips
Jan 28, 2026 • 1h 1min

AI Security 2026 Predictions: The "Zombie Tool" Crisis & The Rise of AI Platforms

Predictions about an incoming “zombie tool” crisis where unmaintained internal AI tools rot as staff churn. Debate over rising, possibly fixed AI token costs and the shift from many features to centralized AI platform teams. Discussion of a capability plateau where models improve but feel the same, plus persistent threats like prompt injection and identity-related “confused deputy” risks.
56 snips
Jan 23, 2026 • 51min

Why AI Agents Fail in Production: Governance, Trust & The "Undo" Button

Dev Rishi, GM of AI at Rubrik and former Predibase CEO, shares lessons from building and deploying generative AI for enterprises. He discusses why agents stall in read-only mode, the three top IT fears—shadow agents, governance, and the need to undo damage—and the concept of Agent Rewind. The conversation also covers real-time policy enforcement, using small language models as judges, and protocol debates like MCP vs A2A.
17 snips
Dec 19, 2025 • 1h 3min

AI Security 2025 Wrap: 9 Predictions Hit & The AI Bubble Burst of 2026

Reflecting on 2025, the hosts reveal their accuracy in predictions, triumphantly hitting 9 out of 9. They discuss the impact of SOC automation, the struggles of AI production systems, and the surge in AI Red Teaming amid rising costs. Looking to 2026, they boldly predict the inevitable bursting of the AI bubble and the rise of self-fine-tuning models. They raise eyebrows over the role of 'AI Engineers' and share insights on data security's increasing importance due to regulatory pressures. A year-end wrap that’s both insightful and entertaining!
5 snips
Dec 10, 2025 • 39min

AI Paywall for Browsers & The End of the Open Web?

Cloudflare's new policy requires AI bots to pay for crawling web content, raising questions about the future of the open web. The hosts discuss how this could lead to a system where information is treated as currency. They explore the security implications, emphasizing the need for strict identity checks for AI and human access. A new open-source browser, Ladybird, is introduced as a competitor to Chromium, focusing on payment integration for content. The idea of browsers becoming payment gateways is also examined, hinting at a shift toward consumer micropayments.
19 snips
Dec 3, 2025 • 51min

Build vs. Buy in AI Security: Why Internal Prototypes Fail & The Future of CodeMender

The debate on whether to build or buy AI security tools heats up with insights on Google's CodeMender, which autonomously finds and fixes vulnerabilities. The challenges of scaling prototypes into production-grade solutions lead to alarming failures within 18 months. They discuss incentives for internal teams that drive unnecessary AI expansion, potentially igniting an AI bubble. Predictions emerge about the shift towards auto-personalized security products that adapt to environments, as the hype around 'agentic AI' raises more questions than answers.
49 snips
Nov 6, 2025 • 58min

Inside the $29.5 Million DARPA AI Cyber Challenge: How Autonomous Agents Find & Patch Vulns

Michael Brown, Principal Security Engineer at Trail of Bits and leader of the Buttercup project in DARPA's AI Cyber Challenge, shares insights into building autonomous AI systems for vulnerability detection. He reveals how Buttercup, despite his initial skepticism, impressed with high-quality patch generation thanks to a "best of both worlds" approach combining AI with traditional methods. Michael also discusses the competition's unique challenges, the importance of robust engineering, and practical tips for applying AI to security tasks. The future of Buttercup aims at automatic bug fixes at scale for the open-source community.
34 snips
Oct 23, 2025 • 52min

Anthropic's AI Threat Report: Real Attacks, Simulated Competence & The Future of Defense

Dive into the alarming findings of a recent AI Threat Intelligence report. Discover how AI-enabled biohacking and extortion strategies are transforming cybercrime. Learn about North Korean IT workers leveraging AI to simulate technical skills for Fortune 500 jobs. Explore the rise of ransomware-as-a-service, making sophisticated attacks accessible to less skilled actors. The discussion also highlights gaps in identity verification and the complexities of AI in scaling fraud and malware, revealing a landscape where AI is professionalizing existing threats.
