The Boring AppSec Podcast

Mar 11, 2026 • 1h 1min

Ep 37: The Future of Security Testing in an AI-Driven World with Jason Haddix

Jason Haddix, CEO of Arcanum Information Security and creator of the Bug Hunter’s Methodology, blends pen-testing chops with AI tooling. He discusses how AI is automating recon and code analysis, how to embed a personal methodology into agents, prompt-injection defenses, agent orchestration, and why evaluation benchmarks matter.
Mar 2, 2026 • 50min

Ep 36: Discussing AI's Current State of Affairs

They explore how AI is reshaping AppSec workflows, from agent orchestration to new UI paradigms. They debate risks like prompt injection, secret handling, and rapid OpenClaw adoption. They discuss threat modeling as living context graphs, building accurate software inventories, and whether AppSec will merge into engineering. They close on verification, open source churn, and the gap between AI lab claims and shipped products.
Feb 16, 2026 • 50min

Ep 35: Exploring Security After Determinism with Jens Ernstberger

In this episode, we sit down with Jens to explore why AI agents fundamentally break traditional security assumptions, from API keys and browser sessions to composability and access control. Drawing parallels to DeFi exploits and smart contract failures, he explains why agent identity, short-lived delegated authorization, and zero trust aren’t optional add-ons, but the foundation for safely running autonomous systems.

We also dive into context compression as both a performance and security challenge, the real difference between MCP and skills, and a future where humans may stop reviewing code altogether. As agents become the primary actors on the internet, even writing itself begins to change in an AI-scraped world. If agents are non-deterministic by design, the real question becomes: where do we reintroduce determinism?

Tune in for a deep dive!

Connect with Jens Ernstberger:
Website: https://ernstberger.xyz/
LinkedIn: https://www.linkedin.com/in/jens-ernstberger-phd-96b0ba14a/

Connect with Anshuman:
LinkedIn: anshumanbhartiya
X: https://x.com/anshuman_bh
Website: https://anshumanbhartiya.com/
Instagram: anshuman.bhartiya

Connect with Sandesh:
LinkedIn: anandsandesh
X: https://x.com/JubbaOnJeans
Feb 2, 2026 • 57min

Security at Scale in a Probabilistic World with Ankur Chakraborty

In this episode, Ankur Chakraborty discusses the evolution of AI security, emphasizing the importance of foundational security principles in the context of generative AI. He explores the challenges of scaling security measures in an era of rapid feature deployment and the necessity of integrating AI tools into security practices. The conversation delves into the balance between human oversight and autonomous systems, the significance of context in security decision-making, and the evaluation of security tools based on their outcomes. The discussion highlights the need for better guardrails and the role of context engineering in enhancing security practices.
Jan 28, 2026 • 55min

The Future of Identity in AI Agents with Ian Livingstone

Ian Livingstone, CEO and co-founder of Keycard and a serial builder focused on developer experience, explores agent identity in the AI era. He discusses non-deterministic AI behavior and why current service accounts fail. The conversation covers fine-grained, federated permissions, the risks of agents accessing the public web, and how liability, insurance, and engineering practices must evolve.
Jan 19, 2026 • 50min

Rethinking Enterprise Security in an AI- and Platform-First World with Kane Narraway

In this episode, we sit down with Kane Narraway to unpack how enterprise security is changing as AI, platforms, and developer-driven security become the norm. Kane shares his path from digital forensics to leading security at Canva, and why understanding company culture matters just as much as choosing the right tools.

We discuss why modern security is becoming platform-first, why much of the security vendor market optimizes for finding problems rather than fixing them, and why Kane believes security teams need more engineers and fewer manual processes.

The conversation also digs into AI security, shadow IT (and shadow AI), and the real-world trade-offs between usability and control, especially as low-code and no-code tools become more common inside companies.

Tune in for a deep dive!

Connect with Kane Narraway:
LinkedIn: https://www.linkedin.com/in/kane-n/
Blog: https://kanenarraway.com/

Connect with Anshuman:
LinkedIn: anshumanbhartiya
X: https://x.com/anshuman_bh
Website: https://anshumanbhartiya.com/
Instagram: anshuman.bhartiya

Connect with Sandesh:
LinkedIn: anandsandesh
X: https://x.com/JubbaOnJeans
Dec 15, 2025 • 52min

The Future of Developer Security with Travis McPeak

Travis McPeak, a security leader and entrepreneur, discusses the future of developer security, having led initiatives at major companies like Symantec and Netflix. He emphasizes the role of AI in shifting security 'left' and integrating it seamlessly into developer tools. Travis highlights the challenges of compliance in cloud security and how AI can make threat modeling feasible. He also debates the benefits and risks of AI for developers, particularly emphasizing the importance of ownership in using AI-generated code effectively.
Dec 4, 2025 • 52min

Scaling Product Security In The AI Era with Teja Myneedu

In this episode, we sit down with Teja Myneedu, Sr. Director, Security and Trust at Navan. He shares his philosophy on achieving security at scale, discussing challenges and approaches, especially in the AI era. Teja's career spans over two decades on the front lines of product security at hyper-growth companies like Splunk. He currently operates at the complex intersection of FinTech and corporate travel, where his responsibilities include securing financial transactions and ensuring the physical duty of care for global travelers.

Key Takeaways
• Scaling Security Philosophy: Security programs should be built on developer empathy and innovative solutions, scaling with context and automation.
• Pragmatic Protection: Focus on incremental, practical improvements (like WAF rules) to secure the enterprise immediately, instead of letting the pursuit of perfection delay necessary defenses; security by obscurity is not always bad.
• Flawed Prioritization: Prioritization frameworks are often flawed because they lack organizational and business context, which security tools fail to provide.
• AI and Code Fixes: AI is changing the application security field by reducing the cognitive load on engineers and making it easier for security teams to propose vulnerability fixes (PRs).
• The Authorization Dilemma: The biggest novel threat introduced by LLMs is the complexity of identity and authorization, as agents require delegated access and dynamically determine business logic.

Tune in for a deep dive!

Contacting Teja
* LinkedIn: https://www.linkedin.com/in/myneedu/
* Company Website: https://www.navan.com

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Nov 24, 2025 • 46min

Architecting AI Security: Standards and Agentic Systems with Ken Huang

In this episode, we sit down with Ken Huang, a core architect behind modern AI security standards, to discuss the revolutionary challenges posed by agentic AI systems. Ken, who chairs the OWASP AIVSS project and co-chairs the AI safety working groups at the Cloud Security Alliance, breaks down how security professionals are writing the rulebook for a future driven by autonomous agents.

Key Takeaways
• AIVSS for Non-Deterministic Risk: The OWASP AIVSS project aims to provide a quantitative measure of core agentic AI risks by applying an agentic AI risk factor on top of CVSS, specifically addressing the autonomy and non-deterministic nature of AI agents.
• Need for Task-Scoped IAM: Traditional OAuth and SAML are inadequate for agentic systems because they provide coarse-grained, session-scoped access control. New authentication standards must be task-scoped, dynamically removing access once a specific task is complete, and driven by verifying the agent's intent.
• A2A Security Requires New Protocols: Agent-to-agent communication (A2A) introduces security issues beyond traditional API security (like BOLA). New systems must use protocols for agent capability discovery and negotiation, validated by digital signatures, to ensure the trustworthiness and promised quality of service of interacting agents.
• Goal Manipulation is a Critical Threat: Sophisticated attacks often use context engineering to execute goal manipulation against agents. These attacks include gradually shifting an agent's objective (crescendo attack), using prompt injection to force the agent to expose secrets (malicious goal expansion), and forcing endless processing loops (exhaustion loop / denial of wallet).

Tune in for a deep dive!

Contacting Ken
* LinkedIn: https://www.linkedin.com/in/kenhuang8/
* Company Website: https://distributedapps.ai/
* Substack: https://kenhuangus.substack.com/
* Paper (Agent Capability Negotiation and Binding Protocol): https://arxiv.org/abs/2506.13590
* Book (Securing AI Agents): https://www.amazon.com/Securing-Agents-Foundations-Frameworks-Real-World/dp/3032021294
* AIVSS: https://aivss.owasp.org/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
Oct 1, 2025 • 48min

The Attacker's Perspective on AI Security with Aryaman Behera

In this episode, hosts Sandesh and Anshuman chat with Aryaman Behera, the Co-Founder and CEO of Repello AI. Aryaman shares his unique journey from being a bug bounty hunter and the captain of India's top-ranked CTF team, InfoSec IITR, to becoming the CEO of an AI security startup. The discussion offers a deep dive into the attacker-centric mindset required to secure modern AI applications, which are fundamentally probabilistic and differ greatly from traditional deterministic software. Aryaman explains the technical details behind Repello's platform, which combines automated red teaming (Artemis) with adaptive guardrails (Argus) to create a continuous security feedback loop. The conversation explores the nuanced differences between AI safety and security, the critical role of threat modeling for agentic workflows, and the complex challenges of responsible disclosure for non-deterministic vulnerabilities.

Key Takeaways
• From Hacker to CEO: Aryaman discusses the transition from an attacker's mindset, focused on quick exploits, to a CEO's mindset, which requires patience and long-term relationship building with customers.
• A New Kind of Threat: AI applications introduce a new attack surface built on prompts, knowledge bases, and probabilistic models, which increases the blast radius of potential security breaches compared to traditional software.
• Automated Red Teaming and Defense: Repello's platform consists of two core products: Artemis, an offensive AI red teaming platform that discovers failure modes, and Argus, a defensive guardrail system. The two create a continuous feedback loop in which vulnerabilities found by Artemis are used to calibrate and create policies for Argus.
• Threat Modeling for AI Agents: For complex agentic systems, a black-box approach is often insufficient. Repello uses a gray-box method in which a tool called AgentWiz helps customers generate a threat model based on the agent's workflow and capabilities, without needing access to the source code.
• The Challenge of Non-Deterministic Vulnerabilities: Unlike traditional software vulnerabilities, which are deterministic, AI exploits are probabilistic. An attack like a system prompt leak only needs to succeed once to be effective, even if it fails nine times out of ten.
• The Future of Attacks is Multimodal: Aryaman predicts that as AI applications evolve, major new attack vectors will emerge from new interfaces like voice and image, as their larger latent space offers more opportunities for malicious embeddings.

Tune in for a deep dive!

Contacting Aryaman
* LinkedIn: https://www.linkedin.com/in/aryaman-behera/
* Company Website: https://repello.ai/

Contacting Anshuman
* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
* X: https://x.com/anshuman_bh
* Website: https://anshumanbhartiya.com/
* Instagram: https://www.instagram.com/anshuman.bhartiya

Contacting Sandesh
* LinkedIn: https://www.linkedin.com/in/anandsandesh/
* X: https://x.com/JubbaOnJeans
* Website: https://boringappsec.substack.com/
