Security Weekly Podcast Network (Audio)
Conducting Secure Code Analysis with LLMs - ASW #370
Feb 17, 2026
John Kinsella is a security pro who blends technical and philosophical appsec views; Adrian Sanabria is an AppSec practitioner and open source maintainer focused on practical tooling. They debate LLMs finding code flaws, noisy AI reports versus curated workflows, validating AI findings, the cost of human-in-the-loop verification, open source maintenance pressure, and practical CI and economic tradeoffs.
AI Snips
LLMs Change Workload, Not The Question
- LLMs increase the volume of code and bug reports, but the underlying AppSec question — what security flaws exist — remains the same.
- Adrian Sanabria and Mike Shema note that LLMs shift the problem toward triage and maintainership rather than eliminating it.
Curl Dropped Bounties Due To Noisy AI Reports
- Daniel Stenberg ended paid bug bounties for curl because of noisy, low-quality submissions amplified by AI.
- He still found value in curated AI outputs that revealed protocol mismatches and edge-case omissions.
Filter Reports To Preserve Trust
- Protect maintainers by filtering low-quality automated reports, using reputation systems or CAPTCHAs for submissions.
- Prioritize reports that include a clear reproduction or a proposed patch to limit noise; a rough triage-scoring sketch follows below.
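To make the prioritization idea concrete, here is a minimal sketch of a report-triage score, assuming a hypothetical incoming-report shape (a reporter reputation count, a free-text body, and an optional attached patch). This is not curl's actual process or any specific bug bounty platform's API; it only illustrates the heuristic the snip describes: reward reproductions and patches, penalize vague automated noise.

```python
from dataclasses import dataclass


@dataclass
class Report:
    reporter_reputation: int  # e.g. count of prior accepted reports (assumed field)
    body: str                 # free-text description from the submitter
    has_patch: bool           # a proposed fix is attached


def triage_score(report: Report) -> int:
    """Higher score means review sooner; low scores sit behind a human gate."""
    score = 0

    # Reputation gate: unknown reporters start behind known-good ones.
    score += min(report.reporter_reputation, 5)

    # Reports that arrive with a proposed patch jump the queue.
    if report.has_patch:
        score += 5

    # A clear reproduction (steps, PoC, crash output) is the next best signal.
    repro_markers = ("steps to reproduce", "poc", "proof of concept", "backtrace")
    if any(marker in report.body.lower() for marker in repro_markers):
        score += 3

    # Very short, vague bodies are a common mark of automated noise.
    if len(report.body.split()) < 50:
        score -= 3

    return score


if __name__ == "__main__":
    noisy = Report(reporter_reputation=0,
                   body="possible overflow in http code",
                   has_patch=False)
    solid = Report(reporter_reputation=2,
                   body="Steps to reproduce: send a crafted header ... backtrace attached",
                   has_patch=True)
    print(triage_score(noisy), triage_score(solid))  # noisy sorts well below solid
```

The exact weights and markers are placeholders; the point is that a cheap, transparent scoring pass lets maintainers spend human attention on submissions that already carry evidence, which is the trust-preserving tradeoff discussed in the episode.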

