The AI Security Podcast

Harriet Farlow (HarrietHacks)
Mar 22, 2026 • 26min

How to get hired in AI security

If you’re trying to break into AI security, it can feel confusing — do you need to be a machine learning expert, a cybersecurity professional, or both? In this episode, we break down practical tips for getting hired in AI security, from the skills that actually matter to the types of projects and experience that can help you stand out. We discuss how to build relevant expertise in areas like adversarial machine learning, AI risk, and model security, as well as how to position yourself for roles in startups, research labs, and large tech companies. Whether you’re coming from a cybersecurity, data science, or general tech background, this episode will give you actionable advice on how to start building a career in one of the fastest-growing areas of technology. 🚀
Jan 25, 2026 • 10min

getting talks accepted into conferences! tips and tricks

Want to give a great conference talk (and not bore everyone to death)? In this episode, I share practical tips for giving a strong conference talk — from structuring your idea to actually delivering it on stage. #PublicSpeaking #Conferences #CFP #TechTalks #Cybersecurity #AI
Jan 18, 2026 • 37min

Do we need to secure model weights?

In this episode, we dig into model weight security — what it means, why it’s emerging as a critical issue in AI security, and whether the framing in the recent RAND report on securing AI model weights actually helps defenders and policymakers.

We discuss the RAND report Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models — exploring its core findings, including how model weights (the learnable parameters that encode what a model “knows”) are becoming high-value targets, and the kinds of attack vectors that threat actors might use to steal or misuse them.

#ai #aisecurity #cybersecurity

👉 Read the full RAND report here: https://www.rand.org/pubs/research_reports/RRA2849-1.html
Jan 11, 2026 • 28min

Model Context Protocol and Agent 2 Agent 🤖🕵️

In this episode, we dig into Model Context Protocol (MCP) and agent-to-agent (A2A) communication — what they are, why they matter, and where the real risks start to emerge.

We cover:
- What MCP actually enables beyond “tool calling”
- How A2A changes the threat model for AI systems
- Where trust boundaries break down when agents talk to each other
- Why existing security assumptions don’t hold in agentic systems
- What practitioners should be thinking about now (before this ships everywhere)

This one’s for anyone working on AI systems, security, or governance who wants to understand what’s coming before it becomes a headline incident.

As always: curious to hear your takes — especially where you think the biggest risks (or overblown fears) really are.
Jan 4, 2026 • 33min

Agentic AI Security | case studies by Microsoft, OWASP

As promised, I’m back with Tania for a deep dive into the wild world of agentic AI security — how modern AI agents break, misbehave, or get exploited, and what real case studies are teaching us. We’re unpacking insights from the Taxonomy of Failure Modes in Agentic AI Systems, the core paper behind today’s discussion, and exploring what these failures look like in practice.

We also break down three great resources shaping the conversation right now:
- Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — a super clear breakdown of how agent failures emerge across planning, decision-making, and action loops: https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Taxonomy-of-Failure-Mode-in-Agentic-AI-Systems-Whitepaper.pdf
- OWASP’s Agentic AI Threats & Mitigations — a practical, security-team-friendly guide to common attack paths and how to defend against them: https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/
- Unit 42’s Agentic AI Threats report — real-world examples of adversarial prompting, privilege escalation, and chain-of-trust issues showing up in deployed systems: https://unit42.paloaltonetworks.com/agentic-ai-threats/

Join us as we translate the research, sift through what’s real vs. hype, and talk about what teams should be preparing for next 🚨🛡️
Dec 23, 2025 • 4min

a hacky christmas message

A quick end-of-year message to say thanks. Thanks for being part of the channel this year — whether you’ve been watching quietly, sharing, or arguing with me in the comments. I really appreciate it.

I hope you have a good Christmas and holiday period, whatever that looks like for you. Take a break if you can. See you in 2026.
Dec 21, 2025 • 13min

Three Black Hat talks at just 18! My interview with Bandana Kaur.

In this episode, I’m joined by Bandana Kaur — a cybersecurity researcher, speaker, and all-round superstar who somehow managed to do in her teens what most people are still figuring out in their thirties. 🤔

Bandana is just 18 years old, entirely self-taught in cybersecurity, already working in the field, and recently gave three talks at Black Hat. Yes, three! 😱

We talk about how she taught herself cybersecurity as a teenager, how she broke into the industry without a traditional pathway, and what it’s actually like being young (and very competent) in a field that still struggles with gatekeeping. Bandana shares what she focused on while learning, how she approached opportunities like conference speaking, and what she thinks matters most for people trying to get into security today.

This conversation is part career advice, and part reminder that you don’t need permission — or a perfectly linear path — to do meaningful work in cybersecurity.

Follow Bandana: @hackwithher
Dec 14, 2025 • 31min

Effective Altruism and AI with Good Ancestors CEO Greg Sadler | part 2

Remember that time I invited myself over to Greg's place with my camera? This is part 2 from that great conversation. I'm curious: have you heard much about EA? It's really big in the AI world, but I'm conscious that a lot of people outside the bubble haven't heard of it. Let me know in the comments!

Check out Greg's work here: https://www.goodancestors.org.au/
MIT AI Risk Repository: https://airisk.mit.edu/
The Life You Can Save (book): https://www.thelifeyoucansave.org/book/
80,000 Hours: https://80000hours.org/
Learn more about AI capability and impacts: https://bluedot.org/
Dec 7, 2025 • 28min

AI Safety with CEO of Good Ancestors Greg Sadler | part 1

This week I invited myself over to Greg Sadler's place — he's the CEO of Good Ancestors — to talk about AI safety. I brought sushi, but since I hadn't had lunch I ate most of it myself, and then I almost made him late for his next meeting. We chat through AI capabilities, his work in policy, and building a non-profit. Greg is the kind of person who is so smart and cool that I feel like an absolute dummy interviewing him, so I know you're all going to like this episode. Stay tuned for part 2, where we dive into effective altruism and its intersection with AI!

Check out Greg's work here: https://www.goodancestors.org.au/
MIT AI Risk Repository: https://airisk.mit.edu/
The Life You Can Save (book): https://www.thelifeyoucansave.org/book/
80,000 Hours: https://80000hours.org/
Learn more about AI capability and impacts: https://bluedot.org/
Nov 24, 2025 • 30min

The United States AI Action Plan | will they win the AI race against China? 🤔

Hi! 👋 In this episode, we’re diving into the US AI Action Plan — the White House’s new roadmap for how America plans to lead in AI… and beat China.

We’ll look at what’s inside the plan, what it really means for AI security and regulation, and whether it’s more of a policy promise… or a political one.

📄 You can read the full plan here: https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

Let me know what you think — is this the kind of leadership AI needs, or a dangerous framing of AI capability?
