AI Security Podcast

TechRiot.io
Oct 23, 2024 • 28min

What is AI Native Security?

In this episode of the AI Cybersecurity Podcast, Caleb and Ashish sat down with Vijay Bolina, Chief Information Security Officer at Google DeepMind, to explore the evolving world of AI security. Vijay shared his unique perspective on the intersection of machine learning and cybersecurity, explaining how organizations like Google DeepMind are building robust, secure AI systems. We dive into critical topics such as AI native security, the privacy risks posed by foundation models, and the complex challenges of protecting sensitive user data in the era of generative AI. Vijay also sheds light on the importance of embedding trust and safety measures directly into AI models, and how enterprises can safeguard their AI systems. Questions asked: (00:00) Introduction (01:39) A bit about Vijay (03:32) DeepMind and Gemini (04:38) Training data for models (06:27) Who can build an AI Foundation Model? (08:14) What is AI Native Security? (12:09) Does the response time change for AI Security? (17:03) What should enterprise security teams be thinking about? (20:54) Shared fate with Cloud Service Providers for AI (25:53) Final Thoughts and Predictions
Sep 6, 2024 • 47min

BlackHat USA 2024 AI Cybersecurity Highlights

What were the key AI Cybersecurity trends at BlackHat USA? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into the key insights from Black Hat 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also features discussions on the rising concerns among CISOs regarding AI platforms and what these mean for security leaders. Questions asked: (00:00) Introduction (02:49) Black Hat, DEF CON and RSA Conference (07:18) Black Hat CISO Summit and CISO Concerns (11:14) Use Cases for AI in Cybersecurity (21:16) Are people tired of AI? (21:40) AI is mostly a side feature (25:06) LLM Firewalls and Access Management (28:16) The data security challenge in AI (29:28) The trend with Deepfakes (35:28) The trend of pentest automation (38:48) The role of an AI Security Engineer
Aug 21, 2024 • 34min

Our insights from Google's AI Misuse Report

The podcast explores alarming findings from Google's report on generative AI misuse, revealing over 200 incidents across healthcare and education. Hosts discuss the rise of deepfakes and AI-driven impersonation, stressing their ease of access and ethical dilemmas. The conversation also highlights the impact of misleading metrics in content creation and touches on the challenges of distinguishing between human and AI-generated content. Lastly, they emphasize the need for legal frameworks as AI technology evolves and shapes public opinion.
Aug 2, 2024 • 1h 11min

AI Code Generation - Security Risks and Opportunities

Guy Podjarny, Founder and CEO of Tessl, dives into the intriguing world of AI-generated code. He discusses its reliability compared to human coding, raising critical questions about trust. Security risks associated with AI code are highlighted, stressing the importance of human oversight and proactive measures. Guy also touches on the changing landscape of AI in software development, the need for automated security testing, and the evolving role of cybersecurity professionals. His insights offer a thought-provoking look at AI's impact on coding and security.
Jul 11, 2024 • 45min

Exploring Top AI Security Frameworks

The podcast explores various AI security frameworks, including those from Databricks, NIST, and the OWASP Top 10, comparing their key components and practical implementation strategies. It discusses the challenges of selecting the right framework, AI risk management, and the importance of governance and collaboration. The episode also touches on using ChatGPT for document analysis, Google AI Studio, and the progression of AI proficiency.
Jun 17, 2024 • 45min

Practical Applications and Future Predictions for AI Security in 2024

What is the current state and future potential of AI Security? This special episode was recorded LIVE at BSidesSF (that's why it's a little noisy), as we were amongst all the exciting action. Clint Gibler, Caleb Sima and Ashish Rajan sat down to talk about practical uses of AI today, how AI will transform security operations, whether AI can be trusted to manage permissions, and the importance of understanding AI's limitations and strengths. Questions asked: (00:00) Introduction (02:24) A bit about Clint Gibler (03:10) What's top of mind with AI Security? (04:13) tldr of Clint's BSidesSF Talk (08:33) AI Summarisation of Technical Content (09:47) Clint's favourite part of the talk - Fuzzing (15:30) Questions Clint got about his talk (17:11) Human oversight and AI (25:04) Perfection getting in the way of good (30:15) AI on the engineering side (36:31) Predictions for AI Security Resources from this conversation: Caleb's Keynote at BSides SF Clint's Newsletter
May 22, 2024 • 44min

AI Highlights from RSAC 2024 and BSides SF 2024

Key AI Security takeaways from RSA Conference 2024, BSides SF 2024 and all the fringe activities that happen in SF during that week. Caleb and Ashish were speakers and panelists, participating in several events during that week, and this episode captures the highlights from all the conversations they had and the trends they saw during what they dubbed the "Cybersecurity Fringe Festival" in SF. Questions asked: (00:00) Introduction (02:53) Caleb's Keynote at BSides SF (05:14) Clint Gibler's BSides SF Talk (06:28) What are BSides Conferences? (13:55) Cybersecurity Fringe Festival (17:47) RSAC 2024 was busy (19:05) AI Security at RSAC 2024 (23:03) RSAC Innovation Sandbox (27:41) CSA AI Summit (28:43) Interesting AI Talks at RSAC (30:35) AI conversations at RSAC (32:32) AI Native Security (33:02) Data Leakage in AI Security (30:35) Is AI Security all that different? (39:26) How to filter vendors selling AI Solutions?
Apr 12, 2024 • 45min

How can AI be used in Cybersecurity Operations?

Ely Kahn, VP of Product at SentinelOne, discusses the impact of generative AI on cybersecurity, simplifying processes and empowering analysts. Topics include concerns with AI models, comparison to analysts without AI, preventing models from going into autopilot, and the use of multiple LLMs.
Apr 4, 2024 • 54min

The Evolution of Pentesting with AI

How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox, about how AI is being implemented in the world of offensive security and what the right way is to threat model an LLM. Questions asked: (00:00) Introductions (02:12) A bit about Rob Ragan (03:33) AI in Security Assessment and Pentesting (09:15) How is AI impacting pentesting? (14:50) Where to start with AI implementation in offensive Security? (18:19) AI and Static Code Analysis (21:57) Key components of LLM pentesting (24:37) Testing what's inside a functional model? (29:37) What's the right way to threat model an LLM? (33:52) Current State of Security Frameworks for LLMs (43:04) Is AI changing how Red Teamers operate? (44:46) A bit about Claude 3 (52:23) Where can you connect with Rob Resources spoken about in this episode: https://www.pentestmuse.ai/ https://github.com/AbstractEngine/pentest-muse-cli https://docs.garak.ai/garak/ https://github.com/Azure/PyRIT https://bishopfox.github.io/llm-testing-findings/ https://www.microsoft.com/en-us/research/project/autogen/
Mar 18, 2024 • 52min

AI's role in Security Operation Automation

What is the current reality for AI automation in Cybersecurity? Caleb and Ashish spoke to Edward Wu, founder and CEO of Dropzone AI, about the current capabilities and limitations of AI technologies, particularly large language models (LLMs), in the cybersecurity domain. From the challenges of achieving true automation to the nuanced process of training AI systems for cyber defense, Edward, Caleb and Ashish shared their insights into the complexities of implementing AI, the importance of precision in AI prompt engineering, the critical role of reference data in AI performance, and how cybersecurity professionals can leverage AI to amplify their defense capabilities without expanding their teams. Questions asked: (00:00) Introduction (05:22) A bit about Edward Wu (08:31) What is a LLM? (11:36) Why have we not seen enterprise-ready automation in cybersecurity? (14:37) Distilling the AI noise in the vendor landscape (18:02) Solving challenges with using AI in enterprise internally (21:35) How to deal with GenAI Hallucinations? (27:03) Protecting customer data from a RAG perspective (29:12) Protecting your own data from being used to train models (34:47) What skillset is required in a team to build your own cybersecurity LLMs? (38:50) Learn how to prompt engineer effectively