CyberWire Daily

The internet joins the war.

Mar 5, 2026
Daniel Barbu, Director of EMEA Security at Adobe, talks about making AI security human-centered and collaboration-driven. He describes building a Security AI Guild, cultural shifts needed for AI adoption, and practical, people-first steps for trustworthy systems. The conversation highlights principles like shared ownership, transparency, and human-in-the-loop design.
INSIGHT

AI Amplifies Existing Security Data Problems

  • AI accelerates insight but amplifies existing data and process problems in security workflows.
  • Daniel Barbu explains that poor input data and noisy alerts degrade AI outputs, increasing both false confidence and avoidance.
ANECDOTE

How Adobe Built A Security AI Guild

  • Adobe formed a Security AI Guild as a cross-team execution engine rather than a think tank or recurring meeting.
  • The guild operates on three principles, outcomes first, clear ownership, and shared learning, to push projects into production.
INSIGHT

Trustworthy AI Is A Social Design Problem

  • Trustworthy AI is built socially by cross-functional collaboration, not solely by engineers.
  • Barbu stresses shared responsibility across security, product, and data science, along with human-in-the-loop design and guardrails.