
Tech Talks Daily 3524: Trust, Verification, and Ownership in the Age of AI, with eSentire's Alexander Feick
Dec 19, 2025
Join Alexander Feick, a cybersecurity expert and author of Trust and AI, as he explores the challenges organizations face when integrating AI into their systems. He highlights how many AI failures stem from broken ownership and invisible dependencies rather than technical glitches. Drawing a distinction between chatbot-style and embedded AI, he emphasizes the need for continuous trust measurement and verification. Alexander also discusses the rapid evolution of AI, sharing why he made his book freely available to address these urgent issues.
Generative AI Is Non‑Deterministic
- Generative AI is fundamentally different from deterministic software and can't be trusted in the same way.
- Alexander Feick warns that pilot success doesn't guarantee stable production performance, due to model drift and hidden changes.
Red Teaming Revealed Prompt Injection Risks
- When ChatGPT arrived, Feick's newly formed eSentire Labs dove into generative AI experiments.
- Early red teaming revealed prompt injection and model poisoning risks that motivated his book.
Overtrust From Old Software Assumptions
- Organizations wrongly assume software remains deterministic and reliable after AI is added.
- That misplaced assumption creates blind spots, such as overtrusting successful pilots and neglecting ongoing verification.
