Ethical Machines: How Do You Control Unpredictable AI?
Jul 10, 2025

Walter Haydock, a former national security policy advisor and founder of StackAware, dives into the complexities of unpredictable AI. He discusses the dual nature of large language models, capable of both creativity and chaos. Haydock emphasizes the critical need for structured risk assessments to navigate the pitfalls of integrating agentic AI into organizations. He highlights dangers like data poisoning and calls for stricter testing and monitoring to ensure responsible AI deployment while fostering innovation.
Episode notes
Prioritize People and Processes
- Focus on people and process before tools when managing cybersecurity and AI risk.
- Many risk issues stem from insufficient training and weak processes, not just technology gaps.
Limits of Current AI Risk Tools
- Current AI risk assessment tools cannot map an entire risk surface or catch contextual risks in real time.
- Having AI models assess themselves risks missing unknown failure modes.
Control Agentic AI Access and Actions
- Never entrust AI systems with authorization; restrict data access deterministically.
- Use human review for AI outputs, sample them, and limit resource use to mitigate integrity and availability risks.
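The controls above can be sketched in code. This is a minimal illustration, not Haydock's implementation: the resource names, sample rate, and call cap are all assumptions invented for the example. The key ideas from the episode are that access decisions stay deterministic (the model never grants itself authorization) and that a fraction of outputs is routed to human review.

```python
import random

# Hypothetical policy values -- assumptions for illustration only.
ALLOWED_RESOURCES = {"public_docs", "product_faq"}  # deterministic allowlist
REVIEW_SAMPLE_RATE = 0.1   # fraction of AI outputs sent to human review
MAX_TOOL_CALLS = 5         # resource cap per request, limits availability risk

def authorize(resource: str) -> bool:
    """Deterministic access check: the AI system never decides this itself."""
    return resource in ALLOWED_RESOURCES

def route_output(output: str, rng: random.Random) -> str:
    """Sample a fraction of outputs into a human-review queue."""
    return "human_review" if rng.random() < REVIEW_SAMPLE_RATE else "auto_release"
```

In this sketch, `authorize` is a plain set lookup the agent cannot influence, and `route_output` implements the sampling-based human review the episode recommends; a production system would also log every decision for monitoring.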