The Agentic Insider

AI Singularities: Are We Already There? - Mark Brady - The Agentic Insider - Episode #14

Jul 10, 2025
Join Mark Brady, Deputy Chief Data Officer at KBR Inc. and former Chief Data Officer of the US Space Force, as he unpacks the dual singularities of AI that could reshape humanity. He explores the critical need for safety objective functions in AI systems and the risks posed by autonomous AIs creating other AIs. The conversation turns to the future of AI and human coexistence, along with AI's applications in defense. Brady also shares insights from his own journey and career tips for aspiring data scientists.
AI Snips
INSIGHT

Asimov's Laws as Safety Functions

  • Asimov's three laws of robotics function as safety objective functions that prevent robots from harming humans.
  • The laws give AI behavior safety-linked objectives rather than unrestricted goals.
ADVICE

Prioritize Safety Objective Functions

  • Embed safety objective functions that override all other objectives in AI systems.
  • Avoid objectives that reward replication or resource maximization, which would put AI in competition with humans.
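The "safety objective that overrides other objectives" idea can be sketched as a lexicographic priority: the agent compares task objectives only among actions that first pass the safety check, so no task score can outbid safety. A minimal illustration (the function names, scores, and threshold here are hypothetical, not from the episode):

```python
# Minimal sketch of a lexicographic "safety first" action selector.
# All names, scores, and the threshold are illustrative assumptions.

def choose_action(actions, safety_score, task_score, safety_threshold=0.99):
    """Pick the best-task action among those that pass the safety check.

    safety_score(a) -> float in [0, 1]; task_score(a) -> float.
    The safety objective strictly dominates: an unsafe action is never
    chosen, no matter how high its task score.
    """
    safe = [a for a in actions if safety_score(a) >= safety_threshold]
    if not safe:
        return None  # refuse to act rather than violate the safety objective
    return max(safe, key=task_score)

# Example: a high-task-value but unsafe action is rejected.
actions = ["comply", "hoard_resources"]
safety = {"comply": 1.0, "hoard_resources": 0.1}.get
task = {"comply": 0.5, "hoard_resources": 10.0}.get
print(choose_action(actions, safety, task))  # -> comply
```

The key design choice is that safety acts as a hard filter rather than a weighted term: a weighted sum could always be outvoted by a large enough task reward, which is exactly the failure mode the advice warns against.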
ADVICE

Prepare for Existential AI Risks

  • Organizations should prepare for AI risks by embedding safety objectives in systems.
  • Focus less on bias concerns and more on existential risks, such as an AI's safety objectives being overridden.