ChinaTalk

How Can the Pentagon Trust AI?

Aug 1, 2023
Jane Pinelis, Chief AI Engineer at The Johns Hopkins University Applied Physics Laboratory, and Karson Elmgren, a researcher at CSET, dive into the complexities of AI integration in the military. They discuss the urgent need for AI assurance in defense, contrasting agile methodologies used in recent conflicts with bureaucratic challenges. The conversation also explores transparency and trust in AI systems, emphasizing the importance of human-AI collaboration and risk assessment as critical factors in military operations and decision-making.
INSIGHT

AI Regulation: Commercial vs. DoD

  • Commercial AI lacks regulation, exemplified by early ChatGPT's lack of safety constraints.
  • The DoD, by contrast, operates under regulations such as DoD Directive 3000.09 governing autonomous weapon systems.
ADVICE

Traceability over Transparency

  • For warfighters, traceability matters more than full transparency or explainability in AI systems.
  • What is crucial is that relevant personnel can access proper documentation, not that every user understands every piece of software.
INSIGHT

Contextual Risk Assessment

  • AI assurance helps quantify and qualify risks associated with AI technologies.
  • The acceptable risk level is determined by context, situation, and operational needs, not by a fixed performance threshold.