The Times Tech Podcast

AI safety meets OpenClaw – what India’s AI summit tells us

Feb 20, 2026
Karina Prunkle, a researcher at France's INRIA and an Oxford affiliate, led the International AI Safety Report. She discusses India's summit spotlight, OpenClaw's viral agent and its acquisition, agentic-AI risks such as data access and liability, and the gap between fast-moving technology and slower-moving policy. The conversation covers safeguards, labour shifts and whether global summits can steer AI safely.
ANECDOTE

Viral OpenClaw Origin Story

  • Peter Steinberger built a viral personal agent called OpenClaw that millions used in weeks.
  • OpenAI acquired the project and pledged that it will remain open source under a foundation.
INSIGHT

Agents Moving From Theory To Reality

  • OpenClaw demonstrates agents running on personal machines and acting as persistent digital assistants.
  • That shift makes agentic AI feel real and forces labs to rethink deployment and safety.
INSIGHT

Liability Looms Over Personal Agents

  • Agent deployment raises liability questions akin to owning a dangerous dog: who's responsible for harms?
  • Companies will likely push liability toward individual users rather than accept risk themselves.