Better Offline

Hater Season: Openclaw with David Gerard

102 snips
Feb 4, 2026
David Gerard, technology critic behind Pivot to AI, offers skeptical takes on OpenClaw/MaltBot and AI hype. He breaks down prompt-injection risks, bot social networks that fake agency, and how agent projects mirror crypto grifts. The conversation ties AI enthusiasm to venture capital bubbles and real-world security and market consequences.
Ask episode
AI Snips
Chapters
Transcript
Episode notes
INSIGHT

Agents Are Fragile Integrations

  • OpenClaw/MoltBot is a framework that hooks local code to Anthropic's API to act as a personal assistant but it fundamentally misuses chatbots' capabilities.
  • David Gerard argues this design is insecure, unreliable, and repeatedly fails due to hallucinations and prompt-injection risks.
ADVICE

Never Let Untrusted Data Drive AI Actions

  • Avoid feeding untrusted data into chatbots because they can't distinguish instructions from data and are vulnerable to prompt injection.
  • Do not grant AI agents access to sensitive systems like email, socials, or API keys without strict separation and validation.
ANECDOTE

MoltBook's Public API Key Leak

  • Matt Schlicht's MoltBook exposed massive security holes and leaked API keys after a researcher reported them.
  • Instead of fixing the code, Schlicht forwarded the report to an AI, demonstrating dangerous ignorance in handling security.
Get the Snipd Podcast app to discover more snips from this episode
Get the app