The Attention Mechanism with Andrew Mayne

The Cursor Issue

Mar 24, 2026
Andrew Mayne, former OpenAI science communicator and AI practitioner, breaks down the Cursor controversy and why undisclosed base models raise transparency and safety questions. He explores IDE telemetry, multi-model flexibility vs. integrated lab tooling, and the pros and cons of a ChatGPT superapp. He also introduces RentAHuman.AI and how agent-human bounties could change task execution.
INSIGHT

Cursor Friction Over Undisclosed Base Model

  • Cursor presented Composer 2 as its own model, but built it by fine-tuning a Chinese base model (Moonshot's Kimi), raising transparency questions.
  • Andrew Mayne explains the issue: Composer 2 outperforms its base, yet the base (Kimi) wasn't credited, causing community friction over disclosure norms and teacher-student training.
INSIGHT

Teacher Models Speed Model Development

  • Using API outputs from frontier models as training data is a faster route to scale, because a 'teacher' model supplies cleaned tokens for a 'student' model.
  • Mayne contrasts this with Anthropic's approach of training from raw text, and worries that labs are scraping large volumes of tokens from Claude and OpenAI models to bootstrap their own.
INSIGHT

IDEs Become Unofficial Model Researchers

  • Third-party IDEs like Cursor collect telemetry and user queries that can reveal model behaviors, giving them operational knowledge rivaling that of frontier labs.
  • Mayne warns that telemetry combined with access to many models can make IDE makers unusually well informed about how Anthropic and OpenAI models perform.