Limitless: An AI Podcast

Kimi K2.5: The Best New Model is Open-Source (Again!)

Jan 29, 2026
A new open-source multimodal model can turn screen recordings into full websites in minutes. The hosts cover how the model runs up to 100 sub-agents in parallel and uses an orchestrator with thousands of tools. The conversation highlights cost cuts to pennies per million tokens, video-to-code and creative 3D-blueprint use cases, and why open weights are shifting developer adoption.
ANECDOTE

From Screen Recording To Live Website

  • Josh fed Kimi K2.5 a screen recording of Anthropic's website and got a full replica published in about 25 minutes.
  • The demo produced a working website without additional prompts, showcasing its video-to-code capability (see the sketch after this list).
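The video-to-code flow described above can be pictured as a single multimodal API call. The sketch below assumes an OpenAI-compatible endpoint that accepts an inline base64 video part; the base URL, model identifier, and "video_url" content type are illustrative assumptions, not confirmed Kimi K2.5 API details.

```python
# Hypothetical sketch: sending a screen recording to a multimodal model
# via an OpenAI-compatible chat API. Endpoint URL, model name, and the
# "video_url" content type are assumptions, not documented K2.5 behavior.
import base64
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

# Encode the screen recording so it can be sent inline as a data URL.
with open("site_recording.mp4", "rb") as f:
    video_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="kimi-k2.5",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "video_url",
             "video_url": {"url": f"data:video/mp4;base64,{video_b64}"}},
            {"type": "text",
             "text": "Replicate the website shown in this recording as a "
                     "single-page HTML/CSS/JS project."},
        ],
    }],
)

# The reply would contain the generated site code.
print(response.choices[0].message.content)
```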
INSIGHT

Multimodal Training Unlocks Video Understanding

  • Kimi K2.5 was trained on about 15 trillion tokens spanning text, audio, and visual data.
  • Multimodal training lets it understand videos and convert visual input directly into functional outputs like code.
INSIGHT

Parallel Sub-Agents Speed Up Workflows

  • Kimi K2.5 can spawn up to 100 sub-agents that run tasks in parallel.
  • Parallel sub-agents cut execution time significantly, e.g., a 4.5x speedup on complex tasks (the fan-out pattern is sketched below).
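At its core, the orchestrator/sub-agent design is a bounded-concurrency fan-out. The minimal Python sketch below uses a hypothetical `run_subagent` worker to show the concurrency structure only; it is not Kimi K2.5's actual internals.

```python
# Minimal sketch of the parallel sub-agent pattern described in the episode:
# an orchestrator splits a task into subtasks and fans them out to concurrent
# sub-agent calls, capped at a concurrency limit. `run_subagent` and the
# task decomposition are hypothetical stand-ins.
import asyncio

async def run_subagent(subtask: str) -> str:
    """Stand-in for one sub-agent working on a subtask (e.g., a tool call)."""
    await asyncio.sleep(1)  # simulate model/tool latency
    return f"result for: {subtask}"

async def orchestrate(subtasks: list[str], limit: int = 100) -> list[str]:
    """Fan out up to `limit` sub-agents in parallel and gather results."""
    semaphore = asyncio.Semaphore(limit)  # cap concurrent sub-agents

    async def bounded(st: str) -> str:
        async with semaphore:
            return await run_subagent(st)

    return await asyncio.gather(*(bounded(st) for st in subtasks))

if __name__ == "__main__":
    subtasks = [f"research section {i}" for i in range(10)]
    # Ten one-second subtasks finish in ~1s instead of ~10s when run
    # in parallel, which is the kind of wall-clock gain the episode cites.
    print(asyncio.run(orchestrate(subtasks)))
```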