
TechCrunch Industry News EXCLUSIVE: Luma launches creative AI agents powered by its new ‘Unified Intelligence’ models
Mar 5, 2026
A new platform coordinates multiple AI systems to produce end-to-end creative work across text, images, video, and audio. A single multimodal "unified intelligence" model handles audio, visual, language, and spatial reasoning. Use cases include generating campaign variations, localizing assets, and iteratively self-critiquing outputs to refine creative work at scale.
Unified Intelligence Combines Thought And Pixels
- Luma built a single multimodal reasoning model called UniOne that combines language, images, audio, video, and spatial reasoning.
- Amit Jain says UniOne lets the system both "think in language" and "imagine and render in pixels," enabling unified intelligence beyond separate single-modality models.
Replace Multiprompt Workflows With Coordinating Agents
- Use agentic systems to avoid prompting dozens of separate models; Luma Agents coordinate multiple AI models for end-to-end creative work.
- The agents plan, generate across text, image, video, and audio, and interface with models like Ray3, Veo 3, Seedream, and ElevenLabs voice models.
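The coordination pattern described above — a planner that expands a brief into per-modality tasks, then routes each task to a matching model — can be sketched roughly as follows. Luma has not published its agent API; every name, function, and stub model here is a hypothetical stand-in for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Task:
    modality: str   # e.g. "text", "image", "video", "audio"
    prompt: str

def make_stub(model_name: str) -> Callable[[str], str]:
    """Return a stand-in generator that tags its output with a model name.

    In a real system each entry would call a different backend
    (language model, image model, video model, voice model)."""
    return lambda prompt: f"[{model_name}] {prompt}"

# Hypothetical registry mapping each modality to a generator function.
MODEL_REGISTRY: Dict[str, Callable[[str], str]] = {
    "text": make_stub("language-model"),
    "image": make_stub("image-model"),
    "video": make_stub("video-model"),
    "audio": make_stub("voice-model"),
}

def plan(brief: str) -> List[Task]:
    """Toy planner: expand one creative brief into a task per modality."""
    return [Task(m, f"{brief} ({m} asset)") for m in MODEL_REGISTRY]

def run_agent(brief: str) -> Dict[str, str]:
    """Plan, route each task to its model, and collect the assets —
    the user writes one brief instead of prompting each model separately."""
    return {t.modality: MODEL_REGISTRY[t.modality](t.prompt) for t in plan(brief)}

assets = run_agent("Spring campaign hero")
```

The point of the sketch is the single entry point: the caller never prompts the individual models, the agent's planner and registry do the routing.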
Persistent Context And Self Critique Speed Creative Loops
- Luma Agents keep persistent context across assets, collaborators, and iterations so creative threads stay coherent.
- The system can self-critique and iteratively refine outputs, applying the same loop that makes coding agents effective to creative production.
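The generate-critique-revise loop described in the snip above is the same shape coding agents use. A minimal sketch, assuming a scorer and a reviser exist — both functions here are toy stand-ins, not Luma's implementation — with a history list standing in for the persistent context the agents keep across iterations:

```python
from typing import List, Tuple

def generate(prompt: str) -> str:
    """Stand-in generator: produce an initial draft asset."""
    return f"draft for: {prompt}"

def critique(asset: str) -> float:
    """Stand-in scorer in [0, 1]: here a toy heuristic that rewards
    drafts which have been through more revision passes."""
    return min(1.0, asset.count("refined") * 0.4)

def revise(asset: str) -> str:
    """Stand-in reviser: in a real system this would re-generate the
    asset conditioned on the critique."""
    return asset + " [refined]"

def refine(prompt: str, threshold: float = 0.8,
           max_iters: int = 5) -> Tuple[str, List[str]]:
    """Generate, then critique and revise until the score clears the
    threshold or the iteration cap is hit. The history list keeps every
    version, modeling persistent context across iterations."""
    asset = generate(prompt)
    history = [asset]
    for _ in range(max_iters):
        if critique(asset) >= threshold:
            break
        asset = revise(asset)
        history.append(asset)
    return asset, history

final, history = refine("product launch teaser")
```

The cap on iterations matters in practice: without it, a scorer that never clears the threshold would loop forever.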
