Amélie Chatelain and Antoine Chaffin from LightOn are leading the way in the next generation of search, powered by Multi-Vector representations and Late Interaction. The podcast begins with what motivates them to work on Multi-Vector Search, then moves into the details: the combination of lexical and semantic search, and pairing bi-encoder speed with cross-encoder accuracy. The discussion then turns to training multi-vector models and how they differ from their single-vector predecessors. From there, the conversation covers particular successes of Late Interaction, such as code, reasoning-intensive, and multimodal retrieval. Agents are great at searching with grep, but they are even better with ColGrep! Reasoning-Intensive Retrieval is a step change in how we think about search systems, beautifully enabled by both Late Interaction models and Agentic Search. Multimodal Search, such as matching text with videos, is also seeing massive benefits from Multi-Vector representations. The podcast then dives into the cost of MaxSim and how efficient methods such as MUVERA and PLAID can help, and concludes with a presentation of their recent work on ColBERT-Zero: pre-training with Late Interaction instead of as a Single-Vector Dense Embedding model. LightOn are also the developers of PyLate, the world's leading open-source library for training these kinds of models.

Chapters
0:00 An Introduction to Multi-Vector Search
6:00 Multi- vs. Single-Vector
8:55 Comparison with Cross-Encoders
15:55 ColGrep for Coding Agents
30:34 Reasoning-Intensive Retrieval
42:02 Multimodal Multi-Vector
48:34 The Cost of Multi-Vector
53:26 MUVERA and PLAID
1:06:18 ColBERT-Zero and PyLate
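
For readers new to Late Interaction, here is a minimal sketch of the MaxSim scoring discussed in the episode, written in Python with NumPy. The function name and the toy data are illustrative, not from the podcast or from PyLate: each query token embedding is matched against its most similar document token embedding, and those per-token maxima are summed into the final score.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction MaxSim score between two multi-vector representations.

    query_vecs: (num_query_tokens, dim) token embeddings for the query
    doc_vecs:   (num_doc_tokens, dim) token embeddings for the document
    """
    # Pairwise dot-product similarities: shape (num_query_tokens, num_doc_tokens)
    sims = query_vecs @ doc_vecs.T
    # For each query token, keep its best-matching document token, then sum
    return float(sims.max(axis=1).sum())

# Toy example: 3 query tokens and 5 document tokens in a 4-dim space
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 4))
d = rng.normal(size=(5, 4))
score = maxsim_score(q, d)
```

This is exactly the operation whose cost grows with the number of token vectors per document, which is what approximations such as MUVERA and engines such as PLAID aim to tame.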