
World Models Are Here—But It’s Still the GPT-2 Phase
The Data Exchange with Ben Lorica
Frontier scaling and inference efficiency
Jeff discusses how world models can leverage advances in LLM infrastructure, and the path to making world-model inference more efficient and scalable.


