
What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado

The a16z Show


In-context learning as posterior updating

Vishal Misra describes how the examples in a prompt shift the model's token probabilities toward the appropriate DSL, so in-context learning behaves like Bayesian posterior updating.
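A toy sketch of that idea (my own illustration, not Misra's formalism): treat each candidate "task" or DSL as a hypothesis, and let each in-context example multiply in a likelihood term, so the posterior concentrates on the hypothesis that best explains the examples. The task names and likelihood values below are hypothetical.

```python
# Hypothetical candidate tasks, each assigning a likelihood to an (input, output) example.
likelihood = {
    "uppercase": lambda x, y: 0.9 if y == x.upper() else 0.05,
    "reverse":   lambda x, y: 0.9 if y == x[::-1] else 0.05,
    "identity":  lambda x, y: 0.9 if y == x else 0.05,
}

def posterior(examples, prior=None):
    """Update P(task | examples) one in-context example at a time."""
    # Start from a uniform prior over tasks unless one is given.
    p = dict(prior or {t: 1 / len(likelihood) for t in likelihood})
    for x, y in examples:
        # Bayes rule: posterior is proportional to prior times likelihood.
        p = {t: p[t] * likelihood[t](x, y) for t in p}
        z = sum(p.values())
        p = {t: v / z for t, v in p.items()}
    return p

# Two examples consistent only with the "uppercase" task:
post = posterior([("abc", "ABC"), ("dog", "DOG")])
print(max(post, key=post.get))  # the posterior concentrates on "uppercase"
```

After just two consistent examples the posterior mass collapses onto one hypothesis, mirroring how a few prompt examples sharply shift a model's next-token distribution.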

(Segment begins at 08:33.)
