Lex Fridman Podcast

#490 – State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI

Feb 1, 2026
Sebastian Raschka, hands-on ML educator and author of practical LLM guides, and Nathan Lambert, post-training lead at AI2 and RLHF specialist, discuss US–China competition, which chatbots excel at coding and long context, open- vs. closed-model tradeoffs, architectural tweaks like MoE, where progress really comes from (systems, data, post-training), RL with verifiable rewards, scaling laws, tool use and agents, and timelines toward AGI.
ANECDOTE

Quick Bash Script Saved A Trip

  • Sebastian recounts unplugging his GPU before a trip and using a fast model to generate a bash script in seconds.
  • The quick answer saved him time and a stressful moment with family waiting in the car.
ADVICE

Use LLMs To Frame, Not Distract

  • Use LLMs as a focused research assistant, but avoid rabbit holes; prompt them for structured context before diving into the wider internet.
  • Treat the LLM as a calm home base for reading rather than another distracting browser tab.
INSIGHT

RLVR Unlocked Stepwise Reasoning

  • RL with verifiable rewards (RLVR) amplifies stepwise reasoning and tool use by grading correctness on verifiable tasks.
  • Nathan Lambert credits RLVR scaling for major capability gains in 2025.
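The core of RLVR as described above is that the reward comes from checking an answer, not from a learned preference model. A minimal sketch of such a verifiable reward, assuming a hypothetical math-style task where the model is prompted to end with an `Answer: <value>` line (that convention and the function name are illustrative, not from the episode):

```python
import re

def verifiable_reward(model_output: str, ground_truth: str) -> float:
    """Grade a completion on a verifiable task: exact-match the final answer.

    Hypothetical convention: the model marks its result as 'Answer: <value>'.
    Reward is binary (1.0 correct, 0.0 otherwise); no learned reward model.
    """
    match = re.search(r"Answer:\s*(.+)", model_output)
    if match is None:
        return 0.0  # no parseable final answer -> no reward
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

# These binary rewards would then weight policy-gradient updates,
# reinforcing chains of thought that end in a correct, checkable answer.
```

The appeal of this setup is that the grader cannot be gamed the way a learned reward model can, which is why it scales to long stepwise reasoning and tool use.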