

Jakub Pachocki
Chief Scientist at OpenAI and researcher focused on model capabilities, reinforcement learning, and AI alignment, leading work on reasoning, coding agents, and AI for science.
Top 3 podcast episodes with Jakub Pachocki
Ranked by the Snipd community

908 snips
Sep 25, 2025 • 52min
From Vibe Coding to Vibe Researching: OpenAI’s Mark Chen and Jakub Pachocki
OpenAI's Jakub Pachocki, Chief Scientist focused on reasoning and long-horizon systems, and Mark Chen, Chief Research Officer overseeing core research, dive into the future of AI. They unveil the ambitious roadmap for GPT-5, aiming to enhance reasoning and agentic behavior, and explore GPT-5's surprising capabilities in math and science. They candidly discuss evolving evaluation methods, the significance of reinforcement learning, and the quest for an automated researcher. Hiring talent, balancing research with product goals, and resource allocation are also on their agenda.

284 snips
Aug 15, 2025 • 40min
Episode 5 - Defining AGI and the road ahead
Jakub Pachocki, Chief Scientist at OpenAI, and Szymon Sidor, a researcher at OpenAI, discuss artificial general intelligence (AGI). They explore the potential of AI to automate scientific discovery and the role of math competitions in shaping AI capabilities. With insights into reasoning breakthroughs, the duo assesses how close we are to achieving AGI and shares their journey from high school competitors to AI leaders. Their conversation also stresses the importance of mentorship and trust in an evolving AI landscape.

159 snips
Apr 9, 2026 • 59min
Ep 84: OpenAI’s Chief Scientist on Continual Learning Hype, RL Beyond Code, & Future Alignment Directions
Jakub Pachocki, OpenAI Chief Scientist focused on model capabilities, RL, and alignment, discusses the rise of coding agents and autonomous research tools. He covers math and physics as benchmarks, extending reinforcement learning to long-horizon tasks, chain-of-thought monitoring for alignment, and the societal risks of highly automated AI research organizations.


