
The 80,000 Hours Podcast on Artificial Intelligence (September 2023) Six: Richard Ngo on large language models, OpenAI, and striving to make the future go well
Sep 2, 2023
This episode explores how well we understand large language models like GPT-3 and ChatGPT, and the potential risks they pose. Richard Ngo of OpenAI discusses AI governance, concerns surrounding these models, and the challenges of predicting AI behavior. The conversation also covers the development of general AI, situational awareness in AI systems, and the need to study and modify goal formation in neural networks. It concludes with discussions of the challenges of understanding AI behaviors, exploring utopia and the role of technology, and alternative-history thought experiments.
AI Snips
Start With Hands-On ML Work To Find Key Questions
- Pursue hands-on empirical work early: start by building and probing models to discover important open questions before committing to narrow theoretical paths.
- Ngo recommends following comparative advantage and getting obsessed with practical experimentation first.
Training Compute vs Runtime Compute Distinction
- Training compute differs from runtime compute: we can already run human-scale compute, but training models of equivalent capability likely requires far more compute and data (see the illustrative sketch below).
- Ngo cites reports by Joe Carlsmith and Ajeya Cotra suggesting that training human-equivalent models may be plausible this decade given current trends.
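To make the training-versus-runtime distinction concrete, here is a small illustrative sketch. The figures are made-up examples, not numbers from the episode or the cited reports; it uses the common rough scaling rules of about 6 × parameters × training tokens FLOPs to train a transformer and about 2 × parameters FLOPs per token to run it.

```python
# Purely illustrative: contrast the one-off compute needed to TRAIN a large
# model with the compute needed to RUN it on a single query.
# All parameter and token counts below are hypothetical examples.

params = 1e12          # hypothetical model size: 1 trillion parameters
train_tokens = 1e13    # hypothetical training corpus: 10 trillion tokens
query_tokens = 1e3     # a single query of ~1,000 tokens

training_flops = 6 * params * train_tokens    # rough one-off training cost
inference_flops = 2 * params * query_tokens   # rough per-query runtime cost

print(f"Training:  ~{training_flops:.1e} FLOPs (paid once)")
print(f"Inference: ~{inference_flops:.1e} FLOPs (paid per query)")
print(f"Ratio:     ~{training_flops / inference_flops:.1e} queries' worth of compute")
```

With these toy numbers the training run costs roughly ten billion queries' worth of compute, which is the gap the snip is pointing at: hardware that can comfortably run a model is far from sufficient to train it.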
Training Is Expensive but Inference Is Cheap
- Training a cutting-edge large language model costs millions; inference per query costs cents, so one training run can enable thousands of deployed instances.
- Ngo gives a ballpark of $1M–$10M to train versus on the order of a cent or less per use, enabling mass deployment once a model is trained (see the back-of-the-envelope sketch below).
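A quick back-of-the-envelope sketch of the same point. The dollar figures are the ballparks quoted above plus an assumed query volume, not precise numbers from the episode:

```python
# Rough amortization sketch: a one-off training bill in the millions versus
# cents per query means training cost is recouped quickly at scale.
# Figures are illustrative assumptions, not data from the episode.

training_cost = 5e6        # assume ~$5M, midpoint of the $1M-$10M ballpark
cost_per_query = 0.01      # assume ~1 cent of inference cost per query
queries_per_day = 10e6     # hypothetical: 10 million queries/day once deployed

days_to_match_training = training_cost / (cost_per_query * queries_per_day)
print(f"Serving cost equals the training bill after ~{days_to_match_training:.0f} days")

# Equivalently: one training run "buys" an enormous number of cheap uses.
print(f"Training cost ≈ {training_cost / cost_per_query:,.0f} queries' worth of inference")
```

Under these assumptions the training bill equals only about 50 days of serving costs, which is why a single expensive training run can support thousands of cheap deployed instances.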
