
The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton
Aug 28, 2024

Arvind Narayanan, a Princeton professor and co-author of "AI Snake Oil," challenges the myth that simply adding more compute equates to better AI performance. He emphasizes that data quality, not just volume, is crucial for advancements in AI. The conversation dives into the future of AI models, debating whether we'll have a few large dominant models or many specialized ones. Narayanan critiques current generative AI pitfalls and stresses the importance of genuine user experiences over misleading benchmark scores. His insights offer a fresh perspective on AI's evolving landscape.
AI Snips
Smaller Models
- Smaller AI models are becoming more prevalent, driven by cost pressures and the widening range of tasks they can handle.
- On-device deployment improves privacy and cuts server costs; see the sketch after this list.
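To make the on-device point concrete, here is a minimal sketch of local inference with a small quantized model, assuming the llama-cpp-python package is installed; the model file name is a hypothetical placeholder, not something named in the episode.

```python
# A minimal sketch of on-device inference with a small quantized model.
# Assumes the llama-cpp-python package and a local GGUF model file;
# the model path below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,  # context window
)

# Prompts never leave the machine: no API key, no server round-trip.
output = llm(
    "Summarize: smaller models trade peak capability for privacy and cost.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```

Because the entire round-trip happens locally, the privacy and server-cost benefits in the snip fall out directly: no prompt data is sent to a provider, and there is no per-token API bill.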
Cost Paradox
- Reduced model costs often lead to increased total usage (the Jevons paradox); a worked example follows below.
- Use cases like scanning every email or generating code at scale show that inference cost remains a binding constraint even as hardware improves under Moore's Law.
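The dynamic can be shown with made-up numbers: if a 10x price drop unlocks more than 10x the usage, total spend rises rather than falls. All figures below are illustrative assumptions, not numbers from the episode.

```python
# A toy illustration of the Jevons paradox for inference costs.
# All prices and the usage multiplier are invented for illustration.

old_price = 10.00  # $ per million tokens
new_price = 1.00   # $ per million tokens after a 10x cost drop

old_usage = 5      # million tokens/month at the old price
new_usage = 80     # usage grows 16x once cheap inference unlocks
                   # always-on tasks like scanning every email

old_spend = old_price * old_usage  # $50/month
new_spend = new_price * new_usage  # $80/month

print(f"Price fell {old_price / new_price:.0f}x, "
      f"but spend rose from ${old_spend:.0f} to ${new_spend:.0f}")
```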
LLM Evaluation
- Evaluating Large Language Models (LLMs) is difficult because benchmarks rarely reflect how the models are used in the real world.
- Optimizing for benchmark scores rewards teaching to the test rather than improving real-world performance; a toy demonstration follows below.
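One way to see the benchmark problem is with a deliberately extreme case: a "model" that has memorized the test set. The lookup-table model and the questions below are invented for illustration, not from the episode.

```python
# A toy demonstration of why benchmark scores can mislead.
# The "model" is a hypothetical lookup table contaminated with the
# benchmark's exact questions and answers.

benchmark = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
}

memorizer = dict(benchmark)  # the model has seen the test set

def answer(model: dict, question: str) -> str:
    return model.get(question, "I don't know")

# On the benchmark itself, the memorizer looks perfect...
bench_score = sum(
    answer(memorizer, q) == a for q, a in benchmark.items()
) / len(benchmark)

# ...but real users phrase things differently, and the score collapses.
real_queries = {
    "what's two plus two": "4",
    "which city is France's capital": "Paris",
}
real_score = sum(
    answer(memorizer, q) == a for q, a in real_queries.items()
) / len(real_queries)

print(f"benchmark accuracy: {bench_score:.0%}")   # 100%
print(f"real-world accuracy: {real_score:.0%}")   # 0%
```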