
Machine Learning Street Talk (MLST) #031 WE GOT ACCESS TO GPT-3! (With Gary Marcus, Walid Saba and Connor Leahy)
Nov 28, 2020

This conversation features Gary Marcus, a psychology and neuroscience professor known for his critiques of deep learning, alongside Walid Saba, an expert in natural language understanding, and Connor Leahy, a proponent of large language models. They dig into GPT-3's strengths and weaknesses, the philosophical implications of AI creativity, and the case for integrating reasoning with pattern recognition. The dialogue also critiques AI's limitations in language understanding and explores paths toward true artificial general intelligence.
Updating GPT-3's Knowledge
- For GPT-3, incorporating new information requires retraining the entire system.
- Ideally, an AI system should be able to absorb new facts without complete retraining.
Prompt Engineering Limitations
- Prompt engineering in GPT-3 amounts to reshaping the autocomplete problem.
- It draws on the human's knowledge rather than any internal mechanism of the model, which makes it scientifically unsound as evidence of understanding.
GPT-3's Scaling Laws and Subjective Experience
- GPT-3's strength lies in its scaling laws: larger models perform predictably better.
- Its ability to handle natural-language questions makes it feel subjectively different from earlier models.
