
650: SparseGPT: Remove 100 Billion Parameters but Retain 100% Accuracy
Super Data Science: ML & AI Podcast with Jon Krohn
00:00
Exploring the Pruning Technique SparseGPT for Large Language Models
Researchers introduce SparseGPT, a pruning technique that removes over half of GPT-3's parameters without losing accuracy, potentially shrinking model size by up to 90% and cutting inference costs. The chapter also touches on an upcoming NLP conference and a free-access deal for the O'Reilly platform.
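To make the idea of pruning concrete, here is a minimal sketch of simple magnitude pruning: zeroing out the smallest-magnitude weights of a layer until a target sparsity is reached. Note this is only an illustration of pruning in general; SparseGPT itself uses a more sophisticated one-shot, layer-wise reconstruction method, and the function and array names below are hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero.
    (Illustrative only -- not the SparseGPT algorithm.)"""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Hypothetical example: prune a random 64x64 weight matrix to 50% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.5)
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```

With half the entries zeroed, the matrix can be stored and multiplied in sparse form, which is where the size and cost savings discussed in the episode come from.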


