Meta's Code Llama, an AI code generator challenging OpenAI's dominance, and the implications of AI models training themselves. Experiments with multilingual speech synthesis to generate a fake phishing call on our mother. Deep dive into the evolution of GPT models, fine-tuning for GPT-3.5 Turbo announced by OpenAI, and the potential of AI-generated unit tests for code. Exploration of ElevenLabs' voice cloning technology and its practical applications. Discussion of phishing pranks and hardware investments for AGI development.
01:03:44
INSIGHT
Code Llama's Context Advantage
Meta released Code Llama with a 100k-token context window, which makes it powerful for coding tasks that require whole-codebase awareness.
Running models locally enables deep integration into IDEs and rapid offline iteration without expensive API calls.
INSIGHT
AI-Generated Alignment Data Works
Meta used synthetic instruction generation to produce huge alignment datasets from a small number of human examples, and achieved better human-rated results.
Over 50% of AI-generated examples were correct and useful despite noise, improving aligned model performance.
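The bootstrapping idea behind this snip can be sketched as a loop: seed a dataset with a few human-written instructions, prompt a model with random few-shot samples from the growing dataset to propose new instructions, and filter out duplicates and junk. This is a minimal toy sketch, not Meta's pipeline; the `fake_model` function is a hypothetical stand-in for a real LLM call, and the filtering here is deliberately crude.

```python
import random

# A few "human-written" seed instructions (illustrative, not from the paper).
SEED_EXAMPLES = [
    "Translate the sentence to French.",
    "Summarize the paragraph in one line.",
    "Write a regex matching ISO dates.",
]

def fake_model(prompt, rng):
    """Hypothetical stand-in for an LLM completion call.
    A real pipeline would sample a continuation of the few-shot prompt."""
    templates = [
        "Explain the function step by step.",
        "Rewrite the code in Go.",
        "List edge cases for the parser.",
        "Translate the sentence to French.",  # duplicates happen in practice
    ]
    return rng.choice(templates)

def bootstrap(seeds, n_rounds, rng):
    """Grow a synthetic instruction dataset from a handful of seeds."""
    dataset = list(seeds)
    seen = set(dataset)
    for _ in range(n_rounds):
        # Few-shot prompt built from a random sample of the growing dataset.
        prompt = "\n".join(rng.sample(dataset, k=3)) + "\nNew instruction:"
        candidate = fake_model(prompt, rng)
        # Crude filter: drop exact duplicates. Real pipelines also filter
        # malformed or low-quality generations, hence the "over 50% usable"
        # figure discussed in the episode.
        if candidate not in seen:
            seen.add(candidate)
            dataset.append(candidate)
    return dataset

data = bootstrap(SEED_EXAMPLES, n_rounds=20, rng=random.Random(0))
```

Even with a noisy generator, deduplication plus a quality filter leaves a dataset far larger than the seed set, which is the whole point of the approach.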
INSIGHT
Temperature Unlocks Novel Training Data
Varying model temperature and iterative prompting yields novel, diverse synthetic examples for instruction tuning.
That diversity appears to unlock more capability from base models than expected.
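Why temperature matters here can be shown with the standard formula: sampling divides the logits by a temperature T before the softmax, so higher T flattens the next-token distribution and sampled outputs vary more across generations. A minimal sketch with toy logits (the numbers are illustrative, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/T before softmax; higher T flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy in nats; higher means more varied samples."""
    return -sum(p * math.log(p) for p in probs if p > 0)

logits = [3.0, 1.5, 0.5, 0.1]  # toy next-token scores
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)

# A flatter (higher-entropy) distribution means sampled continuations vary
# more, which is what yields diverse synthetic instruction examples.
assert entropy(hot) > entropy(cold)
```

Sweeping the temperature across generation rounds is one simple way to get a mix of conservative and creative synthetic examples from the same base model.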
This week, the Zuck strikes again - Meta unveils a state-of-the-art AI code generator to challenge OpenAI's dominance. We explore the implications of AI models training themselves, and how that could accelerate capabilities. Then we put ElevenLabs' multilingual speech synthesis to the test, using it to generate a fake phishing call on our mother. Don't miss our scandalous experiments pushing AI to its limits in this jam-packed episode!
If you like the pod, please consider subbing, liking, commenting etc. xox
CHAPTERS:
=====
00:00 - Rehearsal of Phishing Our Mother (Cold Open)
00:19 - Meta's Code Llama
08:24 - Unnatural Instruction to Train AI Models
15:06 - Why Didn't Meta Release the Unnatural Instruction Code Llama Model? The Sparks of AGI?
16:50 - Evolution of GPT: Is Unnatural Instruction The Next Evolution of Models?
23:04 - DeepMind's Reinforced Self-Training (ReST) for Language Modeling Paper and Thoughts on Future Models
36:09 - Fine-Tuning GPT-3.5 Turbo Announced by OpenAI: Should You Just Fine-Tune Open Source?
44:05 - ElevenLabs Out of Beta and Multilingual v2: Explained by AI Us
48:12 - Chris Tried to Figure Out AI Phishing
53:03 - Rehearsing Phishing Our Mother Call & Implications of This AI Tech
59:43 - How Much We Lost Not Investing in NVIDIA
1:01:29 - AI Bros Give Investment Advice