This Day in AI Podcast

EP29: Meta's Code Llama, Unnatural Instruction, Phishing Our Mother & OpenAI's GPT3.5 Fine Tuning

Aug 25, 2023
Meta's Code Llama, an AI code generator challenging OpenAI's dominance, and the implications of AI models training themselves. Experiments with multilingual speech synthesis to generate a fake phishing call on our mother. Deep dive into the evolution of GPT models, fine-tuning of GPT-3.5 Turbo announced by OpenAI, and the potential of AI-generated unit tests for code. Exploration of ElevenLabs' voice cloning technology and its practical applications. Discussion of phishing pranks and hardware investments for AGI development.
INSIGHT

Code Llama's Context Advantage

  • Meta released Code Llama with a 100k-token context window, which makes it powerful for coding tasks that require whole-codebase awareness.
  • Running models locally enables deep integration into IDEs and rapid offline iteration without expensive API calls.
INSIGHT

AI-Generated Alignment Data Works

  • Meta used synthetic instruction generation to produce large alignment datasets from a small set of human examples, and the resulting models scored better in human ratings.
  • Over 50% of the AI-generated examples were correct and useful despite noise, which was enough to improve the aligned model's performance.
INSIGHT

Temperature Unlocks Novel Training Data

  • Varying model temperature and iterative prompting yields novel, diverse synthetic examples for instruction tuning.
  • That diversity appears to unlock more capability from base models than expected.
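The mechanism behind this insight is standard temperature sampling: dividing logits by a temperature before the softmax flattens (or sharpens) the output distribution, so higher temperatures surface rarer continuations and hence more varied synthetic examples. A minimal toy sketch (not Meta's actual pipeline; the vocabulary and logits below are made up for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample one index from softmax(logits / temperature)."""
    # Higher temperature flattens the distribution, so less
    # likely tokens are sampled more often (more diversity).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r <= cum:
            return i
    return len(exps) - 1

# Toy 4-token vocabulary with one strongly favoured token.
logits = [4.0, 1.0, 0.5, 0.2]
rng = random.Random(0)

def distinct_samples(temperature, n=200):
    """Count how many distinct tokens appear in n samples."""
    return len({sample_with_temperature(logits, temperature, rng)
                for _ in range(n)})

low_t = distinct_samples(0.2)   # near-greedy: little variety
high_t = distinct_samples(2.0)  # flattened: much more variety
```

At low temperature the favoured token dominates almost every draw, while at high temperature the other tokens appear regularly, which is why sweeping the temperature during generation yields a more diverse pool of synthetic instruction examples.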