Last Week in AI

#101 - GPT-4chan, ’Sentient’ AI, Tesla Crash Probe, BIG-bench, DALL-E mini

Jun 17, 2022
Dive into the intriguing fallout from the GPT-4chan controversy, raising questions about the ethics of AI deployment. A Google engineer's bold claim about AI sentience stirs debate, while safety concerns escalate around Tesla's Autopilot technology. Explore groundbreaking GPU-powered discoveries from the James Webb Space Telescope, and the quest to decode ancient Egyptian texts. The podcast highlights the release of BIG-bench, a rigorous benchmark for language models. Lastly, enjoy a look at the bizarre creations from DALL-E mini that are captivating users.
INSIGHT

Model Accessibility

  • Releasing a pre-trained model significantly lowers the barrier to misuse, like spreading hate speech.
  • While datasets and training scripts existed, a readily available model amplifies the potential harm.
ADVICE

Restricting Model Access

  • Restrict access to potentially harmful models, even for research purposes.
  • Hugging Face's gating feature offers a way to control access and prevent widespread misuse.
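As a concrete illustration of the gating approach mentioned above, a minimal sketch of downloading a file from a gated Hugging Face repository follows. The repo id and filename here are hypothetical placeholders; the pattern assumes the token belongs to an account whose access request for that repo has already been approved by its maintainers.

```python
def download_gated_model(repo_id: str, filename: str, token: str) -> str:
    """Download one file from a gated Hugging Face Hub repository.

    Gated repos reject anonymous downloads; the caller must pass an
    access token for an account that has been granted access.
    Returns the local path of the downloaded file.
    """
    # pip install huggingface_hub
    from huggingface_hub import hf_hub_download

    return hf_hub_download(repo_id=repo_id, filename=filename, token=token)


# Hypothetical usage (repo id and filename are illustrative, not real):
# path = download_gated_model(
#     repo_id="some-org/gated-model",
#     filename="config.json",
#     token="hf_...",  # token from an approved account
# )
```

Without a valid, approved token, the Hub returns an authorization error instead of the file, which is precisely what makes gating an access-control mechanism rather than a mere download convenience.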
ANECDOTE

LaMDA Sentience Claims

  • Google engineer Blake Lemoine claimed LaMDA, a language model, was sentient.
  • He published transcripts of conversations in which LaMDA expressed fears and desires, a move that led to his suspension.