AI Snips
Transformer Pretraining Fueled GPT Breakthroughs
- Transformer architectures and self-supervised pretraining let language models capture long-range context and generate coherent paragraphs.
- Scott traces the lineage from Word2Vec through "Attention Is All You Need" to OpenAI's progression from GPT-2 (1.5B parameters) to GPT-3 (175B) and InstructGPT.
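The pretraining objective behind that progression is simply next-token prediction: learn from text which token tends to follow which context. A minimal sketch of the idea, using bigram counts on a toy corpus rather than a neural network (corpus and names are illustrative, not from the episode):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count bigrams: how often each token follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token under the bigram counts."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

A transformer replaces the count table with attention over the whole preceding context, which is what lets it track long-range dependencies instead of just the previous word.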
How GANs Turn Noise Into Convincing Images
- Image generators used adversarial training to iteratively refine noise into photorealistic images.
- Scott explains the GAN loop: a generator refines noise until a discriminator network can no longer tell the image is synthetic.
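That adversarial loop can be sketched in one dimension: a linear "generator" reshapes noise while a logistic "discriminator" tries to tell its samples from real ones, and each takes gradient steps against the other. This is a toy illustration under assumed settings (target distribution, learning rates), not an image model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores "probability x is real".
w, c = 0.0, 0.0

lr_d, lr_g, n = 0.05, 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) -- nudge a, b to fool D.
    d_fake = sigmoid(w * fake + c)
    g_signal = (1 - d_fake) * w
    a += lr_g * np.mean(g_signal * z)
    b += lr_g * np.mean(g_signal)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(round(fake_mean, 2))  # should drift toward the real mean of 4
```

Real image GANs follow the same alternating updates, with the scalars replaced by deep convolutional networks and the noise by a high-dimensional latent vector.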
Update Testing Methods To Verify Student Work
- Reinstate assessment methods that verify original thought, like in-person or proctored exams.
- Scott cites professors switching from take-home essays to supervised pen-and-paper tests after ChatGPT produced indistinguishable answers.


