
Machine Learning Street Talk (MLST) #103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy
Feb 11, 2023 Edward Grefenstette, Head of Machine Learning at Cohere and Honorary Professor at UCL, delves into the fascinating intersection of language, semantics, and philosophy. He discusses the complexities of understanding semantics in AI, particularly in moral contexts, and highlights the significance of Reinforcement Learning from Human Feedback (RLHF) for enhancing model performance. Grefenstette also tackles deep learning's 'Swiss cheese problem' and explores philosophical insights on intelligence, agency, and the nature of creativity in relation to AI.
Pragmatics and Code in LLMs
- Instruction-following language models demonstrate shallow pragmatic understanding, such as resolving binary implicatures.
- Including code in training data may enhance natural language understanding due to its strong syntax-semantics link and grounding.
Extending Montagovian Semantics
- Edward Grefenstette's doctoral research extended Montagovian semantics, combining syntactic structure with distributional vector representations.
- Inspired by Bob Coecke, he used a category-theoretic framework from quantum information flow to model language.
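The core idea of this categorical compositional (DisCoCat-style) approach can be sketched numerically: words of different grammatical types live in different tensor spaces, and the grammar dictates which tensor contractions combine them. The dimensions, vectors, and word choices below are illustrative assumptions, not from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy noun-space dimension (assumption)
s = 3  # toy sentence-space dimension (assumption)

# Nouns are vectors in a distributional space N.
cat = rng.normal(size=d)
dog = rng.normal(size=d)

# An adjective is a linear map N -> N (a d x d matrix);
# applying it is a matrix-vector contraction dictated by its type.
black = rng.normal(size=(d, d))
black_cat = black @ cat  # meaning vector for the phrase "black cat"

# A transitive verb is a tensor in N (x) S (x) N; contracting it with
# subject and object vectors yields a sentence vector in S.
chases = rng.normal(size=(d, s, d))
sentence = np.einsum("i,isj,j->s", dog, chases, cat)  # "dog chases cat"

print(black_cat.shape, sentence.shape)  # (4,) (3,)
```

The point of the framework is that syntactic type (noun, adjective, verb) fixes the tensor shape, so composition is uniform tensor contraction rather than ad hoc vector averaging.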
RLHF Skepticism
- Be skeptical of rushing to use RLHF.
- Prioritize high-quality data and cheaper annotation mechanisms over RLHF for stable signals.