
Machine Learning Street Talk (MLST) #57 - Prof. Melanie Mitchell - Why AI is harder than we think
Jul 25, 2021 In this engaging discussion, Professor Melanie Mitchell, a leading expert in complexity and AI, teams up with Letitia Parcalabescu, an AI researcher and YouTuber. They tackle the contrasting cycles of optimism and disappointment in AI development. Topics include the challenges of achieving common-sense reasoning and effective analogy-making in machine learning. They delve into the philosophical underpinnings of intelligence, the nuances of creativity in AI, and the limitations of current neural networks, all while advocating for a deeper understanding of both human and artificial cognition.
Episode notes
Deep Learning Brittleness
- Deep learning systems, like their predecessors, are brittle and make unpredictable errors in unfamiliar situations.
- A key factor is their susceptibility to shortcut learning: latching onto statistical correlations that are not logically sound.
Dartmouth Workshop Challenges
- The Dartmouth Workshop (1956), often regarded as AI's birthplace, faced funding shortfalls and disagreements among its participants.
- Despite the organizers' optimism, reaching consensus on AI's direction proved difficult from the outset.
Persistent AI Debates
- From AI's inception, researchers pursued differing approaches: deductive reasoning, inductive methods, and biologically inspired programs.
- The same arguments about the best path to AI continue today.