
BBC Inside Science Thought-to-speech machine, City Nature Challenge, Science of Storytelling
Apr 25, 2019
Gopala Anumanchipalli, a neuroscientist at UCSF, unveils groundbreaking research on decoding neural signals to create a speech prosthesis for people unable to speak. Geoff Marsh shares insights from the City Nature Challenge, highlighting iNaturalist as a tool for recording urban biodiversity. Then journalist Will Storr explores the psychology of storytelling, linking narrative structure to human evolution and brain function, and discusses how storytelling can communicate complex science while acknowledging its pitfalls. The conversation bridges technology, ecology, and narrative.
Episode notes
Proof-Of-Principle Speech Clips From Brain Data
- The team demonstrates synthesized phrases produced from neural recordings of speech attempts.
- The output sounded distorted but clearly reproduced the intended sentences, synthesized from brain data alone.
Train With Broad Phonetic Sentence Sets
- Train models on many sentences covering diverse phonetic and articulatory contexts, so brain signals can be mapped across the full range of speech sounds.
- Use subjects speaking aloud during training to pair neural activity with known audio outputs.
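The training recipe above — pair neural activity recorded while a subject speaks aloud with the known audio output, then fit a decoder — can be illustrated with a toy sketch. This is an assumption-laden simplification, not the UCSF team's actual pipeline (which used recurrent networks and an intermediate vocal-tract representation): here simulated neural features are mapped to audio features with plain least squares.

```python
# Toy sketch (NOT the actual UCSF pipeline): pair simulated neural
# activity with known audio features during training, then decode.
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: each row is a neural feature vector recorded
# while the subject speaks aloud; the targets are paired audio features.
n_samples, n_neural, n_audio = 200, 32, 8
true_map = rng.normal(size=(n_neural, n_audio))   # hidden neural->audio relation
X_train = rng.normal(size=(n_samples, n_neural))  # neural recordings
Y_train = X_train @ true_map                      # paired audio outputs

# Fit a linear decoder by least squares (a stand-in for the neural-network
# decoder discussed in the episode).
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# At test time, decode audio features from neural activity alone.
X_test = rng.normal(size=(10, n_neural))
Y_pred = X_test @ W
err = np.max(np.abs(Y_pred - X_test @ true_map))
```

With noiseless simulated data the linear decoder recovers the mapping almost exactly; real cortical recordings are far noisier and nonlinear, which is why the research relies on richer models and broad phonetic coverage in the training sentences.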
Silent Mouth Movements Still Encode Speech
- Mouthed (silent) speech produces similar cortical patterns to vocalised speech, enabling decoding without sound.
- This validates the vocal-tract modelling approach for patients who cannot phonate.