The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Jan 24, 2022 • 36min

Solving the Cocktail Party Problem with Machine Learning, w/ Jonathan Le Roux - #555

Jonathan Le Roux, a Senior Principal Research Scientist at Mitsubishi Electric Research Laboratories, dives into the fascinating world of the cocktail party problem, where he tackles the challenge of separating speech from noise and other voices. He discusses his innovative paper on the 'cocktail fork problem,' which categorizes audio into speech, music, and sound effects. Le Roux explores the evolution of machine learning techniques in audio processing and reveals insights on how advanced models can enhance clarity in noisy environments.
Jan 20, 2022 • 36min

Machine Learning for Earthquake Seismology with Karianne Bergen - #554

In this engaging discussion, Karianne Bergen, an assistant professor at Brown University specializing in earthquake seismology and machine learning, delves into her innovative research. She shares insights on using machine learning to detect weak seismic signals and the challenges of distinguishing real earthquakes from noise. Karianne also emphasizes the need for tailored machine learning solutions in seismology and highlights the shifting landscape of scientists' understanding of machine learning, advocating for stronger educational frameworks in the field.
Jan 17, 2022 • 46min

The New DBfication of ML/AI with Arun Kumar - #553

In this engaging conversation, Arun Kumar, an associate professor at UC San Diego known for his work on Cerebro and SortingHat, discusses the concept of the 'DBfication' of machine learning. He emphasizes how merging ML and database technologies can enhance efficiency and scalability, and shares insights on his tools: Cerebro, for scalable model selection, and SortingHat, for automating data prep. Their integration could significantly improve machine learning workflows, showcasing the future potential of MLOps and collaborative efforts across both fields.
Jan 13, 2022 • 30min

Building Public Interest Technology with Meredith Broussard - #552

Meredith Broussard, an associate professor at NYU and research director at the NYU Alliance for Public Interest Technology, dives into the critical junction of technology and societal fairness. She discusses her NeurIPS talk on making technology anti-racist and accessible, emphasizing the importance of algorithmic accountability to combat biases in areas like predictive policing. The conversation also explores the ethical dilemmas posed by AI in education, advocating for inclusive tech solutions that address systemic inequalities and foster responsible practices.
Jan 10, 2022 • 39min

A Universal Law of Robustness via Isoperimetry with Sebastien Bubeck - #551

Sebastien Bubeck, a Senior Principal Research Manager at Microsoft, discusses his award-winning paper on the universal law of robustness via isoperimetry. He explains the significance of convex optimization in machine learning and its applications to multi-armed bandit problems. The conversation delves into the necessity of overparameterization in neural networks for data interpolation and its implications for adversarial robustness. Bubeck also explores isoperimetry's connection to neural networks and the challenges of scaling training methods.
Jan 6, 2022 • 1h 18min

Trends in NLP with John Bohannon - #550

Join John Bohannon, Director of Science at Primer AI, as he dives into the evolving landscape of NLP. He shares key insights on how NLP has shifted from rapid innovation to a more incremental phase and is now ‘eating’ the rest of machine learning. The discussion also covers groundbreaking advancements like multilingual models, the integration of NLP with computer vision, and the ethical implications of large language models. Explore challenges in benchmarking and innovative future applications in context management and gaming.
Jan 3, 2022 • 58min

Trends in Computer Vision with Georgia Gkioxari - #549

Georgia Gkioxari, a research scientist at Meta AI specializing in computer vision, dives into the year's groundbreaking advancements. She discusses how Neural Radiance Fields (NeRF) are reshaping 3D scene reconstruction and the advantages of transformers over CNNs in image recognition. Gkioxari examines the evolving role of ImageNet and the exciting challenges posed by emerging fields like the metaverse. Additionally, she highlights promising startups and the collaborative future for hardware and software researchers in the AI landscape.
Dec 27, 2021 • 37min

Kids Run the Darndest Experiments: Causal Learning in Children with Alison Gopnik - #548

In this engaging discussion, Alison Gopnik, a UC Berkeley professor known for her work in psychology and philosophy, delves into how children learn about the world through causal inference. She reveals how kids' exploration mirrors the scientific method, highlighting parallels between their learning and advancements in AI. Gopnik emphasizes the importance of understanding complex causal relationships and encourages using insights from children's learning to improve machine learning models and address social biases in AI design.
Dec 23, 2021 • 36min

Hypergraphs, Simplicial Complexes and Graph Representations of Complex Systems with Tina Eliassi-Rad - #547

In this engaging conversation, Tina Eliassi-Rad, a Northeastern University professor specializing in network science and machine learning, dives into the intricacies of graph representations in complex systems. She highlights the challenges of accurately modeling epidemics and the implications of asymmetric information in economic networks. Tina also discusses her workshop talk, emphasizing the disconnect between data sourcing and modeling practices. With insights on graph theory and network interventions, this discussion is a treasure trove for AI enthusiasts!
Dec 20, 2021 • 53min

Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546

Oriol Vinyals, Lead of the Deep Learning team at DeepMind, shares his insights on the evolving landscape of AI. He discusses the state of transformer models and their potential limitations, as well as the recent paper on StarCraft II Unplugged, exploring the depth of offline reinforcement learning. The conversation delves into translating gaming AI innovations into real-world applications and examines advancements in multimodal few-shot learning. Vinyals also reflects on the consequences of scale in deep learning, inviting thoughts on future directions.
