The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Sep 2, 2021 • 46min

Advancing Robotic Brains and Bodies with Daniela Rus - #515

Daniela Rus, Director of CSAIL and Deputy Dean of Research at MIT, shares her fascinating journey in robotics and AI. She offers insights into the history and impact of CSAIL, emphasizing the importance of physicality in robotics. Rus discusses her innovative work in soft robotics and the creation of a mini surgeon robot, as well as advancements in autonomous vehicles. The conversation also explores the integration of AI and robotics, highlighting efficiency and safety in machine learning and their potential to transform a range of fields.
Aug 30, 2021 • 46min

Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard - #514

In this discussion, Alexander Richard, a research scientist at Facebook Reality Labs and ICLR Best Paper Award winner, shares insights into his groundbreaking work on binaural audio synthesis. He dives into the challenges of audio representation in noisy environments and the complex process of generating realistic spatial audio from mono sources. Richard also highlights the difficulties of dynamic time warping and the need for accurate 3D measurements in virtual reality. His thoughts on Codec Avatars and future research directions promise to reshape how we experience sound and presence in virtual spaces.
Aug 26, 2021 • 36min

Using Brain Imaging to Improve Neural Networks with Alona Fyshe - #513

Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta. We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation. The complete show notes for this episode can be found at twimlai.com/go/513.
Aug 23, 2021 • 50min

Adaptivity in Machine Learning with Samory Kpotufe - #512

In this engaging conversation, Samory Kpotufe, an associate professor at Columbia University, delves into the complexities of adaptive algorithms in machine learning. He highlights the importance of self-tuning algorithms that can adjust to varying data. The discussion covers transfer learning, emphasizing practical applications and challenges. Samory also touches on innovative methods in unsupervised learning and anomaly detection, especially within resource-constrained devices. His insights into the intersection of fractals and high-dimensional data add a fascinating layer to the conversation.
Aug 19, 2021 • 44min

A Social Scientist’s Perspective on AI with Eric Rice - #511

Eric Rice, an associate professor at USC and co-director of the USC Center for Artificial Intelligence in Society, sheds light on the intersection of AI and social science. He shares his experiences working on projects like HIV prevention for homeless youth and using machine learning to aid in housing resource allocation. Eric emphasizes the need for interdisciplinary collaboration and discusses how social scientists approach assessment differently than computer scientists, focusing on real-world impacts of AI solutions.
Aug 16, 2021 • 42min

Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández-Lobato - #510

José Miguel Hernández-Lobato, a machine learning lecturer at the University of Cambridge, shares insights on the fusion of Bayesian learning and deep learning in molecular design. He discusses innovative methods for predicting chemical reactions and explores the challenges of sample efficiency in reinforcement learning. José elaborates on deep generative models, their role in molecular property prediction, and strategies for enhancing the robustness of machine learning through invariant risk minimization. His research reveals exciting pathways in optimizing molecule discovery.
Aug 12, 2021 • 47min

Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509

Greg Brockman, co-founder and CTO of OpenAI, dives into the innovative Codex API, which extends the capabilities of GPT-3 for coding tasks. He discusses the key differences in performance between Codex and GPT-3, emphasizing Codex's reliability with programming instructions. The potential of Codex as an educational tool is highlighted, alongside its implications for job automation and fairness in AI. Brockman also details the Copilot collaboration with GitHub and the exciting rollout strategies for engaging users with this groundbreaking technology.
Aug 9, 2021 • 32min

Spatiotemporal Data Analysis with Rose Yu - #508

In this engaging discussion, Rose Yu, an assistant professor at UC San Diego, delves into her groundbreaking work on machine learning for spatiotemporal data. She explains how integrating physical principles and symmetry enhances neural network architectures. The conversation covers innovative approaches in climate modeling, including turbulent prediction and the application of Physics Guided AI. Rose also addresses uncertainty quantification in models, crucial for applications like COVID-19 forecasting, showcasing the importance of confidence in predictions.
Aug 5, 2021 • 51min

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

In this engaging discussion, Bryan Catanzaro, VP of Applied Deep Learning Research at NVIDIA, delves into high-performance computing's intersection with AI. He reveals insights about the Megatron framework for training large language models and the three parallelism types that enhance model efficiency. Bryan also highlights the challenges in supercomputing, the pioneering Deep Learning Super Sampling technology for gaming graphics, and innovative methods for generating high-resolution synthetic data to improve image quality in AI applications.
Aug 2, 2021 • 54min

Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506

Join Lina Montoya, a postdoctoral researcher at UNC Chapel Hill focused on causal inference in precision medicine. She dives into her innovative work on Optimal Dynamic Treatment Rules, particularly in the U.S. criminal justice system. Lina discusses the critical role of often-neglected assumptions in causal inference, the super learner algorithm's impact on predicting treatment effectiveness, and future research directions aimed at optimizing therapy delivery in resource-constrained settings like rural Kenya. This engaging discussion highlights the intersection of AI, healthcare, and justice.