The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
24 snips
May 23, 2022 • 48min

Principle-centric AI with Adrien Gaidon - #575

Adrien Gaidon, head of ML research at Toyota Research Institute, shares his insights on principle-centric AI and self-supervised learning. He presents an intriguing fourth perspective in the data-centric AI debate. The discussion covers innovative applications of synthetic data, particularly in autonomous vehicles, and the ethical challenges of machine learning. Adrien emphasizes integrating fundamental principles with data to foster advancements in AI, and how a curiosity-driven approach can enhance model robustness in high-stakes fields like healthcare.
24 snips
May 19, 2022 • 37min

Data Debt in Machine Learning with D. Sculley - #574

D. Sculley, a director on the Google Brain team known for his insights on technical debt in machine learning, dives into the evolving concept of data debt. He discusses the integral role data quality plays in data-centric AI and highlights common sources of data debt. The conversation touches on innovative strategies like causal inference graphs and stress testing for improving model robustness. Sculley also explores the community's proactive steps to mitigate these issues, emphasizing a shift towards more accountable data practices.
May 16, 2022 • 39min

AI for Enterprise Decisioning at Scale with Rob Walker - #573

Rob Walker, VP at Pegasystems, returns to share his expertise in AI and machine learning for customer engagement. He breaks down the 'next best action' decisioning problem and distinguishes it from recommender systems. The conversation dives into machine learning's coexistence with heuristic methods and the challenges of responsible AI practices. Rob also discusses the significance of feature stores and the balance between traditional models and deep learning, all while gearing up for the upcoming PegaWorld conference.
17 snips
May 12, 2022 • 42min

Data Rights, Quantification and Governance for Ethical AI with Margaret Mitchell - #572

Meg Mitchell, Chief Ethics Scientist at Hugging Face, dives into the crucial interplay between ethical AI and data governance. She discusses her move from big tech to a role that lets her prioritize hands-on coding, emphasizing the importance of diverse data representation. Meg highlights evolving data curation practices, ethical documentation through Model Cards, and the pressing need for transparency to mitigate biases in AI. The conversation also touches on challenges in distinguishing AI-generated content from human-written material, raising concerns about misinformation.
11 snips
May 9, 2022 • 53min

Studying Machine Intelligence with Been Kim - #571

Been Kim, a staff research scientist at Google Brain and ICLR 2022 speaker, dives into the fascinating world of AI interpretability. She discusses the current state of interpretability techniques, exploring how Gestalt principles can enhance our understanding of neural networks. Been proposes a novel language for human-AI communication, aimed at improving collaboration and transparency. The conversation also touches on the evolution of AI tools, the unique insights from AlphaZero in chess, and the implications of model fingerprints for data privacy.
May 2, 2022 • 38min

Advances in Neural Compression with Auke Wiggers - #570

Auke Wiggers, an AI research scientist at Qualcomm, dives into the exciting realm of neural data compression. He discusses how generative models and transformer architectures are revolutionizing image and video coding. The conversation highlights the shift from traditional techniques to neural codecs that learn from examples, and the impressive real-time performance on mobile devices. Auke also touches on innovations like transformer-based transform coding and shares insights from recent ICLR papers, showcasing the future of efficient data compression.
10 snips
Apr 25, 2022 • 46min

Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello - #569

Irwan Bello, a research scientist formerly with Google Brain and now part of a stealth AI startup, dives into the world of sparse expert models. He discusses his recent work on designing effective architectures that improve performance while managing computational costs. The conversation uncovers how the mixture-of-experts technique can extend beyond NLP to various tasks, including vision. Bello also shares insights on enhancing alignment in language models through instruction tuning and the challenges of optimizing these large-scale systems.
16 snips
Apr 18, 2022 • 52min

Daring to DAIR: Distributed AI Research with Timnit Gebru - #568

Timnit Gebru, founder of the Distributed AI Research Institute, joins the conversation to share her journey after her controversial departure from Google. She discusses the challenges of establishing independent research structures and the need for ethical AI practices. The importance of fairness beyond technical terms is highlighted, along with tackling systemic issues. Timnit also explores innovative projects, like examining spatial apartheid using AI. Throughout, she emphasizes the value of diverse voices and community engagement in reshaping AI research.
Apr 11, 2022 • 50min

Hierarchical and Continual RL with Doina Precup - #567

In this engaging conversation, Doina Precup, a Research team lead at DeepMind Montreal and a professor at McGill University, dives into her research on hierarchical and continual reinforcement learning. She discusses how agents can learn abstract representations and the critical role of reward specifications in shaping intelligent behaviors. Doina draws intriguing parallels between hierarchical RL and CNNs while exploring the challenges and future of reinforcement learning in dynamic environments, all while emphasizing the importance of adaptability and multi-level reasoning.
11 snips
Apr 4, 2022 • 30min

Open-Source Drug Discovery with DeepChem with Bharath Ramsundar - #566

Bharath Ramsundar, founder and CEO of Deep Forest Sciences, shares his expertise in AI-driven drug discovery and molecular design. He delves into the challenges biotech firms face in integrating AI, highlighting the need for collaboration and a solid infrastructure. The discussion includes the innovative DeepChem library and its datasets like MoleculeNet, which aim to enhance drug development processes. Bharath also emphasizes the importance of chemistry-aware validation methods for better model generalization and the evolving partnership between AI and traditional sciences.
