The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
16 snips
Aug 28, 2023 • 45min

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Sophia Sanborn, a postdoctoral scholar at UC Santa Barbara, blends neuroscience and AI in her research. She dives into the universality of neural representations, showing how biological systems and deep networks converge on consistent, efficient features. The conversation also highlights her work on Bispectral Neural Networks, which links the Fourier transform to group theory, and explores how geometric deep learning could transform CNNs. Sanborn describes the striking similarities between artificial and biological neural structures, presenting a fascinating convergence of insights.
9 snips
Aug 21, 2023 • 34min

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Gokul Swamy, a Ph.D. student at Carnegie Mellon’s Robotics Institute, dives into the intriguing world of inverse reinforcement learning. He unpacks the challenges of mimicking human decision-making without direct reinforcement signals. Topics include streamlining AI learning through expert guidance and the complexities of medical decision-making with missing data. Gokul also discusses safety in multitask learning, emphasizing the balance between efficiency and safety in AI systems. His insights pave the way for future research in enhancing AI’s learning capabilities.
26 snips
Aug 14, 2023 • 38min

Explainable AI for Biology and Medicine with Su-In Lee - #642

Su-In Lee, a professor at the University of Washington's Paul G. Allen School of Computer Science, discusses her research on explainable AI in biology and medicine. She emphasizes the importance of interdisciplinary collaboration for improving cancer and Alzheimer's treatments. The conversation delves into the robustness of explainable AI techniques, the challenges of handling biomedical data, and the role of machine learning in drug combination therapies. Su-In also highlights innovative methods for personalized patient care and predictive insights in oncology.
21 snips
Aug 7, 2023 • 39min

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Bayan Bruss, Vice President of Applied ML Research at Capital One, dives into research on applying machine learning in finance. He discusses two papers presented at ICML, one on interpretability in image representations and one on a global graph transformer model. Listeners will learn about tackling computational challenges, the trade-off between model sparsity and performance, and the significance of embedding dimensions. With insights into advancing deep learning techniques, this conversation opens new avenues for efficiency in machine learning.
39 snips
Jul 31, 2023 • 37min

The Enterprise LLM Landscape with Atul Deo - #640

Atul Deo, General Manager of Amazon Bedrock, brings a wealth of experience in software development and product engineering. He dives into the intricacies of training large language models in enterprises, discussing the challenges and advantages of pre-trained models. The conversation highlights retrieval augmented generation (RAG) for improved query responses, as well as the complexities of implementing LLMs at scale. Atul also unveils insights into Bedrock, a managed service designed to streamline generative AI app development for businesses.
13 snips
Jul 24, 2023 • 37min

BloombergGPT - an LLM for Finance with David Rosenberg - #639

David Rosenberg, head of the machine learning strategy team at Bloomberg, discusses the fascinating development of BloombergGPT, a tailored large language model for finance. He dives into its unique architecture, validation methods, and performance benchmarks, revealing how it successfully integrates finance-specific data. David also addresses the challenges of processing financial information and the importance of ethical considerations in AI deployment, especially regarding bias and the necessity for human oversight.
31 snips
Jul 17, 2023 • 48min

Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness - #638

In this discussion, Robert Osazuwa Ness, a senior researcher at Microsoft Research, delves into the intriguing world of causal reasoning in large language models like GPT-3.5 and GPT-4. He examines their strengths and limitations, emphasizing the need for proper benchmarks and the importance of domain knowledge in causal analysis. Robert also highlights innovative methods for improving model performance through tailored reinforcement learning techniques and discusses the role of prompt engineering in enhancing causal inference tasks.
8 snips
Jul 10, 2023 • 38min

Privacy vs Fairness in Computer Vision with Alice Xiang - #637

Alice Xiang, a Lead Research Scientist at Sony AI and Global Head of AI Ethics at Sony Group Corporation, shares her expertise on the critical issues of privacy and fairness in computer vision. She discusses the impact of data privacy laws and the dangers of unauthorized data use, emphasizing the importance of ethical practices in AI. Alice highlights the history of unethical data collection and the challenges posed by generative technologies. Solutions such as community engagement and interdisciplinary collaboration are also explored, alongside the need for robust AI regulation.
59 snips
Jul 3, 2023 • 48min

Unifying Vision and Language Models with Mohit Bansal - #636

In this engaging discussion, Mohit Bansal, a Parker Professor and Director of the MURGe-Lab at UNC, dives into the unification of vision and language models. He highlights the benefits of shared knowledge in AI, introducing innovative models like UDOP and VL-T5 that achieve top results with fewer parameters. The conversation also tackles the challenges of evaluating generative AI, addressing biases and the importance of data efficiency. Mohit shares insights on balancing advancements in multimodal models with responsible usage and the future of explainability in AI.
Jun 26, 2023 • 53min

Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli - #635

Fatih Porikli, Senior Director of Technology at Qualcomm AI Research, shares insights from over 30 years in computer vision. He explores cutting-edge topics such as data augmentation techniques, optimized architectures, and advances in optical flow for video analysis. The conversation delves into the use of language models for fine-grained labeling, enhancing 3D object detection, and the role of generative AI in model efficiency. Fatih also discusses training neural networks and innovative approaches to integrating various data sources for improved accuracy.
