

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
Episodes

Jun 19, 2023 • 57min
Mojo: A Supercharged Python for AI with Chris Lattner - #634
In a captivating discussion, Chris Lattner, co-founder and CEO of Modular AI and creator of the Swift programming language, dives into Mojo, a groundbreaking programming language designed for AI developers. He explains how Mojo bridges the gap between Python's ease of use and C++'s performance, tackling the limitations posed by Python, particularly the global interpreter lock. Lattner emphasizes Mojo's compatibility with existing Python libraries, its potential to enhance AI workflows, and the need for a unified approach in AI model deployment.

Jun 12, 2023 • 40min
Stable Diffusion and LLMs at the Edge with Jilei Hou - #633
Jilei Hou, VP of Engineering at Qualcomm Technologies, specializes in information theory and signal processing. He discusses the rise of generative AI and advances in deploying these models on edge devices. Challenges like model size and inference latency are highlighted, alongside solutions like quantization for optimizing performance. The conversation also dives into local optimization techniques that drastically reduce computation times for diffusion models. Jilei emphasizes the need for multimodal models, which could reshape AI interactions and drive future innovations.

Jun 5, 2023 • 47min
Modeling Human Behavior with Generative Agents with Joon Sung Park - #632
Joon Sung Park, a PhD student at Stanford University, is passionate about creating AI systems that address human challenges. He discusses his groundbreaking work on generative agents that mimic believable human behavior, emphasizing the role of context in AI interactions. The conversation delves into the complexities of long-term memory in agents and the significance of knowledge graphs for information retrieval. Joon also challenges traditional views on AI's worldview, exploring how emergent behaviors can reshape human-computer interaction.

May 29, 2023 • 39min
Towards Improved Transfer Learning with Hugo Larochelle - #631
Hugo Larochelle, a research scientist at Google DeepMind, shares his groundbreaking work on transfer learning and neural knowledge mobilization. He dives into the significance of pre-training and fine-tuning in AI models, discussing the challenges and innovations in applying these techniques across diverse fields. Hugo also enlightens listeners on context-aware code generation and the evolution of large language models, revealing how they enhance code completion. Additionally, he sheds light on the creation of the Transactions on Machine Learning Research journal, advocating for more rigorous and open scientific publishing.

May 22, 2023 • 28min
Language Modeling With State Space Models with Dan Fu - #630
Join Dan Fu, a PhD student at Stanford, as he dives into the evolving landscape of language modeling. He discusses the limitations of state space models and explores innovative techniques like Flash Attention, which enhances memory efficiency for processing longer sequences. Dan also shares insights on using synthetic languages to improve models and the quest for alternatives that outperform current attention-based methods. His research promises exciting advancements for the future of AI in understanding language.

May 15, 2023 • 43min
Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629
Dhruv Batra, an associate professor at Georgia Tech and research director at Meta's FAIR team, shares groundbreaking insights on blind navigation agents. He discusses the emergence of maps within these agents and the importance of the embodiment hypothesis for true intelligence. The conversation explores the distinctions between cognitive and robotic mapping, innovations in AI's navigational capabilities using multilayer LSTMs, and the crucial role of memory in spatial awareness. Batra emphasizes the need for responsible data usage and the fascinating evolution of AI methodologies in navigation.

May 8, 2023 • 41min
AI Agents and Data Integration with GPT and LLaMa with Jerry Liu - #628
Join Jerry Liu, co-founder and CEO of LlamaIndex, as he discusses the innovative creation of this platform that links external data with large language models. He shares insights on the challenges of integrating private data, the importance of automation in decision-making, and the evolution of AI agents. Liu also dives into strategies for optimizing complex queries and highlights the transformative potential of AI in processing unstructured data. Get ready to explore how technology can revolutionize data management!

May 1, 2023 • 33min
Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos - #627
Christos Louizos, an ML researcher at Qualcomm Technologies, dives into cutting-edge topics like hyperparameter optimization and federated learning. He discusses innovative techniques for speeding up transformers and optimizing computational graphs. You'll learn about effective methods for adapting models during distribution shifts and the significance of data partitioning. Louizos also highlights the challenges in federated learning and its implications for data privacy and efficiency, setting the stage for future advancements in the field.

Apr 24, 2023 • 38min
Are LLMs Overhyped or Underappreciated? with Marti Hearst - #626
In this engaging discussion, Marti Hearst, a UC Berkeley Professor and expert in natural language processing, shares her insights on AI language models. She raises questions about their supposed cognition and their potential for misinformation. The conversation dives into the evolution of search technology and tools like ChatGPT, emphasizing the need for human oversight. Marti also highlights her groundbreaking work in search user interfaces and the intersection of language and visualization, revealing how text influences information retention.

Apr 17, 2023 • 60min
Are Large Language Models a Path to AGI? with Ben Goertzel - #625
Ben Goertzel, CEO of SingularityNET and a pioneer in AGI research, dives into the future of artificial general intelligence. He discusses the limitations of current large language models and advocates for a decentralized approach to AGI akin to the internet's rollout. The conversation touches on integrating neural networks with symbolic logic for more effective AI systems, and the creative potential of LLMs in music generation. Ben also shares insights from his work with the OpenCog framework and the ethical implications of emerging AGI technologies.


