The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
40 snips
Nov 6, 2023 • 48min

AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio - #654

Yoshua Bengio, a leading AI safety researcher from Université de Montréal, joins the conversation to discuss the dire risks posed by advanced AI technologies. He highlights the potential for AI to manipulate, spread disinformation, and concentrate power, raising alarm over its impact on democracy. The discussion dives into the complexities of AI safety, agency, and the troubling distinction between mimicking emotion and true sentience. Bengio advocates for robust safety measures, regulatory frameworks, and an urgent need to align AI developments with human values.
20 snips
Oct 30, 2023 • 44min

Delivering AI Systems in Highly Regulated Environments with Miriam Friedel - #653

Miriam Friedel, Senior Director of ML Engineering at Capital One, shares her insights on deploying AI tools in regulated environments. She discusses creating a culture of collaboration and the importance of standardized tooling. Miriam highlights strategies like using open-source tools for compliance and speed, and dives into the challenges of maintaining consistency across large organizations. Her thoughts on building a 'unicorn' team and making smart build vs. buy decisions for MLOps offer a fresh perspective on the future of enterprise AI.
78 snips
Oct 23, 2023 • 40min

Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652

Riley Goodside, a staff prompt engineer at Scale AI, shares insights on mastering prompt engineering for large language models. He dives into the limitations and capabilities of LLMs, emphasizing the intricacies of autoregressive inference. Goodside discusses the effectiveness of zero-shot vs. k-shot prompting and the crucial role of Reinforcement Learning from Human Feedback. He highlights how effective prompting acts as a scaffolding structure to achieve desired AI responses, blending technical skill with strategic thinking.
42 snips
Oct 16, 2023 • 1h 19min

Multilingual LLMs and the Values Divide in AI with Sara Hooker - #651

Sara Hooker, Director at Cohere and head of Cohere For AI, dives into the fascinating world of multilingual language models and responsible AI. She discusses challenges in data quality, the Mixture of Experts technique, and the need for better collaboration between researchers and hardware architects. Sara highlights the emotional connection language models create in society, as well as safety concerns regarding universal AI models. The conversation emphasizes the importance of open science, inclusivity, and responsible practices in AI development for a harmonious future.
18 snips
Oct 9, 2023 • 39min

Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650

In this discussion, Luke Zettlemoyer, a University of Washington professor and Meta research manager, dives into the fascinating realm of multimodal generative AI. He highlights the transformative impact of integrating text and images, illustrating advancements like DALL-E 3. Zettlemoyer explains the significance of open science for AI development and the complexities of data in enhancing model performance. Topics also include the role of self-alignment in training and the future of multimodal AI amidst rising technology costs and the need for better assessment methods.
12 snips
Oct 2, 2023 • 49min

Pushing Back on AI Hype with Alex Hanna - #649

In this engaging discussion, Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR), dives into the complexities of AI hype and its societal impacts. She traces the origins of AI excitement and how it drives commercialization. Alex also sheds light on DAIR's innovative projects, including language technologies for low-resource languages in Ethiopia. The conversation tackles crucial topics like the politics of data sets and the ethical challenges in AI data sourcing, emphasizing the importance of critical evaluation and community engagement.
Sep 25, 2023 • 44min

Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648

Nataniel Ruiz, a research scientist at Google, shares insights on personalizing text-to-image AI models. He delves into DreamBooth, an innovative algorithm that enables personalized image generation using few user-provided images. The discussion covers the effectiveness of fine-tuning diffusion models and challenges like language drift, along with solutions like prior preservation loss. Nataniel also discusses advancements in his other projects like HyperDreamBooth and the creation of specialized datasets to enhance language reasoning in generative AI.
19 snips
Sep 18, 2023 • 41min

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

Shreya Rajpal, Founder and CEO of Guardrails AI, dives deep into the critical topic of ensuring safety and reliability in language models for production use. She discusses the various risks associated with LLMs, especially the challenges of hallucinations and their implications. The conversation navigates the need for robust evaluation metrics and innovative tools like Guardrails, an open-source project designed to enforce model correctness. Shreya also highlights the importance of validation systems and their role in enhancing the safety of NLP applications.
33 snips
Sep 11, 2023 • 59min

What’s Next in LLM Reasoning? with Roland Memisevic - #646

In this discussion, Roland Memisevic, Senior Director at Qualcomm AI Research, explores the future of language in AI systems. He highlights the shift from noun-centric to verb-centric datasets, enhancing AI's cognitive learning. Memisevic delves into the creation of Fitness Ally, an interactive fitness AI that integrates sensory feedback for a more human-like interaction. The conversation also covers advancements in visual grounding and reasoning in language models, noting their potential for more robust AI agents. A fascinating glimpse into the evolving landscape of AI!
11 snips
Sep 4, 2023 • 42min

Is ChatGPT Getting Worse? with James Zou - #645

In this conversation, James Zou, an assistant professor at Stanford known for his work in biomedical data science, dives into the evolving landscape of ChatGPT. He examines its fluctuating performance over recent months, discussing intriguing comparisons between versions. The potential for surgical AI enhancements inspires thoughts on the future of large language models. Zou also shares innovative insights on using Twitter data to build medical imaging datasets, addressing the challenges of data quality and oversight in AI for healthcare applications.