
Sebastian Raschka

Independent LLM researcher focused on large language models, reasoning techniques, and practical AI tool integration, and author of books on building and reasoning with LLMs.

Top 5 podcasts with Sebastian Raschka

Ranked by the Snipd community
311 snips
Feb 26, 2026 • 1h 19min

AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762

Sebastian Raschka, independent LLM researcher and author of books on building reasoning models, breaks down 2026 trends: the shift from scaling toward reasoning-focused post-training and inference-time techniques. He discusses practical agentic workflows and local agents like OpenClaw, tool integration versus model quality, architecture shifts (MoE, attention variants), long-context trade-offs, and the challenges of continual learning.
11 snips
Nov 21, 2024 • 1h 6min

Build LLMs From Scratch with Sebastian Raschka #52

Sebastian Raschka, a Senior Staff Research Engineer at Lightning AI and bestselling author, dives into the art of building large language models. He shares insights on two significant open-source libraries, PyTorch Lightning and LitGPT, that enhance LLM training and deployment. The discussion shifts to his new book, where he outlines essential steps in LLM training and contrasts models like GPT-2 with the latest Llama 3. Sebastian also explores the universe of multimodal LLMs and their potential, highlighting exciting developments on the horizon.
11 snips
Mar 19, 2024 • 1h 48min

767: Open-Source LLM Libraries and Techniques, with Dr. Sebastian Raschka

Dr. Sebastian Raschka, author of Machine Learning Q and AI, joins Jon Krohn to discuss PyTorch Lightning, opportunities in LLM development, DoRA versus LoRA, and what it takes to be a successful AI educator.
10 snips
Aug 1, 2024 • 1h 4min

Interviewing Sebastian Raschka on the state of open LLMs, Llama 3.1, and AI education

Sebastian Raschka, a staff research engineer at Lightning AI and AI educator, dives into the dynamic landscape of open language models. He discusses the evolution of Llama 3.1 and its implications for AI research. Sebastian shares insights from his experience as an Arxiv moderator, shedding light on the challenges of navigating academic papers. The conversation also covers advancements in model training techniques, the importance of ethics in AI, and how open access enhances AI education. Tune in for a fascinating look at the future of AI and language models!
May 15, 2024 • 1h 52min

Episode 26: Developing and Training LLMs From Scratch

Sebastian Raschka discusses developing and training large language models (LLMs) from scratch, covering prompt engineering, fine-tuning, and RAG systems. The conversation explores the skills, resources, and hardware required, the lifecycle of LLMs, live coding of a spam classifier, and the importance of hands-on experience. They also touch on using PyTorch Lightning and Fabric for managing large models, techniques used in natural language processing models, and evaluating LLMs on classification problems.
