
Tim Dettmers

Assistant professor (Carnegie Mellon University) and research scientist at the Allen Institute for AI, focused on efficient deep learning, quantization, and coding agents with notable work such as QLoRA and model compression.

Top 3 podcasts with Tim Dettmers

Ranked by the Snipd community
122 snips
Jan 22, 2026 • 1h 4min

The End of GPU Scaling? Compute & The Agent Era — Tim Dettmers (Ai2) & Dan Fu (Together AI)

Tim Dettmers, an assistant professor at Carnegie Mellon University, and Dan Fu, an assistant professor at UC San Diego, dive deep into the future of AGI. They debate the limitations of current hardware versus the untapped potential of efficient utilization. Tim warns of physical constraints like the von Neumann bottleneck, while Dan emphasizes better performance through optimized kernels. The conversation also reveals how agents can enhance productivity, with practical advice on leveraging them effectively for work automation and innovation in AI architectures.
40 snips
Nov 7, 2024 • 1h 16min

Interviewing Tim Dettmers on open-source AI: Agents, scaling, quantization and what's next

Join Tim Dettmers, a leading figure in open-source AI development and a future Carnegie Mellon professor, as he shares insights on the transformative potential of open-source AI models. He discusses the challenges of quantization and GPU resource efficiency, emphasizing their role in driving innovation. Tim also explores the evolving landscape of AI technology, comparing its impact to the internet revolution, while addressing the delicate balance between academic research and real-world applications. His passionate perspective offers a fresh take on the future of AI!
Aug 10, 2023 • 1h 7min

AI on your phone? Tim Dettmers on quantization of neural networks — #41

Tim Dettmers, a leader in quantization, discusses his background and research program. Topics include large language models, quantization, the democratization of AI technology, and the future of AI. Other chapters cover the challenges of dyslexia, progress in neural networks, quantization as a means of improving efficiency, the cost and efficiency of consumer GPUs for AI models, compression and algorithms in network transfer, and the limitations of neural networks.
