The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

The Case for Hardware-ML Model Co-design with Diana Marculescu - #391

Jul 13, 2020
Diana Marculescu, a Professor of Electrical and Computer Engineering at UT Austin, dives into the intriguing world of hardware-aware machine learning. She discusses the necessity of co-designing hardware and ML models for maximizing efficiency. Key topics include optimizing neural networks for edge devices, profiling for power and latency in GPUs, and innovative approaches to architecture search. Diana also emphasizes the critical need for adaptable designs and the future potential of deep learning driven by hardware advancements.
INSIGHT

Co-design for Efficiency

  • Co-designing hardware and machine learning models is crucial for optimal performance.
  • This involves considering hardware constraints during model design and tailoring hardware to specific machine learning applications.
INSIGHT

Hardware Optimization Focus

  • Traditional hardware optimization focused on proxies like FLOPS and model size.
  • Now, the focus is shifting towards directly optimizing hardware for metrics like latency, power, and energy.
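The gap between proxy metrics and directly measured metrics is easy to demonstrate. The sketch below (an illustration, not from the episode) times two workloads with identical FLOP counts and typically shows very different wall-clock latencies, which is why FLOPs alone are an unreliable optimization target:

```python
import time
import numpy as np

def measure_latency_ms(fn, warmup=3, runs=10):
    """Median wall-clock latency of fn() in milliseconds."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return sorted(samples)[len(samples) // 2]

# Two workloads with identical FLOP counts:
a = np.random.rand(512, 512).astype(np.float32)
b = np.random.rand(512, 512).astype(np.float32)
c = np.random.rand(64, 64).astype(np.float32)

flops_dense = 2 * 512 ** 3         # one large 512x512 matmul
flops_split = 512 * (2 * 64 ** 3)  # 512 small 64x64 matmuls, same total FLOPs

dense_ms = measure_latency_ms(lambda: a @ b)
split_ms = measure_latency_ms(lambda: [c @ c for _ in range(512)])
print(f"same FLOPs ({flops_dense / 1e6:.0f} MFLOP): "
      f"{dense_ms:.2f} ms vs {split_ms:.2f} ms")
```

The FLOP counts match exactly, but per-call overhead, cache behavior, and BLAS efficiency make the measured latencies diverge, motivating direct hardware-in-the-loop metrics.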
ADVICE

Modeling for Efficiency

  • Use machine learning to build power and latency models for neural network components.
  • This allows these metrics to be estimated quickly without executing the network on the target hardware, which makes optimization loops such as architecture search far cheaper.
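The advice above can be sketched end to end: profile a few operator configurations on the target hardware, then fit a predictive model so later candidates can be scored without running them. This is a minimal illustration (a single matmul "layer", FLOPs as the only feature, least-squares fit); real predictors of this kind use richer per-layer features and more data:

```python
import time
import numpy as np

def measure_matmul_latency_ms(n, runs=5):
    """Measured latency (ms) of an n x n matmul, standing in for profiling a layer."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warmup
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        a @ b
        samples.append((time.perf_counter() - t0) * 1e3)
    return min(samples)

# 1. Profile a handful of layer sizes on the target device.
sizes = [64, 128, 256, 384, 512]
y = np.array([measure_matmul_latency_ms(n) for n in sizes])

# 2. Fit latency ~ w0 * FLOPs + w1 by least squares.
X = np.stack([2.0 * np.array(sizes, float) ** 3, np.ones(len(sizes))], axis=1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_latency_ms(n):
    """Estimate latency for an unseen size without executing the op."""
    return w[0] * (2.0 * n ** 3) + w[1]

print(f"predicted latency for n=300: {predict_latency_ms(300):.3f} ms")
```

Once fitted, the predictor costs microseconds per query, so an architecture search can evaluate thousands of candidate configurations against a latency or power budget without touching the hardware.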