
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Simplifying On-Device AI for Developers with Siddhika Nevrekar - #697
Aug 12, 2024
Siddhika Nevrekar, Head of AI Hub at Qualcomm Technologies, discusses simplifying on-device AI for developers. She highlights the shift from cloud to local device processing, emphasizing privacy and offline access. The conversation covers the challenges of optimizing AI across varied hardware and the collaboration needed between AI frameworks and manufacturers. Siddhika also introduces Qualcomm's AI Hub, aimed at streamlining model testing and fostering innovation in IoT, autonomous vehicles, and AI-integrated user experiences.
AI Snips
Testing Surface Explosion
- Consider device diversity (phones, tablets, laptops), OS variations (Android, Windows, Linux), and runtime choices (ONNX Runtime, TensorFlow Lite).
- These factors significantly expand the testing surface for on-device AI.
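The combinatorial growth described above can be made concrete with a short sketch. The example values below are illustrative, drawn from the snip rather than an exhaustive list:

```python
from itertools import product

# Illustrative axes of variation from the snip above (not exhaustive).
devices = ["phone", "tablet", "laptop"]
operating_systems = ["Android", "Windows", "Linux"]
runtimes = ["ONNX Runtime", "TensorFlow Lite"]

# Each combination is a distinct configuration that may behave differently
# and therefore needs its own on-device testing.
test_matrix = list(product(devices, operating_systems, runtimes))
print(len(test_matrix))  # 3 devices x 3 OSes x 2 runtimes = 18 configurations
```

Adding one more axis, such as chipset generation or model quantization scheme, multiplies the matrix again, which is why the testing surface explodes.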
Runtime Optimization
- Leverage runtimes (ONNX Runtime, TensorFlow Lite, DirectML), which handle workload distribution across hardware reasonably well.
- For advanced use cases (gaming, video processing), fine-grained control may be necessary.
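The workload-distribution idea can be sketched as an ordered fallback over available accelerators. This loosely mirrors how runtimes such as ONNX Runtime accept a preference-ordered list of execution providers; all names below are illustrative, not real API identifiers:

```python
def choose_backend(available, preference=("NPU", "GPU", "CPU")):
    """Return the first preferred compute backend present on the device.

    A minimal sketch of runtime-style fallback: callers state an ordered
    preference, and the runtime walks down the list until it finds a
    backend the hardware actually supports. Names are hypothetical.
    """
    for backend in preference:
        if backend in available:
            return backend
    raise RuntimeError("no supported backend found")

# A device with an NPU gets the NPU; one without it falls back to the GPU.
print(choose_backend({"CPU", "GPU", "NPU"}))  # NPU
print(choose_backend({"CPU", "GPU"}))         # GPU
```

For the advanced cases mentioned above (gaming, video processing), developers may instead pin specific operators or subgraphs to a chosen backend rather than rely on this automatic fallback.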
Fragmentation in On-Device AI
- The lack of a single driving organization behind on-device AI has led to fragmentation.
- Divergent development priorities of different companies contribute to the challenge.

