
The Reasoning Show: How TensorFlow Is Evolving
Sep 9, 2020
Andres Rodriguez is a senior principal engineer at Intel who builds deep learning systems bridging data science, hardware, and compilers. He discusses TensorFlow v2's shift to dynamic execution and where it shines (vision, language, recommendations). He covers when specialized hardware matters and Intel's CPU and accelerator work. He also outlines practical skills and tools for getting started with TensorFlow.
TensorFlow Simplifies End-to-End Deep Learning
- TensorFlow is designed to simplify building, training, and deploying deep learning models by abstracting math and implementation details.
- It provides dataset tooling, model instantiation functions, automatic gradient computation, and multi-backend support for CPU/GPU/TPU.
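The automatic gradient computation mentioned above can be illustrated with a minimal reverse-mode autodiff sketch in plain Python. This is a toy illustration of the idea, not TensorFlow's actual implementation (which builds on a much richer op set and graph machinery):

```python
# Toy reverse-mode automatic differentiation -- illustrative only.
# Each operation records how to push gradients back to its inputs,
# which is the core idea behind TensorFlow's gradient computation.

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self._backward = lambda: None  # no-op for leaf variables

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
            self._backward()
            other._backward()
        out._backward = backward
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
            self._backward()
            other._backward()
        out._backward = backward
        return out

# y = w * x + b; gradients of y with respect to w, x, b
w, x, b = Var(3.0), Var(2.0), Var(1.0)
y = w * x + b
y.grad = 1.0    # seed the output gradient
y._backward()
print(w.grad, x.grad, b.grad)  # 2.0 3.0 1.0
```

In TensorFlow this bookkeeping is hidden behind the API, so a user only defines the forward computation and asks for gradients.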
Graphs Map Naturally To Neural Networks
- Deep learning maps naturally to computation graphs where nodes are operations and edges carry data, making TensorFlow a strong fit for neural networks.
- TensorFlow v2 made eager execution (dynamic graphs) the default to better support models with complex control flow.
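The graph picture in the bullets above can be sketched in a few lines of plain Python (a hypothetical toy, not TensorFlow's real graph representation): nodes are named operations, edges carry values between them, and execution is a recursive walk from outputs back to inputs.

```python
# Toy static computation graph: nodes are operations, edges carry data.
# Illustration only -- not TensorFlow's actual graph format.

# Each node: name -> (op, list of input node names)
graph = {
    "x":   ("input", []),
    "w":   ("input", []),
    "mul": ("mul",   ["x", "w"]),
    "b":   ("input", []),
    "add": ("add",   ["mul", "b"]),
}

ops = {
    "mul": lambda a, b: a * b,
    "add": lambda a, b: a + b,
}

def run(graph, node, feeds, cache=None):
    """Evaluate `node` by recursively evaluating its input edges."""
    cache = {} if cache is None else cache
    if node in cache:
        return cache[node]
    op, inputs = graph[node]
    if op == "input":
        result = feeds[node]          # leaf: value supplied by the caller
    else:
        args = [run(graph, n, feeds, cache) for n in inputs]
        result = ops[op](*args)       # interior node: apply the operation
    cache[node] = result
    return result

# y = x * w + b
print(run(graph, "add", {"x": 2.0, "w": 3.0, "b": 1.0}))  # 7.0
```

Under eager execution the staging step disappears: each operation runs immediately when the Python line executes, which is what makes data-dependent control flow (loops, branches on tensor values) straightforward.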
Three Workloads Drive Most Deep Learning
- Three deep learning areas dominate production use: computer vision, language tasks, and recommender systems.
- Computer vision is most mature with many pretrained models; recommenders drive monetization at hyperscalers despite limited public datasets.
