

Gradient Dissent: Conversations on AI
Lukas Biewald
Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting edge of AI and learn the intricacies of bringing models into production.
Episodes

Apr 1, 2021 • 49min
Vladlen Koltun — The Power of Simulation and Abstraction
From legged locomotion to autonomous driving, Vladlen explains how simulation and abstraction help us understand embodied intelligence.

Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel, where he leads an international lab of researchers working in machine learning, robotics, computer vision, computational science, and related areas.

Connect with Vladlen:
Personal website: http://vladlen.info/
LinkedIn: https://www.linkedin.com/in/vladlenkoltun/

Topics covered:
0:00 Sneak peek and intro
1:20 "Intelligent Systems" vs "AI"
3:02 Legged locomotion
9:26 The power of simulation
14:32 Privileged learning
18:19 Drone acrobatics
20:19 Using abstraction to transfer simulations to reality
25:35 Sample Factory for reinforcement learning
34:30 What inspired CARLA and what keeps it going
41:43 The challenges of and for robotics

Links discussed:
Learning quadrupedal locomotion over challenging terrain (Lee et al., 2020): https://robotics.sciencemag.org/content/5/47/eabc5986.abstract
Deep Drone Acrobatics (Kaufmann et al., 2020): https://arxiv.org/abs/2006.05768
Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Petrenko et al., 2020): https://arxiv.org/abs/2006.11751
CARLA: https://carla.org/

Check out the transcription and discover more awesome ML projects: http://wandb.me/vladlen-koltun-podcast

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects, and meet other people working in deep learning: http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery

Mar 25, 2021 • 39min
Dominik Moritz — Building Intuitive Data Visualization Tools
Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other.

Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple.

Connect with Dominik:
Twitter: https://twitter.com/domoritz
GitHub: https://github.com/domoritz
Personal website: https://www.domoritz.de/

Topics covered:
0:00 Sneak peek, intro
1:15 What is Vega-Lite?
5:39 The grammar of graphics
9:00 Using visualizations creatively
11:36 Vega vs Vega-Lite
16:03 ggplot2 and machine learning
18:39 Voyager and the challenges of scale
24:54 Model explainability and visualizations
31:24 Underrated topics: constraints and visualization theory
34:38 The challenge of metrics in deployment
36:54 In between aggregate statistics and individual examples

Links discussed:
Vega-Lite: https://vega.github.io/vega-lite/
Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366
Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html
Voyager: https://github.com/vega/voyager
Draco: https://github.com/uwdata/draco

Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-domink-moritz
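To make the grammar-of-graphics idea from this episode concrete: a minimal Vega-Lite spec combines data, a mark type, and encodings that map data fields to visual channels. Below is a sketch of such a spec written as a plain Python dict (the data values and field names are invented for illustration; see the Vega-Lite link above for the real schema):

```python
import json

# A minimal Vega-Lite specification: data + mark + encoding.
# The grammar separates *what* to draw (mark) from *how* data
# fields map to visual channels (encoding).
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"category": "a", "amount": 28},
        {"category": "b", "amount": 55},
    ]},
    "mark": "bar",  # the geometric shape to draw
    "encoding": {   # field-to-channel mappings
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "amount", "type": "quantitative"},
    },
}

# Serialized to JSON, this spec can be rendered by any Vega-Lite runtime.
spec_json = json.dumps(spec, indent=2)
```

Swapping `"bar"` for `"line"` or `"point"`, or remapping fields to other channels like `color`, yields a different chart from the same data, which is the composability the "grammar" in Vega-Lite refers to.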

Mar 18, 2021 • 49min
Cade Metz — The Stories Behind the Rise of AI
How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamics playing out between leaders at companies like Google, Microsoft, and Facebook.

Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain's leading science and technology news sites. His first book, "Genius Makers", tells the stories of the pioneers behind AI.

Get the book: http://bit.ly/GeniusMakers
Follow Cade on Twitter: https://twitter.com/CadeMetz/
And on LinkedIn: https://www.linkedin.com/in/cademetz/

Topics discussed:
0:00 Sneak peek, intro
3:25 Audience and characters
7:18 *Spoiler alert* AGI
11:01 The book ends, but the story goes on
17:31 Overinflated claims in AI
23:12 DeepMind, OpenAI, building AGI
29:02 Neuroscience and psychology, outsiders
34:35 Early adopters of ML
38:34 WojNet, where is credit due?
42:45 The press covering AI
46:38 Aligning technology and need

Read the transcript and discover awesome ML projects: http://wandb.me/cade-metz

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon

Mar 11, 2021 • 56min
Dave Selinger — AI and the Next Generation of Security Systems
Learn why traditional home security systems tend to fail, and how Dave's love of tinkering and deep learning is helping him and the team at Deep Sentinel avoid those same pitfalls. He also discusses the importance of combating racial bias by designing race-agnostic systems, and Deep Sentinel's approach to solving that problem.

Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company.

Links:
https://www.deepsentinel.com/
https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/
https://twitter.com/daveselinger

Topics covered:
0:00 Sneak peek, smart vs dumb cameras, intro
0:59 What is Deep Sentinel, how does it work?
6:00 Hardware, edge devices
10:40 OpenCV fork, tinkering
16:18 ML meetup, climbing the AI research ladder
20:36 The challenge of safety-critical applications
27:03 New models, re-training, exhibitionists and voyeurs
31:17 How do you prove your cameras are better?
34:24 Angel investing in AI companies
38:00 Social responsibility with data
43:33 Combating bias with data systems
52:22 Biggest bottlenecks in production

Read the transcript and discover more awesome machine learning material here: http://wandb.me/Dave-selinger-podcast

Mar 4, 2021 • 54min
Tim & Heinrich — Democratizing Reinforcement Learning Research
Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic roguelike video game from the late '80s.

Links discussed:
The NetHack Learning Environment: https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/
Reinforcement learning, intrinsic motivation: https://arxiv.org/abs/2002.12292
Knowledge transfer: https://arxiv.org/abs/1910.08210

Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford.
https://twitter.com/_rockt

Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR) and was previously a research engineer and team lead at DeepMind.
https://twitter.com/HeinrichKuttler
https://www.linkedin.com/in/heinrich-kuttler/

Topics covered:
0:00 A lack of reproducibility in RL
1:05 What is NetHack and how did the idea come to be?
5:46 RL in Go vs NetHack
11:04 The performance of vanilla agents, and what to optimize for
18:36 Transferring domain knowledge, source diving
22:27 Human vs machine intrinsic learning
28:19 ICLR paper: exploration and RL strategies
35:48 The future of reinforcement learning
43:18 Going from supervised to reinforcement learning
45:07 Reproducibility in RL
50:05 Most underrated aspect of ML, biggest challenges?

Feb 18, 2021 • 46min
Daphne Koller — Digital Biology and the Next Epoch of Science
From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research.

Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and has been a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world's largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success.

Links:
https://www.insitro.com/
https://www.insitro.com/jobs
https://www.engageli.com/
https://www.coursera.org/

Follow Daphne on Twitter: https://twitter.com/DaphneKoller
LinkedIn: https://www.linkedin.com/in/daphne-koller-4053a820/

Topics covered:
0:00 Giving back and intro
2:10 insitro's mission statement and Eroom's Law
3:21 The drug discovery process and how ML helps
10:05 Protein folding
15:48 From 2004 to now, what's changed?
22:09 On the availability of biology and vision datasets
26:17 Cross-functional collaboration at insitro
28:18 On teaching and founding Coursera
31:56 The origins of Engageli
36:38 Probabilistic graphical models
39:33 Most underrated topic in ML
43:43 Biggest day-to-day challenges

Feb 11, 2021 • 36min
Piero Molino — The Secret Behind Building Successful Open Source Projects
Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code.

Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, graph learning for Uber Eats, Uber's dialogue system), and published research on NLP, dialogue, visualization, graph learning, reinforcement learning, and computer vision.

Topics covered:
0:00 Sneak peek and intro
1:24 What is Ludwig, at a high level?
4:42 What is Ludwig doing under the hood?
7:11 No-code machine learning and data types
14:15 How Ludwig started
17:33 Model performance and underlying architecture
21:52 On Python in ML
24:44 Defaults and W&B integration
28:26 Perspective on NLP after 10 years in the field
31:49 Most underrated aspect of ML
33:30 Hardest part of deploying ML models in the real world

Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/
Piero's Twitter: https://twitter.com/w4nderlus7
Follow Piero on LinkedIn: https://www.linkedin.com/in/pieromolino/?locale=en_US
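To illustrate the declarative, no-code style discussed in this episode: a Ludwig-style model is specified as a config describing input and output features rather than as training code. The sketch below shows the general shape of such a config as a Python dict; the feature names are hypothetical, and the exact schema may differ, so consult the Ludwig docs linked above for the real format.

```python
# A declarative model spec in the style Ludwig popularized: you describe
# *what* the features are, not *how* to build or train the model.
# Feature names ("review_text", "sentiment") are made up for illustration.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
}

# From a config like this, a framework can infer the whole pipeline:
# type-appropriate preprocessing, an encoder per input feature,
# a decoder per output feature, and a training loop with sane defaults.
encoders = [f["name"] for f in config["input_features"]]
decoders = [f["name"] for f in config["output_features"]]
```

Changing a feature's `type` (e.g. from `text` to `image`) swaps in a different encoder without any code changes, which is the data-type-driven design Piero describes around the 7:11 mark.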

Feb 5, 2021 • 49min
Rosanne Liu — Conducting Fundamental ML Research as a Nonprofit
How Rosanne is working to democratize AI research and improve diversity and fairness in the field by starting a nonprofit, after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences.

Rosanne is a machine learning researcher and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school, she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles.

Links:
ML Collective: http://mlcollective.org/
Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/
LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/

Topics covered:
0:00 Sneak peek, intro
1:53 The origin of ML Collective
5:31 Why a nonprofit, and who is MLC for?
14:30 LCA: Loss Change Allocation
18:20 Running an org, research vs admin work
20:10 Advice for people trying to get published
24:15 On reading papers and the Intrinsic Dimension paper
36:25 NeurIPS open collaboration
40:20 What is your reward function?
44:44 Underrated aspect of ML
47:22 How to get involved with MLC

Jan 28, 2021 • 47min
Sean Gourley — NLP, National Defense, and Establishing Ground Truth
In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and how crucially it relates to national defense. They also chat about their experiences as second-time founders coming from a data science background, and how that affects the way they run their companies. We hope you enjoy this episode!

Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company that he co-founded back in 2009. Prior to that, he worked on self-repairing nano circuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war.

Links:
https://primer.ai/
Follow Sean on Twitter: https://twitter.com/sgourley

Topics covered:
0:00 Sneak peek, intro
1:42 Primer's mission and purpose
4:29 "The Diamond Age": how do we train machines to observe the world and help us understand it?
7:44 A self-writing Wikipedia
9:30 Being a second-time founder
11:26 Being a founder as a data scientist
15:44 Commercializing algorithms
17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers
23:00 AI safety, military/defense
29:20 Disinformation: does ML play a role?
34:55 Establishing ground truth and informational provenance
39:10 COVID misinformation, masks, division
44:07 Most underrated aspect of ML
45:09 Biggest bottlenecks in ML?

Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast

Jan 21, 2021 • 50min
Peter Wang — Anaconda, Python, and Scientific Computing
Peter Wang talks about co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and Python's use in scientific computing.

Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging. Peter's interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics), where he leads the open source and community innovation group. As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community and to advocating and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University.

Links:
Follow Peter on Twitter: https://twitter.com/pwang
https://www.anaconda.com/
Intake: https://www.anaconda.com/blog/intake-...
https://pydata.org/
"Scientific Data Management in the Coming Decade" paper: https://arxiv.org/pdf/cs/0502008.pdf

Topics covered:
0:00 (Intro) Technology is not value-neutral; don't punt on ethics
1:30 What is Conda?
2:57 Peter's story and Anaconda's beginning
6:45 Do you ever regret choosing Python?
9:39 On other programming languages
17:13 "Scientific Data Management in the Coming Decade"
21:48 Who are your customers?
26:24 The ML hierarchy of needs
30:02 The cybernetic era and Conway's Law
34:31 R vs Python
42:19 Most underrated: ethics, don't punt
46:50 Biggest bottlenecks: open source, Python


