MIT Technology Review Narrated

How Pokémon Go is giving delivery robots an inch-perfect view of the world

Mar 25, 2026
The story explores how billions of crowd-sourced landmark photos are training a precise world model. It highlights centimeter-accurate visual positioning and how it improves delivery-robot pickup and drop-off, describes a pilot in which robot cameras adapt to AR-trained maps, and contrasts living, real-world maps with rival spatial mapping approaches.
INSIGHT

Massive Crowdsourced Dataset Enables Centimeter Localization

  • Niantic Spatial trained a visual positioning model on 30 billion player-captured urban images to localize devices within centimeters.
  • Each image includes rich metadata (precise phone position, orientation, motion) across thousands of shots per hotspot, improving robustness.
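The core idea, that many observations of known landmarks over-determine a device's position, can be illustrated with a toy 2D range-based localization. This is a loose analogy only: a production visual positioning system like Niantic's recovers a full camera pose from matched image features, not from ranges, and the function below is a hypothetical simplification.

```python
import numpy as np

def trilaterate(landmarks, dists):
    """Least-squares 2D position from distances to known landmarks.

    Subtracting the first range equation |x - p_0|^2 = d_0^2 from the
    others turns the nonlinear system into a linear one, A x = b,
    which we solve by least squares. More landmarks -> a more
    over-determined, and hence more robust, fix.
    """
    p = np.asarray(landmarks, dtype=float)   # shape (n, 2), known positions
    d = np.asarray(dists, dtype=float)       # shape (n,), measured ranges
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# A device 3 m east and 4 m north of the first landmark:
pos = trilaterate([[0, 0], [10, 0], [0, 10]],
                  [5.0, 65 ** 0.5, 45 ** 0.5])
```

With thousands of redundant observations per hotspot, as in the dataset described above, the same over-determination principle is what pushes accuracy down to centimeters.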
ANECDOTE

Coco's Robot Fleet Uses Niantic's Model To Avoid Getting Lost

  • Coco Robotics runs ~1,000 sidewalk delivery robots that carry pizzas and groceries across multiple cities and have completed over half a million deliveries.
  • The robots travel ~5 mph, use four cameras mounted at hip height, and now ingest Niantic Spatial's model to compensate for GPS weaknesses in dense urban areas.
INSIGHT

Urban Canyons Break GPS For Sidewalk Navigation

  • GPS often drifts dozens of meters in urban canyons, making it unreliable for precise sidewalk-level tasks.
  • Visual positioning provides a reliable fix where the blue GPS dot can place you on a different block or the wrong side of the street.
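One simple way to combine a drifting GPS estimate with an occasional high-precision visual fix is inverse-variance weighting, where the far more certain visual fix dominates whenever it is available. This is a minimal sketch under assumed, illustrative noise figures (20 m GPS error, 5 cm visual error); the function name and numbers are not from the episode.

```python
import numpy as np

def fuse_fixes(gps_pos, gps_sigma, vps_pos=None, vps_sigma=0.05):
    """Inverse-variance fusion of a noisy GPS fix with a visual fix.

    Each estimate is weighted by 1/sigma^2, so a centimeter-grade
    visual fix effectively overrides a meters-grade GPS fix.
    Falls back to GPS alone when no visual fix is available.
    """
    gps = np.asarray(gps_pos, dtype=float)
    if vps_pos is None:
        return gps
    vps = np.asarray(vps_pos, dtype=float)
    w_gps = 1.0 / gps_sigma ** 2
    w_vps = 1.0 / vps_sigma ** 2
    return (w_gps * gps + w_vps * vps) / (w_gps + w_vps)

# GPS says we are 100 m away (urban-canyon drift); the visual fix
# says we are at the origin. The fused estimate sits essentially
# at the visual fix:
fused = fuse_fixes([100.0, 0.0], gps_sigma=20.0, vps_pos=[0.0, 0.0])
```

The same weighting idea explains why a robot fleet can keep GPS as a coarse fallback while trusting the visual positioning model for sidewalk-level decisions.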