
AI for Founders with Ryan Estes: The uncanny valley is real
Mar 5, 2026
Marcin Dymczyk, founder of SevenSense and a robotics engineer from ETH Zurich, builds vision-based autonomy for mobile robots. He discusses using cameras, an IMU, and wheel odometry instead of fiducial markers or LiDAR. He explains deploying vision on forklifts and cleaning machines, designing camera arrays, and making robots communicate intent and negotiate space with humans.
AI Snips
Build the Robot Brain, Not the Whole Robot
- SevenSense built the "eyes and brains" for mobile robots instead of entire robots to focus on sensing, compute, and spatial reasoning.
- That specialization filled a market gap for robots that previously followed rigid paths and couldn't adapt to dynamic warehouse environments.
Vision Navigation Avoids Floor Rewrites
- Vision-based navigation uses natural features so robots don't need infrastructure like QR codes, magnetic tape, or floor redesigns.
- Cameras plus IMU and odometry let robots build maps and localize similarly to human perception, increasing adaptability across sites.
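The camera + IMU + odometry fusion mentioned above can be sketched at its simplest layer: dead-reckoning a 2D pose from wheel speed and gyro yaw rate. This is a minimal illustration, not SevenSense's actual pipeline; in a real visual-inertial system, camera observations of natural landmarks continuously correct the drift this estimate accumulates.

```python
import math

def integrate_odometry(pose, speed, yaw_rate, dt):
    """Dead-reckon a 2D pose (x, y, theta) one time step forward.

    speed comes from wheel odometry, yaw_rate from the IMU gyro.
    Purely illustrative: drift grows without vision-based corrections.
    """
    x, y, theta = pose
    theta = theta + yaw_rate * dt            # heading update from the gyro
    x += speed * math.cos(theta) * dt        # advance along the new heading
    y += speed * math.sin(theta) * dt
    return (x, y, theta)

# Drive straight at 1 m/s for 2 s in 0.1 s steps.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = integrate_odometry(pose, 1.0, 0.0, 0.1)
```

After the loop the robot believes it has moved 2 m along x; any wheel slip or gyro bias would corrupt this, which is why the robot also localizes against a visual map.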
Camera Geometry Mirrors Human Eyes
- Camera placement is application specific; stereo baselines similar to human eye spacing (~10–12 cm) are optimal for close-range object recognition.
- Deploying 6–8 cameras around a robot increases robustness against occlusion by loads or walls.
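Why baseline matters can be seen from the standard pinhole-stereo relation Z = f·B/d (depth from focal length, baseline, and pixel disparity). This is a textbook formula, not something stated in the episode; the focal length and disparity values below are illustrative assumptions.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Assumed numbers: a 0.11 m baseline (within the ~10-12 cm range above)
# and a 700 px focal length. A 20 px disparity then implies:
z = stereo_depth(700.0, 0.11, 20.0)  # 3.85 m
```

The relation shows the trade-off: a wider baseline gives larger disparities (better depth resolution at range) but makes close objects harder to match in both views, which is why baselines near eye spacing suit close-range work.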
