
RoboPapers Ep#70: A Systematic Study of Data Modalities and Strategies for Co-training Large Behavior Models for Robot Manipulation
Co-training has become a key part of the recipe for training large robotics models: it means mixing some proportion of real robot data with other data sources, such as simulation or egocentric human video data. This is especially important because robotics data tends to lack diversity, which the inclusion of these other modalities can partly compensate for.
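To make that mixing concrete, here is a minimal, purely illustrative Python sketch of sampling a training batch from a weighted mixture of a target robot dataset and auxiliary co-training sources. The source names and mixture weights are hypothetical and are not the proportions used in the paper.

```python
import random

# Hypothetical data sources and mixture weights -- illustrative only,
# not the ratios or modalities studied in the paper.
datasets = {
    "target_robot_data": 0.6,      # real robot trajectories for the target embodiment
    "cross_embodiment_data": 0.2,  # robot data collected on other embodiments
    "human_video_data": 0.1,       # egocentric human manipulation videos
    "vision_language_data": 0.1,   # generic vision-language samples
}

def sample_batch_sources(batch_size: int) -> list[str]:
    """Pick a data source for each example in a batch according to the mixture weights."""
    names = list(datasets.keys())
    weights = list(datasets.values())
    return random.choices(names, weights=weights, k=batch_size)

if __name__ == "__main__":
    batch = sample_batch_sources(16)
    print(batch)  # e.g. ['target_robot_data', 'vision_language_data', ...]
```

In practice, which sources to include and how heavily to weight them is exactly the kind of design choice the study discussed below evaluates.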
And yet, until now, there has not been a sizable study of what constitutes good practice for co-training! We talk to Fanqi Lin and Jose Barreiros about their new work, a massive study that evaluated 89 policies over thousands of rollouts to tell us which forms of co-training are most useful for robotics.
Watch episode 70 of RoboPapers, with Michael Cho and Chris Paxton, now!
Abstract
Large behavior models have shown strong dexterous manipulation capabilities by extending imitation learning to large-scale training on multi-task robot data, yet their generalization remains limited by insufficient robot data coverage. To expand this coverage without costly additional data collection, recent work relies on co-training: jointly learning from target robot data and heterogeneous data modalities. However, how different co-training data modalities and strategies affect policy performance remains poorly understood. We present a large-scale empirical study examining five co-training data modalities: standard vision-language data, dense language annotations for robot trajectories, cross-embodiment robot data, human videos, and discrete robot action tokens, across single- and multi-phase training strategies. Our study leverages 4,000 hours of robot and human manipulation data and 50M vision-language samples to train vision-language-action policies. We evaluate 89 policies over 58,000 simulation rollouts and 2,835 real-world rollouts. Our results show that co-training with forms of vision-language and cross-embodiment robot data substantially improves generalization to distribution shifts, unseen tasks, and language following, while discrete action token variants yield no significant benefits. Combining effective modalities produces cumulative gains and enables rapid adaptation to unseen long-horizon dexterous tasks via fine-tuning. Training exclusively on robot data degrades the visiolinguistic understanding of the vision-language model backbone, while co-training with effective modalities restores these capabilities. Explicitly conditioning action generation on chain-of-thought traces learned from co-training data does not improve performance in our simulation benchmark. Together, these results provide practical guidance for building scalable generalist robot policies.
Learn More
Project page: https://co-training-lbm.github.io
arXiv: https://arxiv.org/abs/2602.01067
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
