
LEGATO: Cross-Embodiment Imitation Using a Grasping Tool
Mingyo Seo, H. Andy Park, Shenli Yuan, Yuke Zhu†, Luis Sentis†
Abstract
Cross-embodiment imitation learning enables policies trained on specific embodiments to transfer across different robots, unlocking the potential for large-scale imitation learning that is both cost-effective and highly reusable. This paper presents LEGATO, a cross-embodiment imitation learning framework for visuomotor skill transfer across varied kinematic morphologies. We introduce a handheld gripper that unifies action and observation spaces, allowing tasks to be defined consistently across robots. We train visuomotor policies on task demonstrations collected with this gripper through imitation learning, applying a transformation to a motion-invariant space when computing the training loss. Gripper motions generated by the policies are retargeted into high-degree-of-freedom whole-body motions using inverse kinematics for deployment across diverse embodiments. Our evaluations in simulation and real-robot experiments highlight the framework's effectiveness in learning and transferring visuomotor skills across various robots.
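The motion-invariant transformation mentioned above can be illustrated with a minimal sketch. One common choice (an assumption here, not necessarily LEGATO's exact formulation) is to express each gripper pose relative to the previous one, so the training loss is computed on pose deltas that do not depend on the world frame of the recording:

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_prev, T_curr):
    """Express T_curr in the frame of T_prev. Relative poses are
    invariant to the world frame in which the trajectory was recorded,
    giving one simple motion-invariant representation for a loss.
    Hypothetical helper for illustration only."""
    return np.linalg.inv(T_prev) @ T_curr

# Example: a gripper that moves 10 cm along world z between two steps.
T0 = pose(np.eye(3), [1.0, 0.0, 0.0])
T1 = pose(np.eye(3), [1.0, 0.0, 0.1])
delta = relative_pose(T0, T1)
# The delta translation [0, 0, 0.1] is the same no matter where the
# world origin or demonstration platform was.
```

Because the delta is frame-independent, the same demonstration can supervise policies later retargeted to robots with different base placements and kinematics.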
Citing
@article{seo2024legato,
title={LEGATO: Cross-Embodiment Imitation Using a Grasping Tool},
author={Seo, Mingyo and Park, H. Andy and Yuan, Shenli and Zhu, Yuke
and Sentis, Luis},
journal={IEEE Robotics and Automation Letters (RA-L)},
year={2025}
}
