Moto: Latent Motion Token as the Bridging Language for Robot Manipulation
Abstract
Recent developments in Large Language Models pre-trained on extensive corpora have shown significant success in various natural language processing tasks with minimal fine-tuning. This success offers new promise for robotics, which has long been constrained by the high cost of action-labeled data. We ask: given that abundant video data containing interaction-related knowledge is available as a rich "corpus", can a similar generative pre-training approach be effectively applied to enhance robot learning? The key challenge is to identify an effective representation for autoregressive pre-training that benefits robot manipulation tasks. Inspired by the way humans learn new skills through observing dynamic environments, we propose that effective robotic learning should emphasize motion-related knowledge, which is closely tied to low-level actions and is hardware-agnostic, facilitating the transfer of learned motions to actual robot actions. To this end, we introduce Moto, which converts video content into latent Motion Token sequences via a Latent Motion Tokenizer, learning a bridging "language" of motion from videos in an unsupervised manner. We pre-train Moto-GPT through motion token autoregression, enabling it to capture diverse visual motion knowledge. After pre-training, Moto-GPT demonstrates the promising ability to produce semantically interpretable motion tokens, predict plausible motion trajectories, and assess trajectory rationality through output likelihood. To transfer the learned motion priors to real robot actions, we implement a co-fine-tuning strategy that seamlessly bridges latent motion token prediction and real robot control. Extensive experiments show that the fine-tuned Moto-GPT exhibits superior robustness and efficiency on robot manipulation benchmarks, underscoring its effectiveness in transferring knowledge from video data to downstream visual manipulation tasks.
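The abstract describes a pipeline with three parts: a Latent Motion Tokenizer that discretizes inter-frame motion from video without action labels, autoregressive pre-training of Moto-GPT on the resulting motion-token sequences, and a co-fine-tuning stage that adds real robot action prediction while keeping the motion-token objective. The PyTorch sketch below only illustrates that flow; every module name, dimension, and loss (e.g. `LatentMotionTokenizer`, `MotoGPT`, the 7-DoF action head) is an assumption for illustration, not the authors' implementation (see the official repository for that).

```python
# Minimal sketch of the Moto pipeline described above. All module names,
# dimensions, and losses are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentMotionTokenizer(nn.Module):
    """Hypothetical VQ-style tokenizer: maps the change between two consecutive
    frames to a few discrete latent motion tokens. In the paper it is trained
    unsupervised on video; that training objective is omitted here."""

    def __init__(self, frame_dim=512, codebook_size=128, code_dim=64, tokens_per_step=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * frame_dim, 256), nn.GELU(),
            nn.Linear(256, tokens_per_step * code_dim),
        )
        self.codebook = nn.Embedding(codebook_size, code_dim)
        self.tokens_per_step = tokens_per_step
        self.code_dim = code_dim

    @torch.no_grad()
    def forward(self, frame_t, frame_t1):
        # (B, frame_dim) features of two consecutive frames -> (B, tokens_per_step) token ids
        z = self.encoder(torch.cat([frame_t, frame_t1], dim=-1))
        z = z.view(-1, self.tokens_per_step, self.code_dim)
        codes = self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1)
        return torch.cdist(z, codes).argmin(dim=-1)  # nearest codebook entry


class MotoGPT(nn.Module):
    """Hypothetical decoder-only transformer over latent motion tokens."""

    def __init__(self, codebook_size=128, d_model=256, n_layers=4, n_heads=4,
                 max_len=256, action_dim=7):
        super().__init__()
        self.tok_emb = nn.Embedding(codebook_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, codebook_size)   # next motion-token prediction
        self.action_head = nn.Linear(d_model, action_dim)  # used during co-fine-tuning

    def forward(self, token_ids):
        T = token_ids.size(1)
        x = self.tok_emb(token_ids) + self.pos_emb(torch.arange(T, device=token_ids.device))
        causal = torch.triu(torch.full((T, T), float("-inf"), device=token_ids.device), diagonal=1)
        h = self.blocks(x, mask=causal)
        return self.lm_head(h), self.action_head(h)


tokenizer, model = LatentMotionTokenizer(), MotoGPT()

# Stage 1: autoregressive pre-training on motion tokens extracted from raw video.
frames = torch.randn(4, 9, 512)  # dummy per-frame visual features: (B, T, frame_dim)
ids = torch.cat([tokenizer(frames[:, t], frames[:, t + 1])
                 for t in range(frames.size(1) - 1)], dim=1)  # (B, num_motion_tokens)
logits, _ = model(ids[:, :-1])
pretrain_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))

# Stage 2: co-fine-tuning: keep the motion-token objective and jointly predict real robot actions.
actions = torch.randn(4, ids.size(1) - 1, 7)  # dummy action labels aligned with the token sequence
logits, pred_actions = model(ids[:, :-1])
finetune_loss = (F.cross_entropy(logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))
                 + F.mse_loss(pred_actions, actions))
print(pretrain_loss.item(), finetune_loss.item())
```

Conditioning on camera observations and language instructions, the tokenizer's own training objective, and other details of the real method are left out; the sketch only mirrors the two-stage flow the abstract names (motion-token autoregression, then co-fine-tuning with real actions).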
Community
Project page: https://chenyi99.github.io/moto/
Code: https://github.com/TencentARC/Moto
Paper: https://arxiv.org/pdf/2412.04445
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Latent Action Pretraining from Videos (2024)
- Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning (2024)
- Pre-trained Visual Dynamics Representations for Efficient Policy Learning (2024)
- MotionGPT-2: A General-Purpose Motion-Language Model for Motion Generation and Understanding (2024)
- On the Surprising Effectiveness of Attention Transfer for Vision Transformers (2024)
- IGOR: Image-GOal Representations are the Atomic Control Units for Foundation Models in Embodied AI (2024)
- CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision (2024)