---
task_categories:
- robotics
datasets:
- USC-GVL/Humanoid-X
pipeline_tag: robotics
---

<div align="center">
<h1> <img src="assets/icon.png" width="50" /> UH-1 </h1>
</div>
<h5 align="center">
<a href="https://usc-gvl.github.io/UH-1/">🏠 Homepage</a> | <a href="https://huggingface.co/datasets/USC-GVL/Humanoid-X">⭐ Dataset</a> | <a href="https://huggingface.co/datasets/USC-GVL/UH-1">🤗 Models</a> | <a href="">📑 Paper</a> | <a href="">💻 Code</a>
</h5>

This repo contains the official model checkpoints for the paper "[Learning from Massive Human Videos for Universal Humanoid Pose Control]()".

If you like our project, please give us a star ⭐ on GitHub for the latest updates.

Our model checkpoints consist of a transformer model, `UH1_Transformer.pth`, and an action tokenizer, `UH1_Action_Tokenizer.pth`. For usage and inference instructions, please refer to the [code]().
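
As a quick sanity check before running the official inference code, the checkpoints can be opened as ordinary PyTorch files. The snippet below is a minimal sketch, assuming the `.pth` files load with `torch.load`; the actual model classes and the full inference pipeline are defined in the official code release linked above.

```python
import torch

# A minimal sketch (an assumption, not the official pipeline): open the
# released checkpoints as standard PyTorch files for quick inspection.
transformer_ckpt = torch.load("UH1_Transformer.pth", map_location="cpu")
tokenizer_ckpt = torch.load("UH1_Action_Tokenizer.pth", map_location="cpu")

# Checkpoints are typically either a plain state dict or a dict that
# wraps one; unwrap a "state_dict" key if present, then print a few
# parameter names and shapes to verify the download.
state = transformer_ckpt.get("state_dict", transformer_ckpt)
for name, tensor in list(state.items())[:10]:
    print(name, tuple(tensor.shape))
```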

![UH-1 teaser](assets/teaser.png)

# UH-1 Model Architecture

![UH-1 model architecture](assets/model.png)

# UH-1 Real Robot Demo Results

![UH-1 real robot demo results](assets/realbot.png)

# Citation

If you find our work helpful, please cite us:

```bibtex

```