# 🦾 Heterogeneous Pre-trained Transformers

[Lirui Wang](https://liruiw.github.io/), [Xinlei Chen](https://xinleic.xyz/), [Jialiang Zhao](https://alanz.info/), [Kaiming He](https://people.csail.mit.edu/kaiming/)

Neural Information Processing Systems (Spotlight), 2024

You can find more details on our [project page](https://liruiw.github.io/hpt). A clean alternative implementation of HPT in Hugging Face can also be found [here](https://github.com/liruiw/lerobot/tree/hpt_squash/lerobot/common/policies/hpt).

**TL;DR:** HPT aligns different embodiments to a shared latent space and investigates scaling behavior in policy learning. Put a scalable transformer in the middle of your policy, and don't train from scratch!
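
To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the stem/trunk/head layout described above. All names and dimensions (`Stem`, `HPTPolicy`, `obs_dims`, `latent_dim`, the per-embodiment token count) are invented for illustration and are not this repository's actual API; see the code in this repo for the real implementation.

```python
import torch
import torch.nn as nn


class Stem(nn.Module):
    """Embodiment-specific stem: projects a raw observation vector
    (e.g. proprioception plus flattened visual features) into a fixed
    number of tokens in the shared latent space."""

    def __init__(self, obs_dim: int, latent_dim: int, num_tokens: int = 16):
        super().__init__()
        self.num_tokens = num_tokens
        self.latent_dim = latent_dim
        self.proj = nn.Linear(obs_dim, num_tokens * latent_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # (batch, obs_dim) -> (batch, num_tokens, latent_dim)
        return self.proj(obs).view(-1, self.num_tokens, self.latent_dim)


class HPTPolicy(nn.Module):
    """Stem -> shared trunk -> head. The trunk is the scalable,
    pre-trainable part shared across embodiments; stems and heads
    stay small and embodiment-specific."""

    def __init__(self, obs_dims: dict, action_dims: dict,
                 latent_dim: int = 256, depth: int = 6, num_heads: int = 8):
        super().__init__()
        self.stems = nn.ModuleDict(
            {name: Stem(d, latent_dim) for name, d in obs_dims.items()})
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=depth)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(latent_dim, a) for name, a in action_dims.items()})

    def forward(self, embodiment: str, obs: torch.Tensor) -> torch.Tensor:
        tokens = self.stems[embodiment](obs)   # align to the shared latent space
        latent = self.trunk(tokens)            # shared representation
        pooled = latent.mean(dim=1)            # average-pool the tokens
        return self.heads[embodiment](pooled)  # embodiment-specific actions


# Two hypothetical embodiments with different observation/action spaces.
policy = HPTPolicy(obs_dims={"arm": 32, "quadruped": 48},
                   action_dims={"arm": 7, "quadruped": 12})
actions = policy("arm", torch.randn(4, 32))  # shape: (4, 7)
```

The point of this layout is that the trunk weights can be pre-trained once on heterogeneous data and reused across robots, while only the lightweight stems and heads are trained per embodiment.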

If you find HPT useful in your research, please consider citing:
```
@inproceedings{wang2024hpt,
  author    = {Lirui Wang and Xinlei Chen and Jialiang Zhao and Kaiming He},
  title     = {Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers},
  booktitle = {Neural Information Processing Systems},
  year      = {2024}
}
```

## Contact

If you have any questions, feel free to contact me by email (liruiw@mit.edu). Enjoy!