Commit b0025af by stonet2000 (parent: 917568e)

Update README.md

README.md:
---
license: apache-2.0
language:
- en
tags:
- robotics
- reinforcement learning
- embodied ai
- computer vision
- simulation
size_categories:
- 1M<n<10M
task_categories:
- reinforcement-learning
- robotics
viewer: false
---

# ManiSkill Data

![teaser](https://github.com/haosulab/ManiSkill2/blob/main/figures/teaser_v2.jpg?raw=true)

[![PyPI version](https://badge.fury.io/py/mani-skill2.svg)](https://badge.fury.io/py/mani-skill2) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/1_quickstart.ipynb)
[![Docs status](https://img.shields.io/badge/docs-passing-brightgreen.svg)](https://haosulab.github.io/ManiSkill2)
[![Discord](https://img.shields.io/discord/996566046414753822?logo=discord)](https://discord.gg/x8yUZe5AdN)
<!-- [![Docs](https://github.com/haosulab/ManiSkill2/actions/workflows/gh-pages.yml/badge.svg)](https://haosulab.github.io/ManiSkill2) -->

ManiSkill is a unified benchmark for learning generalizable robotic manipulation skills, powered by [SAPIEN](https://sapien.ucsd.edu/). **It features 20 out-of-the-box task families with 2000+ diverse object models and 4M+ demonstration frames**. Moreover, it enables fast learning from visual input: **a CNN-based policy can collect samples at about 2000 FPS with 1 GPU and 16 processes on a workstation**. The benchmark can be used to study a wide range of algorithms: 2D & 3D vision-based reinforcement learning, imitation learning, sense-plan-act, etc.

This is the Hugging Face datasets page for all data related to [ManiSkill2](https://github.com/haosulab/ManiSkill2), including **assets, robot demonstrations, and pretrained models.** Note that ManiSkill and ManiSkill2 were previously separate; we are rebranding everything to just ManiSkill, and the Python package version tells you which iteration you are using.

For detailed information about ManiSkill, head over to our [GitHub repository](https://github.com/haosulab/ManiSkill2), [website](https://maniskill2.github.io/), [ICLR 2023 paper](https://arxiv.org/abs/2302.04659), or [documentation](https://maniskill.readthedocs.io/en/dev/).

**Note: to download the data you must use the mani_skill package as shown below; loading through the Hugging Face `datasets` library does not work as intended yet.**

## Assets

Some environments require you to download additional assets, which are stored here.

You can download task-specific assets by running

```
python -m mani_skill.utils.download_asset ${ENV_ID}
```

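If you need assets for several tasks at once, one option is to loop over the CLI above from a small script. A minimal sketch; the environment IDs below are only illustrative placeholders, so check the documentation for the actual list of IDs and asset groups:

```python
# Sketch: build the asset-download command for a few environment IDs.
# The env IDs here are illustrative examples; substitute the tasks you need.
import shlex
import subprocess

env_ids = ["PickClutterYCB-v0", "TurnFaucet-v0"]

commands = [f"python -m mani_skill.utils.download_asset {env_id}" for env_id in env_ids]
for cmd in commands:
    print(cmd)
    # Uncomment to actually run the download:
    # subprocess.run(shlex.split(cmd), check=True)
```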
## Demonstration Data

We provide a command-line tool (`mani_skill.utils.download_demo`) to download demonstrations from here.

```
# Download the demonstration dataset for a specific task
python -m mani_skill.utils.download_demo ${ENV_ID}
# Download the demonstration datasets for all rigid-body tasks to "./demos"
python -m mani_skill.utils.download_demo rigid_body -o ./demos
```

To learn how to use the demonstrations and see which environments are available, go to the [demonstrations documentation page](https://maniskill.readthedocs.io/en/dev/user_guide/datasets/datasets.html).


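Each demonstration dataset ships as an HDF5 trajectory file with a JSON metadata file alongside it. The sketch below shows how such metadata could be inspected; the schema here (an `env_info` record plus an `episodes` list) is an assumption modeled on the ManiSkill2 trajectory format, so verify the exact keys against the documentation:

```python
# Sketch: inspect the JSON metadata that accompanies a demonstration .h5 file.
# The schema below is an assumption based on ManiSkill2's trajectory format;
# check the dataset documentation for the real keys.
import json

# A minimal, hypothetical metadata record standing in for a real trajectory JSON:
meta_text = json.dumps({
    "env_info": {
        "env_id": "PickCube-v0",
        "max_episode_steps": 200,
        "env_kwargs": {"obs_mode": "none", "control_mode": "pd_joint_delta_pos"},
    },
    "episodes": [
        {"episode_id": 0, "reset_kwargs": {"seed": 0}, "elapsed_steps": 57},
        {"episode_id": 1, "reset_kwargs": {"seed": 1}, "elapsed_steps": 64},
    ],
})

meta = json.loads(meta_text)
print(meta["env_info"]["env_id"])                            # PickCube-v0
print(len(meta["episodes"]))                                 # 2
print(max(ep["elapsed_steps"] for ep in meta["episodes"]))   # 64
```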
## License

All rigid-body environments in ManiSkill are licensed under fully permissive licenses (e.g., Apache-2.0).

The assets are licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

## Citation

If you use ManiSkill or its assets, models, and demonstrations, please cite using the following BibTeX entry for now:

```
@inproceedings{gu2023maniskill2,
  title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},
  author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},
  booktitle={International Conference on Learning Representations},
  year={2023}
}
```

A ManiSkill3 BibTeX entry will be added later.