## Dataset Summary

This repository contains the official release of datasets for the [CoRL 2023](https://www.corl2023.org/) paper "MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations".

[**[Website]**](https://mimicgen.github.io) &nbsp;&nbsp; [**[Paper]**](https://openreview.net/forum?id=dk-2R1f_LR)

## Dataset Structure

Each dataset is an hdf5 file that is readily compatible with [robomimic](https://robomimic.github.io/) --- the structure is explained [here](https://robomimic.github.io/docs/datasets/overview.html#dataset-structure).
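As a minimal sketch of what walking one of these hdf5 files looks like: the snippet below builds a tiny stand-in file following the documented `data/demo_N` layout, then reads it back with h5py. The filename, the observation key, and the array shapes here are illustrative placeholders, not the actual contents of any MimicGen dataset.

```python
import h5py
import numpy as np

# Build a tiny stand-in file mimicking the robomimic layout
# (real MimicGen files contain many demos and richer observation keys).
with h5py.File("demo_sketch.hdf5", "w") as f:
    data = f.create_group("data")
    demo = data.create_group("demo_0")
    demo.create_dataset("actions", data=np.zeros((5, 7)))       # illustrative shape
    obs = demo.create_group("obs")
    obs.create_dataset("robot0_eef_pos", data=np.zeros((5, 3))) # illustrative key
    demo.attrs["num_samples"] = 5

# Walk the file the same way you would a downloaded dataset.
with h5py.File("demo_sketch.hdf5", "r") as f:
    demos = list(f["data"].keys())                  # ['demo_0']
    n_steps, act_dim = f["data/demo_0/actions"].shape
    obs_keys = list(f["data/demo_0/obs"].keys())
```

Each `demo_N` group holds per-timestep arrays (actions, observations), so iterating over `f["data"].keys()` is the usual entry point for loading trajectories.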

As described in the paper, each task has a default reset distribution (D_0). Source human demonstrations (usually 10 demos) were collected on this distribution, and MimicGen was subsequently used to generate large datasets (usually 1000 demos) across different task reset distributions (e.g. D_0, D_1, D_2), objects, and robots.

The datasets are split into different types:

- **source**: source human datasets used to generate all data -- this generally consists of 10 human demonstrations collected on the D_0 variant for each task.
- **core**: datasets generated with MimicGen for different task reset distributions. These correspond to the core set of results in Figure 4 of the paper.
- **object**: datasets generated with MimicGen for different objects. These correspond to the results in Appendix G of the paper.
- **robot**: datasets generated with MimicGen for different robots. These correspond to the results in Appendix F of the paper.
- **large_interpolation**: datasets generated with MimicGen using much larger interpolation segments. These correspond to the results in Appendix H of the paper.

**Note**: We found that the large_interpolation datasets pose a significant challenge for imitation learning, and that there is substantial room for improvement on them.

## Citation