---
language:
- en
license: mit
tags:
- robotics
- manipulation
- rearrangement
- computer-vision
- reinforcement-learning
- imitation-learning
- rgbd
- rgb
- depth
- low-level-control
- whole-body-control
- home-assistant
- simulation
- maniskill
annotations_creators:
- machine-generated  # Generated from RL policies with filtering
language_creators:
- machine-generated
language_details: en-US
pretty_name: ManiSkill-HAB TidyHouse Dataset
size_categories:
- 1M<n<10M  # Dataset has 18K episodes with 3.6M transitions
# source_datasets:  # None, original
task_categories:
- robotics
- reinforcement-learning
task_ids:
- grasping
- task-planning

configs:
  - config_name: pick-002_master_chef_can
    data_files:
    - split: trajectories
      path: pick/002_master_chef_can.h5
    - split: metadata
      path: pick/002_master_chef_can.json
  
  - config_name: pick-003_cracker_box
    data_files:
    - split: trajectories
      path: pick/003_cracker_box.h5
    - split: metadata
      path: pick/003_cracker_box.json
  
  - config_name: pick-004_sugar_box
    data_files:
    - split: trajectories
      path: pick/004_sugar_box.h5
    - split: metadata
      path: pick/004_sugar_box.json
  
  - config_name: pick-005_tomato_soup_can
    data_files:
    - split: trajectories
      path: pick/005_tomato_soup_can.h5
    - split: metadata
      path: pick/005_tomato_soup_can.json
  
  - config_name: pick-007_tuna_fish_can
    data_files:
    - split: trajectories
      path: pick/007_tuna_fish_can.h5
    - split: metadata
      path: pick/007_tuna_fish_can.json
  
  - config_name: pick-008_pudding_box
    data_files:
    - split: trajectories
      path: pick/008_pudding_box.h5
    - split: metadata
      path: pick/008_pudding_box.json
  
  - config_name: pick-009_gelatin_box
    data_files:
    - split: trajectories
      path: pick/009_gelatin_box.h5
    - split: metadata
      path: pick/009_gelatin_box.json
  
  - config_name: pick-010_potted_meat_can
    data_files:
    - split: trajectories
      path: pick/010_potted_meat_can.h5
    - split: metadata
      path: pick/010_potted_meat_can.json
  
  - config_name: pick-024_bowl
    data_files:
    - split: trajectories
      path: pick/024_bowl.h5
    - split: metadata
      path: pick/024_bowl.json
  
  - config_name: place-002_master_chef_can
    data_files:
    - split: trajectories
      path: place/002_master_chef_can.h5
    - split: metadata
      path: place/002_master_chef_can.json
  
  - config_name: place-003_cracker_box
    data_files:
    - split: trajectories
      path: place/003_cracker_box.h5
    - split: metadata
      path: place/003_cracker_box.json
  
  - config_name: place-004_sugar_box
    data_files:
    - split: trajectories
      path: place/004_sugar_box.h5
    - split: metadata
      path: place/004_sugar_box.json
  
  - config_name: place-005_tomato_soup_can
    data_files:
    - split: trajectories
      path: place/005_tomato_soup_can.h5
    - split: metadata
      path: place/005_tomato_soup_can.json
  
  - config_name: place-007_tuna_fish_can
    data_files:
    - split: trajectories
      path: place/007_tuna_fish_can.h5
    - split: metadata
      path: place/007_tuna_fish_can.json
  
  - config_name: place-008_pudding_box
    data_files:
    - split: trajectories
      path: place/008_pudding_box.h5
    - split: metadata
      path: place/008_pudding_box.json
  
  - config_name: place-009_gelatin_box
    data_files:
    - split: trajectories
      path: place/009_gelatin_box.h5
    - split: metadata
      path: place/009_gelatin_box.json
  
  - config_name: place-010_potted_meat_can
    data_files:
    - split: trajectories
      path: place/010_potted_meat_can.h5
    - split: metadata
      path: place/010_potted_meat_can.json
  
  - config_name: place-024_bowl
    data_files:
    - split: trajectories
      path: place/024_bowl.h5
    - split: metadata
      path: place/024_bowl.json

---

# ManiSkill-HAB TidyHouse Dataset

**[Paper](https://arxiv.org/abs/2412.13211)** 
| **[Website](https://arth-shukla.github.io/mshab)** 
| **[Code](https://github.com/arth-shukla/mshab)** 
| **[Models](https://huggingface.co/arth-shukla/mshab_checkpoints)** 
| **[(Full) Dataset](https://arth-shukla.github.io/mshab/#dataset-section)** 
| **[Supplementary](https://sites.google.com/view/maniskill-hab)**


<!-- Provide a quick summary of the dataset. -->

Whole-body, low-level control/manipulation demonstration dataset for ManiSkill-HAB TidyHouse.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Demonstration dataset for ManiSkill-HAB TidyHouse. Each subtask/object combination (e.g., pick 002_master_chef_can) contains 1,000 successful episodes (200 transitions per episode) gathered using [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) filtered for safe robot behavior with a rule-based event labeling system.

TidyHouse contains the Pick and Place subtasks. Relative to the other MS-HAB long-horizon tasks (PrepareGroceries, SetTable), TidyHouse Pick is approximately medium difficulty, while TidyHouse Place is medium-to-hard difficulty (on an easy/medium/hard scale).

### Related Datasets

Full information about the MS-HAB datasets (size, difficulty, links, etc.), including the other long-horizon tasks, is available [on the ManiSkill-HAB website](https://arth-shukla.github.io/mshab/#dataset-section).

- [ManiSkill-HAB PrepareGroceries Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-PrepareGroceries)
- [ManiSkill-HAB SetTable Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-SetTable)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

This dataset can be used to train vision-based learning-from-demonstrations and imitation learning methods, which can be evaluated in the [MS-HAB environments](https://github.com/arth-shukla/mshab). It may also be useful as synthetic data for computer vision tasks.
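As a minimal sketch of fetching one subtask/object split from the Hub: the repo id below is an assumption inferred from the related TidyHouse/PrepareGroceries/SetTable dataset links on this card, not something the card confirms, and the download itself requires `huggingface_hub` and network access.

```python
REPO_ID = "arth-shukla/MS-HAB-TidyHouse"  # assumed repo id, unverified


def split_filenames(subtask: str, obj: str) -> tuple[str, str]:
    """Filenames within the repo for one subtask/object combination."""
    return f"{subtask}/{obj}.h5", f"{subtask}/{obj}.json"


def fetch_split(subtask: str, obj: str) -> tuple[str, str]:
    """Download both files for a split and return their local paths."""
    # Imported here so the path helpers work without the package installed.
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    h5_name, json_name = split_filenames(subtask, obj)
    h5_path = hf_hub_download(REPO_ID, h5_name, repo_type="dataset")
    json_path = hf_hub_download(REPO_ID, json_name, repo_type="dataset")
    return h5_path, json_path
```

For example, `fetch_split("pick", "002_master_chef_can")` would retrieve the trajectory file and its metadata for that config.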

### Out-of-Scope Use

While blind, state-based policies can be trained on this dataset, vision-based policies are recommended so the agent can handle collisions and obstructions.

## Dataset Structure

Each subtask/object combination has two files: `[SUBTASK]/[OBJECT].json` and `[SUBTASK]/[OBJECT].h5`. The JSON file contains episode metadata, event labels, etc., while the HDF5 file contains the demonstration data.
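The layout above can be sketched in Python. The path helpers follow the naming scheme the card describes; the `h5py` inspection code and the assumption that the HDF5 file holds one group per trajectory are illustrative guesses, not structure confirmed by this card.

```python
import json
import os


def episode_files(subtask: str, obj: str, root: str = "") -> tuple[str, str]:
    """Return (trajectory .h5 path, metadata .json path) for a combination."""
    base = os.path.join(root, subtask, obj)
    return base + ".h5", base + ".json"


def load_episode_metadata(json_path: str) -> dict:
    """Load the episode metadata / event labels for one split."""
    with open(json_path) as f:
        return json.load(f)


if __name__ == "__main__":
    h5_path, json_path = episode_files("pick", "002_master_chef_can")
    if os.path.exists(h5_path):
        import h5py  # pip install h5py

        meta = load_episode_metadata(json_path)
        with h5py.File(h5_path, "r") as f:
            # Assumed layout: one top-level group per trajectory.
            for name in f.keys():
                print(name, list(f[name].keys()))
```

The inspection block only runs when the files are present locally, so the helpers remain usable on their own.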

## Dataset Creation

<!-- TODO (arth): link paper appendix, maybe html, for the event labeling system -->
The data is gathered using [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) filtered for safe robot behavior with a rule-based event labeling system.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The dataset is purely synthetic.

While MS-HAB supports high-quality ray-traced rendering, this dataset uses ManiSkill's default rendering for data generation for the sake of efficiency. However, users can generate their own data with the [data generation code](https://github.com/arth-shukla/mshab/blob/main/mshab/utils/gen/gen_data.py).

## Citation

```bibtex
@article{shukla2024maniskillhab,
	author		 = {Arth Shukla and Stone Tao and Hao Su},
	title        = {ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks},
	journal      = {CoRR},
	volume       = {abs/2412.13211},
	year         = {2024},
	url          = {https://doi.org/10.48550/arXiv.2412.13211},
	doi          = {10.48550/ARXIV.2412.13211},
	eprinttype   = {arXiv},
	eprint       = {2412.13211},
	timestamp    = {Mon, 09 Dec 2024 01:29:24 +0100},
	biburl       = {https://dblp.org/rec/journals/corr/abs-2412-13211.bib},
	bibsource    = {dblp computer science bibliography, https://dblp.org}
}
```