---
language:
- en
license: mit
tags:
- robotics
- manipulation
- rearrangement
- computer-vision
- reinforcement-learning
- imitation-learning
- rgbd
- rgb
- depth
- low-level-control
- whole-body-control
- home-assistant
- simulation
- maniskill
annotations_creators:
- machine-generated # Generated from RL policies with filtering
language_creators:
- machine-generated
language_details: en-US
pretty_name: ManiSkill-HAB TidyHouse Dataset
size_categories:
- 1M<n<10M # Dataset has 18K episodes with 3.6M transitions
# source_datasets: # None, original
task_categories:
- robotics
- reinforcement-learning
task_ids:
- grasping
- task-planning

configs:
- config_name: pick-002_master_chef_can
  data_files:
  - split: trajectories
    path: pick/002_master_chef_can.h5
  - split: metadata
    path: pick/002_master_chef_can.json

- config_name: pick-003_cracker_box
  data_files:
  - split: trajectories
    path: pick/003_cracker_box.h5
  - split: metadata
    path: pick/003_cracker_box.json

- config_name: pick-004_sugar_box
  data_files:
  - split: trajectories
    path: pick/004_sugar_box.h5
  - split: metadata
    path: pick/004_sugar_box.json

- config_name: pick-005_tomato_soup_can
  data_files:
  - split: trajectories
    path: pick/005_tomato_soup_can.h5
  - split: metadata
    path: pick/005_tomato_soup_can.json

- config_name: pick-007_tuna_fish_can
  data_files:
  - split: trajectories
    path: pick/007_tuna_fish_can.h5
  - split: metadata
    path: pick/007_tuna_fish_can.json

- config_name: pick-008_pudding_box
  data_files:
  - split: trajectories
    path: pick/008_pudding_box.h5
  - split: metadata
    path: pick/008_pudding_box.json

- config_name: pick-009_gelatin_box
  data_files:
  - split: trajectories
    path: pick/009_gelatin_box.h5
  - split: metadata
    path: pick/009_gelatin_box.json

- config_name: pick-010_potted_meat_can
  data_files:
  - split: trajectories
    path: pick/010_potted_meat_can.h5
  - split: metadata
    path: pick/010_potted_meat_can.json

- config_name: pick-024_bowl
  data_files:
  - split: trajectories
    path: pick/024_bowl.h5
  - split: metadata
    path: pick/024_bowl.json

- config_name: place-002_master_chef_can
  data_files:
  - split: trajectories
    path: place/002_master_chef_can.h5
  - split: metadata
    path: place/002_master_chef_can.json

- config_name: place-003_cracker_box
  data_files:
  - split: trajectories
    path: place/003_cracker_box.h5
  - split: metadata
    path: place/003_cracker_box.json

- config_name: place-004_sugar_box
  data_files:
  - split: trajectories
    path: place/004_sugar_box.h5
  - split: metadata
    path: place/004_sugar_box.json

- config_name: place-005_tomato_soup_can
  data_files:
  - split: trajectories
    path: place/005_tomato_soup_can.h5
  - split: metadata
    path: place/005_tomato_soup_can.json

- config_name: place-007_tuna_fish_can
  data_files:
  - split: trajectories
    path: place/007_tuna_fish_can.h5
  - split: metadata
    path: place/007_tuna_fish_can.json

- config_name: place-008_pudding_box
  data_files:
  - split: trajectories
    path: place/008_pudding_box.h5
  - split: metadata
    path: place/008_pudding_box.json

- config_name: place-009_gelatin_box
  data_files:
  - split: trajectories
    path: place/009_gelatin_box.h5
  - split: metadata
    path: place/009_gelatin_box.json

- config_name: place-010_potted_meat_can
  data_files:
  - split: trajectories
    path: place/010_potted_meat_can.h5
  - split: metadata
    path: place/010_potted_meat_can.json

- config_name: place-024_bowl
  data_files:
  - split: trajectories
    path: place/024_bowl.h5
  - split: metadata
    path: place/024_bowl.json
---

# ManiSkill-HAB TidyHouse Dataset

**[Paper (arXiv TBA)]()**
| **[Website](https://arth-shukla.github.io/mshab)**
| **[Code](https://github.com/arth-shukla/mshab)**
| **[Models](https://huggingface.co/arth-shukla/mshab_checkpoints)**
| **[(Full) Dataset](https://arth-shukla.github.io/mshab/#dataset-section)**
| **[Supplementary](https://sites.google.com/view/maniskill-hab)**

Whole-body, low-level control/manipulation demonstration dataset for ManiSkill-HAB TidyHouse.

## Dataset Details

### Dataset Description

Demonstration dataset for ManiSkill-HAB TidyHouse. Each subtask/object combination (e.g., Pick 002_master_chef_can) has 1000 successful episodes (200 samples per demonstration), gathered from [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) and filtered for safe robot behavior using a rule-based event labeling system.

TidyHouse contains the Pick and Place subtasks. Relative to the other MS-HAB long-horizon tasks (PrepareGroceries, SetTable), TidyHouse Pick is approximately medium difficulty, while TidyHouse Place is medium-to-hard (on a scale of easy, medium, hard).

### Related Datasets

Full information about the MS-HAB datasets (size, difficulty, links, etc.), including the other long-horizon tasks, is available [on the ManiSkill-HAB website](https://arth-shukla.github.io/mshab/#dataset-section).

- [ManiSkill-HAB PrepareGroceries Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-PrepareGroceries)
- [ManiSkill-HAB SetTable Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-SetTable)

## Uses

### Direct Use

This dataset can be used to train vision-based learning-from-demonstrations and imitation learning methods, which can be evaluated in the [MS-HAB environments](https://github.com/arth-shukla/mshab). It may also be useful as synthetic data for computer vision tasks.
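
For scripted downloads, something like the following should work with `huggingface_hub`. This is a minimal sketch: the repo ID below is an assumption patterned after the related MS-HAB dataset names above.

```python
# Sketch: download one subtask/object combination from the Hub.
# NOTE: repo_id is an assumption based on the related MS-HAB dataset names.
from huggingface_hub import hf_hub_download

repo_id = "arth-shukla/MS-HAB-TidyHouse"  # hypothetical repo ID

# Demonstration trajectories (HDF5) and episode metadata (JSON)
h5_path = hf_hub_download(repo_id, "pick/002_master_chef_can.h5", repo_type="dataset")
meta_path = hf_hub_download(repo_id, "pick/002_master_chef_can.json", repo_type="dataset")
print(h5_path, meta_path)
```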

### Out-of-Scope Use

While blind, state-based policies can be trained on this dataset, vision-based policies are recommended, as they can learn to handle collisions and obstructions.

## Dataset Structure

Each subtask/object combination has two files, `[SUBTASK]/[OBJECT].json` and `[SUBTASK]/[OBJECT].h5`. The JSON file contains episode metadata, event labels, etc., while the HDF5 file contains the demonstration data.
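
Once downloaded, a file pair can be inspected with standard tools. Below is a minimal sketch; the internal HDF5 layout is not documented here, so the code lists the top-level keys rather than assuming them:

```python
# Sketch: inspect one subtask/object combination's files.
import json

import h5py  # pip install h5py

# Episode metadata, event labels, etc.
with open("pick/002_master_chef_can.json") as f:
    metadata = json.load(f)

# Demonstration data; list the top-level groups rather than assuming key names
with h5py.File("pick/002_master_chef_can.h5", "r") as demos:
    print(list(demos.keys()))
```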

## Dataset Creation

<!-- TODO (arth): link paper appendix, maybe html, for the event labeling system -->
The data is gathered from [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) and filtered for safe robot behavior using a rule-based event labeling system.

## Bias, Risks, and Limitations

The dataset is purely synthetic.

While MS-HAB supports high-quality ray-traced rendering, this dataset uses ManiSkill's default rendering for efficiency. However, users can generate their own data with the [data generation code](https://github.com/arth-shukla/mshab/blob/main/mshab/utils/gen/gen_data.py).

<!-- TODO (arth): citation -->
<!-- ## Citation [TBA]

[Citation TBA] -->