Tr0612 committed (verified)
Commit 6ca540f · 1 Parent(s): 7a6e6dc

Update README.md

Files changed (1): README.md (+51, -164)

README.md CHANGED
@@ -5,143 +5,73 @@ task_categories:
  - reinforcement-learning
  tags:
  - metaworld
  - robotics
  - manipulation
  - multi-task
- - r3m
  - vision-language
- - imitation
  size_categories:
- - 1K<n<10K
  language:
  - en
- pretty_name: Short-MetaWorld Dataset
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: state
-     dtype:
-       sequence: float32
-   - name: action
-     dtype:
-       sequence: float32
-   - name: prompt
-     dtype: string
-   - name: task_name
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 1900000000
-     num_examples: 40000
-   download_size: 1900000000
-   dataset_size: 1900000000
  ---

- # Short-MetaWorld Dataset

  ## Overview

- Short-MetaWorld is a curated subset of Meta-World containing the **Multi-Task 10 (MT10)** and **Meta-Learning 10 (ML10)** tasks, with **100 successful trajectories per task** and **20 steps per trajectory**. It is designed for multi-task robot learning, imitation learning, and vision-language robotics research.
-
- ## 🚀 Quick Start
-
- ```python
- from short_metaworld_loader import load_short_metaworld
- from torch.utils.data import DataLoader
-
- # Load the dataset
- dataset = load_short_metaworld("./", image_size=224)
-
- # Create a DataLoader
- dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
-
- # Get a sample
- sample = dataset[0]
- print(f"Image shape: {sample['image'].shape}")
- print(f"State: {sample['state']}")
- print(f"Action: {sample['action']}")
- print(f"Task: {sample['task_name']}")
- print(f"Prompt: {sample['prompt']}")
- ```
-
- ## 📁 Dataset Structure
-
- ```
- short-MetaWorld/
- ├── README.txt                      # Original dataset documentation
  ├── short-MetaWorld/
- │   ├── img_only/                   # 224x224 RGB images
- │   │   ├── button-press-topdown-v2/
- │   │   │   ├── 0/                  # Trajectory 0
- │   │   │   │   ├── 0.jpg           # Step 0
- │   │   │   │   ├── 1.jpg           # Step 1
- │   │   │   │   └── ...
- │   │   │   ├── 1/                  # Trajectory 1
- │   │   │   │   └── ...
- │   │   ├── door-open-v2/
- │   │   └── ...
- │   └── r3m-processed/              # R3M processed features
- │       └── r3m_MT10_20/
- │           ├── button-press-topdown-v2.pkl
- │           ├── door-open-v2.pkl
- │           └── ...
- ├── r3m-processed/                  # Additional R3M data
- │   └── r3m_MT10_20/
- ├── mt50_task_prompts.json          # Task descriptions & prompts
- ├── short_metaworld_loader.py       # Dataset loader
- └── requirements.txt
- ```
-
- ## 🎯 Tasks Included
-
- ### Multi-Task 10 (MT10)
- - `button-press-topdown-v2` - Press button from above
- - `door-open-v2` - Open door by pulling handle
- - `drawer-close-v2` - Close drawer
- - `drawer-open-v2` - Open drawer
- - `peg-insert-side-v2` - Insert peg into hole
- - `pick-place-v2` - Pick up object and place on target
-
- ### Meta-Learning 10 (ML10)
- Additional tasks for meta-learning evaluation.
-
- ## 📊 Data Format
-
- - **Images**: 224×224 RGB images in JPEG format
- - **States**: 7-dimensional robot state vectors (joint positions)
- - **Actions**: 4-dimensional continuous control actions
- - **Prompts**: Natural language task descriptions in 3 styles:
-   - `simple`: Brief task description
-   - `detailed`: Comprehensive task explanation
-   - `task_specific`: Context-specific variations
- - **R3M Features**: Pre-processed visual representations from the R3M model
-
- ## 💾 Loading the Dataset
-
- The dataset comes with a loader (`short_metaworld_loader.py`):
-
- ```python
- # Load specific tasks
- mt10_tasks = [
-     "reach-v2", "push-v2", "pick-place-v2", "door-open-v2",
-     "drawer-open-v2", "drawer-close-v2", "button-press-topdown-v2",
-     "button-press-v2", "button-press-wall-v2", "button-press-topdown-wall-v2",
- ]
- dataset = load_short_metaworld("./", tasks=mt10_tasks)
-
- # Load all available tasks
- dataset = load_short_metaworld("./")
-
- # Get dataset statistics
- stats = dataset.get_dataset_stats()
- print(f"Total steps: {stats['total_steps']}")
- print(f"Tasks: {stats['tasks']}")
-
- # Get task-specific prompts
- task_info = dataset.get_task_info("pick-place-v2")
- print(task_info['detailed'])  # Detailed task description
- ```

  ## 🔬 Research Applications

@@ -153,49 +83,6 @@ This dataset is designed for:
  - **Meta-Learning**: Adapt quickly to new manipulation tasks
  - **Robot Policy Training**: End-to-end visuomotor control

- ## 📈 Dataset Statistics
-
- - **Total trajectories**: 2,000 (100 per task × 20 tasks)
- - **Total steps**: ~40,000 (20 steps per trajectory)
- - **Image resolution**: 224×224 RGB
- - **State dimension**: 7 (robot joint positions)
- - **Action dimension**: 4 (continuous control)
- - **Dataset size**: ~1.9GB
-
- ## 🛠️ Installation
-
- ```bash
- pip install torch torchvision Pillow numpy
- ```
-
- ## 📖 Citation
-
- If you use this dataset, please cite:
-
- ```bibtex
- @inproceedings{yu2020meta,
-   title={Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning},
-   author={Yu, Tianhe and Quillen, Deirdre and He, Zhanpeng and Julian, Ryan and Hausman, Karol and Finn, Chelsea and Levine, Sergey},
-   booktitle={Conference on Robot Learning},
-   pages={1094--1100},
-   year={2020},
-   organization={PMLR}
- }
-
- @inproceedings{nair2022r3m,
-   title={R3M: A Universal Visual Representation for Robot Manipulation},
-   author={Nair, Suraj and Rajeswaran, Aravind and Kumar, Vikash and Finn, Chelsea and Gupta, Abhinav},
-   booktitle={Conference on Robot Learning},
-   pages={892--902},
-   year={2023},
-   organization={PMLR}
- }
- ```
-
- ## 📧 Contact
-
- - Original dataset: liangzx@connect.hku.hk
- - Questions about this upload: Open an issue in the dataset repository
 
  ## ⚖️ License

@@ -5,143 +5,73 @@ task_categories:
  - reinforcement-learning
  tags:
  - metaworld
+ - short-metaworld
  - robotics
  - manipulation
  - multi-task
  - vision-language
+ - imitation-learning
+ - r3m
  size_categories:
+ - 10K<n<100K
  language:
  - en
+ pretty_name: Short-MetaWorld-VLA (v2+v3)
  ---

+ # Short-MetaWorld-VLA (v2 + v3)

  ## Overview

+ This dataset is a compact Meta-World trajectory collection used for vision-language-action (VLA) style training and evaluation.
+
+ The current structure includes:
+ - **24 task files** in `r3m_MT10_20` (12 v2 + 12 v3)
+ - **100 trajectories per task**
+ - **20 or 50 steps per trajectory** (task/version dependent)
+ - **84,000 total step samples** from the PKL action/state streams (12 × 100 × 20 + 12 × 100 × 50; see the counting sketch below)
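As a sanity check on those counts, here is a minimal sketch that tallies steps straight from the task PKLs. It assumes the files sit under `short-MetaWorld/r3m-processed/r3m_MT10_20/` as in the tree below, and that each file unpickles to a per-trajectory list carrying an `actions` array; those field names are an assumption for illustration, not a documented schema.

```python
# Minimal counting sketch -- the per-file layout ("actions" per trajectory)
# is an assumption for illustration, not a documented schema.
import glob
import pickle

total = 0
for path in sorted(glob.glob("short-MetaWorld/r3m-processed/r3m_MT10_20/*-v[23].pkl")):
    with open(path, "rb") as f:
        trajectories = pickle.load(f)   # assumed: list of per-trajectory dicts
    total += sum(len(t["actions"]) for t in trajectories)  # one action per step

print(total)  # expected: 84000 for the full 24-task dump
```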
 
+ ## Dataset Structure
+
+ short-metaworld-vla/
+ ├── mt50_task_prompts.json
+ ├── short_metaworld_loader.py
+ ├── requirements.txt
  ├── short-MetaWorld/
+ │   ├── img_only/
+ │   │   └── <task>/<trajectory>/<step>.jpg
+ │   └── r3m-processed/
+ │       └── r3m_MT10_20/
+ │           ├── <task>-v2.pkl
+ │           ├── <task>-v3.pkl
+ │           └── data.pkl
+ └── r3m-processed/
+     └── r3m_MT10_20/
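For orientation, a small sketch of how a frame path is resolved under `img_only/`; the root folder name is taken from the tree above and may differ in your checkout.

```python
# Resolve the JPEG for (task, trajectory, step) under img_only/.
from pathlib import Path

ROOT = Path("short-metaworld-vla/short-MetaWorld")

def frame_path(task: str, trajectory: int, step: int) -> Path:
    return ROOT / "img_only" / task / str(trajectory) / f"{step}.jpg"

print(frame_path("button-press-topdown-v3", 0, 0))
# short-metaworld-vla/short-MetaWorld/img_only/button-press-topdown-v3/0/0.jpg
```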
+
+ ## Data Format
+
+ Per step (see the assembly sketch below):
+ - `image`: RGB frame (`.jpg`)
+ - `state`: **39D** float state vector
+ - `action`: **4D** float action vector
+ - `prompt`: natural-language task instruction (from `mt50_task_prompts.json`)
+ - `task_name`: task identifier (e.g. `button-press-topdown-v3`)
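To make the record concrete, a minimal sketch of one assembled step sample. The `mt50_task_prompts.json` key layout and the placeholder state/action values are assumptions; the real vectors come from the task PKLs.

```python
# One step sample with the shapes listed above. The mt50_task_prompts.json
# key layout is an assumption for illustration, not a documented API.
import json
import numpy as np
from PIL import Image

task, traj, step = "button-press-topdown-v3", 0, 0

with open("mt50_task_prompts.json") as f:
    prompts = json.load(f)

sample = {
    "image": np.asarray(Image.open(
        f"short-MetaWorld/img_only/{task}/{traj}/{step}.jpg")),  # H x W x 3 uint8
    "state": np.zeros(39, dtype=np.float32),   # placeholder; real 39D state from the task PKL
    "action": np.zeros(4, dtype=np.float32),   # placeholder; real 4D action from the task PKL
    "prompt": prompts.get(task, ""),           # assumed: one instruction keyed by task name
    "task_name": task,
}
```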
+
+ ## Tasks
+
+ Includes both `-v2` and `-v3` variants such as:
+ - basketball
+ - button-press-topdown
+ - door-open
+ - drawer-open / drawer-close
+ - peg-insert-side
+ - pick-place
+ - push
+ - reach
+ - sweep
+ - window-open / window-close
+ - plus v3-only tasks in this dump (e.g. `handle-pull-v3`, `stick-pull-v3`); the sketch below recovers the exact lists from the PKL filenames
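Since the card names only examples, the exact lists can be read off the PKL filenames; a short sketch, assuming the `r3m_MT10_20` path from the structure above:

```python
# Derive the v2/v3 task lists from the PKL filenames.
from pathlib import Path

pkl_dir = Path("short-MetaWorld/r3m-processed/r3m_MT10_20")
v2_tasks = sorted(p.stem for p in pkl_dir.glob("*-v2.pkl"))
v3_tasks = sorted(p.stem for p in pkl_dir.glob("*-v3.pkl"))
print(len(v2_tasks), len(v3_tasks))  # expected: 12 12 per the overview
```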

  ## 🔬 Research Applications

  This dataset is designed for:
  - **Meta-Learning**: Adapt quickly to new manipulation tasks
  - **Robot Policy Training**: End-to-end visuomotor control (see the behavior-cloning sketch below)
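As one concrete instance of the policy-training use case, a minimal behavior-cloning sketch that regresses the 4D action from a visual feature and the 39D state. The 2048-D feature size matches R3M's ResNet-50 output; everything else (names, sizes, wiring) is illustrative, not this dataset's loader API.

```python
# Minimal behavior-cloning sketch: visual feature + 39D state -> 4D action.
# Illustrative only; the dataset/loader wiring is assumed, not documented API.
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Linear(2048 + 39, 256),  # 2048-D R3M feature concatenated with the state
    nn.ReLU(),
    nn.Linear(256, 4),          # 4D continuous action
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(feat: torch.Tensor, state: torch.Tensor, action: torch.Tensor) -> float:
    """One gradient step of MSE regression onto the demonstrated action."""
    pred = policy(torch.cat([feat, state], dim=-1))
    loss = nn.functional.mse_loss(pred, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```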

  ## ⚖️ License