# RoboInter-Data: Intermediate Representation Annotations for Robot Manipulation

Rich, dense, per-frame **intermediate representation annotations** for robot manipulation, built on top of [DROID](https://droid-dataset.github.io/) and [RH20T](https://rh20t.github.io/). Developed as part of the [RoboInter](https://github.com/InternRobotics/RoboInter) project. Try the [**online demo**](https://huggingface.co/spaces/wz7in/robointer-demo).

The annotations cover 230k+ episodes and include subtasks, primitive skills, segmentation masks, gripper/object bounding boxes, placement proposals, affordance boxes, grasp poses, traces, and contact points, each with a quality rating (Primary / Secondary).

## Dataset Structure

```
RoboInter-Data/
├── Annotation_with_action_lerobotv21/      # [Main] LeRobot v2.1 format (actions + annotations + videos)
│   ├── lerobot_droid_anno/                 # DROID: 152,986 episodes
│   └── lerobot_rh20t_anno/                 # RH20T: 82,894 episodes
├── Annotation_pure/                        # Annotation-only LMDB (no actions/videos)
│   └── annotations/                        # 35 GB, all 235,920 episodes
├── Annotation_raw/                         # Original unprocessed annotations
│   ├── droid_annotation.pkl                # Raw DROID annotations (~20 GB)
│   ├── rh20t_annotation.pkl                # Raw RH20T annotations (~11 GB)
│   └── segmentation_npz.zip.*              # Segmentation masks (~50 GB, split archives)
├── Annotation_demo_app/                    # Small demo subset for online visualization
│   ├── demo_data/                          # LMDB annotations for 20 sampled videos
│   └── videos/                             # 20 MP4 videos
├── Annotation_demo_larger/                 # Larger demo subset for local visualization
│   ├── demo_annotations/                   # LMDB annotations for 120 videos
│   └── videos/                             # 120 MP4 videos
├── All_Keys_of_Primary.json                # Episode names where all annotations are Primary quality
├── RoboInter_Data_Qsheet.json              # Per-episode quality ratings for each annotation type
├── RoboInter_Data_Qsheet_value_stats.json  # Distribution statistics of quality ratings
├── RoboInter_Data_RawPath_Qmapping.json    # Mapping: original data source path -> episode splits & quality
├── range_nop.json                          # Non-idle frame ranges for all 230k episodes
├── range_nop_droid_all.json                # Non-idle frame ranges (DROID only)
├── range_nop_rh20t_all.json                # Non-idle frame ranges (RH20T only)
├── val_video.json                          # Validation set: 7,246 episode names
└── VideoID_2_SegmentationNPZ.json          # Episode video ID -> segmentation NPZ file path mapping
```

---

## 1. Annotation_with_action_lerobotv21 (Recommended)

The primary data format. Contains **actions + observations + annotations** in [LeRobot v2.1](https://github.com/huggingface/lerobot) format (parquet + MP4 videos), ready for policy training.

### Directory Layout

```
lerobot_droid_anno/   (or lerobot_rh20t_anno/)
├── meta/
│   ├── info.json                   # Dataset metadata (fps=10, features, etc.)
│   ├── episodes.jsonl              # Episode information
│   └── tasks.jsonl                 # Task/instruction mapping
├── data/
│   └── chunk-{NNN}/                # Parquet files (1,000 episodes per chunk)
│       └── episode_{NNNNNN}.parquet
└── videos/
    └── chunk-{NNN}/
        ├── observation.images.primary/
        │   └── episode_{NNNNNN}.mp4
        └── observation.images.wrist/
            └── episode_{NNNNNN}.mp4
```

### Data Fields

| Category | Field | Shape / Type | Description |
|----------|-------|--------------|-------------|
| **Core** | `action` | (7,) float64 | Delta EEF: [dx, dy, dz, drx, dry, drz, gripper] |
| | `state` | (7,) float64 | EEF state: [x, y, z, rx, ry, rz, gripper] |
| | `observation.images.primary` | (180, 320, 3) video | Primary camera RGB |
| | `observation.images.wrist` | (180, 320, 3) video | Wrist camera RGB |
| **Annotation** | `annotation.instruction_add` | string | Structured task language instruction |
| | `annotation.substask` | string | Current subtask description |
| | `annotation.primitive_skill` | string | Primitive skill label (pick, place, push, ...) |
| | `annotation.object_box` | JSON `[[x1,y1],[x2,y2]]` | Manipulated object bounding box |
| | `annotation.gripper_box` | JSON `[[x1,y1],[x2,y2]]` | Gripper bounding box |
| | `annotation.trace` | JSON `[[x,y], ...]` | Future 10-step gripper trajectory |
| | `annotation.contact_frame` | JSON int | Frame index when the gripper contacts the object |
| | `annotation.contact_points` | JSON `[x, y]` | Contact point pixel coordinates |
| | `annotation.affordance_box` | JSON `[[x1,y1],[x2,y2]]` | Gripper box at the contact frame |
| | `annotation.state_affordance` | JSON `[x,y,z,rx,ry,rz]` | 6D EEF state at the contact frame |
| | `annotation.placement_proposal` | JSON `[[x1,y1],[x2,y2]]` | Target placement bounding box |
| | `annotation.time_clip` | JSON `[[s,e], ...]` | Subtask temporal segments |
| **Quality** | `Q_annotation.*` | string | Quality rating: `"Primary"` / `"Secondary"` / `""` |

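The `annotation.*` fields are JSON-encoded strings and may be empty when an annotation is unavailable. A minimal decoding sketch; the helper name is ours, not part of the dataset API:

```python
import json

def decode_annotation(value):
    """Decode a JSON-encoded annotation field; return None when it is empty."""
    if not value:  # None or ""
        return None
    return json.loads(value)

# e.g. decode_annotation('[[60, 50], [100, 80]]') -> [[60, 50], [100, 80]]
```
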
### Quick Start

The dataloader is provided in the RoboInter [codebase](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/lerobot_dataloader).

```python
from lerobot_dataloader import create_dataloader

# Single dataset
dataloader = create_dataloader(
    "path/to/Annotation_with_action_lerobotv21/lerobot_droid_anno",
    batch_size=32,
    action_horizon=16,
)

for batch in dataloader:
    images = batch["observation.images.primary"]  # (B, H, W, 3)
    actions = batch["action"]                     # (B, 16, 7)
    trace = batch["annotation.trace"]             # JSON strings
    skill = batch["annotation.primitive_skill"]   # List[str]
    break

# Multiple datasets (DROID + RH20T)
dataloader = create_dataloader(
    [
        "path/to/lerobot_droid_anno",
        "path/to/lerobot_rh20t_anno",
    ],
    batch_size=32,
    action_horizon=16,
)
```

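Boxes and traces decode into pixel coordinates on the (180, 320) frames, so they can be overlaid directly for inspection. A minimal sketch using OpenCV (our choice of drawing library, not a dataset dependency); the helper and its colors are ours, and `frame` is assumed to be a `uint8` NumPy array:

```python
import json
import cv2
import numpy as np

def draw_frame_annotations(frame, gripper_box, object_box, trace):
    """Overlay JSON-encoded box/trace annotations on an RGB frame (H, W, 3)."""
    frame = np.ascontiguousarray(frame)
    for box_json, color in ((gripper_box, (0, 255, 0)),   # gripper box
                            (object_box, (255, 0, 0))):   # object box
        if box_json:
            (x1, y1), (x2, y2) = json.loads(box_json)
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
    if trace:
        for x, y in json.loads(trace):  # next 10 gripper waypoints
            cv2.circle(frame, (int(x), int(y)), 2, (0, 0, 255), -1)
    return frame
```
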
### Filtering by Quality & Frame Range

```python
from lerobot_dataloader import create_dataloader, QAnnotationFilter

dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    batch_size=32,
    range_nop_path="path/to/range_nop.json",  # Remove idle frames
    q_filters=[
        QAnnotationFilter("Q_annotation.trace", ["Primary"]),
        QAnnotationFilter("Q_annotation.gripper_box", ["Primary", "Secondary"]),
    ],
)
```

For full dataloader documentation and transforms, see [RoboInterData/lerobot_dataloader](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/lerobot_dataloader).

### Format Conversion Scripts

The LeRobot v2.1 data was converted using:

- **DROID**: [convert_droid_to_lerobot_anno_fast.py](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/convert_to_lerobot/convert_droid_to_lerobot_anno_fast.py)
- **RH20T**: [convert_rh20t_to_lerobot_anno_fast.py](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/convert_to_lerobot/convert_rh20t_to_lerobot_anno_fast.py)

---

## 2. Annotation_pure (Annotation-Only LMDB)

Contains **only the intermediate representation annotations** (no action data, no videos) stored as a single LMDB database. Useful for lightweight access to annotations or as input to the LeRobot conversion pipeline. Format conversion scripts and a matching lightweight dataloader are provided in [lmdb_tool](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/lmdb_tool). High-resolution videos can be downloaded by following [DROID hr_video_reader](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/hr_video_reader) and the [RH20T API](https://github.com/rh20t/rh20t_api).

### Data Format

Each LMDB key is an episode name (e.g., `"3072_exterior_image_1_left"`). The value is a dict mapping frame indices to per-frame annotation dicts:

```python
{
    0: {  # frame_id
        "time_clip": [[0, 132], [132, 197], [198, 224]],    # subtask segments
        "instruction_add": "pick up the red cup",           # language instruction
        "substask": "reach for the cup",                    # current subtask
        "primitive_skill": "reach",                         # skill label
        "segmentation": None,                               # (stored separately in Annotation_raw)
        "object_box": [[45, 30], [120, 95]],                # manipulated object bbox
        "placement_proposal": [[150, 80], [220, 140]],      # target placement bbox
        "trace": [[x, y], ...],                             # next 10 gripper waypoints
        "gripper_box": [[60, 50], [100, 80]],               # gripper bbox
        "contact_frame": 101,                               # contact event frame (-1 if past contact)
        "state_affordance": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6], # 6D EEF state at contact
        "affordance_box": [[62, 48], [98, 82]],             # gripper bbox at contact frame
        "contact_points": [[75, 65], [85, 65]],             # contact pixel coordinates
        ...
    },
    1: { ... },
    ...
}
```

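The `time_clip` segments can be used to locate the subtask active at a given frame. A small lookup sketch; the boundary convention is inferred from the example above, so verify it against your own episodes:

```python
def subtask_index(frame_id, time_clip):
    """Return the index of the first time_clip segment containing frame_id."""
    for i, (start, end) in enumerate(time_clip):
        if start <= frame_id <= end:  # assumes inclusive bounds
            return i
    return None

# subtask_index(150, [[0, 132], [132, 197], [198, 224]]) -> 1
```
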
### Reading LMDB

```python
import lmdb
import pickle

lmdb_path = "Annotation_pure/annotations"
env = lmdb.open(lmdb_path, readonly=True, lock=False, readahead=False)

with env.begin() as txn:
    # Iterate over all episode keys
    cursor = txn.cursor()
    for key, value in cursor:
        episode_name = key.decode("utf-8")
        episode_data = pickle.loads(value)

        # Access frame 0
        frame_0 = episode_data[0]
        print(f"{episode_name}: {frame_0['instruction_add']}")
        print(f"  object_box: {frame_0['object_box']}")
        print(f"  trace: {frame_0['trace'][:3]}...")  # first 3 waypoints
        break

env.close()
```

### CLI Inspection Tool

```bash
cd RoboInter/RoboInterData/lmdb_tool

# Basic info
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action info

# View a specific episode
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action item --key "3072_exterior_image_1_left"

# Field coverage statistics
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action stats --key "3072_exterior_image_1_left"

# Multi-episode summary
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action summary --limit 100
```

---

## 3. Annotation_raw (Original Annotations)

The original, unprocessed annotation files before conversion to LMDB. These files are large and slow to load.

| File | Size | Description |
|------|------|-------------|
| `droid_annotation.pkl` | ~20 GB | Raw DROID intermediate representation annotations |
| `rh20t_annotation.pkl` | ~11 GB | Raw RH20T intermediate representation annotations |
| `segmentation_npz.zip.*` | ~50 GB | Object segmentation masks (split archives) |

### Extracting the Segmentation Archives

```bash
cd /RoboInter-Data/Annotation_raw
cat segmentation_npz.zip.* > segmentation_npz.zip
unzip segmentation_npz.zip
```

### Reading the Raw PKL

```python
import pickle

with open("Annotation_raw/droid_annotation.pkl", "rb") as f:
    droid_data = pickle.load(f)  # Warning: ~20 GB, takes several minutes

# droid_data[episode_key] contains raw intermediate representation data,
# including: all_language, all_gripper_box, all_grounding_box, all_contact_point, all_traj, etc.
```

> To convert raw PKL to the LMDB format used in `Annotation_pure`, see the conversion script in the [RoboInter repository](https://github.com/InternRobotics/RoboInter).

---

## 4. Demo Subsets (Annotation_demo_app & Annotation_demo_larger)

Pre-packaged subsets for quick visualization with the [RoboInterData-Demo](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData-Demo) Gradio app. Both subsets share the same LMDB annotation + MP4 video structure.

| Subset | Videos | Size | Use Case |
|--------|--------|------|----------|
| `Annotation_demo_app` | 20 | ~929 MB | HuggingFace Spaces [online demo](https://huggingface.co/spaces/wz7in/robointer-demo) |
| `Annotation_demo_larger` | 120 | ~12 GB | Local visualization with more examples |

### Running the Visualizer

```bash
git clone https://github.com/InternRobotics/RoboInter.git
cd RoboInter/RoboInterData-Demo

# Option A: small demo subset (for Spaces)
ln -s /path/to/Annotation_demo_app/demo_data ./demo_data
ln -s /path/to/Annotation_demo_app/videos ./videos

# Option B: larger demo subset (for local use)
ln -s /path/to/Annotation_demo_larger/demo_annotations ./demo_data
ln -s /path/to/Annotation_demo_larger/videos ./videos

pip install -r requirements.txt
python app.py
# Open http://localhost:7860
```

The visualizer supports all annotation types: object segmentation masks, gripper/object/affordance bounding boxes, trajectory traces, contact points, grasp poses, and language annotations (instructions, subtasks, primitive skills).

---

## 5. Metadata JSON Files

### Quality & Filtering

| File | Description |
|------|-------------|
| `All_Keys_of_Primary.json` | List of 65,515 episode names where **all** annotation types are rated Primary quality. |
| `RoboInter_Data_Qsheet.json` | Per-episode quality ratings for every annotation type. Each entry contains `Q_instruction_add`, `Q_substask`, `Q_trace`, etc., with values `"Primary"`, `"Secondary"`, or `null`. |
| `RoboInter_Data_Qsheet_value_stats.json` | Distribution of quality ratings across all episodes. |
| `RoboInter_Data_RawPath_Qmapping.json` | Mapping from original data source paths to episode splits and their quality ratings. |

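For example, to select episodes by quality without the dataloader, the quality sheet can be filtered directly. A sketch that assumes `RoboInter_Data_Qsheet.json` is a dict keyed by episode name, consistent with the per-episode description above:

```python
import json

with open("All_Keys_of_Primary.json") as f:
    all_primary = set(json.load(f))  # episodes where every annotation is Primary

with open("RoboInter_Data_Qsheet.json") as f:
    qsheet = json.load(f)

# Episodes whose trace annotation is rated Primary
trace_primary = [name for name, q in qsheet.items() if q.get("Q_trace") == "Primary"]
print(len(all_primary), len(trace_primary))
```
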
### Frame Ranges (Idle Frame Removal)

| File | Description |
|------|-------------|
| `range_nop.json` | Non-idle frame ranges for all 235,920 episodes (DROID + RH20T). |
| `range_nop_droid_all.json` | Non-idle frame ranges for DROID episodes only. |
| `range_nop_rh20t_all.json` | Non-idle frame ranges for RH20T episodes only. |

Format: `{ "episode_name": [start_frame, end_frame, valid_length] }`

```python
import json

with open("range_nop.json") as f:
    range_nop = json.load(f)

# Example: "3072_exterior_image_1_left": [12, 217, 206]
# Valid action frames are 12~217 (206 frames in total);
# frames 0~11 and 218+ are idle/stationary.
```

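Applied to an episode, idle frames can be dropped with a few lines. In the example above the end index is inclusive, since 217 - 12 + 1 = 206; the helper below is ours:

```python
def valid_frames(episode_name, range_nop):
    """Return the non-idle frame indices for an episode."""
    start, end, n_valid = range_nop[episode_name]
    frames = list(range(start, end + 1))  # end is inclusive: 217 - 12 + 1 = 206
    assert len(frames) == n_valid
    return frames
```
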
### Other

| File | Description |
|------|-------------|
| `val_video.json` | List of 7,246 episode names reserved for the validation set. |
| `VideoID_2_SegmentationNPZ.json` | Mapping from episode video ID to the corresponding segmentation NPZ file path in `Annotation_raw/segmentation_npz`; `null` if no segmentation is available. |

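A sketch for locating and opening an episode's segmentation masks; it assumes the episode name used elsewhere in this README is a valid video ID, and since the array names inside each NPZ are not documented here, it simply enumerates them:

```python
import json
import numpy as np

with open("VideoID_2_SegmentationNPZ.json") as f:
    seg_map = json.load(f)

npz_path = seg_map.get("3072_exterior_image_1_left")  # None if no segmentation
if npz_path:
    with np.load(npz_path) as npz:
        print(npz.files)  # inspect the stored array names
```
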
---

## Related Resources

| Resource | Link |
|----------|------|
| Project | [RoboInter](https://github.com/InternRobotics/RoboInter) |
| VQA Dataset | [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA) |
| VLM Checkpoints | [RoboInter-VLM](https://huggingface.co/InternRobotics/RoboInter-VLM) |
| LMDB Tool | [RoboInterData/lmdb_tool](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/lmdb_tool) |
| High-Resolution Video Reader | [RoboInterData/hr_video_reader](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/hr_video_reader) |
| LeRobot DataLoader | [RoboInterData/lerobot_dataloader](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/lerobot_dataloader) |
| LeRobot Conversion | [RoboInterData/convert_to_lerobot](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/convert_to_lerobot) |
| Demo Visualizer | [RoboInterData-Demo](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData-Demo) |
| Online Demo | [HuggingFace Space](https://huggingface.co/spaces/wz7in/robointer-demo) |
| Raw DROID Dataset | [droid-dataset.github.io](https://droid-dataset.github.io/) |
| Raw RH20T Dataset | [rh20t.github.io](https://rh20t.github.io/) |

## License

Please refer to the original dataset licenses of [RoboInter](https://github.com/InternRobotics/RoboInter), [DROID](https://droid-dataset.github.io/), and [RH20T](https://rh20t.github.io/).