Raffael-Kultyshev committed on
Commit 4688ec8 · verified · 1 Parent(s): 8f7b2fb

Update README: 147 episodes (removed 147-149), updated schema & structure

Files changed (1): README.md (+56 −108)

README.md CHANGED
@@ -25,16 +25,16 @@ RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for hum

 ---

- ## 📊 Dataset Overview

 | Metric | Value |
 |--------|-------|
- | Episodes | 97 |
- | Total Frames | ~28,000 |
 | FPS | 30 |
 | Tasks | 10 manipulation tasks |
- | Total Duration | ~15.5 minutes |
- | Avg Episode Length | ~9.6 seconds |

 ### Task Distribution
 
@@ -51,101 +51,82 @@ RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for hum
 | Task 9 | Screw the cap on your bottle | 10 |
 | Task 10 | Pick up two objects, put on bed | 10 |

 ---

- ## 📁 Repository Structure

 ```
 humanoid-robots-training-dataset/
 ├── data/
- │   ├── chunk-000/                  # Parquet files (97 episodes)
- │   │   ├── episode_000000.parquet
- │   │   ├── episode_000001.parquet
 │   │   └── ...
 ├── videos/
- │   ├── chunk-000/rgb/              # MP4 videos (synchronized)
- │   │   ├── episode_000000.mp4
 │   │   └── ...
 ├── meta/                            # Metadata & Annotations
 │   ├── info.json                    # Dataset configuration (LeRobot format)
 │   ├── stats.json                   # Feature min/max/mean/std statistics
 │   ├── events.json                  # Disturbance & recovery annotations
 ```

 ---

- ## 🎯 Data Schema

 ### Parquet Columns (per frame)

 | Column | Type | Description |
 |--------|------|-------------|
- | `episode_index` | int64 | Episode number (0-96) |
 | `frame_index` | int64 | Frame within episode |
 | `timestamp` | float64 | Time in seconds |
 | `language_instruction` | string | Task description |
- | `observation.state` | float[252] | 21 hand joints × 2 hands × 6 DoF |
- | `action` | float[252] | Same as state (for imitation learning) |
- | `observation.images.rgb` | struct | Video path + timestamp |
-
- ### 6-DoF Hand Pose Format
-
- Each joint has 6 values: `[x_cm, y_cm, z_cm, yaw_deg, pitch_deg, roll_deg]`

 **Coordinate System:**
 - Origin: Camera (iPhone TrueDepth)
 - X: Right (positive)
- - Y: Down (positive)
 - Z: Forward (positive, into scene)

 ---

- ## 🏷️ Motion Semantics Annotations

 **File:** `meta/annotations_motion_v1_frames.json`

 Coarse temporal segmentation with motion intent, phase, and error labels.

- ### Annotation Schema
-
- ```json
- {
-   "episode_id": "Task1_Vid2",
-   "segments": [
-     {
-       "start_frame": 54,
-       "end_frame_exclusive": 140,
-       "motion_type": "grasp",        // What action is being performed
-       "temporal_phase": "start",     // start | contact | manipulate | end
-       "actor": "both_hands",         // left_hand | right_hand | both_hands
-       "target": {
-         "type": "cloth_region",      // cloth_region | object | surface
-         "value": "bottom_edge"       // Specific target identifier
-       },
-       "state": {
-         "stage": "unfolded",         // Task-specific state
-         "flatness": "wrinkled",      // For folding tasks only
-         "symmetry": "asymmetric"     // For folding tasks only
-       },
-       "error": "none"                // misalignment | slip | drop | none
-     }
-   ]
- }
- ```
-
 ### Motion Types
 `grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`

- ### Why Motion Annotations?
- - **Temporal Structure**: Know when manipulation phases begin/end
- - **Intent Understanding**: What the human intends to do, not just kinematics
- - **Error Detection**: Labeled failure modes (slip, drop, misalignment)
- - **Training Signal**: Richer supervision for imitation learning
-
 ---

- ## 📋 Events Metadata

 **File:** `meta/events.json`
 
@@ -170,15 +151,7 @@ Disturbances and recovery actions for select episodes.

 ---

- ## 📈 Depth Quality Metrics
-
- | Metric | Description | Dataset Average |
- |--------|-------------|-----------------|
- | `valid_depth_pct` | % frames with valid depth at hand | 95.5% ✅ |
-
- ---
-
- ## 🚀 Usage

 ### With LeRobot
 
@@ -187,63 +160,39 @@ from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

 dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

- # Access episode
 episode = dataset[0]
- state = episode["observation.state"]      # [252] hand pose (both hands)
- rgb = episode["observation.images.rgb"]   # Video frame
- task = episode["language_instruction"]    # Task description
 ```

- ### Loading Motion Annotations

 ```python
- import json
 from huggingface_hub import hf_hub_download

- # Download annotations
 path = hf_hub_download(
     repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
-     filename="meta/annotations_motion_v1_frames.json",
     repo_type="dataset"
 )
-
- with open(path) as f:
-     annotations = json.load(f)
-
- # Get segments for Task1
- task1_episodes = annotations["tasks"]["Task1"]["episodes"]
- for ep in task1_episodes:
-     print(f"{ep['episode_id']}: {len(ep['segments'])} segments")
- ```
-
- ### Combining Pose + Annotations
-
- ```python
- # Get frame-level motion labels
- def get_motion_label(frame_idx, segments):
-     for seg in segments:
-         if seg["start_frame"] <= frame_idx < seg["end_frame_exclusive"]:
-             return seg["motion_type"], seg["temporal_phase"]
-     return None, None
-
- # Example: label each frame
- for frame_idx in range(episode["frame_index"].max()):
-     motion, phase = get_motion_label(frame_idx, episode_annotations["segments"])
-     if motion:
-         print(f"Frame {frame_idx}: {motion} ({phase})")
 ```

 ---

- ## 📖 Citation

 If you use this dataset in your research, please cite:

 ```bibtex
- @dataset{dynamic_intelligence_2024,
   author = {Dynamic Intelligence},
   title = {Egocentric Human Motion Annotation Dataset},
-   year = {2024},
   publisher = {Hugging Face},
   url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
 }
@@ -251,25 +200,24 @@ If you use this dataset in your research, please cite:

 ---

- ## 📧 Contact

- **Email:** shayan@dynamicintelligence.company
 **Organization:** [Dynamic Intelligence](https://dynamicintelligence.company)

 ---

- ## 🖼️ Hand Landmark Reference

 ![Hand Landmarks](https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset/resolve/main/assets/hand_landmarks.png)

- Each hand has 21 tracked joints. The `observation.state` contains 6-DoF (x, y, z, yaw, pitch, roll) for each joint.

 ---

- ## 👁️ Visualizer Tips

- When using the [DI Hand Pose Sample Dataset Viewer](https://huggingface.co/spaces/DynamicIntelligence/dynamic_intelligence_sample_data):

- - **Enable plots**: Click the white checkbox next to joint names (e.g., `left_thumb_cmc_yaw_deg`) to show that data in the graph
- - **Why not all enabled by default?**: To prevent browser lag, only a few plots are active initially
 - **Full data access**: All joint data is available in the parquet files under `Files and versions`
 

 ---

+ ## Dataset Overview

 | Metric | Value |
 |--------|-------|
+ | Episodes | 147 |
+ | Total Frames | ~72,000 |
 | FPS | 30 |
 | Tasks | 10 manipulation tasks |
+ | Total Duration | ~40 minutes |
+ | Avg Episode Length | ~16.3 seconds |

 ### Task Distribution

 | Task 9 | Screw the cap on your bottle | 10 |
 | Task 10 | Pick up two objects, put on bed | 10 |

+ > **Note:** Task distribution is approximate and will be updated with per-episode language instructions.
+
 ---

+ ## Repository Structure

 ```
 humanoid-robots-training-dataset/
 ├── data/
+ │   ├── chunk-000/                  # Parquet files (episodes 0-99)
+ │   │   ├── episode_000000.parquet
+ │   │   └── ...
+ │   └── chunk-001/                  # Parquet files (episodes 100-146)
+ │       ├── episode_000100.parquet
+ │       └── ...
 ├── videos/
+ │   ├── chunk-000/rgb/              # MP4 videos (episodes 0-99)
+ │   │   ├── episode_000000.mp4
+ │   │   └── ...
+ │   └── chunk-001/rgb/              # MP4 videos (episodes 100-146)
+ │       ├── episode_000100.mp4
+ │       └── ...
 ├── meta/                            # Metadata & Annotations
 │   ├── info.json                    # Dataset configuration (LeRobot format)
 │   ├── stats.json                   # Feature min/max/mean/std statistics
 │   ├── events.json                  # Disturbance & recovery annotations
+ │   └── annotations_motion_v1_frames.json  # Motion semantic annotations
+
+ └── README.md
+ ```
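To sanity-check this layout against the live repo, the file listing can be grouped by chunk directory. A minimal sketch: the helper `chunks_of` is ours, not part of the dataset tooling, and the toy listing below stands in for a real call to `huggingface_hub.list_repo_files`.

```python
# Real listing (requires network):
# from huggingface_hub import list_repo_files
# files = list_repo_files("DynamicIntelligence/humanoid-robots-training-dataset",
#                         repo_type="dataset")

def chunks_of(files):
    """Group data/*.parquet paths by their chunk directory."""
    parquets = [f for f in files if f.startswith("data/") and f.endswith(".parquet")]
    return sorted({f.split("/")[1] for f in parquets})

# Toy listing mirroring the tree above.
files = [
    "data/chunk-000/episode_000000.parquet",
    "data/chunk-001/episode_000100.parquet",
    "videos/chunk-000/rgb/episode_000000.mp4",
]
print(chunks_of(files))  # ['chunk-000', 'chunk-001']
```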
+
 ---

+ ## Data Schema

 ### Parquet Columns (per frame)

 | Column | Type | Description |
 |--------|------|-------------|
+ | `episode_index` | int64 | Episode number (0-146) |
 | `frame_index` | int64 | Frame within episode |
 | `timestamp` | float64 | Time in seconds |
 | `language_instruction` | string | Task description |
+ | `observation.camera_pose` | float[6] | Camera 6-DoF (x, y, z, roll, pitch, yaw) |
+ | `observation.left_hand` | float[9] | Left hand keypoints (wrist + thumb + index) |
+ | `observation.right_hand` | float[9] | Right hand keypoints (wrist + index + middle) |
+ | `action.camera_delta` | float[6] | Camera delta 6-DoF |
+ | `action.left_hand_delta` | float[9] | Left hand delta keypoints |
+ | `action.right_hand_delta` | float[9] | Right hand delta keypoints |
+ | `rgb` | video | Synchronized RGB video frame |
108
+ ### 6-DoF Format

 **Coordinate System:**
 - Origin: Camera (iPhone TrueDepth)
 - X: Right (positive)
+ - Y: Down (positive)
 - Z: Forward (positive, into scene)

 ---
 

+ ## Motion Semantics Annotations

 **File:** `meta/annotations_motion_v1_frames.json`

 Coarse temporal segmentation with motion intent, phase, and error labels.

 ### Motion Types
 `grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`
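For frame-level supervision, a linear scan over segments is enough. This sketch assumes each segment carries the `start_frame`, `end_frame_exclusive`, `motion_type`, and `temporal_phase` keys described for `meta/annotations_motion_v1_frames.json`; check the shipped JSON before relying on these names.

```python
def motion_label(frame_idx, segments):
    """Return (motion_type, temporal_phase) for a frame, or (None, None)
    if the frame falls outside every annotated segment."""
    for seg in segments:
        if seg["start_frame"] <= frame_idx < seg["end_frame_exclusive"]:
            return seg["motion_type"], seg["temporal_phase"]
    return None, None

# Toy segment list in the assumed schema.
segments = [{"start_frame": 54, "end_frame_exclusive": 140,
             "motion_type": "grasp", "temporal_phase": "start"}]
print(motion_label(60, segments))   # ('grasp', 'start')
print(motion_label(10, segments))   # (None, None)
```

Note that `end_frame_exclusive` makes the half-open interval convention explicit, so adjacent segments can share a boundary frame index without overlapping.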
 ---

+ ## Events Metadata

 **File:** `meta/events.json`

 ---

+ ## Usage
 

 ### With LeRobot

 ```python
 from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

 dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

 episode = dataset[0]
+ state = episode["observation.camera_pose"]  # [6] camera 6-DoF
+ rgb = episode["observation.images.rgb"]     # Video frame
+ task = episode["language_instruction"]      # Task description
 ```
168
 
169
+ ### Direct Parquet Access
170
 
171
  ```python
172
+ import pandas as pd
173
  from huggingface_hub import hf_hub_download
174
 
 
175
  path = hf_hub_download(
176
  repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
177
+ filename="data/chunk-000/episode_000000.parquet",
178
  repo_type="dataset"
179
  )
180
+ df = pd.read_parquet(path)
181
+ print(df.columns.tolist())
182
+ print(f"Frames: {len(df)}")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
183
  ```
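The `timestamp` column makes per-episode checks easy, e.g. confirming the stated 30 FPS. A minimal sketch; the toy DataFrame below stands in for the `df = pd.read_parquet(...)` result above:

```python
import pandas as pd

# Toy episode: 300 frames at 30 FPS (timestamps in seconds).
df = pd.DataFrame({"timestamp": [i / 30 for i in range(300)]})

duration_s = df["timestamp"].iloc[-1] - df["timestamp"].iloc[0]
fps = (len(df) - 1) / duration_s if duration_s > 0 else float("nan")
print(f"Duration: {duration_s:.2f}s, FPS: {fps:.1f}")  # Duration: 9.97s, FPS: 30.0
```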

 ---

+ ## Citation

 If you use this dataset in your research, please cite:

 ```bibtex
+ @dataset{dynamic_intelligence_2025,
   author = {Dynamic Intelligence},
   title = {Egocentric Human Motion Annotation Dataset},
+   year = {2025},
   publisher = {Hugging Face},
   url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
 }
 ```
 

 ---

+ ## Contact

+ **Email:** shayan@dynamicintelligence.company
 **Organization:** [Dynamic Intelligence](https://dynamicintelligence.company)

 ---

+ ## Hand Landmark Reference

 ![Hand Landmarks](https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset/resolve/main/assets/hand_landmarks.png)

+ Each hand is represented by three tracked keypoints; `observation.left_hand` and `observation.right_hand` hold their 3D positions (9 values per hand).

 ---

+ ## Visualizer

+ Explore the dataset interactively: [DI Hand Pose Sample Dataset Viewer](https://huggingface.co/spaces/DynamicIntelligence/dynamic_intelligence_sample_data)

+ - **Enable plots**: Click the white checkbox next to joint names to show that data in the graph
 - **Full data access**: All joint data is available in the parquet files under `Files and versions`