Update README.md

Download with:

```
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```
Changes from v1.1:

- New train and val dataset of 100 hours, replacing the v1.1 datasets
- Blur applied to faces

Contents of train/val_v2.0:
The training dataset is sharded into 100 independent shards. The shapes and definitions of the arrays are as follows (N is the number of frames):
- **video_{shard}.bin**: 8x8x8 image patches at 30 Hz with a 17-frame temporal window, encoded using the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer).
- **segment_indices** - For video `n` and frame `i`, `segment_idx_n[i]` uniquely identifies the segment that frame `i` came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
- **robot_states** - State arrays defined in `Index-to-State Mapping`, stored in `np.float32` format. For video `n` and frame `i`, the corresponding state is given by `states_n[i]`.
- **metadata** - The `metadata.json` file provides high-level information about the entire dataset, while `metadata_[n].json` files contain specific details for each individual video `n`.
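The segment-index splitting suggested above can be sketched as follows. This is a minimal illustration, not code from the dataset: the array contents are stand-ins, and the real indices would be read from a shard's segment-indices file.

```python
import numpy as np

# Illustrative stand-in for one shard's per-frame segment indices
# (real values come from the shard files; shape/dtype here are assumptions).
segment_idx = np.array([0, 0, 0, 1, 1, 2], dtype=np.int32)

# Positions where the segment index changes mark video transitions.
boundaries = np.flatnonzero(np.diff(segment_idx)) + 1

# Split the frame indices into contiguous runs, one per segment, so that
# temporal windows never span two different source videos.
runs = np.split(np.arange(len(segment_idx)), boundaries)

print([r.tolist() for r in runs])  # → [[0, 1, 2], [3, 4], [5]]
```

Each run can then be used to draw contiguous 17-frame windows from a single source video.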
#### Index-to-State Mapping (NEW)

```