---
language:
- en
pretty_name: WoW-1 Benchmark Samples
tags:
- robotics
- physical-reasoning
- causal-reasoning
- action-understanding
- video-understanding
- embodied-ai
- wow
- arxiv:2509.22642
license: mit
task_categories:
- video-classification
- action-generation
dataset_type: benchmark
size_categories:
- 1K<n<10K
---

# 🧠 WoW-1 Benchmark Samples

**WoW-1 Benchmark Samples** is the official evaluation dataset released as part of the [WoW (World-Omniscient World Model)](https://github.com/wow-world-model/wow-world-model) project. This benchmark is designed to assess the physical consistency and causal reasoning capabilities of generative world models for robotics and embodied AI.

## 📘 Dataset Overview

This dataset contains **612** natural language prompts representing real-world robot interaction tasks. These instructions are used to evaluate world models on their ability to understand and generate plausible, physically grounded responses in video or action space.

Each sample describes a short-term or long-horizon task involving:

- Object manipulation (e.g., _"Put the screwdriver into the drawer"_)
- Physical causality (e.g., _"Pick up an egg and crack it into the bowl"_)
- Spatial reasoning (e.g., _"Move the lid from the black pot to the blue pan"_)
- State transitions (e.g., _"Turn off the light switch"_)

## 🧪 Use Cases

This dataset is intended for:

- Evaluating generative video models on **physical realism**
- Testing embodied agents on **causal reasoning**
- Benchmarking **language-to-action** and **planning** models
- Training or fine-tuning **robotic manipulation** systems

## 🔒 Format

- **Modality**: Text (natural language commands)
- **Format**: Plain text / JSON / Parquet
- **Example**:

```json
{
  "text": "Put the apples on the table into the basket."
}
```

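For quick inspection, the prompts can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the benchmark is published as a standard Hub dataset with a single `text` column (the repo id and split name below are illustrative placeholders):

```python
from datasets import load_dataset

# Assumed repo id and split; substitute the actual dataset path on the Hub.
ds = load_dataset("wow-world-model/WoW-1-Benchmark-Samples", split="train")

# Each record carries one natural language task instruction in "text".
for sample in ds.select(range(3)):
    print(sample["text"])
```
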
## 📊 Dataset Stats

- Number of samples: 612
- Text lengths: 11 to 230 characters
- Language: English

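These figures are easy to sanity-check once a split is loaded; a short sketch using the same assumed repo id as above:

```python
from datasets import load_dataset

ds = load_dataset("wow-world-model/WoW-1-Benchmark-Samples", split="train")  # assumed path
lengths = [len(sample["text"]) for sample in ds]

print(f"samples: {len(lengths)}")                       # expected: 612
print(f"lengths: {min(lengths)}-{max(lengths)} chars")  # expected: 11-230
```
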
## 📎 Example Samples

- `Clean the table surface`
- `Use the right arm to grab the pearl and give it to the left arm`
- `Open the door of the red microwave`
- `Place the tennis ball in the brown object`

## 🔗 Related Models

This dataset is used for evaluating models such as:

- `WoW-1-DiT-2B`, `WoW-1-DiT-7B`
- `WoW-1-Wan-14B`
- `SOPHIA`-guided generative models

## 📄 Related Paper

> **[WoW: Towards a World omniscient World model Through Embodied Interaction](https://arxiv.org/abs/2509.22642)**
> *Xiaowei Chi et al., 2025 (arXiv:2509.22642)*

Please cite this paper if you use the dataset:

```bibtex
@article{chi2025wow,
  title={WoW: Towards a World omniscient World model Through Embodied Interaction},
  author={Chi, Xiaowei and Jia, Peidong and Fan, Chun-Kai and Ju, Xiaozhu and Mi, Weishi and Qin, Zhiyuan and Zhang, Kevin and Tian, Wanxin and Ge, Kuangzhi and Li, Hao and others},
  journal={arXiv preprint arXiv:2509.22642},
  year={2025}
}
```

## 🌐 Project Links

- 🔬 Project site: [wow-world-model.github.io](https://wow-world-model.github.io/)
- 💻 GitHub: [github.com/wow-world-model/wow-world-model](https://github.com/wow-world-model/wow-world-model)
- 📜 arXiv: [arxiv.org/abs/2509.22642](https://arxiv.org/abs/2509.22642)

## 🪪 License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).

---

🤗 We encourage the community to explore, evaluate, and extend this benchmark. Contributions and feedback are welcome via GitHub or the project website.