---
license: apache-2.0
size_categories:
- 100K<n<1M
---

# MultiWorld Dataset

## Dataset Summary

**MultiWorld** is a large-scale multi-agent, multi-view video dataset collected for training video world models. It contains two complementary sources of data:

1. **It Takes Two Gameplay Dataset**: 100+ hours of real human gameplay from the cooperative action-adventure game *It Takes Two*, featuring synchronized dual-agent actions with distinct first-person viewpoints.
2. **RoboFactory Manipulation Dataset**: multi-robot manipulation trajectories spanning 4 tasks with 2–4 agents and variable camera viewpoints, including both success and failure episodes.

This dataset is the official release accompanying the paper *"MultiWorld: Scalable Multi-Agent Multi-View Video World Models"*.

- **Homepage:** https://multi-world.github.io
- **Repository:** https://github.com/CIntellifusion/MultiWorld
- **Paper:** [arXiv:XXXX.XXXXX](https://arxiv.org/abs/XXXX.XXXXX)

---

## Dataset Details

### It Takes Two Gameplay

| Property | Value |
|----------|-------|
| **Total Duration** | 100+ hours |
| **Frame Rate** | 60 FPS |
| **Resolution** | 480 × 960 |
| **Agents** | 2 players |
| **Viewpoints** | 2 distinct first-person views per episode |
| **Actions** | Synchronized keyboard and mouse actions per agent |
| **Modality** | RGB video + discrete/continuous action vectors |

The gameplay videos are captured from real human players cooperating in the game. Each frame is accompanied by per-agent action labels capturing keyboard presses and mouse movements.
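
At 60 FPS, the stated scale works out to roughly 21.6 million frames per viewpoint over 100 hours. A quick sanity check of that figure, together with a hypothetical per-frame record; the field names and path layout below are illustrative assumptions, not the released schema:

```python
# Back-of-the-envelope scale of the It Takes Two split (figures from the table above).
hours = 100
fps = 60
frames_per_view = hours * 3600 * fps  # hours -> seconds -> frames
print(frames_per_view)  # 21600000 frames per first-person view

# Hypothetical per-frame sample; field names and layout are illustrative assumptions.
sample = {
    "frame": "episode_0001/view_0/000000.png",  # RGB frame reference (assumed layout)
    "keys": ["W", "SPACE"],                     # discrete keyboard actions for this agent
    "mouse": (0.12, -0.05),                     # continuous mouse-movement deltas
}
```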
38
+
39
+ ### RoboFactory Manipulation
40
+
41
+ | Property | Value |
42
+ |----------|-------|
43
+ | **Tasks** | 4 multi-robot manipulation tasks |
44
+ | **Agents** | 2–4 robots per task |
45
+ | **Viewpoints** | Variable camera configurations per task |
46
+ | **Resolution** | 256 × 320 |
47
+ | **Success Episodes** | 1,000 per task |
48
+ | **Failure Episodes** | 2,000 per task |
49
+ | **Modality** | RGB video + robot proprioception + actions |
50
+
51
+ Tasks include collaborative stacking, pushing, and pick-and-place scenarios. Both successful and failed trajectories are included to support learning robust world models and failure prediction.
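
The per-task counts above imply the overall episode budget; a small sanity check using only the numbers stated in the table:

```python
# Total RoboFactory episodes implied by the table above.
tasks = 4
success_per_task = 1_000
failure_per_task = 2_000

total_success = tasks * success_per_task  # 4,000 successful episodes
total_failure = tasks * failure_per_task  # 8,000 failed episodes
total_episodes = total_success + total_failure
print(total_episodes)  # 12000
```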

---

## Possible Usage

The dataset is intended for research in:
- Video world models
- Multi-agent video generation
- Multi-view consistent video generation
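
For multi-view work, the key property of the data is that the two agents' streams are frame-synchronized. A minimal sketch of pairing the two first-person views by shared timestep; the field names and record layout are assumptions for illustration, not the released format:

```python
# Hypothetical sketch: pairing synchronized frames from two first-person views.
# Field names below are illustrative assumptions, not the released schema.

def pair_views(view_a, view_b):
    """Align two per-agent frame streams by their shared timestep index."""
    assert len(view_a) == len(view_b), "views must be frame-synchronized"
    return [
        {"t": t, "agent_0": fa, "agent_1": fb}
        for t, (fa, fb) in enumerate(zip(view_a, view_b))
    ]

# Toy stand-ins for decoded RGB frames plus per-agent actions.
view_a = [{"frame": f"a{t}", "action": {"keys": [], "mouse": (0.0, 0.0)}} for t in range(3)]
view_b = [{"frame": f"b{t}", "action": {"keys": ["W"], "mouse": (0.1, 0.0)}} for t in range(3)]

samples = pair_views(view_a, view_b)
```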

---

## Contact

For questions about the dataset, please open an issue on the [GitHub repository](https://github.com/CIntellifusion/MultiWorld) or contact the authors.