---
base_model:
- Wan-AI/Wan2.2-I2V-A14B-Diffusers
license: apache-2.0
library_name: transformers
---

# VBVR: A Very Big Video Reasoning Suite

<a href="" target="_blank">
  <img alt="Code" src="https://img.shields.io/badge/SenseNova_SI-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="" target="_blank">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-SenseNova_SI-red?logo=arxiv" height="20" />
</a>
<a href="" target="_blank">
  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
</a>

## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models has focused primarily on visual quality, while the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning (training) data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. *Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.*

The table below reports VBVR-Bench scores. ID and OOD denote the in-distribution and out-of-distribution task splits; within each group, **bold** marks the best score and <u>underline</u> the second best.

<table>
<thead>
<tr>
<th>Model</th>
<th>Overall</th>
<th>ID</th>
<th>ID-Abst.</th>
<th>ID-Know.</th>
<th>ID-Perc.</th>
<th>ID-Spat.</th>
<th>ID-Trans.</th>
<th>OOD</th>
<th>OOD-Abst.</th>
<th>OOD-Know.</th>
<th>OOD-Perc.</th>
<th>OOD-Spat.</th>
<th>OOD-Trans.</th>
</tr>
</thead>
<tbody>

<tr>
<td><strong>Human</strong></td>
<td>0.974</td><td>0.960</td><td>0.919</td><td>0.956</td><td>1.000</td><td>0.950</td><td>1.000</td>
<td>0.988</td><td>1.000</td><td>1.000</td><td>0.990</td><td>1.000</td><td>0.970</td>
</tr>

<tr style="background:#F2F0EF;font-weight:700;text-align:center;">
<td colspan="14"><em>Open-source Models</em></td>
</tr>

<tr>
<td>CogVideoX1.5-5B-I2V</td>
<td>0.273</td><td>0.283</td><td>0.241</td><td>0.328</td><td>0.257</td><td>0.328</td><td>0.305</td>
<td>0.262</td><td><u>0.281</u></td><td>0.235</td><td>0.250</td><td><strong>0.254</strong></td><td>0.282</td>
</tr>

<tr>
<td>HunyuanVideo-I2V</td>
<td>0.273</td><td>0.280</td><td>0.207</td><td>0.357</td><td>0.293</td><td>0.280</td><td><u>0.316</u></td>
<td>0.265</td><td>0.175</td><td><strong>0.369</strong></td><td>0.290</td><td><u>0.253</u></td><td>0.250</td>
</tr>

<tr>
<td><strong>Wan2.2-I2V-A14B</strong></td>
<td><strong>0.371</strong></td><td><strong>0.412</strong></td><td><strong>0.430</strong></td>
<td><strong>0.382</strong></td><td><strong>0.415</strong></td><td><strong>0.404</strong></td>
<td><strong>0.419</strong></td><td><strong>0.329</strong></td>
<td><strong>0.405</strong></td><td>0.308</td><td><strong>0.343</strong></td>
<td>0.236</td><td><u>0.307</u></td>
</tr>

<tr>
<td><u>LTX-2</u></td>
<td><u>0.313</u></td><td><u>0.329</u></td><td><u>0.316</u></td>
<td><u>0.362</u></td><td><u>0.326</u></td><td><u>0.340</u></td>
<td>0.306</td><td><u>0.297</u></td>
<td>0.244</td><td><u>0.337</u></td><td><u>0.317</u></td>
<td>0.231</td><td><strong>0.311</strong></td>
</tr>

<tr style="background:#F2F0EF;font-weight:700;text-align:center;">
<td colspan="14"><em>Proprietary Models</em></td>
</tr>

<tr>
<td>Runway Gen-4 Turbo</td>
<td>0.403</td><td>0.392</td><td>0.396</td><td>0.409</td><td>0.429</td><td>0.341</td><td>0.363</td>
<td>0.414</td><td>0.515</td><td><u>0.429</u></td><td>0.419</td><td>0.327</td><td>0.373</td>
</tr>

<tr>
<td><strong>Sora 2</strong></td>
<td><strong>0.546</strong></td><td><strong>0.569</strong></td><td><u>0.602</u></td>
<td><u>0.477</u></td><td><strong>0.581</strong></td><td><strong>0.572</strong></td>
<td><strong>0.597</strong></td><td><strong>0.523</strong></td>
<td><u>0.546</u></td><td><strong>0.472</strong></td><td><strong>0.525</strong></td>
<td><strong>0.462</strong></td><td><strong>0.546</strong></td>
</tr>

<tr>
<td>Kling 2.6</td>
<td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td><u>0.519</u></td>
<td>0.330</td><td>0.528</td><td>0.135</td><td>0.272</td><td>0.356</td><td>0.359</td>
</tr>

<tr>
<td><u>Veo 3.1</u></td>
<td><u>0.480</u></td><td><u>0.531</u></td><td><strong>0.611</strong></td>
<td><strong>0.503</strong></td><td><u>0.520</u></td><td><u>0.444</u></td>
<td>0.510</td><td><u>0.429</u></td>
<td><strong>0.577</strong></td><td>0.277</td><td><u>0.420</u></td>
<td><u>0.441</u></td><td><u>0.404</u></td>
</tr>

<tr style="background:#F2F0EF;font-weight:700;text-align:center;">
<td colspan="14"><em>Data Scaling Strong Baseline</em></td>
</tr>

<tr>
<td><strong>VBVR-Wan2.2</strong></td>
<td><strong>0.685</strong></td><td><strong>0.760</strong></td><td><strong>0.724</strong></td>
<td><strong>0.750</strong></td><td><strong>0.782</strong></td><td><strong>0.745</strong></td>
<td><strong>0.833</strong></td><td><strong>0.610</strong></td>
<td><strong>0.768</strong></td><td><strong>0.572</strong></td><td><strong>0.547</strong></td>
<td><strong>0.618</strong></td><td><strong>0.615</strong></td>
</tr>

</tbody>
</table>

## Release Information
VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications, as its goal is to *investigate data scaling behavior* and to provide a *strong baseline model* for the video reasoning research community. Leveraging the VBVR-Dataset, which to our knowledge is one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieves the highest score on VBVR-Bench among the models evaluated above.

In this release, we present
[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2),
[**VBVR-Bench**](https://huggingface.co/Video-Reason/VBVR-Bench), and
[**VBVR-Dataset**](https://huggingface.co/Video-Reason/VBVR-Dataset).
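
All three artifacts can also be fetched programmatically. The following is a minimal sketch using `huggingface_hub`; whether VBVR-Bench and VBVR-Dataset are hosted as dataset repos (rather than model repos) is an assumption here, so adjust `repo_type` to match the Hub pages linked above.

```python
# Minimal sketch: download the VBVR release artifacts from the Hugging Face Hub.
from huggingface_hub import snapshot_download

# The model checkpoint (repo id taken from the link above).
model_dir = snapshot_download("Video-Reason/VBVR-Wan2.2")

# Benchmark and dataset; repo_type="dataset" is an assumption --
# drop it if these are hosted as regular model repos.
bench_dir = snapshot_download("Video-Reason/VBVR-Bench", repo_type="dataset")
data_dir = snapshot_download("Video-Reason/VBVR-Dataset", repo_type="dataset")
```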

## 🛠️ QuickStart

### Installation

We recommend using [uv](https://docs.astral.sh/uv/) to manage the environment.

> uv installation guide: <https://docs.astral.sh/uv/getting-started/installation/#installing-uv>

```bash
uv pip install "torch>=2.4.0" "torchvision>=0.19.0" transformers Pillow "huggingface_hub[cli]"
uv pip install git+https://github.com/huggingface/diffusers
```

#### Example Code

```bash
huggingface-cli download Video-Reason/VBVR-Wan2.2 --local-dir ./VBVR-Wan2.2

python example.py \
    --model_path ./VBVR-Wan2.2
```
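
Since VBVR-Wan2.2 keeps the Wan2.2-I2V-A14B architecture, the checkpoint should also be usable through diffusers directly. Below is a minimal sketch assuming it loads with `WanImageToVideoPipeline`; the input image, prompt, and generation settings are illustrative placeholders, not the settings used in our experiments.

```python
# Minimal sketch: image-to-video inference with diffusers (assumed-compatible API).
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "./VBVR-Wan2.2",            # local checkpoint downloaded above
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = load_image("input.jpg")  # placeholder conditioning frame
frames = pipe(
    image=image,
    prompt="a short description of the expected motion",  # placeholder prompt
    num_frames=81,               # Wan's default clip length; adjust as needed
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```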

## 🖊️ Citation

```bibtex

```