---
language:
  - en
license: apache-2.0
size_categories:
  - n<1K
task_categories:
  - image-to-video
pretty_name: VBVR-Bench-Data
tags:
  - video-generation
  - video-reasoning
configs:
  - config_name: VBVR-Bench-Data
    data_files:
      - split: test
        path: VBVR-Bench.json
---

# VBVR: A Very Big Video Reasoning Suite

Project Page | Code | Paper | Model | Data | Leaderboard

## Overview

Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models, however, has focused primarily on visual quality, and the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning (training) data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities.

For more details, please refer to the paper: *A Very Big Video Reasoning Suite*.

## Sample Usage

To evaluate a model using the VBVR suite, you can use the official evaluation toolkit VBVR-EvalKit:

```bash
# Install the toolkit
git clone https://github.com/Video-Reason/VBVR-EvalKit.git && cd VBVR-EvalKit
python -m venv venv && source venv/bin/activate
pip install -e .

# Set up a model (example: SVD)
bash setup/install_model.sh --model svd --validate

# Inference
python examples/generate_videos.py --questions-dir /path/to/VBVR-Bench-Data --output-dir ./outputs --model svd

# Evaluation (VBVR-Bench)
python examples/score_videos.py --inference-dir ./outputs
```
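
If you only need the benchmark metadata (prompts and file references) rather than the full evaluation pipeline, you can load the test split directly with the `datasets` library. The sketch below is a minimal example; the repository id `Video-Reason/VBVR-Bench-Data` is an assumption based on this card and may need adjusting.

```python
# Minimal sketch: load the VBVR-Bench test split via the `datasets` library.
# The repository id below is assumed from this card; adjust it if it differs.
from datasets import load_dataset

ds = load_dataset("Video-Reason/VBVR-Bench-Data", "VBVR-Bench-Data", split="test")
print(ds)      # inspect the available fields
print(ds[0])   # look at one benchmark entry
```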

## Release Information

We are pleased to release the official VBVR-Bench test dataset, designed for standardized and rigorous evaluation of video-based visual reasoning models. The test split is intended to be used with the evaluation toolkit provided by Video-Reason, VBVR-EvalKit.

After running evaluation, you can compare your model’s performance on the public leaderboard at VBVR-Bench Leaderboard.

## Data Structure

The dataset is organized by domain and task generator. For example:

```
In-Domain_50/
  G-31_directed_graph_navigation_data-generator/
    00000/
      first_frame.png
      final_frame.png
      ground_truth.mp4
      prompt.txt
```

### Structure Description

- `In-Domain_50` / `Out-of-Domain_50`: Evaluation splits indicating whether samples belong to in-domain or out-of-domain settings.
- `G-XXX_task-name_data-generator`: A specific reasoning task category and its corresponding data generator.
- `00000`-`00004`: Individual sample instances.

Each sample directory contains:

- `first_frame.png`: The initial frame of the video.
- `final_frame.png`: The final frame of the video.
- `ground_truth.mp4`: The full video sequence.
- `prompt.txt`: The textual reasoning question or instruction.
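
For programmatic access to the raw files, the sketch below iterates over sample directories following the layout described above. This is a minimal example, not part of the official toolkit; the root path is a placeholder, and the glob patterns are assumptions derived from the directory names shown here.

```python
# Minimal sketch: walk VBVR-Bench sample directories on disk.
# Assumes the layout described above; the root path is a placeholder.
from pathlib import Path

root = Path("/path/to/VBVR-Bench-Data")  # placeholder root directory

for split_dir in sorted(root.glob("*-Domain_50")):      # In-Domain_50 / Out-of-Domain_50
    for task_dir in sorted(split_dir.glob("G-*_data-generator")):
        for sample_dir in sorted(p for p in task_dir.iterdir() if p.is_dir()):
            sample = {
                "split": split_dir.name,
                "task": task_dir.name,
                "id": sample_dir.name,
                "first_frame": sample_dir / "first_frame.png",
                "final_frame": sample_dir / "final_frame.png",
                "ground_truth": sample_dir / "ground_truth.mp4",
                "prompt": (sample_dir / "prompt.txt").read_text().strip(),
            }
            print(sample["split"], sample["task"], sample["id"], sample["prompt"][:60])
```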

## 🖊️ Citation

```bibtex
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"{a}}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"{e}}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Li, Bo and Lin, Dahua and Liu, Ziwei and Kumar, Vikash and
             Li, Yijiang and Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}
```