---
language:
  - en
license: mit
task_categories:
  - text-to-video
  - image-to-video
dataset_info:
  features:
    - name: id
      dtype: string
    - name: domain
      dtype: string
    - name: task_type
      dtype: string
    - name: prompt
      dtype: string
    - name: image
      dtype: image
    - name: reference_frames
      sequence: image
    - name: reference_text
      sequence: string
    - name: protocol
      sequence: string
configs:
  - config_name: default
    data_files:
      - split: train
        path: dataset.parquet
---

# Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning

Paper | Code

## 👀 About VIPER

- **Overview:** Process-aware evaluation for Generative Video Reasoning tasks.
- **Statistics:** 309 carefully curated samples spanning 6 distinct domains (temporal, structural, symbolic, spatial, physics, and planning reasoning).
- **New Metric:** Process-outcome Consistency (POC@r). POC@r evaluates video correctness at both the process and outcome level, using multiple frames uniformly sampled from the whole video at rate r rather than the last frame only (see the sketch below).

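The POC@r computation is not spelled out in this card, so the following is only a minimal sketch of the sampling scheme described above. Reading "rate r" as the fraction of frames kept, and the `check_frame` predicate, are assumptions for illustration, not the paper's API.

```python
def poc_at_r(frames, check_frame, r=0.25):
    """Sketch of process-outcome consistency (POC@r), NOT the official code.

    ASSUMPTIONS: r is the fraction of frames sampled; `check_frame` is a
    hypothetical predicate that verifies one frame against the sample's
    protocol constraints.
    """
    n = len(frames)
    k = max(1, int(n * r))  # number of frames to sample
    # Uniformly spaced indices over the whole video, not just the last frame.
    idxs = [round(i * (n - 1) / max(1, k - 1)) for i in range(k)]
    process_ok = all(check_frame(frames[i]) for i in idxs)  # process level
    outcome_ok = check_frame(frames[-1])                    # outcome level
    return float(process_ok and outcome_ok)
```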

## Dataset Statistics

### Domain Distribution

| Domain     | Total Samples | Task Types                        |
|------------|---------------|-----------------------------------|
| Physics    | 32            | experiment, game                  |
| Planning   | 44            | navigation, obj_manipulation      |
| Spatial    | 60            | block_rotate, dice, image_restore |
| Structural | 70            | chess, maze, sudoku, ttt          |
| Symbolic   | 60            | knowledge, math, multimodal       |
| Temporal   | 43            | obj_move, zoom                    |
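
As a quick sanity check, the distribution above can be recomputed from the loaded data. A minimal sketch, assuming the default `train` split defined in the metadata:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Monosail/VIPER", split="train")
# Count samples per reasoning domain; labels follow the `domain` column.
print(Counter(ds["domain"]))
```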

## 📦 Dataset Usage

### Download

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Monosail/VIPER")
```

### Data Fields

- `id`: Unique identifier for the sample
- `domain`: The reasoning domain (Physics, Planning, Spatial, Structural, Symbolic, Temporal)
- `task_type`: Specific task category within the domain
- `prompt`: Text prompt describing the task
- `image`: The input image
- `reference_frames`: Ground-truth image frames
- `reference_text`: Ground-truth text descriptions
- `protocol`: Process-level task constraints
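
A short, illustrative example of reading these fields from one sample, building on the download snippet above. It assumes image columns decode to PIL images, the default behavior of `datasets`:

```python
sample = dataset["train"][0]

print(sample["id"], sample["domain"], sample["task_type"])
print(sample["prompt"])

# Image fields decode to PIL images by default.
sample["image"].save("input.png")
for i, frame in enumerate(sample["reference_frames"]):
    frame.save(f"reference_frame_{i}.png")

print(sample["reference_text"])  # ground-truth descriptions
print(sample["protocol"])        # process-level constraints
```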

πŸ“ Citation

If you find our benchmark useful, please consider citing us:

```bibtex
@article{li2026viper,
  title={Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning},
  author={Li, Yifan and Gu, Yukai and Min, Yingqian and Liu, Zikang and Du, Yifan and Zhou, Kun and Yang, Min and Zhao, Wayne Xin and Qiu, Minghui},
  journal={arXiv preprint arXiv:2512.24952},
  year={2025}
}
```