---
pretty_name: VisPhyBench
license: mit
language:
- en
---

# VisPhyBench
To evaluate how well models reconstruct appearance and reproduce physically plausible motion, we introduce VisPhyBench, a unified evaluation protocol of 209 scenes derived from 108 physical templates. It assesses physical understanding through the lens of code-driven resimulation in both 2D and 3D scenes, integrating metrics from multiple aspects. Each scene is also annotated with a coarse difficulty label (easy/medium/hard).
## Dataset Details
- Created by: Jiarong Liang
- Language(s) (NLP): English
- License: MIT
- Repository: https://github.com/TIGER-AI-Lab/VisPhyWorld
## Uses
The dataset is used to evaluate how well models reconstruct appearance and reproduce physically plausible motion.
## Dataset Structure
The Easy/Medium/Hard distributions of the two VisPhyBench splits, `sub` (209 scenes) and `test` (49 scenes), are:

| Split | Easy | Medium | Hard |
| --- | --- | --- | --- |
| `sub` (209) | 114 (54.5%) | 67 (32.1%) | 28 (13.4%) |
| `test` (49) | 29 (59.2%) | 17 (34.7%) | 3 (6.1%) |
### What each sample contains
VisPhyBench is provided as two splits:
- `sub`: a larger split intended for evaluation and analysis.
- `test`: a smaller split subsampled from `sub` for quick sanity checks.
For each sample, we provide:
- A short video of a synthetic physical scene.
- A detection JSON (per sample) that describes the scene in the first frame.
- A difficulty label (easy/medium/hard) derived from the mean of eight annotators’ 1–5 ratings.
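The mapping from annotator ratings to a difficulty label can be sketched as follows. This is a minimal illustration: the card only states that labels derive from the mean of eight annotators' 1–5 ratings, so the cutoff values (`easy_max`, `medium_max`) below are hypothetical, not the benchmark's actual thresholds.

```python
def difficulty_label(ratings, easy_max=2.5, medium_max=3.5):
    """Map eight 1-5 annotator ratings to easy/medium/hard via their mean.

    The thresholds are illustrative assumptions; VisPhyBench does not
    publish its exact cutoffs.
    """
    assert len(ratings) == 8, "expected eight annotator ratings"
    mean = sum(ratings) / len(ratings)
    if mean <= easy_max:
        return "easy"
    if mean <= medium_max:
        return "medium"
    return "hard"

print(difficulty_label([1, 2, 2, 1, 3, 2, 2, 1]))  # mean 1.75 -> "easy"
```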
### Detection JSON format
Each detection JSON includes:
- `image_size`: the image width/height.
- `coordinate_system`: conventions for coordinates (e.g., origin and axis directions).
- `objects`: a list of detected objects. Each object includes:
  - `id`: unique identifier.
  - `category`: coarse geometry category.
  - `color_rgb`: RGB color triplet.
  - `position`: object position (e.g., center coordinates).
  - `bbox`: bounding box coordinates and size.
  - `size`: coarse size fields (e.g., radius/length/thickness).
These fields specify object locations and attributes precisely, which helps an LLM initialize objects correctly when generating executable simulation code.
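A minimal sketch of reading such a detection JSON and extracting per-object attributes. The field names follow the list above; the concrete value shapes (e.g., that `position` is an `[x, y]` pair and `bbox` is `[x, y, w, h]`) are illustrative assumptions, not a specification of the actual files.

```python
import json

# Hypothetical detection JSON following the documented field names.
detection = json.loads("""
{
  "image_size": [640, 480],
  "coordinate_system": {"origin": "top-left", "y_axis": "down"},
  "objects": [
    {"id": 0, "category": "ball", "color_rgb": [255, 0, 0],
     "position": [120, 200], "bbox": [100, 180, 40, 40],
     "size": {"radius": 20}}
  ]
}
""")

width, height = detection["image_size"]
for obj in detection["objects"]:
    # Each object's id, category, and position could seed a simulator entity.
    print(obj["id"], obj["category"], obj["position"])
```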
## BibTeX
```bibtex
@misc{visphybench2026,
  title        = {VisPhyBench},
  author       = {Liang, Jiarong and Ku, Max and Hui, Ka-Hei and Nie, Ping and Chen, Wenhu},
  howpublished = {GitHub repository},
  year         = {2026},
  url          = {https://github.com/TIGER-AI-Lab/VisPhyWorld}
}
```