---
license: apache-2.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - spatial-reasoning
  - cross-viewpoint localization
pretty_name: ViewSpatial-Bench
size_categories:
  - 1K<n<10K
configs:
  - config_name: ViewSpatial-Bench
    data_files:
      - split: test
        path: ViewSpatial-Bench.json
---

# ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models

Links: arXiv · GitHub · Webpage

## Dataset Description

We introduce ViewSpatial-Bench, a comprehensive benchmark with over 5,700 question-answer pairs across 1,000+ 3D scenes from ScanNet and MS-COCO validation sets. This benchmark evaluates VLMs' spatial localization capabilities from multiple perspectives, specifically testing both egocentric (camera) and allocentric (human subject) viewpoints across five distinct task types.

ViewSpatial-Bench addresses a critical gap: while VLMs excel at spatial reasoning from their own perspective, they struggle with perspective-taking (adopting another entity's spatial frame of reference), which is essential for embodied interaction and multi-agent collaboration. The figure below shows the construction pipeline and example questions of our benchmark.

*Figure: ViewSpatial-Bench construction pipeline and example questions.*

The dataset contains the following fields:

| Field Name | Description |
|---|---|
| `question_type` | Type of spatial reasoning task; one of 5 distinct categories evaluating different spatial capabilities |
| `image_path` | Path to the source image; images come from two sources: `scannetv2_val` (ScanNet validation set) and `val2017` (MS-COCO validation set) |
| `question` | The spatial reasoning question posed to the model |
| `answer` | The correct answer to the question |
| `choices` | Multiple-choice options available for the question |
- **Language(s) (NLP):** en
- **License:** apache-2.0
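
As a quick sanity check, the sketch below loads the test split with the HuggingFace `datasets` library and prints the fields of one record (field names as in the table above; the exact values depend on the released data):

```python
from datasets import load_dataset

# Load the test split of ViewSpatial-Bench
ds = load_dataset("lidingm/ViewSpatial-Bench", split="test")

# Print the schema and one example record, field by field
print(ds.features)
sample = ds[0]
print(sample["question_type"], sample["image_path"])
print(sample["question"])
print(sample["choices"], "->", sample["answer"])
```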

## Uses

**I. With the HuggingFace `datasets` library.**

```python
from datasets import load_dataset

ds = load_dataset("lidingm/ViewSpatial-Bench")
```

**II. Evaluation using the open-source code.**

Evaluate using our open-source evaluation code, available on GitHub (coming soon).

```bash
# Clone the repository
git clone https://github.com/lidingm/ViewSpatial-Bench.git
cd ViewSpatial-Bench

# Install dependencies
pip install -r requirements.txt

# Run evaluation
python evaluate.py --model_path your_model_path
```

Configure the model parameters and evaluation settings according to the framework's requirements to obtain performance results on the ViewSpatial-Bench dataset.
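
Until the official script is released, the snippet below is only a rough sketch of how such an evaluation could be wired up. `query_model` is a hypothetical placeholder for your own inference call, and the exact-match scoring rule is an assumption, not the benchmark's official protocol.

```python
from collections import defaultdict
from datasets import load_dataset

def evaluate(query_model):
    """Compute per-task and overall accuracy on ViewSpatial-Bench.

    query_model(image_path, question, choices) -> predicted answer string.
    This is a hypothetical callable; plug in your own VLM inference here.
    """
    ds = load_dataset("lidingm/ViewSpatial-Bench", split="test")
    correct, total = defaultdict(int), defaultdict(int)
    for sample in ds:
        pred = query_model(sample["image_path"], sample["question"], sample["choices"])
        task = sample["question_type"]
        total[task] += 1
        # Assumed scoring rule: case-insensitive exact match with the gold answer.
        correct[task] += int(pred.strip().lower() == sample["answer"].strip().lower())
    scores = {task: correct[task] / total[task] for task in total}
    scores["overall"] = sum(correct.values()) / sum(total.values())
    return scores
```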

## Benchmark

We report results for a range of open-source models as well as GPT-4o and Gemini 2.0 Flash on ViewSpatial-Bench. More model evaluations will be added.

| Model | Camera: Rel. Dir. | Camera: Obj. Ori. | Camera: Avg. | Person: Obj. Ori. | Person: Rel. Dir. | Person: Sce. Sim. | Person: Avg. | Overall |
|---|---|---|---|---|---|---|---|---|
| InternVL2.5 (2B) | 38.52 | 22.59 | 32.79 | 47.09 | 40.02 | 25.70 | 37.04 | 34.98 |
| Qwen2.5-VL (3B) | 43.43 | 33.33 | 39.80 | 39.16 | 28.62 | 28.51 | 32.14 | 35.85 |
| Qwen2.5-VL (7B) | 46.64 | 29.72 | 40.56 | 37.05 | 35.04 | 28.78 | 33.37 | 36.85 |
| LLaVA-NeXT-Video (7B) | 26.34 | 19.28 | 23.80 | 44.68 | 38.60 | 29.05 | 37.07 | 30.64 |
| LLaVA-OneVision (7B) | 29.84 | 26.10 | 28.49 | 22.39 | 31.00 | 26.88 | 26.54 | 27.49 |
| InternVL2.5 (8B) | 49.41 | 41.27 | 46.48 | 46.79 | 42.04 | 32.85 | 40.20 | 43.24 |
| Llama-3.2-Vision (11B) | 25.27 | 20.98 | 23.73 | 51.20 | 32.19 | 18.82 | 33.61 | 28.82 |
| InternVL3 (14B) | 54.65 | 33.63 | 47.09 | 33.43 | 37.05 | 31.86 | 33.88 | 40.28 |
| Kimi-VL-Instruct (16B) | 26.85 | 22.09 | 25.14 | 63.05 | 43.94 | 20.27 | 41.52 | 33.58 |
| GPT-4o | 41.46 | 19.58 | 33.57 | 42.97 | 40.86 | 26.79 | 36.29 | 34.98 |
| Gemini 2.0 Flash | 45.29 | 12.95 | 33.66 | 41.16 | 32.78 | 21.90 | 31.53 | 32.56 |
| Random Baseline | 25.16 | 26.10 | 25.50 | 24.60 | 31.12 | 26.33 | 27.12 | 26.33 |
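
The Random Baseline row corresponds to chance-level accuracy on multiple-choice questions. One way to estimate such a baseline from the data (an assumption about the procedure, not necessarily the authors' exact computation) is to average the reciprocal of the number of choices per question:

```python
from datasets import load_dataset

ds = load_dataset("lidingm/ViewSpatial-Bench", split="test")
# Expected accuracy of a uniform random guesser: mean of 1 / number of choices.
chance = sum(1 / len(s["choices"]) for s in ds) / len(ds)
print(f"Chance-level accuracy: {chance:.2%}")
```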

## Citation

Coming Soon