Dataset viewer preview of the test split (one-object spatial questions over COCO images). Columns: `id` (int64), `question` (string, 84–172 characters), `answer` (list of strings, length 1), and `image_id` (int64). The first rows are shown below.

| id | question | answer | image_id |
|---|---|---|---|
| 0 | `<image>` USER: Where is the dining table in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Bottom"] | 397,133 |
| 1 | `<image>` USER: Where is the dining table in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Bottom"] | 37,777 |
| 2 | `<image>` USER: Where is the refrigerator in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Right"] | 37,777 |
| 3 | `<image>` USER: Where is the oven in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Bottom"] | 37,777 |
| 4 | `<image>` USER: Where is the toilet in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Right"] | 403,385 |
| 5 | `<image>` USER: Where is the toilet in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Bottom"] | 331,352 |
| 6 | `<image>` USER: Where is the sink in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Top"] | 331,352 |
| 7 | `<image>` USER: Where is the TV in the photo? Answer with left, right, top or bottom. ASSISTANT: | ["Bottom"] | 386,912 |
Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas
This repository provides datasets associated with the paper Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas.
Code: https://github.com/shiqichen17/AdaptVis
Abstract
Large Vision Language Models (VLMs) have long struggled with spatial reasoning tasks. Surprisingly, even simple tasks, such as recognizing "under" or "behind" relationships between only two objects, pose significant challenges for current VLMs. In this work, we study the spatial reasoning challenge through the lens of mechanistic interpretability, diving into the model's internal states to examine the interactions between image and text tokens. By tracing attention distribution over the image throughout intermediate layers, we observe that successful spatial reasoning correlates strongly with the model's ability to align its attention distribution with actual object locations, and that this alignment differs between familiar and unfamiliar spatial relationships. Motivated by these findings, we propose ADAPTVIS, which uses inference-time confidence scores to sharpen attention on highly relevant regions when the model is confident, while smoothing and broadening the attention window to consider a wider context when confidence is lower. This training-free decoding method shows significant improvement (e.g., up to a 50-point absolute improvement) on spatial reasoning benchmarks such as WhatsUp and VSR with negligible cost. We make code and data publicly available for research purposes at https://github.com/shiqichen17/AdaptVis.
Datasets
This repository provides the datasets used in the paper. The code to load and evaluate each dataset is available in `dataset_zoo/aro_datasets.py` in the GitHub repository, and the question-and-answer prompts are located in `prompt/`.
The datasets evaluate VLM performance on spatial reasoning tasks. The available configurations are:

- COCO_one_obj
- COCO_two_obj
- Controlled_A
- Controlled_B
- VG_one_obj
- VG_two_obj
Sample Usage
Load with Hugging Face datasets
You can easily load any configuration of this dataset using the datasets library:
from datasets import load_dataset
# Load a specific configuration, e.g., 'COCO_one_obj'
dataset = load_dataset("AdaptVis/all_datasets", "COCO_one_obj")
# Access the test split
test_data = dataset["test"]
# Print an example
print(test_data[0])
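The COCO and VG configurations reference images by id only; the pixels are not stored in this repository, so the source images must be obtained separately. Below is a minimal, hypothetical helper for the COCO-based configurations; the `coco/images` directory and the zero-padded COCO 2017 file naming are assumptions, not something this dataset prescribes:

```python
from pathlib import Path

from PIL import Image

def load_coco_image(image_id: int, image_dir: str = "coco/images") -> Image.Image:
    # Assumption: COCO 2017 images stored flat, named by the zero-padded
    # 12-digit image id, e.g. 000000397133.jpg.
    path = Path(image_dir) / f"{image_id:012d}.jpg"
    return Image.open(path).convert("RGB")

example = test_data[0]
image = load_coco_image(example["image_id"])  # pair with example["question"]
```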
Running Experiments with the Codebase
To set up the environment and run experiments with the `scaling_vis` and `adapt_vis` methods from the original repository, follow these steps:
Setting up the environment
git clone https://github.com/shiqichen17/AdaptVis.git
cd AdaptVis
mkdir data
mkdir output
pip install -r requirements.txt
Downloading the data
The data can be downloaded automatically when running experiments by setting --download=True (while running python main_aro.py or instantiating the dataset directly). Alternatively, you can download it manually from the Hugging Face Hub (this repository) or the provided Google Drive link in the GitHub README.
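If you want to fetch the raw dataset files from the Hub yourself (for example into the `data/` directory created above), a minimal sketch using `huggingface_hub` is shown below; the target directory is only an example:

```python
from huggingface_hub import snapshot_download

# Download this dataset repository's files into ./data
snapshot_download(
    repo_id="AdaptVis/all_datasets",
    repo_type="dataset",
    local_dir="data",
)
```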
Running an example experiment
You can quickly run an example experiment using the provided run.sh script:
bash run.sh
Arguments
The `run.sh` script accepts various arguments to control the dataset, model, and method; a conceptual sketch of how the weight and threshold arguments act on attention follows the table:
| Argument | Example | Description |
|---|---|---|
| `dataset` | `Controlled_Images_A` | Dataset to evaluate. Choose from `Controlled_Images_A`, `Controlled_Images_B`, ... |
| `model` | `llava1.5` | Model to evaluate. |
| `method` | `scaling_vis` | Evaluation method. Choose from `scaling_vis` or `adapt_vis`. |
| `weight` | `1.2` | Attention-scaling coefficient for ScalingVis. Choose from [0, 0.5, 0.8, 1.2, 1.5, 2.0]. |
| `weight1` | `0.5` | AdaptVis coefficient used when the model's confidence is low. Choose from [0.5, 0.8]. |
| `weight2` | `1.2` | AdaptVis coefficient used when the model's confidence is high. Choose from [1.2, 1.5, 2.0]. |
| `threshold` | `0.3` | Confidence threshold for AdaptVis. |
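For intuition, the weight arguments scale how much attention the model places on image tokens, and AdaptVis picks the coefficient from the model's confidence. The sketch below only illustrates this idea; it is not the repository's implementation, and the function names are hypothetical:

```python
import torch

def scale_image_attention(attn: torch.Tensor, image_token_mask: torch.Tensor, weight: float) -> torch.Tensor:
    """Scale the attention mass on image tokens by `weight`, then renormalize.

    attn: (..., seq_len) attention distribution for the current query token.
    image_token_mask: bool tensor of shape (seq_len,), True at image-token positions.
    weight > 1 sharpens focus on the image; weight < 1 broadens it toward the text context.
    """
    scaled = attn.clone()
    scaled[..., image_token_mask] = scaled[..., image_token_mask] * weight
    return scaled / scaled.sum(dim=-1, keepdim=True)

def adaptvis_weight(confidence: float, threshold: float, weight1: float, weight2: float) -> float:
    # Low confidence -> broaden attention (weight1 < 1); high confidence -> sharpen it (weight2 > 1).
    return weight2 if confidence >= threshold else weight1
```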
Dataset Structure
The dataset contains multiple configurations, each with id, question, answer, and an image identifier (image_id or image_path). Below is a summary of the dataset splits and features:
All configurations share the features `id` (int64), `question` (string), and `answer` (sequence of strings), plus an image reference column, and each provides a single `test` split whose data files live under `<config_name>/test-*` in this repository. A sketch of scoring predictions against this schema follows the table.

| Config | Image reference | Examples (test) | Dataset size (bytes) | Download size (bytes) |
|---|---|---|---|---|
| COCO_one_obj | `image_id` (int64) | 2,247 | 294,466 | 32,405 |
| COCO_two_obj | `image_id` (int64) | 440 | 63,915 | 12,129 |
| Controlled_A | `image_path` (string) | 412 | 76,379 | 11,149 |
| Controlled_B | `image_path` (string) | 408 | 75,988 | 10,560 |
| VG_one_obj | `image_id` (string) | 1,160 | 172,016 | 25,361 |
| VG_two_obj | `image_id` (string) | 291 | 47,464 | 11,402 |
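Since every `answer` is a short list of direction strings, exact-match accuracy can be computed directly from a configuration's test split. This is a hypothetical scorer written against the schema above, not the evaluation code shipped in the GitHub repository:

```python
from datasets import load_dataset

def exact_match_accuracy(predictions: dict, config: str = "COCO_one_obj") -> float:
    """predictions maps example id -> predicted direction, e.g. {0: "Bottom", ...}."""
    data = load_dataset("AdaptVis/all_datasets", config)["test"]
    correct = 0
    for ex in data:
        gold = {a.strip().lower() for a in ex["answer"]}
        pred = str(predictions.get(ex["id"], "")).strip().lower()
        correct += int(pred in gold)
    return correct / len(data)
```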
Citation
If you use this code or data, please consider citing our paper:
@misc{chen2025spatialreasoninghardvlms,
title={Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas},
author={Shiqi Chen and Tongyao Zhu and Ruochen Zhou and Jinghan Zhang and Siyang Gao and Juan Carlos Niebles and Mor Geva and Junxian He and Jiajun Wu and Manling Li},
year={2025},
eprint={2503.01773},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.01773},
}