---
license: cdla-permissive-2.0
dataset_info:
- config_name: maze
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: task
    dtype: string
  - name: question_type
    dtype: string
  - name: target_options
    dtype: string
- config_name: maze_text_only
  features:
  - name: id
    dtype: int32
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: task
    dtype: string
  - name: question_type
    dtype: string
  - name: target_options
    dtype: string
- config_name: spatial_grid
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: task
    dtype: string
  - name: question_type
    dtype: string
  - name: target_options
    dtype: string
- config_name: spatial_grid_text_only
  features:
  - name: id
    dtype: int32
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: task
    dtype: string
  - name: question_type
    dtype: string
  - name: target_options
    dtype: string
- config_name: spatial_map
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: task
    dtype: string
  - name: question_type
    dtype: string
  - name: target_options
    dtype: string
- config_name: spatial_map_text_only
  features:
  - name: id
    dtype: int32
  - name: prompt
    dtype: string
  - name: ground_truth
    dtype: string
  - name: task
    dtype: string
  - name: question_type
    dtype: string
  - name: target_options
    dtype: string
configs:
- config_name: maze
  data_files:
  - split: val
    path: maze/maze_val.parquet
- config_name: maze_text_only
  data_files:
  - split: val
    path: maze/maze_text_only_val.parquet
- config_name: spatial_grid
  data_files:
  - split: val
    path: spatial_grid/spatial_grid_val.parquet
- config_name: spatial_grid_text_only
  data_files:
  - split: val
    path: spatial_grid/spatial_grid_text_only_val.parquet
- config_name: spatial_map
  data_files:
  - split: val
    path: spatial_map/spatial_map_val.parquet
- config_name: spatial_map_text_only
  data_files:
  - split: val
    path: spatial_map/spatial_map_text_only_val.parquet
---

A key question for understanding the multimodal vs. language capabilities of models is the relative strength of spatial reasoning and understanding in each modality, since spatial understanding is expected to be a strength of multimodal input. To test this, we created a procedurally generated, synthetic dataset for testing spatial reasoning, navigation, and counting. These tasks are challenging, and because the data is procedurally generated, new versions can easily be created to guard against the data having appeared in training and the results reflecting memorization. For each task, every question has both an image and a text representation, each of which is sufficient to answer the question.

This dataset has three tasks: Spatial Understanding (Spatial-Map), Navigation (Maze), and Counting (Spatial-Grid). Each task has three conditions with respect to the input modality: 1) text-only, consisting of a text input and a question; 2) vision-only, the standard visual question answering setup of an image input and a question; and 3) vision-text, which includes both the text and image representations along with the question. Each condition includes 1,500 image-text pairs, for a total of 4,500.
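
Each configuration declared in the metadata above can be loaded individually with the `datasets` library. A minimal sketch; the `ORG/dataset-name` repo id is a placeholder for the actual Hugging Face dataset id, and each config exposes a single `val` split:

```python
# The six configurations listed in this card's metadata.
CONFIGS = [
    "maze", "maze_text_only",
    "spatial_grid", "spatial_grid_text_only",
    "spatial_map", "spatial_map_text_only",
]

# Loading one config (requires network access and the real repo id):
# from datasets import load_dataset
# ds = load_dataset("ORG/dataset-name", "maze", split="val")
# print(ds[0]["prompt"], ds[0]["ground_truth"])

for name in CONFIGS:
    print(name)
```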

__Spatial Map__

The dataset consists of spatial relationships among random layouts of symbolic objects with text names on a white background. Each object is associated with a unique location name, such as Unicorn Umbrellas or Gale Gifts. To study the impact of modality, the textual representation of each input consists of pairwise relations such as "Brews Brothers Pub is to the Southeast of Whale's Watches." The questions ask about the spatial relationship between two locations and the number of objects that meet specific spatial criteria.

The dataset includes 3 conditions: text-only, image-only, and text+image. Each condition includes 1,500 image-text pairs, for a total of 4,500.

There are 3 question types:

1) In which direction is one object relative to another? (answer is a direction)
2) Which object lies in a given direction from another? (answer is an object name)
3) How many objects lie in a given direction from another? (answer is a number)

Each question is multiple choice.
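
Pairwise relations of this kind can be derived mechanically from 2-D object coordinates. A minimal sketch, assuming an eight-way compass on an (x, y) plane with north as +y; the coordinate convention is an illustrative assumption, not the authors' exact generator:

```python
def compass_direction(frm, to):
    """Eight-way compass direction of `to` relative to `frm` on an
    (x, y) plane with north = +y. Illustrates how pairwise relations
    like 'X is to the Southeast of Y' can be derived from coordinates."""
    dx, dy = to[0] - frm[0], to[1] - frm[1]
    ns = "North" if dy > 0 else ("South" if dy < 0 else "")
    ew = "East" if dx > 0 else ("West" if dx < 0 else "")
    if ns and ew:
        return ns + ew.lower()   # e.g. "South" + "east" -> "Southeast"
    return (ns + ew) or "Same location"

print(compass_direction((0, 0), (3, -2)))  # east and south of origin
```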

__Maze__

The dataset consists of small mazes with questions about each maze. Each sample can be represented as colored blocks, where different colors signify distinct elements: a green block marks the starting point (S), a red block indicates the exit (E), black blocks represent impassable walls, white blocks denote navigable paths, and blue blocks trace the path from S to E. The objective is to navigate from S to E along the blue path, with movement permitted in the four cardinal directions (up, down, left, right). Alternatively, each input can be depicted in a textual format using ASCII characters. The questions include counting the number of turns from S to E and determining the spatial relationship between S and E.

The dataset includes 3 conditions: text-only, image-only, and text+image. Each condition includes 1,500 image-text pairs, for a total of 4,500.

There are 3 question types:

1) How many right turns are on the path from start to end? (answer is a number)
2) How many total turns are on the path from start to end? (answer is a number)
3) Where is the exit relative to the start? (answer is a direction or yes/no)

Each question is multiple choice.
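
The two turn-counting question types can be answered mechanically once the blue path is known as a sequence of cells. A minimal sketch, assuming the path is given as (row, col) cells with rows growing downward; this is an illustrative assumption, not the dataset's internal representation:

```python
def count_turns(path):
    """Count direction changes along a path of adjacent (row, col) cells
    from S to E, and how many of them are right turns. Movement is in
    the four cardinal directions; rows grow downward."""
    steps = [(r2 - r1, c2 - c1) for (r1, c1), (r2, c2) in zip(path, path[1:])]
    total = right = 0
    for (dr1, dc1), (dr2, dc2) in zip(steps, steps[1:]):
        if (dr1, dc1) != (dr2, dc2):
            total += 1
            # With rows growing downward, a negative cross product
            # means a clockwise change of heading, i.e. a right turn.
            if dr1 * dc2 - dc1 * dr2 < 0:
                right += 1
    return total, right

# East for two cells, then south for two: one turn, and it is a right turn.
print(count_turns([(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]))
```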

__Spatial Grid__

Each input consists of a grid of cells, each containing an image (e.g., a rabbit). Alternatively, the grid can be represented in a purely textual format; for instance, the first row might be described as: elephant | cat | giraffe | elephant | cat. The evaluations focus on tasks such as counting specific objects (e.g., rabbits) and identifying the object located at a specific coordinate in the grid (e.g., first row, second column).

The dataset includes 3 conditions: text-only, image-only, and text+image. Each condition includes 1,500 image-text pairs, for a total of 4,500 questions.

There are 3 question types:

1) How many blocks contain a specific animal? (answer is a number)
2) What animal is in a specific block, addressed by position (top-left, top, right, etc.)? (answer is an animal name)
3) What animal is in a specific block, addressed by row and column? (answer is an animal name)

Each question is multiple choice.
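
The text-only grid format and the counting and coordinate queries can be sketched as follows; the grid contents here are illustrative, not drawn from the dataset:

```python
# A toy 2 x 5 grid of animal names (illustrative only).
grid = [
    ["elephant", "cat", "giraffe", "elephant", "cat"],
    ["rabbit", "elephant", "cat", "rabbit", "giraffe"],
]

# Text rendering: one row per line, cells separated by " | ",
# matching the row format described above.
text = "\n".join(" | ".join(row) for row in grid)

def count_animal(grid, animal):
    """Counting question: how many cells contain `animal`?"""
    return sum(row.count(animal) for row in grid)

def animal_at(grid, row, col):
    """Coordinate question: 1-indexed (row, column) lookup,
    e.g. first row, second column -> animal_at(grid, 1, 2)."""
    return grid[row - 1][col - 1]

print(text)
print(count_animal(grid, "rabbit"))
print(animal_at(grid, 1, 2))
```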

---

More details here: https://arxiv.org/pdf/2406.14852