---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- multimodal
- vision-and-language
pretty_name: VSR (zeroshot)
size_categories:
- 1K<n<10K
---

### 1 Overview

The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption correctly describes the image (True) or not (False). Below are a few examples.

_The cat is behind the laptop_. (True) | _The cow is ahead of the person._ (False) | _The cake is at the edge of the dining table._ (True) | _The horse is left of the person._ (False)
:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:
![](http://images.cocodataset.org/train2017/000000119360.jpg) | ![](http://images.cocodataset.org/train2017/000000080336.jpg) | ![](http://images.cocodataset.org/train2017/000000261511.jpg) | ![](http://images.cocodataset.org/train2017/000000057550.jpg)
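
To get a feel for the data, here is a minimal sketch of loading the corpus with the Hugging Face `datasets` library. The dataset identifier and the field names (`caption`, `label`) are assumptions based on this card; see the `data/` directory linked below for the exact schema.

```python
# A minimal sketch of loading and inspecting VSR with the `datasets` library.
# The dataset ID and field names are assumptions; check the repository for
# the exact identifier and schema.
from datasets import load_dataset

dataset = load_dataset("cambridgeltl/vsr_zeroshot")  # assumed dataset ID

example = dataset["train"][0]
print(example["caption"])  # e.g. "The cat is behind the laptop."
print(example["label"])    # assumed encoding: 1 = True, 0 = False
```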

#### 1.1 Why VSR?
Understanding spatial relations is fundamental to achieving intelligence. Existing vision-language reasoning datasets are valuable, but they combine multiple types of challenges and can thus conflate different sources of error.
The VSR corpus focuses specifically on spatial relations, enabling accurate diagnosis and maximum interpretability.

#### 1.2 What have we found?
Below are baselines' by-relation performances on VSR (random split).
![](https://github.com/cambridgeltl/visual-spatial-reasoning/blob/master/figures/performance_by_relation_random_split_v4.png?raw=true)
**_More data != better performance._** The relations are sorted by frequency from left to right. The VLMs' by-relation performance correlates little with relation frequency, meaning that more training data does not necessarily lead to better performance.

<img align="right" width="320" src="https://github.com/cambridgeltl/visual-spatial-reasoning/blob/master/figures/performance_by_meta_cat_random_split_v4.png?raw=true">

**_Understanding object orientation is hard._** After classifying spatial relations into meta-categories, we can clearly see that all models are at chance level for "orientation"-related relations (such as "facing", "facing away from", "parallel to", etc.).

For more findings and takeaways, including zero-shot split performance, check out our paper!

### 2 The VSR dataset: Splits, statistics, and meta-data

The VSR corpus, after validation, contains 10,972 data points with high agreement. On top of these, we create two splits: (1) a random split and (2) a zero-shot split. For the random split, we randomly divide all data points into train, development, and test sets. The zero-shot split ensures that the train, development, and test sets share no concepts (i.e., if *dog* appears in the test set, it is not used for training or development). Below are some basic statistics of the two splits.

split | train | dev | test | total
:------|:--------:|:--------:|:--------:|:--------:
random | 7,680 | 1,097 | 2,195 | 10,972
zero-shot | 4,713 | 231 | 616 | 5,560

Check out [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for more details.
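
As a sanity check on the zero-shot property described above, a short script like the following could confirm that the splits share no concepts. The file paths and field names (`subj`, `obj`) are assumptions about the jsonl schema; adjust them to match the files in `data/`.

```python
# Sanity-check sketch: verify the zero-shot splits share no object concepts.
# Paths and field names ("subj", "obj") are assumptions about the jsonl
# schema; adjust them to match the files in data/.
import json

def concepts(path):
    with open(path) as f:
        rows = [json.loads(line) for line in f]
    return {r["subj"] for r in rows} | {r["obj"] for r in rows}

train = concepts("data/splits/zeroshot/train.jsonl")
dev = concepts("data/splits/zeroshot/dev.jsonl")
test = concepts("data/splits/zeroshot/test.jsonl")

assert train.isdisjoint(dev) and train.isdisjoint(test) and dev.isdisjoint(test)
print(f"{len(train)} train concepts, {len(dev)} dev, {len(test)} test")
```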

### 3 Baselines: Performance

We test three baselines, all supported in Hugging Face Transformers: VisualBERT [(Li et al., 2019)](https://arxiv.org/abs/1908.03557), LXMERT [(Tan and Bansal, 2019)](https://arxiv.org/abs/1908.07490) and ViLT [(Kim et al., 2021)](https://arxiv.org/abs/2102.03334).

model | random split | zero-shot
:-------------|:-------------:|:-------------:
*human* | *95.4* | *95.4*
CLIP (frozen) | 56.0 | 54.5
CLIP (finetuned)* | 65.1 | -
VisualBERT | 55.2 | 51.0
ViLT | 69.3 | **63.0**
LXMERT | **70.1** | 61.2

*The CLIP (finetuned) result is from [here](https://github.com/Sohojoe/CLIP_visual-spatial-reasoning#--fine-tuning-results).
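
For reference, below is a sketch of one way to use frozen CLIP for a true/false judgment: score the image against the caption and a templated negation, then predict the higher-scoring option. This is one plausible zero-shot setup, not necessarily the exact protocol behind the table above.

```python
# A sketch of a frozen-CLIP true/false judgment: compare image-text similarity
# for the caption vs. a crude templated negation. This is one plausible
# zero-shot setup, not necessarily the paper's evaluation protocol.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/train2017/000000119360.jpg"
image = Image.open(requests.get(url, stream=True).raw)

caption = "The cat is behind the laptop."
texts = [caption, "It is false that " + caption.lower()]  # crude negation

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, 2)
prediction = "True" if logits[0, 0] > logits[0, 1] else "False"
print(prediction)
```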



### Citation
If you find VSR useful, please cite:
```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={Transactions of the Association for Computational Linguistics},
  year={2023}
}
```
