---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: vasr
pretty_name: VASR
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- commonsense-reasoning
- visual-reasoning
task_ids: []
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."
---
# Dataset Card for VASR
- [Dataset Description](#dataset-description)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Colab notebook code for VASR evaluation with ViT](#colab-notebook-code-for-vasr-evaluation-with-vit)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
VASR is a challenging dataset for evaluating computer vision commonsense reasoning abilities. Given a triplet of images, the task is to select the candidate image B' that completes the analogy (A is to A' as B is to what?). Unlike previous work on visual analogies that focused on simple image transformations, we tackle complex analogies that require understanding of scenes. Our experiments demonstrate that state-of-the-art models struggle with carefully chosen distractors (~53% accuracy, compared to 90% human accuracy).
- **Homepage:** 
https://vasr-dataset.github.io/
- **Colab:**
https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI
- **Repository:**
https://github.com/vasr-dataset/vasr/tree/main/experiments
- **Paper:**
NA
- **Leaderboard:**
https://vasr-dataset.github.io/
- **Point of Contact:**
yonatanbitton1@gmail.com 
### Supported Tasks and Leaderboards
- Leaderboard: https://vasr-dataset.github.io/
- Papers with Code: https://paperswithcode.com/dataset/vasr
### Colab notebook code for VASR evaluation with ViT
https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI
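The notebook evaluates vision backbones on VASR. As a rough illustration of one zero-shot approach, here is a minimal sketch using CLIP embedding arithmetic; the checkpoint name and the "A' - A + B" heuristic are assumptions for illustration, field names follow the Data Fields section below, and the exact procedure is in the linked notebook:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def solve_analogy(example):
    # Embed the three input images followed by the candidate images.
    images = [example["A"], example["A'"], example["B"], *example["candidates_images"]]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    target = emb[1] - emb[0] + emb[2]  # "A' - A + B" vector arithmetic
    scores = emb[3:] @ target          # cosine similarity per candidate
    return scores.argmax().item()      # predicted candidate index
```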
### Languages
English. 
## Dataset Structure
### Data Fields
- `A`: datasets.Image() - the first input image, **A**:A'
- `A'`: datasets.Image() - the second input image; differs from A in a single key, A:**A'**
- `B`: datasets.Image() - the third input image; its value for the changing item is the same as in A, **B**:B'
- `B'`: datasets.Image() - the fourth image, the analogy solution; differs from B in a single key (the same key in which A' differs from A), B:**B'**
- `candidates_images`: [datasets.Image()] - a list of candidate image solutions to the analogy
- `label`: datasets.Value("int64") - the index of the ground-truth solution
- `candidates`: [datasets.Value("string")] - a list of candidate string solutions to the analogy
- `A_verb`: datasets.Value("string") - the verb of the first input image A
- `A'_verb`: datasets.Value("string") - the verb of the second input image A'
- `B_verb`: datasets.Value("string") - the verb of the third input image B
- `B'_verb`: datasets.Value("string") - the verb of the fourth image (the analogy solution)
- `diff_item_A`: datasets.Value("string") - FrameNet key of the item that differs between **A**:A', as it appears in image A (the same as in image B)
- `diff_item_A_str_first`: datasets.Value("string") - string representation of the FrameNet key of the differing item, as it appears in image A
- `diff_item_A'`: datasets.Value("string") - FrameNet key of the item that differs between A:**A'**, as it appears in image A' (the same as in image B')
- `diff_item_A'_str_first`: datasets.Value("string") - string representation of the FrameNet key of the differing item, as it appears in image A'
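
A minimal loading sketch, assuming the dataset is hosted under the hypothetical Hub id `nlphuji/vasr` (the repository is gated, so logging in first, e.g. with `huggingface-cli login`, may be required):

```python
from datasets import load_dataset

# Hypothetical repository id; check the Hub page for the canonical name.
vasr = load_dataset("nlphuji/vasr")

example = vasr["test"][0]
print(example["candidates"])  # candidate string solutions
print(example["label"])       # index of the ground-truth candidate
```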

### Data Splits
There are three splits: TRAIN, VALIDATION, and TEST.  
Since each instance has four candidates with exactly one correct solution, random chance is 25%.
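
As a sanity check, a uniform random-guess baseline over the `label` field should land near that 25% figure; a minimal sketch, using the field names listed above:

```python
import random

def random_baseline_accuracy(examples, seed=0):
    # Expected accuracy of guessing uniformly among 4 candidates: 1/4 = 25%.
    rng = random.Random(seed)
    hits = sum(rng.randrange(4) == ex["label"] for ex in examples)
    return hits / len(examples)
```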

## Dataset Creation

We leverage situation recognition annotations and the CLIP model to generate a large set of 500k candidate analogies.
There are two types of labels: 
- Silver labels, obtained from the automatic generation.
- Gold labels, obtained from human annotations over the silver annotations.

In the Hugging Face version we provide only the gold-labeled dataset. Please refer to the download page on the project website if you want the silver-labeled version.

### Annotations

#### Annotation process

We paid Amazon Mechanical Turk workers to solve analogies, with five annotators per analogy.
Workers were asked to select the image that best solves the analogy. 
The resulting dataset consists of the 3,820 instances for which at least 3 of the 5 annotators agreed on the answer, which happened in 93% of the cases.
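
A small sketch of that aggregation rule (a hypothetical helper, not code from the project):

```python
from collections import Counter

def aggregate(votes):
    """votes: the 5 candidate indices chosen by the annotators."""
    answer, count = Counter(votes).most_common(1)[0]
    # Keep the instance only if at least 3 of the 5 annotators agree.
    return answer if count >= 3 else None
```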

## Considerations for Using the Data

All associations were obtained from human annotators. 
All images are from the imSitu dataset (http://imsitu.org/). 
Use of this data is permitted for academic research only.

### Licensing Information

CC BY 4.0

### Citation Information

NA