|
--- |
|
annotations_creators: |
|
- crowdsourced |
|
language: |
|
- en |
|
language_creators: |
|
- found |
|
license: |
|
- cc-by-4.0 |
|
multilinguality: |
|
- monolingual |
|
paperswithcode_id: vasr |
|
pretty_name: VASR |
|
size_categories: |
|
- 1K<n<10K |
|
source_datasets: |
|
- original |
|
tags: |
|
- commonsense-reasoning |
|
- visual-reasoning |
|
task_ids: [] |
|
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using this dataset solely for research purposes. The full license agreement is available in the dataset files."
|
--- |
|
# Dataset Card for VASR |
|
- [Dataset Description](#dataset-description) |
|
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) |
|
- [Colab notebook code for VASR evaluation with ViT](#colab-notebook-code-for-vasr-evaluation-with-vit)
|
- [Languages](#languages) |
|
- [Dataset Structure](#dataset-structure) |
|
- [Data Fields](#data-fields) |
|
- [Data Splits](#data-splits) |
|
- [Dataset Creation](#dataset-creation) |
|
- [Considerations for Using the Data](#considerations-for-using-the-data) |
|
- [Licensing Information](#licensing-information) |
|
- [Citation Information](#citation-information) |
|
## Dataset Description |
|
VASR is a challenging dataset for evaluating computer vision commonsense reasoning abilities. Given a triplet of images, the task is to select a candidate image B' that completes the analogy (A is to A' as B is to what?). Unlike previous work on visual analogies that focused on simple image transformations, we tackle complex analogies requiring an understanding of scenes. Our experiments demonstrate that state-of-the-art models struggle with carefully chosen distractors (~53% accuracy, compared to 90% human accuracy).
|
- **Homepage:** |
|
https://vasr-dataset.github.io/ |
|
- **Colab** |
|
https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI |
|
- **Repository:** |
|
https://github.com/vasr-dataset/vasr/tree/main/experiments |
|
- **Paper:** |
|
NA |
|
- **Leaderboard:** |
|
https://vasr-dataset.github.io/ |
|
- **Point of Contact:** |
|
yonatanbitton1@gmail.com |
|
### Supported Tasks and Leaderboards |
|
- Leaderboard: https://vasr-dataset.github.io/

- Papers with Code: https://paperswithcode.com/dataset/vasr
|
## Colab notebook code for VASR evaluation with ViT |
|
https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI |
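
For quick reference, here is a minimal zero-shot sketch in the same spirit: embed the images with a CLIP ViT image encoder and pick the candidate closest to A' - A + B in embedding space. This is a hedged sketch, not the notebook's exact code; the checkpoint name, the vector-arithmetic baseline, and the example field names are assumptions, and the notebook's actual method may differ (for example, it may train a classifier over the embeddings instead).

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; any CLIP ViT image encoder would do for this sketch.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed(images):
    """Return L2-normalized CLIP image embeddings for a list of PIL images."""
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def solve(example):
    """Zero-shot analogy: pick the candidate nearest to A' - A + B."""
    a, a_prime, b = embed([example["A"], example["A'"], example["B"]])
    candidates = embed(example["candidates_images"])
    query = a_prime - a + b
    return int((candidates @ query).argmax())  # predicted candidate index
```

The `@` product ranks candidates by cosine similarity, since the embeddings are L2-normalized.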
|
### Languages |
|
English. |
|
## Dataset Structure |
|
### Data Fields |
|
- `A`: datasets.Image() - the first input image, **A**:A'

- `A'`: datasets.Image() - the second input image, which differs from A in a single key, A:**A'**

- `B`: datasets.Image() - the third input image, which shares its differing item with A, **B**:B'

- `B'`: datasets.Image() - the fourth image, which is the analogy solution; it differs from B in a single key (the same key that differs between A and A'), B:**B'**

- `candidates_images`: [datasets.Image()] - a list of candidate image solutions to the analogy

- `label`: datasets.Value("int64") - the index of the ground-truth solution among the candidates

- `candidates`: [datasets.Value("string")] - a list of candidate string solutions to the analogy

- `A_verb`: datasets.Value("string") - the verb of the first input image A

- `A'_verb`: datasets.Value("string") - the verb of the second input image A'

- `B_verb`: datasets.Value("string") - the verb of the third input image B

- `B'_verb`: datasets.Value("string") - the verb of the fourth image, which is the analogy solution

- `diff_item_A`: datasets.Value("string") - the FrameNet key of the item that differs between **A**:A', as it appears in image A (the same item appears in image B)

- `diff_item_A_str_first`: datasets.Value("string") - a string representation of the FrameNet key of the item that differs between **A**:A', as it appears in image A

- `diff_item_A'`: datasets.Value("string") - the FrameNet key of the item that differs between A:**A'**, as it appears in image A' (the same item appears in image B')

- `diff_item_A'_str_first`: datasets.Value("string") - a string representation of the FrameNet key of the item that differs between A:**A'**, as it appears in image A'
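
To make the schema concrete, here is a minimal loading sketch with the `datasets` library. The repo id `nlphuji/vasr` is an assumption for illustration (use this card's actual repository path), and since the dataset is gated you may need to authenticate first.

```python
from datasets import load_dataset

# Assumed repo id for illustration; substitute this dataset's actual Hub path.
# The dataset is gated, so you may need `huggingface-cli login` first.
vasr = load_dataset("nlphuji/vasr")

ex = vasr["test"][0]
print(ex["A_verb"], "->", ex["A'_verb"], "and", ex["B_verb"], "-> ?")
print("candidates:", ex["candidates"])   # four textual candidate solutions
print("gold index:", ex["label"])        # index of the correct candidate
image_A = ex["A"]                        # a PIL image decoded by datasets.Image()
```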
|
|
|
### Data Splits |
|
There are three splits: train, validation, and test.

Since each instance has four candidates, exactly one of which is correct, random-chance accuracy is 25%.
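
As a sanity check, a uniform random guesser should land near that chance level. A minimal sketch, assuming the same hypothetical repo id as in the loading sketch above:

```python
import random
from datasets import load_dataset

vasr = load_dataset("nlphuji/vasr")     # assumed repo id, as above
labels = vasr["test"]["label"]          # gold indices only; avoids decoding images
random.seed(0)
hits = sum(random.randrange(4) == y for y in labels)
print(f"random-guess accuracy: {hits / len(labels):.3f}")  # should be close to 0.25
```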
|
|
|
## Dataset Creation |
|
|
|
We leverage situation recognition annotations and the CLIP model to generate a large set of 500k candidate analogies. |
|
There are two types of labels: |
|
- Silver labels, obtained from the automatic generation. |
|
- Gold labels, obtained from human annotation of the silver-labeled instances.
|
|
|
The Hugging Face version provides only the gold-labeled dataset. To download the silver-labeled version, please refer to the download page on the project website.
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
We paid Amazon Mechanical Turk workers to solve the analogies, with five annotators per analogy.

Workers were asked to select the image that best solves the analogy.

The resulting dataset consists of the 3,820 instances on which at least 3 of the 5 annotators agreed; such a majority was reached in 93% of cases.
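
The 3-of-5 agreement rule is easy to state in code. The sketch below is purely illustrative; the vote format is an assumption, not the released annotation files.

```python
from collections import Counter

def aggregate(votes):
    """Keep an instance only if >= 3 of its 5 annotators chose the same candidate.

    `votes` is a list of the candidate indices chosen by the five annotators.
    Returns the majority answer, or None if no 3-of-5 majority exists.
    """
    answer, count = Counter(votes).most_common(1)[0]
    return answer if count >= 3 else None

assert aggregate([2, 2, 2, 0, 1]) == 2     # majority reached -> kept, label 2
assert aggregate([0, 1, 2, 3, 0]) is None  # no majority -> instance dropped
```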
|
|
|
## Considerations for Using the Data |
|
|
|
All associations were obtained from human annotators.

All images are taken from the imSitu dataset (http://imsitu.org/).

Use of this data is permitted for academic research only.
|
|
|
### Licensing Information |
|
|
|
CC BY 4.0
|
|
|
### Citation Information |
|
|
|
NA |
|
|
|
|