yonatanbitton committed on
Commit
94e488c
1 Parent(s): 18c4725

Update README.md

Files changed (1)
  1. README.md +105 -1
README.md CHANGED
@@ -1,3 +1,107 @@
  ---
- license: cc-by-4.0
+ annotations_creators:
+ - crowdsourced
+ language:
+ - en
+ language_creators:
+ - found
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ paperswithcode_id: vasr
+ pretty_name: VASR
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ tags:
+ - commonsense-reasoning
+ - visual-reasoning
+ task_ids: []
+ extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."
  ---
+ # Dataset Card for VASR
+
+ - [Dataset Description](#dataset-description)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Colab notebook code for VASR evaluation with CLIP](#colab-notebook-code-for-vasr-evaluation-with-clip)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ VASR is a challenging dataset for evaluating the commonsense reasoning abilities of computer vision models. Given a triplet of images, the task is to select the candidate image B' that completes the analogy (A is to A' as B is to what?). Unlike previous work on visual analogies that focused on simple image transformations, VASR targets complex analogies that require scene understanding. Our experiments show that state-of-the-art models struggle when the distractors are carefully chosen (~53% accuracy, compared to 90% human accuracy).
+
+ - **Homepage:**
+ https://vasr-dataset.github.io/
+ - **Colab:**
+ https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI
+ - **Repository:**
+ https://github.com/vasr-dataset/vasr/tree/main/experiments
+ - **Paper:**
+ NA
+ - **Leaderboard:**
+ https://vasr-dataset.github.io/
+ - **Point of Contact:**
+ yonatanbitton1@gmail.com
+
+ ### Supported Tasks and Leaderboards
+
+ https://vasr.github.io/leaderboard.
+ https://paperswithcode.com/dataset/vasr.
+
+ ## Colab notebook code for VASR evaluation with CLIP
+
+ https://colab.research.google.com/drive/1HUg0aHonFDK3hVFrIRYdSEfpUJeY-4dI
+
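+ The notebook linked above evaluates CLIP on VASR. As a rough illustration of one possible zero-shot approach (an embedding-arithmetic heuristic that scores candidates against B + (A' - A); the notebook's exact procedure may differ, and the CLIP checkpoint name is only an example), here is a minimal sketch:
+
+ ```python
+ # Sketch of zero-shot analogy solving with CLIP image embeddings.
+ # Assumes the heuristic "closest candidate to B + (A' - A)";
+ # field names follow the Data Fields section of this card.
+ import torch
+ from transformers import CLIPModel, CLIPProcessor
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
+
+ def embed(images):
+     """Return L2-normalized CLIP image embeddings for a list of PIL images."""
+     inputs = processor(images=images, return_tensors="pt")
+     with torch.no_grad():
+         feats = model.get_image_features(**inputs)
+     return feats / feats.norm(dim=-1, keepdim=True)
+
+ def solve_analogy(example):
+     """Pick the candidate closest to B + (A' - A) in CLIP embedding space."""
+     a, a_prime, b = embed([example["A"], example["A'"], example["B"]])
+     query = b + (a_prime - a)
+     query = query / query.norm()
+     scores = embed(example["candidates_images"]) @ query
+     return int(scores.argmax())
+ ```
+
+ Accuracy is then the fraction of examples for which `solve_analogy(example) == example["label"]`.
+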
+ ### Languages
+
+ English.
+
+ ## Dataset Structure
+
+ ### Data Fields
+
+ - A: datasets.Image() - the first input image of the analogy, **A**:A'
+ - A': datasets.Image() - the second input image, which differs from A in a single key, A:**A'**
+ - B: datasets.Image() - the third input image; it has the same differing item as image A, **B**:B'
+ - B': datasets.Image() - the fourth image, the analogy solution; it differs from B in a single key (the same key that differs between A and A'), B:**B'**
+ - candidates_images: [datasets.Image()] - a list of candidate image solutions to the analogy
+ - label: datasets.Value("int64") - the index of the ground-truth solution
+ - candidates: [datasets.Value("string")] - a list of candidate string solutions to the analogy
+ - A_verb: datasets.Value("string") - the verb of the first input image A
+ - A'_verb: datasets.Value("string") - the verb of the second input image A'
+ - B_verb: datasets.Value("string") - the verb of the third input image B
+ - B'_verb: datasets.Value("string") - the verb of the fourth image, the analogy solution
+ - diff_item_A: datasets.Value("string") - FrameNet key of the item that differs between **A**:A', as it appears in image A (shared with image B)
+ - diff_item_A_str_first: datasets.Value("string") - string representation of the FrameNet key of the item that differs between **A**:A', as it appears in image A
+ - diff_item_A': datasets.Value("string") - FrameNet key of the item that differs between A:**A'**, as it appears in image A' (shared with image B')
+ - diff_item_A'_str_first: datasets.Value("string") - string representation of the FrameNet key of the item that differs between A:**A'**, as it appears in image A'
+
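+ Below is a minimal sketch of loading the dataset and inspecting these fields with the Hugging Face `datasets` library. The repository id `nlphuji/vasr` is an assumption (check the dataset page for the exact id), and since the repository is gated you must accept the terms and log in with `huggingface-cli login` before downloading.
+
+ ```python
+ # Sketch only: dataset id assumed to be "nlphuji/vasr"; the repo is gated,
+ # so accept the access terms and authenticate before downloading.
+ from datasets import load_dataset
+
+ ds = load_dataset("nlphuji/vasr", split="test")
+ example = ds[0]
+ print(example["A_verb"], "->", example["A'_verb"])  # verbs of A and A'
+ print(example["candidates"])                        # textual candidate solutions
+ print(example["label"])                             # index of the ground-truth candidate
+ print(example["A"].size)                            # images decode to PIL.Image objects
+ ```
+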
+ ### Data Splits
+
+ There are three splits: TRAIN, VALIDATION, and TEST.
+ Each instance has four candidate images, exactly one of which is the correct solution, so random chance is 25%.
+
+ ## Dataset Creation
+
+ We leverage situation recognition annotations and the CLIP model to automatically generate a large set of 500k candidate analogies.
+ There are two types of labels:
+ - Silver labels, obtained from the automatic generation.
+ - Gold labels, obtained from human annotations on top of the silver annotations.
+
+ The Hugging Face version provides only the gold-labeled dataset. Please refer to the download page on the project website for the silver-labeled version.
+
+ ### Annotations
+
+ #### Annotation process
+
+ We paid Amazon Mechanical Turk workers to solve the analogies, with five annotators per analogy.
+ Workers were asked to select the image that best solves the analogy.
+ The resulting dataset consists of the 3,820 instances for which at least 3 of the 5 annotators agreed on the same candidate (a majority vote), which happened in 93% of the cases.
+
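+ For concreteness, this aggregation is a plain majority vote over the five annotator choices; the sketch below, with made-up annotator picks, keeps an instance only when at least three of the five annotators selected the same candidate.
+
+ ```python
+ # Hypothetical illustration of the majority-vote aggregation described above:
+ # an instance is kept only if >= 3 of its 5 annotators chose the same candidate.
+ from collections import Counter
+
+ def aggregate(annotator_choices, min_agreement=3):
+     """Return (majority_choice, kept) for one instance given the annotator picks."""
+     choice, count = Counter(annotator_choices).most_common(1)[0]
+     return choice, count >= min_agreement
+
+ print(aggregate([2, 2, 2, 0, 3]))  # (2, True)  -> kept as a gold instance
+ print(aggregate([0, 1, 2, 3, 0]))  # (0, False) -> discarded
+ ```
+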
+ ## Considerations for Using the Data
+
+ All gold labels were obtained from human annotators.
+
+ ### Licensing Information
+
+ CC BY 4.0
+
+ ### Citation Information
+
+ NA