michelecafagna26 committed on
Commit
153c73c
1 Parent(s): f56d96d

Update README.md

Files changed (1): README.md +155 -45
README.md CHANGED
@@ -14,26 +14,10 @@ pretty_name: HL (High-Level Dataset)
  size_categories:
  - 10K<n<100K
  annotations_creators:
- - crowdsurced
  annotations_origin:
- - crowdsurced
  dataset_info:
- features:
- - name: file_name
- dtype: string
- captions:
- - name: scene
- sequence:
- dtype: string
- - name: action
- sequence:
- dtype: string
- - name: rationale
- sequence:
- dtype: string
- - name: object
- sequence:
- dtype: string
  splits:
  - name: train
  num_examples: 13498
@@ -45,7 +29,6 @@ dataset_info:
  ## Table of Contents
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
@@ -69,73 +52,187 @@ dataset_info:

  ## Dataset Description

  - **Homepage:**
- - **Repository:
  - **Paper:**
  - **Point of Contact:**

- ### Dataset Summary
-
- [More Information Needed]

  ### Supported Tasks

  ### Languages

  English

  ## Dataset Structure

- [More Information Needed]

  ### Data Instances

- [More Information Needed]

  ### Data Fields

- [More Information Needed]

  ### Data Splits

- [More Information Needed]

  ## Dataset Creation

- ### Curation Rationale

- [More Information Needed]

- ### Source Data

- [More Information Needed]

- #### Initial Data Collection and Normalization

- [More Information Needed]

- #### Who are the source language producers?

- [More Information Needed]

- ### Annotations

- [More Information Needed]

- #### Annotation process

- [More Information Needed]

  #### Who are the annotators?

- [More Information Needed]

  ### Personal and Sensitive Information

- [More Information Needed]

  ## Considerations for Using the Data

  ### Social Impact of Dataset

  [More Information Needed]
@@ -146,17 +243,30 @@ English

  ### Other Known Limitations

- [More Information Needed]

- ## Additional Information

  ### Dataset Curators

- [More Information Needed]

  ### Licensing Information

- [More Information Needed]

  ### Citation Information

 
  size_categories:
  - 10K<n<100K
  annotations_creators:
+ - crowdsourced
  annotations_origin:
+ - crowdsourced
  dataset_info:
  splits:
  - name: train
  num_examples: 13498

  ## Table of Contents
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)

  ## Dataset Description

+ The High-Level (HL) dataset aligns **object-centric descriptions** from [COCO](https://arxiv.org/pdf/1405.0312.pdf)
+ with **high-level descriptions** crowdsourced along 3 axes: **_scene_**, **_action_**, and **_rationale_**.
+
+ The HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~74984 object-centric captions from COCO.
+
+ Each axis is collected by asking the following 3 questions:
+
+ 1) Where is the picture taken?
+ 2) What is the subject doing?
+ 3) Why is the subject doing it?
+
+ The high-level descriptions are human interpretations of the images and therefore read more naturally.
+ Each high-level description is provided with a _confidence score_, crowdsourced from an independent worker, measuring the extent to which
+ the high-level description is likely given the corresponding image, question, and caption. The higher the score, the closer the high-level caption is to commonsense (on a Likert scale from 1 to 5).
+
  - **Homepage:**
+ - **Repository:**
  - **Paper:**
  - **Point of Contact:**

  ### Supported Tasks

+ - image captioning
+ - visual question answering
+ - multimodal text-scoring
+ - zero-shot evaluation
+
  ### Languages

  English

  ## Dataset Structure

+ The dataset is provided with images from COCO and two metadata JSONL files containing the annotations.
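+
+ As a minimal sketch of how the metadata can be read (the file name `hl_annotations.jsonl` is a placeholder, not the actual file name, and each line is assumed to hold one JSON record shaped like the instance shown below):
+
+ ```python
+ import json
+
+ # Read one record per line from a JSONL metadata file.
+ # "hl_annotations.jsonl" is a hypothetical name: use the actual
+ # metadata files shipped with the dataset.
+ records = []
+ with open("hl_annotations.jsonl", encoding="utf-8") as f:
+     for line in f:
+         line = line.strip()
+         if line:
+             records.append(json.loads(line))
+
+ print(len(records))
+ print(records[0]["file_name"])
+ print(records[0]["captions"]["scene"])
+ ```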

  ### Data Instances

+ An instance looks like this:
+ ```json
+ {
+   "file_name": "COCO_train2014_000000138878.jpg",
+   "captions": {
+     "scene": [
+       "in a car",
+       "the picture is taken in a car",
+       "in an office."
+     ],
+     "action": [
+       "posing for a photo",
+       "the person is posing for a photo",
+       "he's sitting in an armchair."
+     ],
+     "rationale": [
+       "to have a picture of himself",
+       "he wants to share it with his friends",
+       "he's working and took a professional photo."
+     ],
+     "object": [
+       "A man sitting in a car while wearing a shirt and tie.",
+       "A man in a car wearing a dress shirt and tie.",
+       "a man in glasses is wearing a tie",
+       "Man sitting in the car seat with button up and tie",
+       "A man in glasses and a tie is near a window."
+     ]
+   },
+   "confidence": {
+     "scene": [5, 5, 4],
+     "action": [5, 5, 4],
+     "rationale": [5, 5, 4]
+   },
+   "purity": {
+     "scene": [-1.1760284900665283, -1.0889461040496826, -1.442818284034729],
+     "action": [-1.0115827322006226, -0.5917857885360718, -1.6931917667388916],
+     "rationale": [-1.0546956062316895, -0.9740906357765198, -1.2204363346099854]
+   },
+   "diversity": {
+     "scene": 25.965358893403383,
+     "action": 32.713305568898775,
+     "rationale": 2.658757840479801
+   }
+ }
+ ```

  ### Data Fields

+ - ```file_name```: original COCO filename.
+ - ```captions```: dict containing all the captions for the image. Each axis can be accessed with the axis name and contains a list of captions.
+ - ```confidence```: dict containing the confidence scores of the captions. Each axis can be accessed with the axis name and contains a list of scores. Confidence scores are not provided for the _object_ axis (COCO captions).
+ - ```purity```: dict containing the purity scores of the captions. The purity score measures the semantic similarity of the captions within the same axis (BLEURT-based).
+ - ```diversity```: dict containing the diversity scores of the captions. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based).
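+
+ As an illustrative sketch only (not the script used to produce the released scores), the snippet below takes one record shaped like the instance above, prints the mean confidence per axis, and computes a Self-BLEU-style similarity between the captions of the same axis in the spirit of the `diversity` field; the released `purity` and `diversity` values are on different scales.
+
+ ```python
+ from statistics import mean
+
+ from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
+
+ AXES = ("scene", "action", "rationale")
+
+
+ def self_bleu(captions):
+     """Average BLEU of each caption against the other captions of the same axis."""
+     smooth = SmoothingFunction().method1
+     scores = []
+     for i, hypothesis in enumerate(captions):
+         references = [c.split() for j, c in enumerate(captions) if j != i]
+         scores.append(sentence_bleu(references, hypothesis.split(),
+                                     smoothing_function=smooth))
+     return mean(scores)
+
+
+ def summarize(record):
+     """Print mean confidence and Self-BLEU per axis for one metadata record."""
+     for axis in AXES:
+         captions = record["captions"][axis]
+         confidence = record["confidence"][axis]
+         print(f"{axis}: mean confidence={mean(confidence):.2f}, "
+               f"self-BLEU={self_bleu(captions):.3f}")
+
+ # Example: summarize(records[0]) with `records` loaded as in the earlier snippet.
+ ```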

  ### Data Splits

+ There are 14997 images and 134973 high-level captions split into:
+ - Train-val: 13498 images and 121482 high-level captions
+ - Test: 1499 images and 13491 high-level captions
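+
+ Assuming the dataset is published on the Hugging Face Hub (the repository ID below is an assumption; check the dataset page for the actual one) and that a plain `load_dataset` call works for it, the splits can be inspected with the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repository ID: replace it with the actual Hub ID of this dataset.
+ ds = load_dataset("michelecafagna26/hl")
+
+ # Expected sizes according to this card: 13498 (train-val) and 1499 (test) images.
+ for split_name, split in ds.items():
+     print(split_name, split.num_rows)
+ ```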

  ## Dataset Creation

+ The dataset has been crowdsourced on Amazon Mechanical Turk.
+ From the paper:
+
+ >We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to
+ >ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing
+ >at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease
+ >the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis.
+
+ ### Curation Rationale
+
+ From the paper:
+
+ >In this work, we tackle the issue of grounding high-level linguistic concepts in the visual modality, proposing the High-Level (HL) Dataset: a
+ >V\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_.
+ >The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions
+ >used in current V\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions
+ >from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects.
+
+ ### Source Data
+
+ - Images: COCO
+ - object axis annotations: COCO
+ - scene, action, rationale annotations: crowdsourced
+ - confidence scores: crowdsourced
+ - purity score and diversity score: automatically computed

+ #### Annotation process
+
+ From the paper:
+
+ >**Pilot** We run a pilot study with the double goal of collecting feedback and defining the task instructions.
+ >With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform.
+ >We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the
+ >annotation in bulk. The final annotation form is shown in Appendix D.
+
+ >***Procedure*** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_,
+ >i.e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use
+ >their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover,
+ >differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities
+ >in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported
+ >in Figure 1. For details regarding the annotation costs see Appendix A.

  #### Who are the annotators?

+ Turkers from Amazon Mechanical Turk.

  ### Personal and Sensitive Information

+ There is no personal or sensitive information in the dataset.

  ## Considerations for Using the Data

+ [More Information Needed]
+
  ### Social Impact of Dataset

  [More Information Needed]

  ### Other Known Limitations

+ From the paper:
+
+ >**Quantifying grammatical errors**
+ >We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators.
+ >The annotators are shown the image-caption pairs and they are asked to edit the caption whenever they identify a grammatical error.
+ >The most common errors reported by the annotators are:
+ >- Misuse of prepositions
+ >- Wrong verb conjugation
+ >- Pronoun omissions
+
+ >In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them.
+ >We observe that 22.5\% of the sample has been edited and only 5\% with a Levenshtein distance greater than 10. This suggests a reasonable
+ >level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance
+ >distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement
+ >(alpha = 0.507 (Krippendorff, 2018)), computed over the shared sample.
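+
+ For illustration, a plain dynamic-programming edit distance (not necessarily the implementation used by the authors) can reproduce this kind of check; the caption pair in the example is made up:
+
+ ```python
+ def levenshtein(a: str, b: str) -> int:
+     """Classic two-row dynamic-programming Levenshtein distance."""
+     if len(a) < len(b):
+         a, b = b, a
+     previous = list(range(len(b) + 1))
+     for i, ca in enumerate(a, start=1):
+         current = [i]
+         for j, cb in enumerate(b, start=1):
+             insert_cost = current[j - 1] + 1
+             delete_cost = previous[j] + 1
+             substitute_cost = previous[j - 1] + (ca != cb)
+             current.append(min(insert_cost, delete_cost, substitute_cost))
+         previous = current
+     return previous[-1]
+
+
+ # Flag corrections that changed a caption substantially (distance > 10),
+ # mirroring the threshold discussed above.
+ original = "the person is posing for a photo"
+ corrected = "The person is posing for a photo."
+ distance = levenshtein(original, corrected)
+ print(distance, "substantial" if distance > 10 else "minor")
+ ```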

  ### Dataset Curators

+ Michele Cafagna

  ### Licensing Information

+ The images and the object-centric captions follow the [COCO Terms of Use](https://cocodataset.org/#termsofuse).
+ The remaining annotations are licensed under the Apache 2.0 license.

  ### Citation Information