---
license: apache-2.0
task_categories:
- image-to-text
- question-answering
- zero-shot-classification
language:
- en
multilinguality:
- monolingual
task_ids:
- text-scoring
pretty_name: HL (High-Level Dataset)
size_categories:
- 10K<n<100K
---

## Dataset Creation

From the paper:

>We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis.

A minimal code sketch of this image-selection step is shown below, after the annotation process.

### Curation Rationale

From the paper:

>In this work, we tackle the issue of **grounding high-level linguistic concepts in the visual modality**, proposing the High-Level (HL) Dataset: a V&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_. The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions used in current V&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions from subjective interpretations, and we characterize our data under a variety of semantic and lexical aspects.

### Source Data

- Images: COCO
- Object axis annotations: COCO
- Scene, action, and rationale annotations: crowdsourced
- Confidence scores: crowdsourced
- Purity score and diversity score: automatically computed

#### Annotation process

From the paper:

>**Pilot:** We run a pilot study with the double goal of collecting feedback and defining the task instructions. With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform. We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the annotation in bulk. The final annotation form is shown in Appendix D.
>
>**Procedure:** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_, i.e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover, differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported in Figure 1. For details regarding the annotation costs see Appendix A.
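The image-selection step quoted under Dataset Creation (COCO 2014 train-val images containing at least one person, 14997 images sampled) can be approximated with the COCO API. The snippet below is a minimal sketch under stated assumptions, not the authors' original script: the annotation file paths and the random seed are illustrative, and the paper excerpt above does not specify the exact sampling procedure.

```python
# Minimal sketch of the image-selection step: keep COCO 2014 train/val images
# that contain at least one "person" instance, then randomly sample 14997 of them.
# Assumes pycocotools is installed and the COCO 2014 instance annotations are
# available at the (illustrative) paths below.
import random

from pycocotools.coco import COCO

ANNOTATION_FILES = [
    "annotations/instances_train2014.json",  # hypothetical local path
    "annotations/instances_val2014.json",    # hypothetical local path
]

person_img_ids = set()
for ann_file in ANNOTATION_FILES:
    coco = COCO(ann_file)
    person_cat_ids = coco.getCatIds(catNms=["person"])
    person_img_ids.update(coco.getImgIds(catIds=person_cat_ids))

random.seed(0)  # illustrative seed, not from the paper
selected = random.sample(sorted(person_img_ids), 14997)
print(f"{len(person_img_ids)} candidate images, {len(selected)} sampled")
```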
#### Who are the annotators?

Turkers from Amazon Mechanical Turk.

### Personal and Sensitive Information

There is no personal or sensitive information.

## Considerations for Using the Data

[More Information Needed]

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

From the paper:

>**Quantifying grammatical errors:** We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators. The annotators are shown the image-caption pairs and they are asked to edit the caption whenever they identify a grammatical error. The most common errors reported by the annotators are:
>- Misuse of prepositions
>- Wrong verb conjugation
>- Pronoun omissions
>
>In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them. We observe that 22.5% of the sample has been edited and only 5% with a Levenshtein distance greater than 10. This suggests a reasonable level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement (alpha = 0.507 (Krippendorff, 2018)), computed over the shared sample.

### Dataset Curators

Michele Cafagna

### Licensing Information

The images and the object-centric captions follow the [COCO Terms of Use](https://cocodataset.org/#termsofuse).
The remaining annotations are licensed under the Apache-2.0 license.

### Citation Information

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address={Prague, Czech Republic},
  year={2023}
}
```
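As an illustration of the edit-distance analysis described under Other Known Limitations, the sketch below computes character-level Levenshtein distances between original and expert-corrected captions and reports the share of edited captions and of captions with a distance above 10, mirroring the thresholds used in the paper. It is a minimal, self-contained sketch: the caption pairs are invented placeholders, not data from the HL corpus.

```python
# Minimal sketch of the grammaticality analysis: compare original captions with
# their corrected versions via character-level Levenshtein (edit) distance.

def levenshtein(a: str, b: str) -> int:
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical (original, corrected) caption pairs, for illustration only.
pairs = [
    ("the man is play guitar", "the man is playing a guitar"),
    ("a woman riding a horse", "a woman riding a horse"),  # unchanged caption
]

distances = [levenshtein(orig, corrected) for orig, corrected in pairs]
edited = sum(d > 0 for d in distances)
heavily_edited = sum(d > 10 for d in distances)
print(f"edited: {edited / len(pairs):.1%}, distance > 10: {heavily_edited / len(pairs):.1%}")
```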