VictorSanh (HF staff) committed
Commit bc92237
1 parent: 7baad4d

Localized Narratives + Readme - open images subset

Files changed (2):
  1. LocalizedNarratives.py +140 -0
  2. README.md +148 -0
LocalizedNarratives.py ADDED
@@ -0,0 +1,140 @@
+ # coding=utf-8
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Localized Narratives"""
+ import json
+ import datasets
+
+
+ _CITATION = """
+ @inproceedings{PontTuset_eccv2020,
+   author    = {Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari},
+   title     = {Connecting Vision and Language with Localized Narratives},
+   booktitle = {ECCV},
+   year      = {2020}
+ }
+ """
+
+ _DESCRIPTION = """
+ Localized Narratives, a new form of multimodal image annotations connecting vision and language.
+ We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing.
+ Since the voice and the mouse pointer are synchronized, we can localize every single word in the description.
+ This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data.
+ We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available.
+ """
+
+ _HOMEPAGE = "https://google.github.io/localized-narratives/"
+
+ _LICENSE = "CC BY 4.0"
+
+ # Open Images annotations, published as JSON Lines files (train is sharded into 10 files).
+ _ANNOTATION_URLs = {
+     "train": [
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00000-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00001-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00002-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00003-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00004-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00005-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00006-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00007-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00008-of-00010.jsonl",
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_train_v6_localized_narratives-00009-of-00010.jsonl",
+     ],
+     "validation": [
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_validation_localized_narratives.jsonl"
+     ],
+     "test": [
+         "https://storage.googleapis.com/localized-narratives/annotations/open_images_test_localized_narratives.jsonl"
+     ],
+ }
+
+
+ _FEATURES = datasets.Features(
+     {
+         "image": datasets.Image(),
+         "image_url": datasets.Value("string"),
+         "dataset_id": datasets.Value("string"),
+         "image_id": datasets.Value("string"),
+         "annotator_id": datasets.Value("int32"),
+         "caption": datasets.Value("string"),
+         "timed_caption": datasets.Sequence(
+             {
+                 "utterance": datasets.Value("string"),
+                 "start_time": datasets.Value("float32"),
+                 "end_time": datasets.Value("float32"),
+             }
+         ),
+         "traces": datasets.Sequence(
+             datasets.Sequence(
+                 {
+                     "x": datasets.Value("float32"),
+                     "y": datasets.Value("float32"),
+                     "t": datasets.Value("float32"),
+                 }
+             )
+         ),
+         "voice_recording": datasets.Value("string"),
+     }
+ )
+
+
+ class LocalizedNarrativesOpenImages(datasets.GeneratorBasedBuilder):
+     """Builder for the Open Images subset of Localized Narratives."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="OpenImages", version=VERSION, description="OpenImages subset of Localized Narratives"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "OpenImages"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=_FEATURES,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Only the annotation files are downloaded; images are referenced by URL.
+         annotation_files = dl_manager.download(_ANNOTATION_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=split_name,
+                 gen_kwargs={"annotation_list": annotation_list, "split": split_name},
+             )
+             for split_name, annotation_list in annotation_files.items()
+         ]
+
+     def _generate_examples(self, annotation_list: list, split: str):
+         counter = 0
+         for annotation_file in annotation_list:
+             with open(annotation_file, "r", encoding="utf-8") as fi:
+                 for line in fi:
+                     # Each line is one JSON object: a single narrative for one image.
+                     annotation = json.loads(line)
+                     image_url = f"https://s3.amazonaws.com/open-images-dataset/{split}/{annotation['image_id']}.jpg"
+                     yield counter, {
+                         "image": image_url,
+                         "image_url": image_url,
+                         "dataset_id": annotation["dataset_id"],
+                         "image_id": annotation["image_id"],
+                         "annotator_id": annotation["annotator_id"],
+                         "caption": annotation["caption"],
+                         "timed_caption": annotation["timed_caption"],
+                         "traces": annotation["traces"],
+                         "voice_recording": annotation["voice_recording"],
+                     }
+                     counter += 1
README.md CHANGED
@@ -1,3 +1,151 @@
  ---
  license: cc-by-4.0
  ---
+
+ # Dataset Card for Localized Narratives
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://google.github.io/localized-narratives/](https://google.github.io/localized-narratives/)
+ - **Repository:** [https://github.com/google/localized-narratives](https://github.com/google/localized-narratives)
+ - **Paper:** [Connecting Vision and Language with Localized Narratives](https://arxiv.org/pdf/1912.03098.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Localized Narratives is a new form of multimodal image annotation connecting vision and language.
+ Annotators describe an image with their voice while simultaneously hovering their mouse over the region they are describing.
+ Since the voice and the mouse pointer are synchronized, every single word in the description can be localized.
+ This dense visual grounding takes the form of a mouse trace segment per word and is unique to this data.
+ The authors annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which are publicly available.
+
+ As of now, this repository only contains the OpenImages subset, but feel free to contribute the other subsets of Localized Narratives (COCO, Flickr30k, and ADE20K)!
+
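+ Once this loading script is available on the Hugging Face Hub, the subset can be loaded with the `datasets` library. A minimal sketch (the repository id below is a placeholder, not the actual Hub path):
+
+ ```
+ from datasets import load_dataset
+
+ # "<namespace>/LocalizedNarratives" is a placeholder -- substitute the actual repository id.
+ dataset = load_dataset("<namespace>/LocalizedNarratives", "OpenImages", split="validation")
+
+ # Accessing a row decodes the `image` feature, which may fetch the photo from its URL.
+ example = dataset[0]
+ print(example["image_url"])  # URL of the Open Images photo
+ print(example["caption"])    # full transcribed narrative
+ ```
+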
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance has the following structure:
+ ```
+ {
+   dataset_id: 'mscoco_val2017',
+   image_id: '137576',
+   annotator_id: 93,
+   caption: 'In this image there are group of cows standing and eating th...',
+   timed_caption: [{'utterance': 'In this', 'start_time': 0.0, 'end_time': 0.4}, ...],
+   traces: [[{'x': 0.2086, 'y': -0.0533, 't': 0.022}, ...], ...],
+   voice_recording: 'coco_val/coco_val_137576_93.ogg'
+ }
+ ```
+
+ ### Data Fields
+
+ Each instance represents one Localized Narrative annotation of one image by one annotator and has the following fields:
+
+ - `image`: the image, decoded from `image_url` (added by this loading script, not part of the original annotation files).
+ - `image_url`: URL of the image on the Open Images S3 bucket (added by this loading script).
+ - `dataset_id`: String identifying the dataset and split where the image belongs, e.g. `mscoco_val2017`.
+ - `image_id`: String identifier of the image, as specified on each dataset.
+ - `annotator_id`: Integer number uniquely identifying each annotator.
+ - `caption`: Image caption as a string of characters.
+ - `timed_caption`: List of timed utterances, i.e. `{utterance, start_time, end_time}`, where `utterance` is a word (or group of words) and (`start_time`, `end_time`) is the time during which it was spoken, with respect to the start of the recording.
+ - `traces`: List of trace segments, one between each time the mouse pointer enters the image and goes away from it. Each trace segment is represented as a list of timed points, i.e. `{x, y, t}`, where `x` and `y` are the normalized image coordinates (with origin at the top-left corner of the image) and `t` is the time in seconds since the start of the recording. Please note that the coordinates can go a bit beyond the image, i.e. <0 or >1, as the mouse traces were recorded including a small band around the image.
+ - `voice_recording`: Relative path, with respect to https://storage.googleapis.com/localized-narratives/voice-recordings, of the voice recording (in OGG format) for that particular image. See the sketch after this list for an example of combining these fields.
+
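+ For illustration, here is a minimal sketch (not part of the official tooling; the helper names are made up for this example) showing how these fields can be combined on one raw annotation, i.e. one parsed JSON line in the format shown above:
+
+ ```
+ # Collect the trace points recorded while a given utterance was being spoken.
+ def points_for_utterance(annotation, utterance_index):
+     utterance = annotation["timed_caption"][utterance_index]
+     start, end = utterance["start_time"], utterance["end_time"]
+     points = []
+     for segment in annotation["traces"]:   # one segment per mouse entry/exit
+         for point in segment:              # each point is {"x", "y", "t"}
+             if start <= point["t"] <= end:
+                 points.append((point["x"], point["y"]))
+     return points
+
+
+ # Build the full URL of the voice recording from its relative path.
+ def voice_recording_url(annotation):
+     return (
+         "https://storage.googleapis.com/localized-narratives/voice-recordings/"
+         + annotation["voice_recording"]
+     )
+ ```
+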
+ ### Data Splits
+
+ The OpenImages subset is split into `train`, `validation`, and `test`, following the splits of the annotation files released by the authors (the training annotations come from Open Images V6 and are sharded into 10 JSON Lines files).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The Localized Narratives annotations are released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license (see the `license` field of this repository).
+
+ ### Citation Information
+
+ The BibTeX entry below is reproduced from the dataset loading script:
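+
+ ```
+ @inproceedings{PontTuset_eccv2020,
+   author    = {Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari},
+   title     = {Connecting Vision and Language with Localized Narratives},
+   booktitle = {ECCV},
+   year      = {2020}
+ }
+ ```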
+
+ ### Contributions
+
+ Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.