Thomas Wang committed
Commit 5acac2f
1 Parent(s): 3a948b0

Add Visual Genome (#4161)


Commit from https://github.com/huggingface/datasets/commit/1a1d32df517bbf531da1c43a7f421c0dd2beb5d5

README.md ADDED
@@ -0,0 +1,460 @@
1
+ ---
2
+ annotations_creators:
3
+ - found
4
+ language_creators:
5
+ - found
6
+ languages:
7
+ - en
8
+ licenses:
9
+ - cc-by-4-0
10
+ multilinguality:
11
+ - monolingual
12
+ size_categories:
13
+ - 100K<n<1M
14
+ source_datasets:
15
+ - original
16
+ task_categories:
17
+ region_descriptions:
18
+ - image-to-text
19
+ objects:
20
+ - object-detection
21
+ question_answers:
22
+ - visual-question-answering
23
+ task_ids:
24
+ region_descriptions:
25
+ - image-captioning
26
+ paperswithcode_id: visual-genome
27
+ pretty_name: VisualGenome
28
+ ---
29
+
30
+ # Dataset Card for Visual Genome
31
+
32
+ ## Table of Contents
33
+ - [Table of Contents](#table-of-contents)
34
+ - [Dataset Description](#dataset-description)
35
+ - [Dataset Summary](#dataset-summary)
36
+ - [Dataset Preprocessing](#dataset-preprocessing)
37
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
38
+ - [Languages](#languages)
39
+ - [Dataset Structure](#dataset-structure)
40
+ - [Data Instances](#data-instances)
41
+ - [Data Fields](#data-fields)
42
+ - [Data Splits](#data-splits)
43
+ - [Dataset Creation](#dataset-creation)
44
+ - [Curation Rationale](#curation-rationale)
45
+ - [Source Data](#source-data)
46
+ - [Annotations](#annotations)
47
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
48
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
49
+ - [Social Impact of Dataset](#social-impact-of-dataset)
50
+ - [Discussion of Biases](#discussion-of-biases)
51
+ - [Other Known Limitations](#other-known-limitations)
52
+ - [Additional Information](#additional-information)
53
+ - [Dataset Curators](#dataset-curators)
54
+ - [Licensing Information](#licensing-information)
55
+ - [Citation Information](#citation-information)
56
+ - [Contributions](#contributions)
57
+
58
+ ## Dataset Description
59
+
60
+ - **Homepage:** https://visualgenome.org/
61
+ - **Repository:**
62
+ - **Paper:** https://visualgenome.org/static/paper/Visual_Genome.pdf
63
+ - **Leaderboard:**
64
+ - **Point of Contact:** ranjaykrishna [at] gmail [dot] com
65
+
66
+ ### Dataset Summary
67
+
68
+ Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language.
69
+
70
+ From the paper:
71
+ > Despite progress in perceptual tasks such as
72
+ image classification, computers still perform poorly on
73
+ cognitive tasks such as image description and question
74
+ answering. Cognition is core to tasks that involve not
75
+ just recognizing, but reasoning about our visual world.
76
+ However, models used to tackle the rich content in images for cognitive tasks are still being trained using the
77
+ same datasets designed for perceptual tasks. To achieve
78
+ success at cognitive tasks, models need to understand
79
+ the interactions and relationships between objects in an
80
+ image. When asked “What vehicle is the person riding?”,
81
+ computers will need to identify the objects in an image
82
+ as well as the relationships riding(man, carriage) and
83
+ pulling(horse, carriage) to answer correctly that “the
84
+ person is riding a horse-drawn carriage.”
85
+
86
+ Visual Genome has:
87
+ - 108,077 images
88
+ - 5.4 Million Region Descriptions
89
+ - 1.7 Million Visual Question Answers
90
+ - 3.8 Million Object Instances
91
+ - 2.8 Million Attributes
92
+ - 2.3 Million Relationships
93
+
94
+ From the paper:
95
+ > Our dataset contains over 108K images where each
96
+ image has an average of 35 objects, 26 attributes, and 21
97
+ pairwise relationships between objects. We canonicalize
98
+ the objects, attributes, relationships, and noun phrases
99
+ in region descriptions and questions answer pairs to
100
+ WordNet synsets.
101
+
102
+ ### Dataset Preprocessing
103
+
104
+ ### Supported Tasks and Leaderboards
105
+
106
+ ### Languages
107
+
108
+ All annotations use English as the primary language.
109
+
110
+ ## Dataset Structure
111
+
112
+ ### Data Instances
113
+
114
+ When loading a specific configuration, users have to append a version-dependent suffix:
115
+ ```python
116
+ from datasets import load_dataset
117
+ load_dataset("visual_genome", "region_descriptions_v1.2.0")
118
+ ```
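+
+ A minimal usage sketch, assuming the configuration above and its single `train` split (see [Data Splits](#data-splits)):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the region descriptions configuration; this also downloads the image archives.
+ ds = load_dataset("visual_genome", "region_descriptions_v1.2.0", split="train")
+
+ example = ds[0]
+ print(example["image_id"], example["width"], example["height"])
+ for region in example["regions"][:3]:
+     print(region["phrase"])
+ ```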
119
+
120
+ #### region_descriptions
121
+
122
+ An example looks as follows.
123
+
124
+ ```
125
+ {
126
+ "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
127
+ "image_id": 1,
128
+ "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
129
+ "width": 800,
130
+ "height": 600,
131
+ "coco_id": null,
132
+ "flickr_id": null,
133
+ "regions": [
134
+ {
135
+ "region_id": 1382,
136
+ "image_id": 1,
137
+ "phrase": "the clock is green in colour",
138
+ "x": 421,
139
+ "y": 57,
140
+ "width": 82,
141
+ "height": 139
142
+ },
143
+ ...
144
+ ]
145
+ }
146
+ ```
147
+
148
+ #### objects
149
+
150
+ An example looks as follows.
151
+
152
+ ```
153
+ {
154
+ "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
155
+ "image_id": 1,
156
+ "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
157
+ "width": 800,
158
+ "height": 600,
159
+ "coco_id": null,
160
+ "flickr_id": null,
161
+ "objects": [
162
+ {
163
+ "object_id": 1058498,
164
+ "x": 421,
165
+ "y": 91,
166
+ "w": 79,
167
+ "h": 339,
168
+ "names": [
169
+ "clock"
170
+ ],
171
+ "synsets": [
172
+ "clock.n.01"
173
+ ]
174
+ },
175
+ ...
176
+ ]
177
+ }
178
+ ```
179
+
180
+ #### attributes
181
+
182
+ An example looks as follows.
183
+
184
+ ```
185
+ {
186
+ "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
187
+ "image_id": 1,
188
+ "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
189
+ "width": 800,
190
+ "height": 600,
191
+ "coco_id": null,
192
+ "flickr_id": null,
193
+ "attributes": [
194
+ {
195
+ "object_id": 1058498,
196
+ "x": 421,
197
+ "y": 91,
198
+ "w": 79,
199
+ "h": 339,
200
+ "names": [
201
+ "clock"
202
+ ],
203
+ "synsets": [
204
+ "clock.n.01"
205
+ ],
206
+ "attributes": [
207
+ "green",
208
+ "tall"
209
+ ]
210
+ },
211
+ ...
212
+ ]
+ }
214
+ ```
215
+
216
+ #### relationships
217
+
218
+ An example looks as follows.
219
+
220
+ ```
221
+ {
222
+ "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
223
+ "image_id": 1,
224
+ "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
225
+ "width": 800,
226
+ "height": 600,
227
+ "coco_id": null,
228
+ "flickr_id": null,
229
+ "relationships": [
230
+ {
231
+ "relationship_id": 15927,
232
+ "predicate": "ON",
233
+ "synsets": "['along.r.01']",
234
+ "subject": {
235
+ "object_id": 5045,
236
+ "x": 119,
237
+ "y": 338,
238
+ "w": 274,
239
+ "h": 192,
240
+ "names": [
241
+ "shade"
242
+ ],
243
+ "synsets": [
244
+ "shade.n.01"
245
+ ]
246
+ },
247
+ "object": {
248
+ "object_id": 5046,
249
+ "x": 77,
250
+ "y": 328,
251
+ "w": 714,
252
+ "h": 262,
253
+ "names": [
254
+ "street"
255
+ ],
256
+ "synsets": [
257
+ "street.n.01"
258
+ ]
259
+ }
260
+ }
261
+ ...
262
+ ]
+ }
264
+ ```
265
+ #### question_answers
266
+
267
+ An example looks as follows.
268
+
269
+ ```
270
+ {
271
+ "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
272
+ "image_id": 1,
273
+ "url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
274
+ "width": 800,
275
+ "height": 600,
276
+ "coco_id": null,
277
+ "flickr_id": null,
278
+ "qas": [
279
+ {
280
+ "qa_id": 986768,
281
+ "image_id": 1,
282
+ "question": "What color is the clock?",
283
+ "answer": "Green.",
284
+ "a_objects": [],
285
+ "q_objects": []
286
+ },
287
+ ...
288
+ ]
+ }
290
+ ```
291
+
292
+ ### Data Fields
293
+
294
+ When loading a specific configuration, users have to append a version-dependent suffix:
295
+ ```python
296
+ from datasets import load_dataset
297
+ load_dataset("visual_genome", "region_descriptions_v1.2.0")
298
+ ```
299
+
300
+ #### region_descriptions
301
+
302
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` (see the sketch after this list).
303
+ - `image_id`: Unique numeric ID of the image.
304
+ - `url`: URL of source image.
305
+ - `width`: Image width.
306
+ - `height`: Image height.
307
+ - `coco_id`: Id mapping to MSCOCO indexing.
308
+ - `flickr_id`: Id mapping to Flickr indexing.
309
+ - `regions`: Holds a list of `Region` dataclasses:
310
+ - `region_id`: Unique numeric ID of the region.
311
+ - `image_id`: Unique numeric ID of the image.
312
+ - `x`: x coordinate of bounding box's top left corner.
313
+ - `y`: y coordinate of bounding box's top left corner.
314
+ - `width`: Bounding box width.
315
+ - `height`: Bounding box height.
316
+
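+ A sketch of the recommended access pattern, assuming `ds` holds a loaded `train` split:
+
+ ```python
+ # Efficient: select the row first, so only this one image file is decoded.
+ sample = ds[0]
+ image = sample["image"]
+
+ # Avoid: `ds["image"]` decodes every image in the split before returning the list.
+ # image = ds["image"][0]
+ ```
+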
317
+ #### objects
318
+
319
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
320
+ - `image_id`: Unique numeric ID of the image.
321
+ - `url`: URL of source image.
322
+ - `width`: Image width.
323
+ - `height`: Image height.
324
+ - `coco_id`: Id mapping to MSCOCO indexing.
325
+ - `flickr_id`: Id mapping to Flickr indexing.
326
+ - `objects`: Holds a list of `Object` dataclasses (see the cropping sketch after this list):
327
+ - `object_id`: Unique numeric ID of the object.
328
+ - `x`: x coordinate of bounding box's top left corner.
329
+ - `y`: y coordinate of bounding box's top left corner.
330
+ - `w`: Bounding box width.
331
+ - `h`: Bounding box height.
332
+ - `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
333
+ - `synsets`: List of `WordNet synsets`.
334
+
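+ The cropping sketch referenced above; it cuts the first annotated object out of its image, assuming `ds` holds the `train` split of an `objects_*` configuration:
+
+ ```python
+ sample = ds[0]
+ obj = sample["objects"][0]
+
+ # (x, y) is the top-left corner of the box and (w, h) its size, so the PIL crop box is:
+ box = (obj["x"], obj["y"], obj["x"] + obj["w"], obj["y"] + obj["h"])
+ crop = sample["image"].crop(box)
+ print(obj["names"], crop.size)
+ ```
+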
335
+ #### attributes
336
+
337
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
338
+ - `image_id`: Unique numeric ID of the image.
339
+ - `url`: URL of source image.
340
+ - `width`: Image width.
341
+ - `height`: Image height.
342
+ - `coco_id`: Id mapping to MSCOCO indexing.
343
+ - `flickr_id`: Id mapping to Flickr indexing.
344
+ - `attributes`: Holds a list of `Object` dataclasses:
345
+ - `object_id`: Unique numeric ID of the object.
346
+ - `x`: x coordinate of bounding box's top left corner.
347
+ - `y`: y coordinate of bounding box's top left corner.
348
+ - `w`: Bounding box width.
349
+ - `h`: Bounding box height.
350
+ - `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
351
+ - `synsets`: List of `WordNet synsets`.
352
+ - `attributes`: List of attributes associated with the object.
353
+
354
+ #### relationships
355
+
356
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
357
+ - `image_id`: Unique numeric ID of the image.
358
+ - `url`: URL of source image.
359
+ - `width`: Image width.
360
+ - `height`: Image height.
361
+ - `coco_id`: Id mapping to MSCOCO indexing.
362
+ - `flickr_id`: Id mapping to Flickr indexing.
363
+ - `relationships`: Holds a list of `Relationship` dataclasses (see the sketch after this list):
364
+ - `relationship_id`: Unique numeric ID of the relationship.
365
+ - `predicate`: Predicate defining the relationship between the subject and the object.
366
+ - `synsets`: List of `WordNet synsets`.
367
+ - `subject`: Object dataclass. See subsection on `objects`.
368
+ - `object`: Object dataclass. See subsection on `objects`.
369
+
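+ The sketch referenced above; it turns one annotation into a readable (subject, predicate, object) triple, assuming `ds` holds the `train` split of a `relationships_*` configuration:
+
+ ```python
+ sample = ds[0]
+ rel = sample["relationships"][0]
+
+ subject_name = rel["subject"]["names"][0]
+ object_name = rel["object"]["names"][0]
+ print(subject_name, rel["predicate"].lower(), object_name)
+ ```
+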
370
+ #### question_answers
371
+
372
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
373
+ - `image_id`: Unique numeric ID of the image.
374
+ - `url`: URL of source image.
375
+ - `width`: Image width.
376
+ - `height`: Image height.
377
+ - `coco_id`: Id mapping to MSCOCO indexing.
378
+ - `flickr_id`: Id mapping to Flickr indexing.
379
+ - `qas`: Holds a list of `Question-Answering` dataclasses (see the sketch after this list):
380
+ - `qa_id`: Unique numeric ID of the question-answer pair.
381
+ - `image_id`: Unique numeric ID of the image.
382
+ - `question`: Question.
383
+ - `answer`: Answer.
384
+ - `q_objects`: List of object dataclass associated with `question` field. See subsection on `objects`.
385
+ - `a_objects`: List of object dataclass associated with `answer` field. See subsection on `objects`.
386
+
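+ The sketch referenced above; it prints a few question/answer pairs for one image, assuming `ds` holds the `train` split of a `question_answers_*` configuration:
+
+ ```python
+ sample = ds[0]
+ for qa in sample["qas"][:3]:
+     print(qa["question"], "->", qa["answer"])
+ ```
+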
387
+ ### Data Splits
388
+
389
+ All the data is contained in the training set.
390
+
391
+ ## Dataset Creation
392
+
393
+ ### Curation Rationale
394
+
395
+ ### Source Data
396
+
397
+ #### Initial Data Collection and Normalization
398
+
399
+ #### Who are the source language producers?
400
+
401
+ ### Annotations
402
+
403
+ #### Annotation process
404
+
405
+ #### Who are the annotators?
406
+
407
+ From the paper:
408
+ > We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over
409
+ 33,000 unique workers contributed to the dataset. The
410
+ dataset was collected over the course of 6 months after
411
+ 15 months of experimentation and iteration on the data
412
+ representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where
413
+ each HIT involved creating descriptions, questions and
414
+ answers, or region graphs. Each HIT was designed such
415
+ that workers manage to earn anywhere between $6-$8
416
+ per hour if they work continuously, in line with ethical
417
+ research standards on Mechanical Turk (Salehi et al.,
418
+ 2015). Visual Genome HITs achieved a 94.1% retention
419
+ rate, meaning that 94.1% of workers who completed one
420
+ of our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States.
421
+ The majority of our workers were
422
+ between the ages of 25 and 34 years old. Our youngest
423
+ contributor was 18 years and the oldest was 68 years
424
+ old. We also had a near-balanced split of 54.15% male
425
+ and 45.85% female workers.
426
+
427
+ ### Personal and Sensitive Information
428
+
429
+ ## Considerations for Using the Data
430
+
431
+ ### Social Impact of Dataset
432
+
433
+ ### Discussion of Biases
434
+
435
+ ### Other Known Limitations
436
+
437
+ ## Additional Information
438
+
439
+ ### Dataset Curators
440
+
441
+ ### Licensing Information
442
+
443
+ Visual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License.
444
+
445
+ ### Citation Information
446
+
447
+ ```bibtex
448
+ @inproceedings{krishnavisualgenome,
449
+ title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
450
+ author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and Bernstein, Michael and Fei-Fei, Li},
451
+ year = {2016},
452
+ url = {https://arxiv.org/abs/1602.07332},
453
+ }
454
+ ```
455
+
456
+ ### Contributions
457
+
458
+ Due to limitations of the dummy_data creation, we provide a `fix_generated_dummy_data.py` script that fixes the generated dummy data in-place.
459
+
460
+ Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
1
+ {"region_descriptions_v1.0.0": {"description": "Visual Genome enable to model objects and relationships between objects.\nThey collect dense annotations of objects, attributes, and relationships within each image.\nSpecifically, the dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects.\n", "citation": "@inproceedings{krishnavisualgenome,\n title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},\n author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and Bernstein, Michael and Fei-Fei, Li},\n year = {2016},\n url = {https://arxiv.org/abs/1602.07332},\n}\n", "homepage": "https://visualgenome.org/", "license": "Creative Commons Attribution 4.0 International License", "features": {"image": {"decode": true, "id": null, "_type": "Image"}, "image_id": {"dtype": "int32", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "width": {"dtype": "int32", "id": null, "_type": "Value"}, "height": {"dtype": "int32", "id": null, "_type": "Value"}, "coco_id": {"dtype": "int64", "id": null, "_type": "Value"}, "flickr_id": {"dtype": "int64", "id": null, "_type": "Value"}, "regions": [{"region_id": {"dtype": "int32", "id": null, "_type": "Value"}, "image_id": {"dtype": "int32", "id": null, "_type": "Value"}, "phrase": {"dtype": "string", "id": null, "_type": "Value"}, "x": {"dtype": "int32", "id": null, "_type": "Value"}, "y": {"dtype": "int32", "id": null, "_type": "Value"}, "width": {"dtype": "int32", "id": null, "_type": "Value"}, "height": {"dtype": "int32", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "visual_genome", "config_name": "region_descriptions_v1.0.0", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 260873884, "num_examples": 108077, "dataset_name": "visual_genome"}}, "download_checksums": {"https://visualgenome.org/static/data/dataset/image_data.json.zip": {"num_bytes": 1780854, "checksum": "b87a94918cb2ff4d952cf1dfeca0b9cf6cd6fd204c2f8704645653be1163681a"}, "https://visualgenome.org/static/data/dataset/region_descriptions_v1.json.zip": {"num_bytes": 99460401, "checksum": "9e54cd76082f7ce5168a1779fc8c3d0629720492ec0fa12f2d8339c3e0dc9734"}, "https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip": {"num_bytes": 9731705982, "checksum": "51c682d2721f880150720bb416e0346a4c787e4c55d7f80dfd1bd3f73ba81646"}, "https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip": {"num_bytes": 5471658058, "checksum": "99da1a0ddf87011319ff3b05cf9176ffee2731cc3c52951162d9ef0d68e3cfb5"}}, "download_size": 15304605295, "post_processing_size": null, "dataset_size": 260873884, "size_in_bytes": 15565479179}}
dummy/attributes_v1.0.0/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2eb2d5b44553b1fd5760f605b661f8d9554129b72f9c63cd9a24bc6f51f0fb86
3
+ size 8375
dummy/attributes_v1.2.0/1.2.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:444a4156c3eb7716a0eaf275f4665f9dc421467721787e47b705cd316cf5f84a
3
+ size 9615
dummy/objects_v1.0.0/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7bd33685ec593b15d00eaaf7bd91baa63020089b3dc900ad2a342efb56407357
3
+ size 8340
dummy/objects_v1.2.0/1.2.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a4a6d14731b6248293b7c6298bed95f5f6feafea879200edae6211fbeae30a4
3
+ size 8892
dummy/question_answers_v1.0.0/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:333aa1b504e6f46d5dddc503b0e7ac0012515822f3e07b8f4b1b5cbce7ea291a
3
+ size 15247
dummy/question_answers_v1.2.0/1.2.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:206effd41327ea727f637dc70584dcb0af146e19b1f64542d69a5707d3b87fe2
3
+ size 15235
dummy/region_descriptions_v1.0.0/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:958b13bf902a3981671311e5d1a67e148c5f678674fa1e69bf50a41b0deb5b75
3
+ size 33502
dummy/region_descriptions_v1.2.0/1.2.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b04b63d98aca9cc6d2f43dedc2aa7b81698d0f04e58a16b6ea4637f5338d9293
3
+ size 34062
dummy/relationships_v1.0.0/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:55fd27dd21559254b79fe088f63c5ae2ec37e9a398451cf6c2bd4af4126bb4d7
3
+ size 9380
dummy/relationships_v1.2.0/1.2.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8a6b95c2b5ee1ef39ce0bf1336d3e2feac62030f12946ee68193e920546444f7
3
+ size 10169
fix_generated_dummy_data.py ADDED
@@ -0,0 +1,48 @@
1
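+ """Post-process the auto-generated dummy data archives.
+
+ The dummy data generator cannot produce the image files referenced by
+ `image_data.json`, so this script writes the placeholder `huggingface.jpg` into
+ each `dummy_data.zip` at the local path expected for every image URL.
+ """
+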
+ import json
2
+ import re
3
+ from pathlib import Path
4
+ from zipfile import ZipFile
5
+
6
+
7
+ def main():
8
+ dummy_dir = Path(__file__).parent / "dummy"
9
+ config_paths = list(dummy_dir.iterdir())
10
+
11
+ for config in config_paths:
12
+ versions = list(config.iterdir())
13
+ assert len(versions) == 1, versions
14
+ version = versions[0]
15
+ zip_filepath = version / "dummy_data.zip"
16
+
17
+ # Read the image metadata stored inside the dummy data zip
18
+ with ZipFile(zip_filepath, "r") as zip_dir:
19
+ with zip_dir.open("dummy_data/image_data.json.zip/image_data.json", "r") as fi:
20
+ image_metadatas = json.load(fi)
21
+
22
+ default_jpg_path = Path(__file__).parent / "huggingface.jpg"
23
+ with ZipFile(zip_filepath, "a") as zip_dir:
24
+ for image_metadata in image_metadatas:
25
+ url = image_metadata["url"]
26
+
27
+ matches = re.match(r"https://cs.stanford.edu/people/rak248/VG_100K(?:_(2))?/[0-9]+.jpg", url)
28
+ assert matches is not None
29
+
30
+ # Find where locally the images should be
31
+ vg_version = matches.group(1)
32
+ if vg_version is None:
33
+ local_path = re.sub(
34
+ "https://cs.stanford.edu/people/rak248/VG_100K", "dummy_data/images.zip/VG_100K", url
35
+ )
36
+ else:
37
+ local_path = re.sub(
38
+ f"https://cs.stanford.edu/people/rak248/VG_100K_{vg_version}",
39
+ f"dummy_data/images{vg_version}.zip/VG_100K_{vg_version}",
40
+ url,
41
+ )
42
+
43
+ # Write those images.
44
+ zip_dir.write(filename=default_jpg_path, arcname=local_path)
45
+
46
+
47
+ if __name__ == "__main__":
48
+ main()
huggingface.jpg ADDED
visual_genome.py ADDED
@@ -0,0 +1,465 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Visual Genome dataset."""
16
+
17
+ import json
18
+ import os
19
+ import re
20
+ from collections import defaultdict
21
+ from typing import Any, Callable, Dict, Optional
22
+ from urllib.parse import urlparse
23
+
24
+ import datasets
25
+
26
+
27
+ logger = datasets.logging.get_logger(__name__)
28
+
29
+ _CITATION = """\
30
+ @inproceedings{krishnavisualgenome,
31
+ title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
32
+ author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and Bernstein, Michael and Fei-Fei, Li},
33
+ year = {2016},
34
+ url = {https://arxiv.org/abs/1602.07332},
35
+ }
36
+ """
37
+
38
+ _DESCRIPTION = """\
39
+ Visual Genome enable to model objects and relationships between objects.
40
+ They collect dense annotations of objects, attributes, and relationships within each image.
41
+ Specifically, the dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects.
42
+ """
43
+
44
+ _HOMEPAGE = "https://visualgenome.org/"
45
+
46
+ _LICENSE = "Creative Commons Attribution 4.0 International License"
47
+
48
+ _BASE_IMAGE_URLS = {
49
+ "https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip": "VG_100K",
50
+ "https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip": "VG_100K_2",
51
+ }
52
+
53
+ _LATEST_VERSIONS = {
54
+ "region_descriptions": "1.2.0",
55
+ "objects": "1.4.0",
56
+ "attributes": "1.2.0",
57
+ "relationships": "1.4.0",
58
+ "question_answers": "1.2.0",
59
+ "image_metadata": "1.2.0",
60
+ }
61
+
62
+ # ---- Features ----
63
+
64
+ _BASE_IMAGE_METADATA_FEATURES = {
65
+ "image_id": datasets.Value("int32"),
66
+ "url": datasets.Value("string"),
67
+ "width": datasets.Value("int32"),
68
+ "height": datasets.Value("int32"),
69
+ "coco_id": datasets.Value("int64"),
70
+ "flickr_id": datasets.Value("int64"),
71
+ }
72
+
73
+ _BASE_SYNSET_FEATURES = {
74
+ "synset_name": datasets.Value("string"),
75
+ "entity_name": datasets.Value("string"),
76
+ "entity_idx_start": datasets.Value("int32"),
77
+ "entity_idx_end": datasets.Value("int32"),
78
+ }
79
+
80
+ _BASE_OBJECT_FEATURES = {
81
+ "object_id": datasets.Value("int32"),
82
+ "x": datasets.Value("int32"),
83
+ "y": datasets.Value("int32"),
84
+ "w": datasets.Value("int32"),
85
+ "h": datasets.Value("int32"),
86
+ "names": [datasets.Value("string")],
87
+ "synsets": [datasets.Value("string")],
88
+ }
89
+
90
+ _BASE_QA_OBJECT_FEATURES = {
91
+ "object_id": datasets.Value("int32"),
92
+ "x": datasets.Value("int32"),
93
+ "y": datasets.Value("int32"),
94
+ "w": datasets.Value("int32"),
95
+ "h": datasets.Value("int32"),
96
+ "names": [datasets.Value("string")],
97
+ "synsets": [datasets.Value("string")],
98
+ }
99
+
100
+ _BASE_QA_OBJECT = {
101
+ "qa_id": datasets.Value("int32"),
102
+ "image_id": datasets.Value("int32"),
103
+ "question": datasets.Value("string"),
104
+ "answer": datasets.Value("string"),
105
+ "a_objects": [_BASE_QA_OBJECT_FEATURES],
106
+ "q_objects": [_BASE_QA_OBJECT_FEATURES],
107
+ }
108
+
109
+ _BASE_REGION_FEATURES = {
110
+ "region_id": datasets.Value("int32"),
111
+ "image_id": datasets.Value("int32"),
112
+ "phrase": datasets.Value("string"),
113
+ "x": datasets.Value("int32"),
114
+ "y": datasets.Value("int32"),
115
+ "width": datasets.Value("int32"),
116
+ "height": datasets.Value("int32"),
117
+ }
118
+
119
+ _BASE_RELATIONSHIP_FEATURES = {
120
+ "relationship_id": datasets.Value("int32"),
121
+ "predicate": datasets.Value("string"),
122
+ "synsets": datasets.Value("string"),
123
+ "subject": _BASE_OBJECT_FEATURES,
124
+ "object": _BASE_OBJECT_FEATURES,
125
+ }
126
+
127
+ _NAME_VERSION_TO_ANNOTATION_FEATURES = {
128
+ "region_descriptions": {
129
+ "1.2.0": {"regions": [_BASE_REGION_FEATURES]},
130
+ "1.0.0": {"regions": [_BASE_REGION_FEATURES]},
131
+ },
132
+ "objects": {
133
+ "1.4.0": {"objects": [{**_BASE_OBJECT_FEATURES, "merged_object_ids": [datasets.Value("int32")]}]},
134
+ "1.2.0": {"objects": [_BASE_OBJECT_FEATURES]},
135
+ "1.0.0": {"objects": [_BASE_OBJECT_FEATURES]},
136
+ },
137
+ "attributes": {
138
+ "1.2.0": {"attributes": [{**_BASE_OBJECT_FEATURES, "attributes": [datasets.Value("string")]}]},
139
+ "1.0.0": {"attributes": [{**_BASE_OBJECT_FEATURES, "attributes": [datasets.Value("string")]}]},
140
+ },
141
+ "relationships": {
142
+ "1.4.0": {
143
+ "relationships": [
144
+ {
145
+ **_BASE_RELATIONSHIP_FEATURES,
146
+ "subject": {**_BASE_OBJECT_FEATURES, "merged_object_ids": [datasets.Value("int32")]},
147
+ "object": {**_BASE_OBJECT_FEATURES, "merged_object_ids": [datasets.Value("int32")]},
148
+ }
149
+ ]
150
+ },
151
+ "1.2.0": {"relationships": [_BASE_RELATIONSHIP_FEATURES]},
152
+ "1.0.0": {"relationships": [_BASE_RELATIONSHIP_FEATURES]},
153
+ },
154
+ "question_answers": {"1.2.0": {"qas": [_BASE_QA_OBJECT]}, "1.0.0": {"qas": [_BASE_QA_OBJECT]}},
155
+ }
156
+
157
+ # ----- Helpers -----
158
+
159
+
160
+ def _get_decompressed_filename_from_url(url: str) -> str:
161
+ parsed_url = urlparse(url)
162
+ compressed_filename = os.path.basename(parsed_url.path)
163
+
164
+ # Remove `.zip` suffix
165
+ assert compressed_filename.endswith(".zip")
166
+ uncompressed_filename = compressed_filename[:-4]
167
+
168
+ # Remove version.
169
+ unversioned_uncompressed_filename = re.sub(r"_v[0-9]+(?:_[0-9]+)?\.json$", ".json", uncompressed_filename)
170
+
171
+ return unversioned_uncompressed_filename
172
+
173
+
174
+ def _get_local_image_path(img_url: str, folder_local_paths: Dict[str, str]) -> str:
175
+ """
176
+ Obtain image folder given an image url.
177
+
178
+ For example:
179
+ Given `https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg` as an image url, this method returns the local path for that image.
180
+ """
181
+ matches = re.fullmatch(r"^https://cs.stanford.edu/people/rak248/(VG_100K(?:_2)?)/([0-9]+\.jpg)$", img_url)
182
+ assert matches is not None, f"Got img_url: {img_url}, matched: {matches}"
183
+ folder, filename = matches.group(1), matches.group(2)
184
+ return os.path.join(folder_local_paths[folder], filename)
185
+
186
+
187
+ # ----- Annotation normalizers ----
188
+
189
+ _BASE_ANNOTATION_URL = "https://visualgenome.org/static/data/dataset"
190
+
191
+
192
+ def _normalize_region_description_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
193
+ """Normalizes region descriptions annotation in-place"""
194
+ # Older versions use `id`/`image` instead of `region_id`/`image_id`; normalize them.
195
+ for region in annotation["regions"]:
196
+ # `id` should be converted to `region_id`:
197
+ if "id" in region:
198
+ region["region_id"] = region["id"]
199
+ del region["id"]
200
+
201
+ # `image` should be converted to `image_id`
202
+ if "image" in region:
203
+ region["image_id"] = region["image"]
204
+ del region["image"]
205
+
206
+ return annotation
207
+
208
+
209
+ def _normalize_object_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
210
+ """Normalizes object annotation in-place"""
211
+ # Normalize `id` to `object_id` and fill in a missing `synsets` field.
212
+ for object_ in annotation["objects"]:
213
+ # `id` should be converted to `object_id`:
214
+ if "id" in object_:
215
+ object_["object_id"] = object_["id"]
216
+ del object_["id"]
217
+
218
+ # Some versions of `object` annotations don't have `synsets` field.
219
+ if "synsets" not in object_:
220
+ object_["synsets"] = None
221
+
222
+ return annotation
223
+
224
+
225
+ def _normalize_attribute_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
226
+ """Normalizes attributes annotation in-place"""
227
+ # Some attributes annotations don't have an attribute field
228
+ for attribute in annotation["attributes"]:
229
+ # `id` should be converted to `object_id`:
230
+ if "id" in attribute:
231
+ attribute["object_id"] = attribute["id"]
232
+ del attribute["id"]
233
+
234
+ # `object_names` should be converted to `names`:
235
+ if "object_names" in attribute:
236
+ attribute["names"] = attribute["object_names"]
237
+ del attribute["object_names"]
238
+
239
+ # Some versions of `attribute` annotations don't have `synsets` field.
240
+ if "synsets" not in attribute:
241
+ attribute["synsets"] = None
242
+
243
+ # Some versions of `attribute` annotations don't have `attributes` field.
244
+ if "attributes" not in attribute:
245
+ attribute["attributes"] = None
246
+
247
+ return annotation
248
+
249
+
250
+ def _normalize_relationship_annotation_(annotation: Dict[str, Any]) -> Dict[str, Any]:
251
+ """Normalizes relationship annotation in-place"""
252
+ # For some reason relationships objects have a single name instead of a list of names.
253
+ for relationship in annotation["relationships"]:
254
+ # `id` should be converted to `object_id`:
255
+ if "id" in relationship:
256
+ relationship["relationship_id"] = relationship["id"]
257
+ del relationship["id"]
258
+
259
+ if "synsets" not in relationship:
260
+ relationship["synsets"] = None
261
+
262
+ subject = relationship["subject"]
263
+ object_ = relationship["object"]
264
+
265
+ for obj in [subject, object_]:
266
+ # `id` should be converted to `object_id`:
267
+ if "id" in obj:
268
+ obj["object_id"] = obj["id"]
269
+ del obj["id"]
270
+
271
+ if "name" in obj:
272
+ obj["names"] = [obj["name"]]
273
+ del obj["name"]
274
+
275
+ if "synsets" not in obj:
276
+ obj["synsets"] = None
277
+
278
+ return annotation
279
+
280
+
281
+ def _normalize_image_metadata_(image_metadata: Dict[str, Any]) -> Dict[str, Any]:
282
+ """Normalizes image metadata in-place"""
283
+ if "id" in image_metadata:
284
+ image_metadata["image_id"] = image_metadata["id"]
285
+ del image_metadata["id"]
286
+ return image_metadata
287
+
288
+
289
+ _ANNOTATION_NORMALIZER = defaultdict(lambda: lambda x: x)
290
+ _ANNOTATION_NORMALIZER.update(
291
+ {
292
+ "region_descriptions": _normalize_region_description_annotation_,
293
+ "objects": _normalize_object_annotation_,
294
+ "attributes": _normalize_attribute_annotation_,
295
+ "relationships": _normalize_relationship_annotation_,
296
+ }
297
+ )
298
+
299
+ # ---- Visual Genome loading script ----
300
+
301
+
302
+ class VisualGenomeConfig(datasets.BuilderConfig):
303
+ """BuilderConfig for Visual Genome."""
304
+
305
+ def __init__(self, name: str, version: Optional[str] = None, with_image: bool = True, **kwargs):
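+ # Note: `with_image=False` skips downloading the image archives (~15 GB) and
+ # omits the `image` column from the generated examples.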
306
+ _version = _LATEST_VERSIONS[name] if version is None else version
307
+ _name = f"{name}_v{_version}"
308
+ super(VisualGenomeConfig, self).__init__(version=datasets.Version(_version), name=_name, **kwargs)
309
+ self._name_without_version = name
310
+ self.annotations_features = _NAME_VERSION_TO_ANNOTATION_FEATURES[self._name_without_version][
311
+ self.version.version_str
312
+ ]
313
+ self.with_image = with_image
314
+
315
+ @property
316
+ def annotations_url(self):
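+ # Upstream naming scheme: the latest version of an annotation file is published as
+ # `<name>.json.zip`, while older versions carry a `_v<major>` or `_v<major>_<minor>`
+ # suffix before `.json.zip`.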
317
+ if self.version.match(_LATEST_VERSIONS[self._name_without_version]):
318
+ return f"{_BASE_ANNOTATION_URL}/{self._name_without_version}.json.zip"
319
+
320
+ major, minor = self.version.major, self.version.minor
321
+ if minor == 0:
322
+ return f"{_BASE_ANNOTATION_URL}/{self._name_without_version}_v{major}.json.zip"
323
+ else:
324
+ return f"{_BASE_ANNOTATION_URL}/{self._name_without_version}_v{major}_{minor}.json.zip"
325
+
326
+ @property
327
+ def image_metadata_url(self):
328
+ if not self.version.match(_LATEST_VERSIONS["image_metadata"]):
329
+ logger.warning(
330
+ f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions."
331
+ )
332
+ return f"{_BASE_ANNOTATION_URL}/image_data.json.zip"
333
+
334
+ @property
335
+ def features(self):
336
+ return datasets.Features(
337
+ {
338
+ **({"image": datasets.Image()} if self.with_image else {}),
339
+ **_BASE_IMAGE_METADATA_FEATURES,
340
+ **self.annotations_features,
341
+ }
342
+ )
343
+
344
+
345
+ class VisualGenome(datasets.GeneratorBasedBuilder):
346
+ """Visual Genome dataset."""
347
+
348
+ BUILDER_CONFIG_CLASS = VisualGenomeConfig
349
+ BUILDER_CONFIGS = [
350
+ *[VisualGenomeConfig(name="region_descriptions", version=version) for version in ["1.0.0", "1.2.0"]],
351
+ *[VisualGenomeConfig(name="question_answers", version=version) for version in ["1.0.0", "1.2.0"]],
352
+ *[
353
+ VisualGenomeConfig(name="objects", version=version)
354
+ # TODO: add support for 1.4.0
355
+ for version in ["1.0.0", "1.2.0"]
356
+ ],
357
+ *[VisualGenomeConfig(name="attributes", version=version) for version in ["1.0.0", "1.2.0"]],
358
+ *[
359
+ VisualGenomeConfig(name="relationships", version=version)
360
+ # TODO: add support for 1.4.0
361
+ for version in ["1.0.0", "1.2.0"]
362
+ ],
363
+ ]
364
+
365
+ def _info(self):
366
+ return datasets.DatasetInfo(
367
+ description=_DESCRIPTION,
368
+ features=self.config.features,
369
+ homepage=_HOMEPAGE,
370
+ license=_LICENSE,
371
+ citation=_CITATION,
372
+ version=self.config.version,
373
+ )
374
+
375
+ def _split_generators(self, dl_manager):
376
+ # Download image metadata.
377
+ image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url)
378
+ image_metadatas_file = os.path.join(
379
+ image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url)
380
+ )
381
+
382
+ # Download annotations
383
+ annotations_dir = dl_manager.download_and_extract(self.config.annotations_url)
384
+ annotations_file = os.path.join(
385
+ annotations_dir, _get_decompressed_filename_from_url(self.config.annotations_url)
386
+ )
387
+
388
+ # Optionally download images
389
+ if self.config.with_image:
390
+ image_folder_keys = list(_BASE_IMAGE_URLS.keys())
391
+ image_dirs = dl_manager.download_and_extract(image_folder_keys)
392
+ image_folder_local_paths = {
393
+ _BASE_IMAGE_URLS[key]: os.path.join(dir_, _BASE_IMAGE_URLS[key])
394
+ for key, dir_ in zip(image_folder_keys, image_dirs)
395
+ }
396
+ else:
397
+ image_folder_local_paths = None
398
+
399
+ return [
400
+ datasets.SplitGenerator(
401
+ name=datasets.Split.TRAIN,
402
+ gen_kwargs={
403
+ "image_folder_local_paths": image_folder_local_paths,
404
+ "image_metadatas_file": image_metadatas_file,
405
+ "annotations_file": annotations_file,
406
+ "annotation_normalizer_": _ANNOTATION_NORMALIZER[self.config._name_without_version],
407
+ },
408
+ ),
409
+ ]
410
+
411
+ def _generate_examples(
412
+ self,
413
+ image_folder_local_paths: Optional[Dict[str, str]],
414
+ image_metadatas_file: str,
415
+ annotations_file: str,
416
+ annotation_normalizer_: Callable[[Dict[str, Any]], Dict[str, Any]],
417
+ ):
418
+ with open(annotations_file, "r", encoding="utf-8") as fi:
419
+ annotations = json.load(fi)
420
+
421
+ with open(image_metadatas_file, "r", encoding="utf-8") as fi:
422
+ image_metadatas = json.load(fi)
423
+
424
+ assert len(image_metadatas) == len(annotations)
425
+ for idx, (image_metadata, annotation) in enumerate(zip(image_metadatas, annotations)):
426
+ # in-place operation to normalize image_metadata
427
+ _normalize_image_metadata_(image_metadata)
428
+
429
+ # Normalize image_id across all annotations
430
+ if "id" in annotation:
431
+ # annotation["id"] corresponds to image_metadata["image_id"]
432
+ assert (
433
+ image_metadata["image_id"] == annotation["id"]
434
+ ), f"Annotations doesn't match with image metadataset. Got image_metadata['image_id']: {image_metadata['image_id']} and annotations['id']: {annotation['id']}"
435
+ del annotation["id"]
436
+ else:
437
+ assert "image_id" in annotation
438
+ assert (
439
+ image_metadata["image_id"] == annotation["image_id"]
440
+ ), f"Annotations doesn't match with image metadataset. Got image_metadata['image_id']: {image_metadata['image_id']} and annotations['image_id']: {annotation['image_id']}"
441
+
442
+ # Normalize the image url across all annotations
443
+ if "image_url" in annotation:
444
+ # annotation["image_url"] corresponds to image_metadata["url"]
445
+ assert (
446
+ image_metadata["url"] == annotation["image_url"]
447
+ ), f"Annotations doesn't match with image metadataset. Got image_metadata['url']: {image_metadata['url']} and annotations['image_url']: {annotation['image_url']}"
448
+ del annotation["image_url"]
449
+ elif "url" in annotation:
450
+ # annotation["url"] corresponds to image_metadata["url"]
451
+ assert (
452
+ image_metadata["url"] == annotation["url"]
453
+ ), f"Annotations doesn't match with image metadataset. Got image_metadata['url']: {image_metadata['url']} and annotations['url']: {annotation['url']}"
454
+
455
+ # in-place operation to normalize annotations
456
+ annotation_normalizer_(annotation)
457
+
458
+ # optionally add image to the annotation
459
+ if image_folder_local_paths is not None:
460
+ filepath = _get_local_image_path(image_metadata["url"], image_folder_local_paths)
461
+ image_dict = {"image": filepath}
462
+ else:
463
+ image_dict = {}
464
+
465
+ yield idx, {**image_dict, **image_metadata, **annotation}