Thomas Wang committed
Commit
878e94d
1 Parent(s): 61bfd5e

Add SBU Captions Photo Dataset (#4130)


* Add SBU Captions

Co-authored-by: mariosasko <mariosasko777@gmail.com>

Commit from https://github.com/huggingface/datasets/commit/01c7f41a81c9f84de905a5888cc85cd4c7fd3f21

Files changed (4)
  1. README.md +214 -0
  2. dataset_infos.json +1 -0
  3. dummy/0.0.0/dummy_data.zip +3 -0
  4. sbu_captions.py +104 -0
README.md ADDED
@@ -0,0 +1,214 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: sbu-captions-dataset
pretty_name: SBU Captioned Photo Dataset
---

# Dataset Card for SBU Captioned Photo Dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [SBU Captioned Photo Dataset homepage](http://www.cs.virginia.edu/~vicente/sbucaptions/)
- **Repository:**
- **Paper:** [Im2Text: Describing Images Using 1 Million Captioned Photographs](https://papers.nips.cc/paper/2011/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html)
- **Leaderboard:**
- **Point of Contact:** [Vicente Ordóñez Román](mailto:vicente@virginia.edu)

### Dataset Summary

The SBU Captioned Photo Dataset is a collection of 1 million images from Flickr, each paired with a user-written caption.

### Dataset Preprocessing

By default, this dataset does not download the images locally; it only exposes their URLs. To fetch the images, use code along the following lines:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


def fetch_single_image(image_url, timeout=None, retries=0):
    """Download one image, returning None if every attempt fails."""
    image = None
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": get_datasets_user_agent()},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    """Add an "image" column to a batch by downloading its "image_url" entries in parallel."""
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("sbu_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
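Downloads can fail (dead links, timeouts, removed photos), in which case `fetch_single_image` returns `None`. As an optional follow-up, assuming the snippet above has run, such rows can be dropped with `filter`:

```python
# Keep only examples whose image was successfully fetched.
dset = dset.filter(lambda example: example["image"] is not None)
```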
### Supported Tasks and Leaderboards

- `image-to-text`: This dataset can be used to train an image captioning model, where the goal is to predict a caption given an image.
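For example, assuming the `image` column produced by the preprocessing snippet above, a minimal (purely illustrative) way to stream training pairs for a captioning model is:

```python
def iter_caption_pairs(split):
    """Yield (PIL image, caption) pairs, skipping examples whose download failed."""
    for example in split:
        if example["image"] is not None:
            yield example["image"], example["caption"]


# Usage sketch:
# for image, caption in iter_caption_pairs(dset["train"]):
#     ...
```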
### Languages

All captions are in English.

## Dataset Structure

### Data Instances

Each instance in the SBU Captioned Photo Dataset represents a single image together with its caption and the ID of the Flickr user who posted it:
```
{
  'image_url': 'http://static.flickr.com/2723/4385058960_b0f291553e.jpg',
  'user_id': '47889917@N08',
  'caption': 'A wooden chair in the living room'
}
```
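A quick way to inspect an instance (no image downloads involved) is to load the dataset and index into the training split:

```python
from datasets import load_dataset

dset = load_dataset("sbu_captions", split="train")
print(dset[0]["caption"])    # textual description
print(dset[0]["image_url"])  # static Flickr URL
```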
### Data Fields

- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `user_id`: ID of the Flickr user who posted the image and its caption.
### Data Splits

All the data is contained in a single training split of 1,000,000 instances.
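Since only a `train` split ships with the dataset, a held-out set, if one is needed, has to be carved out manually; here is a sketch using an arbitrary 1% test fraction:

```python
# Fraction and seed are illustrative choices, not part of the dataset.
splits = dset.train_test_split(test_size=0.01, seed=0)
train_set, test_set = splits["train"], splits["test"]
```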
## Dataset Creation

### Curation Rationale

From the paper:
> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.
### Source Data

The source images come from Flickr.

#### Initial Data Collection and Normalization

From the paper:
> One key contribution of our paper is a novel web-scale database of photographs with associated descriptive text. To enable effective captioning of novel images, this database must be good in two ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The captions associated with the data base photographs must be visually relevant so that transferring captions between pictures is useful. To achieve the first requirement we query Flickr using a huge number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very large, but noisy initial set of photographs with associated text.
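To make the collection strategy concrete, the sketch below pairs terms from small vocabularies the way the paper describes; the word lists are hypothetical stand-ins, not the authors' actual query vocabularies:

```python
from itertools import product

# Hypothetical vocabularies; the paper draws from objects, attributes,
# actions, stuff, and scenes.
objects = ["chair", "dog", "beach"]
attributes = ["wooden", "red", "empty"]

# One Flickr query per (attribute, object) pair, e.g. "wooden chair".
queries = [f"{attribute} {obj}" for attribute, obj in product(attributes, objects)]
```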
#### Who are the source language producers?

The Flickr users.

### Annotations

#### Annotation process

The text descriptions that Flickr users attached to their images are used directly as captions; no additional annotation was performed.
#### Who are the annotators?

The Flickr users.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.

### Licensing Information

Not specified.

### Citation Information
```bibtex
@inproceedings{NIPS2011_5dd9db5e,
  author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
  pages = {},
  publisher = {Curran Associates, Inc.},
  title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
  url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
  volume = {24},
  year = {2011}
}
```

### Contributions

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "The SBU Captioned Photo Dataset is a collection of over 1 million images with associated text descriptions extracted from Flickr.\n", "citation": "@inproceedings{NIPS2011_5dd9db5e,\n author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},\n booktitle = {Advances in Neural Information Processing Systems},\n editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},\n pages = {},\n publisher = {Curran Associates, Inc.},\n title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},\n url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},\n volume = {24},\n year = {2011}\n}\n", "homepage": "http://www.cs.virginia.edu/~vicente/sbucaptions", "license": "unknown", "features": {"image_url": {"dtype": "string", "id": null, "_type": "Value"}, "user_id": {"dtype": "string", "id": null, "_type": "Value"}, "caption": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "sbu_captioned_photo_dataset", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 143795586, "num_examples": 1000000, "dataset_name": "sbu_captioned_photo_dataset"}}, "download_checksums": {"http://www.cs.virginia.edu/~vicente/sbucaptions/sbu-captions-all.tar.gz": {"num_bytes": 49787719, "checksum": "3d145fb58fea5bf5680e71c82e93d336c1a06d726dbea7f7702d49f5bf2ff532"}}, "download_size": 49787719, "post_processing_size": null, "dataset_size": 143795586, "size_in_bytes": 193583305}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00d14d8199ccb3ce7cb398c516ccf900acaec507fab3cb3bfab8997011e90520
size 880
sbu_captions.py ADDED
@@ -0,0 +1,104 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""SBU Captioned Photo Dataset"""

import json

import datasets


_CITATION = """\
@inproceedings{NIPS2011_5dd9db5e,
  author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
  pages = {},
  publisher = {Curran Associates, Inc.},
  title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
  url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
  volume = {24},
  year = {2011}
}
"""

_DESCRIPTION = """\
The SBU Captioned Photo Dataset is a collection of over 1 million images with associated text descriptions extracted from Flickr.
"""

_LICENSE = "unknown"

_HOMEPAGE = "http://www.cs.virginia.edu/~vicente/sbucaptions"

_URL = "http://www.cs.virginia.edu/~vicente/sbucaptions/sbu-captions-all.tar.gz"

_FEATURES = datasets.Features(
    {"image_url": datasets.Value("string"), "user_id": datasets.Value("string"), "caption": datasets.Value("string")}
)

# The JSON in the archive stores columns under plural keys; map them to the feature names above.
_MAP_SBU_FEATURES_TO_DATASETS_FEATURES = {"image_urls": "image_url", "user_ids": "user_id", "captions": "caption"}


class SBUCaptionedPhotoDatasetConfig(datasets.BuilderConfig):
    """BuilderConfig for SBU Captioned Photo dataset."""

    VERSION = datasets.Version("0.0.0")

    def __init__(self, version=None, *args, **kwargs):
        super().__init__(
            version=version or self.VERSION,
            *args,
            **kwargs,
        )


class SBUCaptionedPhotoDataset(datasets.GeneratorBasedBuilder):
    """SBU Captioned Photo dataset."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=_FEATURES,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager):
        archive = dl_manager.download(_URL)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "files": dl_manager.iter_archive(archive),
                },
            )
        ]

    def _generate_examples(self, files):
        # Locate the single JSON annotations file inside the tar archive.
        annotations = None
        for path, f in files:
            if path.endswith("sbu-captions-all.json"):
                annotations = json.loads(f.read().decode("utf-8"))
                break

        # Sanity checks: the file was found and all columns have the same length.
        assert annotations is not None
        nb_samples = len(annotations[next(iter(annotations.keys()))])
        assert all(len(values) == nb_samples for values in annotations.values())
        keys = tuple(annotations.keys())

        # Transpose the columnar JSON into one dict per example.
        for idx in range(nb_samples):
            yield idx, {_MAP_SBU_FEATURES_TO_DATASETS_FEATURES[key]: annotations[key][idx] for key in keys}
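For reference, `sbu-captions-all.json` is columnar (parallel lists under `image_urls`, `user_ids`, and `captions`), and `_generate_examples` transposes it into one dict per example. A standalone sketch of that transposition, with toy data in place of the real annotations file:

```python
# Toy stand-in for sbu-captions-all.json: parallel lists of equal length.
annotations = {
    "image_urls": ["http://example.com/a.jpg", "http://example.com/b.jpg"],
    "user_ids": ["user1", "user2"],
    "captions": ["a wooden chair", "a dog on a beach"],
}
rename = {"image_urls": "image_url", "user_ids": "user_id", "captions": "caption"}

examples = [
    {rename[key]: values[idx] for key, values in annotations.items()}
    for idx in range(len(annotations["captions"]))
]
assert examples[0] == {
    "image_url": "http://example.com/a.jpg",
    "user_id": "user1",
    "caption": "a wooden chair",
}
```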