system (HF staff) committed
Commit 7c89d87 (0 parents):

Update files from the datasets library (from 1.18.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +385 -0
  3. dataset_infos.json +1 -0
  4. dummy/all/1.0.0/dummy_data.zip +3 -0
  5. red_caps.py +367 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
5
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.model filter=lfs diff=lfs merge=lfs -text
12
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
13
+ *.onnx filter=lfs diff=lfs merge=lfs -text
14
+ *.ot filter=lfs diff=lfs merge=lfs -text
15
+ *.parquet filter=lfs diff=lfs merge=lfs -text
16
+ *.pb filter=lfs diff=lfs merge=lfs -text
17
+ *.pt filter=lfs diff=lfs merge=lfs -text
18
+ *.pth filter=lfs diff=lfs merge=lfs -text
19
+ *.rar filter=lfs diff=lfs merge=lfs -text
20
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
21
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
22
+ *.tflite filter=lfs diff=lfs merge=lfs -text
23
+ *.tgz filter=lfs diff=lfs merge=lfs -text
24
+ *.xz filter=lfs diff=lfs merge=lfs -text
25
+ *.zip filter=lfs diff=lfs merge=lfs -text
26
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
27
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,385 @@
1
+ ---
2
+ annotations_creators:
3
+ - found
4
+ language_creators:
5
+ - found
6
+ languages:
7
+ - en
8
+ licenses:
9
+ - cc-by-4-0
10
+ multilinguality:
11
+ - monolingual
12
+ size_categories:
13
+ - 10M<n<100M
14
+ source_datasets:
15
+ - original
16
+ task_categories:
17
+ - other
18
+ task_ids:
19
+ - other-other-image-classification
20
+ - other-other-image-captioning
21
+ paperswithcode_id: redcaps
22
+ pretty_name: RedCaps
23
+ ---
24
+
25
+ # Dataset Card for RedCaps
26
+
27
+ ## Table of Contents
28
+ - [Table of Contents](#table-of-contents)
29
+ - [Dataset Description](#dataset-description)
30
+ - [Dataset Summary](#dataset-summary)
31
+ - [Dataset Preprocessing](#dataset-preprocessing)
32
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
33
+ - [Languages](#languages)
34
+ - [Dataset Structure](#dataset-structure)
35
+ - [Data Instances](#data-instances)
36
+ - [Data Fields](#data-fields)
37
+ - [Data Splits](#data-splits)
38
+ - [Dataset Creation](#dataset-creation)
39
+ - [Curation Rationale](#curation-rationale)
40
+ - [Source Data](#source-data)
41
+ - [Annotations](#annotations)
42
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
43
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
44
+ - [Social Impact of Dataset](#social-impact-of-dataset)
45
+ - [Discussion of Biases](#discussion-of-biases)
46
+ - [Other Known Limitations](#other-known-limitations)
47
+ - [Additional Information](#additional-information)
48
+ - [Dataset Curators](#dataset-curators)
49
+ - [Licensing Information](#licensing-information)
50
+ - [Citation Information](#citation-information)
51
+ - [Contributions](#contributions)
52
+
53
+ ## Dataset Description
54
+
55
+ - **Homepage:** https://redcaps.xyz/
56
+ - **Repository:**
57
+ - **Paper:** https://arxiv.org/abs/2111.11431
58
+ - **Leaderboard:**
59
+ - **Point of Contact:** kdexd@umich.edu
60
+
61
+ ### Dataset Summary
62
+
63
+ RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
64
+ Images and captions from Reddit depict and describe a wide variety of objects and scenes.
65
+ The data is collected from a manually curated set of subreddits (350 total),
66
+ which give coarse image labels and allow steering of the dataset composition
67
+ without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and
68
+ fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image
69
+ labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually
70
+ unrelated images through a common semantic meaning (r/perfectfit).
71
+
72
+ ### Dataset Preprocessing
73
+
74
+ This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
75
+
76
+ ```python
+ from datasets import load_dataset
+ from datasets.utils.file_utils import get_datasets_user_agent
+
+ def fetch_images(batch, timeout):
+     import PIL.Image
+     import requests
+
+     images = []
+     for image_url in batch["image_url"]:
+         try:
+             image = PIL.Image.open(
+                 requests.get(
+                     image_url,
+                     stream=True,
+                     headers={"user-agent": get_datasets_user_agent()},
+                     timeout=timeout,
+                 ).raw
+             )
+         except requests.exceptions.ConnectionError:
+             image = None
+         images.append(image)
+     batch["image"] = images
+     return batch
+
+ timeout = None
+ num_proc = 4
+ dset = load_dataset("red_caps", "rabbits_2017")
+ dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"timeout": timeout}, num_proc=num_proc)
+ ```
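+
+ Fetching can fail for individual URLs (the snippet above stores `None` for any image it could not download), so you may want to drop those rows afterwards. A minimal follow-up sketch, assuming the `map` call above has already added the `"image"` column:
+
+ ```python
+ # Keep only the examples whose image was fetched successfully.
+ dset = dset.filter(lambda example: example["image"] is not None, num_proc=num_proc)
+ ```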
106
+
107
+ ### Supported Tasks and Leaderboards
108
+
109
+ From the paper:
110
+ > We have used our dataset to train deep neural networks that perform image captioning, and
111
+ that learn transferable visual representations for a variety of downstream visual recognition tasks
112
+ (image classification, object detection, instance segmentation).
113
+
114
+ > We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks,
115
+ such as image or text retrieval or text-to-image synthesis.
116
+
117
+ ### Languages
118
+
119
+ All of the subreddits in RedCaps use English as their primary language.
120
+
121
+ ## Dataset Structure
122
+
123
+ ### Data Instances
124
+
125
+ Each instance in RedCaps represents a single Reddit image post:
126
+
127
+ ```
+ {
+   'image_id': 'bpzj7r',
+   'author': 'djasz1',
+   'image_url': 'https://i.redd.it/ho0wntksivy21.jpg',
+   'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.',
+   'caption': "found on a friend's property in the keys fl. she is now happily living in my house.",
+   'subreddit': 3,
+   'score': 72,
+   'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41),
+   'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/',
+   'crosspost_parents': None
+ }
+ ```
139
+
140
+ ### Data Fields
141
+
142
+ - `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit).
143
+ - `author`: Reddit username of the image post author.
144
+ - `image_url`: Static URL for downloading the image associated with the post.
145
+ - `raw_caption`: Textual description of the image, written by the post author.
146
+ - `caption`: Cleaned version of `raw_caption` by us (see Q35).
147
+ - `subreddit`: Name of subreddit where the post was submitted.
148
+ - `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost.
149
+ - `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit.
150
+ - `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>).
151
+ - `crosspost_parents`: List of parent posts. This field is optional.
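+
+ Note that `subreddit` is stored as an integer `ClassLabel` (hence the `'subreddit': 3` value in the instance above). A minimal sketch for mapping it back to the subreddit name with the standard `datasets` API:
+
+ ```python
+ from datasets import load_dataset
+
+ dset = load_dataset("red_caps", "rabbits_2017", split="train")
+ subreddit_label = dset.features["subreddit"]          # ClassLabel over all 350 subreddit names
+ print(subreddit_label.int2str(3))                     # -> "airplants" (the instance shown above)
+ print(subreddit_label.int2str(dset[0]["subreddit"]))  # name for the first example of this config
+ ```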
152
+
153
+
154
+ ### Data Splits
155
+
156
+ All the data is contained in the training set, which has nearly 12M (12,011,111) instances.
157
+
158
+ From the paper:
159
+ > We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while
160
+ the validation split is derived from downstream task(s). If users require a validation split, we
161
+ recommend sampling it such that it follows the same subreddit distribution as the entire dataset.
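+
+ A minimal sketch of such a subreddit-stratified validation split (the helper below is illustrative and not part of the official release):
+
+ ```python
+ import numpy as np
+ from datasets import load_dataset
+
+ def stratified_validation_indices(dset, frac=0.01, seed=0):
+     """Sample `frac` of the examples from every subreddit (hypothetical helper)."""
+     rng = np.random.default_rng(seed)
+     labels = np.asarray(dset["subreddit"])
+     val_indices = []
+     for label in np.unique(labels):
+         idx = np.flatnonzero(labels == label)
+         n_val = max(1, int(len(idx) * frac))
+         val_indices.extend(rng.choice(idx, size=n_val, replace=False).tolist())
+     return sorted(val_indices)
+
+ dset = load_dataset("red_caps", "rabbits_2017", split="train")
+ val_indices = stratified_validation_indices(dset)
+ val_set = dset.select(val_indices)
+ train_set = dset.select(sorted(set(range(len(dset))) - set(val_indices)))
+ ```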
162
+
163
+ ## Dataset Creation
164
+
165
+ ### Curation Rationale
166
+
167
+ From the paper:
168
+ > Large datasets of image-text pairs are widely used for pre-training generic representations
169
+ that transfer to a variety of downstream vision and vision-and-language tasks. Existing public
170
+ datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML
171
+ alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex
172
+ data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is
173
+ inefficient and diversity is artificially suppressed. We argue that the quality of data depends on
174
+ its source, and the human intent behind its creation. In this work, we explore Reddit – a social
175
+ media platform, for curating high quality data. We introduce RedCaps – a large dataset of
176
+ 12M image-text pairs from Reddit. While we expect the use-cases of RedCaps to be similar to
177
+ existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection,
178
+ better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation.
179
+
180
+ ### Source Data
181
+
182
+ #### Initial Data Collection and Normalization
183
+
184
+ From the paper:
185
+ > **Data Collection Pipeline**
186
+ Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task
187
+ involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning.
188
+ **Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits
189
+ have their own rules, community norms, and moderators so curating subreddits allows us to steer the
190
+ dataset’s composition without annotating individual instances. We select subreddits with a high volume of image posts, where images tend to be photographs (rather than memes, drawings, screenshots,
191
+ etc) and post titles tend to describe image content (rather than making jokes, political commentary,
192
+ etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the
193
+ number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or
194
+ comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on
195
+ general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund),
196
+ plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food
197
+ (r/steak, r/macarons), scenery (r/cityporn, r/desertporn), or activities (r/carpentry, r/kayaking).
199
+ In total we collect data from 350 subreddits; the full list can be found in Appendix A.
200
+ **Step 2**. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image
201
+ posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months
202
+ after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains:
203
+ Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain
204
+ multiple images (gallery posts) – in this case we only collect the first image and associate it with
205
+ the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts
206
+ marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content.
207
+ **Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale
208
+ sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase
209
+ captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following
210
+ [29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets
211
+ ((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc],
212
+ image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram:
213
+ @user], and other references (link in comments). Finally, like [31] we replace social media
214
+ handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy.
215
+ Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them,
216
+ as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard
217
+ captions without nouns or that don’t overlap image tags, we do not discard any instances in this step.
218
+ Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is
219
+ less resource-intensive than existing datasets – we do not require webpage crawlers, search engines,
220
+ or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more
221
+ subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate
222
+ user privacy risks and harmful stereotypes in RedCaps, resulting in a final size of 12M instances.
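+
+ For readers who want to apply a similar normalization to their own captions, the sketch below approximates the cleaning steps described above (lowercasing, ftfy normalization, dropping bracketed sub-strings, replacing @-handles with a [USR] token). It is an illustrative, hypothetical helper, not the authors' released code:
+
+ ```python
+ import re
+ import unicodedata
+
+ import ftfy
+
+ def clean_caption(raw_caption: str) -> str:
+     caption = ftfy.fix_text(raw_caption).lower()
+     # Strip accents, emojis and other non-latin characters.
+     caption = unicodedata.normalize("NFKD", caption).encode("ascii", "ignore").decode("ascii")
+     # Drop sub-strings enclosed in brackets, e.g. "[oc]" or "(800x600 px)".
+     caption = re.sub(r"\([^)]*\)|\[[^\]]*\]", "", caption)
+     # Replace social-media handles (words starting with '@') with a [USR] token.
+     caption = re.sub(r"@\S+", "[USR]", caption)
+     # Collapse whitespace left behind by the removals.
+     return re.sub(r"\s+", " ", caption).strip()
+ ```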
223
+
224
+ #### Who are the source language producers?
225
+
226
+ Reddit is the singular data source for RedCaps.
227
+
228
+ ### Annotations
229
+
230
+ #### Annotation process
231
+
232
+ The dataset is built using a fully automatic data collection pipeline that doesn't require any human annotators.
233
+
234
+ #### Who are the annotators?
235
+
236
+ The annotation process doesn't require any human annotators.
237
+
238
+ ### Personal and Sensitive Information
239
+
240
+ From the paper:
241
+ > **Does the dataset relate to people?**
242
+ The dataset pertains to people in that people wrote the captions and posted images to Reddit
243
+ that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid
244
+ large quantities of images containing people:
245
+ (a) We collect data from manually curated subreddits in which most content primarily pertains
246
+ to animals, objects, places, or activities. We exclude all subreddits whose primary purpose
247
+ is to share and describe images of people (such as celebrity photos or user selfies).
248
+ (b) We use an off-the-shelf face detector to find and remove images with potential presence of
249
+ human faces. We manually checked 50K random images in RedCaps (Q16) and found 79
250
+ images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images
251
+ with identifiable people. Refer to Section 2.2 in the main paper.
252
+
253
+ > **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in
254
+ combination with other data) from the dataset?**
255
+ Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be
256
+ used to look up the Reddit user profile, and some Reddit users may have identifying information
257
+ in their profiles. Some images may contain human faces which could be identified by
258
+ appearance. However, note that all this information is already public on Reddit, and searching it
259
+ in RedCaps is no easier than searching directly on Reddit.
260
+
261
+ > **Were the individuals in question notified about the data collection?**
262
+ No. Reddit users are anonymous by default, and are not required to share their personal contact
263
+ information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps
264
+ image posts is by sending them private messages on Reddit. This is practically difficult to do
265
+ manually, and it would be classified as spam and blocked by Reddit if we attempted to programmatically
266
+ send a templated message to millions of users.
267
+
268
+ > **Did the individuals in question consent to the collection and use of their data?**
269
+ Users did not explicitly consent to the use of their data in our dataset. However, by uploading
270
+ their data on Reddit, they consent that it would appear on the Reddit platform and will be
271
+ accessible via the official Reddit API (which we use to collect RedCaps).
272
+
273
+ > **If consent was obtained, were the consenting individuals provided with a mechanism to
274
+ revoke their consent in the future or for certain uses?**
275
+ Users have full control over the presence of their data in our dataset. If users wish to revoke
276
+ their consent, they can delete the underlying Reddit post – it will be automatically removed
277
+ from RedCaps since we distribute images as URLs. Moreover, we provide an opt-out request
278
+ form on our dataset website for anybody to request removal of an individual instance if it is
279
+ potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.).
280
+
281
+ ## Considerations for Using the Data
282
+
283
+ ### Social Impact of Dataset
284
+
285
+ From the paper:
286
+ > **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
287
+ a data protection impact analysis) been conducted?**
288
+ No.
289
+
290
+ ### Discussion of Biases
291
+
292
+ From the paper:
293
+ > **Harmful Stereotypes**: Another concern with
294
+ Reddit data is that images or language may represent harmful stereotypes about gender, race, or other
295
+ characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation
296
+ for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35]
297
+ whose training data includes at least 63K documents from banned or quarantined subreddits which
298
+ may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways:
299
+ > * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low
300
+ precision (∼1%) – most detections are non-NSFW images with pink and beige hues.
301
+ > * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels.
302
+
303
+ > **Reddit demographics**: Reddit’s user demographics are not representative of the population at large.
304
+ Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs
305
+ 22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users
306
+ are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United
307
+ States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together,
308
+ these demographic biases likely also bias the types of objects and places that appear in images on
309
+ Reddit, and the language used to describe these images. We do not offer explicit countermeasures to
310
+ these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51].
311
+ Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or
312
+ gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet
313
+ data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G.
314
+
315
+ > **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**
316
+ The scale of RedCaps means that we are unable to verify the contents of all images and
317
+ captions. However we have tried to minimize the possibility that RedCaps contains data that
318
+ might be offensive, insulting, threatening, or might cause anxiety via the following mitigations:
319
+ (a) We manually curate the set of subreddits from which to collect data; we only chose
320
+ subreddits that are not marked NSFW and which generally contain non-offensive content.
321
+ (b) Within our curated subreddits, we did not include any posts marked NSFW.
322
+ (c) We removed all instances whose captions contained any of the 400 potentially offensive
323
+ words or phrases. Refer to Section 2.2 in the main paper.
324
+ (d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector.
325
+ We manually checked 50K random images in RedCaps and found one image containing
326
+ nudity (exposed buttocks; no identifiable face). Refer to Section 2.2 in the main paper.
327
+
328
+ > **Does the dataset identify any subpopulations (e.g., by age, gender)?**
329
+ RedCaps does not explicitly identify any subpopulations. Since some images contain people
330
+ and captions are free-form natural language written by Reddit users, it is possible that some
331
+ captions may identify people appearing in individual images as part of a subpopulation.
332
+
333
+ > **Were any ethical review processes conducted (e.g., by an institutional review board)?**
334
+ We did not conduct a formal ethical review process via institutional review boards. However,
335
+ as described in Section 2.2 of the main paper and Q16 we employed several filtering mechanisms
336
+ to try and remove instances that could be problematic.
337
+
338
+ ### Other Known Limitations
339
+
340
+ From the paper:
341
+ > **Are there any errors, sources of noise, or redundancies in the dataset?**
342
+ RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured.
343
+ Some instances may also have duplicate images and captions – Reddit users may have shared
344
+ the same image post in multiple subreddits. Such redundancies constitute a very small fraction
345
+ of the dataset, and should have almost no effect on training large-scale models.
346
+
347
+ > **Does the dataset contain data that might be considered confidential (e.g., data that is
348
+ protected by legal privilege or by doctor-patient confidentiality, data that includes the
349
+ content of individuals' non-public communications)?**
350
+ No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps.
351
+
352
+ ## Additional Information
353
+
354
+ ### Dataset Curators
355
+
356
+ From the paper:
357
+ > Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps:
358
+ Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson.
359
+
360
+ ### Licensing Information
361
+
362
+ The image metadata is licensed under the CC BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/
363
+ api-terms) and users must comply with the Reddit User Agreement, Content Policy,
364
+ and Privacy Policy – all accessible at https://www.redditinc.com/policies.
365
+
366
+ From the paper:
367
+ > RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
368
+
369
+
370
+ ### Citation Information
371
+
372
+ ```
373
+ @misc{desai2021redcaps,
374
+ title={RedCaps: web-curated image-text data created by the people, for the people},
375
+ author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
376
+ year={2021},
377
+ eprint={2111.11431},
378
+ archivePrefix={arXiv},
379
+ primaryClass={cs.CV}
380
+ }
381
+ ```
382
+
383
+ ### Contributions
384
+
385
+ Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
1
+ {"all": {"description": "RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.\nImages and captions from Reddit depict and describe a wide variety of objects and scenes.\nThe data is collected from a manually curated set of subreddits (350 total),\nwhich give coarse image labels and allow steering of the dataset composition\nwithout labeling individual instances.\n", "citation": "@misc{desai2021redcaps,\n title={RedCaps: web-curated image-text data created by the people, for the people},\n author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},\n year={2021},\n eprint={2111.11431},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n", "homepage": "https://redcaps.xyz/", "license": "CC BY 4.0", "features": {"image_id": {"dtype": "string", "id": null, "_type": "Value"}, "author": {"dtype": "string", "id": null, "_type": "Value"}, "image_url": {"dtype": "string", "id": null, "_type": "Value"}, "raw_caption": {"dtype": "string", "id": null, "_type": "Value"}, "caption": {"dtype": "string", "id": null, "_type": "Value"}, "subreddit": {"num_classes": 350, "names": ["abandonedporn", "abandoned", "absoluteunits", "airplants", "alltheanimals", "amateurphotography", "amateurroomporn", "animalporn", "antiques", "antkeeping", "ants", "aquariums", "architectureporn", "artefactporn", "astronomy", "astrophotography", "australiancattledog", "australianshepherd", "autumnporn", "averagebattlestations", "awwducational", "awwnverts", "axolotls", "backpacking", "backyardchickens", "baking", "ballpython", "barista", "bassfishing", "battlestations", "bbq", "beagle", "beardeddragons", "beekeeping", "beerandpizza", "beerporn", "beerwithaview", "beginnerwoodworking", "bengalcats", "bento", "bernesemountaindogs", "berries", "bettafish", "bicycling", "bikecommuting", "birding", "birdphotography", "birdpics", "birdsofprey", "birds", "blackcats", "blacksmith", "bladesmith", "boatporn", "bonsai", "bookporn", "bookshelf", "bordercollie", "bostonterrier", "botanicalporn", "breadit", "breakfastfood", "breakfast", "bridgeporn", "brochet", "budgetfood", "budgies", "bulldogs", "burgers", "butterflies", "cabinporn", "cactus", "cakedecorating", "cakewin", "cameras", "campingandhiking", "camping", "carnivorousplants", "carpentry", "carporn", "cassetteculture", "castiron", "castles", "casualknitting", "catpictures", "cats", "ceramics", "chameleons", "charcuterie", "cheesemaking", "cheese", "chefit", "chefknives", "chickens", "chihuahua", "chinchilla", "chinesefood", "churchporn", "cider", "cityporn", "classiccars", "cockatiel", "cocktails", "coffeestations", "coins", "cookiedecorating", "corgi", "cornsnakes", "cozyplaces", "crafts", "crestedgecko", "crochet", "crossstitch", "crows", "crystals", "cupcakes", "dachshund", "damnthatsinteresting", "desertporn", "designmyroom", "desksetup", "dessertporn", "dessert", "diy", "dobermanpinscher", "doggos", "dogpictures", "drunkencookery", "duck", "dumpsterdiving", "earthporn", "eatsandwiches", "embroidery", "entomology", "equestrian", "espresso", "exposureporn", "eyebleach", "f1porn", "farming", "femalelivingspace", "fermentation", "ferrets", "fireporn", "fishing", "fish", "flowers", "flyfishing", "foodporn", "food", "foraging", "fossilporn", "fountainpens", "foxes", "frenchbulldogs", "frogs", "gardening", "gardenwild", "geckos", "gemstones", "geologyporn", "germanshepherds", "glutenfree", "goldenretrievers", "goldfish", "gold", "greatpyrenees", "grilledcheese", "grilling", "guineapigs", "gunporn", "guns", "hamsters", "handtools", 
"healthyfood", "hedgehog", "helicopters", "herpetology", "hiking", "homestead", "horses", "hotpeppers", "houseplants", "houseporn", "husky", "icecreamery", "indoorgarden", "infrastructureporn", "insects", "instantpot", "interestingasfuck", "interiordesign", "itookapicture", "jellyfish", "jewelry", "kayakfishing", "kayaking", "ketorecipes", "knifeporn", "knives", "labrador", "leathercraft", "leopardgeckos", "lizards", "lookatmydog", "macarons", "machineporn", "macroporn", "malelivingspace", "mead", "mealprepsunday", "mechanicalkeyboards", "mechanicalpencils", "melts", "metalworking", "microgreens", "microporn", "mildlyinteresting", "mineralporn", "monitors", "monstera", "mostbeautiful", "motorcycleporn", "muglife", "mushroomgrowers", "mushroomporn", "mushrooms", "mycology", "natureisfuckinglit", "natureporn", "nebelung", "orchids", "otters", "outdoors", "owls", "parrots", "pelletgrills", "pens", "perfectfit", "permaculture", "photocritique", "photographs", "pics", "pitbulls", "pizza", "plantbaseddiet", "plantedtank", "plantsandpots", "plants", "pomeranians", "pottery", "pourpainting", "proplifting", "pugs", "pug", "quilting", "rabbits", "ramen", "rarepuppers", "reeftank", "reptiles", "resincasting", "roomporn", "roses", "rottweiler", "ruralporn", "sailing", "salsasnobs", "samoyeds", "savagegarden", "scotch", "seaporn", "seriouseats", "sewing", "sharks", "shiba", "shihtzu", "shrimptank", "siamesecats", "siberiancats", "silverbugs", "skyporn", "sloths", "smoking", "snails", "snakes", "sneakers", "sneks", "somethingimade", "soup", "sourdough", "sousvide", "spaceporn", "spicy", "spiderbro", "spiders", "squirrels", "steak", "streetphotography", "succulents", "superbowl", "supermodelcats", "sushi", "tacos", "tarantulas", "tastyfood", "teaporn", "tea", "tequila", "terrariums", "thedepthsbelow", "thriftstorehauls", "tinyanimalsonfingers", "tonightsdinner", "toolporn", "tools", "torties", "tortoise", "tractors", "trailrunning", "trains", "trucks", "turtle", "underwaterphotography", "upcycling", "urbanexploration", "urbanhell", "veganfoodporn", "veganrecipes", "vegetablegardening", "vegetarian", "villageporn", "vintageaudio", "vintage", "vinyl", "volumeeating", "watches", "waterporn", "weatherporn", "wewantplates", "wildernessbackpacking", "wildlifephotography", "wine", "winterporn", "woodcarving", "woodworking", "workbenches", "workspaces", "yarnaddicts", "zerowaste"], "names_file": null, "id": null, "_type": "ClassLabel"}, "score": {"dtype": "int32", "id": null, "_type": "Value"}, "created_utc": {"dtype": "timestamp[s, tz=UTC]", "id": null, "_type": "Value"}, "permalink": {"dtype": "string", "id": null, "_type": "Value"}, "crosspost_parents": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "red_caps", "config_name": "all", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3235627787, "num_examples": 12011121, "dataset_name": "red_caps"}}, "download_checksums": {"https://www.dropbox.com/s/cqtdpsl4hewlli1/redcaps_v1.0_annotations.zip?dl=1": {"num_bytes": 1061908181, "checksum": "d502951cb91fe9163c53ef7a72587f29e3d975f081b86139d66cf29dd14e8864"}}, "download_size": 1061908181, "post_processing_size": null, "dataset_size": 3235627787, "size_in_bytes": 4297535968}}
dummy/all/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:82dcacecd82f25c1fbfae629fba39f2a51126be7d9c1edea4435431214bfb66f
3
+ size 302735
red_caps.py ADDED
@@ -0,0 +1,367 @@
1
+ # coding=utf-8
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """RedCaps dataset."""
16
+
17
+
18
+ import collections
19
+ import json
20
+ import os
21
+ import re
22
+
23
+ import datasets
24
+
25
+
26
+ _CITATION = """\
27
+ @misc{desai2021redcaps,
28
+ title={RedCaps: web-curated image-text data created by the people, for the people},
29
+ author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
30
+ year={2021},
31
+ eprint={2111.11431},
32
+ archivePrefix={arXiv},
33
+ primaryClass={cs.CV}
34
+ }
35
+ """
36
+
37
+ _DESCRIPTION = """\
38
+ RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
39
+ Images and captions from Reddit depict and describe a wide variety of objects and scenes.
40
+ The data is collected from a manually curated set of subreddits (350 total),
41
+ which give coarse image labels and allow steering of the dataset composition
42
+ without labeling individual instances.
43
+ """
44
+
45
+ _HOMEPAGE = "https://redcaps.xyz/"
46
+
47
+ _LICENSE = "CC BY 4.0"
48
+
49
+ _URL = "https://www.dropbox.com/s/cqtdpsl4hewlli1/redcaps_v1.0_annotations.zip?dl=1"
50
+
51
+ _SUBREDDITS_WITH_YEAR = """\
52
+ abandonedporn_2017 abandonedporn_2018 abandonedporn_2019 abandonedporn_2020 abandoned_2017 abandoned_2018
53
+ abandoned_2019 abandoned_2020 absoluteunits_2018 absoluteunits_2019 absoluteunits_2020 airplants_2017 airplants_2018
54
+ airplants_2019 airplants_2020 alltheanimals_2019 alltheanimals_2020 amateurphotography_2017 amateurphotography_2018
55
+ amateurphotography_2019 amateurphotography_2020 amateurroomporn_2017 amateurroomporn_2018 amateurroomporn_2019
56
+ amateurroomporn_2020 animalporn_2017 animalporn_2018 animalporn_2019 animalporn_2020 antiques_2017 antiques_2018
57
+ antiques_2019 antiques_2020 antkeeping_2017 antkeeping_2018 antkeeping_2019 antkeeping_2020 ants_2017 ants_2018
58
+ ants_2019 ants_2020 aquariums_2017 aquariums_2018 aquariums_2019 aquariums_2020 architectureporn_2017
59
+ architectureporn_2018 architectureporn_2019 architectureporn_2020 artefactporn_2017 artefactporn_2018 artefactporn_2019
60
+ artefactporn_2020 astronomy_2017 astronomy_2018 astronomy_2019 astronomy_2020 astrophotography_2017
61
+ astrophotography_2018 astrophotography_2019 astrophotography_2020 australiancattledog_2017 australiancattledog_2018
62
+ australiancattledog_2019 australiancattledog_2020 australianshepherd_2017 australianshepherd_2018
63
+ australianshepherd_2019 australianshepherd_2020 autumnporn_2017 autumnporn_2018 autumnporn_2019 autumnporn_2020
64
+ averagebattlestations_2017 averagebattlestations_2018 averagebattlestations_2019 averagebattlestations_2020
65
+ awwducational_2017 awwducational_2018 awwducational_2019 awwducational_2020 awwnverts_2017 awwnverts_2018
66
+ awwnverts_2019 awwnverts_2020 axolotls_2017 axolotls_2018 axolotls_2019 axolotls_2020 backpacking_2017 backpacking_2018
67
+ backpacking_2019 backpacking_2020 backyardchickens_2017 backyardchickens_2018 backyardchickens_2019
68
+ backyardchickens_2020 baking_2017 baking_2018 baking_2019 baking_2020 ballpython_2017 ballpython_2018 ballpython_2019
69
+ ballpython_2020 barista_2017 barista_2018 barista_2019 barista_2020 bassfishing_2017 bassfishing_2018 bassfishing_2019
70
+ bassfishing_2020 battlestations_2017 battlestations_2018 battlestations_2019 battlestations_2020 bbq_2017 bbq_2018
71
+ bbq_2019 bbq_2020 beagle_2017 beagle_2018 beagle_2019 beagle_2020 beardeddragons_2017 beardeddragons_2018
72
+ beardeddragons_2019 beardeddragons_2020 beekeeping_2017 beekeeping_2018 beekeeping_2019 beekeeping_2020
73
+ beerandpizza_2017 beerandpizza_2018 beerandpizza_2019 beerandpizza_2020 beerporn_2017 beerporn_2018 beerporn_2019
74
+ beerporn_2020 beerwithaview_2017 beerwithaview_2018 beerwithaview_2019 beerwithaview_2020 beginnerwoodworking_2017
75
+ beginnerwoodworking_2018 beginnerwoodworking_2019 beginnerwoodworking_2020 bengalcats_2017 bengalcats_2018
76
+ bengalcats_2019 bengalcats_2020 bento_2017 bento_2018 bento_2019 bento_2020 bernesemountaindogs_2017
77
+ bernesemountaindogs_2018 bernesemountaindogs_2019 bernesemountaindogs_2020 berries_2017 berries_2018 berries_2019
78
+ berries_2020 bettafish_2017 bettafish_2018 bettafish_2019 bettafish_2020 bicycling_2017 bicycling_2018 bicycling_2019
79
+ bicycling_2020 bikecommuting_2017 bikecommuting_2018 bikecommuting_2019 bikecommuting_2020 birding_2017 birding_2018
80
+ birding_2019 birding_2020 birdphotography_2017 birdphotography_2018 birdphotography_2019 birdphotography_2020
81
+ birdpics_2017 birdpics_2018 birdpics_2019 birdpics_2020 birdsofprey_2017 birdsofprey_2018 birdsofprey_2019
82
+ birdsofprey_2020 birds_2019 birds_2020 blackcats_2017 blackcats_2018 blackcats_2019 blackcats_2020 blacksmith_2017
83
+ blacksmith_2018 blacksmith_2019 blacksmith_2020 bladesmith_2017 bladesmith_2018 bladesmith_2019 bladesmith_2020
84
+ boatporn_2017 boatporn_2018 boatporn_2019 boatporn_2020 bonsai_2017 bonsai_2018 bonsai_2019 bonsai_2020 bookporn_2017
85
+ bookporn_2018 bookporn_2019 bookporn_2020 bookshelf_2017 bookshelf_2018 bookshelf_2019 bookshelf_2020 bordercollie_2017
86
+ bordercollie_2018 bordercollie_2019 bordercollie_2020 bostonterrier_2017 bostonterrier_2018 bostonterrier_2019
87
+ bostonterrier_2020 botanicalporn_2017 botanicalporn_2018 botanicalporn_2019 botanicalporn_2020 breadit_2017
88
+ breadit_2018 breadit_2019 breadit_2020 breakfastfood_2017 breakfastfood_2018 breakfastfood_2019 breakfastfood_2020
89
+ breakfast_2017 breakfast_2018 breakfast_2019 breakfast_2020 bridgeporn_2017 bridgeporn_2018 bridgeporn_2019
90
+ bridgeporn_2020 brochet_2017 brochet_2018 brochet_2019 brochet_2020 budgetfood_2017 budgetfood_2018 budgetfood_2019
91
+ budgetfood_2020 budgies_2017 budgies_2018 budgies_2019 budgies_2020 bulldogs_2017 bulldogs_2018 bulldogs_2019
92
+ bulldogs_2020 burgers_2017 burgers_2018 burgers_2019 burgers_2020 butterflies_2017 butterflies_2018 butterflies_2019
93
+ butterflies_2020 cabinporn_2017 cabinporn_2018 cabinporn_2019 cabinporn_2020 cactus_2017 cactus_2018 cactus_2019
94
+ cactus_2020 cakedecorating_2017 cakedecorating_2018 cakedecorating_2019 cakedecorating_2020 cakewin_2017 cakewin_2018
95
+ cakewin_2019 cakewin_2020 cameras_2017 cameras_2018 cameras_2019 cameras_2020 campingandhiking_2017
96
+ campingandhiking_2018 campingandhiking_2019 campingandhiking_2020 camping_2017 camping_2018 camping_2019 camping_2020
97
+ carnivorousplants_2017 carnivorousplants_2018 carnivorousplants_2019 carnivorousplants_2020 carpentry_2017
98
+ carpentry_2018 carpentry_2019 carpentry_2020 carporn_2017 carporn_2018 carporn_2019 carporn_2020 cassetteculture_2017
99
+ cassetteculture_2018 cassetteculture_2019 cassetteculture_2020 castiron_2017 castiron_2018 castiron_2019 castiron_2020
100
+ castles_2017 castles_2018 castles_2019 castles_2020 casualknitting_2017 casualknitting_2018 casualknitting_2019
101
+ casualknitting_2020 catpictures_2017 catpictures_2018 catpictures_2019 catpictures_2020 cats_2017 cats_2018 cats_2019
102
+ cats_2020 ceramics_2017 ceramics_2018 ceramics_2019 ceramics_2020 chameleons_2017 chameleons_2018 chameleons_2019
103
+ chameleons_2020 charcuterie_2017 charcuterie_2018 charcuterie_2019 charcuterie_2020 cheesemaking_2017 cheesemaking_2018
104
+ cheesemaking_2019 cheesemaking_2020 cheese_2017 cheese_2018 cheese_2019 cheese_2020 chefit_2017 chefit_2018 chefit_2019
105
+ chefit_2020 chefknives_2017 chefknives_2018 chefknives_2019 chefknives_2020 chickens_2017 chickens_2018 chickens_2019
106
+ chickens_2020 chihuahua_2017 chihuahua_2018 chihuahua_2019 chihuahua_2020 chinchilla_2017 chinchilla_2018
107
+ chinchilla_2019 chinchilla_2020 chinesefood_2017 chinesefood_2018 chinesefood_2019 chinesefood_2020 churchporn_2017
108
+ churchporn_2018 churchporn_2019 churchporn_2020 cider_2017 cider_2018 cider_2019 cider_2020 cityporn_2017 cityporn_2018
109
+ cityporn_2019 cityporn_2020 classiccars_2017 classiccars_2018 classiccars_2019 classiccars_2020 cockatiel_2017
110
+ cockatiel_2018 cockatiel_2019 cockatiel_2020 cocktails_2017 cocktails_2018 cocktails_2019 cocktails_2020
111
+ coffeestations_2017 coffeestations_2018 coffeestations_2019 coffeestations_2020 coins_2017 coins_2018 coins_2019
112
+ coins_2020 cookiedecorating_2017 cookiedecorating_2018 cookiedecorating_2019 cookiedecorating_2020 corgi_2017
113
+ corgi_2018 corgi_2019 corgi_2020 cornsnakes_2017 cornsnakes_2018 cornsnakes_2019 cornsnakes_2020 cozyplaces_2017
114
+ cozyplaces_2018 cozyplaces_2019 cozyplaces_2020 crafts_2017 crafts_2018 crafts_2019 crafts_2020 crestedgecko_2017
115
+ crestedgecko_2018 crestedgecko_2019 crestedgecko_2020 crochet_2017 crochet_2018 crochet_2019 crochet_2020
116
+ crossstitch_2017 crossstitch_2018 crossstitch_2019 crossstitch_2020 crows_2017 crows_2018 crows_2019 crows_2020
117
+ crystals_2017 crystals_2018 crystals_2019 crystals_2020 cupcakes_2017 cupcakes_2018 cupcakes_2019 cupcakes_2020
118
+ dachshund_2017 dachshund_2018 dachshund_2019 dachshund_2020 damnthatsinteresting_2017 damnthatsinteresting_2018
119
+ damnthatsinteresting_2019 damnthatsinteresting_2020 desertporn_2017 desertporn_2018 desertporn_2019 desertporn_2020
120
+ designmyroom_2017 designmyroom_2018 designmyroom_2019 designmyroom_2020 desksetup_2017 desksetup_2018 desksetup_2019
121
+ desksetup_2020 dessertporn_2017 dessertporn_2018 dessertporn_2019 dessertporn_2020 dessert_2017 dessert_2018
122
+ dessert_2019 dessert_2020 diy_2017 diy_2018 diy_2019 diy_2020 dobermanpinscher_2017 dobermanpinscher_2018
123
+ dobermanpinscher_2019 dobermanpinscher_2020 doggos_2017 doggos_2018 doggos_2019 doggos_2020 dogpictures_2017
124
+ dogpictures_2018 dogpictures_2019 dogpictures_2020 drunkencookery_2017 drunkencookery_2018 drunkencookery_2019
125
+ drunkencookery_2020 duck_2017 duck_2018 duck_2019 duck_2020 dumpsterdiving_2017 dumpsterdiving_2018 dumpsterdiving_2019
126
+ dumpsterdiving_2020 earthporn_2017 earthporn_2018 earthporn_2019 earthporn_2020 eatsandwiches_2017 eatsandwiches_2018
127
+ eatsandwiches_2019 eatsandwiches_2020 embroidery_2017 embroidery_2018 embroidery_2019 embroidery_2020 entomology_2017
128
+ entomology_2018 entomology_2019 entomology_2020 equestrian_2017 equestrian_2018 equestrian_2019 equestrian_2020
129
+ espresso_2017 espresso_2018 espresso_2019 espresso_2020 exposureporn_2017 exposureporn_2018 exposureporn_2019
130
+ exposureporn_2020 eyebleach_2017 eyebleach_2018 eyebleach_2019 eyebleach_2020 f1porn_2017 f1porn_2018 f1porn_2019
131
+ f1porn_2020 farming_2017 farming_2018 farming_2019 farming_2020 femalelivingspace_2017 femalelivingspace_2018
132
+ femalelivingspace_2019 femalelivingspace_2020 fermentation_2017 fermentation_2018 fermentation_2019 fermentation_2020
133
+ ferrets_2017 ferrets_2018 ferrets_2019 ferrets_2020 fireporn_2017 fireporn_2018 fireporn_2019 fireporn_2020
134
+ fishing_2017 fishing_2018 fishing_2019 fishing_2020 fish_2017 fish_2018 fish_2019 fish_2020 flowers_2017 flowers_2018
135
+ flowers_2019 flowers_2020 flyfishing_2017 flyfishing_2018 flyfishing_2019 flyfishing_2020 foodporn_2017 foodporn_2018
136
+ foodporn_2019 foodporn_2020 food_2017 food_2018 food_2019 food_2020 foraging_2017 foraging_2018 foraging_2019
137
+ foraging_2020 fossilporn_2017 fossilporn_2018 fossilporn_2019 fossilporn_2020 fountainpens_2017 fountainpens_2018
138
+ fountainpens_2019 fountainpens_2020 foxes_2017 foxes_2018 foxes_2019 foxes_2020 frenchbulldogs_2017 frenchbulldogs_2018
139
+ frenchbulldogs_2019 frenchbulldogs_2020 frogs_2017 frogs_2018 frogs_2019 frogs_2020 gardening_2017 gardening_2018
140
+ gardening_2019 gardening_2020 gardenwild_2017 gardenwild_2018 gardenwild_2019 gardenwild_2020 geckos_2017 geckos_2018
141
+ geckos_2019 geckos_2020 gemstones_2017 gemstones_2018 gemstones_2019 gemstones_2020 geologyporn_2017 geologyporn_2018
142
+ geologyporn_2019 geologyporn_2020 germanshepherds_2017 germanshepherds_2018 germanshepherds_2019 germanshepherds_2020
143
+ glutenfree_2017 glutenfree_2018 glutenfree_2019 glutenfree_2020 goldenretrievers_2017 goldenretrievers_2018
144
+ goldenretrievers_2019 goldenretrievers_2020 goldfish_2017 goldfish_2018 goldfish_2019 goldfish_2020 gold_2017 gold_2018
145
+ gold_2019 gold_2020 greatpyrenees_2017 greatpyrenees_2018 greatpyrenees_2019 greatpyrenees_2020 grilledcheese_2017
146
+ grilledcheese_2018 grilledcheese_2019 grilledcheese_2020 grilling_2017 grilling_2018 grilling_2019 grilling_2020
147
+ guineapigs_2017 guineapigs_2018 guineapigs_2019 guineapigs_2020 gunporn_2017 gunporn_2018 gunporn_2019 gunporn_2020
148
+ guns_2017 guns_2018 guns_2019 guns_2020 hamsters_2017 hamsters_2018 hamsters_2019 hamsters_2020 handtools_2017
149
+ handtools_2018 handtools_2019 handtools_2020 healthyfood_2017 healthyfood_2018 healthyfood_2019 healthyfood_2020
150
+ hedgehog_2017 hedgehog_2018 hedgehog_2019 hedgehog_2020 helicopters_2017 helicopters_2018 helicopters_2019
151
+ helicopters_2020 herpetology_2017 herpetology_2018 herpetology_2019 herpetology_2020 hiking_2017 hiking_2018
152
+ hiking_2019 hiking_2020 homestead_2017 homestead_2018 homestead_2019 homestead_2020 horses_2017 horses_2018 horses_2019
153
+ horses_2020 hotpeppers_2017 hotpeppers_2018 hotpeppers_2019 hotpeppers_2020 houseplants_2017 houseplants_2018
154
+ houseplants_2019 houseplants_2020 houseporn_2017 houseporn_2018 houseporn_2019 houseporn_2020 husky_2017 husky_2018
155
+ husky_2019 husky_2020 icecreamery_2017 icecreamery_2018 icecreamery_2019 icecreamery_2020 indoorgarden_2017
156
+ indoorgarden_2018 indoorgarden_2019 indoorgarden_2020 infrastructureporn_2017 infrastructureporn_2018
157
+ infrastructureporn_2019 infrastructureporn_2020 insects_2017 insects_2018 insects_2019 insects_2020 instantpot_2017
158
+ instantpot_2018 instantpot_2019 instantpot_2020 interestingasfuck_2017 interestingasfuck_2018 interestingasfuck_2019
159
+ interestingasfuck_2020 interiordesign_2017 interiordesign_2018 interiordesign_2019 interiordesign_2020
160
+ itookapicture_2017 itookapicture_2018 itookapicture_2019 itookapicture_2020 jellyfish_2017 jellyfish_2018
161
+ jellyfish_2019 jellyfish_2020 jewelry_2017 jewelry_2018 jewelry_2019 jewelry_2020 kayakfishing_2017 kayakfishing_2018
162
+ kayakfishing_2019 kayakfishing_2020 kayaking_2017 kayaking_2018 kayaking_2019 kayaking_2020 ketorecipes_2017
163
+ ketorecipes_2018 ketorecipes_2019 ketorecipes_2020 knifeporn_2017 knifeporn_2018 knifeporn_2019 knifeporn_2020
164
+ knives_2017 knives_2018 knives_2019 knives_2020 labrador_2017 labrador_2018 labrador_2019 labrador_2020
165
+ leathercraft_2017 leathercraft_2018 leathercraft_2019 leathercraft_2020 leopardgeckos_2017 leopardgeckos_2018
166
+ leopardgeckos_2019 leopardgeckos_2020 lizards_2017 lizards_2018 lizards_2019 lizards_2020 lookatmydog_2017
167
+ lookatmydog_2018 lookatmydog_2019 lookatmydog_2020 macarons_2017 macarons_2018 macarons_2019 macarons_2020
168
+ machineporn_2017 machineporn_2018 machineporn_2019 machineporn_2020 macroporn_2017 macroporn_2018 macroporn_2019
169
+ macroporn_2020 malelivingspace_2017 malelivingspace_2018 malelivingspace_2019 malelivingspace_2020 mead_2017 mead_2018
170
+ mead_2019 mead_2020 mealprepsunday_2017 mealprepsunday_2018 mealprepsunday_2019 mealprepsunday_2020
171
+ mechanicalkeyboards_2017 mechanicalkeyboards_2018 mechanicalkeyboards_2019 mechanicalkeyboards_2020
172
+ mechanicalpencils_2017 mechanicalpencils_2018 mechanicalpencils_2019 mechanicalpencils_2020 melts_2017 melts_2018
173
+ melts_2019 melts_2020 metalworking_2017 metalworking_2018 metalworking_2019 metalworking_2020 microgreens_2017
174
+ microgreens_2018 microgreens_2019 microgreens_2020 microporn_2017 microporn_2018 microporn_2019 microporn_2020
175
+ mildlyinteresting_2017 mildlyinteresting_2018 mildlyinteresting_2019 mildlyinteresting_2020 mineralporn_2017
176
+ mineralporn_2018 mineralporn_2019 mineralporn_2020 monitors_2017 monitors_2018 monitors_2019 monitors_2020
177
+ monstera_2018 monstera_2019 monstera_2020 mostbeautiful_2017 mostbeautiful_2018 mostbeautiful_2019 mostbeautiful_2020
178
+ motorcycleporn_2017 motorcycleporn_2018 motorcycleporn_2019 motorcycleporn_2020 muglife_2017 muglife_2018 muglife_2019
179
+ muglife_2020 mushroomgrowers_2017 mushroomgrowers_2018 mushroomgrowers_2019 mushroomgrowers_2020 mushroomporn_2017
180
+ mushroomporn_2018 mushroomporn_2019 mushroomporn_2020 mushrooms_2017 mushrooms_2018 mushrooms_2019 mushrooms_2020
181
+ mycology_2017 mycology_2018 mycology_2019 mycology_2020 natureisfuckinglit_2017 natureisfuckinglit_2018
182
+ natureisfuckinglit_2019 natureisfuckinglit_2020 natureporn_2017 natureporn_2018 natureporn_2019 natureporn_2020
183
+ nebelung_2017 nebelung_2018 nebelung_2019 nebelung_2020 orchids_2017 orchids_2018 orchids_2019 orchids_2020 otters_2017
184
+ otters_2018 otters_2019 otters_2020 outdoors_2017 outdoors_2018 outdoors_2019 outdoors_2020 owls_2017 owls_2018
185
+ owls_2019 owls_2020 parrots_2017 parrots_2018 parrots_2019 parrots_2020 pelletgrills_2017 pelletgrills_2018
186
+ pelletgrills_2019 pelletgrills_2020 pens_2017 pens_2018 pens_2019 pens_2020 perfectfit_2017 perfectfit_2018
187
+ perfectfit_2019 perfectfit_2020 permaculture_2017 permaculture_2018 permaculture_2019 permaculture_2020
188
+ photocritique_2017 photocritique_2018 photocritique_2019 photocritique_2020 photographs_2017 photographs_2018
189
+ photographs_2019 photographs_2020 pics_2017 pics_2018 pics_2019 pics_2020 pitbulls_2017 pitbulls_2018 pitbulls_2019
190
+ pitbulls_2020 pizza_2017 pizza_2018 pizza_2019 pizza_2020 plantbaseddiet_2017 plantbaseddiet_2018 plantbaseddiet_2019
191
+ plantbaseddiet_2020 plantedtank_2017 plantedtank_2018 plantedtank_2019 plantedtank_2020 plantsandpots_2019
192
+ plantsandpots_2020 plants_2017 plants_2018 plants_2019 plants_2020 pomeranians_2017 pomeranians_2018 pomeranians_2019
193
+ pomeranians_2020 pottery_2017 pottery_2018 pottery_2019 pottery_2020 pourpainting_2017 pourpainting_2018
194
+ pourpainting_2019 pourpainting_2020 proplifting_2017 proplifting_2018 proplifting_2019 proplifting_2020 pugs_2017
195
+ pugs_2018 pugs_2019 pugs_2020 pug_2017 pug_2018 pug_2019 pug_2020 quilting_2017 quilting_2018 quilting_2019
196
+ quilting_2020 rabbits_2017 rabbits_2018 rabbits_2019 rabbits_2020 ramen_2017 ramen_2018 ramen_2019 ramen_2020
197
+ rarepuppers_2017 rarepuppers_2018 rarepuppers_2019 rarepuppers_2020 reeftank_2017 reeftank_2018 reeftank_2019
198
+ reeftank_2020 reptiles_2017 reptiles_2018 reptiles_2019 reptiles_2020 resincasting_2017 resincasting_2018
199
+ resincasting_2019 resincasting_2020 roomporn_2017 roomporn_2018 roomporn_2019 roomporn_2020 roses_2017 roses_2018
200
+ roses_2019 roses_2020 rottweiler_2017 rottweiler_2018 rottweiler_2019 rottweiler_2020 ruralporn_2017 ruralporn_2018
201
+ ruralporn_2019 ruralporn_2020 sailing_2017 sailing_2018 sailing_2019 sailing_2020 salsasnobs_2018 salsasnobs_2019
202
+ salsasnobs_2020 samoyeds_2017 samoyeds_2018 samoyeds_2019 samoyeds_2020 savagegarden_2017 savagegarden_2018
203
+ savagegarden_2019 savagegarden_2020 scotch_2017 scotch_2018 scotch_2019 scotch_2020 seaporn_2017 seaporn_2018
204
+ seaporn_2019 seaporn_2020 seriouseats_2017 seriouseats_2018 seriouseats_2019 seriouseats_2020 sewing_2017 sewing_2018
205
+ sewing_2019 sewing_2020 sharks_2017 sharks_2018 sharks_2019 sharks_2020 shiba_2017 shiba_2018 shiba_2019 shiba_2020
206
+ shihtzu_2017 shihtzu_2018 shihtzu_2019 shihtzu_2020 shrimptank_2017 shrimptank_2018 shrimptank_2019 shrimptank_2020
207
+ siamesecats_2017 siamesecats_2018 siamesecats_2019 siamesecats_2020 siberiancats_2017 siberiancats_2018
208
+ siberiancats_2019 siberiancats_2020 silverbugs_2017 silverbugs_2018 silverbugs_2019 silverbugs_2020 skyporn_2017
209
+ skyporn_2018 skyporn_2019 skyporn_2020 sloths_2017 sloths_2018 sloths_2019 sloths_2020 smoking_2017 smoking_2018
210
+ smoking_2019 smoking_2020 snails_2017 snails_2018 snails_2019 snails_2020 snakes_2017 snakes_2018 snakes_2019
211
+ snakes_2020 sneakers_2017 sneakers_2018 sneakers_2019 sneakers_2020 sneks_2017 sneks_2018 sneks_2019 sneks_2020
212
+ somethingimade_2017 somethingimade_2018 somethingimade_2019 somethingimade_2020 soup_2017 soup_2018 soup_2019 soup_2020
213
+ sourdough_2017 sourdough_2018 sourdough_2019 sourdough_2020 sousvide_2017 sousvide_2018 sousvide_2019 sousvide_2020
214
+ spaceporn_2017 spaceporn_2018 spaceporn_2019 spaceporn_2020 spicy_2017 spicy_2018 spicy_2019 spicy_2020 spiderbro_2017
215
+ spiderbro_2018 spiderbro_2019 spiderbro_2020 spiders_2017 spiders_2018 spiders_2019 spiders_2020 squirrels_2017
216
+ squirrels_2018 squirrels_2019 squirrels_2020 steak_2017 steak_2018 steak_2019 steak_2020 streetphotography_2017
217
+ streetphotography_2018 streetphotography_2019 streetphotography_2020 succulents_2017 succulents_2018 succulents_2019
218
+ succulents_2020 superbowl_2017 superbowl_2018 superbowl_2019 superbowl_2020 supermodelcats_2017 supermodelcats_2018
219
+ supermodelcats_2019 supermodelcats_2020 sushi_2017 sushi_2018 sushi_2019 sushi_2020 tacos_2017 tacos_2018 tacos_2019
220
+ tacos_2020 tarantulas_2017 tarantulas_2018 tarantulas_2019 tarantulas_2020 tastyfood_2017 tastyfood_2018 tastyfood_2019
221
+ tastyfood_2020 teaporn_2017 teaporn_2018 teaporn_2019 teaporn_2020 tea_2017 tea_2018 tea_2019 tea_2020 tequila_2017
222
+ tequila_2018 tequila_2019 tequila_2020 terrariums_2017 terrariums_2018 terrariums_2019 terrariums_2020
223
+ thedepthsbelow_2017 thedepthsbelow_2018 thedepthsbelow_2019 thedepthsbelow_2020 thriftstorehauls_2017
224
+ thriftstorehauls_2018 thriftstorehauls_2019 thriftstorehauls_2020 tinyanimalsonfingers_2017 tinyanimalsonfingers_2018
225
+ tinyanimalsonfingers_2019 tinyanimalsonfingers_2020 tonightsdinner_2017 tonightsdinner_2018 tonightsdinner_2019
226
+ tonightsdinner_2020 toolporn_2017 toolporn_2018 toolporn_2019 toolporn_2020 tools_2017 tools_2018 tools_2019 tools_2020
227
+ torties_2017 torties_2018 torties_2019 torties_2020 tortoise_2017 tortoise_2018 tortoise_2019 tortoise_2020
228
+ tractors_2017 tractors_2018 tractors_2019 tractors_2020 trailrunning_2017 trailrunning_2018 trailrunning_2019
229
+ trailrunning_2020 trains_2017 trains_2018 trains_2019 trains_2020 trucks_2017 trucks_2018 trucks_2019 trucks_2020
230
+ turtle_2017 turtle_2018 turtle_2019 turtle_2020 underwaterphotography_2017 underwaterphotography_2018
231
+ underwaterphotography_2019 underwaterphotography_2020 upcycling_2017 upcycling_2018 upcycling_2019 upcycling_2020
232
+ urbanexploration_2017 urbanexploration_2018 urbanexploration_2019 urbanexploration_2020 urbanhell_2017 urbanhell_2018
233
+ urbanhell_2019 urbanhell_2020 veganfoodporn_2017 veganfoodporn_2018 veganfoodporn_2019 veganfoodporn_2020
234
+ veganrecipes_2017 veganrecipes_2018 veganrecipes_2019 veganrecipes_2020 vegetablegardening_2017 vegetablegardening_2018
235
+ vegetablegardening_2019 vegetablegardening_2020 vegetarian_2017 vegetarian_2018 vegetarian_2019 vegetarian_2020
236
+ villageporn_2017 villageporn_2018 villageporn_2019 villageporn_2020 vintageaudio_2017 vintageaudio_2018
237
+ vintageaudio_2019 vintageaudio_2020 vintage_2017 vintage_2018 vintage_2019 vintage_2020 vinyl_2017 vinyl_2018
238
+ vinyl_2019 vinyl_2020 volumeeating_2017 volumeeating_2018 volumeeating_2019 volumeeating_2020 watches_2017 watches_2018
239
+ watches_2019 watches_2020 waterporn_2017 waterporn_2018 waterporn_2019 waterporn_2020 weatherporn_2017 weatherporn_2018
240
+ weatherporn_2019 weatherporn_2020 wewantplates_2017 wewantplates_2018 wewantplates_2019 wewantplates_2020
241
+ wildernessbackpacking_2017 wildernessbackpacking_2018 wildernessbackpacking_2019 wildernessbackpacking_2020
242
+ wildlifephotography_2017 wildlifephotography_2018 wildlifephotography_2019 wildlifephotography_2020 wine_2017 wine_2018
243
+ wine_2019 wine_2020 winterporn_2017 winterporn_2018 winterporn_2019 winterporn_2020 woodcarving_2017 woodcarving_2018
244
+ woodcarving_2019 woodcarving_2020 woodworking_2017 woodworking_2018 woodworking_2019 woodworking_2020 workbenches_2017
245
+ workbenches_2018 workbenches_2019 workbenches_2020 workspaces_2017 workspaces_2018 workspaces_2019 workspaces_2020
246
+ yarnaddicts_2017 yarnaddicts_2018 yarnaddicts_2019 yarnaddicts_2020 zerowaste_2017 zerowaste_2018 zerowaste_2019
247
+ zerowaste_2020
248
+ """
249
+ _SUBREDDITS_WITH_YEAR = _SUBREDDITS_WITH_YEAR.strip().split()
250
+
251
+ _SUBREDDIT_TO_YEAR = collections.defaultdict(list)
252
+ for subreddit_with_year in _SUBREDDITS_WITH_YEAR:
253
+ subreddit, year = subreddit_with_year.split("_")
254
+ _SUBREDDIT_TO_YEAR[subreddit].append(year)
255
+
256
+ _SUBREDDITS = list(_SUBREDDIT_TO_YEAR.keys())
257
+
258
+
259
+ def _config_name_to_subreddits_with_year(config_name):
260
+ if config_name == "all":
261
+ return _SUBREDDITS_WITH_YEAR
262
+ elif re.match(r".*_\d{4}$", config_name):
263
+ return [config_name]
264
+ else:
265
+ return [f"{config_name}_{year}" for year in _SUBREDDIT_TO_YEAR[config_name]]
266
+
267
+
268
+ def _config_name_to_description(config_name):
269
+ if config_name == "all":
270
+ return "Contains data from all the subreddits"
271
+ else:
272
+ if re.match(r".*_\d{4}$", config_name):
273
+ subreddit, year = config_name.split("_")
274
+ year_str = "2008 - 2017" if year == "2017" else year
275
+ else:
276
+ subreddit = config_name
277
+ year_str = ", ".join(
278
+ ["2008 - 2017" if year == "2017" else year for year in _SUBREDDIT_TO_YEAR[config_name]]
279
+ )
280
+ return f"Contains data from the {subreddit} subreddit posted in {year_str}"
281
+
282
+
283
+ class RedCapsConfig(datasets.BuilderConfig):
284
+ """BuilderConfig for RedCaps."""
285
+
286
+ def __init__(self, name, **kwargs):
287
+ """BuilderConfig for RedCaps.
288
+
289
+ Args:
290
+ **kwargs: keyword arguments forwarded to super.
291
+ """
292
+ assert "description" not in kwargs
293
+ kwargs["description"] = _config_name_to_description(name)
294
+ super(RedCapsConfig, self).__init__(version=datasets.Version("1.0.0", ""), name=name, **kwargs)
295
+
296
+
297
+ class RedCaps(datasets.GeneratorBasedBuilder):
298
+ """RedCaps dataset."""
299
+
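+ # Builder configs come in three flavors: "all", one per subreddit (covering all of its years),
+ # and one per <subreddit>_<year> pair.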
300
+ BUILDER_CONFIGS = [
301
+ RedCapsConfig("all"),
302
+ ]
303
+ BUILDER_CONFIGS += [RedCapsConfig(subreddit) for subreddit in _SUBREDDITS]
304
+ BUILDER_CONFIGS += [RedCapsConfig(subreddit_with_year) for subreddit_with_year in _SUBREDDITS_WITH_YEAR]
305
+
306
+ DEFAULT_CONFIG_NAME = "all"
307
+
308
+ def _info(self):
309
+ features = datasets.Features(
310
+ {
311
+ "image_id": datasets.Value("string"),
312
+ "author": datasets.Value("string"),
313
+ "image_url": datasets.Value("string"),
314
+ "raw_caption": datasets.Value("string"),
315
+ "caption": datasets.Value("string"),
316
+ "subreddit": datasets.ClassLabel(names=_SUBREDDITS),
317
+ "score": datasets.Value("int32"),
318
+ "created_utc": datasets.Value("timestamp[s, tz=UTC]"),
319
+ "permalink": datasets.Value("string"),
320
+ "crosspost_parents": datasets.Sequence(datasets.Value("string")),
321
+ }
322
+ )
323
+
324
+ return datasets.DatasetInfo(
325
+ description=_DESCRIPTION,
326
+ features=features,
327
+ supervised_keys=None,
328
+ homepage=_HOMEPAGE,
329
+ license=_LICENSE,
330
+ citation=_CITATION,
331
+ )
332
+
333
+ def _split_generators(self, dl_manager):
334
+ annotations_dir = dl_manager.download_and_extract(_URL)
335
+ return [
336
+ datasets.SplitGenerator(
337
+ name=datasets.Split.TRAIN,
338
+ gen_kwargs={
339
+ "annotations_dir": annotations_dir,
340
+ "subreddits": _config_name_to_subreddits_with_year(self.config.name),
341
+ },
342
+ ),
343
+ ]
344
+
345
+ def _generate_examples(self, annotations_dir, subreddits):
346
+ annotations_dir = os.path.join(annotations_dir, "annotations")
347
+ idx = 0
348
+ for subreddit in subreddits:
349
+ subreddit_file = os.path.join(annotations_dir, subreddit + ".json")
350
+ with open(subreddit_file, encoding="utf-8") as f:
351
+ data = json.load(f)
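+ # Each <subreddit>_<year> annotation file holds an "annotations" list with one entry per image post.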
352
+ for annot in data["annotations"]:
353
+ yield idx, {
354
+ "image_id": annot["image_id"],
355
+ "author": annot["author"],
356
+ "image_url": annot["url"],
357
+ "raw_caption": annot["raw_caption"],
358
+ "caption": annot["caption"],
359
+ "subreddit": annot["subreddit"],
360
+ "score": annot["score"] if "score" in annot else None,
361
+ "created_utc": annot["created_utc"],
362
+ "permalink": annot["permalink"],
363
+ "crosspost_parents": annot["crosspost_parents"]
364
+ if "crosspost_parents" in annot and annot["crosspost_parents"]
365
+ else None,
366
+ }
367
+ idx += 1