system (HF staff) committed
Commit ae55623
0 Parent(s):

Update files from the datasets library (from 1.18.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +236 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. pass.py +105 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,236 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - machine-generated
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - extended|yfcc100m
+ task_categories:
+ - other
+ task_ids:
+ - other-image-pretraining
+ paperswithcode_id: pass
+ pretty_name: Pictures without humAns for Self-Supervision
+ ---
+
+ # Dataset Card for PASS
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [PASS homepage](https://www.robots.ox.ac.uk/~vgg/research/pass/)
+ - **Repository:** [PASS repository](https://github.com/yukimasano/PASS)
+ - **Paper:** [PASS: An ImageNet replacement for self-supervised pretraining without humans](https://arxiv.org/abs/2109.13228)
+ - **Leaderboard:** [Pretrained models with scores](https://github.com/yukimasano/PASS#pretrained-models)
+ - **Point of Contact:** [Yuki M. Asano](mailto:yukiATMARKrobots.ox.ac.uk)
+
+ ### Dataset Summary
+
+ PASS is a large-scale image dataset of 1.4 million images that does not include any humans and can be used for high-quality pretraining while significantly reducing privacy concerns.
+
+ ### Supported Tasks and Leaderboards
+
+ From the paper:
+
+ > **Has the dataset been used for any tasks already?** In the paper we show and benchmark the
+ intended use of this dataset as a pretraining dataset. For this the dataset is used as an unlabelled image collection on which visual features are learned and then transferred to downstream tasks. We show that with this dataset it is possible to learn competitive visual features, without any humans in the pretraining dataset and with complete license information.
+
+ > **Is there a repository that links to any or all papers or systems that use the dataset?** We will
+ be listing these at the repository.
+
+ > **What (other) tasks could the dataset be used for?** We believe this dataset might allow researchers and practitioners to further evaluate the differences that pretraining datasets can have on the learned features. Furthermore, since the meta-data is available for the images, it is possible to investigate the effect of image resolution on self-supervised learning methods, a domain largely under-researched thus far, as the current de-facto standard, ImageNet, only comes in one size.
+
+ > **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** Given that this dataset is a subset of a dataset that randomly samples images from Flickr, the image distribution is biased towards European and American creators. As in the main paper's discussion, this can lead to non-generalizable features, or even biased features, as the images taken in other countries might be more likely to further reflect and propagate stereotypes [84], though in our case these do not refer to stereotypes about humans.
+
+ > **Are there tasks for which the dataset should not be used?** This dataset is meant for research
+ purposes only. The dataset should also not be used for, e.g., connecting images and usernames, as
+ this might risk de-anonymising the dataset in the long term. The usernames are solely provided for
+ attribution.
+
+ ### Languages
+
+ English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point comprises an image and its meta-data:
+
+ ```
+ {
+   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FFAD48E35F8>,
+   'creator_username': 'NTShieldsy',
+   'hash': 'e1662344ffa8c231d198c367c692cc',
+   'gps_latitude': 21.206675,
+   'gps_longitude': 39.166558,
+   'date_taken': datetime.datetime(2012, 8, 9, 18, 0, 20)
+ }
+ ```
+
+ ### Data Fields
+
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `creator_username`: The photographer.
+ - `hash`: The hash, as computed from YFCC-100M.
+ - `gps_latitude`: Latitude of image if available, otherwise None.
+ - `gps_longitude`: Longitude of image if available, otherwise None.
+ - `date_taken`: Datetime of image if available, otherwise None.
+
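+ A minimal loading sketch (assuming the `datasets` library is installed; note that the full download is roughly 180 GB):
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the metadata CSV and the ten image tar archives on first use.
+ dataset = load_dataset("pass", split="train")
+
+ # Index the sample first, then the "image" column, so only this one image is decoded.
+ sample = dataset[0]
+ print(sample["image"].size, sample["creator_username"])
+ ```
+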
+ ### Data Splits
+
+ All the data is contained in the training set, which has 1.4M (1,439,719) instances.
+
+ From the paper:
+
+ > **Are there recommended data splits (e.g., training, development/validation, testing)?** As outlined in the intended use cases, this dataset is meant for pretraining representations. As such, the models derived from training on this dataset need to be evaluated on different datasets, so-called downstream tasks. Thus the recommended split is to use all samples for training.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ From the paper:
+
+ > **For what purpose was the dataset created?** Neural networks pretrained on large image collections have been shown to transfer well to other visual tasks where there is little labelled data, i.e. transferring a model works better than starting with a randomly initialized network every time for a new task, as many visual features can be repurposed. The goal of this dataset is to provide a safer large-scale dataset for such pretraining of visual features. In particular, this dataset does not contain any humans or human parts and does not contain any labels. The first point is important, as the current standard for pretraining, ImageNet, and its face-blurred version only provide pseudo-anonymity and furthermore do not provide correct licences to the creators. The second point is relevant as pretraining is moving towards the self-supervised paradigm, where labels are not required. Yet most methods are developed on the highly curated ImageNet dataset, yielding potentially non-generalizable research.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ From the paper:
+
+ * **Collection process**:
+
+ > **How was the data associated with each instance acquired?** The data was collected from the
+ publicly available dataset YFCC-100M, which is hosted on the AWS public datasets platform. We used the meta-data, namely the copyright information, to filter only images with the CC-BY licence and downloaded these using the AWS command line interface, allowing for quick and stable downloading. In addition, all files were subsequently scanned for viruses using the Sophos SAVScan virus detection utility, v.5.74.0.
+
+ > **What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?** Our dataset is a subset
+ of the YFCC-100M dataset. The YFCC-100M dataset itself was created by effectively randomly
+ selecting publicly available images from Flickr, resulting in approximately 98M images.
+
+ > **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** The dataset is a sample of a larger set, namely all possible digital photographs. As outlined in Section 3, we start from an existing dataset, YFCC-100M, and stratify the images (removing images with people and personal information, removing images with harmful content, removing images with unsuitable licenses; each user contributes at most 80 images to the dataset). This leaves 1.6M images, out of which we take a random sample of 1.28M images to replicate the size of the ImageNet dataset. While this dataset can thus be extended, this is the set that we have verified to not contain humans, human parts and disturbing content.
+
+ > **Over what timeframe was the data collected?** The images underlying the dataset were downloaded between March and June 2021 from the AWS public datasets’ S3 bucket, following the
+ download code provided in the repo. However, the images contained were originally taken
+ anywhere from 2000 to 2015, with the majority being shot between 2010 and 2014.
+
+ * **Preprocessing/cleaning/labeling**:
+
+ > **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** After the download of approx. 17M images, the corrupted or single-color images were removed from the dataset prior to the generation of the dataset(s) used in the paper. The images were not further preprocessed or edited.
+
+ > **Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** Yes. The creators of the dataset maintain a copy of the 17M original images with the CC-BY licence of YFCC-100M that sits at the start of our dataset creation pipeline.
+
+ > **Is the software used to preprocess/clean/label the instances available?** We have only used basic Python primitives for this. For the annotations we have used VIA [27, 28].
+
+ #### Who are the source language producers?
+
+ From the paper:
+
+ > **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** As described, the data was collected automatically by simply downloading images from a publicly hosted S3 bucket. The human verification was done using a professional data annotation company that pays 150% of the local minimum wage.
+
+ ### Annotations
+
+ #### Annotation process
+
+ This dataset doesn't contain annotations.
+
+ #### Who are the annotators?
+
+ This dataset doesn't contain annotations.
+
+ ### Personal and Sensitive Information
+
+ From the paper:
+
+ > **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?** No.
+
+ > **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** No. Besides checking for human presence in the images, the annotators were also given the choice of flagging images for disturbing content, which, once flagged, was removed.
+
+ > **Does the dataset relate to people? If not, you may skip the remaining questions in this section.**
+ No.
+
+ > **Does the dataset identify any subpopulations (e.g., by age, gender)?** NA
+
+ > **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** NA
+
+ > **Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** NA
+
+ > **Were any ethical review processes conducted (e.g., by an institutional review board)?** No.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ From the paper:
+
+ > **Is your dataset free of biases?** No. There are many kinds of biases; some can be quantified, e.g. geo-location (most images originate from the US and Europe) or camera model (most images are taken with professional DSLR cameras that are not easily affordable), but there are likely many more biases that this dataset does contain. The only thing that this dataset does not contain are humans and parts of humans, as far as our validation procedure is accurate.
+
+ ### Other Known Limitations
+
+ From the paper:
+
+ > **Can you guarantee compliance with GDPR?** No, we cannot comment on legal issues.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Y. M. Asano, C. Rupprecht, A. Zisserman and A. Vedaldi.
+
+ From the paper:
+
+ > **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset has been constructed by the research group
+ “Visual Geometry Group” at the Department of Engineering Science of the University of Oxford.
+
+ ### Licensing Information
+
+ The PASS dataset is available to download for commercial/research purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). A complete version of the license can be found [here](https://www.robots.ox.ac.uk/~vgg/research/pass/license_pass.txt). The whole dataset only contains CC-BY licensed images with full attribution information.
+
+ ### Citation Information
+
+ ```bibtex
+ @Article{asano21pass,
+   author  = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi",
+   title   = "PASS: An ImageNet replacement for self-supervised pretraining without humans",
+   journal = "NeurIPS Track on Datasets and Benchmarks",
+   year    = "2021"
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "PASS (Pictures without humAns for Self-Supervision) is a large-scale dataset of 1,440,191 images that does not include any humans\nand which can be used for high-quality pretraining while significantly reducing privacy concerns.\nThe PASS images are sourced from the YFCC-100M dataset.\n", "citation": "@Article{asano21pass,\nauthor = \"Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi\",\ntitle = \"PASS: An ImageNet replacement for self-supervised pretraining without humans\",\njournal = \"NeurIPS Track on Datasets and Benchmarks\",\nyear = \"2021\"\n}\n", "homepage": "https://www.robots.ox.ac.uk/~vgg/research/pass/", "license": "Creative Commons Attribution 4.0 International", "features": {"image": {"id": null, "_type": "Image"}, "creator_username": {"dtype": "string", "id": null, "_type": "Value"}, "hash": {"dtype": "string", "id": null, "_type": "Value"}, "gps_latitude": {"dtype": "float32", "id": null, "_type": "Value"}, "gps_longitude": {"dtype": "float32", "id": null, "_type": "Value"}, "date_taken": {"dtype": "timestamp[us]", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "pass", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 178578279339, "num_examples": 1439719, "dataset_name": "pass"}}, "download_checksums": {"https://zenodo.org/record/5570664/files/pass_metadata.csv?download=1": {"num_bytes": 151124344, "checksum": "86eeb812aa5fed17eb06f6902c77f695c2a17de489569428833526adc61fe669"}, "https://zenodo.org/record/5570664/files/PASS.0.tar?download=1": {"num_bytes": 18719498240, "checksum": "4f1380dad26a51c8ee4459943b795a71df0b1fe228eec8294c39f84215d8252d"}, "https://zenodo.org/record/5570664/files/PASS.1.tar?download=1": {"num_bytes": 18702233600, "checksum": "f573d0b994224d2e5c8a47a4e16a228d64744ddf235fdae5704eb2843b2c8536"}, "https://zenodo.org/record/5570664/files/PASS.2.tar?download=1": {"num_bytes": 18708899840, "checksum": "f46791122c4e75a77131b56b64eab6aa813de629a963f557c280a626a636fbbd"}, "https://zenodo.org/record/5570664/files/PASS.3.tar?download=1": {"num_bytes": 18705152000, "checksum": "ee761f4792eb3e7160d4aa62cb59e0c4c263f64b4ac1ef621ab16103041200ba"}, "https://zenodo.org/record/5570664/files/PASS.4.tar?download=1": {"num_bytes": 18697226240, "checksum": "e699129fc91e164a51c5e79267f5a288e8eb41979eab22455714ff1e90c9cb63"}, "https://zenodo.org/record/5570664/files/PASS.5.tar?download=1": {"num_bytes": 18690590720, "checksum": "3bb284916640f216c554958030936c9e9930a517496310f0ffbaabde51e01c79"}, "https://zenodo.org/record/5570664/files/PASS.6.tar?download=1": {"num_bytes": 18693263360, "checksum": "cb633e82e9fe9be2b81182fd5bff863f5b0a75373746169d0595be7fcdcc374e"}, "https://zenodo.org/record/5570664/files/PASS.7.tar?download=1": {"num_bytes": 18709043200, "checksum": "236a0815368c339d14aa3634b0a3be11b9638d245200c78271a74b2c753c228e"}, "https://zenodo.org/record/5570664/files/PASS.8.tar?download=1": {"num_bytes": 18702499840, "checksum": "91dec78455e56559cda092935ba01ac6978cea78d24e21e27f2dbc170314cf1a"}, "https://zenodo.org/record/5570664/files/PASS.9.tar?download=1": {"num_bytes": 11174297600, "checksum": "fba7f2414beffa2163b6cec1641b9e434d54abaa1a509086d5bd22c3122537e2"}}, "download_size": 179653828984, "post_processing_size": null, "dataset_size": 178578279339, "size_in_bytes": 358232108323}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fca9b0075ab484d509c9726c78ec35527e63445b1941ab4f9fc5e1183bf8b7c6
+ size 12801
pass.py ADDED
@@ -0,0 +1,105 @@
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PASS dataset."""
+
+ import os
+ from datetime import datetime
+
+ import numpy as np
+ import pandas as pd
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ PASS (Pictures without humAns for Self-Supervision) is a large-scale dataset of 1,440,191 images that does not include any humans
+ and which can be used for high-quality pretraining while significantly reducing privacy concerns.
+ The PASS images are sourced from the YFCC-100M dataset.
+ """
+
+ _CITATION = """\
+ @Article{asano21pass,
+ author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi",
+ title = "PASS: An ImageNet replacement for self-supervised pretraining without humans",
+ journal = "NeurIPS Track on Datasets and Benchmarks",
+ year = "2021"
+ }
+ """
+
+ _HOMEPAGE = "https://www.robots.ox.ac.uk/~vgg/research/pass/"
+
+ _LICENSE = "Creative Commons Attribution 4.0 International"
+
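+ # The images are distributed as ten tar archives (PASS.0.tar through PASS.9.tar) on Zenodo;
+ # the metadata CSV maps each image hash to its creator, GPS and date fields.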
+ _IMAGE_ARCHIVE_DOWNLOAD_URL_TEMPLATE = "https://zenodo.org/record/5570664/files/PASS.{idx}.tar?download=1"
+
+ _METADATA_DOWNLOAD_URL = "https://zenodo.org/record/5570664/files/pass_metadata.csv?download=1"
+
+
+ class PASS(datasets.GeneratorBasedBuilder):
+     """PASS dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Image(),
+                     "creator_username": datasets.Value("string"),
+                     "hash": datasets.Value("string"),
+                     "gps_latitude": datasets.Value("float32"),
+                     "gps_longitude": datasets.Value("float32"),
+                     "date_taken": datasets.Value("timestamp[us]"),
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         metadata_file, *image_dirs = dl_manager.download(
+             [_METADATA_DOWNLOAD_URL] + [_IMAGE_ARCHIVE_DOWNLOAD_URL_TEMPLATE.format(idx=i) for i in range(10)]
+         )
+         metadata = pd.read_csv(metadata_file, encoding="utf-8")
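+         # Replace pandas NaN with Python None so missing GPS/date values are emitted as proper nulls.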
+         metadata = metadata.replace(np.NaN, pd.NA).where(metadata.notnull(), None)
+         metadata = metadata.set_index("hash")
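+         # iter_archive yields (path, file object) pairs, streaming each tar without extracting it to disk.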
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "metadata": metadata,
+                     "image_archives": [dl_manager.iter_archive(image_dir) for image_dir in image_dirs],
+                 },
+             )
+         ]
+
+     def _generate_examples(self, metadata, image_archives):
+         """Yields examples."""
+         for image_archive in image_archives:
+             for path, file in image_archive:
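+                 # The file stem is the image hash, which indexes the metadata table built in _split_generators.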
+                 img_hash = os.path.basename(path).split(".")[0]
+                 img_meta = metadata.loc[img_hash]
+                 yield img_hash, {
+                     "image": {"path": path, "bytes": file.read()},
+                     "creator_username": img_meta["unickname"],
+                     "hash": img_hash,
+                     "gps_latitude": img_meta["latitude"],
+                     "gps_longitude": img_meta["longitude"],
+                     "date_taken": datetime.strptime(img_meta["datetaken"], "%Y-%m-%d %H:%M:%S.%f")
+                     if img_meta["datetaken"] is not None
+                     else None,
+                 }