parquet-converter committed on
Commit 49237d6 • 1 Parent(s): b634532

Update parquet files

README.md DELETED
@@ -1,263 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-nc-nd-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - extended|other-wider
- task_categories:
- - object-detection
- task_ids:
- - face-detection
- paperswithcode_id: wider-face-1
- pretty_name: WIDER FACE
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: faces
-     sequence:
-     - name: bbox
-       sequence: float32
-       length: 4
-     - name: blur
-       dtype:
-         class_label:
-           names:
-             0: clear
-             1: normal
-             2: heavy
-     - name: expression
-       dtype:
-         class_label:
-           names:
-             0: typical
-             1: exaggerate
-     - name: illumination
-       dtype:
-         class_label:
-           names:
-             0: normal
-             1: 'exaggerate '
-     - name: occlusion
-       dtype:
-         class_label:
-           names:
-             0: 'no'
-             1: partial
-             2: heavy
-     - name: pose
-       dtype:
-         class_label:
-           names:
-             0: typical
-             1: atypical
-     - name: invalid
-       dtype: bool
-   splits:
-   - name: train
-     num_bytes: 12049881
-     num_examples: 12880
-   - name: test
-     num_bytes: 3761103
-     num_examples: 16097
-   - name: validation
-     num_bytes: 2998735
-     num_examples: 3226
-   download_size: 3676086479
-   dataset_size: 18809719
- ---
-
- # Dataset Card for WIDER FACE
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** http://shuoyang1213.me/WIDERFACE/index.html
- - **Repository:**
- - **Paper:** [WIDER FACE: A Face Detection Benchmark](https://arxiv.org/abs/1511.06523)
- - **Leaderboard:** http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html
- - **Point of Contact:** shuoyang.1213@gmail.com
-
- ### Dataset Summary
-
- The WIDER FACE dataset is a face detection benchmark dataset whose images are
- selected from the publicly available WIDER dataset. It comprises 32,203 images
- with 393,703 labeled faces exhibiting a high degree of variability in scale,
- pose and occlusion. The dataset is organized into 61 event classes. For each
- event class, 40%/10%/50% of the data is randomly selected as the training,
- validation and testing sets. Evaluation adopts the same metric employed in the
- PASCAL VOC dataset. As with the MALF and Caltech datasets, bounding box ground
- truth for the test images is not released; users are required to submit final
- prediction files, which the organizers then evaluate.
-
- ### Supported Tasks and Leaderboards
-
- - `face-detection`: The dataset can be used to train a model for face detection. More information on evaluating the model's performance can be found [here](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html).
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- A data point comprises an image and its face annotations.
-
- ```
- {
-   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1024x755 at 0x19FA12186D8>,
-   'faces': {
-     'bbox': [
-       [178.0, 238.0, 55.0, 73.0],
-       [248.0, 235.0, 59.0, 73.0],
-       [363.0, 157.0, 59.0, 73.0],
-       [468.0, 153.0, 53.0, 72.0],
-       [629.0, 110.0, 56.0, 81.0],
-       [745.0, 138.0, 55.0, 77.0]
-     ],
-     'blur': [2, 2, 2, 2, 2, 2],
-     'expression': [0, 0, 0, 0, 0, 0],
-     'illumination': [0, 0, 0, 0, 0, 0],
-     'occlusion': [1, 2, 1, 2, 1, 2],
-     'pose': [0, 0, 0, 0, 0, 0],
-     'invalid': [False, False, False, False, False, False]
-   }
- }
- ```
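The per-face attributes in `faces` are stored as parallel lists, one entry per face. A minimal sketch (using a hypothetical record shaped like the example above, image omitted) that regroups the parallel lists into one dict per face:

```python
# Hypothetical record mirroring the example instance above (two faces only).
faces = {
    "bbox": [[178.0, 238.0, 55.0, 73.0], [248.0, 235.0, 59.0, 73.0]],
    "blur": [2, 2],
    "occlusion": [1, 2],
}

def iter_faces(faces):
    """Zip the parallel per-face lists into one dict per face."""
    keys = list(faces)
    for values in zip(*(faces[k] for k in keys)):
        yield dict(zip(keys, values))

per_face = list(iter_faces(faces))
```

This shape-agnostic regrouping works for any subset of the attribute columns, since `datasets.Sequence` of a dict stores each field as a column.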
-
- ### Data Fields
-
- - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column, *i.e.* `dataset[0]["image"]`, the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column: `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- - `faces`: a dictionary of face attributes for the faces present in the image
-   - `bbox`: the bounding box of each face (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
-   - `blur`: the blur level of each face, with possible values `clear` (0), `normal` (1) and `heavy` (2)
-   - `expression`: the facial expression of each face, with possible values `typical` (0) and `exaggerate` (1)
-   - `illumination`: the lighting condition of each face, with possible values `normal` (0) and `exaggerate` (1)
-   - `occlusion`: the level of occlusion of each face, with possible values `no` (0), `partial` (1) and `heavy` (2)
-   - `pose`: the pose of each face, with possible values `typical` (0) and `atypical` (1)
-   - `invalid`: whether the annotation of the face is marked invalid
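Since `bbox` uses the COCO `[x, y, width, height]` convention, converting to corner coordinates (as expected by, e.g., Pascal-VOC-style tooling) is a one-liner; a small sketch, with an illustrative helper name:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box of the example instance above:
corners = coco_to_corners([178.0, 238.0, 55.0, 73.0])  # [178.0, 238.0, 233.0, 311.0]
```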
-
- ### Data Splits
-
- The data is split into training, validation and testing sets. The WIDER FACE dataset is organized into 61 event classes; for each event class, 40%/10%/50% of the data is randomly selected as the training, validation and testing sets. The training set contains 12880 images, the validation set 3226 images and the test set 16097 images.
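As a quick sanity check, the per-split image counts stated above do add up to the 32,203 images of the full dataset and recover the 40%/10%/50% scheme when rounded:

```python
# Split sizes as listed in the dataset card.
splits = {"train": 12880, "validation": 3226, "test": 16097}
total = sum(splits.values())  # 32203 images overall
# Percentage of the whole dataset per split, rounded to the nearest integer.
fractions = {name: round(100 * n / total) for name, n in splits.items()}
```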
178
-
179
- ## Dataset Creation
180
-
181
- ### Curation Rationale
182
-
183
- The curators state that the current face detection datasets typically contain a few thousand faces, with limited variations in pose, scale, facial expression, occlusion, and background clutters,
184
- making it difficult to assess for real world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping
185
- with heavy occlusion, small scale, and atypical pose.
186
-
187
- ### Source Data
188
-
189
- #### Initial Data Collection and Normalization
190
-
191
- WIDER FACE dataset is a subset of the WIDER dataset.
192
- The images in WIDER were collected in the following three steps: 1) Event categories
193
- were defined and chosen following the Large Scale Ontology for Multimedia (LSCOM) [22], which provides around 1000 concepts relevant to video event analysis. 2) Images
194
- are retrieved using search engines like Google and Bing. For
195
- each category, 1000-3000 images were collected. 3) The
196
- data were cleaned by manually examining all the images
197
- and filtering out images without human face. Then, similar
198
- images in each event category were removed to ensure large
199
- diversity in face appearance. A total of 32203 images are
200
- eventually included in the WIDER FACE dataset.
201
-
202
- #### Who are the source language producers?
203
-
204
- The images are selected from publicly available WIDER dataset.
205
-
206
- ### Annotations
207
-
208
- #### Annotation process
209
-
210
- The curators label the bounding boxes for all
211
- the recognizable faces in the WIDER FACE dataset. The
212
- bounding box is required to tightly contain the forehead,
213
- chin, and cheek.. If a face is occluded, they still label it with a bounding box but with an estimation on the scale of occlusion. Similar to the PASCAL VOC dataset [6], they assign an ’Ignore’ flag to the face
214
- which is very difficult to be recognized due to low resolution and small scale (10 pixels or less). After annotating
215
- the face bounding boxes, they further annotate the following
216
- attributes: pose (typical, atypical) and occlusion level (partial, heavy). Each annotation is labeled by one annotator
217
- and cross-checked by two different people.
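The released `wider_face_*_bbx_gt.txt` annotation files store, per image, a file name line, a face count line, and one ten-integer line per face (x, y, width, height, then the six attributes). A minimal parser sketch, with the field order taken from the repository's loading script and an illustrative one-image sample:

```python
import io

# Attribute order after the four bbox values, per the loading script.
FIELDS = ["blur", "expression", "illumination", "invalid", "occlusion", "pose"]

def parse_bbx_gt(text):
    """Parse bbx_gt-format annotations: filename, face count, then one
    10-integer line per face. Zero-face images still carry one all-zero line."""
    f = io.StringIO(text)
    records = {}
    while True:
        name = f.readline().strip()
        if not name:
            break
        n = int(f.readline())
        faces = []
        for _ in range(max(n, 1)):  # always consume at least one line
            nums = [int(v) for v in f.readline().split()]
            if n == 0:
                continue  # discard the placeholder all-zero line
            face = {"bbox": nums[:4]}
            face.update(zip(FIELDS, nums[4:]))
            faces.append(face)
        records[name] = faces
    return records

# Illustrative sample in the same format (one image, one face).
sample = "0--Parade/0_Parade_marchingband_1_849.jpg\n1\n449 330 122 149 0 0 0 0 0 0\n"
parsed = parse_bbx_gt(sample)
```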
-
- #### Who are the annotators?
-
- Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang.
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang
-
- ### Licensing Information
-
- [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/).
-
- ### Citation Information
-
- ```
- @inproceedings{yang2016wider,
-     Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
-     Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
-     Title = {WIDER FACE: A Face Detection Benchmark},
-     Year = {2016}}
- ```
-
- ### Contributions
-
- Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "WIDER FACE dataset is a face detection benchmark dataset, of which images are\nselected from the publicly available WIDER dataset. We choose 32,203 images and\nlabel 393,703 faces with a high degree of variability in scale, pose and\nocclusion as depicted in the sample images. WIDER FACE dataset is organized\nbased on 61 event classes. For each event class, we randomly select 40%/10%/50%\ndata as training, validation and testing sets. We adopt the same evaluation\nmetric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets,\nwe do not release bounding box ground truth for the test images. Users are\nrequired to submit final prediction files, which we shall proceed to evaluate.\n", "citation": "@inproceedings{yang2016wider,\n Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},\n Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\n Title = {WIDER FACE: A Face Detection Benchmark},\n Year = {2016}}\n", "homepage": "http://shuoyang1213.me/WIDERFACE/", "license": "Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)", "features": {"image": {"decode": true, "id": null, "_type": "Image"}, "faces": {"feature": {"bbox": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": 4, "id": null, "_type": "Sequence"}, "blur": {"num_classes": 3, "names": ["clear", "normal", "heavy"], "id": null, "_type": "ClassLabel"}, "expression": {"num_classes": 2, "names": ["typical", "exaggerate"], "id": null, "_type": "ClassLabel"}, "illumination": {"num_classes": 2, "names": ["normal", "exaggerate "], "id": null, "_type": "ClassLabel"}, "occlusion": {"num_classes": 3, "names": ["no", "partial", "heavy"], "id": null, "_type": "ClassLabel"}, "pose": {"num_classes": 2, "names": ["typical", "atypical"], "id": null, "_type": "ClassLabel"}, "invalid": {"dtype": "bool", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wider_face", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12049881, "num_examples": 12880, "dataset_name": "wider_face"}, "test": {"name": "test", "num_bytes": 3761103, "num_examples": 16097, "dataset_name": "wider_face"}, "validation": {"name": "validation", "num_bytes": 2998735, "num_examples": 3226, "dataset_name": "wider_face"}}, "download_checksums": {"https://huggingface.co/datasets/wider_face/resolve/main/data/WIDER_train.zip": {"num_bytes": 1465602149, "checksum": "e23b76129c825cafae8be944f65310b2e1ba1c76885afe732f179c41e5ed6d59"}, "https://huggingface.co/datasets/wider_face/resolve/main/data/WIDER_val.zip": {"num_bytes": 362752168, "checksum": "f9efbd09f28c5d2d884be8c0eaef3967158c866a593fc36ab0413e4b2a58a17a"}, "https://huggingface.co/datasets/wider_face/resolve/main/data/WIDER_test.zip": {"num_bytes": 1844140520, "checksum": "3b0313e11ea292ec58894b47ac4c0503b230e12540330845d70a7798241f88d3"}, "https://huggingface.co/datasets/wider_face/resolve/main/data/wider_face_split.zip": {"num_bytes": 3591642, "checksum": "c7561e4f5e7a118c249e0a5c5c902b0de90bbf120d7da9fa28d99041f68a8a5c"}}, "download_size": 3676086479, "post_processing_size": null, "dataset_size": 18809719, "size_in_bytes": 3694896198}}
 
 
data/wider_face_split.zip → default/wider_face-test-00000-of-00004.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c7561e4f5e7a118c249e0a5c5c902b0de90bbf120d7da9fa28d99041f68a8a5c
- size 3591642
+ oid sha256:f20ec825f58fb431a37483aa4ff3a7250b7d4f1d70231527ece610d3b4ca7e7f
+ size 585235249
data/WIDER_val.zip → default/wider_face-test-00001-of-00004.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f9efbd09f28c5d2d884be8c0eaef3967158c866a593fc36ab0413e4b2a58a17a
- size 362752168
+ oid sha256:c591ae57a011e4d233932efd47beacea60cf7dadd8392c68aae57e7866c96455
+ size 581011762
data/WIDER_test.zip → default/wider_face-test-00002-of-00004.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3b0313e11ea292ec58894b47ac4c0503b230e12540330845d70a7798241f88d3
- size 1844140520
+ oid sha256:05d4bb166e0e685d44b1673802c2ccd1e789a51c4db7670bd8e7d715f7f220d3
+ size 559695016
data/WIDER_train.zip → default/wider_face-test-00003-of-00004.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e23b76129c825cafae8be944f65310b2e1ba1c76885afe732f179c41e5ed6d59
- size 1465602149
+ oid sha256:b4b3e5a7be7553bf52cc96d7a43f1ce8e351b6619698115068c3ce61243bff6c
+ size 127707344
default/wider_face-train-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83debe033b642fb5b0d95f062c05c433a0734b5316a5b75d4f51796ca3c99c5a
+ size 594758209
default/wider_face-train-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f118ae8dccb573b8c5a88be6dcc54a6bd683c0a33c920bb0ea93ce659340abb
+ size 550737595
default/wider_face-train-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1914a403344c67b939f13a999062bdc8e2b1d06c207b4a92986d79c18e1e01c
+ size 328649116
default/wider_face-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:691fd720c61313a450bf77437fb602a8f58a693799392bc500935e36aff986ac
+ size 365032533
wider_face.py DELETED
@@ -1,165 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """WIDER FACE dataset."""
-
- import os
-
- import datasets
-
-
- _HOMEPAGE = "http://shuoyang1213.me/WIDERFACE/"
-
- _LICENSE = "Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)"
-
- _CITATION = """\
- @inproceedings{yang2016wider,
-     Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
-     Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
-     Title = {WIDER FACE: A Face Detection Benchmark},
-     Year = {2016}}
- """
-
- _DESCRIPTION = """\
- WIDER FACE dataset is a face detection benchmark dataset, of which images are
- selected from the publicly available WIDER dataset. We choose 32,203 images and
- label 393,703 faces with a high degree of variability in scale, pose and
- occlusion as depicted in the sample images. WIDER FACE dataset is organized
- based on 61 event classes. For each event class, we randomly select 40%/10%/50%
- data as training, validation and testing sets. We adopt the same evaluation
- metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets,
- we do not release bounding box ground truth for the test images. Users are
- required to submit final prediction files, which we shall proceed to evaluate.
- """
-
-
- _REPO = "https://huggingface.co/datasets/wider_face/resolve/main/data"
- _URLS = {
-     "train": f"{_REPO}/WIDER_train.zip",
-     "validation": f"{_REPO}/WIDER_val.zip",
-     "test": f"{_REPO}/WIDER_test.zip",
-     "annot": f"{_REPO}/wider_face_split.zip",
- }
-
-
- class WiderFace(datasets.GeneratorBasedBuilder):
-     """WIDER FACE dataset."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "image": datasets.Image(),
-                     "faces": datasets.Sequence(
-                         {
-                             "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
-                             "blur": datasets.ClassLabel(names=["clear", "normal", "heavy"]),
-                             "expression": datasets.ClassLabel(names=["typical", "exaggerate"]),
-                             "illumination": datasets.ClassLabel(names=["normal", "exaggerate "]),
-                             "occlusion": datasets.ClassLabel(names=["no", "partial", "heavy"]),
-                             "pose": datasets.ClassLabel(names=["typical", "atypical"]),
-                             "invalid": datasets.Value("bool"),
-                         }
-                     ),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_dir = dl_manager.download_and_extract(_URLS)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "split": "train",
-                     "data_dir": data_dir["train"],
-                     "annot_dir": data_dir["annot"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "split": "test",
-                     "data_dir": data_dir["test"],
-                     "annot_dir": data_dir["annot"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "split": "val",
-                     "data_dir": data_dir["validation"],
-                     "annot_dir": data_dir["annot"],
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, split, data_dir, annot_dir):
-         image_dir = os.path.join(data_dir, "WIDER_" + split, "images")
-         annot_fname = "wider_face_test_filelist.txt" if split == "test" else f"wider_face_{split}_bbx_gt.txt"
-         with open(os.path.join(annot_dir, "wider_face_split", annot_fname), "r", encoding="utf-8") as f:
-             idx = 0
-             while True:
-                 line = f.readline()
-                 line = line.rstrip()
-                 if not line.endswith(".jpg"):
-                     break
-                 image_file_path = os.path.join(image_dir, line)
-                 faces = []
-                 if split != "test":
-                     # Read number of bounding boxes
-                     nbboxes = int(f.readline())
-                     # Cases with 0 bounding boxes still have one line with all zeros,
-                     # so we have to read it and discard it.
-                     if nbboxes == 0:
-                         f.readline()
-                     else:
-                         for _ in range(nbboxes):
-                             line = f.readline()
-                             line = line.rstrip()
-                             line_split = line.split()
-                             assert len(line_split) == 10, f"Cannot parse line: {line_split}"
-                             line_parsed = [int(n) for n in line_split]
-                             (
-                                 xmin,
-                                 ymin,
-                                 wbox,
-                                 hbox,
-                                 blur,
-                                 expression,
-                                 illumination,
-                                 invalid,
-                                 occlusion,
-                                 pose,
-                             ) = line_parsed
-                             faces.append(
-                                 {
-                                     "bbox": [xmin, ymin, wbox, hbox],
-                                     "blur": blur,
-                                     "expression": expression,
-                                     "illumination": illumination,
-                                     "occlusion": occlusion,
-                                     "pose": pose,
-                                     "invalid": invalid,
-                                 }
-                             )
-                 yield idx, {"image": image_file_path, "faces": faces}
-                 idx += 1