Commit ade652b (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.18.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.18.0
Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +208 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. wider_face.py +163 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,208 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-wider
+ task_categories:
+ - other
+ task_ids:
+ - other-other-face-detection
+ paperswithcode_id: wider-face-1
+ pretty_name: WIDER FACE
+ ---
+
+ # Dataset Card for WIDER FACE
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** http://shuoyang1213.me/WIDERFACE/index.html
+ - **Repository:**
+ - **Paper:** [WIDER FACE: A Face Detection Benchmark](https://arxiv.org/abs/1511.06523)
+ - **Leaderboard:** http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html
+ - **Point of Contact:** shuoyang.1213@gmail.com
+
+ ### Dataset Summary
+
+ WIDER FACE dataset is a face detection benchmark dataset, of which images are
+ selected from the publicly available WIDER dataset. We choose 32,203 images and
+ label 393,703 faces with a high degree of variability in scale, pose and
+ occlusion as depicted in the sample images. WIDER FACE dataset is organized
+ based on 61 event classes. For each event class, we randomly select 40%/10%/50%
+ data as training, validation and testing sets. We adopt the same evaluation
+ metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets,
+ we do not release bounding box ground truth for the test images. Users are
+ required to submit final prediction files, which we shall proceed to evaluate.
+
+ ### Supported Tasks and Leaderboards
+
+ - `face-detection`: The dataset can be used to train a model for Face Detection. More information on evaluating the model's performance can be found [here](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html).
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point comprises an image and its face annotations.
+
+ ```
+ {
+   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1024x755 at 0x19FA12186D8>,
+   'faces': {
+     'bbox': [
+       [178.0, 238.0, 55.0, 73.0],
+       [248.0, 235.0, 59.0, 73.0],
+       [363.0, 157.0, 59.0, 73.0],
+       [468.0, 153.0, 53.0, 72.0],
+       [629.0, 110.0, 56.0, 81.0],
+       [745.0, 138.0, 55.0, 77.0]
+     ],
+     'blur': [2, 2, 2, 2, 2, 2],
+     'expression': [0, 0, 0, 0, 0, 0],
+     'illumination': [0, 0, 0, 0, 0, 0],
+     'occlusion': [1, 2, 1, 2, 1, 2],
+     'pose': [0, 0, 0, 0, 0, 0],
+     'invalid': [False, False, False, False, False, False]
+   }
+ }
+ ```
+
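The instance above is a plain Python structure once decoded, so it can be inspected directly. A minimal sketch, with the `bbox` and `occlusion` values copied from the sample instance above, computing per-face box areas and counting heavily occluded faces:

```python
# Face annotations copied from the sample instance above (coco-style [x, y, w, h]).
faces = {
    "bbox": [
        [178.0, 238.0, 55.0, 73.0],
        [248.0, 235.0, 59.0, 73.0],
        [363.0, 157.0, 59.0, 73.0],
        [468.0, 153.0, 53.0, 72.0],
        [629.0, 110.0, 56.0, 81.0],
        [745.0, 138.0, 55.0, 77.0],
    ],
    "occlusion": [1, 2, 1, 2, 1, 2],
}

# Area of each box is simply width * height.
areas = [w * h for _, _, w, h in faces["bbox"]]
# Occlusion value 2 means "heavy" (see Data Fields below).
n_heavily_occluded = sum(1 for o in faces["occlusion"] if o == 2)
print(len(faces["bbox"]), n_heavily_occluded)  # 6 3
```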
+ ### Data Fields
+
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `faces`: a dictionary of attributes for the faces present in the image
+   - `bbox`: the bounding box of each face (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
+   - `blur`: the blur level of each face, with possible values `clear` (0), `normal` (1) and `heavy` (2)
+   - `expression`: the facial expression of each face, with possible values `typical` (0) and `exaggerate` (1)
+   - `illumination`: the lighting condition of each face, with possible values `normal` (0) and `exaggerate` (1)
+   - `occlusion`: the level of occlusion of each face, with possible values `no` (0), `partial` (1) and `heavy` (2)
+   - `pose`: the pose of each face, with possible values `typical` (0) and `atypical` (1)
+   - `invalid`: whether the face annotation is invalid
+
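Since `bbox` uses the coco `[x_min, y_min, width, height]` convention, converting to corner coordinates (as expected by many drawing and augmentation APIs) is straightforward; a minimal sketch:

```python
def coco_to_corners(bbox):
    """Convert a coco-style [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the sample instance above.
print(coco_to_corners([178.0, 238.0, 55.0, 73.0]))  # [178.0, 238.0, 233.0, 311.0]
```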
+ ### Data Splits
+
+ The data is split into training, validation and test sets. The WIDER FACE dataset is organized
+ into 61 event classes and, for each event class, 40%/10%/50% of the
+ data is randomly selected for the training, validation and test sets. The training set contains 12,880 images, the validation set 3,226 images and the test set 16,097 images.
+
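The split sizes quoted above are consistent with the stated 40%/10%/50% sampling over the 32,203 images; a quick arithmetic check:

```python
# Split sizes as stated in the dataset card.
train, val, test = 12880, 3226, 16097
total = train + val + test
print(total)  # 32203

# Fractions round to the advertised 40%/10%/50% split.
print(round(train / total, 2), round(val / total, 2), round(test / total, 2))  # 0.4 0.1 0.5
```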
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The curators state that existing face detection datasets typically contain only a few thousand faces, with limited variation in pose, scale, facial expression, occlusion and background clutter,
+ making it difficult to assess real-world performance. They argue that these limitations have partially contributed to the failure of some algorithms to cope
+ with heavy occlusion, small scale and atypical pose.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The WIDER FACE dataset is a subset of the WIDER dataset.
+ The images in WIDER were collected in three steps: 1) Event categories
+ were defined and chosen following the Large-Scale Concept Ontology for Multimedia (LSCOM) [22], which provides around 1,000 concepts relevant to video event analysis. 2) Images
+ were retrieved using search engines such as Google and Bing; for
+ each category, 1,000-3,000 images were collected. 3) The
+ data were cleaned by manually examining all the images
+ and filtering out those without human faces. Similar
+ images in each event category were then removed to ensure large
+ diversity in face appearance. A total of 32,203 images are
+ included in the WIDER FACE dataset.
+
+ #### Who are the source language producers?
+
+ The images are selected from the publicly available WIDER dataset.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The curators labeled bounding boxes for all
+ the recognizable faces in the WIDER FACE dataset. Each
+ bounding box is required to tightly contain the forehead,
+ chin and cheeks. If a face is occluded, it is still labeled with a bounding box, together with an estimate of the level of occlusion. Similar to the PASCAL VOC dataset [6], an 'Ignore' flag is assigned to faces
+ that are very difficult to recognize due to low resolution and small scale (10 pixels or less). After annotating
+ the face bounding boxes, the following
+ attributes were further annotated: pose (typical, atypical) and occlusion level (partial, heavy). Each annotation was labeled by one annotator
+ and cross-checked by two different people.
+
+ #### Who are the annotators?
+
+ Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{yang2016wider,
+   Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
+   Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+   Title = {WIDER FACE: A Face Detection Benchmark},
+   Year = {2016}}
+ ```
+
+ ### Contributions
+
+ Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "WIDER FACE dataset is a face detection benchmark dataset, of which images are\nselected from the publicly available WIDER dataset. We choose 32,203 images and\nlabel 393,703 faces with a high degree of variability in scale, pose and\nocclusion as depicted in the sample images. WIDER FACE dataset is organized\nbased on 61 event classes. For each event class, we randomly select 40%/10%/50%\ndata as training, validation and testing sets. We adopt the same evaluation\nmetric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets,\nwe do not release bounding box ground truth for the test images. Users are\nrequired to submit final prediction files, which we shall proceed to evaluate.\n", "citation": "@inproceedings{yang2016wider,\n Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},\n Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\n Title = {WIDER FACE: A Face Detection Benchmark},\n Year = {2016}}\n", "homepage": "http://shuoyang1213.me/WIDERFACE/", "license": "Unknown", "features": {"image": {"id": null, "_type": "Image"}, "faces": {"feature": {"bbox": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": 4, "id": null, "_type": "Sequence"}, "blur": {"num_classes": 3, "names": ["clear", "normal", "heavy"], "names_file": null, "id": null, "_type": "ClassLabel"}, "expression": {"num_classes": 2, "names": ["typical", "exaggerate"], "names_file": null, "id": null, "_type": "ClassLabel"}, "illumination": {"num_classes": 2, "names": ["normal", "exaggerate "], "names_file": null, "id": null, "_type": "ClassLabel"}, "occlusion": {"num_classes": 3, "names": ["no", "partial", "heavy"], "names_file": null, "id": null, "_type": "ClassLabel"}, "pose": {"num_classes": 2, "names": ["typical", "atypical"], "names_file": null, "id": null, "_type": "ClassLabel"}, "invalid": {"dtype": "bool", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": 
"Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wider_face", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11996002, "num_examples": 12880, "dataset_name": "wider_face"}, "test": {"name": "test", "num_bytes": 3796193, "num_examples": 16097, "dataset_name": "wider_face"}, "validation": {"name": "validation", "num_bytes": 2985369, "num_examples": 3226, "dataset_name": "wider_face"}}, "download_checksums": {"https://drive.google.com/u/0/uc?id=15hGDLhsx8bLgLcIRD5DhYt5iBxnjNF1M&export=download": {"num_bytes": 1465602149, "checksum": "e23b76129c825cafae8be944f65310b2e1ba1c76885afe732f179c41e5ed6d59"}, "https://drive.google.com/u/0/uc?id=1HIfDbVEWKmsYKJZm4lchTBDLW5N7dY5T&export=download": {"num_bytes": 1844140520, "checksum": "3b0313e11ea292ec58894b47ac4c0503b230e12540330845d70a7798241f88d3"}, "https://drive.google.com/u/0/uc?id=1GUCogbp16PMGa39thoMMeWxp7Rp5oM8Q&export=download": {"num_bytes": 362752168, "checksum": "f9efbd09f28c5d2d884be8c0eaef3967158c866a593fc36ab0413e4b2a58a17a"}, "http://shuoyang1213.me/WIDERFACE/support/bbx_annotation/wider_face_split.zip": {"num_bytes": 3591642, "checksum": "c7561e4f5e7a118c249e0a5c5c902b0de90bbf120d7da9fa28d99041f68a8a5c"}}, "download_size": 3676086479, "post_processing_size": null, "dataset_size": 18777564, "size_in_bytes": 3694864043}}
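The `download_checksums` entries above pair each download URL with a plain SHA-256 digest of the archive, which `datasets` uses to verify downloads. Verifying a local copy by hand can be sketched as follows (the file path in the usage comment is hypothetical):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against the checksum recorded in dataset_infos.json.
# digest = sha256_of_file("wider_face_split.zip")
```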
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0c38fd59a83037e2271f9778fa84462a34613629f35b2882343ad7077e93f8c
+ size 6751
wider_face.py ADDED
@@ -0,0 +1,163 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """WIDER FACE dataset."""
+
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "http://shuoyang1213.me/WIDERFACE/"
+
+ _LICENSE = "Unknown"
+
+ _CITATION = """\
+ @inproceedings{yang2016wider,
+ Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
+ Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ Title = {WIDER FACE: A Face Detection Benchmark},
+ Year = {2016}}
+ """
+
+ _DESCRIPTION = """\
+ WIDER FACE dataset is a face detection benchmark dataset, of which images are
+ selected from the publicly available WIDER dataset. We choose 32,203 images and
+ label 393,703 faces with a high degree of variability in scale, pose and
+ occlusion as depicted in the sample images. WIDER FACE dataset is organized
+ based on 61 event classes. For each event class, we randomly select 40%/10%/50%
+ data as training, validation and testing sets. We adopt the same evaluation
+ metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets,
+ we do not release bounding box ground truth for the test images. Users are
+ required to submit final prediction files, which we shall proceed to evaluate.
+ """
+
+ _TRAIN_DOWNLOAD_URL = "https://drive.google.com/u/0/uc?id=15hGDLhsx8bLgLcIRD5DhYt5iBxnjNF1M&export=download"
+ _TEST_DOWNLOAD_URL = "https://drive.google.com/u/0/uc?id=1HIfDbVEWKmsYKJZm4lchTBDLW5N7dY5T&export=download"
+ _VALIDATION_DOWNLOAD_URL = "https://drive.google.com/u/0/uc?id=1GUCogbp16PMGa39thoMMeWxp7Rp5oM8Q&export=download"
+ _ANNOT_DOWNLOAD_URL = "http://shuoyang1213.me/WIDERFACE/support/bbx_annotation/wider_face_split.zip"
+
+
+ class WiderFace(datasets.GeneratorBasedBuilder):
+     """WIDER FACE dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Image(),
+                     "faces": datasets.Sequence(
+                         {
+                             "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                             "blur": datasets.ClassLabel(names=["clear", "normal", "heavy"]),
+                             "expression": datasets.ClassLabel(names=["typical", "exaggerate"]),
+                             "illumination": datasets.ClassLabel(names=["normal", "exaggerate "]),
+                             "occlusion": datasets.ClassLabel(names=["no", "partial", "heavy"]),
+                             "pose": datasets.ClassLabel(names=["typical", "atypical"]),
+                             "invalid": datasets.Value("bool"),
+                         }
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         train_dir, test_dir, validation_dir, annot_dir = dl_manager.download_and_extract(
+             [_TRAIN_DOWNLOAD_URL, _TEST_DOWNLOAD_URL, _VALIDATION_DOWNLOAD_URL, _ANNOT_DOWNLOAD_URL]
+         )
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "split": "train",
+                     "data_dir": train_dir,
+                     "annot_dir": annot_dir,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "split": "test",
+                     "data_dir": test_dir,
+                     "annot_dir": annot_dir,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "split": "val",
+                     "data_dir": validation_dir,
+                     "annot_dir": annot_dir,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, split, data_dir, annot_dir):
+         image_dir = os.path.join(data_dir, "WIDER_" + split, "images")
+         annot_fname = "wider_face_test_filelist.txt" if split == "test" else f"wider_face_{split}_bbx_gt.txt"
+         with open(os.path.join(annot_dir, "wider_face_split", annot_fname), "r", encoding="utf-8") as f:
+             idx = 0
+             while True:
+                 line = f.readline()
+                 line = line.rstrip()
+                 if not line.endswith(".jpg"):
+                     break
+                 image_file_path = os.path.join(image_dir, line)
+                 faces = []
+                 if split != "test":
+                     # Read number of bounding boxes
+                     nbboxes = int(f.readline())
+                     # Cases with 0 bounding boxes still have one line with all zeros,
+                     # so we have to read it and discard it.
+                     if nbboxes == 0:
+                         f.readline()
+                     else:
+                         for _ in range(nbboxes):
+                             line = f.readline()
+                             line = line.rstrip()
+                             line_split = line.split()
+                             assert len(line_split) == 10, f"Cannot parse line: {line_split}"
+                             line_parsed = [int(n) for n in line_split]
+                             (
+                                 xmin,
+                                 ymin,
+                                 wbox,
+                                 hbox,
+                                 blur,
+                                 expression,
+                                 illumination,
+                                 invalid,
+                                 occlusion,
+                                 pose,
+                             ) = line_parsed
+                             faces.append(
+                                 {
+                                     "bbox": [xmin, ymin, wbox, hbox],
+                                     "blur": blur,
+                                     "expression": expression,
+                                     "illumination": illumination,
+                                     "occlusion": occlusion,
+                                     "pose": pose,
+                                     "invalid": invalid,
+                                 }
+                             )
+                 yield idx, {"image": image_file_path, "faces": faces}
+                 idx += 1
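The 10-field annotation lines that `_generate_examples` unpacks follow the order `x_min y_min width height blur expression illumination invalid occlusion pose`. A standalone sketch of parsing one such line, mirroring the loading script above:

```python
def parse_annotation_line(line):
    """Parse one WIDER FACE bounding-box annotation line into a face dict,
    using the same field order the loading script above unpacks."""
    fields = [int(n) for n in line.split()]
    assert len(fields) == 10, f"Cannot parse line: {line!r}"
    xmin, ymin, wbox, hbox, blur, expression, illumination, invalid, occlusion, pose = fields
    return {
        "bbox": [xmin, ymin, wbox, hbox],
        "blur": blur,
        "expression": expression,
        "illumination": illumination,
        "occlusion": occlusion,
        "pose": pose,
        "invalid": bool(invalid),
    }

# Example line built from the first face of the sample instance in the README.
print(parse_annotation_line("178 238 55 73 2 0 0 0 1 0")["bbox"])  # [178, 238, 55, 73]
```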