khaclinh committed on
Commit
e354384
1 Parent(s): 74ae0a9
Files changed (5)
  1. README.md +210 -1
  2. data/annotations.zip +3 -0
  3. data/soiling_annotations.zip +3 -0
  4. pp4av.py +149 -0
  5. requirements.txt +6 -0
README.md CHANGED
@@ -1,3 +1,212 @@
  ---
- license: cc-by-nc-nd-4.0
  ---
  ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - cc-by-nc-nd-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended
+ task_categories:
+ - object-detection
+ task_ids:
+ - face-detection
+ - license-plate-detection
+ pretty_name: PP4AV
  ---
+
+ # Dataset Card for PP4AV
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Languages](#languages)
+ - [Dataset Creation](#dataset-creation)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Dataset Folder](#dataset-folder)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/khaclinh/pp4av
+ - **Repository:** https://github.com/khaclinh/pp4av
+ - **Paper:** [PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving]
+ - **Point of Contact:** linhtk.dhbk@gmail.com
+
+ ### Dataset Summary
+
+ PP4AV is the first public dataset with faces and license plates annotated in driving scenarios. PP4AV provides 3,447 annotated driving images for both faces and license plates. For normal camera data, we sampled images from existing videos in which cameras were mounted on moving vehicles driving around European cities. The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime. For fisheye camera data, we used the fisheye images from the WoodScape dataset, selecting 244 images from the front, rear, left, and right cameras. PP4AV can be used as a benchmark suite (evaluation dataset) for data anonymization models in autonomous driving.
+
+ ### Languages
+
+ English
+
+
+ ## Dataset Creation
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The objective of PP4AV is to build a benchmark dataset that can be used to evaluate face and license plate detection models for autonomous driving. For normal camera data, we sampled images from existing videos in which cameras were mounted on moving vehicles driving around European cities. We focused on sampling data in urban areas rather than on highways in order to provide sufficient samples of license plates and pedestrians. The images in PP4AV were sampled from **6** European cities at various times of day, including nighttime. The source data from the 6 European cities is described below:
+ - `Paris`: This subset contains **1450** images of a car driving down Parisian streets during the day. The video frame rate is 30 frames per second and the original video is longer than one hour; we cut a shorter clip for sampling and annotation. The original video can be found at the following URL:
+ URL: [paris_youtube_video](https://www.youtube.com/watch?v=nqWtGWymV6c)
+ - `Netherlands daytime`: This subset consists of **388** daytime images of The Hague and Amsterdam. The images in this subset are sampled from the original video below:
+ URL: [netherland_youtube_video](https://www.youtube.com/watch?v=Xuo4uCZxNrE)
+ The frame rate of the video is 30 frames per second and the original video is longer than half an hour; we cut a shorter clip for sampling and annotation.
+ - `Netherlands nighttime`: This subset consists of **824** nighttime images of The Hague and Amsterdam, sampled from the following original video:
+ URL: [netherland_youtube_video](https://www.youtube.com/watch?v=eAy9eHsynhM)
+ The frame rate of the video is 30 frames per second and the original video is longer than half an hour; we cut a shorter clip for sampling and annotation.
+ - `Switzerland`: This subset consists of **372** images of Switzerland, sampled from the following video:
+ URL: [switzerland_youtube_video](https://www.youtube.com/watch?v=0iw5IP94m0Q)
+ The frame rate of the video is 30 frames per second and the original video is longer than one hour; we cut a shorter clip for sampling and annotation.
+ - `Zurich`: This subset consists of **50** images of Zurich provided by the Cityscapes training set in the package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
+ - `Stuttgart`: This subset consists of **69** images of Stuttgart provided by the Cityscapes training set in the package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
+ - `Strasbourg`: This subset consists of **50** images of Strasbourg provided by the Cityscapes training set in the package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
+
+ For fisheye camera data, we use the fisheye images from the WoodScape dataset, selecting **244** images from the front, rear, left, and right cameras.
+ The source of the fisheye data for sampling is WoodScape's [Fisheye images](https://woodscape.valeo.com/download).
+
+ In total, **3,447** images were selected and annotated in PP4AV.
+
+
+ ### Annotations
+
+ #### Annotation process
+
+ Annotators annotated facial and license plate objects in the images. For facial objects, bounding boxes cover all detectable human faces from the forehead to the chin and between the ears. Faces were labelled across diverse sizes and skin tones, including faces partially obscured by a transparent material such as a car windshield. For license plate objects, bounding boxes cover all recognizable license plates with high variability, such as different sizes, countries, vehicle types (motorcycle, automobile, bus, truck), and occlusions by other vehicles. License plates were annotated for vehicles involved in moving traffic. To ensure annotation quality, a two-step process was used. In the first phase, two teams of annotators independently annotate identical image sets. Once their annotations are complete, a merging method based on the IoU score between the two teams' bounding boxes is applied: pairs of annotations with an IoU score above a threshold are merged and saved as a single annotation, while pairs with an IoU score below the threshold are considered conflicting. In the second phase, two teams of reviewers inspect the conflicting pairs for revision before a second merging pass, similar to the first, is applied. The results of the two phases are combined to form the final annotation. All work was conducted in the [CVAT tool](https://github.com/openvinotoolkit/cvat).
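
The IoU-based merging described above can be sketched as follows. This is a minimal illustration only: the box format (`[x1, y1, x2, y2]`), the averaging of matched pairs, the default threshold, and the function names are assumptions, not the annotation team's actual tooling.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def merge_annotations(team_a, team_b, threshold=0.5):
    """Pair boxes from two annotation passes.

    Pairs with IoU >= threshold are merged (here: coordinate-wise average);
    everything else is returned as a conflict for manual review.
    """
    merged, conflicts = [], []
    unmatched_b = list(team_b)
    for a in team_a:
        best = max(unmatched_b, key=lambda b: iou(a, b), default=None)
        if best is not None and iou(a, best) >= threshold:
            unmatched_b.remove(best)
            merged.append([(x + y) / 2 for x, y in zip(a, best)])
        else:
            conflicts.append(a)
    conflicts.extend(unmatched_b)  # boxes only one team drew are conflicts too
    return merged, conflicts
```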
+
+ #### Who are the annotators?
+
+ Vantix Data Science team
+
+ ### Dataset Folder
+ The `data` folder contains the following files:
+ - `images.zip`: contains all preprocessed images of the PP4AV dataset, organized into the folders below:
+   - `fisheye`: 244 fisheye images in `.png` format
+   - `zurich`: 50 images in `.png` format
+   - `strasbourg`: 50 images in `.png` format
+   - `stuttgart`: 69 images in `.png` format
+   - `switzerland`: 372 images in `.png` format
+   - `netherlands_day`: 388 images in `.png` format
+   - `netherlands_night`: 824 images in `.png` format
+   - `paris`: 1450 images in `.png` format
+
+ - `annotations.zip`: contains the annotation data corresponding to `images.zip`. It includes the folders below, each holding one `.txt` file in `yolo v1.1` format per image:
+   - `fisheye`: 244 annotation files for the fisheye images
+   - `zurich`: 50 annotation files corresponding to the 50 images of the `zurich` subset
+   - `strasbourg`: 50 annotation files corresponding to the 50 images of the `strasbourg` subset
+   - `stuttgart`: 69 annotation files corresponding to the 69 images of the `stuttgart` subset
+   - `switzerland`: 372 annotation files corresponding to the 372 images of the `switzerland` subset
+   - `netherlands_day`: 388 annotation files corresponding to the 388 images of the `netherlands_day` subset
+   - `netherlands_night`: 824 annotation files corresponding to the 824 images of the `netherlands_night` subset
+   - `paris`: 1450 annotation files corresponding to the 1450 images of the `paris` subset
+ - `soiling_annotations.zip`: contains the raw annotation data without filtering. Its folder structure is the same as that of `annotations.zip`.
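
Because the image and annotation archives mirror each other, an extracted image can be paired with its annotation file by mirroring its relative path, as the loading script does. A minimal sketch (the directory paths are illustrative assumptions):

```python
import os


def annotation_path(image_file, image_root, annot_root):
    """Map an extracted .png image to its .txt annotation by mirroring the folder tree."""
    relative = os.path.relpath(image_file, image_root)  # e.g. "zurich/0001.png"
    return os.path.join(annot_root, relative.replace(".png", ".txt"))
```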
+
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point comprises an image and its face and license plate annotations.
+
+ ```
+ {
+   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1920x1080 at 0x19FA12186D8>,
+   'objects': {
+     'bbox': [
+       [0, 0.230078, 0.317081, 0.239062, 0.331367],
+       [1, 0.5017185, 0.0306425, 0.5185935, 0.0410975],
+       [1, 0.695078, 0.0710145, 0.7109375, 0.0863355],
+       [1, 0.4089065, 0.31646, 0.414375, 0.32764],
+       [0, 0.1843745, 0.403416, 0.201093, 0.414182],
+       [0, 0.7132, 0.3393474, 0.717922, 0.3514285]
+     ]
+   }
+ }
+ ```
+
+ ### Data Fields
+
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column, `dataset[0]["image"]`, the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time. Thus it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `objects`: a dictionary of the face and license plate bounding boxes present in the image
+   - `bbox`: the bounding box of each face and license plate (in the [yolo](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#yolo) format). Each row in the annotation `.txt` file for an image consists of data in the format `<object-class> <x_center> <y_center> <width> <height>`:
+     - `object-class`: an integer from 0 to 1, where 0 indicates a face object and 1 indicates a license plate object
+     - `x_center`: normalized x-axis coordinate of the center of the bounding box.
+       `x_center = <absolute_x_center> / <image_width>`
+     - `y_center`: normalized y-axis coordinate of the center of the bounding box.
+       `y_center = <absolute_y_center> / <image_height>`
+     - `width`: normalized width of the bounding box.
+       `width = <absolute_width> / <image_width>`
+     - `height`: normalized height of the bounding box.
+       `height = <absolute_height> / <image_height>`
+   - Example lines in a YOLO v1.1 format `.txt` annotation file:
+ ```
+ 1 0.716797 0.395833 0.216406 0.147222
+ 0 0.687109 0.379167 0.255469 0.158333
+ 1 0.420312 0.395833 0.140625 0.166667
+ ```
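
Converting one such row from normalized center format to corner coordinates follows the formulas above; a short sketch (the function name is illustrative, the arithmetic matches the loading script):

```python
def yolo_to_corners(line):
    """Parse '<class> <cx> <cy> <w> <h>' into (class, [x1, y1, x2, y2]), all normalized."""
    cls, cx, cy, w, h = line.split()[:5]
    cls, cx, cy, w, h = int(cls), float(cx), float(cy), float(w), float(h)
    # Corner coordinates: center minus/plus half the width and height.
    return cls, [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]
```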
+
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Linh Trinh
+
+ ### Licensing Information
+
+ [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{PP4AV2022,
+   title = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
+   author = {Linh Trinh and Phuong Pham and Hoang Trinh and Nguyen Bach and Dung Nguyen and Giang Nguyen and Huy Nguyen},
+   booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
+   year = {2023}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@khaclinh](https://github.com/khaclinh) for adding this dataset.
+
+
data/annotations.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aca1fed529a6f00598db63a1c685c3915d33d420430bad252a2822bac4b6609c
+ size 982986
data/soiling_annotations.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10731fc73eead247d280ce895d342f71b934eefdec1b7ab3622c92d7a3ddf516
+ size 1005826
pp4av.py ADDED
@@ -0,0 +1,149 @@
+ # coding=utf-8
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PP4AV dataset."""
+
+ import os
+ import re
+ from collections import defaultdict
+ from glob import glob
+
+ import datasets
+
+
+ _HOMEPAGE = "https://github.com/khaclinh/pp4av"
+
+ _LICENSE = "Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)"
+
+ _CITATION = """\
+ @inproceedings{PP4AV2022,
+     title = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
+     author = {Linh Trinh and Phuong Pham and Hoang Trinh and Nguyen Bach and Dung Nguyen and Giang Nguyen and Huy Nguyen},
+     booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
+     year = {2023}
+ }
+ """
+
+ _DESCRIPTION = """\
+ PP4AV is the first public dataset with faces and license plates annotated in driving scenarios.
+ PP4AV provides 3,447 annotated driving images for both faces and license plates.
+ For normal camera data, images were sampled from existing videos in which cameras were mounted on moving vehicles driving around European cities.
+ The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime.
+ For fisheye camera data, 244 images were selected from the front, rear, left, and right cameras of the WoodScape dataset.
+ The PP4AV dataset can be used as a benchmark suite (evaluation dataset) for data anonymization models in autonomous driving.
+ """
+
+
+ _REPO = "https://huggingface.co/datasets/khaclinh/pp4av/resolve/main/data"
+ _URLS = {
+     "test": f"{_REPO}/images.zip",
+     "annot": f"{_REPO}/soiling_annotations.zip",
+ }
+
+ IMG_EXT = ["png", "jpeg", "jpg"]
+ _SUBREDDITS = ["zurich", "strasbourg", "stuttgart", "switzerland", "netherlands_day", "netherlands_night", "paris"]
+
+
+ class PP4AVConfig(datasets.BuilderConfig):
+     """BuilderConfig for PP4AV."""
+
+     def __init__(self, name, **kwargs):
+         """BuilderConfig for PP4AV.
+         Args:
+             name: name of the configuration (subset).
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(PP4AVConfig, self).__init__(version=datasets.Version("1.0.0", ""), name=name, **kwargs)
+
+
+ class PP4AV(datasets.GeneratorBasedBuilder):
+     """PP4AV dataset."""
+
+     BUILDER_CONFIGS = [PP4AVConfig("fisheye")]
+     BUILDER_CONFIGS += [PP4AVConfig(subset) for subset in _SUBREDDITS]
+
+     DEFAULT_CONFIG_NAME = "fisheye"
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Image(),
+                     "faces": datasets.Sequence(datasets.Sequence(datasets.Value("float32"), length=4)),
+                     "plates": datasets.Sequence(datasets.Sequence(datasets.Value("float32"), length=4)),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "name": self.config.name,
+                     "data_dir": data_dir["test"],
+                     "annot_dir": data_dir["annot"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, name, data_dir, annot_dir):
+         image_dir = os.path.join(data_dir, name)
+         annotation_dir = os.path.join(annot_dir, name)
+
+         idx = 0
+         for i_file in glob(os.path.join(image_dir, "*.png")):
+             faces = []
+             plates = []
+
+             # The annotation file mirrors the image path, with a .txt extension.
+             img_relative_file = os.path.relpath(i_file, image_dir)
+             gt_relative_path = img_relative_file.replace(".png", ".txt")
+             gt_path = os.path.join(annotation_dir, gt_relative_path)
+
+             # Each line is "<class> <x_center> <y_center> <width> <height>", normalized.
+             annotation = defaultdict(list)
+             with open(gt_path, "r", encoding="utf-8") as f:
+                 for line in f:
+                     line = line.strip()
+                     if not line:
+                         continue
+                     assert re.match(r"^\d( [\d\.]+){4,5}$", line), "Incorrect line: %s" % line
+                     cls, cx, cy, w, h = line.split()[:5]
+                     cls, cx, cy, w, h = int(cls), float(cx), float(cy), float(w), float(h)
+                     # Convert center format to corner coordinates.
+                     x1, y1, x2, y2 = cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
+                     annotation[cls].append([x1, y1, x2, y2])
+
+             # Class 0 is a face, class 1 is a license plate.
+             for cls, bboxes in annotation.items():
+                 for x1, y1, x2, y2 in bboxes:
+                     if cls == 0:
+                         faces.append([x1, y1, x2, y2])
+                     else:
+                         plates.append([x1, y1, x2, y2])
+
+             yield idx, {"image": i_file, "faces": faces, "plates": plates}
+             idx += 1
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ regex
+ glob2
+ tqdm
+ pathlib
+ collections2
+ typing