keremberke committed
Commit 90d2c6d
1 Parent(s): 28fa769

dataset uploaded by roboflow2huggingface package

README.dataset.txt ADDED
@@ -0,0 +1,27 @@
+ # undefined > raw-images_ommittedSuitClasses
+ https://public.roboflow.ai/object-detection/undefined
+
+ Provided by undefined
+ License: CC BY 4.0
+
+ # Personal Protective Equipment Dataset and Model
+
+ This dataset is a collection of images with annotations for the classes below:
+
+ * goggles
+ * helmet
+ * mask
+ * no-suit
+ * no_goggles
+ * no_helmet
+ * no_mask
+ * no_shoes
+ * shoes
+ * suit
+ * no_glove
+ * glove
+
+
+ ## Usage
+
+ Most of these classes are underrepresented and would need to be [balanced](https://blog.roboflow.com/handling-unbalanced-classes/) for better detection. An improved model could then be used to detect the classes above and help minimize exposure to hazards that cause serious workplace injuries. A quick way to inspect the balance is sketched below.
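+
+ As a quick check of that balance, the sketch below counts annotations per class with the Hugging Face loader documented in README.md. The counting logic is illustrative, not part of the original export; note that the hosted version exposes only the 10 classes listed in README.md (the suit classes are omitted, per the version name above).
+
+     from collections import Counter
+     from datasets import load_dataset
+
+     ds = load_dataset("keremberke/protective-equipment-detection", name="full")
+     # class-index -> name mapping comes from the ClassLabel feature
+     names = ds["train"].features["objects"].feature["category"].names
+     counts = Counter()
+     for example in ds["train"]:
+         counts.update(names[i] for i in example["objects"]["category"])
+     print(counts)  # small counts flag the underrepresented classes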
README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ task_categories:
+ - object-detection
+ tags:
+ - roboflow
+ - roboflow2huggingface
+ - Manufacturing
+ ---
+
+ <div align="center">
+ <img width="640" alt="keremberke/protective-equipment-detection" src="https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/thumbnail.jpg">
+ </div>
+
+ ### Dataset Labels
+
+ ```
+ ['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes']
+ ```
+
+
+ ### Number of Images
+
+ ```json
+ {"valid": 3570, "test": 1935, "train": 6473}
+ ```
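+
+ Note that the `valid` key here corresponds to the split that `load_dataset` exposes as `validation` (see the loading script in this commit). A minimal sanity check of these counts, assuming the usage shown in the next section:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("keremberke/protective-equipment-detection", name="full")
+ # expect train=6473, validation=3570, test=1935
+ print({split: ds[split].num_rows for split in ds})
+ ```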
+
+
+ ### How to Use
+
+ - Install [datasets](https://pypi.org/project/datasets/):
+
+ ```bash
+ pip install datasets
+ ```
+
+ - Load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("keremberke/protective-equipment-detection", name="full")
+ example = ds['train'][0]
+ ```
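+
+ Each example pairs an image with its COCO-style annotations, following the schema defined in the loading script in this commit; boxes are COCO `[x, y, width, height]`. A minimal sketch of reading the fields:
+
+ ```python
+ print(example['image_id'], example['width'], example['height'])
+
+ # 'objects' is a dict of parallel lists: one bbox and one class index per annotation
+ for bbox, category in zip(example['objects']['bbox'], example['objects']['category']):
+     print(bbox, category)
+ ```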
+
+ ### Roboflow Dataset Page
+ [https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7](https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7?ref=roboflow2huggingface)
+
+ ### Citation
+
+ ```
+ @misc{ ppes-kaxsi_dataset,
+ title = { PPEs Dataset },
+ type = { Open Source Dataset },
+ author = { Personal Protective Equipment },
+ howpublished = { \url{ https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi } },
+ url = { https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi },
+ journal = { Roboflow Universe },
+ publisher = { Roboflow },
+ year = { 2022 },
+ month = { jul },
+ note = { visited on 2023-01-18 },
+ }
+ ```
+
+ ### License
+ CC BY 4.0
+
+ ### Dataset Summary
+ This dataset was exported via roboflow.ai on July 7, 2022 at 3:49 PM GMT.
+
+ It includes 11978 images.
+ PPE equipment is annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Auto-orientation of pixel data (with EXIF-orientation stripping)
+
+ No image augmentation techniques were applied.
README.roboflow.txt ADDED
@@ -0,0 +1,15 @@
+
+ PPEs - v7 raw-images_ommittedSuitClasses
+ ==============================
+
+ This dataset was exported via roboflow.ai on July 7, 2022 at 3:49 PM GMT.
+
+ It includes 11978 images.
+ PPE equipment is annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Auto-orientation of pixel data (with EXIF-orientation stripping)
+
+ No image augmentation techniques were applied.
data/test.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16dfeef2e6e14bb188e40d0e856c12c48033e6a61b5b9230eaf53db97d3feb6e
+ size 245068886
data/train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5e3796e56892acf35631767af6138f622eafba875a53996e5618783c575cf39
+ size 903731317
data/valid-mini.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58602e56adafaca03e12f457c7f82ce7f2036099a0fc65a0327e4af2ec254e84
+ size 379962
data/valid.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e86d9e44eb4d13ae81f7b93de75fb8955665f80136911d97262ea61ff3db694c
+ size 1125517514
protective-equipment-detection.py ADDED
@@ -0,0 +1,152 @@
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7"
+ _LICENSE = "CC BY 4.0"
+ _CITATION = """\
+ @misc{ ppes-kaxsi_dataset,
+ title = { PPEs Dataset },
+ type = { Open Source Dataset },
+ author = { Personal Protective Equipment },
+ howpublished = { \\url{ https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi } },
+ url = { https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi },
+ journal = { Roboflow Universe },
+ publisher = { Roboflow },
+ year = { 2022 },
+ month = { jul },
+ note = { visited on 2023-01-18 },
+ }
+ """
+ _CATEGORIES = ['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes']
+ _ANNOTATION_FILENAME = "_annotations.coco.json"
+
+
+ class PROTECTIVEEQUIPMENTDETECTIONConfig(datasets.BuilderConfig):
+     """Builder Config for protective-equipment-detection"""
+
+     def __init__(self, data_urls, **kwargs):
+         """
+         BuilderConfig for protective-equipment-detection.
+
+         Args:
+             data_urls: `dict`, name to url to download the zip file from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(PROTECTIVEEQUIPMENTDETECTIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.data_urls = data_urls
+
+
+ class PROTECTIVEEQUIPMENTDETECTION(datasets.GeneratorBasedBuilder):
+     """protective-equipment-detection object detection dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         PROTECTIVEEQUIPMENTDETECTIONConfig(
+             name="full",
+             description="Full version of protective-equipment-detection dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/data/train.zip",
+                 "validation": "https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/data/valid.zip",
+                 "test": "https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/data/test.zip",
+             },
+         ),
+         PROTECTIVEEQUIPMENTDETECTIONConfig(
+             name="mini",
+             description="Mini version of protective-equipment-detection dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/data/valid-mini.zip",
+                 "validation": "https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/data/valid-mini.zip",
+                 "test": "https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/data/valid-mini.zip",
+             },
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 "objects": datasets.Sequence(
+                     {
+                         "id": datasets.Value("int64"),
+                         "area": datasets.Value("int64"),
+                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                         "category": datasets.ClassLabel(names=_CATEGORIES),
+                     }
+                 ),
+             }
+         )
+         return datasets.DatasetInfo(
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(self.config.data_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "folder_dir": data_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "folder_dir": data_files["validation"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "folder_dir": data_files["test"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, folder_dir):
+         def process_annot(annot, category_id_to_category):
+             return {
+                 "id": annot["id"],
+                 "area": annot["area"],
+                 "bbox": annot["bbox"],
+                 "category": category_id_to_category[annot["category_id"]],
+             }
+
+         image_id_to_image = {}
+         idx = 0
+
+         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
+         with open(annotation_filepath, "r") as f:
+             annotations = json.load(f)
+         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
+         image_id_to_annotations = collections.defaultdict(list)
+         for annot in annotations["annotations"]:
+             image_id_to_annotations[annot["image_id"]].append(annot)
+         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
+
+         for filename in os.listdir(folder_dir):
+             filepath = os.path.join(folder_dir, filename)
+             if filename in filename_to_image:
+                 image = filename_to_image[filename]
+                 objects = [
+                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
+                 ]
+                 with open(filepath, "rb") as f:
+                     image_bytes = f.read()
+                 yield idx, {
+                     "image_id": image["id"],
+                     "image": {"path": filepath, "bytes": image_bytes},
+                     "width": image["width"],
+                     "height": image["height"],
+                     "objects": objects,
+                 }
+                 idx += 1
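
The script above defines two configs: "full" downloads the real train/valid/test archives, while "mini" points all three splits at the small valid-mini.zip (about 380 KB per its LFS pointer), which makes it convenient for smoke-testing a pipeline before fetching the multi-gigabyte full archives. A minimal sketch of that assumed usage:

```python
from datasets import load_dataset

# all three splits resolve to the same tiny archive under the "mini" config
ds_mini = load_dataset("keremberke/protective-equipment-detection", name="mini")
print(ds_mini)
```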
split_name_to_num_samples.json ADDED
@@ -0,0 +1 @@
+ {"valid": 3570, "test": 1935, "train": 6473}
thumbnail.jpg ADDED

Git LFS Details

• SHA256: 375ece3f355820ee24f87d42a0f70c613d410f4815e29171337270ffe30f0b59
• Pointer size: 131 Bytes
• Size of remote file: 284 kB