Kai1014 committed on
Commit
dcba8a9
1 Parent(s): aa105d2

Upload folder using huggingface_hub

Files changed (7)
  1. .gitattributes +0 -1
  2. README.md +90 -0
  3. data_mask.json +4 -0
  4. mask.py +108 -0
  5. test.zip +3 -0
  6. train.zip +3 -0
  7. validation.zip +3 -0
.gitattributes CHANGED
@@ -26,7 +26,6 @@
  *.safetensors filter=lfs diff=lfs merge=lfs -text
  saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
  *.tgz filter=lfs diff=lfs merge=lfs -text
  *.wasm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,90 @@
+ ---
+ language:
+ - en
+ license:
+ - odbl
+ pretty_name: Face Mask Detection
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - image-classification
+ ---
+
+ ## Dataset Description
+ - **Homepage:** [Face Mask Detection Dataset](https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection)
+ - **Repository:** N/A
+ - **Paper:** N/A
+ - **Leaderboard:** N/A
+ - **Point of Contact:** N/A
+
+ ## Dataset Summary
+ A dataset from [Kaggle](https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection). Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data
+
+ ### Introduction
+
+ -
+
+ ### Problem Statement
+
+ -
+
+ ### About Files
+
+ - Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders, namely 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting' and 'using_laptop', which contain the images of the respective human activities.
+ - Test - contains 5400 images of human activities. For these images you are required to make predictions using the respective class names: 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting' and 'using_laptop'.
+ - Testing_set.csv - this file gives the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you submit list each image's filename in the same order as given in this file.
+ - sample_submission - a CSV file that contains a sample submission for the data sprint.
+
+ ### Data Fields
+ The data instances have the following fields:
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `labels`: an `int` classification label. All `test` data is labeled 0.
+
+ ### Class Label Mappings
+ ```
+ {
+   'mask_weared_incorrect': 0,
+   'with_mask': 1,
+   'without_mask': 2
+ }
+ ```
+
+ ### Data Splits
+ |               | train | test | validation |
+ |---------------|-------|------|-----------:|
+ | # of examples | 1500  | 180  |        180 |
+
+ ### Data Size
+
+ - download: 46 MiB
+ - generated: 46.8 MiB
+ - total: 92.8 MiB
+
+ ```pycon
+ >>> from datasets import load_dataset
+ >>> ds = load_dataset("poolrf2001/mask")
+ >>> ds
+ DatasetDict({
+     test: Dataset({
+         features: ['image', 'labels'],
+         num_rows: 180
+     })
+     train: Dataset({
+         features: ['image', 'labels'],
+         num_rows: 1500
+     })
+     validation: Dataset({
+         features: ['image', 'labels'],
+         num_rows: 180
+     })
+ })
+ >>> ds["train"].features
+ {'image': Image(decode=True, id=None),
+  'labels': ClassLabel(num_classes=3, names=['mask_weared_incorrect', 'with_mask', 'without_mask'], id=None)}
+
+ >>> ds["train"][0]
+ {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=180x180>,
+  'labels': 1}
+ ```
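
The class-label mapping documented in the card can also be checked programmatically through the split's `ClassLabel` feature. The snippet below is a minimal sketch, assuming the dataset loads exactly as in the card's `load_dataset("poolrf2001/mask")` example.

```pycon
>>> from datasets import load_dataset
>>> ds = load_dataset("poolrf2001/mask")
>>> labels = ds["train"].features["labels"]
>>> labels.int2str(1)                 # integer id -> class name
'with_mask'
>>> labels.str2int("without_mask")    # class name -> integer id
2
>>> ds["train"][0]["labels"]          # index the row first, so only that single example is decoded
1
```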
data_mask.json ADDED
@@ -0,0 +1,4 @@
+ {"poolrf2001--mask": {"description": "", "citation": "", "homepage": "https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection", "license": "odbl-1.0", "features": {"image": {"decode": true, "id": null, "_type": "Image"},
+ "labels": {"num_classes": 3, "names": ["mask_weared_incorrect", "with_mask", "without_mask"], "id": null, "_type": "ClassLabel"}},
+ "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": null, "config_name": null, "version": null, "splits": {"train": {"name": "train", "num_bytes": 49140913, "num_examples": 1500, "dataset_name": "Mask"},
+ "test": {"name": "test", "num_bytes": 4772564, "num_examples": 180, "dataset_name": "Mask"}, "validation": {"name": "validation", "num_bytes": 4848704, "num_examples": 180, "dataset_name": "Mask"}}, "download_checksums": null, "download_size": 48234496, "post_processing_size": null, "dataset_size": 49073356.8, "size_in_bytes": 97307852.8}}
mask.py ADDED
@@ -0,0 +1,108 @@
+ # -*- coding: utf-8 -*-
+ """Untitled9.ipynb
+
+ Automatically generated by Colaboratory.
+
+ Original file is located at
+     https://colab.research.google.com/drive/1qRAN4BBFZkzQKFec3qaXZ_obxuWQa6c6
+ """
+
+ # coding=utf-8
+ # Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Face mask image dataset with three classes: mask worn correctly, mask worn incorrectly, and no mask."""
+
+ import os
+
+ import datasets
+ from datasets.tasks import ImageClassification
+
+
+ _HOMEPAGE = "https://huggingface.co/datasets/poolrf2001/mask"
+
+ _CITATION = """\
+ @ONLINE {masksdata,
+     author="Pool_rf",
+     title="Mask face dataset",
+     month="January",
+     year="2023",
+     url="https://huggingface.co/datasets/poolrf2001/mask"
+ }
+ """
+
+ _DESCRIPTION = """\
+ MaskFace is a dataset of images of people with and without face masks. It consists of 3 classes: one class where the person is wearing a mask,
+ another class where the person is not wearing a mask, and one class where the person is wearing a mask incorrectly.
+ """
+
+ _URLS = {
+     "train": "https://huggingface.co/datasets/poolrf2001/mask/resolve/main/train.zip",
+     "validation": "https://huggingface.co/datasets/poolrf2001/mask/resolve/main/validation.zip",
+     "test": "https://huggingface.co/datasets/poolrf2001/mask/resolve/main/test.zip",
+ }
+
+ _NAMES = ["mask_weared_incorrect", "with_mask", "without_mask"]
+
+
+ class mask(datasets.GeneratorBasedBuilder):
+     """MaskFace images dataset."""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+
+                     "image": datasets.Image(),
+                     "labels": datasets.features.ClassLabel(names=_NAMES),
+                 }
+             ),
+             supervised_keys=("image", "labels"),
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             task_templates=[ImageClassification(image_column="image", label_column="labels")],
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "files": dl_manager.iter_files([data_files["train"]]),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "files": dl_manager.iter_files([data_files["validation"]]),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "files": dl_manager.iter_files([data_files["test"]]),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, files):
+         for i, path in enumerate(files):
+             file_name = os.path.basename(path)
+             if file_name.endswith(".png"):
+                 yield i, {
+
+                     "image": path,
+                     "labels": os.path.basename(os.path.dirname(path)).lower(),
+                 }
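
In `_generate_examples`, each image's label is simply the name of its parent folder, lowercased, which the `ClassLabel` feature then encodes to an integer. A minimal sketch of that rule, using a hypothetical path (the exact directory layout inside the zips is an assumption):

```python
import os

# Hypothetical extracted path; the real layout inside train.zip may differ.
path = "train/with_mask/0001.png"

file_name = os.path.basename(path)                        # "0001.png"
label = os.path.basename(os.path.dirname(path)).lower()   # "with_mask"

if file_name.endswith(".png"):
    print({"image": path, "labels": label})
```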
test.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:768761c5ebcd81c77499aa9e34f235b478e19ef8f2c1ffdbc1cf58ac36ff9b7b
+ size 4693735
train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:320399fcd07832fcbd8fda359ece13d65a619c05f9c5cf4a8fbd7d266b2b6a85
+ size 38806014
validation.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea6ff3f5bd9a5128c76ea599fc5b7059b35f08bae28dd89c42c2c2bfc55e7597
+ size 4758962
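
Each archive above is stored with Git LFS, so the committed file is only a pointer holding the object's sha256 (`oid`) and byte size. A downloaded copy can be checked against those values; the sketch below uses validation.zip's pointer, and the local filename is an assumption.

```python
import hashlib

# Expected values from the Git LFS pointer for validation.zip above.
expected_oid = "ea6ff3f5bd9a5128c76ea599fc5b7059b35f08bae28dd89c42c2c2bfc55e7597"
expected_size = 4758962

sha256 = hashlib.sha256()
size = 0
with open("validation.zip", "rb") as f:  # assumed local path of the downloaded archive
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
        size += len(chunk)

print("oid matches:", sha256.hexdigest() == expected_oid)
print("size matches:", size == expected_size)
```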