---
language:
  - en
license:
  - odbl
pretty_name: Face Mask Detection
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - image-classification
---

# Dataset Description

## Dataset Summary

A dataset from Kaggle. Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data

## About Files

- Train - contains all the images to be used for training your model. Inside this folder you will find 15 subfolders - 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop' - each containing images of the respective human activity.
- Test - contains 5400 images of human activities. For these images you are required to make predictions using the same 15 class names listed above.
- Testing_set.csv - defines the order in which the predictions for each image are to be submitted on the platform. Make sure the predictions you submit keep each image's filename in the same order as given in this file (see the sketch after this list).
- sample_submission - a CSV file that contains a sample submission for the data sprint.
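The exact column layout of Testing_set.csv is not reproduced in this card, so the following is only a minimal sketch of how a correctly ordered submission could be assembled, assuming the file has a `filename` column and that `predict_label` stands in for your own model's inference code:

```python
import pandas as pd

# Hypothetical inference function: replace with your own model's prediction code.
def predict_label(filename: str) -> str:
    ...

order = pd.read_csv("Testing_set.csv")                   # defines the required row order
labels = [predict_label(f) for f in order["filename"]]   # assumed "filename" column

submission = pd.DataFrame({"filename": order["filename"], "label": labels})
submission.to_csv("submission.csv", index=False)
```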

## Data Fields

The data instances have the following fields:

- `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]` (see the example after this list).
- `labels`: an `int` classification label. All test data is labeled 0.
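A minimal sketch of that access pattern, using the repository id shown later in this card:

```python
from datasets import load_dataset

ds = load_dataset("poolrf2001/mask")

# Preferred: index the sample first, so only this one image file is decoded.
img = ds["train"][0]["image"]

# Slower: selecting the column first decodes every image in the split.
# img = ds["train"]["image"][0]
```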

Class Label Mappings:

```python
{
    'mask_weared_incorrect': 0,
    'with_mask': 1,
    'without_mask': 2
}
```
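These ids come from the dataset's `ClassLabel` feature (shown in the features printout below), so they can be converted to and from the string names programmatically; a short sketch:

```python
from datasets import load_dataset

ds = load_dataset("poolrf2001/mask")
labels = ds["train"].features["labels"]

labels.names                    # ['mask_weared_incorrect', 'with_mask', 'without_mask']
labels.int2str(1)               # 'with_mask'
labels.str2int("without_mask")  # 2
```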

## Data Splits

|               | train | test | validation |
|---------------|-------|------|------------|
| # of examples | 1500  | 180  | 180        |

## Data Size

- download: 46 MiB
- generated: 46.8 MiB
- total: 92.8 MiB
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("poolrf2001/mask")
>>> ds
DatasetDict({
    test: Dataset({
        features: ['image', 'labels'],
        num_rows: 180
    })
    train: Dataset({
        features: ['image', 'labels'],
        num_rows: 1500
    })
    validation: Dataset({
        features: ['image', 'labels'],
        num_rows: 180
    })
})
>>> ds["train"].features
{'image': Image(decode=True, id=None),
 'labels': ClassLabel(num_classes=3, names=['mask_weared_incorrect', 'with_mask', 'without_mask'], id=None)}

>>> ds["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=180x180>,
 'labels': 1}
```
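Since this is an image-classification dataset, a typical next step is to attach an on-the-fly preprocessing transform and feed a split to a training loop. The following is only a rough sketch assuming PyTorch and torchvision are installed; the model itself is left out:

```python
import torch
from datasets import load_dataset
from torchvision import transforms

ds = load_dataset("poolrf2001/mask")

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def apply_transforms(batch):
    # Decode the PIL images and turn them into fixed-size tensors; labels pass through.
    batch["pixel_values"] = [preprocess(img.convert("RGB")) for img in batch["image"]]
    del batch["image"]
    return batch

train_ds = ds["train"].with_transform(apply_transforms)

def collate(items):
    return {
        "pixel_values": torch.stack([item["pixel_values"] for item in items]),
        "labels": torch.tensor([item["labels"] for item in items]),
    }

loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True, collate_fn=collate)
```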