keremberke committed
Commit 0ee061a
1 Parent(s): 68795f3

dataset uploaded by roboflow2huggingface package
README.dataset.txt ADDED
# Nike Adidas and Converse Shoes Classification > rawImages_70-20-10split
https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification

Provided by a Roboflow user
License: Public Domain

## Nike, Adidas and Converse Shoes Dataset for Classification

This dataset was obtained from [Kaggle](https://kaggle.com): https://www.kaggle.com/datasets/die9origephit/nike-adidas-and-converse-imaged/

### Dataset Collection Methodology:
"The dataset was obtained by downloading images from `Google Images`. Images in `.webp` format were converted to `.jpg`. The resulting images were randomly shuffled and resized so that all images had a resolution of `240x240` pixels. They were then split into `train` and `test` sets and saved."

### Versions:
* *v1*: `original_raw-images`: the original images without [Preprocessing](https://docs.roboflow.com/image-transformations/image-preprocessing) or [Augmentation](https://docs.roboflow.com/image-transformations/image-augmentation) applied, other than [Auto-Orient to remove EXIF data](https://blog.roboflow.com/exif-auto-orientation/). These images use the original train/test split from Kaggle: `237 images per class in the train set` and `38 images per class in the test set`.
* *v2*: `original_trainTestSplit-augmented3x`: the original train/test split, augmented with 3x image generation. This version was not trained with [Roboflow Train](https://docs.roboflow.com/train).
* *v3*: `original_trainTestSplit-augmented5x`: the original train/test split, augmented with 5x image generation. This version was not trained with Roboflow Train.
* *v4*: `rawImages_70-20-10split`: the original images without Preprocessing or Augmentation applied, other than Auto-Orient to remove EXIF data. Dataset splits were modified to a `70% train`, `20% valid`, `10% test` [train/valid/test split](https://blog.roboflow.com/train-test-split/).
  * NOTE: the 70%/20%/10% split yields `576 images in the train set`, `166 images in the valid set`, and `83 images in the test set`.
* *v5*: `70-20-10split-augmented3x`: modified to a `70% train`, `20% valid`, `10% test` train/valid/test split, augmented with 3x image generation. This version was trained with Roboflow Train.
* *v6*: `70-20-10split-augmented5x`: modified to a `70% train`, `20% valid`, `10% test` train/valid/test split, augmented with 5x image generation. This version was trained with Roboflow Train.
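As a quick sanity check (a minimal sketch; the counts and total come from the notes above), the v4 70/20/10 split over the 825 images can be verified to cover every image and sit close to its nominal shares:

```python
# Sanity-check the v4 split counts quoted above (576/166/83 of 825 images).
total = 825
splits = {"train": 576, "valid": 166, "test": 83}

# The three splits must cover every image exactly once...
assert sum(splits.values()) == total

# ...and each split should sit close to its nominal 70/20/10 share.
for name, target in [("train", 0.70), ("valid", 0.20), ("test", 0.10)]:
    share = splits[name] / total
    print(f"{name}: {splits[name]} images ({share:.1%}, target {target:.0%})")
```

The shares land at roughly 69.8%/20.1%/10.1% rather than exactly 70/20/10, since 825 images cannot be divided into those proportions without rounding.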
README.md ADDED
---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Sports
- Retail
- Benchmark
---

<div align="center">
  <img width="640" alt="keremberke/shoe-classification" src="https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/thumbnail.jpg">
</div>

### Dataset Labels

```
['converse', 'adidas', 'nike']
```

### Number of Images

```json
{"train": 576, "test": 83, "valid": 166}
```

### How to Use

- Install [datasets](https://pypi.org/project/datasets/):

```bash
pip install datasets
```

- Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/shoe-classification", name="full")
example = ds['train'][0]
```

### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4](https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4?ref=roboflow2huggingface)

### Citation

```

```

### License
Public Domain

### Dataset Summary
This dataset was exported via roboflow.com on October 28, 2022 at 2:38 AM GMT

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

It includes 825 images.
Shoes are annotated in folder format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)

No image augmentation techniques were applied.
README.roboflow.txt ADDED
Nike Adidas and Converse Shoes Classification - v4 rawImages_70-20-10split
==============================

This dataset was exported via roboflow.com on October 28, 2022 at 2:38 AM GMT

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

It includes 825 images.
Shoes are annotated in folder format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)

No image augmentation techniques were applied.
data/test.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d065e7520eb26d1320420e77468a6543e67ce2ff5c445ead4ef0f8ad76cd1084
size 769200
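The three lines above are a Git LFS pointer file, not the zip archive itself: `version` names the pointer spec, `oid` carries the hash method and digest of the real file, and `size` is its byte count. A minimal sketch of parsing such a pointer (the pointer text is copied from this commit):

```python
# Parse a Git LFS pointer file: each line is "key value", and the oid
# carries a hash-method prefix ("sha256:<hex digest>").
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:d065e7520eb26d1320420e77468a6543e67ce2ff5c445ead4ef0f8ad76cd1084
size 769200
"""

pointer = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
algo, digest = pointer["oid"].split(":", 1)

assert pointer["version"] == "https://git-lfs.github.com/spec/v1"
assert algo == "sha256" and len(digest) == 64  # hex-encoded SHA-256
print(f"data/test.zip is {int(pointer['size'])} bytes on LFS")
```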
data/train.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:20c43e3ca05372202b35a44b61d418b5dc812f23cb893d36f1ecdb5b91e64add
size 5127451
data/valid-mini.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:562174fb8186ae8ce519dcc309a6cd5cabe43fd82dcf0a0544f2cc28123f6e7f
size 18516
data/valid.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:90ee4574c99c8e08c5fd83cef2c94f323211afa8d1c2e07eb4f9ca665877ac92
size 1526731
shoe-classification.py ADDED
import os

import datasets
from datasets.tasks import ImageClassification


_HOMEPAGE = "https://universe.roboflow.com/popular-benchmarks/nike-adidas-and-converse-shoes-classification/dataset/4"
_LICENSE = "Public Domain"
_CITATION = """\

"""
_CATEGORIES = ['converse', 'adidas', 'nike']


class SHOECLASSIFICATIONConfig(datasets.BuilderConfig):
    """Builder Config for shoe-classification"""

    def __init__(self, data_urls, **kwargs):
        """
        BuilderConfig for shoe-classification.

        Args:
            data_urls: `dict`, name to url to download the zip file from.
            **kwargs: keyword arguments forwarded to super.
        """
        super(SHOECLASSIFICATIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
        self.data_urls = data_urls


class SHOECLASSIFICATION(datasets.GeneratorBasedBuilder):
    """shoe-classification image classification dataset"""

    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        SHOECLASSIFICATIONConfig(
            name="full",
            description="Full version of shoe-classification dataset.",
            data_urls={
                "train": "https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/data/train.zip",
                "validation": "https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/data/valid.zip",
                "test": "https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/data/test.zip",
            },
        ),
        SHOECLASSIFICATIONConfig(
            name="mini",
            description="Mini version of shoe-classification dataset.",
            data_urls={
                "train": "https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/data/valid-mini.zip",
                "validation": "https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/data/valid-mini.zip",
                "test": "https://huggingface.co/datasets/keremberke/shoe-classification/resolve/main/data/valid-mini.zip",
            },
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image_file_path": datasets.Value("string"),
                    "image": datasets.Image(),
                    "labels": datasets.features.ClassLabel(names=_CATEGORIES),
                }
            ),
            supervised_keys=("image", "labels"),
            homepage=_HOMEPAGE,
            citation=_CITATION,
            license=_LICENSE,
            task_templates=[ImageClassification(image_column="image", label_column="labels")],
        )

    def _split_generators(self, dl_manager):
        data_files = dl_manager.download_and_extract(self.config.data_urls)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "files": dl_manager.iter_files([data_files["train"]]),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "files": dl_manager.iter_files([data_files["validation"]]),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "files": dl_manager.iter_files([data_files["test"]]),
                },
            ),
        ]

    def _generate_examples(self, files):
        for i, path in enumerate(files):
            file_name = os.path.basename(path)
            if file_name.endswith((".jpg", ".png", ".jpeg", ".bmp", ".tif", ".tiff")):
                yield i, {
                    "image_file_path": path,
                    "image": path,
                    "labels": os.path.basename(os.path.dirname(path)),
                }
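`_generate_examples` in the script above derives each label from the image's parent directory, which is how folder-format classification datasets are laid out: `<split>/<class name>/<image file>`. A minimal standalone sketch of that convention (the file names below are hypothetical examples, not files from the archives):

```python
import os

# Folder-format layout: the class label is simply the name of the
# directory that contains the image file.
paths = [
    "train/nike/shoe_001.jpg",
    "train/adidas/shoe_042.jpg",
    "valid/converse/shoe_777.jpg",
]

for path in paths:
    label = os.path.basename(os.path.dirname(path))
    print(f"{path} -> {label}")
```

This is why the script needs no annotation files at all: unzipping the split archives and walking the directory tree fully recovers the labels.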
split_name_to_num_samples.json ADDED
{"train": 576, "test": 83, "valid": 166}
thumbnail.jpg ADDED
Git LFS Details
  • SHA256: c8a548da4dd03f173a98a9b639a8429d93d39107c8e84f6f2ea7ab73879a1b06
  • Pointer size: 130 Bytes
  • Size of remote file: 96.1 kB