system (HF staff) committed on
Commit 198356e
0 Parent(s):

Update files from the datasets library (from 1.2.1)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.1

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +146 -0
  3. dataset_infos.json +1 -0
  4. dummy/mnist/1.0.0/dummy_data.zip +3 -0
  5. mnist.py +116 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,146 @@
+ ---
+ annotations_creators:
+ - experts
+ language_creators:
+ - found
+ languages: []
+ licenses:
+ - MIT
+ multilinguality: []
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|other-nist
+ task_categories:
+ - other
+ task_ids:
+ - other-other-image-classification
+ ---
+
+ # Dataset Card for MNIST
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** http://yann.lecun.com/exdb/mnist/
+ - **Repository:**
+ - **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits, extracted from two NIST databases. There are 60,000 images in the training set and 10,000 images in the test set. There is one class per digit, for a total of 10 classes, each with 7,000 images (6,000 in the training set and 1,000 in the test set).
+ Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed across the training and test sets).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point comprises an image and its label.
+
+ ### Data Fields
+
+ - image: a 2D array of integers representing the 28x28 image.
+ - label: an integer between 0 and 9 representing the digit.
+
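+ For a quick look at these fields, here is a minimal sketch using the `datasets` library (it assumes the library is installed; field access follows the schema above):
+
+ ```python
+ from datasets import load_dataset
+
+ # Download and parse the dataset through the library
+ mnist = load_dataset("mnist")
+
+ # Each example exposes the two fields described above
+ example = mnist["train"][0]
+ print(example["label"])       # an integer between 0 and 9
+ print(len(example["image"]))  # 28 rows of 28 pixel values each
+ ```
+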
+ ### Data Splits
+
+ The data is split into a training set and a test set. All the images in the test set were drawn by different individuals than those in the training set. The training set contains 60,000 images and the test set 10,000 images.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. The images of the original dataset (NIST) came in two groups: one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images drawn by the Census Bureau employees, and the test set was built by grouping the images drawn by the high school students.
+ The goal in building MNIST was to have a training set and a test set following the same distribution, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images from each group. The curators took care to ensure that all the images in the test set were drawn by different individuals than those in the training set.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The original images from NIST were size-normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels are not simply black or white, but carry a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels and translating the image so as to position this point at the center of the 28x28 field.
+
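+ As an illustration of that centering step, here is a minimal numpy sketch (an assumption-laden reconstruction, not the curators' actual code: it uses an integer-pixel shift, and `np.roll` is safe here only because the digit sits well inside the field):
+
+ ```python
+ import numpy as np
+
+ def center_in_28x28(digit_20x20):
+     """Place a 20x20 grey-level digit on a 28x28 canvas so that its
+     center of mass lands at the center of the field."""
+     canvas = np.zeros((28, 28), dtype=np.float32)
+     canvas[4:24, 4:24] = digit_20x20  # start from the middle of the canvas
+     total = canvas.sum()
+     if total == 0:  # blank image: nothing to center
+         return canvas.astype(np.uint8)
+     ys, xs = np.indices(canvas.shape)
+     cy = (ys * canvas).sum() / total  # intensity-weighted row center
+     cx = (xs * canvas).sum() / total  # intensity-weighted column center
+     dy = int(round(13.5 - cy))        # 13.5 is the center of a 28-pixel axis
+     dx = int(round(13.5 - cx))
+     return np.roll(canvas, (dy, dx), axis=(0, 1)).astype(np.uint8)
+ ```
+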
+ #### Who are the source image producers?
+
+ Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The images were not annotated after their creation: each image creator labeled their own images with the corresponding digit after drawing them.
+
+ #### Who are the annotators?
+
+ Same as the source data creators.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Chris Burges, Corinna Cortes and Yann LeCun
+
+ ### Licensing Information
+
+ MIT License
+
+ ### Citation Information
+
+ ```
+ @article{lecun2010mnist,
+   title={MNIST handwritten digit database},
+   author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
+   journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
+   volume={2},
+   year={2010}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"mnist": {"description": "The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digits), with 7,000\nimages per class. There are 60,000 training images and 10,000 test images.\n", "citation": "@article{lecun2010mnist,\n title={MNIST handwritten digit database},\n author={LeCun, Yann and Cortes, Corinna and Burges, CJ},\n journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},\n volume={2},\n year={2010}\n}\n", "homepage": "http://yann.lecun.com/exdb/mnist/", "license": "", "features": {"image": {"shape": [28, 28], "dtype": "uint8", "id": null, "_type": "Array2D"}, "label": {"num_classes": 10, "names": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "image", "output": "label"}, "builder_name": "mnist", "config_name": "mnist", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 54480048, "num_examples": 60000, "dataset_name": "mnist"}, "test": {"name": "test", "num_bytes": 9080008, "num_examples": 10000, "dataset_name": "mnist"}}, "download_checksums": {"https://storage.googleapis.com/cvdf-datasets/mnist/train-images-idx3-ubyte.gz": {"num_bytes": 9912422, "checksum": "440fcabf73cc546fa21475e81ea370265605f56be210a4024d2ca8f203523609"}, "https://storage.googleapis.com/cvdf-datasets/mnist/train-labels-idx1-ubyte.gz": {"num_bytes": 28881, "checksum": "3552534a0a558bbed6aed32b30c495cca23d567ec52cac8be1a0730e8010255c"}, "https://storage.googleapis.com/cvdf-datasets/mnist/t10k-images-idx3-ubyte.gz": {"num_bytes": 1648877, "checksum": "8d422c7b0a1c1c79245a5bcf07fe86e33eeafee792b84584aec276f5a2dbc4e6"}, "https://storage.googleapis.com/cvdf-datasets/mnist/t10k-labels-idx1-ubyte.gz": {"num_bytes": 4542, "checksum": "f7ae60f92e00ec6debd23a6088c31dbd2371eca3ffa0defaefb259924204aec6"}}, "download_size": 11594722, "post_processing_size": null, "dataset_size": 63560056, "size_in_bytes": 75154778}}
dummy/mnist/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1578d4564fb76269e42a289ca55f9edef694a898b7be3b288c396ee54c36ea01
+ size 2672
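
The dummy_data.zip entry above is a Git LFS pointer: the repository stores only the object id (a sha256 digest of the blob) and the byte size, while the blob itself lives in LFS storage. A hedged sketch of how such a pointer can be checked against a downloaded blob (the local path is hypothetical):

```python
import hashlib

def verify_lfs_pointer(blob_path: str, oid: str, size: int) -> bool:
    """Check a local blob against the oid/size recorded in an LFS pointer."""
    with open(blob_path, "rb") as f:
        data = f.read()
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid

# Values copied from the pointer file above; the path is an assumption.
verify_lfs_pointer(
    "dummy_data.zip",
    "1578d4564fb76269e42a289ca55f9edef694a898b7be3b288c396ee54c36ea01",
    2672,
)
```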
mnist.py ADDED
@@ -0,0 +1,116 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """MNIST Data Set"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import struct
+
+ import numpy as np
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{lecun2010mnist,
+   title={MNIST handwritten digit database},
+   author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
+   journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
+   volume={2},
+   year={2010}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000
+ images per class. There are 60,000 training images and 10,000 test images.
+ """
+
+ _URL = "https://storage.googleapis.com/cvdf-datasets/mnist/"
+ _URLS = {
+     "train_images": "train-images-idx3-ubyte.gz",
+     "train_labels": "train-labels-idx1-ubyte.gz",
+     "test_images": "t10k-images-idx3-ubyte.gz",
+     "test_labels": "t10k-labels-idx1-ubyte.gz",
+ }
+
+
+ class MNIST(datasets.GeneratorBasedBuilder):
+     """MNIST Data Set"""
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="mnist",
+             version=datasets.Version("1.0.0"),
+             description=_DESCRIPTION,
+         )
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Array2D(shape=(28, 28), dtype="uint8"),
+                     "label": datasets.features.ClassLabel(names=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]),
+                 }
+             ),
+             supervised_keys=("image", "label"),
+             homepage="http://yann.lecun.com/exdb/mnist/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         urls_to_download = {key: _URL + fname for key, fname in _URLS.items()}
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": [downloaded_files["train_images"], downloaded_files["train_labels"]],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": [downloaded_files["test_images"], downloaded_files["test_labels"]],
+                     "split": "test",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """This function returns the examples in the raw form."""
+         # Images: IDX file with a 16-byte header
+         with open(filepath[0], "rb") as f:
+             _ = f.read(4)  # 4-byte magic number
+             size = struct.unpack(">I", f.read(4))[0]  # big-endian number of images
+             _ = f.read(8)  # number of rows and number of columns (4 bytes each)
+             images = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, 28, 28)
+
+         # Labels: IDX file with an 8-byte header (magic number and item count)
+         with open(filepath[1], "rb") as f:
+             _ = f.read(8)
+             labels = np.frombuffer(f.read(), dtype=np.uint8)
+
+         for idx in range(size):
+             yield idx, {"image": images[idx], "label": str(labels[idx])}
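
As a sanity check on the loader above, a minimal usage sketch (assuming the `datasets` library is installed; the split sizes should match dataset_infos.json):

```python
from datasets import load_dataset

# load_dataset runs the builder defined in mnist.py
mnist = load_dataset("mnist")

# 60,000 train examples and 10,000 test examples are expected
print(mnist["train"].num_rows, mnist["test"].num_rows)

# Labels are ClassLabel indices; int2str recovers the digit name
label_feature = mnist["train"].features["label"]
print(label_feature.int2str(mnist["train"][0]["label"]))
```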