---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': A
          '1': B
          '2': C
          '3': D
          '4': E
          '5': F
          '6': G
          '7': H
          '8': I
          '9': J
  splits:
  - name: train
    num_bytes: 6842235.510231657
    num_examples: 14979
  - name: test
    num_bytes: 1715013.5296924065
    num_examples: 3745
  download_size: 8865158
  dataset_size: 8557249.039924063
---
# Dataset Card for "notMNIST"
## Overview
The notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.
## Dataset Information
- Number of Classes: 10 (A to J)
- Number of Samples: 18,724 (14,979 train / 3,745 test)
- Image Size: 28 × 28 pixels
- Color Channels: 1 (grayscale)
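The class indices in the metadata above map directly to letters, and the split sizes can be cross-checked; a minimal sketch in plain Python, using only the counts listed on this card:

```python
# Map notMNIST class indices 0-9 to their letter names A-J
label_names = [chr(ord("A") + i) for i in range(10)]
print(label_names)  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']

# Split sizes from this card's metadata
num_train, num_test = 14979, 3745
total = num_train + num_test
print(total)  # 18724 samples overall
```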
## Dataset Structure
The dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:
```
notMNIST/
|-- train/
|   |-- A/
|   |-- B/
|   |-- ...
|   |-- J/
|
|-- test/
|   |-- A/
|   |-- B/
|   |-- ...
|   |-- J/
```
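For illustration, the per-class directories implied by this layout can be enumerated with `pathlib`; a small sketch, where the root path `notMNIST/` is a placeholder for wherever the dataset is extracted:

```python
from pathlib import Path

# Placeholder root; point this at the extracted notMNIST directory
root = Path("notMNIST")
letters = [chr(ord("A") + i) for i in range(10)]  # class names A-J

# One subdirectory per class under each split, as in the tree above
class_dirs = [root / split / letter
              for split in ("train", "test")
              for letter in letters]
print(len(class_dirs))  # 20 directories (2 splits x 10 classes)
```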
## Acknowledgements
- http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html
- https://www.kaggle.com/datasets/lubaroli/notmnist
## Inspiration
This is a pretty good dataset to train classifiers! According to Yaroslav:

> Judging by the examples, one would expect this to be a harder task than MNIST. This seems to be the case: logistic regression on top of a stacked auto-encoder with fine-tuning gets about 89% accuracy, whereas the same approach gets about 98% on MNIST. The dataset consists of a small hand-cleaned part, about 19k instances, and a large uncleaned part, about 500k instances. The two parts have approximately 0.5% and 6.5% label error rates, respectively. I got this by looking through glyphs and counting how often my guess of the letter didn't match its Unicode value in the font file.