---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: image
      dtype: image
    - name: label
      dtype:
        class_label:
          names:
            '0': A
            '1': B
            '2': C
            '3': D
            '4': E
            '5': F
            '6': G
            '7': H
            '8': I
            '9': J
  splits:
    - name: train
      num_bytes: 6842235.510231657
      num_examples: 14979
    - name: test
      num_bytes: 1715013.5296924065
      num_examples: 3745
  download_size: 8865158
  dataset_size: 8557249.039924063
---
# Dataset Card for "notMNIST"

## Overview

The notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.

## Dataset Information

- Number of Classes: 10 (A to J)
- Number of Samples: 18,724 (14,979 train / 3,745 test)
- Image Size: 28 x 28 pixels
- Color Channels: Grayscale
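
The splits can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, assuming the repository id is `anubhavmaity/notMNIST` (adjust if the dataset lives under a different id):

```python
from datasets import load_dataset

# Repository id assumed from the card owner; change it if the dataset
# is hosted under a different name.
ds = load_dataset("anubhavmaity/notMNIST")

print(ds)  # DatasetDict with "train" (14,979 rows) and "test" (3,745 rows)

sample = ds["train"][0]
print(sample["image"].size)  # PIL image, 28 x 28 pixels
print(ds["train"].features["label"].int2str(sample["label"]))  # e.g. "A"
```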

## Dataset Structure

The dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:

```
notMNIST/
|-- train/
|   |-- A/
|   |-- B/
|   |-- ...
|   |-- J/
|-- test/
|   |-- A/
|   |-- B/
|   |-- ...
|   |-- J/
```
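
If you are working from this raw class-per-folder layout rather than the parquet shards hosted here (`data/train-*`, `data/test-*`), the generic `imagefolder` builder in `datasets` can infer the labels and splits from the folder names. A hedged sketch, with the local path `notMNIST` as an assumption:

```python
from datasets import load_dataset

# The "imagefolder" builder infers the A-J labels from the subdirectory names
# and the train/test splits from the top-level folders. The path is illustrative.
ds = load_dataset("imagefolder", data_dir="notMNIST")

print(ds["train"].features["label"].names)  # ['A', 'B', ..., 'J']
```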

## Acknowledgements

- http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html
- https://www.kaggle.com/datasets/lubaroli/notmnist

## Inspiration

This is a pretty good dataset to train classifiers! According to Yaroslav:

> Judging by the examples, one would expect this to be a harder task than MNIST. This seems to be the case -- logistic regression on top of a stacked auto-encoder with fine-tuning gets about 89% accuracy, whereas the same approach gives about 98% on MNIST. The dataset consists of a small hand-cleaned part, about 19k instances, and a large uncleaned part, 500k instances. The two parts have approximately 0.5% and 6.5% label error rates. I got this by looking through glyphs and counting how often my guess of the letter didn't match its Unicode value in the font file.
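
As a rough starting point (not Yaroslav's stacked auto-encoder setup), a plain logistic regression on raw pixels can be trained in a few lines with scikit-learn; expect accuracy below the 89% quoted above. The repository id and preprocessing are assumptions:

```python
import numpy as np
from datasets import load_dataset
from sklearn.linear_model import LogisticRegression

ds = load_dataset("anubhavmaity/notMNIST")  # repo id assumed, as above

def to_arrays(split):
    # Flatten each 28x28 grayscale image into a 784-dimensional vector in [0, 1].
    X = np.stack([
        np.asarray(img.convert("L"), dtype=np.float32).reshape(-1) / 255.0
        for img in split["image"]
    ])
    y = np.array(split["label"])
    return X, y

X_train, y_train = to_arrays(ds["train"])
X_test, y_test = to_arrays(ds["test"])

clf = LogisticRegression(max_iter=500)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```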