---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': A
          '1': B
          '2': C
          '3': D
          '4': E
          '5': F
          '6': G
          '7': H
          '8': I
          '9': J
  splits:
  - name: train
    num_bytes: 6842235.510231657
    num_examples: 14979
  - name: test
    num_bytes: 1715013.5296924065
    num_examples: 3745
  download_size: 8865158
  dataset_size: 8557249.039924063
task_categories:
- image-classification
- image-to-image
- text-to-image
- image-to-text
tags:
- mnist
- notmnist
pretty_name: notMNIST
size_categories:
- 10K<n<100K
---

> Judging by the examples, one would expect this to be a harder task than MNIST. This seems to be the case -- logistic regression on top of a stacked auto-encoder with fine-tuning gets about 89% accuracy, whereas the same approach gets about 98% on MNIST. The dataset consists of a small hand-cleaned part of about 19k instances and a large uncleaned part of about 500k instances. The two parts have approximately 0.5% and 6.5% label error rates, respectively. I got these estimates by looking through glyphs and counting how often my guess of the letter didn't match its Unicode value in the font file.
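The `class_label` feature above maps integer ids 0-9 to the letters A-J. A minimal sketch of that mapping in plain Python (when loading through the `datasets` library, the `ClassLabel` feature performs this conversion for you via `int2str`; the repo id below is a placeholder, not the real one):

```python
# Letters A-J, matching the '0': A ... '9': J mapping in the card metadata.
NOTMNIST_CLASSES = [chr(ord("A") + i) for i in range(10)]


def id_to_letter(label_id: int) -> str:
    """Convert an integer label from the dataset into its letter class."""
    return NOTMNIST_CLASSES[label_id]


# Loading sketch (hypothetical repo id -- substitute the actual one):
# from datasets import load_dataset
# ds = load_dataset("user/notMNIST")
# letter = ds["train"].features["label"].int2str(ds["train"][0]["label"])
```

With this mapping, a predicted label id from a classifier can be reported as its letter, e.g. `id_to_letter(0)` gives `"A"`.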