anubhavmaity committed
Commit
5ba2658
1 Parent(s): 9b65bb7

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -62,7 +62,7 @@ The notMNIST dataset is a collection of images of letters from A to J in various
 
 The dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:
 
-notMNIST/
+```notMNIST/
 |-- train/
 | |-- A/
 | |-- B/
@@ -73,7 +73,7 @@ The dataset is split into a training set and a test set. Each class has its own
 | |-- A/
 | |-- B/
 | |-- ...
-| |-- J/
+| |-- J/```
 
 
 ## Acknowledgements
@@ -86,11 +86,11 @@ The dataset is split into a training set and a test set. Each class has its own
 
 This is a pretty good dataset to train classifiers! According to Yaroslav:
 
-Judging by the examples, one would expect this to be a harder task
+> "Judging by the examples, one would expect this to be a harder task
 than MNIST. This seems to be the case -- logistic regression on top of
 stacked auto-encoder with fine-tuning gets about 89% accuracy whereas
 same approach gives got 98% on MNIST. Dataset consists of small
 hand-cleaned part, about 19k instances, and large uncleaned dataset,
 500k instances. Two parts have approximately 0.5% and 6.5% label error
 rate. I got this by looking through glyphs and counting how often my
-guess of the letter didn't match it's unicode value in the font file.
+guess of the letter didn't match it's unicode value in the font file."
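
For context, the folder-per-class layout fenced in the updated README works directly with standard image-folder loaders. Below is a minimal sketch (not part of the commit) assuming the second split directory is named `test/`, the images are PIL-readable, and `torchvision` is installed; the paths are illustrative.

```python
# Minimal sketch: load notMNIST from the folder-per-class layout in the README.
# Assumptions: the test split lives in notMNIST/test, images are PIL-readable,
# and torchvision is available. Paths are illustrative, not part of the commit.
from torchvision import datasets, transforms

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # glyphs are single-channel
    transforms.ToTensor(),
])

# ImageFolder infers the labels from the A/ ... J/ subdirectory names.
train_set = datasets.ImageFolder("notMNIST/train", transform=to_tensor)
test_set = datasets.ImageFolder("notMNIST/test", transform=to_tensor)

print(train_set.classes)              # ['A', 'B', ..., 'J']
print(len(train_set), len(test_set))  # split sizes
```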