anubhavmaity committed
Commit 4500ae0
1 Parent(s): 5ba2658

Update README.md

Files changed (1):
  1. README.md +6 -4
README.md CHANGED
@@ -53,6 +53,7 @@ The notMNIST dataset is a collection of images of letters from A to J in various
 
 ## Dataset Information
 
+```lua
 Number of Classes: 10 (A to J)
 Number of Samples: 187,24
 Image Size: 28 x 28 pixels
@@ -62,7 +63,8 @@ The notMNIST dataset is a collection of images of letters from A to J in various
 
 The dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:
 
-```notMNIST/
+```lua
+notMNIST/
 |-- train/
 | |-- A/
 | |-- B/
@@ -73,7 +75,7 @@ The dataset is split into a training set and a test set. Each class has its own
 | |-- A/
 | |-- B/
 | |-- ...
-| |-- J/```
+| |-- J/
 
 
 ## Acknowledgements
@@ -86,11 +88,11 @@ The dataset is split into a training set and a test set. Each class has its own
 
 This is a pretty good dataset to train classifiers! According to Yaroslav:
 
-> "Judging by the examples, one would expect this to be a harder task
+> Judging by the examples, one would expect this to be a harder task
 than MNIST. This seems to be the case -- logistic regression on top of
 stacked auto-encoder with fine-tuning gets about 89% accuracy whereas
 same approach gives got 98% on MNIST. Dataset consists of small
 hand-cleaned part, about 19k instances, and large uncleaned dataset,
 500k instances. Two parts have approximately 0.5% and 6.5% label error
 rate. I got this by looking through glyphs and counting how often my
-guess of the letter didn't match it's unicode value in the font file."
+guess of the letter didn't match it's unicode value in the font file.
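
Because the README documents a plain class-per-subdirectory layout, the splits can be loaded directly with `torchvision.datasets.ImageFolder`. A minimal sketch, assuming a local `notMNIST/` root laid out exactly as drawn in the diff above (the root path, the grayscale conversion, and the use of torchvision are assumptions, not part of this commit):

```python
# Minimal loading sketch for the class-per-subdirectory layout documented
# in the README. Assumes ./notMNIST/train and ./notMNIST/test exist locally;
# the root path is an assumption, not part of this commit.
from torchvision import datasets, transforms

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # glyphs are single-channel
    transforms.ToTensor(),                        # 28 x 28 -> 1 x 28 x 28 float tensor
])

train_set = datasets.ImageFolder("notMNIST/train", transform=to_tensor)
test_set = datasets.ImageFolder("notMNIST/test", transform=to_tensor)

# ImageFolder derives labels from sorted subdirectory names,
# so A maps to 0 and J maps to 9.
print(train_set.classes)   # ['A', 'B', ..., 'J']
image, label = train_set[0]
print(image.shape, label)  # torch.Size([1, 28, 28]) 0
```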
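The quoted 89% figure refers to logistic regression on top of stacked auto-encoder features. As a much simpler point of comparison (not Yaroslav's pipeline), logistic regression can be fit on the raw flattened pixels with scikit-learn; `train_set` and `test_set` reuse the hypothetical loader above, and the 5,000-sample cap is only to keep the sketch fast:

```python
# Baseline sketch: logistic regression on raw 784-dim pixel vectors.
# This is NOT the stacked auto-encoder approach from the quote, just the
# plainest comparable baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression

def to_arrays(dataset, limit=5000):
    # Flatten each 1 x 28 x 28 tensor into a 784-dim row vector.
    xs, ys = [], []
    for i in range(min(limit, len(dataset))):
        image, label = dataset[i]
        xs.append(image.numpy().ravel())
        ys.append(label)
    return np.stack(xs), np.array(ys)

X_train, y_train = to_arrays(train_set)
X_test, y_test = to_arrays(test_set)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```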