wgetdd committed
Commit efe9c46
Parent(s): 7ac4bcc

Update README.md

Files changed (1): README.md (+162 -1)
README.md CHANGED
@@ -6,8 +6,169 @@ colorTo: blue
  sdk: gradio
  sdk_version: 3.39.0
  app_file: app.py
- pinned: True
+ pinned: False
  license: mit
  ---

  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Problem Statement

1. Train a CNN with residual blocks on the CIFAR-10 dataset
2. Target accuracy: 90% on the test set
3. Use torch_lr_finder to find the maximum learning rate
4. Use OneCycleLR as the LR scheduler (a wiring sketch for both follows this list)

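A minimal sketch of how the LR range test and the scheduler could be wired together, assuming the `torch_lr_finder` package and placeholder names (`model`, `train_loader`, `device`, `EPOCHS`, and the chosen `max_lr`) rather than the repository's actual training code:

```python
import torch.nn as nn
import torch.optim as optim
from torch_lr_finder import LRFinder
from torch.optim.lr_scheduler import OneCycleLR

# `model`, `train_loader`, and `device` are assumed to exist
# (the CustomResnet and the CIFAR-10 loaders described in this README).
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

# 1. LR range test: sweep the learning rate and see where the loss diverges.
lr_finder = LRFinder(model, optimizer, criterion, device=device)
lr_finder.range_test(train_loader, end_lr=10, num_iter=200, step_mode="exp")
lr_finder.plot()    # pick max_lr from the steepest part of the loss curve
lr_finder.reset()   # restore the model and optimizer to their initial state

# 2. OneCycleLR over the whole run, stepped once per batch.
EPOCHS = 24          # assumption
max_lr = 1.87e-2     # e.g. the value found for S10.ipynb in the table below
scheduler = OneCycleLR(
    optimizer,
    max_lr=max_lr,
    steps_per_epoch=len(train_loader),
    epochs=EPOCHS,
    pct_start=0.2,   # assumption: peak LR reached ~20% into training
    div_factor=100,
    final_div_factor=100,
)
```

`OneCycleLR` is stepped once per batch, so `steps_per_epoch * epochs` must match the number of scheduler steps taken during training.
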
## Features

### GradCAM Image Visualization

The Spaces app lets users visualize GradCAM images generated by the neural network model. GradCAM highlights which regions of the input image most influenced the model's predictions. Users can customize the visualization through the options below; a sketch of how such an overlay can be produced follows the list.

* Whether to visualize the GradCAM output at all
* The opacity level of the overlay, for better clarity

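A rough sketch of how such an overlay could be produced with the `pytorch-grad-cam` package; the target layer, the image variables, and the opacity value are illustrative assumptions, not necessarily what the app does internally:

```python
import numpy as np
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# model is the trained CustomResnet; input_tensor is a normalized [1, 3, 32, 32] batch.
# The target layer is a guess -- pick the last convolutional block of your network.
target_layers = [model.layer4[-1]]

cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor)[0]   # HxW heatmap in [0, 1]

# original_image is the de-normalized input image as uint8 HxWx3.
rgb_img = np.float32(original_image) / 255.0
overlay = show_cam_on_image(
    rgb_img,
    grayscale_cam,
    use_rgb=True,
    image_weight=0.6,   # lower weight = stronger heatmap, i.e. the "opacity" control
)
```
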
### Misclassified Image Viewer

With the Spaces app, users can explore images that the neural network misclassified. This feature helps identify cases where the model's predictions did not match the actual labels. Users can do the following (a collection sketch follows the list):

* Choose whether to inspect cases where the model failed to predict the correct class
* Choose the number of misclassified images to view

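One possible way to collect misclassified test images for this viewer; `test_loader` and `device` are assumed to come from the surrounding code:

```python
import torch

def collect_misclassified(model, test_loader, device, max_images=10):
    """Return up to max_images (image, predicted, actual) triples the model got wrong."""
    model.eval()
    misclassified = []
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            wrong = preds != labels
            for img, pred, label in zip(images[wrong], preds[wrong], labels[wrong]):
                misclassified.append((img.cpu(), pred.item(), label.item()))
                if len(misclassified) >= max_images:
                    return misclassified
    return misclassified
```
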
### Image Upload and Examples

The Spaces app allows users to upload their own images for analysis. It also provides ten example images so users can get started quickly and explore the app's capabilities.

### Top Classes Display

Users can ask the app to show the top predicted classes for an input image. They can specify the number of top classes to display (limited to a maximum of 10), making it easy to focus on the most relevant results.

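A small sketch of how the top-k classes could be computed with plain PyTorch; the class-name list and the cap of 10 mirror the description above, while the function name is an assumption:

```python
import torch
import torch.nn.functional as F

CIFAR10_CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]

def top_k_predictions(model, input_tensor, k=10):
    """Return the k most likely classes with their softmax confidences."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(input_tensor), dim=1)[0]
    k = min(k, len(CIFAR10_CLASSES))          # the app caps this at 10
    confidences, indices = probs.topk(k)
    return {CIFAR10_CLASSES[i]: c.item() for i, c in zip(indices.tolist(), confidences)}
```
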
## How to Use Spaces App

1. **Setting GradCAM Preferences**
   * Upon launching the app, users are asked whether they want to visualize GradCAM images.
   * Users can specify the number of GradCAM images to view, the target layer for visualization, and the opacity level of the overlay.
2. **Misclassified Image Viewer**
   * Users interested in exploring misclassified images can select the relevant option and specify how many images to see.
3. **Uploading Images**
   * To analyze custom images, users can upload their own images through the app's image upload control.
4. **Example Images**
   * Ten example images are provided for users who want to explore the app's features quickly.
5. **Top Classes Display**
   * Users can choose to see the top predicted classes for an input image and specify how many top classes (up to 10) to view. A minimal Gradio wiring sketch follows this list.

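For orientation, a stripped-down Gradio wiring that exposes controls like the ones described above. The control names, ranges, example paths, and the `inference` function are illustrative assumptions, not the app's actual `app.py`:

```python
import gradio as gr

def inference(image, show_gradcam, num_gradcam, opacity,
              show_misclassified, num_misclassified, num_top_classes):
    # Placeholder body: the real app would run the CustomResnet model,
    # GradCAM, the misclassified-image viewer, and top-k prediction here.
    return image, {"placeholder": 1.0}

demo = gr.Interface(
    fn=inference,
    inputs=[
        gr.Image(shape=(32, 32), label="Input image"),
        gr.Checkbox(label="Show GradCAM output?"),
        gr.Slider(1, 10, value=5, step=1, label="Number of GradCAM images"),
        gr.Slider(0.0, 1.0, value=0.6, label="GradCAM opacity"),
        gr.Checkbox(label="Show misclassified images?"),
        gr.Slider(1, 20, value=10, step=1, label="Number of misclassified images"),
        gr.Slider(1, 10, value=3, step=1, label="Number of top classes"),
    ],
    outputs=[gr.Image(label="Result"), gr.Label(label="Top classes")],
    examples=[["examples/cat.jpg", True, 2, 0.6, False, 5, 3]],  # paths are placeholders
)

demo.launch()
```
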
# Model Parameters

```
==========================================================================================
Layer (type:depth-idx)                  Output Shape             Param #
==========================================================================================
CustomResnet                            [512, 10]                --
├─Sequential: 1-1                       [512, 64, 32, 32]        --
│    └─Conv2d: 2-1                      [512, 64, 32, 32]        1,728
│    └─BatchNorm2d: 2-2                 [512, 64, 32, 32]        128
│    └─ReLU: 2-3                        [512, 64, 32, 32]        --
├─Sequential: 1-2                       [512, 128, 16, 16]       --
│    └─Conv2d: 2-4                      [512, 128, 32, 32]       73,728
│    └─MaxPool2d: 2-5                   [512, 128, 16, 16]       --
│    └─BatchNorm2d: 2-6                 [512, 128, 16, 16]       256
│    └─ReLU: 2-7                        [512, 128, 16, 16]       --
├─Sequential: 1-3                       [512, 128, 16, 16]       --
│    └─Conv2d: 2-8                      [512, 128, 16, 16]       147,456
│    └─BatchNorm2d: 2-9                 [512, 128, 16, 16]       256
│    └─ReLU: 2-10                       [512, 128, 16, 16]       --
│    └─Conv2d: 2-11                     [512, 128, 16, 16]       147,456
│    └─BatchNorm2d: 2-12                [512, 128, 16, 16]       256
│    └─ReLU: 2-13                       [512, 128, 16, 16]       --
├─Sequential: 1-4                       [512, 256, 8, 8]         --
│    └─Conv2d: 2-14                     [512, 256, 16, 16]       294,912
│    └─MaxPool2d: 2-15                  [512, 256, 8, 8]         --
│    └─BatchNorm2d: 2-16                [512, 256, 8, 8]         512
│    └─ReLU: 2-17                       [512, 256, 8, 8]         --
├─Sequential: 1-5                       [512, 512, 4, 4]         --
│    └─Conv2d: 2-18                     [512, 512, 8, 8]         1,179,648
│    └─MaxPool2d: 2-19                  [512, 512, 4, 4]         --
│    └─BatchNorm2d: 2-20                [512, 512, 4, 4]         1,024
│    └─ReLU: 2-21                       [512, 512, 4, 4]         --
├─Sequential: 1-6                       [512, 512, 4, 4]         --
│    └─Conv2d: 2-22                     [512, 512, 4, 4]         2,359,296
│    └─BatchNorm2d: 2-23                [512, 512, 4, 4]         1,024
│    └─ReLU: 2-24                       [512, 512, 4, 4]         --
│    └─Conv2d: 2-25                     [512, 512, 4, 4]         2,359,296
│    └─BatchNorm2d: 2-26                [512, 512, 4, 4]         1,024
│    └─ReLU: 2-27                       [512, 512, 4, 4]         --
├─MaxPool2d: 1-7                        [512, 512, 1, 1]         --
├─Linear: 1-8                           [512, 10]                5,130
==========================================================================================
Total params: 6,573,130
Trainable params: 6,573,130
Non-trainable params: 0
Total mult-adds (G): 194.18
==========================================================================================
Input size (MB): 6.29
Forward/backward pass size (MB): 2382.41
Params size (MB): 26.29
Estimated Total Size (MB): 2414.99
==========================================================================================
```

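The summary above looks like `torchinfo` output for a batch of 512 CIFAR-10 images; it can be reproduced with something along these lines (the import path of `CustomResnet` is a guess):

```python
from torchinfo import summary

from custom_resnet import CustomResnet   # hypothetical import path for the model class

model = CustomResnet()
summary(
    model,
    input_size=(512, 3, 32, 32),          # batch of 512 CIFAR-10 images, as in the table
    col_names=("output_size", "num_params"),
    depth=2,
)
```
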
# Accuracy Report

| Model Experiments | Found Max LR | Min LR | Best Validation Accuracy | Best Training Accuracy |
| ----------------- | ------------ | ------ | ------------------------ | ---------------------- |
| [Exp-1](https://github.com/deepanshudashora/ERAV1/blob/master/session10/experiments/S10_95_90.ipynb) | 3.31E-02 | 0.023 | 90.91% | 95.88% |
| [Exp-2](https://github.com/deepanshudashora/ERAV1/blob/master/session10/experiments/S10_96_91.ipynb) | 2.63E-02 | 0.02 | 91.32% | 96.95% |
| [Exp-3](https://github.com/deepanshudashora/ERAV1/blob/master/session10/experiments/S10_98_91.ipynb) | 1.19E-02 | 0.01 | 91.72% | 98.77% |
| [S10.ipynb](https://github.com/deepanshudashora/ERAV1/blob/master/session10/S10.ipynb) | 1.87E-02 | 0.01 | 91.80% | 96.93% |

# Final Training Log

```
Epoch 23: 100% 118/118 [00:57<00:00, 2.06it/s, loss=0.0576, v_num=0, train_loss=0.0511, train_acc=0.988, val_loss=0.268, val_acc=0.922]
```

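The metric names in the progress bar (`train_loss`, `train_acc`, `val_loss`, `val_acc`, `v_num`) suggest a PyTorch Lightning run. A hedged sketch of how such metrics and the OneCycleLR schedule might be wired in a `LightningModule`; the class and its details are assumptions, not the actual training code:

```python
import torch
import pytorch_lightning as pl
from torchmetrics import Accuracy

class LitCustomResnet(pl.LightningModule):
    def __init__(self, model, max_lr, steps_per_epoch, epochs=24):
        super().__init__()
        self.model = model
        self.criterion = torch.nn.CrossEntropyLoss()
        self.train_acc = Accuracy(task="multiclass", num_classes=10)
        self.val_acc = Accuracy(task="multiclass", num_classes=10)
        self.max_lr = max_lr
        self.steps_per_epoch = steps_per_epoch
        self.epochs = epochs

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.model(x)
        loss = self.criterion(logits, y)
        self.train_acc(logits, y)
        # These keys match the ones shown in the progress bar above.
        self.log("train_loss", loss, prog_bar=True)
        self.log("train_acc", self.train_acc, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        logits = self.model(x)
        loss = self.criterion(logits, y)
        self.val_acc(logits, y)
        self.log("val_loss", loss, prog_bar=True)
        self.log("val_acc", self.val_acc, prog_bar=True)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=self.max_lr / 10)
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer, max_lr=self.max_lr,
            steps_per_epoch=self.steps_per_epoch, epochs=self.epochs)
        # Step the scheduler every batch, as OneCycleLR expects.
        return {"optimizer": optimizer,
                "lr_scheduler": {"scheduler": scheduler, "interval": "step"}}
```
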
# Results

## Accuracy Plot

Here are the accuracy and loss curves for the model:

<p align="center">
<img src="images/accuracy_curve.png" alt="Accuracy and loss curves" />
</p>

## Misclassified Images

Here is a sample of images the model misclassified:

<p align="center">
<img src="images/missclassified.png" alt="Misclassified images" />
</p>

## Accuracy Report for Each Class

| Class      | Accuracy |
| ---------- | -------- |
| airplane   | 82 %     |
| automobile | 100 %    |
| bird       | 94 %     |
| cat        | 75 %     |
| deer       | 81 %     |
| dog        | 82 %     |
| frog       | 96 %     |
| horse      | 100 %    |
| ship       | 76 %     |
| truck      | 82 %     |

<p align="center">
<img src="images/accuracy_per_class.png" alt="Per-class accuracy" />
</p>

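For reference, a sketch of how per-class numbers like the ones above can be computed over the test set; `test_loader` and `device` are assumed to exist:

```python
import torch

classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

def per_class_accuracy(model, test_loader, device):
    """Print the accuracy for each CIFAR-10 class."""
    correct = torch.zeros(len(classes))
    total = torch.zeros(len(classes))
    model.eval()
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            for cls in range(len(classes)):
                mask = labels == cls
                total[cls] += mask.sum()
                correct[cls] += (preds[mask] == cls).sum()
    for cls, name in enumerate(classes):
        acc = 100.0 * correct[cls].item() / total[cls].item()
        print(f"Accuracy of {name} : {acc:.0f} %")
```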