---
title: Custom Residual CNN Trained on CIFAR Dataset
emoji: 🚀
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 3.39.0
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# [Main REPO](https://github.com/deepanshudashora/ERAV1/tree/master/session12)
# Problem Statement
1. Train a CNN with residual blocks on the CIFAR-10 dataset
2. Target accuracy: 90% on the test set
3. Use torch_lr_finder to find the maximum learning rate
4. Use OneCycleLR as the LR scheduler (a short sketch follows this list)
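A minimal sketch of how the LR search and scheduler might be wired up is shown below, assuming a CIFAR-10 `train_loader` and the `CustomResnet` model already exist; the exact hyperparameters used in each experiment are in the linked notebooks.

```python
# Minimal sketch: find a max LR with torch_lr_finder, then drive training with
# OneCycleLR. `model` and `train_loader` are assumed to exist; the warm-up
# fraction and divisors are illustrative, not the exact notebook values.
import torch
from torch_lr_finder import LRFinder

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

# Exponential LR range test to locate a good maximum learning rate
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(train_loader, end_lr=10, num_iter=200, step_mode="exp")
lr_finder.plot()   # inspect the loss-vs-LR curve
lr_finder.reset()  # restore the original model and optimizer state

max_lr = 1.87e-2   # found max LR reported in the accuracy table below
epochs = 24
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=max_lr,
    steps_per_epoch=len(train_loader),
    epochs=epochs,
    pct_start=5 / epochs,   # assumed warm-up fraction
    div_factor=100,
    final_div_factor=100,
)
# scheduler.step() is then called once per batch inside the training loop
```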
## Features
### GradCAM Image Visualization
Spaces App allows users to visualize GradCAM images generated from the neural network model. GradCAM provides insight into which regions of the input image influenced the model's predictions the most. Users can customize the visualization by specifying:
* Whether to visualize the GradCAM output
* The overlay opacity level, for better clarity (see the sketch below)
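A minimal sketch of the GradCAM overlay, assuming the `pytorch-grad-cam` package, a trained `model`, a normalized input tensor `input_tensor` of shape (1, 3, 32, 32), and the same image as a float RGB array `rgb_img` in [0, 1]; the target-layer attribute name is hypothetical.

```python
# Sketch of the GradCAM overlay used for the visualization above.
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# Target layer is an assumption: for this architecture the last 512-channel
# convolutional stage is a natural choice (attribute name is hypothetical).
target_layers = [model.layer3]

cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor)[0]   # (H, W) heatmap for the top class

# `image_weight` blends the heatmap with the image, i.e. the opacity control in the app
overlay = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True, image_weight=0.6)
```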
### Misclassified Image Viewer
With Spaces App, users can explore images misclassified by the neural network model. This feature helps identify cases where the model's predictions did not match the actual labels. Users can:
* Choose whether to view cases where the model failed to predict the correct class
* Choose the number of misclassified images to view (see the sketch below)
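A sketch of how the misclassified images can be collected, assuming a CIFAR-10 `test_loader` and a trained `model` on `device`:

```python
# Collect up to `max_images` test images the model gets wrong, together with
# the predicted and true labels (used to build the misclassified-image grid).
import torch

def get_misclassified(model, test_loader, device, max_images=20):
    model.eval()
    misclassified = []  # list of (image, predicted_label, true_label)
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            for img, pred, label in zip(images, preds, labels):
                if pred != label:
                    misclassified.append((img.cpu(), pred.item(), label.item()))
                if len(misclassified) >= max_images:
                    return misclassified
    return misclassified
```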
### Image Upload and Examples
Spaces App allows users to upload their own images for analysis. Additionally, it provides ten example images to help users get started quickly and explore the app's capabilities.
### Top Classes Display
Users can request the app to show the top predicted classes for an input image. They can specify the number of top classes to be displayed (limited to a maximum of 10), making it easy to focus on the most relevant results.
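A sketch of the top-k prediction behind this view, assuming a single normalized input tensor and the standard CIFAR-10 class names:

```python
# Return the k most probable classes for one input image (k is capped at 10,
# matching the app's limit). `model` and `input_tensor` are assumed to exist.
import torch
import torch.nn.functional as F

classes = ("airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck")

def top_k_classes(model, input_tensor, k=10):
    k = min(k, 10)
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(input_tensor), dim=1)[0]
    top_prob, top_idx = probs.topk(k)
    return {classes[i]: p.item() for p, i in zip(top_prob, top_idx)}
```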
## How to Use Spaces App
1. **Setting GradCAM Preferences**
* Upon launching the app, users will be prompted to choose whether they want to visualize GradCAM images.
* Users can specify the number of GradCAM images to view, select the target layer for visualization, and adjust the opacity level for a clearer overlay.
2. **Misclassified Image Viewer**
* If users are interested in exploring misclassified images, they can select the relevant option and specify the number of images they want to see.
3. **Uploading Images**
* To analyze custom images, users can upload their own images through the app's image upload functionality.
4. **Example Images**
* For users who want to quickly explore the app's features, ten example images are provided.
5. **Top Classes Display**
* Users can choose to see the top predicted classes for an input image and specify how many top classes (up to 10) they wish to view. A rough sketch of how these controls map onto the Gradio interface follows this list.
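The sketch below shows roughly how these controls could be expressed as Gradio components (3.x API); the actual wiring lives in app.py, so the `inference` signature, the component list, and the example paths here are assumptions for illustration only.

```python
# Rough sketch of the app's controls as a Gradio interface (Gradio 3.x API).
import gradio as gr

def inference(image, show_gradcam, opacity, show_misclassified, n_misclassified, n_top):
    # Placeholder: the real app.py wires these inputs to the model, GradCAM,
    # and misclassified-image helpers and returns the label dict and gallery.
    return {}, []

demo = gr.Interface(
    fn=inference,
    inputs=[
        gr.Image(shape=(32, 32), label="Input image"),
        gr.Checkbox(label="Show GradCAM output?"),
        gr.Slider(0, 1, value=0.6, label="GradCAM opacity"),
        gr.Checkbox(label="Show misclassified images?"),
        gr.Slider(1, 20, value=10, step=1, label="Number of misclassified images"),
        gr.Slider(1, 10, value=3, step=1, label="Number of top classes"),
    ],
    outputs=[gr.Label(label="Top classes"), gr.Gallery(label="Visualizations")],
    examples=[["examples/cat.jpg", True, 0.6, False, 10, 3]],  # assumed example path
)
demo.launch()
```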
# Model Parameters
```
==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
CustomResnet [512, 10] --
├─Sequential: 1-1 [512, 64, 32, 32] --
│ └─Conv2d: 2-1 [512, 64, 32, 32] 1,728
│ └─BatchNorm2d: 2-2 [512, 64, 32, 32] 128
│ └─ReLU: 2-3 [512, 64, 32, 32] --
├─Sequential: 1-2 [512, 128, 16, 16] --
│ └─Conv2d: 2-4 [512, 128, 32, 32] 73,728
│ └─MaxPool2d: 2-5 [512, 128, 16, 16] --
│ └─BatchNorm2d: 2-6 [512, 128, 16, 16] 256
│ └─ReLU: 2-7 [512, 128, 16, 16] --
├─Sequential: 1-3 [512, 128, 16, 16] --
│ └─Conv2d: 2-8 [512, 128, 16, 16] 147,456
│ └─BatchNorm2d: 2-9 [512, 128, 16, 16] 256
│ └─ReLU: 2-10 [512, 128, 16, 16] --
│ └─Conv2d: 2-11 [512, 128, 16, 16] 147,456
│ └─BatchNorm2d: 2-12 [512, 128, 16, 16] 256
│ └─ReLU: 2-13 [512, 128, 16, 16] --
├─Sequential: 1-4 [512, 256, 8, 8] --
│ └─Conv2d: 2-14 [512, 256, 16, 16] 294,912
│ └─MaxPool2d: 2-15 [512, 256, 8, 8] --
│ └─BatchNorm2d: 2-16 [512, 256, 8, 8] 512
│ └─ReLU: 2-17 [512, 256, 8, 8] --
├─Sequential: 1-5 [512, 512, 4, 4] --
│ └─Conv2d: 2-18 [512, 512, 8, 8] 1,179,648
│ └─MaxPool2d: 2-19 [512, 512, 4, 4] --
│ └─BatchNorm2d: 2-20 [512, 512, 4, 4] 1,024
│ └─ReLU: 2-21 [512, 512, 4, 4] --
├─Sequential: 1-6 [512, 512, 4, 4] --
│ └─Conv2d: 2-22 [512, 512, 4, 4] 2,359,296
│ └─BatchNorm2d: 2-23 [512, 512, 4, 4] 1,024
│ └─ReLU: 2-24 [512, 512, 4, 4] --
│ └─Conv2d: 2-25 [512, 512, 4, 4] 2,359,296
│ └─BatchNorm2d: 2-26 [512, 512, 4, 4] 1,024
│ └─ReLU: 2-27 [512, 512, 4, 4] --
├─MaxPool2d: 1-7 [512, 512, 1, 1] --
├─Linear: 1-8 [512, 10] 5,130
==========================================================================================
Total params: 6,573,130
Trainable params: 6,573,130
Non-trainable params: 0
Total mult-adds (G): 194.18
==========================================================================================
Input size (MB): 6.29
Forward/backward pass size (MB): 2382.41
Params size (MB): 26.29
Estimated Total Size (MB): 2414.99
==========================================================================================
```
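The summary corresponds to a ResNet-9-style network (prep layer, three downsampling stages, two residual blocks, max-pool, linear head) evaluated with batch size 512. A faithful sketch of the architecture implied by the table is shown below; attribute names are assumptions, and the reference implementation lives in the main repo.

```python
# Sketch of the CustomResnet implied by the summary above. Layer names are
# assumptions; only the shapes and parameter counts are taken from the table.
import torch
import torch.nn as nn

def conv_bn(in_c, out_c, pool=False):
    layers = [nn.Conv2d(in_c, out_c, kernel_size=3, padding=1, bias=False)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    layers += [nn.BatchNorm2d(out_c), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class CustomResnet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.prep   = conv_bn(3, 64)                # 32x32
        self.layer1 = conv_bn(64, 128, pool=True)   # 16x16
        self.res1   = nn.Sequential(conv_bn(128, 128), conv_bn(128, 128))
        self.layer2 = conv_bn(128, 256, pool=True)  # 8x8
        self.layer3 = conv_bn(256, 512, pool=True)  # 4x4
        self.res2   = nn.Sequential(conv_bn(512, 512), conv_bn(512, 512))
        self.pool   = nn.MaxPool2d(4)               # 4x4 -> 1x1
        self.fc     = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.prep(x)
        x = self.layer1(x)
        x = x + self.res1(x)   # residual block 1
        x = self.layer2(x)
        x = self.layer3(x)
        x = x + self.res2(x)   # residual block 2
        x = self.pool(x).flatten(1)
        return self.fc(x)

# The table above is a torchinfo summary with batch size 512:
# from torchinfo import summary; summary(CustomResnet(), input_size=(512, 3, 32, 32))
```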
# Accuracy Report
|Model Experiments|Found Max LR|Min LR|Best Validation Accuracy|Best Training Accuracy|
|--|--|--|--|--|
|[Exp-1](https://github.com/deepanshudashora/ERAV1/blob/master/session10/experiments/S10_95_90.ipynb)|3.31E-02|0.023|90.91%|95.88%|
|[Exp-2](https://github.com/deepanshudashora/ERAV1/blob/master/session10/experiments/S10_96_91.ipynb)|2.63E-02|0.02|91.32%|96.95%|
|[Exp-3](https://github.com/deepanshudashora/ERAV1/blob/master/session10/experiments/S10_98_91.ipynb)|1.19E-02|0.01|91.72%|98.77%|
|[Exp-4-TorchCode](https://github.com/deepanshudashora/ERAV1/blob/master/session10/S10.ipynb)|1.87E-02|0.01|91.80%|96.93%|
|[Exp-5-Lightning-precision-16](https://github.com/deepanshudashora/ERAV1/blob/master/session12/S12_Training.ipynb)|1.87E-02|0.01|92.20%|98.8%|
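Exp-5 was trained with PyTorch Lightning at 16-bit precision; a minimal sketch of that setup is shown below. The `LitCustomResnet` wrapper name is hypothetical, and the flags are illustrative.

```python
# Sketch of the Exp-5 training setup: Lightning with mixed precision and a
# CSV logger, which writes the metrics.csv linked in the training logs below.
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger

trainer = pl.Trainer(
    max_epochs=24,
    precision=16,                         # 16-bit mixed-precision training
    accelerator="gpu",
    devices=1,
    logger=CSVLogger("csv_logs_training"),
)
trainer.fit(LitCustomResnet(), train_dataloaders=train_loader, val_dataloaders=test_loader)
```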
# [Training Logs](https://github.com/deepanshudashora/ERAV1/blob/master/session12/csv_logs_training/lightning_logs/version_0/metrics.csv)
```
lr-Adam step train_loss train_acc epoch val_loss val_acc
88 0.005545 1799 NaN NaN NaN NaN NaN
89 NaN 1799 0.220482 0.914062 18.0 NaN NaN
90 0.005043 1849 NaN NaN NaN NaN NaN
91 NaN 1849 0.235307 0.910156 18.0 NaN NaN
92 NaN 1861 NaN NaN 18.0 0.406253 0.8724
93 0.004541 1899 NaN NaN NaN NaN NaN
94 NaN 1899 0.197022 0.925781 19.0 NaN NaN
95 0.004039 1949 NaN NaN NaN NaN NaN
96 NaN 1949 0.224633 0.933594 19.0 NaN NaN
97 NaN 1959 NaN NaN 19.0 0.367574 0.8873
98 0.003537 1999 NaN NaN NaN NaN NaN
99 NaN 1999 0.175551 0.921875 20.0 NaN NaN
100 0.003035 2049 NaN NaN NaN NaN NaN
101 NaN 2049 0.148070 0.955078 20.0 NaN NaN
102 NaN 2057 NaN NaN 20.0 0.345555 0.8963
103 0.002532 2099 NaN NaN NaN NaN NaN
104 NaN 2099 0.139945 0.955078 21.0 NaN NaN
105 0.002030 2149 NaN NaN NaN NaN NaN
106 NaN 2149 0.112343 0.960938 21.0 NaN NaN
107 NaN 2155 NaN NaN 21.0 0.311762 0.9046
108 0.001528 2199 NaN NaN NaN NaN NaN
109 NaN 2199 0.079441 0.972656 22.0 NaN NaN
110 0.001026 2249 NaN NaN NaN NaN NaN
111 NaN 2249 0.084935 0.962891 22.0 NaN NaN
112 NaN 2253 NaN NaN 22.0 0.282218 0.9190
113 0.000524 2299 NaN NaN NaN NaN NaN
114 NaN 2299 0.074329 0.968750 23.0 NaN NaN
115 0.000022 2349 NaN NaN NaN NaN NaN
116 NaN 2349 0.043582 0.988281 23.0 NaN NaN
117 NaN 2351 NaN NaN 23.0 0.268215 0.9219
```
# Results
## Accuracy Plot
Here is the accuracy and loss plot for the model:
![accuracy_curve.png](images/accuracy_curve.png)
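A sketch of how this plot can be reproduced from the logged metrics.csv (column names taken from the log excerpt above); the matplotlib layout is illustrative.

```python
# Rebuild the accuracy and loss curves from the Lightning CSV logs. Train and
# validation metrics are logged on separate rows, so drop the NaNs per metric.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("csv_logs_training/lightning_logs/version_0/metrics.csv")
train = df.dropna(subset=["train_acc"])
val = df.dropna(subset=["val_acc"])

fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(12, 4))
ax_acc.plot(train["step"], train["train_acc"], label="train_acc")
ax_acc.plot(val["step"], val["val_acc"], label="val_acc")
ax_acc.set_xlabel("step"); ax_acc.set_ylabel("accuracy"); ax_acc.legend()

ax_loss.plot(train["step"], train["train_loss"], label="train_loss")
ax_loss.plot(val["step"], val["val_loss"], label="val_loss")
ax_loss.set_xlabel("step"); ax_loss.set_ylabel("loss"); ax_loss.legend()
fig.savefig("images/accuracy_curve.png")
```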
## Accuracy Report for Each Class
| Class | Accuracy |
|--|--|
| airplane | 82 % |
| automobile | 100 % |
| bird | 94 % |
| cat | 75 % |
| deer | 81 % |
| dog | 82 % |
| frog | 96 % |
| horse | 100 % |
| ship | 76 % |
| truck | 82 % |
![accuracy_per_class.png](images/accuracy_per_class.png)
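For reference, a sketch of the per-class accuracy computation, assuming a CIFAR-10 `test_loader` and a trained `model` on `device`:

```python
# Count correct predictions per class over the test set and print the
# percentages in the same format as the report above.
import torch

classes = ("airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck")

def per_class_accuracy(model, test_loader, device):
    correct = torch.zeros(len(classes))
    total = torch.zeros(len(classes))
    model.eval()
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            for pred, label in zip(preds, labels):
                correct[label] += int(pred == label)
                total[label] += 1
    for i, name in enumerate(classes):
        print(f"Accuracy of {name} : {100.0 * correct[i] / total[i]:.0f} %")
```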