Dogs-Breed-Image-Classification-V0

This model is a fine-tuned version of microsoft/resnet-50 on the Stanford Dogs dataset (loaded with the imagefolder loader). It achieves the following results on the evaluation set:

  • Loss: 1.8210
  • Accuracy: 0.7444

Model description

This model was trained on the Stanford Dogs dataset, obtained from Kaggle.

Quoted from the dataset website: "The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grained image categorization, a challenging problem as certain dog breeds have near-identical features or differ in colour and age."

Citation: Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.

Secondary citation: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

Intended uses & limitations

This model is fine-tuned solely for classifying the 120 dog breeds in that dataset.
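
A minimal inference sketch using the Transformers Auto classes is shown below. The repository id, image path, and the use of AutoImageProcessor/AutoModelForImageClassification are assumptions based on standard usage for fine-tuned ResNet checkpoints, not details taken from this card.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repository id -- replace with the actual Hub id of this model.
model_id = "<user>/Dogs-Breed-Image-Classification-V0"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Any RGB photo of a dog; the file name is illustrative.
image = Image.open("dog.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_breed = model.config.id2label[logits.argmax(-1).item()]
print(predicted_breed)
```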

Training and evaluation data

The dataset was split into 75% training data and 25% evaluation data.
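
As a rough sketch of how such a split could be produced with the datasets library (the folder layout, path, and seed are assumptions; the card only states the 75/25 proportions):

```python
from datasets import load_dataset

# The imagefolder loader expects one sub-folder per breed; the path is illustrative.
dataset = load_dataset("imagefolder", data_dir="stanford_dogs/Images")["train"]

# Reproduce the 75% / 25% split described above (the seed here is an assumption).
splits = dataset.train_test_split(test_size=0.25, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```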

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
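
For reference, a TrainingArguments configuration mirroring the list above might look like the following sketch; output_dir and the per-epoch evaluation cadence are assumptions, not details stated in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dogs-breed-resnet50",   # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",        # assumed; the results table reports metrics once per epoch
)
```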

Training results

Training Loss Epoch Step Validation Loss Accuracy
13.4902 1.0 515 4.7822 0.0104
4.7159 2.0 1030 4.6822 0.0323
4.6143 3.0 1545 4.5940 0.0554
4.4855 4.0 2060 4.5027 0.0935
4.36 5.0 2575 4.3961 0.1239
4.2198 6.0 3090 4.3112 0.1528
4.0882 7.0 3605 4.1669 0.1747
3.9314 8.0 4120 4.0775 0.2021
3.7863 9.0 4635 3.9487 0.2310
3.6511 10.0 5150 3.9028 0.2466
3.5168 11.0 5665 3.8635 0.2626
3.3999 12.0 6180 3.7550 0.2767
3.3037 13.0 6695 3.6973 0.2884
3.1613 14.0 7210 3.6315 0.3037
3.0754 15.0 7725 3.4839 0.3188
2.9441 16.0 8240 3.4406 0.3302
2.8579 17.0 8755 3.3528 0.3406
2.7531 18.0 9270 3.3132 0.3472
2.6477 19.0 9785 3.2736 0.3567
2.5422 20.0 10300 3.1950 0.3756
2.4629 21.0 10815 3.1174 0.4004
2.3735 22.0 11330 2.9916 0.4225
2.2436 23.0 11845 2.9205 0.4509
2.1578 24.0 12360 2.9197 0.4689
2.0671 25.0 12875 2.8196 0.4866
1.9902 26.0 13390 2.7117 0.4961
1.8737 27.0 13905 2.7129 0.5078
1.7945 28.0 14420 2.6654 0.5143
1.7092 29.0 14935 2.6273 0.5301
1.6228 30.0 15450 2.5407 0.5454
1.5744 31.0 15965 2.5412 0.5559
1.4761 32.0 16480 2.4658 0.5658
1.4084 33.0 16995 2.4247 0.5673
1.2624 34.0 17510 2.3766 0.5758
1.2066 35.0 18025 2.2879 0.5843
1.124 36.0 18540 2.2039 0.5872
1.074 37.0 19055 2.2469 0.5965
0.9937 38.0 19570 2.1575 0.6011
0.9418 39.0 20085 2.0854 0.6122
0.8812 40.0 20600 1.9991 0.6254
0.819 41.0 21115 2.0161 0.6312
0.771 42.0 21630 1.9253 0.6375
0.7128 43.0 22145 1.9412 0.6390
0.6434 44.0 22660 1.8463 0.6509
0.6138 45.0 23175 1.8163 0.6650
0.5325 46.0 23690 1.7881 0.6710
0.498 47.0 24205 1.7526 0.6744
0.4565 48.0 24720 1.7155 0.6859
0.4109 49.0 25235 1.6874 0.6946
0.3681 50.0 25750 1.7386 0.6997
0.3306 51.0 26265 1.6578 0.7104
0.2913 52.0 26780 1.6641 0.7104
0.2598 53.0 27295 1.6823 0.7162
0.2311 54.0 27810 1.6835 0.7157
0.2115 55.0 28325 1.6581 0.7206
0.1843 56.0 28840 1.6286 0.7274
0.1668 57.0 29355 1.6358 0.7225
0.1483 58.0 29870 1.6422 0.7250
0.132 59.0 30385 1.6618 0.7284
0.1164 60.0 30900 1.6894 0.7262
0.1043 61.0 31415 1.6923 0.7276
0.0937 62.0 31930 1.6627 0.7323
0.0826 63.0 32445 1.6280 0.7342
0.0743 64.0 32960 1.6204 0.7366
0.0638 65.0 33475 1.6890 0.7383
0.0603 66.0 33990 1.6967 0.7335
0.0491 67.0 34505 1.6975 0.7306
0.0459 68.0 35020 1.7242 0.7337
0.0416 69.0 35535 1.7019 0.7374
0.0382 70.0 36050 1.7098 0.7381
0.0378 71.0 36565 1.7188 0.7383
0.0326 72.0 37080 1.8212 0.7376
0.0323 73.0 37595 1.7965 0.7393
0.0299 74.0 38110 1.7934 0.7301
0.0259 75.0 38625 1.7799 0.7335
0.0276 76.0 39140 1.8456 0.7301
0.0257 77.0 39655 1.8551 0.7391
0.0234 78.0 40170 1.7780 0.7391
0.0222 79.0 40685 1.8216 0.7362
0.0195 80.0 41200 1.8333 0.7352
0.0214 81.0 41715 1.8526 0.7430
0.0207 82.0 42230 1.8581 0.7364
0.0171 83.0 42745 1.8329 0.7393
0.0175 84.0 43260 1.8841 0.7396
0.0165 85.0 43775 1.8381 0.7345
0.0152 86.0 44290 1.8192 0.7379
0.0168 87.0 44805 1.8538 0.7388
0.0158 88.0 45320 1.8390 0.7371
0.0181 89.0 45835 1.8555 0.7374
0.0142 90.0 46350 1.7987 0.7352
0.0147 91.0 46865 1.8446 0.7427
0.0142 92.0 47380 1.8210 0.7444
0.0124 93.0 47895 1.8233 0.7405
0.0128 94.0 48410 1.8517 0.7393
0.0135 95.0 48925 1.8408 0.7413
0.0122 96.0 49440 1.8153 0.7396
0.0141 97.0 49955 1.8645 0.7432
0.0121 98.0 50470 1.8526 0.7430
0.0124 99.0 50985 1.8693 0.7388
0.0113 100.0 51500 1.8051 0.7427

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.3.0
  • Datasets 2.15.0
  • Tokenizers 0.15.1