hushem_40x_deit_small_sgd_001_fold3

This model is a fine-tuned version of facebook/deit-small-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3304
  • Accuracy: 0.8837
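
For reference, below is a minimal inference sketch for this checkpoint. It is a hedged example, not taken from the card: the repository id uses a placeholder owner prefix, the image path is illustrative, and it assumes Transformers and Pillow are installed. If you trained the model yourself, point at your local output directory instead.

```python
# Hedged sketch: classify one image with this fine-tuned DeiT checkpoint.
# "<user>" is a placeholder; replace it with the actual Hub owner, or use a local path.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "<user>/hushem_40x_deit_small_sgd_001_fold3"  # placeholder repository id

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # any RGB image from the target domain
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```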

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
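
The fragment below is a hedged sketch of how the listed hyperparameters map onto TrainingArguments for the Hugging Face Trainer. Only the values in the list above come from this card; the output directory and the evaluation/logging cadence are illustrative assumptions. The Adam betas and epsilon listed above match the Trainer's default AdamW settings, so no explicit optimizer arguments are shown.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# output_dir and the per-epoch evaluation/logging strategy are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_40x_deit_small_sgd_001_fold3",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: metrics in the table below are reported per epoch
    logging_strategy="epoch",
)
```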

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2372 | 1.0 | 217 | 1.2798 | 0.3488 |
| 1.0621 | 2.0 | 434 | 1.1335 | 0.5814 |
| 0.8881 | 3.0 | 651 | 1.0243 | 0.5814 |
| 0.7868 | 4.0 | 868 | 0.9174 | 0.6279 |
| 0.6948 | 5.0 | 1085 | 0.8587 | 0.6279 |
| 0.5714 | 6.0 | 1302 | 0.7810 | 0.7209 |
| 0.4585 | 7.0 | 1519 | 0.7011 | 0.8140 |
| 0.4277 | 8.0 | 1736 | 0.6580 | 0.7907 |
| 0.3688 | 9.0 | 1953 | 0.6164 | 0.8140 |
| 0.2836 | 10.0 | 2170 | 0.5578 | 0.8140 |
| 0.2148 | 11.0 | 2387 | 0.5322 | 0.8140 |
| 0.2211 | 12.0 | 2604 | 0.5199 | 0.8140 |
| 0.2014 | 13.0 | 2821 | 0.4865 | 0.8140 |
| 0.1799 | 14.0 | 3038 | 0.4815 | 0.8140 |
| 0.1565 | 15.0 | 3255 | 0.4749 | 0.7907 |
| 0.1129 | 16.0 | 3472 | 0.4440 | 0.8372 |
| 0.0992 | 17.0 | 3689 | 0.4542 | 0.7907 |
| 0.1    | 18.0 | 3906 | 0.4290 | 0.8140 |
| 0.0944 | 19.0 | 4123 | 0.4149 | 0.8140 |
| 0.0856 | 20.0 | 4340 | 0.4111 | 0.8372 |
| 0.0816 | 21.0 | 4557 | 0.4115 | 0.8140 |
| 0.0563 | 22.0 | 4774 | 0.3956 | 0.7907 |
| 0.0625 | 23.0 | 4991 | 0.3834 | 0.7907 |
| 0.0683 | 24.0 | 5208 | 0.3893 | 0.7907 |
| 0.0454 | 25.0 | 5425 | 0.3773 | 0.8140 |
| 0.0571 | 26.0 | 5642 | 0.3874 | 0.7907 |
| 0.0322 | 27.0 | 5859 | 0.3743 | 0.8140 |
| 0.0339 | 28.0 | 6076 | 0.3713 | 0.8372 |
| 0.0345 | 29.0 | 6293 | 0.3616 | 0.8372 |
| 0.0434 | 30.0 | 6510 | 0.3686 | 0.8372 |
| 0.0377 | 31.0 | 6727 | 0.3495 | 0.8605 |
| 0.0295 | 32.0 | 6944 | 0.3476 | 0.8372 |
| 0.0279 | 33.0 | 7161 | 0.3534 | 0.8605 |
| 0.0232 | 34.0 | 7378 | 0.3489 | 0.8372 |
| 0.0275 | 35.0 | 7595 | 0.3346 | 0.8837 |
| 0.0214 | 36.0 | 7812 | 0.3309 | 0.8605 |
| 0.018  | 37.0 | 8029 | 0.3342 | 0.8605 |
| 0.0167 | 38.0 | 8246 | 0.3289 | 0.8837 |
| 0.0196 | 39.0 | 8463 | 0.3389 | 0.8605 |
| 0.0269 | 40.0 | 8680 | 0.3388 | 0.8605 |
| 0.0126 | 41.0 | 8897 | 0.3309 | 0.8605 |
| 0.0119 | 42.0 | 9114 | 0.3316 | 0.8837 |
| 0.0174 | 43.0 | 9331 | 0.3268 | 0.8837 |
| 0.0199 | 44.0 | 9548 | 0.3304 | 0.8837 |
| 0.0115 | 45.0 | 9765 | 0.3378 | 0.8605 |
| 0.0138 | 46.0 | 9982 | 0.3301 | 0.8837 |
| 0.0107 | 47.0 | 10199 | 0.3312 | 0.8605 |
| 0.0108 | 48.0 | 10416 | 0.3294 | 0.9070 |
| 0.0125 | 49.0 | 10633 | 0.3301 | 0.8837 |
| 0.0148 | 50.0 | 10850 | 0.3304 | 0.8837 |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2