
# smids_5x_deit_small_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.5824
- Accuracy: 0.78

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
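With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps linearly from 0 to 0.001 over the first 10% of optimizer steps, then decays linearly back to 0. A minimal sketch of that schedule, assuming the 18,750 total steps implied by the results table (50 epochs × 375 steps per epoch):

```python
def lr_at_step(step: int,
               base_lr: float = 1e-3,
               total_steps: int = 18_750,   # 50 epochs x 375 steps/epoch, per the results table
               warmup_ratio: float = 0.1) -> float:
    """Linear warmup to base_lr, then linear decay to 0 (the shape used by
    transformers' get_linear_schedule_with_warmup)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1875 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(lr_at_step(0))        # 0.0 at the first step
print(lr_at_step(1_875))    # peak 0.001 at the end of warmup
print(lr_at_step(18_750))   # 0.0 at the final step
```

This is only an illustration of the schedule's shape; during training the rate is applied by the scheduler that the Trainer builds from the hyperparameters above.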

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9095        | 1.0   | 375   | 0.8145          | 0.53     |
| 0.8394        | 2.0   | 750   | 0.8071          | 0.565    |
| 0.7959        | 3.0   | 1125  | 0.7903          | 0.6133   |
| 0.7869        | 4.0   | 1500  | 0.7586          | 0.6367   |
| 0.7989        | 5.0   | 1875  | 0.7552          | 0.6      |
| 0.8124        | 6.0   | 2250  | 0.7289          | 0.66     |
| 0.7708        | 7.0   | 2625  | 0.7042          | 0.6733   |
| 0.7586        | 8.0   | 3000  | 0.7504          | 0.6583   |
| 0.6986        | 9.0   | 3375  | 0.6527          | 0.6983   |
| 0.6979        | 10.0  | 3750  | 0.6544          | 0.7017   |
| 0.682         | 11.0  | 4125  | 0.6621          | 0.7117   |
| 0.6815        | 12.0  | 4500  | 0.6293          | 0.7067   |
| 0.6311        | 13.0  | 4875  | 0.6466          | 0.7033   |
| 0.743         | 14.0  | 5250  | 0.5967          | 0.7383   |
| 0.6884        | 15.0  | 5625  | 0.5874          | 0.7533   |
| 0.6214        | 16.0  | 6000  | 0.5678          | 0.7567   |
| 0.6379        | 17.0  | 6375  | 0.6145          | 0.7267   |
| 0.5615        | 18.0  | 6750  | 0.5793          | 0.7417   |
| 0.5825        | 19.0  | 7125  | 0.5647          | 0.76     |
| 0.5806        | 20.0  | 7500  | 0.5298          | 0.7617   |
| 0.5732        | 21.0  | 7875  | 0.6497          | 0.7117   |
| 0.4981        | 22.0  | 8250  | 0.6229          | 0.7283   |
| 0.5878        | 23.0  | 8625  | 0.5476          | 0.77     |
| 0.5732        | 24.0  | 9000  | 0.5431          | 0.7783   |
| 0.5633        | 25.0  | 9375  | 0.5734          | 0.7617   |
| 0.5704        | 26.0  | 9750  | 0.5553          | 0.7683   |
| 0.537         | 27.0  | 10125 | 0.5504          | 0.7733   |
| 0.4571        | 28.0  | 10500 | 0.5331          | 0.7783   |
| 0.5264        | 29.0  | 10875 | 0.5680          | 0.7633   |
| 0.6141        | 30.0  | 11250 | 0.5510          | 0.765    |
| 0.5469        | 31.0  | 11625 | 0.5500          | 0.7933   |
| 0.4915        | 32.0  | 12000 | 0.5001          | 0.785    |
| 0.5227        | 33.0  | 12375 | 0.5958          | 0.7783   |
| 0.4961        | 34.0  | 12750 | 0.5665          | 0.78     |
| 0.4306        | 35.0  | 13125 | 0.5345          | 0.7683   |
| 0.461         | 36.0  | 13500 | 0.5456          | 0.7683   |
| 0.5254        | 37.0  | 13875 | 0.5228          | 0.78     |
| 0.4633        | 38.0  | 14250 | 0.5026          | 0.7917   |
| 0.4546        | 39.0  | 14625 | 0.5577          | 0.7633   |
| 0.4842        | 40.0  | 15000 | 0.5245          | 0.78     |
| 0.4453        | 41.0  | 15375 | 0.5350          | 0.785    |
| 0.3943        | 42.0  | 15750 | 0.5494          | 0.7867   |
| 0.4031        | 43.0  | 16125 | 0.5697          | 0.7833   |
| 0.3729        | 44.0  | 16500 | 0.5326          | 0.7933   |
| 0.3744        | 45.0  | 16875 | 0.5371          | 0.7817   |
| 0.4535        | 46.0  | 17250 | 0.5557          | 0.7817   |
| 0.4267        | 47.0  | 17625 | 0.5568          | 0.7767   |
| 0.372         | 48.0  | 18000 | 0.5642          | 0.77     |
| 0.3734        | 49.0  | 18375 | 0.5737          | 0.785    |
| 0.4125        | 50.0  | 18750 | 0.5824          | 0.78     |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2