
hushem_40x_deit_tiny_sgd_001_fold1

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on an image-classification dataset loaded with the Datasets imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 0.9268
  • Accuracy: 0.7333

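For inference, the checkpoint can be used with the standard Transformers image-classification pipeline. A minimal sketch, assuming the model is published on the Hub under a repo id ending in hushem_40x_deit_tiny_sgd_001_fold1 (the namespace is not stated in this card) and that example.jpg is a local test image:

```python
from PIL import Image
from transformers import pipeline

# Assumed Hub repo id; replace <user> with the actual namespace of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<user>/hushem_40x_deit_tiny_sgd_001_fold1",
)

image = Image.open("example.jpg")   # any image of the kind the model was fine-tuned on
predictions = classifier(image)
print(predictions)                  # list of {"label": ..., "score": ...} dicts, best first
```
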
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

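Since the card only names the imagefolder loader, here is a minimal sketch of loading data that way with the Datasets library, assuming a local directory with one subfolder per class (the real paths and split names are not documented here):

```python
from datasets import load_dataset

# Assumed layout: data/train/<class_name>/*.jpg and data/validation/<class_name>/*.jpg
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                              # DatasetDict with the discovered splits
print(dataset["train"].features["label"])   # ClassLabel built from the folder names
```

The imagefolder builder infers the class labels from the subfolder names, so no separate label file is needed.
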
Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the reproduction sketch after this list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50

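A sketch of how these settings map onto transformers.TrainingArguments and Trainer, assuming the data is loaded with the imagefolder builder as above; the data directory, split names, and preprocessing are assumptions, since the actual training script is not included in this card:

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Assumed data layout: data/train/<class>/*.jpg and data/validation/<class>/*.jpg
dataset = load_dataset("imagefolder", data_dir="data")
labels = dataset["train"].features["label"].names

image_processor = AutoImageProcessor.from_pretrained("facebook/deit-tiny-patch16-224")

def preprocess(batch):
    # Resize and normalize PIL images into the pixel_values tensor DeiT expects.
    batch["pixel_values"] = image_processor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )["pixel_values"]
    del batch["image"]
    return batch

dataset = dataset.with_transform(preprocess)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-tiny-patch16-224",
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # swap the 1000-class ImageNet head for this label set
)

# The Adam betas/epsilon listed above are the Trainer defaults, so no optimizer override is set.
args = TrainingArguments(
    output_dir="hushem_40x_deit_tiny_sgd_001_fold1",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",   # assumed: the card reports validation metrics once per epoch
    remove_unused_columns=False,   # keep the "image" column available to the on-the-fly transform
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumed split name
    data_collator=collate_fn,
)
trainer.train()
```
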
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1534        | 1.0   | 215   | 1.3121          | 0.3778   |
| 0.8744        | 2.0   | 430   | 1.2305          | 0.5111   |
| 0.7578        | 3.0   | 645   | 1.1023          | 0.5556   |
| 0.6354        | 4.0   | 860   | 0.9584          | 0.5778   |
| 0.4832        | 5.0   | 1075  | 0.8877          | 0.6444   |
| 0.4506        | 6.0   | 1290  | 0.8214          | 0.6889   |
| 0.3619        | 7.0   | 1505  | 0.8077          | 0.6889   |
| 0.3187        | 8.0   | 1720  | 0.7845          | 0.6667   |
| 0.2423        | 9.0   | 1935  | 0.7629          | 0.7111   |
| 0.2351        | 10.0  | 2150  | 0.7464          | 0.7333   |
| 0.2043        | 11.0  | 2365  | 0.7249          | 0.6889   |
| 0.1712        | 12.0  | 2580  | 0.7297          | 0.7111   |
| 0.1294        | 13.0  | 2795  | 0.7280          | 0.7333   |
| 0.1185        | 14.0  | 3010  | 0.7610          | 0.7333   |
| 0.1264        | 15.0  | 3225  | 0.7479          | 0.7333   |
| 0.0869        | 16.0  | 3440  | 0.7617          | 0.7333   |
| 0.0902        | 17.0  | 3655  | 0.7623          | 0.7333   |
| 0.0782        | 18.0  | 3870  | 0.7805          | 0.7333   |
| 0.071         | 19.0  | 4085  | 0.7715          | 0.7333   |
| 0.063         | 20.0  | 4300  | 0.7777          | 0.7333   |
| 0.0587        | 21.0  | 4515  | 0.7497          | 0.7333   |
| 0.0675        | 22.0  | 4730  | 0.7998          | 0.7333   |
| 0.0426        | 23.0  | 4945  | 0.8200          | 0.7333   |
| 0.0373        | 24.0  | 5160  | 0.8281          | 0.7111   |
| 0.0441        | 25.0  | 5375  | 0.8317          | 0.7111   |
| 0.0323        | 26.0  | 5590  | 0.8133          | 0.7111   |
| 0.0359        | 27.0  | 5805  | 0.8214          | 0.7111   |
| 0.0291        | 28.0  | 6020  | 0.8265          | 0.7111   |
| 0.0287        | 29.0  | 6235  | 0.8490          | 0.7111   |
| 0.0271        | 30.0  | 6450  | 0.8534          | 0.7111   |
| 0.0256        | 31.0  | 6665  | 0.8626          | 0.7111   |
| 0.0212        | 32.0  | 6880  | 0.8791          | 0.7111   |
| 0.0155        | 33.0  | 7095  | 0.8740          | 0.7333   |
| 0.0144        | 34.0  | 7310  | 0.8433          | 0.7333   |
| 0.0132        | 35.0  | 7525  | 0.8680          | 0.7333   |
| 0.015         | 36.0  | 7740  | 0.8880          | 0.7333   |
| 0.0129        | 37.0  | 7955  | 0.8931          | 0.7333   |
| 0.018         | 38.0  | 8170  | 0.8891          | 0.7333   |
| 0.0092        | 39.0  | 8385  | 0.9122          | 0.7333   |
| 0.0085        | 40.0  | 8600  | 0.9159          | 0.7333   |
| 0.0124        | 41.0  | 8815  | 0.9199          | 0.7333   |
| 0.0125        | 42.0  | 9030  | 0.9056          | 0.7333   |
| 0.0107        | 43.0  | 9245  | 0.9191          | 0.7333   |
| 0.0095        | 44.0  | 9460  | 0.9083          | 0.7333   |
| 0.0115        | 45.0  | 9675  | 0.9189          | 0.7333   |
| 0.0088        | 46.0  | 9890  | 0.9241          | 0.7333   |
| 0.0065        | 47.0  | 10105 | 0.9299          | 0.7333   |
| 0.007         | 48.0  | 10320 | 0.9257          | 0.7333   |
| 0.0129        | 49.0  | 10535 | 0.9260          | 0.7333   |
| 0.0229        | 50.0  | 10750 | 0.9268          | 0.7333   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.1+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2