
smids_5x_deit_small_sgd_00001_fold5

This model is a fine-tuned version of facebook/deit-small-patch16-224 on an imagefolder-formatted image dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9917
  • Accuracy: 0.5233
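
For reference, below is a minimal inference sketch using the Transformers image-classification pipeline. The repository id and the input image path are placeholders, not values taken from this card; replace them with the actual checkpoint location and an image from the target domain.

```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned checkpoint; replace the model id with the actual
# hub repo id or a local checkpoint directory (placeholder shown here).
classifier = pipeline(
    "image-classification",
    model="smids_5x_deit_small_sgd_00001_fold5",
)

# "example.png" is a hypothetical input image.
image = Image.open("example.png").convert("RGB")
predictions = classifier(image)
print(predictions)  # list of {"label": ..., "score": ...} dicts
```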

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
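
The card only records that training used the `imagefolder` loader. Assuming the usual one-subdirectory-per-class layout, a hedged sketch of how such data could be loaded is shown below; the `data_dir` path is hypothetical and the actual split setup for this model is not documented.

```python
from datasets import load_dataset

# Hypothetical layout: path/to/images/<class_name>/<image files>.
# The real path and train/validation split used for this run are unknown.
dataset = load_dataset("imagefolder", data_dir="path/to/images")

print(dataset)                                       # DatasetDict, "train" split by default
print(dataset["train"].features["label"].names)      # class names inferred from folder names
```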

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
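
A hedged sketch of a TrainingArguments configuration mirroring the values listed above follows; settings not stated in the card (such as the output directory and evaluation cadence) are illustrative assumptions, and the Adam betas/epsilon shown above match the Trainer's default optimizer settings, so they need no extra arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_5x_deit_small_sgd_00001_fold5",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: consistent with the per-epoch validation log below
)
```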

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1042        | 1.0   | 375   | 1.0744          | 0.425    |
| 1.0586        | 2.0   | 750   | 1.0707          | 0.4317   |
| 1.0589        | 3.0   | 1125  | 1.0672          | 0.4317   |
| 1.0667        | 4.0   | 1500  | 1.0637          | 0.435    |
| 1.0705        | 5.0   | 1875  | 1.0604          | 0.435    |
| 1.0556        | 6.0   | 2250  | 1.0572          | 0.44     |
| 1.037         | 7.0   | 2625  | 1.0541          | 0.4467   |
| 1.0193        | 8.0   | 3000  | 1.0509          | 0.4533   |
| 1.0476        | 9.0   | 3375  | 1.0479          | 0.455    |
| 1.0596        | 10.0  | 3750  | 1.0451          | 0.4617   |
| 1.0294        | 11.0  | 4125  | 1.0422          | 0.465    |
| 1.0295        | 12.0  | 4500  | 1.0395          | 0.47     |
| 1.0343        | 13.0  | 4875  | 1.0368          | 0.47     |
| 1.0353        | 14.0  | 5250  | 1.0343          | 0.4717   |
| 1.029         | 15.0  | 5625  | 1.0318          | 0.4767   |
| 1.027         | 16.0  | 6000  | 1.0293          | 0.48     |
| 1.0226        | 17.0  | 6375  | 1.0270          | 0.4833   |
| 0.9782        | 18.0  | 6750  | 1.0246          | 0.4917   |
| 1.019         | 19.0  | 7125  | 1.0224          | 0.4983   |
| 0.9985        | 20.0  | 7500  | 1.0203          | 0.5017   |
| 0.9654        | 21.0  | 7875  | 1.0183          | 0.5033   |
| 1.0053        | 22.0  | 8250  | 1.0163          | 0.5083   |
| 1.0112        | 23.0  | 8625  | 1.0144          | 0.5117   |
| 1.0077        | 24.0  | 9000  | 1.0126          | 0.515    |
| 0.9958        | 25.0  | 9375  | 1.0109          | 0.515    |
| 0.987         | 26.0  | 9750  | 1.0092          | 0.515    |
| 0.9915        | 27.0  | 10125 | 1.0076          | 0.5133   |
| 0.9929        | 28.0  | 10500 | 1.0061          | 0.5133   |
| 0.9979        | 29.0  | 10875 | 1.0047          | 0.5117   |
| 0.9641        | 30.0  | 11250 | 1.0034          | 0.5117   |
| 0.9905        | 31.0  | 11625 | 1.0021          | 0.5117   |
| 1.0051        | 32.0  | 12000 | 1.0009          | 0.5117   |
| 1.0023        | 33.0  | 12375 | 0.9998          | 0.5133   |
| 0.9697        | 34.0  | 12750 | 0.9988          | 0.5167   |
| 0.9834        | 35.0  | 13125 | 0.9979          | 0.5167   |
| 0.985         | 36.0  | 13500 | 0.9970          | 0.5183   |
| 0.9928        | 37.0  | 13875 | 0.9962          | 0.5183   |
| 0.9649        | 38.0  | 14250 | 0.9954          | 0.52     |
| 1.0054        | 39.0  | 14625 | 0.9948          | 0.5217   |
| 0.9871        | 40.0  | 15000 | 0.9942          | 0.5217   |
| 0.977         | 41.0  | 15375 | 0.9937          | 0.5217   |
| 0.9799        | 42.0  | 15750 | 0.9932          | 0.5217   |
| 0.9791        | 43.0  | 16125 | 0.9928          | 0.525    |
| 0.9745        | 44.0  | 16500 | 0.9925          | 0.525    |
| 0.9751        | 45.0  | 16875 | 0.9922          | 0.525    |
| 0.9977        | 46.0  | 17250 | 0.9920          | 0.5233   |
| 0.9954        | 47.0  | 17625 | 0.9919          | 0.5233   |
| 0.9619        | 48.0  | 18000 | 0.9918          | 0.5233   |
| 0.9797        | 49.0  | 18375 | 0.9917          | 0.5233   |
| 0.9489        | 50.0  | 18750 | 0.9917          | 0.5233   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2