
smids_10x_deit_base_sgd_00001_fold5

This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0147
  • Accuracy: 0.53
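The card does not yet include a usage snippet, so here is a minimal inference sketch. It assumes the `transformers` and `Pillow` packages are installed; the full Hub repository id (owner prefix) is not stated in this card, so the `model_id` below is a placeholder you will need to complete.

```python
def classify(image_path, model_id="smids_10x_deit_base_sgd_00001_fold5"):
    """Classify a single image with the fine-tuned DeiT checkpoint.

    model_id is a placeholder: prepend the owner namespace from the Hub
    (not stated in this card) before running.
    """
    # Imported lazily so the sketch itself has no hard dependency at load time.
    from transformers import pipeline

    classifier = pipeline("image-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, best score first.
    return classifier(image_path)
```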

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
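For readers unfamiliar with `lr_scheduler_warmup_ratio`, the schedule above can be sketched in plain Python. The step counts are taken from the results table (750 optimizer steps per epoch × 50 epochs = 37,500 total, so a 0.1 warmup ratio covers the first 3,750 steps); the function returns a multiplier to apply to the peak learning rate of 1e-05, mirroring the usual linear-warmup-then-linear-decay behavior.

```python
def linear_schedule_with_warmup(step, total_steps=37500, warmup_ratio=0.1):
    """Learning-rate multiplier for a linear schedule with warmup.

    Ramps linearly from 0 to 1 over the warmup steps, then decays
    linearly back to 0 at total_steps. Multiply by the peak LR (1e-05)
    to get the actual learning rate at a given step.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 3750 here
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

For example, the multiplier is 1.0 exactly at the end of warmup (step 3,750) and reaches 0.0 at step 37,500.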

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1118        | 1.0   | 750   | 1.1081          | 0.3483   |
| 1.0989        | 2.0   | 1500  | 1.1045          | 0.3467   |
| 1.098         | 3.0   | 2250  | 1.1011          | 0.3483   |
| 1.0956        | 4.0   | 3000  | 1.0977          | 0.35     |
| 1.0908        | 5.0   | 3750  | 1.0945          | 0.3533   |
| 1.0728        | 6.0   | 4500  | 1.0914          | 0.36     |
| 1.0863        | 7.0   | 5250  | 1.0883          | 0.3783   |
| 1.0893        | 8.0   | 6000  | 1.0853          | 0.3833   |
| 1.0669        | 9.0   | 6750  | 1.0824          | 0.3883   |
| 1.0847        | 10.0  | 7500  | 1.0795          | 0.3933   |
| 1.062         | 11.0  | 8250  | 1.0767          | 0.4033   |
| 1.0612        | 12.0  | 9000  | 1.0739          | 0.4183   |
| 1.0546        | 13.0  | 9750  | 1.0712          | 0.4267   |
| 1.0486        | 14.0  | 10500 | 1.0685          | 0.43     |
| 1.0488        | 15.0  | 11250 | 1.0659          | 0.4367   |
| 1.0504        | 16.0  | 12000 | 1.0633          | 0.4367   |
| 1.0424        | 17.0  | 12750 | 1.0607          | 0.4417   |
| 1.0426        | 18.0  | 13500 | 1.0582          | 0.4533   |
| 1.0273        | 19.0  | 14250 | 1.0557          | 0.4533   |
| 1.0365        | 20.0  | 15000 | 1.0533          | 0.4633   |
| 1.0042        | 21.0  | 15750 | 1.0509          | 0.4683   |
| 1.0296        | 22.0  | 16500 | 1.0486          | 0.4717   |
| 1.0417        | 23.0  | 17250 | 1.0462          | 0.475    |
| 1.0189        | 24.0  | 18000 | 1.0440          | 0.4817   |
| 1.0029        | 25.0  | 18750 | 1.0418          | 0.4833   |
| 1.0163        | 26.0  | 19500 | 1.0396          | 0.4883   |
| 1.0097        | 27.0  | 20250 | 1.0376          | 0.4867   |
| 1.0284        | 28.0  | 21000 | 1.0356          | 0.49     |
| 1.0024        | 29.0  | 21750 | 1.0337          | 0.4967   |
| 1.0018        | 30.0  | 22500 | 1.0318          | 0.505    |
| 1.0059        | 31.0  | 23250 | 1.0301          | 0.5083   |
| 1.0064        | 32.0  | 24000 | 1.0284          | 0.5133   |
| 1.0048        | 33.0  | 24750 | 1.0269          | 0.5133   |
| 0.9972        | 34.0  | 25500 | 1.0254          | 0.5217   |
| 0.9969        | 35.0  | 26250 | 1.0240          | 0.5217   |
| 1.0105        | 36.0  | 27000 | 1.0228          | 0.5217   |
| 0.9858        | 37.0  | 27750 | 1.0216          | 0.525    |
| 1.0046        | 38.0  | 28500 | 1.0205          | 0.525    |
| 0.9725        | 39.0  | 29250 | 1.0195          | 0.5267   |
| 0.9836        | 40.0  | 30000 | 1.0186          | 0.5267   |
| 0.9756        | 41.0  | 30750 | 1.0178          | 0.5283   |
| 0.9954        | 42.0  | 31500 | 1.0171          | 0.5283   |
| 1.0089        | 43.0  | 32250 | 1.0165          | 0.5283   |
| 0.9937        | 44.0  | 33000 | 1.0160          | 0.5267   |
| 0.9919        | 45.0  | 33750 | 1.0156          | 0.53     |
| 0.9943        | 46.0  | 34500 | 1.0153          | 0.53     |
| 0.9683        | 47.0  | 35250 | 1.0150          | 0.53     |
| 0.9833        | 48.0  | 36000 | 1.0148          | 0.53     |
| 0.9815        | 49.0  | 36750 | 1.0147          | 0.53     |
| 0.9749        | 50.0  | 37500 | 1.0147          | 0.53     |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2