
hushem_40x_deit_base_sgd_0001_fold2

This model is a fine-tuned version of facebook/deit-base-patch16-224 on a dataset loaded with the imagefolder loader. It achieves the following results on the evaluation set:

  • Loss: 1.2543
  • Accuracy: 0.3333

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
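
As a minimal sketch (not part of the card), the linear schedule with 10% warmup implied by these settings can be written out directly; the step counts come from the training table below (215 steps per epoch × 50 epochs = 10750), and the names are illustrative:

```python
# Sketch of the linear LR schedule with warmup implied by the
# hyperparameters above; mirrors the shape produced by transformers'
# get_linear_schedule_with_warmup, written in plain Python.
BASE_LR = 1e-4                         # learning_rate
TOTAL_STEPS = 50 * 215                 # num_epochs x steps/epoch = 10750
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)  # lr_scheduler_warmup_ratio = 0.1

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP_STEPS:
        # linear warmup from 0 up to BASE_LR
        return BASE_LR * step / WARMUP_STEPS
    # linear decay from BASE_LR at the end of warmup down to 0
    return BASE_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```

Under these settings the peak rate of 1e-4 is reached at step 1075 and decays to zero by step 10750.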

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3992        | 1.0   | 215   | 1.3998          | 0.2667   |
| 1.3515        | 2.0   | 430   | 1.3921          | 0.3111   |
| 1.3529        | 3.0   | 645   | 1.3851          | 0.3111   |
| 1.3241        | 4.0   | 860   | 1.3787          | 0.3111   |
| 1.3174        | 5.0   | 1075  | 1.3730          | 0.3333   |
| 1.331         | 6.0   | 1290  | 1.3675          | 0.3333   |
| 1.3242        | 7.0   | 1505  | 1.3623          | 0.3333   |
| 1.2565        | 8.0   | 1720  | 1.3574          | 0.3111   |
| 1.2719        | 9.0   | 1935  | 1.3525          | 0.3333   |
| 1.2647        | 10.0  | 2150  | 1.3477          | 0.3333   |
| 1.2293        | 11.0  | 2365  | 1.3430          | 0.3556   |
| 1.1981        | 12.0  | 2580  | 1.3386          | 0.3556   |
| 1.2258        | 13.0  | 2795  | 1.3342          | 0.3556   |
| 1.1901        | 14.0  | 3010  | 1.3299          | 0.3111   |
| 1.1988        | 15.0  | 3225  | 1.3257          | 0.3333   |
| 1.2048        | 16.0  | 3440  | 1.3214          | 0.3333   |
| 1.1848        | 17.0  | 3655  | 1.3173          | 0.3333   |
| 1.1799        | 18.0  | 3870  | 1.3134          | 0.3333   |
| 1.1629        | 19.0  | 4085  | 1.3095          | 0.3333   |
| 1.1482        | 20.0  | 4300  | 1.3058          | 0.3333   |
| 1.1478        | 21.0  | 4515  | 1.3022          | 0.3333   |
| 1.1472        | 22.0  | 4730  | 1.2988          | 0.3333   |
| 1.1143        | 23.0  | 4945  | 1.2955          | 0.3333   |
| 1.1122        | 24.0  | 5160  | 1.2923          | 0.3333   |
| 1.1           | 25.0  | 5375  | 1.2893          | 0.3111   |
| 1.1089        | 26.0  | 5590  | 1.2864          | 0.3111   |
| 1.1001        | 27.0  | 5805  | 1.2836          | 0.3333   |
| 1.0858        | 28.0  | 6020  | 1.2810          | 0.3333   |
| 1.093         | 29.0  | 6235  | 1.2786          | 0.3333   |
| 1.081         | 30.0  | 6450  | 1.2761          | 0.3333   |
| 1.0426        | 31.0  | 6665  | 1.2739          | 0.3333   |
| 1.0757        | 32.0  | 6880  | 1.2718          | 0.3333   |
| 1.0524        | 33.0  | 7095  | 1.2698          | 0.3333   |
| 1.0725        | 34.0  | 7310  | 1.2680          | 0.3333   |
| 1.055         | 35.0  | 7525  | 1.2662          | 0.3333   |
| 1.0308        | 36.0  | 7740  | 1.2646          | 0.3333   |
| 1.0978        | 37.0  | 7955  | 1.2631          | 0.3333   |
| 1.0264        | 38.0  | 8170  | 1.2617          | 0.3333   |
| 1.0508        | 39.0  | 8385  | 1.2604          | 0.3333   |
| 1.0208        | 40.0  | 8600  | 1.2593          | 0.3333   |
| 1.0505        | 41.0  | 8815  | 1.2583          | 0.3333   |
| 1.0147        | 42.0  | 9030  | 1.2574          | 0.3333   |
| 1.0355        | 43.0  | 9245  | 1.2566          | 0.3333   |
| 1.0064        | 44.0  | 9460  | 1.2559          | 0.3333   |
| 1.019         | 45.0  | 9675  | 1.2554          | 0.3333   |
| 1.0292        | 46.0  | 9890  | 1.2549          | 0.3333   |
| 1.0584        | 47.0  | 10105 | 1.2546          | 0.3333   |
| 1.0425        | 48.0  | 10320 | 1.2544          | 0.3333   |
| 1.0211        | 49.0  | 10535 | 1.2543          | 0.3333   |
| 1.0221        | 50.0  | 10750 | 1.2543          | 0.3333   |
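
For context (an observation not stated in the card): the epoch-1 validation loss of 1.3998 sits just above ln 4 ≈ 1.3863, the cross-entropy of a uniform prediction over four classes. If the imagefolder dataset has four classes (an assumption, since the card does not say), the near-chance losses and accuracies above indicate the model learned only modestly beyond a uniform baseline:

```python
import math

# Cross-entropy of a uniform prediction over C classes is ln(C).
# With the assumed C = 4, this is close to the epoch-1 validation
# loss (1.3998) reported in the table above.
chance_loss = math.log(4)
print(round(chance_loss, 4))  # 1.3863
```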

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2