---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_40x_deit_small_sgd_001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8809523809523809
---

# hushem_40x_deit_small_sgd_001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.2365
- Accuracy: 0.8810
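
The reported accuracy is an exact fraction: assuming the test split of this fold holds 42 images (a hypothesis inferred from the repeating decimal in the metadata, not stated in the card), 0.8809523809523809 corresponds to 37 of 42 correct predictions. A quick check:

```python
from fractions import Fraction

# Accuracy as reported in the card's metadata.
reported = 0.8809523809523809

# Recover the simplest fraction with a small denominator; 42 images is
# an inference from the decimal expansion, not a documented fact.
frac = Fraction(reported).limit_denominator(100)
print(frac)  # 37/42
```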

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
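
The linear scheduler with a 0.1 warmup ratio can be sketched as a plain function. Per the results table below, training runs 219 optimizer steps per epoch for 50 epochs, i.e. 10950 steps total, so warmup covers the first 1095 steps. This is an illustration of the schedule shape, not the trainer's actual code (the run used transformers' built-in linear scheduler):

```python
# Sketch of linear warmup + linear decay implied by the hyperparameters
# above (lr=0.001, warmup_ratio=0.1). Step counts are taken from the
# results table: 219 steps/epoch * 50 epochs = 10950 total.
BASE_LR = 0.001
TOTAL_STEPS = 10950
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)  # 1095

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP_STEPS:
        # Ramp linearly from 0 up to BASE_LR during warmup.
        return BASE_LR * step / WARMUP_STEPS
    # Then decay linearly from BASE_LR down to 0.
    return BASE_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(lr_at(0))            # 0.0
print(lr_at(WARMUP_STEPS))  # ~0.001 (peak, end of warmup)
print(lr_at(TOTAL_STEPS))   # 0.0
```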

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2757        | 1.0   | 219   | 1.3298          | 0.2619   |
| 1.0766        | 2.0   | 438   | 1.1919          | 0.4048   |
| 0.9095        | 3.0   | 657   | 1.0786          | 0.5476   |
| 0.7507        | 4.0   | 876   | 0.9821          | 0.5476   |
| 0.6994        | 5.0   | 1095  | 0.8850          | 0.5952   |
| 0.5864        | 6.0   | 1314  | 0.8204          | 0.6429   |
| 0.4328        | 7.0   | 1533  | 0.7576          | 0.6905   |
| 0.4293        | 8.0   | 1752  | 0.6999          | 0.7143   |
| 0.3464        | 9.0   | 1971  | 0.6320          | 0.7143   |
| 0.3175        | 10.0  | 2190  | 0.5956          | 0.7381   |
| 0.2382        | 11.0  | 2409  | 0.5588          | 0.7381   |
| 0.2672        | 12.0  | 2628  | 0.5195          | 0.7381   |
| 0.2016        | 13.0  | 2847  | 0.4850          | 0.8095   |
| 0.1832        | 14.0  | 3066  | 0.4528          | 0.8095   |
| 0.1406        | 15.0  | 3285  | 0.4338          | 0.8333   |
| 0.1305        | 16.0  | 3504  | 0.3948          | 0.8571   |
| 0.1504        | 17.0  | 3723  | 0.3785          | 0.8571   |
| 0.1139        | 18.0  | 3942  | 0.3689          | 0.8571   |
| 0.096         | 19.0  | 4161  | 0.3548          | 0.8571   |
| 0.0869        | 20.0  | 4380  | 0.3393          | 0.8571   |
| 0.0874        | 21.0  | 4599  | 0.3057          | 0.8571   |
| 0.0797        | 22.0  | 4818  | 0.2990          | 0.8571   |
| 0.0596        | 23.0  | 5037  | 0.2862          | 0.8571   |
| 0.053         | 24.0  | 5256  | 0.3012          | 0.8810   |
| 0.0562        | 25.0  | 5475  | 0.2885          | 0.8810   |
| 0.0463        | 26.0  | 5694  | 0.2676          | 0.8810   |
| 0.0374        | 27.0  | 5913  | 0.2870          | 0.8810   |
| 0.037         | 28.0  | 6132  | 0.2638          | 0.8810   |
| 0.0341        | 29.0  | 6351  | 0.2690          | 0.8810   |
| 0.0327        | 30.0  | 6570  | 0.2566          | 0.8810   |
| 0.0238        | 31.0  | 6789  | 0.2611          | 0.8810   |
| 0.0256        | 32.0  | 7008  | 0.2643          | 0.8810   |
| 0.0284        | 33.0  | 7227  | 0.2717          | 0.8810   |
| 0.0213        | 34.0  | 7446  | 0.2627          | 0.8810   |
| 0.0191        | 35.0  | 7665  | 0.2395          | 0.8810   |
| 0.0246        | 36.0  | 7884  | 0.2517          | 0.8810   |
| 0.0207        | 37.0  | 8103  | 0.2515          | 0.8810   |
| 0.0134        | 38.0  | 8322  | 0.2484          | 0.8810   |
| 0.0162        | 39.0  | 8541  | 0.2279          | 0.8810   |
| 0.0165        | 40.0  | 8760  | 0.2516          | 0.8810   |
| 0.0146        | 41.0  | 8979  | 0.2253          | 0.8810   |
| 0.0168        | 42.0  | 9198  | 0.2425          | 0.8810   |
| 0.0155        | 43.0  | 9417  | 0.2370          | 0.8810   |
| 0.0145        | 44.0  | 9636  | 0.2352          | 0.8810   |
| 0.0118        | 45.0  | 9855  | 0.2414          | 0.8810   |
| 0.0107        | 46.0  | 10074 | 0.2338          | 0.8810   |
| 0.0124        | 47.0  | 10293 | 0.2350          | 0.8810   |
| 0.0125        | 48.0  | 10512 | 0.2352          | 0.8810   |
| 0.0138        | 49.0  | 10731 | 0.2367          | 0.8810   |
| 0.0183        | 50.0  | 10950 | 0.2365          | 0.8810   |
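
Worth noting when reading the table: accuracy plateaus at 0.8810 from epoch 24 onward, and validation loss bottoms out at epoch 41 (0.2253) rather than at the final epoch (0.2365). A minimal sketch of picking the best checkpoint by validation loss, using a handful of rows copied from the table:

```python
# (epoch -> validation loss) for selected rows of the results table.
val_loss = {
    24: 0.3012, 30: 0.2566, 35: 0.2395, 39: 0.2279,
    41: 0.2253, 45: 0.2414, 50: 0.2365,
}

# Best checkpoint = lowest validation loss among the logged epochs.
best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, val_loss[best_epoch])  # 41 0.2253
```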

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2