---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_tiny_rms_001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7333333333333333
---

smids_1x_deit_tiny_rms_001_fold4

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9960
  • Accuracy: 0.7333
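
As a usage sketch (not part of the original training code), the checkpoint can be loaded with the standard Hugging Face Transformers image-classification classes; the repo id below assumes this card's name under the hkivancoral namespace, and the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed repo id for this checkpoint; adjust if the model lives elsewhere.
repo_id = "hkivancoral/smids_1x_deit_tiny_rms_001_fold4"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.png").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = model.config.id2label[logits.argmax(-1).item()]
print(predicted)
```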

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
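
For reference, the list above maps onto Hugging Face TrainingArguments roughly as follows. This is a hedged sketch: output_dir and the evaluation/save strategies are assumptions (the per-epoch rows in the results table suggest epoch-level evaluation), and the Adam settings listed above are the optimizer defaults rather than values taken from explicit arguments.

```python
from transformers import TrainingArguments

# Sketch only: output_dir, evaluation_strategy, and save_strategy are assumptions,
# not taken from this card; the remaining values mirror the hyperparameter list.
training_args = TrainingArguments(
    output_dir="smids_1x_deit_tiny_rms_001_fold4",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch validation rows
    save_strategy="epoch",        # assumption
)
```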

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1598        | 1.0   | 75   | 4.0739          | 0.3033   |
| 1.1025        | 2.0   | 150  | 1.1472          | 0.335    |
| 1.0056        | 3.0   | 225  | 0.8843          | 0.5617   |
| 0.9034        | 4.0   | 300  | 0.8985          | 0.5117   |
| 0.8764        | 5.0   | 375  | 0.9613          | 0.5017   |
| 0.9617        | 6.0   | 450  | 0.9074          | 0.5317   |
| 0.8578        | 7.0   | 525  | 0.8240          | 0.5717   |
| 0.8424        | 8.0   | 600  | 0.8437          | 0.5617   |
| 0.8025        | 9.0   | 675  | 0.7942          | 0.5833   |
| 0.7777        | 10.0  | 750  | 0.7683          | 0.57     |
| 0.8053        | 11.0  | 825  | 0.7474          | 0.5983   |
| 0.818         | 12.0  | 900  | 0.7555          | 0.61     |
| 0.8018        | 13.0  | 975  | 0.7629          | 0.5833   |
| 0.8411        | 14.0  | 1050 | 0.7216          | 0.635    |
| 0.6416        | 15.0  | 1125 | 0.8742          | 0.56     |
| 0.8084        | 16.0  | 1200 | 0.7814          | 0.6083   |
| 0.7505        | 17.0  | 1275 | 0.7600          | 0.6183   |
| 0.6996        | 18.0  | 1350 | 0.7346          | 0.6283   |
| 0.7648        | 19.0  | 1425 | 0.7240          | 0.6617   |
| 0.6916        | 20.0  | 1500 | 0.6768          | 0.6767   |
| 0.7556        | 21.0  | 1575 | 0.7263          | 0.6617   |
| 0.6471        | 22.0  | 1650 | 0.7297          | 0.6583   |
| 0.752         | 23.0  | 1725 | 0.7501          | 0.635    |
| 0.7349        | 24.0  | 1800 | 0.6751          | 0.6883   |
| 0.6802        | 25.0  | 1875 | 0.6689          | 0.6817   |
| 0.6239        | 26.0  | 1950 | 0.8871          | 0.5817   |
| 0.6865        | 27.0  | 2025 | 0.6485          | 0.7033   |
| 0.6138        | 28.0  | 2100 | 0.6457          | 0.7233   |
| 0.6707        | 29.0  | 2175 | 0.6937          | 0.6833   |
| 0.6824        | 30.0  | 2250 | 0.6688          | 0.7033   |
| 0.5913        | 31.0  | 2325 | 0.6725          | 0.715    |
| 0.5797        | 32.0  | 2400 | 0.6508          | 0.7167   |
| 0.5524        | 33.0  | 2475 | 0.7048          | 0.7      |
| 0.4736        | 34.0  | 2550 | 0.6807          | 0.6933   |
| 0.5263        | 35.0  | 2625 | 0.6317          | 0.7233   |
| 0.5348        | 36.0  | 2700 | 0.6398          | 0.7367   |
| 0.5082        | 37.0  | 2775 | 0.6440          | 0.7183   |
| 0.4972        | 38.0  | 2850 | 0.6697          | 0.7167   |
| 0.4567        | 39.0  | 2925 | 0.6947          | 0.73     |
| 0.4313        | 40.0  | 3000 | 0.6527          | 0.7383   |
| 0.4762        | 41.0  | 3075 | 0.6875          | 0.74     |
| 0.4293        | 42.0  | 3150 | 0.7259          | 0.7333   |
| 0.4594        | 43.0  | 3225 | 0.7531          | 0.7367   |
| 0.379         | 44.0  | 3300 | 0.7792          | 0.7383   |
| 0.3265        | 45.0  | 3375 | 0.7882          | 0.74     |
| 0.2807        | 46.0  | 3450 | 0.8615          | 0.7367   |
| 0.2733        | 47.0  | 3525 | 0.9438          | 0.73     |
| 0.2273        | 48.0  | 3600 | 0.9312          | 0.73     |
| 0.2189        | 49.0  | 3675 | 0.9889          | 0.7383   |
| 0.1609        | 50.0  | 3750 | 0.9960          | 0.7333   |
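
The accuracy column above is the card's only reported metric. A minimal compute_metrics sketch of the usual pattern for this setup (assuming the evaluate library, which is not listed among the framework versions below) would be:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair produced by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```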

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0