---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_5x_beit_base_rms_0001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7073170731707317
---

hushem_5x_beit_base_rms_0001_fold5

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 3.4047
  • Accuracy: 0.7073
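A minimal inference sketch is shown below. The Hub repository id and the image path are assumptions (not stated in this card); adjust them to where the checkpoint actually lives and to a real input image.

```python
# Minimal inference sketch; the repo id and image path are assumptions.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_5x_beit_base_rms_0001_fold5",  # assumed Hub repo id
)

# "example.jpg" is a placeholder path to any input image.
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```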

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
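The card only records that the data was loaded with the generic imagefolder builder. A loading sketch under that assumption might look like the following; the data_dir path is hypothetical and should point to a directory with one sub-folder per class.

```python
# Sketch of loading image data with the generic "imagefolder" builder.
# "path/to/hushem" is a hypothetical directory laid out as one sub-folder per class.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/hushem")
print(dataset)               # DatasetDict with the available splits
print(dataset["train"][0])   # {"image": <PIL.Image>, "label": <int>}
```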

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
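The listed values map roughly onto a transformers TrainingArguments configuration like the one below. This is a reconstruction sketch from the list above, not the original training script; the output_dir and evaluation_strategy values are assumptions.

```python
# Rough reconstruction of the listed hyperparameters; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_5x_beit_base_rms_0001_fold5",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table reports metrics once per epoch
)
```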

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4155 | 1.0 | 28 | 1.3777 | 0.2683 |
| 1.3848 | 2.0 | 56 | 1.2989 | 0.2927 |
| 1.3314 | 3.0 | 84 | 1.2733 | 0.4878 |
| 1.2486 | 4.0 | 112 | 1.0811 | 0.5122 |
| 1.2007 | 5.0 | 140 | 0.9236 | 0.5854 |
| 1.05 | 6.0 | 168 | 1.1380 | 0.5122 |
| 1.0162 | 7.0 | 196 | 0.9574 | 0.5854 |
| 0.9476 | 8.0 | 224 | 1.4400 | 0.4878 |
| 0.903 | 9.0 | 252 | 0.9012 | 0.6341 |
| 0.9351 | 10.0 | 280 | 1.0183 | 0.6829 |
| 0.8113 | 11.0 | 308 | 0.9612 | 0.6585 |
| 0.8131 | 12.0 | 336 | 1.6631 | 0.4878 |
| 0.7921 | 13.0 | 364 | 0.9316 | 0.6829 |
| 0.8114 | 14.0 | 392 | 1.3372 | 0.5854 |
| 0.7382 | 15.0 | 420 | 1.4796 | 0.6341 |
| 0.7119 | 16.0 | 448 | 1.9753 | 0.5366 |
| 0.6933 | 17.0 | 476 | 1.3458 | 0.7073 |
| 0.591 | 18.0 | 504 | 1.3968 | 0.6585 |
| 0.6986 | 19.0 | 532 | 1.4904 | 0.6829 |
| 0.6832 | 20.0 | 560 | 1.7362 | 0.6585 |
| 0.5173 | 21.0 | 588 | 1.5475 | 0.7317 |
| 0.5116 | 22.0 | 616 | 1.9547 | 0.6585 |
| 0.4833 | 23.0 | 644 | 2.1246 | 0.6341 |
| 0.4295 | 24.0 | 672 | 1.9058 | 0.7317 |
| 0.4431 | 25.0 | 700 | 2.4495 | 0.6585 |
| 0.3801 | 26.0 | 728 | 1.6867 | 0.7561 |
| 0.4263 | 27.0 | 756 | 2.1056 | 0.6585 |
| 0.3209 | 28.0 | 784 | 2.6127 | 0.6098 |
| 0.29 | 29.0 | 812 | 2.2833 | 0.6341 |
| 0.2306 | 30.0 | 840 | 2.6477 | 0.6341 |
| 0.2318 | 31.0 | 868 | 2.2205 | 0.6829 |
| 0.1766 | 32.0 | 896 | 2.1057 | 0.8293 |
| 0.1861 | 33.0 | 924 | 2.9102 | 0.6341 |
| 0.2172 | 34.0 | 952 | 2.3319 | 0.7317 |
| 0.1336 | 35.0 | 980 | 2.7931 | 0.7073 |
| 0.128 | 36.0 | 1008 | 3.2544 | 0.6098 |
| 0.1009 | 37.0 | 1036 | 2.3057 | 0.7805 |
| 0.1495 | 38.0 | 1064 | 2.9047 | 0.7317 |
| 0.0845 | 39.0 | 1092 | 3.1290 | 0.7317 |
| 0.064 | 40.0 | 1120 | 2.9682 | 0.7561 |
| 0.0399 | 41.0 | 1148 | 2.9364 | 0.7561 |
| 0.0198 | 42.0 | 1176 | 4.0340 | 0.6585 |
| 0.0179 | 43.0 | 1204 | 3.2313 | 0.7317 |
| 0.0799 | 44.0 | 1232 | 3.4340 | 0.7317 |
| 0.0495 | 45.0 | 1260 | 3.8737 | 0.6829 |
| 0.041 | 46.0 | 1288 | 3.5139 | 0.6829 |
| 0.0058 | 47.0 | 1316 | 3.4146 | 0.7073 |
| 0.0141 | 48.0 | 1344 | 3.4016 | 0.7073 |
| 0.0316 | 49.0 | 1372 | 3.4047 | 0.7073 |
| 0.0269 | 50.0 | 1400 | 3.4047 | 0.7073 |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
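To sanity-check that a local environment matches these versions, a quick check along these lines can be used (a convenience sketch, not part of the original card):

```python
# Quick environment check against the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.35.2
print("PyTorch:", torch.__version__)              # expected 2.1.0+cu118
print("Datasets:", datasets.__version__)          # expected 2.15.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.15.0
```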