---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_5x_beit_base_adamax_00001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8048780487804879
---

hushem_5x_beit_base_adamax_00001_fold5

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6249
  • Accuracy: 0.8049
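
The sketch below is a minimal way to load this checkpoint for inference with the Transformers Auto classes; the repository id and the example image path are assumptions, not something stated in this card.

```python
# Minimal inference sketch; the repository id and image path below are assumed
# and may need to be adjusted for your setup.
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

repo_id = "hkivancoral/hushem_5x_beit_base_adamax_00001_fold5"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # replace with your own image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```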

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
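
For reference, here is a hedged sketch of a TrainingArguments configuration that matches the hyperparameters listed above; the original training script is not included in this card, so the output directory and the per-epoch evaluation/saving strategy are assumptions.

```python
# Hedged sketch of a TrainingArguments setup matching the hyperparameters above.
# `output_dir` and the evaluation/save strategies are assumptions; the model and
# dataset objects from the original run are not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_5x_beit_base_adamax_00001_fold5",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: validation metrics are logged once per epoch
    save_strategy="epoch",        # assumed
)
```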

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2175        | 1.0   | 28   | 1.0638          | 0.5610   |
| 0.735         | 2.0   | 56   | 0.7985          | 0.6585   |
| 0.4163        | 3.0   | 84   | 0.6796          | 0.7561   |
| 0.279         | 4.0   | 112  | 0.5555          | 0.7805   |
| 0.1477        | 5.0   | 140  | 0.5113          | 0.7805   |
| 0.1105        | 6.0   | 168  | 0.4149          | 0.8049   |
| 0.0639        | 7.0   | 196  | 0.4515          | 0.8049   |
| 0.0449        | 8.0   | 224  | 0.4623          | 0.8049   |
| 0.0253        | 9.0   | 252  | 0.4728          | 0.8049   |
| 0.0316        | 10.0  | 280  | 0.4972          | 0.8049   |
| 0.0123        | 11.0  | 308  | 0.4732          | 0.8049   |
| 0.0128        | 12.0  | 336  | 0.4924          | 0.8049   |
| 0.0101        | 13.0  | 364  | 0.4570          | 0.8049   |
| 0.0111        | 14.0  | 392  | 0.4394          | 0.8049   |
| 0.0107        | 15.0  | 420  | 0.4434          | 0.8293   |
| 0.0064        | 16.0  | 448  | 0.5061          | 0.8049   |
| 0.0038        | 17.0  | 476  | 0.4264          | 0.8049   |
| 0.0038        | 18.0  | 504  | 0.4542          | 0.8049   |
| 0.0106        | 19.0  | 532  | 0.5345          | 0.8049   |
| 0.0043        | 20.0  | 560  | 0.5084          | 0.8049   |
| 0.0022        | 21.0  | 588  | 0.5182          | 0.8049   |
| 0.0136        | 22.0  | 616  | 0.4661          | 0.8049   |
| 0.005         | 23.0  | 644  | 0.4938          | 0.8293   |
| 0.0094        | 24.0  | 672  | 0.5151          | 0.8293   |
| 0.0106        | 25.0  | 700  | 0.5393          | 0.8049   |
| 0.0023        | 26.0  | 728  | 0.5196          | 0.8293   |
| 0.0018        | 27.0  | 756  | 0.5228          | 0.8293   |
| 0.0039        | 28.0  | 784  | 0.5509          | 0.8049   |
| 0.002         | 29.0  | 812  | 0.5472          | 0.8049   |
| 0.0023        | 30.0  | 840  | 0.5687          | 0.8049   |
| 0.0017        | 31.0  | 868  | 0.5888          | 0.8049   |
| 0.0023        | 32.0  | 896  | 0.5665          | 0.8049   |
| 0.0021        | 33.0  | 924  | 0.5478          | 0.8049   |
| 0.002         | 34.0  | 952  | 0.5621          | 0.8049   |
| 0.0027        | 35.0  | 980  | 0.5915          | 0.8049   |
| 0.0012        | 36.0  | 1008 | 0.6391          | 0.8049   |
| 0.0008        | 37.0  | 1036 | 0.6817          | 0.8049   |
| 0.0029        | 38.0  | 1064 | 0.6733          | 0.8049   |
| 0.0009        | 39.0  | 1092 | 0.6240          | 0.8049   |
| 0.0018        | 40.0  | 1120 | 0.6057          | 0.8049   |
| 0.0019        | 41.0  | 1148 | 0.6204          | 0.8049   |
| 0.0009        | 42.0  | 1176 | 0.6350          | 0.8049   |
| 0.0017        | 43.0  | 1204 | 0.6368          | 0.8049   |
| 0.006         | 44.0  | 1232 | 0.6329          | 0.8049   |
| 0.0022        | 45.0  | 1260 | 0.6324          | 0.8049   |
| 0.0014        | 46.0  | 1288 | 0.6308          | 0.8049   |
| 0.0013        | 47.0  | 1316 | 0.6209          | 0.8049   |
| 0.0019        | 48.0  | 1344 | 0.6248          | 0.8049   |
| 0.0007        | 49.0  | 1372 | 0.6249          | 0.8049   |
| 0.0012        | 50.0  | 1400 | 0.6249          | 0.8049   |
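
The accuracy values above are consistent with standard top-1 accuracy over argmax predictions. As a hedged sketch only, a metric function of the kind typically passed to the Trainer is shown below; the `evaluate` library and the `compute_metrics` name are assumptions, since the card only declares an `accuracy` metric.

```python
# Hedged sketch of an accuracy computation for the Trainer; the `evaluate`
# library and the function name `compute_metrics` are assumptions.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # top-1 prediction per example
    return accuracy.compute(predictions=predictions, references=labels)
```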

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0