---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_beit_base_rms_001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7579298831385642
---

# smids_3x_beit_base_rms_001_fold1

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows these results):

- Loss: 0.7807
- Accuracy: 0.7579
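
For reference, here is a minimal, untested sketch of how this checkpoint could be loaded for inference with the Transformers `pipeline` API. It assumes the model is published on the Hub under the repo id `hkivancoral/smids_3x_beit_base_rms_001_fold1` (inferred from this card) and uses a placeholder image path.

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint is available under this Hub repo id.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_3x_beit_base_rms_001_fold1",
)

# "example.jpg" is a placeholder path to an input image.
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dictionaries
```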

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
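
The card does not document the underlying images beyond the `imagefolder` dataset builder named in the metadata. Purely as an illustration (with a placeholder data directory), such a dataset is typically loaded like this:

```python
from datasets import load_dataset

# "path/to/smids_images" is a placeholder; the actual data location and class
# structure are not documented in this card.
dataset = load_dataset("imagefolder", data_dir="path/to/smids_images")
print(dataset)  # usually a DatasetDict with a "train" split (plus any others present)
```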

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an approximate `TrainingArguments` sketch follows this list):

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
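
As a rough, unverified reconstruction, these settings correspond approximately to the `TrainingArguments` sketch below. The output directory is a placeholder, and the actual training script may have differed in details not listed above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_beit_base_rms_001_fold1",  # placeholder output path
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the results table reports one row per epoch
)
# The reported Adam settings (betas=(0.9, 0.999), epsilon=1e-08) match the
# Trainer's default optimizer configuration, so no explicit optimizer override is shown.
```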

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1042 | 1.0 | 226 | 1.0981 | 0.3456 |
| 0.942 | 2.0 | 452 | 0.9022 | 0.5275 |
| 0.8328 | 3.0 | 678 | 0.9600 | 0.4691 |
| 0.8702 | 4.0 | 904 | 0.9083 | 0.5543 |
| 0.8313 | 5.0 | 1130 | 0.8171 | 0.5760 |
| 0.8558 | 6.0 | 1356 | 0.8467 | 0.5342 |
| 0.7514 | 7.0 | 1582 | 0.7612 | 0.6277 |
| 0.7839 | 8.0 | 1808 | 0.7968 | 0.5659 |
| 0.7602 | 9.0 | 2034 | 0.7655 | 0.6210 |
| 0.7694 | 10.0 | 2260 | 0.7429 | 0.6060 |
| 0.7166 | 11.0 | 2486 | 0.7968 | 0.5626 |
| 0.7048 | 12.0 | 2712 | 0.8272 | 0.6077 |
| 0.6745 | 13.0 | 2938 | 0.8054 | 0.5993 |
| 0.7185 | 14.0 | 3164 | 0.7867 | 0.6194 |
| 0.7264 | 15.0 | 3390 | 0.7701 | 0.6377 |
| 0.6767 | 16.0 | 3616 | 0.7383 | 0.6144 |
| 0.6006 | 17.0 | 3842 | 0.8677 | 0.6077 |
| 0.6721 | 18.0 | 4068 | 0.7460 | 0.6361 |
| 0.6352 | 19.0 | 4294 | 0.7492 | 0.6127 |
| 0.642 | 20.0 | 4520 | 0.7712 | 0.6160 |
| 0.6647 | 21.0 | 4746 | 0.7257 | 0.6544 |
| 0.6408 | 22.0 | 4972 | 0.7629 | 0.6611 |
| 0.7655 | 23.0 | 5198 | 0.7723 | 0.6127 |
| 0.7074 | 24.0 | 5424 | 0.6879 | 0.6928 |
| 0.6919 | 25.0 | 5650 | 0.6962 | 0.6828 |
| 0.698 | 26.0 | 5876 | 0.7479 | 0.6361 |
| 0.641 | 27.0 | 6102 | 0.7653 | 0.6644 |
| 0.6417 | 28.0 | 6328 | 0.7791 | 0.6594 |
| 0.6123 | 29.0 | 6554 | 0.7195 | 0.6761 |
| 0.5918 | 30.0 | 6780 | 0.6991 | 0.6995 |
| 0.5562 | 31.0 | 7006 | 0.6938 | 0.6978 |
| 0.6293 | 32.0 | 7232 | 0.6564 | 0.7145 |
| 0.5615 | 33.0 | 7458 | 0.7421 | 0.6878 |
| 0.5411 | 34.0 | 7684 | 0.6688 | 0.7145 |
| 0.4483 | 35.0 | 7910 | 0.7701 | 0.6962 |
| 0.4776 | 36.0 | 8136 | 0.6349 | 0.7412 |
| 0.4775 | 37.0 | 8362 | 0.6430 | 0.7262 |
| 0.4854 | 38.0 | 8588 | 0.7095 | 0.7078 |
| 0.4208 | 39.0 | 8814 | 0.6254 | 0.7412 |
| 0.3846 | 40.0 | 9040 | 0.6645 | 0.7396 |
| 0.3663 | 41.0 | 9266 | 0.6430 | 0.7563 |
| 0.3616 | 42.0 | 9492 | 0.6767 | 0.7462 |
| 0.3863 | 43.0 | 9718 | 0.6432 | 0.7596 |
| 0.2608 | 44.0 | 9944 | 0.6472 | 0.7563 |
| 0.4159 | 45.0 | 10170 | 0.6400 | 0.7663 |
| 0.333 | 46.0 | 10396 | 0.6911 | 0.7613 |
| 0.2718 | 47.0 | 10622 | 0.7119 | 0.7696 |
| 0.2639 | 48.0 | 10848 | 0.7421 | 0.7596 |
| 0.236 | 49.0 | 11074 | 0.7543 | 0.7613 |
| 0.2672 | 50.0 | 11300 | 0.7807 | 0.7579 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2