---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_beit_base_rms_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6978297161936561
---

smids_1x_beit_base_rms_0001_fold1

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7464
  • Accuracy: 0.6978
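
Since the card leaves usage details open, here is a minimal inference sketch. It assumes the checkpoint is published under the repo id hkivancoral/smids_1x_beit_base_rms_0001_fold1 (inferred from the model name) and that example.jpg is a locally available test image; substitute a different repo id or a local checkpoint path as needed.

```python
# Minimal inference sketch; the repo id and image path are assumptions,
# not taken from the original card.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_beit_base_rms_0001_fold1",  # assumed repo id
)

image = Image.open("example.jpg")  # hypothetical input image
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```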

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
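
The card only identifies the data as an imagefolder dataset. As a rough illustration (the directory path below is hypothetical), a dataset with one subdirectory per class can be loaded this way:

```python
# Illustrative only: "path/to/smids_images" is a hypothetical directory with one
# subfolder per class, which is the layout the "imagefolder" loader expects.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/smids_images")
print(dataset)                    # DatasetDict with the available splits
print(dataset["train"].features)  # an "image" column plus a class "label"
```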

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
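
For readers reconstructing the setup, the values above map roughly onto transformers.TrainingArguments as sketched below. This is a reconstruction, not the original training script: the output directory and the per-epoch evaluation strategy are assumptions, and the Adam betas and epsilon listed above are the library defaults.

```python
# Approximate reconstruction of the training configuration; output_dir and the
# evaluation strategy are assumptions, not taken from the original card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_beit_base_rms_0001_fold1",  # hypothetical
    learning_rate=0.001,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed; matches the per-epoch results table
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers defaults.
)
```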

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1002 | 1.0 | 76 | 0.9320 | 0.5459 |
| 0.9176 | 2.0 | 152 | 0.9156 | 0.4975 |
| 0.8828 | 3.0 | 228 | 1.4808 | 0.3239 |
| 0.9116 | 4.0 | 304 | 0.9182 | 0.5058 |
| 0.9681 | 5.0 | 380 | 0.8261 | 0.5726 |
| 0.8914 | 6.0 | 456 | 0.8412 | 0.5442 |
| 0.8118 | 7.0 | 532 | 0.8070 | 0.5843 |
| 0.7886 | 8.0 | 608 | 0.7873 | 0.6144 |
| 0.8228 | 9.0 | 684 | 0.8018 | 0.5593 |
| 0.7855 | 10.0 | 760 | 0.8650 | 0.5659 |
| 0.7506 | 11.0 | 836 | 0.8105 | 0.5726 |
| 0.8105 | 12.0 | 912 | 0.7718 | 0.5760 |
| 0.7542 | 13.0 | 988 | 0.7814 | 0.6027 |
| 0.8063 | 14.0 | 1064 | 0.7598 | 0.6244 |
| 0.6853 | 15.0 | 1140 | 0.9554 | 0.5526 |
| 0.6995 | 16.0 | 1216 | 0.7869 | 0.6277 |
| 0.7413 | 17.0 | 1292 | 0.7345 | 0.6561 |
| 0.6942 | 18.0 | 1368 | 0.7274 | 0.6511 |
| 0.7698 | 19.0 | 1444 | 0.7431 | 0.6711 |
| 0.7328 | 20.0 | 1520 | 0.7361 | 0.6327 |
| 0.7002 | 21.0 | 1596 | 0.7435 | 0.6427 |
| 0.6967 | 22.0 | 1672 | 0.8269 | 0.6010 |
| 0.651 | 23.0 | 1748 | 0.7688 | 0.6528 |
| 0.6937 | 24.0 | 1824 | 0.7386 | 0.6578 |
| 0.5694 | 25.0 | 1900 | 0.7657 | 0.6277 |
| 0.6705 | 26.0 | 1976 | 0.7210 | 0.6811 |
| 0.5989 | 27.0 | 2052 | 0.7453 | 0.6561 |
| 0.6274 | 28.0 | 2128 | 0.7780 | 0.6578 |
| 0.5748 | 29.0 | 2204 | 0.7338 | 0.6845 |
| 0.6764 | 30.0 | 2280 | 0.7373 | 0.6394 |
| 0.6934 | 31.0 | 2356 | 0.7055 | 0.6845 |
| 0.6007 | 32.0 | 2432 | 0.7394 | 0.6511 |
| 0.5933 | 33.0 | 2508 | 0.7124 | 0.6795 |
| 0.5894 | 34.0 | 2584 | 0.7760 | 0.6711 |
| 0.6837 | 35.0 | 2660 | 0.7002 | 0.6628 |
| 0.5776 | 36.0 | 2736 | 0.7352 | 0.6694 |
| 0.6485 | 37.0 | 2812 | 0.7046 | 0.6878 |
| 0.5352 | 38.0 | 2888 | 0.7058 | 0.6861 |
| 0.577 | 39.0 | 2964 | 0.6974 | 0.7028 |
| 0.5712 | 40.0 | 3040 | 0.7122 | 0.6811 |
| 0.5117 | 41.0 | 3116 | 0.7026 | 0.6845 |
| 0.4908 | 42.0 | 3192 | 0.7187 | 0.7045 |
| 0.4784 | 43.0 | 3268 | 0.7103 | 0.7028 |
| 0.4739 | 44.0 | 3344 | 0.7027 | 0.7162 |
| 0.5942 | 45.0 | 3420 | 0.7242 | 0.6962 |
| 0.4258 | 46.0 | 3496 | 0.7593 | 0.6912 |
| 0.4726 | 47.0 | 3572 | 0.7433 | 0.6895 |
| 0.4422 | 48.0 | 3648 | 0.7412 | 0.6928 |
| 0.4049 | 49.0 | 3724 | 0.7425 | 0.6995 |
| 0.5059 | 50.0 | 3800 | 0.7464 | 0.6978 |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
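
As a quick check that a local environment matches these versions (an illustrative snippet, not part of the original card):

```python
# Print installed versions to compare against the ones listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.35.2
print("PyTorch:", torch.__version__)              # expected 2.1.0+cu118
print("Datasets:", datasets.__version__)          # expected 2.15.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.15.0
```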