---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_beit_base_rms_0001_fold2
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.740432612312812
---

smids_1x_beit_base_rms_0001_fold2

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9358
  • Accuracy: 0.7404
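
A minimal inference sketch using the transformers image-classification pipeline; the repository ID `hkivancoral/smids_1x_beit_base_rms_0001_fold2` and the input file name are assumptions inferred from this card, not part of the original training code:

```python
from transformers import pipeline

# Hypothetical repository ID inferred from the model name on this card;
# adjust if the checkpoint is stored elsewhere.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_beit_base_rms_0001_fold2",
)

# Classify a local image; the pipeline applies the BEiT image processor
# (resize to 224x224, normalize) before the forward pass.
predictions = classifier("example.png")  # placeholder file name
print(predictions)  # list of {"label": ..., "score": ...} sorted by score
```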

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
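
A minimal sketch of how these settings map onto transformers `TrainingArguments`; the output directory is a placeholder, and the dataset, model, and Trainer setup are omitted:

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# The listed optimizer settings (betas=(0.9, 0.999), epsilon=1e-08) match the
# Trainer's default AdamW configuration, so no explicit optimizer is set.
args = TrainingArguments(
    output_dir="smids_1x_beit_base_rms_0001_fold2",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # the per-epoch validation results below imply epoch-level evaluation
)
```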

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0437        | 1.0   | 75   | 0.9679          | 0.5042   |
| 0.9234        | 2.0   | 150  | 0.8669          | 0.5208   |
| 1.0795        | 3.0   | 225  | 0.7926          | 0.5874   |
| 0.9543        | 4.0   | 300  | 0.8244          | 0.5507   |
| 0.8239        | 5.0   | 375  | 0.7959          | 0.5857   |
| 0.7924        | 6.0   | 450  | 0.7928          | 0.5890   |
| 0.8468        | 7.0   | 525  | 0.7806          | 0.6256   |
| 0.8608        | 8.0   | 600  | 0.9027          | 0.5408   |
| 0.7878        | 9.0   | 675  | 0.7544          | 0.6373   |
| 0.9079        | 10.0  | 750  | 0.7732          | 0.6190   |
| 0.7705        | 11.0  | 825  | 0.7349          | 0.6290   |
| 0.7586        | 12.0  | 900  | 0.7322          | 0.6306   |
| 0.7794        | 13.0  | 975  | 0.7224          | 0.6323   |
| 0.7123        | 14.0  | 1050 | 0.7252          | 0.6572   |
| 0.744         | 15.0  | 1125 | 0.7450          | 0.5990   |
| 0.7086        | 16.0  | 1200 | 0.6962          | 0.6639   |
| 0.7295        | 17.0  | 1275 | 0.7508          | 0.6489   |
| 0.7289        | 18.0  | 1350 | 0.6978          | 0.6722   |
| 0.6947        | 19.0  | 1425 | 0.7112          | 0.6739   |
| 0.6923        | 20.0  | 1500 | 0.7131          | 0.6805   |
| 0.7545        | 21.0  | 1575 | 0.7480          | 0.6223   |
| 0.68          | 22.0  | 1650 | 0.6683          | 0.6839   |
| 0.7107        | 23.0  | 1725 | 0.6889          | 0.6772   |
| 0.6933        | 24.0  | 1800 | 0.6566          | 0.6822   |
| 0.6429        | 25.0  | 1875 | 0.6381          | 0.7005   |
| 0.6742        | 26.0  | 1950 | 0.6536          | 0.6822   |
| 0.6753        | 27.0  | 2025 | 0.6462          | 0.6889   |
| 0.6228        | 28.0  | 2100 | 0.6368          | 0.7022   |
| 0.6193        | 29.0  | 2175 | 0.6115          | 0.7171   |
| 0.5568        | 30.0  | 2250 | 0.6625          | 0.7188   |
| 0.584         | 31.0  | 2325 | 0.6680          | 0.6922   |
| 0.581         | 32.0  | 2400 | 0.5723          | 0.7654   |
| 0.5698        | 33.0  | 2475 | 0.6173          | 0.7205   |
| 0.5032        | 34.0  | 2550 | 0.6176          | 0.7338   |
| 0.5019        | 35.0  | 2625 | 0.6137          | 0.7438   |
| 0.4921        | 36.0  | 2700 | 0.5855          | 0.7571   |
| 0.453         | 37.0  | 2775 | 0.6724          | 0.7271   |
| 0.4913        | 38.0  | 2850 | 0.6043          | 0.7720   |
| 0.3871        | 39.0  | 2925 | 0.6124          | 0.7704   |
| 0.4014        | 40.0  | 3000 | 0.6591          | 0.7521   |
| 0.4698        | 41.0  | 3075 | 0.6575          | 0.7604   |
| 0.375         | 42.0  | 3150 | 0.6735          | 0.7471   |
| 0.317         | 43.0  | 3225 | 0.7867          | 0.7504   |
| 0.2968        | 44.0  | 3300 | 0.7423          | 0.7521   |
| 0.2919        | 45.0  | 3375 | 0.8253          | 0.7504   |
| 0.2598        | 46.0  | 3450 | 0.8629          | 0.7421   |
| 0.1951        | 47.0  | 3525 | 0.8586          | 0.7704   |
| 0.1905        | 48.0  | 3600 | 0.9010          | 0.7438   |
| 0.1278        | 49.0  | 3675 | 0.9354          | 0.7454   |
| 0.2294        | 50.0  | 3750 | 0.9358          | 0.7404   |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0
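
A small sketch for checking that a local environment matches the versions pinned above; the mismatch reporting is illustrative only:

```python
import datasets
import tokenizers
import torch
import transformers

# Versions reported on this card; flag any local package that differs.
expected = {
    "transformers": ("4.35.2", transformers.__version__),
    "torch": ("2.1.0+cu118", torch.__version__),
    "datasets": ("2.15.0", datasets.__version__),
    "tokenizers": ("0.15.0", tokenizers.__version__),
}
for name, (wanted, found) in expected.items():
    status = "OK" if found == wanted else "MISMATCH"
    print(f"{name}: expected {wanted}, found {found} [{status}]")
```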