---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_beit_base_rms_00001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8833333333333333
---

# smids_3x_beit_base_rms_00001_fold4

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.2428
- Accuracy: 0.8833
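
For reference, below is a minimal inference sketch using the Transformers `pipeline` API. The repository id is inferred from this card's title and the input image path is hypothetical; neither is stated explicitly in the card.

```python
# Minimal inference sketch (assumption: the repo id below matches this card's
# title under the author's namespace; the input file is a placeholder).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_3x_beit_base_rms_00001_fold4",  # assumed repo id
)

predictions = classifier("example_image.png")  # hypothetical input image
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```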

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
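
The `generated_from_trainer` tag suggests the Hugging Face `Trainer` was used, so the sketch below shows how the values above could be wired into `TrainingArguments`. It is illustrative only: the data paths, preprocessing, split names, and per-epoch evaluation strategy are assumptions, and no explicit optimizer is constructed (the Trainer default Adam-style optimizer uses the betas and epsilon listed above).

```python
# Sketch of the fine-tuning setup implied by the hyperparameters above.
# Only the listed values (learning rate, batch sizes, seed, scheduler, warmup
# ratio, epochs) come from this card; data paths, preprocessing, split names,
# and the evaluation strategy are illustrative assumptions.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("imagefolder", data_dir="path/to/images")  # assumed layout
labels = dataset["train"].features["label"].names

processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/beit-base-patch16-224",
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

def preprocess(batch):
    # Turn PIL images into the pixel_values tensor the model expects.
    batch["pixel_values"] = processor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )["pixel_values"]
    return batch

dataset = dataset.map(preprocess, batched=True, remove_columns=["image"])

def compute_metrics(eval_pred):
    # Plain accuracy, matching the metric reported in this card.
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

training_args = TrainingArguments(
    output_dir="smids_3x_beit_base_rms_00001_fold4",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results below
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumed split name
    compute_metrics=compute_metrics,
)
trainer.train()
```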

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2439        | 1.0   | 225   | 0.3063          | 0.88     |
| 0.0996        | 2.0   | 450   | 0.3661          | 0.8767   |
| 0.0787        | 3.0   | 675   | 0.4587          | 0.8667   |
| 0.0663        | 4.0   | 900   | 0.5189          | 0.8733   |
| 0.0442        | 5.0   | 1125  | 0.7230          | 0.8683   |
| 0.049         | 6.0   | 1350  | 0.6529          | 0.885    |
| 0.0264        | 7.0   | 1575  | 0.8061          | 0.8817   |
| 0.0229        | 8.0   | 1800  | 0.7781          | 0.89     |
| 0.0395        | 9.0   | 2025  | 0.9069          | 0.88     |
| 0.0034        | 10.0  | 2250  | 0.8845          | 0.885    |
| 0.0389        | 11.0  | 2475  | 1.0336          | 0.8783   |
| 0.0025        | 12.0  | 2700  | 0.9857          | 0.8867   |
| 0.0142        | 13.0  | 2925  | 1.0341          | 0.885    |
| 0.0196        | 14.0  | 3150  | 1.1721          | 0.8767   |
| 0.0094        | 15.0  | 3375  | 1.0615          | 0.8767   |
| 0.0053        | 16.0  | 3600  | 1.1359          | 0.8767   |
| 0.0019        | 17.0  | 3825  | 1.1838          | 0.88     |
| 0.0236        | 18.0  | 4050  | 1.3731          | 0.8617   |
| 0.0037        | 19.0  | 4275  | 1.2473          | 0.8683   |
| 0.0001        | 20.0  | 4500  | 1.1836          | 0.8833   |
| 0.0008        | 21.0  | 4725  | 1.2284          | 0.8733   |
| 0.0           | 22.0  | 4950  | 1.1971          | 0.8867   |
| 0.015         | 23.0  | 5175  | 1.2985          | 0.8783   |
| 0.0           | 24.0  | 5400  | 1.3191          | 0.8683   |
| 0.0386        | 25.0  | 5625  | 1.3376          | 0.88     |
| 0.0001        | 26.0  | 5850  | 1.3273          | 0.8717   |
| 0.0019        | 27.0  | 6075  | 1.3269          | 0.8683   |
| 0.0001        | 28.0  | 6300  | 1.3093          | 0.8733   |
| 0.0           | 29.0  | 6525  | 1.2247          | 0.88     |
| 0.0           | 30.0  | 6750  | 1.2682          | 0.8733   |
| 0.0           | 31.0  | 6975  | 1.2123          | 0.8833   |
| 0.0           | 32.0  | 7200  | 1.2162          | 0.885    |
| 0.0027        | 33.0  | 7425  | 1.2786          | 0.8783   |
| 0.0           | 34.0  | 7650  | 1.3256          | 0.8817   |
| 0.0286        | 35.0  | 7875  | 1.2152          | 0.89     |
| 0.0           | 36.0  | 8100  | 1.2207          | 0.8833   |
| 0.0001        | 37.0  | 8325  | 1.2285          | 0.885    |
| 0.0004        | 38.0  | 8550  | 1.1956          | 0.89     |
| 0.0           | 39.0  | 8775  | 1.1853          | 0.8867   |
| 0.0401        | 40.0  | 9000  | 1.1341          | 0.8967   |
| 0.0           | 41.0  | 9225  | 1.1526          | 0.8883   |
| 0.0032        | 42.0  | 9450  | 1.1907          | 0.8817   |
| 0.0002        | 43.0  | 9675  | 1.2154          | 0.8833   |
| 0.0022        | 44.0  | 9900  | 1.1934          | 0.8833   |
| 0.0           | 45.0  | 10125 | 1.2765          | 0.88     |
| 0.0           | 46.0  | 10350 | 1.2545          | 0.8767   |
| 0.0002        | 47.0  | 10575 | 1.2393          | 0.8817   |
| 0.0           | 48.0  | 10800 | 1.2475          | 0.8817   |
| 0.0           | 49.0  | 11025 | 1.2453          | 0.8817   |
| 0.0           | 50.0  | 11250 | 1.2428          | 0.8833   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2