---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_beit_base_rms_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8497495826377296
---

smids_3x_beit_base_rms_0001_fold1

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4390
  • Accuracy: 0.8497
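
For reference, here is a minimal inference sketch using the Transformers image-classification pipeline. The repository id `hkivancoral/smids_3x_beit_base_rms_0001_fold1` and the image path are assumptions, not confirmed by this card; adjust them to your setup.

```python
# Minimal inference sketch (assumed repo id and placeholder image path).
from transformers import pipeline

# Load the fine-tuned BEiT checkpoint as an image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_3x_beit_base_rms_0001_fold1",  # assumed repo id
)

# Classify a local image; returns a list of {"label": ..., "score": ...} dicts.
predictions = classifier("example.jpg")  # placeholder path
print(predictions)
```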

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
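
The card does not describe the data itself. Since the metadata lists the generic `imagefolder` loader, a typical way to load such a class-per-subfolder dataset is sketched below; the `data_dir` path is a placeholder.

```python
from datasets import load_dataset

# Load a class-per-subfolder image dataset with the generic "imagefolder"
# builder; the directory path below is a placeholder.
dataset = load_dataset("imagefolder", data_dir="path/to/images")
print(dataset)  # shows the inferred splits (e.g. train/test) and features
```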

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
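
The hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch assuming a standard `Trainer`-based fine-tuning script, not the exact training code; the `output_dir` is a placeholder.

```python
# Rough TrainingArguments equivalent of the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_beit_base_rms_0001_fold1",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch validation table below
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer optimizer defaults.
)
```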

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8068        | 1.0   | 226   | 1.0605          | 0.3272   |
| 0.7543        | 2.0   | 452   | 0.8522          | 0.5943   |
| 0.6302        | 3.0   | 678   | 0.8188          | 0.6227   |
| 0.591         | 4.0   | 904   | 0.7284          | 0.6795   |
| 0.5365        | 5.0   | 1130  | 0.5240          | 0.7830   |
| 0.4875        | 6.0   | 1356  | 0.4715          | 0.8030   |
| 0.2956        | 7.0   | 1582  | 0.5230          | 0.8130   |
| 0.3385        | 8.0   | 1808  | 0.4637          | 0.8047   |
| 0.2498        | 9.0   | 2034  | 0.5733          | 0.8230   |
| 0.2375        | 10.0  | 2260  | 0.5001          | 0.8381   |
| 0.2383        | 11.0  | 2486  | 0.5213          | 0.8164   |
| 0.1638        | 12.0  | 2712  | 0.7500          | 0.8097   |
| 0.1669        | 13.0  | 2938  | 0.6347          | 0.8347   |
| 0.091         | 14.0  | 3164  | 0.8704          | 0.8164   |
| 0.0933        | 15.0  | 3390  | 0.6698          | 0.8280   |
| 0.1167        | 16.0  | 3616  | 0.7435          | 0.8481   |
| 0.0442        | 17.0  | 3842  | 0.8758          | 0.8164   |
| 0.0649        | 18.0  | 4068  | 0.8054          | 0.8247   |
| 0.0996        | 19.0  | 4294  | 0.8135          | 0.8164   |
| 0.0421        | 20.0  | 4520  | 0.8460          | 0.8464   |
| 0.0255        | 21.0  | 4746  | 1.2147          | 0.8097   |
| 0.0814        | 22.0  | 4972  | 0.8708          | 0.8331   |
| 0.07          | 23.0  | 5198  | 1.0564          | 0.8364   |
| 0.029         | 24.0  | 5424  | 1.0607          | 0.8364   |
| 0.0335        | 25.0  | 5650  | 1.0179          | 0.8464   |
| 0.0974        | 26.0  | 5876  | 0.8966          | 0.8364   |
| 0.0251        | 27.0  | 6102  | 1.0900          | 0.8297   |
| 0.0304        | 28.0  | 6328  | 0.9348          | 0.8347   |
| 0.0116        | 29.0  | 6554  | 1.0392          | 0.8447   |
| 0.036         | 30.0  | 6780  | 1.0080          | 0.8414   |
| 0.0176        | 31.0  | 7006  | 1.0131          | 0.8364   |
| 0.0187        | 32.0  | 7232  | 0.9626          | 0.8397   |
| 0.0495        | 33.0  | 7458  | 0.9911          | 0.8414   |
| 0.0106        | 34.0  | 7684  | 1.2195          | 0.8331   |
| 0.0005        | 35.0  | 7910  | 1.2232          | 0.8464   |
| 0.0148        | 36.0  | 8136  | 1.1060          | 0.8364   |
| 0.0093        | 37.0  | 8362  | 1.0552          | 0.8364   |
| 0.0212        | 38.0  | 8588  | 1.1910          | 0.8364   |
| 0.0009        | 39.0  | 8814  | 1.1001          | 0.8431   |
| 0.0083        | 40.0  | 9040  | 1.2874          | 0.8481   |
| 0.0296        | 41.0  | 9266  | 1.3495          | 0.8381   |
| 0.0225        | 42.0  | 9492  | 1.3683          | 0.8414   |
| 0.0158        | 43.0  | 9718  | 1.2852          | 0.8481   |
| 0.0056        | 44.0  | 9944  | 1.3620          | 0.8447   |
| 0.0126        | 45.0  | 10170 | 1.3137          | 0.8431   |
| 0.0           | 46.0  | 10396 | 1.4527          | 0.8497   |
| 0.013         | 47.0  | 10622 | 1.4028          | 0.8531   |
| 0.0375        | 48.0  | 10848 | 1.3979          | 0.8481   |
| 0.0006        | 49.0  | 11074 | 1.4369          | 0.8497   |
| 0.0135        | 50.0  | 11300 | 1.4390          | 0.8497   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2