---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_5x_beit_base_adamax_001_fold2
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.826955074875208
---

# smids_5x_beit_base_adamax_001_fold2

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.6098
- Accuracy: 0.8270
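A minimal inference sketch for this checkpoint. The Hub repository id below is an assumption inferred from this card's name, not something the card states; substitute the actual repository path if it differs.

```python
# MODEL_ID is an assumption based on this card's name; replace it with
# the real Hub repository id if yours differs.
MODEL_ID = "hkivancoral/smids_5x_beit_base_adamax_001_fold2"


def classify(image_path: str):
    """Classify one image with the transformers image-classification pipeline."""
    # Imported lazily so the function can be defined without transformers installed.
    from transformers import pipeline

    pipe = pipeline("image-classification", model=MODEL_ID)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return pipe(image_path)
```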

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
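The linear schedule with 10% warmup used here can be sketched in plain Python. The total step count (50 epochs × 375 steps/epoch = 18750) comes from the training log; this mirrors the usual warmup-then-linear-decay shape rather than quoting the exact library implementation.

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 0.001,
                     total_steps: int = 18750,
                     warmup_ratio: float = 0.1) -> float:
    """Sketch of lr_scheduler_type=linear with lr_scheduler_warmup_ratio=0.1:
    ramp linearly from 0 to base_lr, then decay linearly back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1875 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```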

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1062        | 1.0   | 375   | 1.0982          | 0.3344   |
| 0.9246        | 2.0   | 750   | 0.8990          | 0.5008   |
| 0.869         | 3.0   | 1125  | 0.8746          | 0.5258   |
| 0.8743        | 4.0   | 1500  | 0.8284          | 0.5674   |
| 0.7741        | 5.0   | 1875  | 0.7937          | 0.5940   |
| 0.7654        | 6.0   | 2250  | 0.8614          | 0.5757   |
| 0.7957        | 7.0   | 2625  | 0.7450          | 0.6273   |
| 0.7361        | 8.0   | 3000  | 0.7601          | 0.6190   |
| 0.7832        | 9.0   | 3375  | 0.7148          | 0.6473   |
| 0.7176        | 10.0  | 3750  | 0.7112          | 0.6539   |
| 0.7021        | 11.0  | 4125  | 0.6760          | 0.6672   |
| 0.7391        | 12.0  | 4500  | 0.6756          | 0.6839   |
| 0.6718        | 13.0  | 4875  | 0.6548          | 0.6955   |
| 0.6958        | 14.0  | 5250  | 0.6500          | 0.7072   |
| 0.5987        | 15.0  | 5625  | 0.6407          | 0.7072   |
| 0.6067        | 16.0  | 6000  | 0.6268          | 0.7221   |
| 0.6185        | 17.0  | 6375  | 0.5757          | 0.7554   |
| 0.5734        | 18.0  | 6750  | 0.5844          | 0.7504   |
| 0.5355        | 19.0  | 7125  | 0.5955          | 0.7354   |
| 0.5827        | 20.0  | 7500  | 0.5704          | 0.7388   |
| 0.5749        | 21.0  | 7875  | 0.5428          | 0.7804   |
| 0.5089        | 22.0  | 8250  | 0.5221          | 0.7887   |
| 0.5094        | 23.0  | 8625  | 0.5782          | 0.7671   |
| 0.5429        | 24.0  | 9000  | 0.5089          | 0.7737   |
| 0.4205        | 25.0  | 9375  | 0.5382          | 0.7687   |
| 0.5532        | 26.0  | 9750  | 0.5416          | 0.7654   |
| 0.5743        | 27.0  | 10125 | 0.5044          | 0.7854   |
| 0.4757        | 28.0  | 10500 | 0.4923          | 0.7837   |
| 0.4893        | 29.0  | 10875 | 0.5093          | 0.7953   |
| 0.4472        | 30.0  | 11250 | 0.4972          | 0.8003   |
| 0.4925        | 31.0  | 11625 | 0.4677          | 0.8003   |
| 0.4327        | 32.0  | 12000 | 0.5055          | 0.8003   |
| 0.3876        | 33.0  | 12375 | 0.5066          | 0.8070   |
| 0.365         | 34.0  | 12750 | 0.5108          | 0.8003   |
| 0.4195        | 35.0  | 13125 | 0.4907          | 0.8136   |
| 0.3672        | 36.0  | 13500 | 0.5164          | 0.8170   |
| 0.3426        | 37.0  | 13875 | 0.5029          | 0.8136   |
| 0.3759        | 38.0  | 14250 | 0.4848          | 0.8220   |
| 0.3673        | 39.0  | 14625 | 0.5029          | 0.7953   |
| 0.3205        | 40.0  | 15000 | 0.4939          | 0.8253   |
| 0.278         | 41.0  | 15375 | 0.4766          | 0.8203   |
| 0.3125        | 42.0  | 15750 | 0.5477          | 0.8170   |
| 0.2792        | 43.0  | 16125 | 0.5391          | 0.8253   |
| 0.2594        | 44.0  | 16500 | 0.5619          | 0.8220   |
| 0.2487        | 45.0  | 16875 | 0.5522          | 0.8153   |
| 0.2997        | 46.0  | 17250 | 0.5706          | 0.8270   |
| 0.2756        | 47.0  | 17625 | 0.5989          | 0.8220   |
| 0.2505        | 48.0  | 18000 | 0.5806          | 0.8303   |
| 0.2151        | 49.0  | 18375 | 0.6077          | 0.8286   |
| 0.1705        | 50.0  | 18750 | 0.6098          | 0.8270   |
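Note that the final epoch is not the strongest checkpoint by validation accuracy: epoch 48 reaches 0.8303 versus 0.8270 at epoch 50. A small sketch of selecting the best epoch, using values copied from the last rows of the log above:

```python
# Validation accuracy for the final epochs, copied from the training log.
val_accuracy = {46: 0.8270, 47: 0.8220, 48: 0.8303, 49: 0.8286, 50: 0.8270}

# If one were picking a checkpoint by validation accuracy, epoch 48 wins,
# even though the card's reported metrics are from the final epoch (50).
best_epoch = max(val_accuracy, key=val_accuracy.get)
```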

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2