
smids_5x_beit_base_rms_0001_fold1

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9099
  • Accuracy: 0.9032
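
For context, the snippet below is a minimal inference sketch using the transformers image-classification pipeline. The repository id and the input image path are placeholders, not values stated on this card.

```python
# Minimal inference sketch; the repo id and image path below are placeholders.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/smids_5x_beit_base_rms_0001_fold1",  # placeholder repo id
)

image = Image.open("example.jpg")  # placeholder input image
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```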

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
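
As a rough guide, these hyperparameters map onto transformers TrainingArguments as in the sketch below. This is a hedged sketch, not the exact training script; the output directory and evaluation strategy are assumptions.

```python
# Hedged sketch of the hyperparameters above expressed as TrainingArguments;
# output_dir and evaluation_strategy are assumptions, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_5x_beit_base_rms_0001_fold1",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results table
)
```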

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2877 | 1.0 | 376 | 0.3734 | 0.8397 |
| 0.263 | 2.0 | 752 | 0.2897 | 0.8898 |
| 0.214 | 3.0 | 1128 | 0.4392 | 0.8815 |
| 0.1401 | 4.0 | 1504 | 0.3640 | 0.8865 |
| 0.0913 | 5.0 | 1880 | 0.4071 | 0.8982 |
| 0.0792 | 6.0 | 2256 | 0.5796 | 0.8765 |
| 0.0326 | 7.0 | 2632 | 0.5828 | 0.8781 |
| 0.0613 | 8.0 | 3008 | 0.5485 | 0.8965 |
| 0.0614 | 9.0 | 3384 | 0.5394 | 0.8815 |
| 0.0378 | 10.0 | 3760 | 0.5802 | 0.8932 |
| 0.016 | 11.0 | 4136 | 0.5517 | 0.8998 |
| 0.1114 | 12.0 | 4512 | 0.5851 | 0.8798 |
| 0.0304 | 13.0 | 4888 | 0.5301 | 0.8731 |
| 0.0236 | 14.0 | 5264 | 0.6243 | 0.8965 |
| 0.0147 | 15.0 | 5640 | 0.5697 | 0.8998 |
| 0.0009 | 16.0 | 6016 | 0.5289 | 0.9098 |
| 0.003 | 17.0 | 6392 | 0.6450 | 0.8932 |
| 0.045 | 18.0 | 6768 | 0.7662 | 0.8915 |
| 0.0003 | 19.0 | 7144 | 0.6709 | 0.8898 |
| 0.0083 | 20.0 | 7520 | 0.7941 | 0.8865 |
| 0.0011 | 21.0 | 7896 | 0.8204 | 0.8831 |
| 0.0265 | 22.0 | 8272 | 0.7663 | 0.8798 |
| 0.0065 | 23.0 | 8648 | 0.7543 | 0.8865 |
| 0.0005 | 24.0 | 9024 | 0.8605 | 0.8881 |
| 0.0223 | 25.0 | 9400 | 0.7879 | 0.8815 |
| 0.0093 | 26.0 | 9776 | 0.8444 | 0.8748 |
| 0.0004 | 27.0 | 10152 | 0.7708 | 0.8781 |
| 0.0001 | 28.0 | 10528 | 0.7477 | 0.8948 |
| 0.0051 | 29.0 | 10904 | 0.7865 | 0.8831 |
| 0.0131 | 30.0 | 11280 | 0.8049 | 0.9098 |
| 0.0 | 31.0 | 11656 | 0.8832 | 0.8915 |
| 0.0001 | 32.0 | 12032 | 0.8723 | 0.8965 |
| 0.0007 | 33.0 | 12408 | 1.0043 | 0.8865 |
| 0.0001 | 34.0 | 12784 | 0.9137 | 0.8881 |
| 0.0019 | 35.0 | 13160 | 0.7356 | 0.9048 |
| 0.0 | 36.0 | 13536 | 0.7048 | 0.9032 |
| 0.0 | 37.0 | 13912 | 0.8706 | 0.9015 |
| 0.0 | 38.0 | 14288 | 0.7699 | 0.9032 |
| 0.0 | 39.0 | 14664 | 0.8383 | 0.8982 |
| 0.0 | 40.0 | 15040 | 0.8533 | 0.9048 |
| 0.0008 | 41.0 | 15416 | 0.8710 | 0.9015 |
| 0.0001 | 42.0 | 15792 | 0.9271 | 0.8898 |
| 0.0 | 43.0 | 16168 | 0.9308 | 0.8982 |
| 0.0 | 44.0 | 16544 | 0.9577 | 0.8982 |
| 0.0 | 45.0 | 16920 | 0.9412 | 0.8898 |
| 0.0033 | 46.0 | 17296 | 0.9423 | 0.8998 |
| 0.0 | 47.0 | 17672 | 0.9136 | 0.9048 |
| 0.0 | 48.0 | 18048 | 0.9005 | 0.9065 |
| 0.0001 | 49.0 | 18424 | 0.9138 | 0.9048 |
| 0.0023 | 50.0 | 18800 | 0.9099 | 0.9032 |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2