
smids_5x_beit_base_rms_00001_fold5

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9550
  • Accuracy: 0.91
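
As a quick reference, the snippet below sketches how such a checkpoint can be loaded for inference with the transformers image-classification pipeline; the repository id and the image path are placeholders, not values taken from this card.

```python
# Minimal inference sketch. The repo id below is a placeholder namespace for the
# checkpoint named in this card; "example.jpg" is a placeholder test image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-namespace/smids_5x_beit_base_rms_00001_fold5",  # placeholder repo id
)

predictions = classifier("example.jpg")
print(predictions)  # e.g. [{"label": ..., "score": ...}, ...]
```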

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
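
For orientation, here is a minimal sketch of how these settings map onto transformers TrainingArguments; the output directory is a placeholder, and dataset, image-processor, and Trainer setup are omitted.

```python
# Sketch only: mirrors the hyperparameters listed above as TrainingArguments.
# Data loading, the image processor, and the Trainer itself are not shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_5x_beit_base_rms_00001_fold5",  # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```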

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1871        | 1.0   | 375   | 0.3400          | 0.865    |
| 0.1042        | 2.0   | 750   | 0.2834          | 0.9017   |
| 0.1083        | 3.0   | 1125  | 0.3546          | 0.9033   |
| 0.0201        | 4.0   | 1500  | 0.3961          | 0.9133   |
| 0.0166        | 5.0   | 1875  | 0.6199          | 0.9083   |
| 0.0246        | 6.0   | 2250  | 0.6057          | 0.8967   |
| 0.0224        | 7.0   | 2625  | 0.7547          | 0.9117   |
| 0.0003        | 8.0   | 3000  | 0.7052          | 0.9133   |
| 0.0111        | 9.0   | 3375  | 0.7830          | 0.8983   |
| 0.0518        | 10.0  | 3750  | 0.8521          | 0.8967   |
| 0.02          | 11.0  | 4125  | 0.9299          | 0.8933   |
| 0.0138        | 12.0  | 4500  | 0.9525          | 0.8983   |
| 0.0001        | 13.0  | 4875  | 0.8824          | 0.9067   |
| 0.013         | 14.0  | 5250  | 0.9828          | 0.8833   |
| 0.0349        | 15.0  | 5625  | 0.8057          | 0.9033   |
| 0.0008        | 16.0  | 6000  | 0.9444          | 0.8983   |
| 0.0016        | 17.0  | 6375  | 0.8486          | 0.905    |
| 0.0093        | 18.0  | 6750  | 0.8485          | 0.9083   |
| 0.0057        | 19.0  | 7125  | 0.8351          | 0.895    |
| 0.0146        | 20.0  | 7500  | 0.8068          | 0.915    |
| 0.0088        | 21.0  | 7875  | 0.8372          | 0.9117   |
| 0.0074        | 22.0  | 8250  | 0.8780          | 0.905    |
| 0.0068        | 23.0  | 8625  | 0.9227          | 0.9067   |
| 0.0           | 24.0  | 9000  | 0.8408          | 0.9067   |
| 0.0           | 25.0  | 9375  | 0.8878          | 0.9067   |
| 0.0001        | 26.0  | 9750  | 0.6996          | 0.9217   |
| 0.0043        | 27.0  | 10125 | 0.7960          | 0.915    |
| 0.0021        | 28.0  | 10500 | 0.8288          | 0.91     |
| 0.002         | 29.0  | 10875 | 0.8059          | 0.9133   |
| 0.0055        | 30.0  | 11250 | 0.8992          | 0.9117   |
| 0.0001        | 31.0  | 11625 | 0.9502          | 0.9083   |
| 0.0001        | 32.0  | 12000 | 1.0009          | 0.9067   |
| 0.0047        | 33.0  | 12375 | 0.9429          | 0.91     |
| 0.0           | 34.0  | 12750 | 0.9564          | 0.905    |
| 0.0           | 35.0  | 13125 | 0.9119          | 0.9217   |
| 0.0           | 36.0  | 13500 | 1.0028          | 0.9033   |
| 0.0           | 37.0  | 13875 | 0.9150          | 0.91     |
| 0.0054        | 38.0  | 14250 | 0.9393          | 0.91     |
| 0.0           | 39.0  | 14625 | 1.0004          | 0.9067   |
| 0.0           | 40.0  | 15000 | 0.9733          | 0.9083   |
| 0.0001        | 41.0  | 15375 | 0.9774          | 0.9067   |
| 0.0           | 42.0  | 15750 | 0.9404          | 0.9133   |
| 0.0           | 43.0  | 16125 | 0.9724          | 0.9117   |
| 0.0           | 44.0  | 16500 | 0.9389          | 0.915    |
| 0.0           | 45.0  | 16875 | 0.9342          | 0.9167   |
| 0.0           | 46.0  | 17250 | 0.9815          | 0.9117   |
| 0.0058        | 47.0  | 17625 | 0.9724          | 0.9067   |
| 0.0           | 48.0  | 18000 | 0.9650          | 0.9067   |
| 0.0           | 49.0  | 18375 | 0.9572          | 0.9083   |
| 0.0012        | 50.0  | 18750 | 0.9550          | 0.91     |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2