smids_3x_beit_base_rms_001_fold3

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6251
  • Accuracy: 0.7617
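
For illustration, a minimal inference sketch using the Transformers image-classification pipeline is shown below. The Hub repository id (the `<user>/` namespace) and the example image path are placeholders, not part of this card.

```python
# Minimal inference sketch (assumed usage; replace the placeholder repo id
# and image path with real values).
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="<user>/smids_3x_beit_base_rms_001_fold3",  # placeholder Hub repo id
)
image = Image.open("example.png")  # placeholder image path
print(classifier(image))  # list of {"label": ..., "score": ...} predictions
```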

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
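
A hedged reconstruction of these settings with the 🤗 Trainer API is sketched below. The output directory, the evaluation/save strategies, and anything not listed above are assumptions; the actual training script is not included in this card.

```python
# Assumed mapping of the hyperparameters above onto transformers.TrainingArguments.
# Only the listed values come from the card; everything else is a guess.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_beit_base_rms_001_fold3",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed; consistent with the per-epoch results below
    save_strategy="epoch",        # assumed
)
```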

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9148        | 1.0   | 225   | 0.9238          | 0.505    |
| 0.8709        | 2.0   | 450   | 0.9060          | 0.515    |
| 0.8398        | 3.0   | 675   | 0.8688          | 0.5317   |
| 0.74          | 4.0   | 900   | 0.7859          | 0.5617   |
| 0.7787        | 5.0   | 1125  | 0.7847          | 0.6017   |
| 0.7532        | 6.0   | 1350  | 0.7702          | 0.63     |
| 0.7432        | 7.0   | 1575  | 0.7450          | 0.655    |
| 0.7264        | 8.0   | 1800  | 0.7610          | 0.6317   |
| 0.7321        | 9.0   | 2025  | 0.7293          | 0.655    |
| 0.6592        | 10.0  | 2250  | 0.7888          | 0.6367   |
| 0.7528        | 11.0  | 2475  | 0.7158          | 0.6633   |
| 0.7282        | 12.0  | 2700  | 0.7365          | 0.64     |
| 0.6884        | 13.0  | 2925  | 0.6939          | 0.6733   |
| 0.6852        | 14.0  | 3150  | 0.7006          | 0.67     |
| 0.6011        | 15.0  | 3375  | 0.7591          | 0.6233   |
| 0.6904        | 16.0  | 3600  | 0.6846          | 0.6717   |
| 0.6393        | 17.0  | 3825  | 0.6741          | 0.7117   |
| 0.6772        | 18.0  | 4050  | 0.6655          | 0.6683   |
| 0.6409        | 19.0  | 4275  | 0.6658          | 0.6933   |
| 0.5941        | 20.0  | 4500  | 0.6429          | 0.7017   |
| 0.5753        | 21.0  | 4725  | 0.6753          | 0.6833   |
| 0.5975        | 22.0  | 4950  | 0.6543          | 0.6917   |
| 0.5954        | 23.0  | 5175  | 0.6358          | 0.7233   |
| 0.5729        | 24.0  | 5400  | 0.6341          | 0.7133   |
| 0.6313        | 25.0  | 5625  | 0.6336          | 0.7033   |
| 0.5938        | 26.0  | 5850  | 0.6447          | 0.7083   |
| 0.5183        | 27.0  | 6075  | 0.6247          | 0.7233   |
| 0.5713        | 28.0  | 6300  | 0.6145          | 0.73     |
| 0.5948        | 29.0  | 6525  | 0.5934          | 0.7317   |
| 0.5273        | 30.0  | 6750  | 0.5971          | 0.7367   |
| 0.5431        | 31.0  | 6975  | 0.5930          | 0.7433   |
| 0.6025        | 32.0  | 7200  | 0.6434          | 0.7183   |
| 0.5898        | 33.0  | 7425  | 0.5982          | 0.7383   |
| 0.5455        | 34.0  | 7650  | 0.5983          | 0.75     |
| 0.4857        | 35.0  | 7875  | 0.6162          | 0.735    |
| 0.5822        | 36.0  | 8100  | 0.5546          | 0.7517   |
| 0.4869        | 37.0  | 8325  | 0.5748          | 0.745    |
| 0.4722        | 38.0  | 8550  | 0.5753          | 0.7417   |
| 0.4982        | 39.0  | 8775  | 0.5694          | 0.7483   |
| 0.4478        | 40.0  | 9000  | 0.5912          | 0.74     |
| 0.4295        | 41.0  | 9225  | 0.5914          | 0.75     |
| 0.4581        | 42.0  | 9450  | 0.5846          | 0.7617   |
| 0.3797        | 43.0  | 9675  | 0.5733          | 0.7667   |
| 0.4086        | 44.0  | 9900  | 0.6072          | 0.7517   |
| 0.4164        | 45.0  | 10125 | 0.6033          | 0.7583   |
| 0.3774        | 46.0  | 10350 | 0.6024          | 0.75     |
| 0.392         | 47.0  | 10575 | 0.5976          | 0.7617   |
| 0.3586        | 48.0  | 10800 | 0.6199          | 0.76     |
| 0.3854        | 49.0  | 11025 | 0.6198          | 0.7667   |
| 0.3586        | 50.0  | 11250 | 0.6251          | 0.7617   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2
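
The snippet below is an optional, assumed helper (not part of the original card) for checking that a local environment matches the pinned versions above.

```python
# Compare installed package versions against the versions listed in this card.
import importlib.metadata as md

expected = {
    "transformers": "4.32.1",
    "torch": "2.1.0+cu121",
    "datasets": "2.12.0",
    "tokenizers": "0.13.2",
}
for package, wanted in expected.items():
    try:
        installed = md.version(package)
    except md.PackageNotFoundError:
        print(f"{package}: expected {wanted} -> not installed")
        continue
    status = "OK" if installed == wanted else f"mismatch (found {installed})"
    print(f"{package}: expected {wanted} -> {status}")
```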