swinv2-tiny-patch4-window8-256-finetuned-200k

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3715
  • Accuracy: 0.8360

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 256
  • eval_batch_size: 256
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 1024
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
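
Two of the values above are derived rather than set directly, and the arithmetic can be checked in a few lines (a minimal sketch; the variable names are illustrative, not taken from the actual training script):

```python
# Effective (total) train batch size: per-device batch size times
# the number of gradient-accumulation steps.
train_batch_size = 256
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 1024, matching total_train_batch_size above

# Linear schedule with warmup ratio 0.1: with the 500 optimizer steps
# reported in the results table, the learning rate ramps up over the
# first 50 steps, then decays linearly toward zero.
lr_scheduler_warmup_ratio = 0.1
total_steps = 500
warmup_steps = int(lr_scheduler_warmup_ratio * total_steps)
print(warmup_steps)  # 50
```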

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 0.93 | 10 | 0.6932 | 0.5410 |
| 0.6768 | 1.95 | 21 | 0.6407 | 0.6614 |
| 0.6359 | 2.98 | 32 | 0.5647 | 0.7208 |
| 0.5989 | 4.0 | 43 | 0.5674 | 0.7086 |
| 0.5831 | 4.93 | 53 | 0.5108 | 0.7679 |
| 0.549 | 5.95 | 64 | 0.4882 | 0.7836 |
| 0.5341 | 6.98 | 75 | 0.4831 | 0.7714 |
| 0.5172 | 8.0 | 86 | 0.4422 | 0.8115 |
| 0.4961 | 8.93 | 96 | 0.4422 | 0.7941 |
| 0.4796 | 9.95 | 107 | 0.4066 | 0.8098 |
| 0.4776 | 10.98 | 118 | 0.3906 | 0.8185 |
| 0.4668 | 12.0 | 129 | 0.4135 | 0.8150 |
| 0.4588 | 12.93 | 139 | 0.3884 | 0.8202 |
| 0.448 | 13.95 | 150 | 0.3764 | 0.8220 |
| 0.4508 | 14.98 | 161 | 0.3802 | 0.8220 |
| 0.43 | 16.0 | 172 | 0.3829 | 0.8150 |
| 0.4347 | 16.93 | 182 | 0.3857 | 0.8133 |
| 0.4232 | 17.95 | 193 | 0.3819 | 0.8150 |
| 0.4289 | 18.98 | 204 | 0.4055 | 0.8080 |
| 0.4271 | 20.0 | 215 | 0.3577 | 0.8377 |
| 0.4301 | 20.93 | 225 | 0.3598 | 0.8272 |
| 0.4257 | 21.95 | 236 | 0.3780 | 0.8237 |
| 0.4191 | 22.98 | 247 | 0.3545 | 0.8307 |
| 0.4164 | 24.0 | 258 | 0.4208 | 0.8115 |
| 0.4297 | 24.93 | 268 | 0.3817 | 0.8290 |
| 0.4168 | 25.95 | 279 | 0.3876 | 0.8220 |
| 0.4118 | 26.98 | 290 | 0.3670 | 0.8307 |
| 0.4042 | 28.0 | 301 | 0.3620 | 0.8290 |
| 0.4018 | 28.93 | 311 | 0.3670 | 0.8290 |
| 0.4074 | 29.95 | 322 | 0.3822 | 0.8290 |
| 0.4044 | 30.98 | 333 | 0.3561 | 0.8325 |
| 0.3998 | 32.0 | 344 | 0.3642 | 0.8377 |
| 0.3994 | 32.93 | 354 | 0.3721 | 0.8290 |
| 0.3982 | 33.95 | 365 | 0.3592 | 0.8394 |
| 0.4002 | 34.98 | 376 | 0.3740 | 0.8290 |
| 0.4014 | 36.0 | 387 | 0.3705 | 0.8325 |
| 0.3953 | 36.93 | 397 | 0.3865 | 0.8237 |
| 0.3934 | 37.95 | 408 | 0.3689 | 0.8342 |
| 0.3964 | 38.98 | 419 | 0.3570 | 0.8255 |
| 0.4027 | 40.0 | 430 | 0.3738 | 0.8325 |
| 0.392 | 40.93 | 440 | 0.3566 | 0.8342 |
| 0.3875 | 41.95 | 451 | 0.3652 | 0.8377 |
| 0.3866 | 42.98 | 462 | 0.3657 | 0.8342 |
| 0.396 | 44.0 | 473 | 0.3662 | 0.8342 |
| 0.3841 | 44.93 | 483 | 0.3764 | 0.8360 |
| 0.387 | 45.95 | 494 | 0.3687 | 0.8325 |
| 0.3844 | 46.51 | 500 | 0.3715 | 0.8360 |


Framework versions

  • Transformers 4.33.3
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3