
kaggle22

This model is a fine-tuned version of microsoft/swinv2-base-patch4-window12-192-22k on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.1484
  • Accuracy: 0.7132
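
No usage example is included in the card, so the following is a minimal inference sketch. It assumes the repository id faridkarimli/SWIN_finetuned_constant shown on the model page and the standard image-classification API in transformers; the label names depend on the (undocumented) training dataset.

```python
# A minimal sketch, assuming the repo id below and a standard image-classification head.
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "faridkarimli/SWIN_finetuned_constant"  # repo id as shown on the model page
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")                  # replace with your own image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])         # label set depends on the training data
```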

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.005
  • train_batch_size: 512
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 70.0
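
As a rough guide to reproducing this setup, the sketch below maps the hyperparameters above onto a transformers TrainingArguments object. It is not the original training script: the output directory name, the per-device reading of the batch size, and the evaluation/logging strategies are assumptions.

```python
# A hedged sketch mapping the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-kaggle22",        # assumed output directory name
    learning_rate=5e-3,
    per_device_train_batch_size=512,     # assumed to be the per-device value
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,                      # Adam with betas=(0.9, 0.999) and epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=70.0,
    evaluation_strategy="epoch",         # assumed: the results table reports per-epoch metrics
    logging_strategy="epoch",
)
```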

Training results

Training Loss Epoch Step Accuracy Validation Loss
9.5189 1.0 1313 0.0002 9.4097
9.1635 2.0 2626 0.0006 9.0418
8.3432 3.0 3939 0.0065 7.8454
6.9913 4.0 5252 0.0489 6.3118
5.5048 5.0 6565 0.1423 4.9493
4.6895 6.0 7878 0.2450 3.9601
3.8881 7.0 9191 0.3136 3.4186
3.391 8.0 10504 0.3766 2.9798
3.0887 9.0 11817 0.4221 2.7054
2.7935 10.0 13130 0.4552 2.5013
2.5629 11.0 14443 0.4804 2.3581
2.3777 12.0 15756 0.4809 2.3543
2.2264 13.0 17069 0.5179 2.1632
2.0932 14.0 18382 0.5219 2.1362
1.9667 15.0 19695 0.5591 1.9567
1.8788 16.0 21008 0.5610 1.9347
1.7705 17.0 22321 0.5684 1.9483
1.7089 18.0 23634 0.5791 1.8928
1.6068 19.0 24947 0.5855 1.8435
1.5572 20.0 26260 0.5880 1.8408
1.4938 21.0 27573 0.6110 1.7413
1.4182 22.0 28886 0.6155 1.7196
1.3784 23.0 30199 0.6238 1.7105
1.3578 24.0 31512 0.6176 1.7759
1.2763 25.0 32825 0.6219 1.7365
1.2484 26.0 34138 0.6199 1.7483
1.1936 27.0 35451 0.6314 1.7003
1.1499 28.0 36764 0.6247 1.7399
1.1418 29.0 38077 0.6317 1.7091
1.0895 30.0 39390 0.6383 1.7166
1.0706 31.0 40703 0.6374 1.7384
1.0541 32.0 42016 0.6409 1.7336
1.0013 33.0 43329 0.6451 1.7185
0.9811 34.0 44642 0.6479 1.7246
0.9447 35.0 45955 0.6540 1.7245
0.6587 36.0 47268 0.7019 1.5849
0.6044 37.0 48581 0.7062 1.6146
0.572 38.0 49894 0.7081 1.6583
0.545 39.0 51207 0.7087 1.6993
0.5341 40.0 52520 0.7106 1.7078
0.5284 41.0 53833 0.7105 1.7241
0.5186 42.0 55146 0.7112 1.7408
0.506 43.0 56459 0.7106 1.7487
0.5043 44.0 57772 0.7109 1.7547
0.5094 45.0 59085 0.7111 1.7536
0.5547 46.0 60398 0.7069 1.7074
0.5391 47.0 61711 0.7090 1.7401
0.5253 48.0 63024 0.7093 1.7770
0.5066 49.0 64337 0.7102 1.8135
0.495 50.0 65650 0.7110 1.8452
0.4813 51.0 66963 0.7107 1.8846
0.4704 52.0 68276 0.7124 1.8989
0.4689 53.0 69589 0.7132 1.9311
0.4611 54.0 70902 0.7131 1.9354
0.4547 55.0 72215 0.7133 1.9741
0.4481 56.0 73528 0.7131 1.9899
0.4709 57.0 74841 0.7104 1.9412
0.4647 58.0 76154 0.7098 1.9707
0.4566 59.0 77467 0.7116 2.0151
0.4511 60.0 78780 0.7114 2.0363
0.4423 61.0 80093 0.7112 2.0710
0.4356 62.0 81406 0.7116 2.0611
0.4272 63.0 82719 0.7118 2.0891
0.4254 64.0 84032 0.7124 2.0879
0.4221 65.0 85345 0.7131 2.1167
0.4189 66.0 86658 0.7129 2.1363
0.4219 67.0 87971 0.7130 2.1355
0.4149 68.0 89284 0.7132 2.1466
0.4125 69.0 90597 0.7131 2.1478
0.4162 70.0 91910 0.7132 2.1484

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.2
  • Datasets 2.16.1
  • Tokenizers 0.13.3
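
To reproduce results against this exact stack, it may help to confirm the installed versions match the ones listed above; a small check, assuming the four packages are installed locally:

```python
# Prints the locally installed versions for comparison with the card's listed versions.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # card lists 4.33.3
print("PyTorch:", torch.__version__)              # card lists 2.1.2
print("Datasets:", datasets.__version__)          # card lists 2.16.1
print("Tokenizers:", tokenizers.__version__)      # card lists 0.13.3
```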

Model tree for faridkarimli/SWIN_finetuned_constant: fine-tuned from microsoft/swinv2-base-patch4-window12-192-22k.