
swinv2-tiny-patch4-window8-256-ve-UH

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0154
  • Accuracy: 0.7115
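The base checkpoint's name encodes its geometry: 4×4 patch embedding, 8×8 attention windows, and 256×256 input resolution. A quick sketch of what that implies for the token grid (assuming the standard four-stage Swin hierarchy, in which each stage after the first halves the spatial resolution):

```python
# Token-grid arithmetic implied by "patch4-window8-256".
image_size, patch_size, window_size = 256, 4, 8

tokens_per_side = image_size // patch_size  # 64x64 tokens after patch embedding
assert tokens_per_side == 64

# Standard Swin hierarchy: four stages, resolution halves between stages.
for stage in range(4):
    side = tokens_per_side // (2 ** stage)
    windows_per_side = max(side // window_size, 1)
    win = min(side, window_size)
    print(f"stage {stage + 1}: {side}x{side} tokens, "
          f"{windows_per_side}x{windows_per_side} windows of {win}x{win}")
```

At the final stage the 8×8 token map matches the window size exactly, so attention there is effectively global.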

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 80
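Two of the values above are derived rather than set directly: the effective batch size is train_batch_size × gradient_accumulation_steps (32 × 4 = 128), and with lr_scheduler_warmup_ratio 0.1 over the 160 optimizer steps logged in the results table, the first 16 steps ramp the learning rate up linearly before the linear decay begins. A minimal sketch of that schedule (mirroring the shape of a linear-with-warmup scheduler, not the trainer's actual code):

```python
def linear_warmup_lr(step, base_lr=4e-05, total_steps=160, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 160 * 0.1 = 16
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

effective_batch = 32 * 4  # train_batch_size * gradient_accumulation_steps
assert effective_batch == 128

print(linear_warmup_lr(8))    # halfway through warmup -> 2e-05
print(linear_warmup_lr(16))   # peak -> 4e-05
print(linear_warmup_lr(160))  # end of training -> 0.0
```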

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.6092 | 0.4038 |
| No log | 2.0 | 4 | 1.6075 | 0.4231 |
| No log | 3.0 | 6 | 1.6037 | 0.4038 |
| No log | 4.0 | 8 | 1.5960 | 0.4038 |
| 1.6041 | 5.0 | 10 | 1.5820 | 0.4038 |
| 1.6041 | 6.0 | 12 | 1.5578 | 0.4038 |
| 1.6041 | 7.0 | 14 | 1.5218 | 0.4038 |
| 1.6041 | 8.0 | 16 | 1.4849 | 0.4038 |
| 1.6041 | 9.0 | 18 | 1.4459 | 0.4038 |
| 1.4962 | 10.0 | 20 | 1.4109 | 0.4038 |
| 1.4962 | 11.0 | 22 | 1.3941 | 0.4038 |
| 1.4962 | 12.0 | 24 | 1.3865 | 0.4038 |
| 1.4962 | 13.0 | 26 | 1.3754 | 0.4038 |
| 1.4962 | 14.0 | 28 | 1.3655 | 0.4038 |
| 1.3392 | 15.0 | 30 | 1.3794 | 0.4038 |
| 1.3392 | 16.0 | 32 | 1.3800 | 0.4038 |
| 1.3392 | 17.0 | 34 | 1.3404 | 0.4038 |
| 1.3392 | 18.0 | 36 | 1.3337 | 0.4038 |
| 1.3392 | 19.0 | 38 | 1.3602 | 0.4038 |
| 1.2738 | 20.0 | 40 | 1.3384 | 0.4038 |
| 1.2738 | 21.0 | 42 | 1.3248 | 0.4038 |
| 1.2738 | 22.0 | 44 | 1.2693 | 0.4038 |
| 1.2738 | 23.0 | 46 | 1.2395 | 0.4038 |
| 1.2738 | 24.0 | 48 | 1.2427 | 0.4038 |
| 1.2283 | 25.0 | 50 | 1.2885 | 0.4038 |
| 1.2283 | 26.0 | 52 | 1.2916 | 0.4038 |
| 1.2283 | 27.0 | 54 | 1.2353 | 0.4038 |
| 1.2283 | 28.0 | 56 | 1.2032 | 0.4038 |
| 1.2283 | 29.0 | 58 | 1.2100 | 0.5577 |
| 1.1804 | 30.0 | 60 | 1.2110 | 0.6154 |
| 1.1804 | 31.0 | 62 | 1.1710 | 0.6346 |
| 1.1804 | 32.0 | 64 | 1.1323 | 0.6154 |
| 1.1804 | 33.0 | 66 | 1.1083 | 0.5962 |
| 1.1804 | 34.0 | 68 | 1.0935 | 0.5962 |
| 1.0925 | 35.0 | 70 | 1.0853 | 0.6346 |
| 1.0925 | 36.0 | 72 | 1.0622 | 0.6731 |
| 1.0925 | 37.0 | 74 | 1.0154 | 0.7115 |
| 1.0925 | 38.0 | 76 | 0.9901 | 0.7115 |
| 1.0925 | 39.0 | 78 | 0.9925 | 0.6923 |
| 0.9981 | 40.0 | 80 | 0.9865 | 0.6731 |
| 0.9981 | 41.0 | 82 | 0.9540 | 0.6731 |
| 0.9981 | 42.0 | 84 | 0.9316 | 0.7115 |
| 0.9981 | 43.0 | 86 | 0.9304 | 0.7115 |
| 0.9981 | 44.0 | 88 | 0.9246 | 0.6923 |
| 0.9102 | 45.0 | 90 | 0.8785 | 0.7115 |
| 0.9102 | 46.0 | 92 | 0.8422 | 0.7115 |
| 0.9102 | 47.0 | 94 | 0.8381 | 0.7115 |
| 0.9102 | 48.0 | 96 | 0.8359 | 0.7115 |
| 0.9102 | 49.0 | 98 | 0.8444 | 0.7115 |
| 0.8496 | 50.0 | 100 | 0.8287 | 0.6731 |
| 0.8496 | 51.0 | 102 | 0.7973 | 0.6923 |
| 0.8496 | 52.0 | 104 | 0.7799 | 0.6923 |
| 0.8496 | 53.0 | 106 | 0.7780 | 0.6923 |
| 0.8496 | 54.0 | 108 | 0.7820 | 0.7115 |
| 0.7808 | 55.0 | 110 | 0.7896 | 0.7115 |
| 0.7808 | 56.0 | 112 | 0.7737 | 0.6923 |
| 0.7808 | 57.0 | 114 | 0.7631 | 0.6731 |
| 0.7808 | 58.0 | 116 | 0.7635 | 0.6538 |
| 0.7808 | 59.0 | 118 | 0.7779 | 0.6538 |
| 0.757 | 60.0 | 120 | 0.7990 | 0.6731 |
| 0.757 | 61.0 | 122 | 0.8222 | 0.6538 |
| 0.757 | 62.0 | 124 | 0.8204 | 0.6538 |
| 0.757 | 63.0 | 126 | 0.7964 | 0.6731 |
| 0.757 | 64.0 | 128 | 0.7818 | 0.6538 |
| 0.6919 | 65.0 | 130 | 0.7796 | 0.6346 |
| 0.6919 | 66.0 | 132 | 0.7831 | 0.6346 |
| 0.6919 | 67.0 | 134 | 0.7867 | 0.6346 |
| 0.6919 | 68.0 | 136 | 0.7856 | 0.6346 |
| 0.6919 | 69.0 | 138 | 0.7793 | 0.6538 |
| 0.6722 | 70.0 | 140 | 0.7736 | 0.6538 |
| 0.6722 | 71.0 | 142 | 0.7682 | 0.6538 |
| 0.6722 | 72.0 | 144 | 0.7681 | 0.6538 |
| 0.6722 | 73.0 | 146 | 0.7672 | 0.6538 |
| 0.6722 | 74.0 | 148 | 0.7655 | 0.6538 |
| 0.6642 | 75.0 | 150 | 0.7645 | 0.6538 |
| 0.6642 | 76.0 | 152 | 0.7658 | 0.6538 |
| 0.6642 | 77.0 | 154 | 0.7677 | 0.6538 |
| 0.6642 | 78.0 | 156 | 0.7683 | 0.6538 |
| 0.6642 | 79.0 | 158 | 0.7684 | 0.6538 |
| 0.6491 | 80.0 | 160 | 0.7686 | 0.6538 |
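The headline numbers at the top of the card (loss 1.0154, accuracy 0.7115) match the epoch-37 row rather than the final epoch, so the reported checkpoint appears to be selected on validation accuracy rather than taken from the end of training. One plausible selection rule, sketched over a few rows from the table above (the earliest-epoch tie-break is an assumption, not something the card states):

```python
# (epoch, validation_loss, accuracy) for a few checkpoints from the table above
rows = [
    (28, 1.2032, 0.4038),
    (37, 1.0154, 0.7115),
    (46, 0.8422, 0.7115),
    (57, 0.7631, 0.6731),
    (80, 0.7686, 0.6538),
]

# Highest accuracy wins; ties are broken by the earliest epoch (assumed rule).
best = max(rows, key=lambda r: (r[2], -r[0]))
print(best)  # -> (37, 1.0154, 0.7115), the numbers reported at the top of the card
```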

Framework versions

  • Transformers 4.36.2
  • PyTorch 2.1.2+cu118
  • Datasets 2.16.1
  • Tokenizers 0.15.0