
cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-v2_more_Data

This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on an image-classification dataset loaded with the imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 1.0009
  • Accuracy: 0.5928
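
A minimal inference sketch using the standard transformers image-classification API (the input image path is hypothetical):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "sai17/cards_bottom_left_swin-tiny-patch4-window7-224-finetuned-v2_more_Data"

# The processor resizes/normalizes to the 224x224 input the Swin backbone expects.
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example_card.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```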

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
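
The card only indicates that the data was loaded with the datasets imagefolder builder. A minimal sketch of that loading step, assuming a class-per-subdirectory layout (the data directory path is hypothetical):

```python
from datasets import load_dataset

# Hypothetical layout: data_dir/<class_name>/<image files>
dataset = load_dataset("imagefolder", data_dir="path/to/cards_bottom_left")

labels = dataset["train"].features["label"].names
print(len(dataset["train"]), "training images,", len(labels), "classes")
```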

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
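
A sketch of how these hyperparameters map onto transformers TrainingArguments (the output directory and evaluation strategy are assumptions, not taken from the card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-v2",  # assumed name
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 effective train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",     # assumed; matches the per-epoch results below
)
```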

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.559         | 1.0   | 1362  | 1.3402          | 0.4189   |
| 1.5165        | 2.0   | 2725  | 1.2308          | 0.4647   |
| 1.484         | 3.0   | 4087  | 1.1676          | 0.4954   |
| 1.5037        | 4.0   | 5450  | 1.1206          | 0.5198   |
| 1.4489        | 5.0   | 6812  | 1.1162          | 0.5284   |
| 1.4335        | 6.0   | 8175  | 1.1395          | 0.5047   |
| 1.4281        | 7.0   | 9537  | 1.0606          | 0.5445   |
| 1.4219        | 8.0   | 10900 | 1.0754          | 0.5408   |
| 1.3935        | 9.0   | 12262 | 1.0285          | 0.5604   |
| 1.3542        | 10.0  | 13625 | 1.0497          | 0.5453   |
| 1.3761        | 11.0  | 14987 | 1.0535          | 0.5450   |
| 1.3824        | 12.0  | 16350 | 1.0268          | 0.5591   |
| 1.3709        | 13.0  | 17712 | 1.0015          | 0.5690   |
| 1.3361        | 14.0  | 19075 | 1.0266          | 0.5595   |
| 1.3673        | 15.0  | 20437 | 0.9988          | 0.5772   |
| 1.376         | 16.0  | 21800 | 0.9950          | 0.5744   |
| 1.3486        | 17.0  | 23162 | 0.9837          | 0.5784   |
| 1.3333        | 18.0  | 24525 | 0.9771          | 0.5827   |
| 1.347         | 19.0  | 25887 | 0.9895          | 0.5770   |
| 1.3381        | 20.0  | 27250 | 0.9709          | 0.5820   |
| 1.3385        | 21.0  | 28612 | 0.9704          | 0.5833   |
| 1.336         | 22.0  | 29975 | 0.9646          | 0.5885   |
| 1.3372        | 23.0  | 31337 | 0.9653          | 0.5879   |
| 1.2979        | 24.0  | 32700 | 0.9867          | 0.5814   |
| 1.2948        | 25.0  | 34062 | 0.9633          | 0.5870   |
| 1.2767        | 26.0  | 35425 | 0.9578          | 0.5877   |
| 1.3012        | 27.0  | 36787 | 0.9709          | 0.5867   |
| 1.2667        | 28.0  | 38150 | 0.9648          | 0.5899   |
| 1.3           | 29.0  | 39512 | 0.9560          | 0.5930   |
| 1.2735        | 30.0  | 40875 | 0.9595          | 0.5949   |
| 1.2895        | 31.0  | 42237 | 0.9851          | 0.5809   |
| 1.2234        | 32.0  | 43600 | 0.9601          | 0.5931   |
| 1.2212        | 33.0  | 44962 | 0.9800          | 0.5917   |
| 1.2483        | 34.0  | 46325 | 0.9662          | 0.5982   |
| 1.2507        | 35.0  | 47687 | 0.9657          | 0.5910   |
| 1.2539        | 36.0  | 49050 | 0.9954          | 0.5783   |
| 1.2491        | 37.0  | 50412 | 0.9718          | 0.5924   |
| 1.2397        | 38.0  | 51775 | 0.9769          | 0.5930   |
| 1.1903        | 39.0  | 53137 | 0.9717          | 0.5945   |
| 1.2475        | 40.0  | 54500 | 0.9995          | 0.5855   |
| 1.2371        | 41.0  | 55862 | 0.9861          | 0.5935   |
| 1.2561        | 42.0  | 57225 | 0.9856          | 0.5958   |
| 1.2069        | 43.0  | 58587 | 0.9913          | 0.5892   |
| 1.2188        | 44.0  | 59950 | 0.9902          | 0.5950   |
| 1.1732        | 45.0  | 61312 | 0.9892          | 0.5949   |
| 1.1705        | 46.0  | 62675 | 0.9991          | 0.5914   |
| 1.18          | 47.0  | 64037 | 0.9952          | 0.5925   |
| 1.2353        | 48.0  | 65400 | 0.9999          | 0.5933   |
| 1.2057        | 49.0  | 66762 | 1.0001          | 0.5920   |
| 1.1833        | 49.98 | 68100 | 1.0009          | 0.5928   |
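
The Accuracy column is presumably top-1 accuracy over the evaluation split. A sketch of a compute_metrics function that would produce it with the evaluate library (an assumption about the metric setup, not code taken from the training script):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```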

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.17.0
  • Tokenizers 0.15.2
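
A convenience snippet (not part of the original card) to check that a local environment matches the versions listed above:

```python
import transformers, torch, datasets, tokenizers

# Compare against the listed versions: 4.37.2 / 2.0.1+cu117 / 2.17.0 / 0.15.2
for module in (transformers, torch, datasets, tokenizers):
    print(module.__name__, module.__version__)
```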