swinv2-base-patch4-window8-256-dmae-humeda-DAV16

This model is a fine-tuned version of microsoft/swinv2-base-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0641
  • Accuracy: 0.75
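
Since the card does not yet document usage, here is a minimal inference sketch using the standard transformers image-classification API. It assumes the checkpoint is published on the Hub under the repo id below; `example.jpg` is a placeholder input path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "RobertoSonic/swinv2-base-patch4-window8-256-dmae-humeda-DAV16"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

# "example.jpg" is a placeholder; the processor handles resizing to the
# 256x256 input resolution this checkpoint expects.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```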

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 42
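
The following is a hedged sketch of how these hyperparameters map onto the Hugging Face `TrainingArguments` API; `output_dir` and any defaults not listed above are assumptions, not taken from the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-base-patch4-window8-256-dmae-humeda-DAV16",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 32 * 4 = 128
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=42,
)
```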

Training results

The reported evaluation result (loss 1.0641, accuracy 0.75) corresponds to the checkpoint at epoch 29.87 (step 150).

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.8696  | 5    | 1.5391          | 0.4038   |
| No log        | 1.8696  | 10   | 1.4350          | 0.4231   |
| 6.5563        | 2.8696  | 15   | 1.3179          | 0.5385   |
| 6.5563        | 3.8696  | 20   | 1.2358          | 0.5385   |
| 4.5658        | 4.8696  | 25   | 0.9991          | 0.5769   |
| 4.5658        | 5.8696  | 30   | 0.9567          | 0.5385   |
| 4.5658        | 6.8696  | 35   | 0.8482          | 0.6154   |
| 2.7201        | 7.8696  | 40   | 1.1108          | 0.4615   |
| 2.7201        | 8.8696  | 45   | 0.7993          | 0.6923   |
| 1.9091        | 9.8696  | 50   | 0.8539          | 0.6154   |
| 1.9091        | 10.8696 | 55   | 0.8361          | 0.6731   |
| 1.6858        | 11.8696 | 60   | 0.8574          | 0.6731   |
| 1.6858        | 12.8696 | 65   | 0.9489          | 0.6346   |
| 1.6858        | 13.8696 | 70   | 0.8122          | 0.7115   |
| 1.2131        | 14.8696 | 75   | 0.8131          | 0.6538   |
| 1.2131        | 15.8696 | 80   | 0.8591          | 0.6731   |
| 0.8967        | 16.8696 | 85   | 0.9155          | 0.6538   |
| 0.8967        | 17.8696 | 90   | 0.9712          | 0.7115   |
| 0.8967        | 18.8696 | 95   | 0.9574          | 0.6731   |
| 0.8657        | 19.8696 | 100  | 1.0001          | 0.7115   |
| 0.8657        | 20.8696 | 105  | 1.1041          | 0.5962   |
| 0.6795        | 21.8696 | 110  | 1.0165          | 0.6923   |
| 0.6795        | 22.8696 | 115  | 1.0816          | 0.6538   |
| 0.5608        | 23.8696 | 120  | 1.1195          | 0.7308   |
| 0.5608        | 24.8696 | 125  | 1.0680          | 0.6923   |
| 0.5608        | 25.8696 | 130  | 1.1495          | 0.6923   |
| 0.6841        | 26.8696 | 135  | 1.0789          | 0.7115   |
| 0.6841        | 27.8696 | 140  | 1.0814          | 0.7115   |
| 0.4526        | 28.8696 | 145  | 1.0830          | 0.6923   |
| 0.4526        | 29.8696 | 150  | 1.0641          | 0.75     |
| 0.4526        | 30.8696 | 155  | 1.1337          | 0.6731   |
| 0.4067        | 31.8696 | 160  | 1.0867          | 0.6923   |
| 0.4067        | 32.8696 | 165  | 1.1103          | 0.6731   |
| 0.4003        | 33.8696 | 170  | 1.0909          | 0.6923   |
| 0.4003        | 34.8696 | 175  | 1.0950          | 0.6731   |
| 0.4415        | 35.8696 | 180  | 1.0712          | 0.7115   |
| 0.4415        | 36.8696 | 185  | 1.0569          | 0.7115   |
| 0.4415        | 37.8696 | 190  | 1.0618          | 0.6923   |
| 0.3715        | 38.8696 | 195  | 1.0770          | 0.6923   |
| 0.3715        | 39.8696 | 200  | 1.0976          | 0.6923   |
| 0.4178        | 40.8696 | 205  | 1.1072          | 0.6923   |
| 0.4178        | 41.8696 | 210  | 1.1047          | 0.6923   |

Framework versions

  • Transformers 4.47.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0