---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-finetuned-PE
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8720186154741129
---

# swinv2-tiny-patch4-window8-256-finetuned-PE

This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3083
- Accuracy: 0.8720

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.92  | 9    | 0.6391          | 0.6690   |
| 0.6873        | 1.95  | 19   | 0.5293          | 0.7376   |
| 0.6233        | 2.97  | 29   | 0.6385          | 0.6853   |
| 0.5976        | 4.0   | 39   | 0.4447          | 0.7970   |
| 0.5552        | 4.92  | 48   | 0.4029          | 0.8266   |
| 0.552         | 5.95  | 58   | 0.3675          | 0.8429   |
| 0.5055        | 6.97  | 68   | 0.3409          | 0.8581   |
| 0.4816        | 8.0   | 78   | 0.3322          | 0.8615   |
| 0.455         | 8.92  | 87   | 0.3166          | 0.8639   |
| 0.4428        | 9.95  | 97   | 0.3100          | 0.8662   |
| 0.4398        | 10.97 | 107  | 0.3713          | 0.8365   |
| 0.4318        | 12.0  | 117  | 0.4019          | 0.8284   |
| 0.4431        | 12.92 | 126  | 0.3074          | 0.8714   |
| 0.4437        | 13.95 | 136  | 0.3156          | 0.8656   |
| 0.4482        | 14.97 | 146  | 0.3516          | 0.8476   |
| 0.4353        | 16.0  | 156  | 0.3162          | 0.8598   |
| 0.4218        | 16.92 | 165  | 0.3018          | 0.8685   |
| 0.4111        | 17.95 | 175  | 0.3143          | 0.8650   |
| 0.4224        | 18.97 | 185  | 0.3146          | 0.8592   |
| 0.4114        | 20.0  | 195  | 0.3097          | 0.8691   |
| 0.4103        | 20.92 | 204  | 0.3038          | 0.8703   |
| 0.3989        | 21.95 | 214  | 0.2893          | 0.8796   |
| 0.3908        | 22.97 | 224  | 0.2956          | 0.8755   |
| 0.3923        | 24.0  | 234  | 0.3041          | 0.8685   |
| 0.3842        | 24.92 | 243  | 0.2876          | 0.8749   |
| 0.3808        | 25.95 | 253  | 0.2907          | 0.8767   |
| 0.382         | 26.97 | 263  | 0.3018          | 0.8738   |
| 0.3816        | 28.0  | 273  | 0.2812          | 0.8825   |
| 0.379         | 28.92 | 282  | 0.2960          | 0.8633   |
| 0.3858        | 29.95 | 292  | 0.2960          | 0.8743   |
| 0.3546        | 30.97 | 302  | 0.2850          | 0.8807   |
| 0.3656        | 32.0  | 312  | 0.2905          | 0.8784   |
| 0.3707        | 32.92 | 321  | 0.2926          | 0.8743   |
| 0.3651        | 33.95 | 331  | 0.2941          | 0.8796   |
| 0.3584        | 34.97 | 341  | 0.3133          | 0.8615   |
| 0.36          | 36.0  | 351  | 0.3181          | 0.8679   |
| 0.3496        | 36.92 | 360  | 0.3036          | 0.8685   |
| 0.3458        | 37.95 | 370  | 0.2939          | 0.8732   |
| 0.3431        | 38.97 | 380  | 0.3062          | 0.8703   |
| 0.3512        | 40.0  | 390  | 0.2914          | 0.8755   |
| 0.3512        | 40.92 | 399  | 0.3164          | 0.8674   |
| 0.3403        | 41.95 | 409  | 0.3063          | 0.8679   |
| 0.3423        | 42.97 | 419  | 0.3018          | 0.8720   |
| 0.3312        | 44.0  | 429  | 0.3094          | 0.8697   |
| 0.3365        | 44.92 | 438  | 0.3062          | 0.8755   |
| 0.3319        | 45.95 | 448  | 0.3081          | 0.8720   |
| 0.3409        | 46.15 | 450  | 0.3083          | 0.8720   |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
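
### Training configuration sketch

The hyperparameters above map directly onto the Hugging Face `TrainingArguments` API. The sketch below is a minimal reconstruction, not the original training script: `output_dir` is a placeholder, the per-epoch evaluation and logging strategies are assumptions inferred from the results table, and the per-device batch size of 256 with 4 gradient accumulation steps matches the reported total train batch size of 1024 only on a single device.

```python
from transformers import TrainingArguments

# Minimal sketch matching the hyperparameters listed above.
# output_dir and the evaluation/logging strategies are assumptions;
# dataset preparation and Trainer wiring are omitted.
training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-finetuned-PE",  # placeholder
    learning_rate=2.5e-4,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    gradient_accumulation_steps=4,   # 256 * 4 = 1024 effective batch size on one device
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",     # assumed; the results table reports one eval per epoch
    logging_strategy="epoch",
)
# Adam betas and epsilon are left at their defaults (0.9, 0.999, 1e-08),
# which match the values reported in this card.
```

## How to use

A minimal inference sketch with the Transformers Auto classes is shown below. The checkpoint identifier and the input image path are placeholders; substitute the Hub repository id or local directory holding this fine-tuned model.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder: replace with the Hub repo id or local path of this checkpoint.
checkpoint = "swinv2-tiny-patch4-window8-256-finetuned-PE"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)
model.eval()

# Placeholder input image.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = model.config.id2label[logits.argmax(-1).item()]
print(predicted_class)
```

Because the model was fine-tuned on an `imagefolder` dataset, the `id2label` mapping stored in the model config reflects the class folder names used during training.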