---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: swin-tiny-patch4-window7-224-finetuned-eurosat
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8532818532818532
---

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results):

- Loss: 0.6144
- Accuracy: 0.8533
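
Since the card does not yet include usage instructions, here is a minimal inference sketch using the standard Transformers image-classification API. The checkpoint identifier and input image path below are placeholders (assumptions), not values taken from this repository.

```python
# Minimal inference sketch; checkpoint id and image path are placeholders (assumptions).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "swin-tiny-patch4-window7-224-finetuned-eurosat"  # adjust to the actual Hub id or local path

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")  # resizes and normalizes to 224x224

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```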

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
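
Details are not documented, but the metadata indicates the model was trained on a dataset loaded with the generic imagefolder builder. For reference, the sketch below shows how such a dataset is typically loaded with the datasets library; the directory path is a placeholder, not the data actually used for this model.

```python
# Sketch of loading an imagefolder-style dataset; data_dir is a placeholder (assumption).
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/images")
# Expected layout: path/to/images/<class_name>/<image files>
print(dataset["train"].features["label"].names)  # class names are inferred from folder names
```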

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a corresponding TrainingArguments sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
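
For reproduction purposes, the hyperparameters above map roughly onto the following TrainingArguments for the Hugging Face Trainer. This is a sketch reconstructed from the list, not the original training script; the output directory and the evaluation, logging, and model-selection settings are assumptions.

```python
# Hedged sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective train batch size: 32 * 4 = 128
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",     # assumption; per-epoch evaluation matches the results table
    logging_strategy="epoch",
    load_best_model_at_end=True,     # assumption
    metric_for_best_model="accuracy",
)
```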

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1952 | 0.99 | 18 | 1.5914 | 0.5985 |
| 1.3705 | 1.97 | 36 | 1.2164 | 0.6873 |
| 1.026 | 2.96 | 54 | 0.9974 | 0.7375 |
| 0.829 | 4.0 | 73 | 0.7667 | 0.7722 |
| 0.6513 | 4.99 | 91 | 0.6674 | 0.8224 |
| 0.5516 | 5.97 | 109 | 0.5810 | 0.8378 |
| 0.4978 | 6.96 | 127 | 0.5498 | 0.8263 |
| 0.4568 | 8.0 | 146 | 0.5999 | 0.8185 |
| 0.4047 | 8.99 | 164 | 0.5211 | 0.8494 |
| 0.3696 | 9.97 | 182 | 0.5201 | 0.8571 |
| 0.3479 | 10.96 | 200 | 0.5310 | 0.8263 |
| 0.329 | 12.0 | 219 | 0.5439 | 0.8494 |
| 0.3376 | 12.99 | 237 | 0.5050 | 0.8494 |
| 0.2804 | 13.97 | 255 | 0.5709 | 0.8263 |
| 0.2941 | 14.96 | 273 | 0.6376 | 0.8147 |
| 0.3026 | 16.0 | 292 | 0.5447 | 0.8494 |
| 0.2578 | 16.99 | 310 | 0.5056 | 0.8803 |
| 0.219 | 17.97 | 328 | 0.5620 | 0.8610 |
| 0.2403 | 18.96 | 346 | 0.5582 | 0.8456 |
| 0.2258 | 20.0 | 365 | 0.5458 | 0.8494 |
| 0.2265 | 20.99 | 383 | 0.5411 | 0.8533 |
| 0.1893 | 21.97 | 401 | 0.5477 | 0.8494 |
| 0.1896 | 22.96 | 419 | 0.5125 | 0.8494 |
| 0.1976 | 24.0 | 438 | 0.5672 | 0.8340 |
| 0.1725 | 24.99 | 456 | 0.5581 | 0.8456 |
| 0.168 | 25.97 | 474 | 0.5965 | 0.8456 |
| 0.1821 | 26.96 | 492 | 0.5567 | 0.8610 |
| 0.1805 | 28.0 | 511 | 0.5998 | 0.8533 |
| 0.1616 | 28.99 | 529 | 0.5451 | 0.8533 |
| 0.1467 | 29.97 | 547 | 0.5574 | 0.8494 |
| 0.1439 | 30.96 | 565 | 0.5707 | 0.8571 |
| 0.13 | 32.0 | 584 | 0.6019 | 0.8378 |
| 0.1353 | 32.99 | 602 | 0.5952 | 0.8610 |
| 0.1329 | 33.97 | 620 | 0.6262 | 0.8378 |
| 0.1258 | 34.96 | 638 | 0.6314 | 0.8456 |
| 0.1408 | 36.0 | 657 | 0.5761 | 0.8494 |
| 0.1197 | 36.99 | 675 | 0.5703 | 0.8610 |
| 0.1208 | 37.97 | 693 | 0.6247 | 0.8456 |
| 0.1197 | 38.96 | 711 | 0.6026 | 0.8533 |
| 0.1271 | 40.0 | 730 | 0.5953 | 0.8533 |
| 0.1053 | 40.99 | 748 | 0.6070 | 0.8533 |
| 0.0846 | 41.97 | 766 | 0.6094 | 0.8610 |
| 0.1206 | 42.96 | 784 | 0.5912 | 0.8494 |
| 0.1225 | 44.0 | 803 | 0.6074 | 0.8494 |
| 0.1184 | 44.99 | 821 | 0.5943 | 0.8494 |
| 0.1027 | 45.97 | 839 | 0.6084 | 0.8494 |
| 0.1113 | 46.96 | 857 | 0.6034 | 0.8533 |
| 0.0945 | 48.0 | 876 | 0.6106 | 0.8494 |
| 0.1159 | 48.99 | 894 | 0.6143 | 0.8533 |
| 0.0963 | 49.32 | 900 | 0.6144 | 0.8533 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2