---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: swin-tiny-patch4-window7-224-finetuned-eurosat
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9503105590062112
---

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list):

- Loss: 0.1879
- Accuracy: 0.9503
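
The card does not include a usage snippet; below is a minimal inference sketch with 🤗 Transformers. The repository id (`happybot/swin-tiny-patch4-window7-224-finetuned-eurosat`) and the image path are assumptions for illustration, not confirmed by the card.

```python
# Minimal inference sketch (assumed repo id and image path; adjust to your setup).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "happybot/swin-tiny-patch4-window7-224-finetuned-eurosat"  # assumed repo id

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```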

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
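
The card gives no details about the data beyond the `imagefolder` dataset type in the metadata. Below is a minimal sketch of how such a dataset is typically loaded with 🤗 Datasets; the directory path and the train/test split are hypothetical.

```python
# Sketch only: the actual data location and layout are not documented in this card.
from datasets import load_dataset

# "path/to/eurosat" is a hypothetical directory with one sub-folder per class label.
dataset = load_dataset("imagefolder", data_dir="path/to/eurosat")
dataset = dataset["train"].train_test_split(test_size=0.1, seed=42)  # illustrative split
print(dataset)
```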

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
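
The original training script is not included in the card; the following is a sketch of `TrainingArguments` matching the hyperparameters listed above. The output directory and the per-epoch evaluation/saving strategy are assumptions (the results table suggests one evaluation per epoch).

```python
# Sketch of TrainingArguments reproducing the hyperparameters above
# (the actual training script is not part of this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",     # assumption: per-epoch evaluation, as in the results table
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```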

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6144 | 0.96 | 11 | 1.0071 | 0.8447 |
| 0.8116 | 2.0 | 23 | 0.5227 | 0.8571 |
| 0.6078 | 2.96 | 34 | 0.4213 | 0.8571 |
| 0.5151 | 4.0 | 46 | 0.3357 | 0.8758 |
| 0.4499 | 4.96 | 57 | 0.3467 | 0.9068 |
| 0.4254 | 6.0 | 69 | 0.2344 | 0.9193 |
| 0.3266 | 6.96 | 80 | 0.2107 | 0.9379 |
| 0.3018 | 8.0 | 92 | 0.1818 | 0.9379 |
| 0.3339 | 8.96 | 103 | 0.1928 | 0.9379 |
| 0.2594 | 10.0 | 115 | 0.1936 | 0.9317 |
| 0.2476 | 10.96 | 126 | 0.1543 | 0.9317 |
| 0.2294 | 12.0 | 138 | 0.1827 | 0.9441 |
| 0.2193 | 12.96 | 149 | 0.1676 | 0.9317 |
| 0.1924 | 14.0 | 161 | 0.1553 | 0.9379 |
| 0.2148 | 14.96 | 172 | 0.1387 | 0.9379 |
| 0.1674 | 16.0 | 184 | 0.1449 | 0.9379 |
| 0.1815 | 16.96 | 195 | 0.1833 | 0.9317 |
| 0.1861 | 18.0 | 207 | 0.1818 | 0.9441 |
| 0.1629 | 18.96 | 218 | 0.2484 | 0.9255 |
| 0.1609 | 20.0 | 230 | 0.1661 | 0.9503 |
| 0.132 | 20.96 | 241 | 0.1538 | 0.9441 |
| 0.1468 | 22.0 | 253 | 0.1597 | 0.9565 |
| 0.0926 | 22.96 | 264 | 0.1613 | 0.9565 |
| 0.102 | 24.0 | 276 | 0.1420 | 0.9441 |
| 0.1178 | 24.96 | 287 | 0.1429 | 0.9441 |
| 0.1311 | 26.0 | 299 | 0.1832 | 0.9503 |
| 0.0982 | 26.96 | 310 | 0.2140 | 0.9441 |
| 0.0865 | 28.0 | 322 | 0.2040 | 0.9565 |
| 0.0919 | 28.96 | 333 | 0.1878 | 0.9503 |
| 0.085 | 30.0 | 345 | 0.1935 | 0.9565 |
| 0.0918 | 30.96 | 356 | 0.1787 | 0.9503 |
| 0.0939 | 32.0 | 368 | 0.1932 | 0.9441 |
| 0.1236 | 32.96 | 379 | 0.1736 | 0.9379 |
| 0.0819 | 34.0 | 391 | 0.1798 | 0.9503 |
| 0.0906 | 34.96 | 402 | 0.1937 | 0.9379 |
| 0.0865 | 36.0 | 414 | 0.1809 | 0.9379 |
| 0.0709 | 36.96 | 425 | 0.2062 | 0.9379 |
| 0.0781 | 38.0 | 437 | 0.1749 | 0.9503 |
| 0.0772 | 38.96 | 448 | 0.2176 | 0.9441 |
| 0.0535 | 40.0 | 460 | 0.2164 | 0.9503 |
| 0.0608 | 40.96 | 471 | 0.1976 | 0.9503 |
| 0.072 | 42.0 | 483 | 0.1837 | 0.9441 |
| 0.0657 | 42.96 | 494 | 0.2000 | 0.9565 |
| 0.0824 | 44.0 | 506 | 0.1865 | 0.9503 |
| 0.0584 | 44.96 | 517 | 0.1870 | 0.9565 |
| 0.0556 | 46.0 | 529 | 0.1863 | 0.9503 |
| 0.0516 | 46.96 | 540 | 0.1894 | 0.9503 |
| 0.06 | 47.83 | 550 | 0.1879 | 0.9503 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2