
swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0291
  • Accuracy: 0.5714
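As a quick orientation, the snippet below sketches how a checkpoint like this can be loaded for inference with the Transformers image-classification pipeline. The repository id and image path are placeholders, not taken from this card.

```python
from transformers import pipeline
from PIL import Image

# Hypothetical repository id; substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

image = Image.open("example.jpg")  # illustrative path; any RGB image works
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```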

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 90
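For readers who want to set up a comparable run, the sketch below mirrors these hyperparameters with the standard `TrainingArguments` API. The output directory and the evaluation, saving, and best-model settings are assumptions, since the card does not state them.

```python
from transformers import TrainingArguments

# Sketch of a configuration matching the listed hyperparameters.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # gives the total train batch size of 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=90,
    evaluation_strategy="epoch",     # assumption: the results table logs one eval per epoch
    save_strategy="epoch",           # assumption
    load_best_model_at_end=True,     # assumption
    metric_for_best_model="accuracy",
)
```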

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.9799 | 0.4286 |
| No log | 2.0 | 2 | 0.9703 | 0.4286 |
| No log | 3.0 | 3 | 0.9703 | 0.4286 |
| No log | 4.0 | 4 | 0.9699 | 0.4286 |
| No log | 5.0 | 5 | 0.9699 | 0.4286 |
| No log | 6.0 | 6 | 0.9881 | 0.4286 |
| No log | 7.0 | 7 | 0.9881 | 0.4286 |
| No log | 8.0 | 8 | 1.0213 | 0.4286 |
| No log | 9.0 | 9 | 1.0213 | 0.4286 |
| 0.426 | 10.0 | 10 | 1.0291 | 0.5714 |
| 0.426 | 11.0 | 11 | 1.0291 | 0.5714 |
| 0.426 | 12.0 | 12 | 0.9996 | 0.5714 |
| 0.426 | 13.0 | 13 | 0.9996 | 0.5714 |
| 0.426 | 14.0 | 14 | 0.8998 | 0.5714 |
| 0.426 | 15.0 | 15 | 0.8998 | 0.5714 |
| 0.426 | 16.0 | 16 | 0.8356 | 0.5714 |
| 0.426 | 17.0 | 17 | 0.8356 | 0.5714 |
| 0.426 | 18.0 | 18 | 0.8575 | 0.5714 |
| 0.426 | 19.0 | 19 | 0.8575 | 0.5714 |
| 0.324 | 20.0 | 20 | 0.9310 | 0.4286 |
| 0.324 | 21.0 | 21 | 0.9310 | 0.4286 |
| 0.324 | 22.0 | 22 | 1.0029 | 0.4286 |
| 0.324 | 23.0 | 23 | 1.0029 | 0.4286 |
| 0.324 | 24.0 | 24 | 1.0582 | 0.4286 |
| 0.324 | 25.0 | 25 | 1.0582 | 0.4286 |
| 0.324 | 26.0 | 26 | 1.0812 | 0.4286 |
| 0.324 | 27.0 | 27 | 1.0812 | 0.4286 |
| 0.324 | 28.0 | 28 | 1.0345 | 0.4286 |
| 0.324 | 29.0 | 29 | 1.0345 | 0.4286 |
| 0.2536 | 30.0 | 30 | 0.9996 | 0.4286 |
| 0.2536 | 31.0 | 31 | 0.9996 | 0.4286 |
| 0.2536 | 32.0 | 32 | 0.9401 | 0.5714 |
| 0.2536 | 33.0 | 33 | 0.9401 | 0.5714 |
| 0.2536 | 34.0 | 34 | 0.8978 | 0.5714 |
| 0.2536 | 35.0 | 35 | 0.8978 | 0.5714 |
| 0.2536 | 36.0 | 36 | 0.9056 | 0.5714 |
| 0.2536 | 37.0 | 37 | 0.9056 | 0.5714 |
| 0.2536 | 38.0 | 38 | 0.9364 | 0.5714 |
| 0.2536 | 39.0 | 39 | 0.9364 | 0.5714 |
| 0.2176 | 40.0 | 40 | 1.0523 | 0.5714 |
| 0.2176 | 41.0 | 41 | 1.0523 | 0.5714 |
| 0.2176 | 42.0 | 42 | 1.1687 | 0.4286 |
| 0.2176 | 43.0 | 43 | 1.1687 | 0.4286 |
| 0.2176 | 44.0 | 44 | 1.1968 | 0.4286 |
| 0.2176 | 45.0 | 45 | 1.1968 | 0.4286 |
| 0.2176 | 46.0 | 46 | 1.1604 | 0.4286 |
| 0.2176 | 47.0 | 47 | 1.1604 | 0.4286 |
| 0.2176 | 48.0 | 48 | 1.0505 | 0.4286 |
| 0.2176 | 49.0 | 49 | 1.0505 | 0.4286 |
| 0.1597 | 50.0 | 50 | 0.9059 | 0.5714 |
| 0.1597 | 51.0 | 51 | 0.9059 | 0.5714 |
| 0.1597 | 52.0 | 52 | 0.8606 | 0.5714 |
| 0.1597 | 53.0 | 53 | 0.8606 | 0.5714 |
| 0.1597 | 54.0 | 54 | 0.8946 | 0.5714 |
| 0.1597 | 55.0 | 55 | 0.8946 | 0.5714 |
| 0.1597 | 56.0 | 56 | 0.9643 | 0.5714 |
| 0.1597 | 57.0 | 57 | 0.9643 | 0.5714 |
| 0.1597 | 58.0 | 58 | 1.0598 | 0.5714 |
| 0.1597 | 59.0 | 59 | 1.0598 | 0.5714 |
| 0.1231 | 60.0 | 60 | 1.1833 | 0.5714 |
| 0.1231 | 61.0 | 61 | 1.1833 | 0.5714 |
| 0.1231 | 62.0 | 62 | 1.2730 | 0.5714 |
| 0.1231 | 63.0 | 63 | 1.2730 | 0.5714 |
| 0.1231 | 64.0 | 64 | 1.3132 | 0.4286 |
| 0.1231 | 65.0 | 65 | 1.3132 | 0.4286 |
| 0.1231 | 66.0 | 66 | 1.3025 | 0.4286 |
| 0.1231 | 67.0 | 67 | 1.3025 | 0.4286 |
| 0.1231 | 68.0 | 68 | 1.2702 | 0.4286 |
| 0.1231 | 69.0 | 69 | 1.2702 | 0.4286 |
| 0.1364 | 70.0 | 70 | 1.2411 | 0.4286 |
| 0.1364 | 71.0 | 71 | 1.2411 | 0.4286 |
| 0.1364 | 72.0 | 72 | 1.2222 | 0.4286 |
| 0.1364 | 73.0 | 73 | 1.2222 | 0.4286 |
| 0.1364 | 74.0 | 74 | 1.2257 | 0.4286 |
| 0.1364 | 75.0 | 75 | 1.2257 | 0.4286 |
| 0.1364 | 76.0 | 76 | 1.2552 | 0.4286 |
| 0.1364 | 77.0 | 77 | 1.2552 | 0.4286 |
| 0.1364 | 78.0 | 78 | 1.2701 | 0.5714 |
| 0.1364 | 79.0 | 79 | 1.2701 | 0.5714 |
| 0.0937 | 80.0 | 80 | 1.2753 | 0.5714 |
| 0.0937 | 81.0 | 81 | 1.2753 | 0.5714 |
| 0.0937 | 82.0 | 82 | 1.2797 | 0.5714 |
| 0.0937 | 83.0 | 83 | 1.2797 | 0.5714 |
| 0.0937 | 84.0 | 84 | 1.2840 | 0.5714 |
| 0.0937 | 85.0 | 85 | 1.2840 | 0.5714 |
| 0.0937 | 86.0 | 86 | 1.2895 | 0.5714 |
| 0.0937 | 87.0 | 87 | 1.2895 | 0.5714 |
| 0.0937 | 88.0 | 88 | 1.2917 | 0.5714 |
| 0.0937 | 89.0 | 89 | 1.2917 | 0.5714 |
| 0.1082 | 90.0 | 90 | 1.2931 | 0.5714 |

Framework versions

  • Transformers 4.30.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3
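A small optional check for confirming that a local environment matches these versions before attempting to reproduce the results; the imports are simply the packages listed above.

```python
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # expected 4.30.1
print("PyTorch:", torch.__version__)              # expected 2.0.1+cu118
print("Datasets:", datasets.__version__)          # expected 2.12.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.13.3
```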