
beit-base-patch16-224-85-fold1

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1430
  • Accuracy: 0.9773
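
For inference, a minimal sketch along these lines should work once the checkpoint is available locally or on the Hugging Face Hub; the model id and image path below are placeholders, and the predicted label names depend on the (unspecified) imagefolder training data.

```python
from transformers import pipeline

# Minimal inference sketch. The model id below is a placeholder for wherever
# this checkpoint is hosted; replace it with the actual repo id or a local path.
classifier = pipeline(
    "image-classification",
    model="<hub-user>/beit-base-patch16-224-85-fold1",  # placeholder
)

predictions = classifier("example.jpg")  # path or URL to an input image
print(predictions)  # list of {"label": ..., "score": ...} dicts
```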

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
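
As a reference, the settings above map onto transformers.TrainingArguments roughly as in the following sketch; output_dir is a placeholder, anything not listed is assumed to keep its default value, and the exact training script is not known.

```python
from transformers import TrainingArguments

# Hedged sketch of the listed hyperparameters as TrainingArguments.
# output_dir is a placeholder; unspecified arguments are assumed defaults.
training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-85-fold1",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    adam_beta1=0.9,                 # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```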

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 2 | 0.7308 | 0.5455 |
| No log | 2.0 | 4 | 0.7927 | 0.7045 |
| No log | 3.0 | 6 | 0.9672 | 0.7045 |
| No log | 4.0 | 8 | 0.6257 | 0.7045 |
| 0.6404 | 5.0 | 10 | 0.4646 | 0.7955 |
| 0.6404 | 6.0 | 12 | 0.5648 | 0.7045 |
| 0.6404 | 7.0 | 14 | 0.4389 | 0.7727 |
| 0.6404 | 8.0 | 16 | 0.4523 | 0.75 |
| 0.6404 | 9.0 | 18 | 0.4698 | 0.75 |
| 0.455 | 10.0 | 20 | 0.3707 | 0.8409 |
| 0.455 | 11.0 | 22 | 0.3594 | 0.8182 |
| 0.455 | 12.0 | 24 | 0.6136 | 0.7273 |
| 0.455 | 13.0 | 26 | 0.3022 | 0.8864 |
| 0.455 | 14.0 | 28 | 0.2919 | 0.8409 |
| 0.3981 | 15.0 | 30 | 0.3612 | 0.8182 |
| 0.3981 | 16.0 | 32 | 0.2492 | 0.8864 |
| 0.3981 | 17.0 | 34 | 0.2460 | 0.9091 |
| 0.3981 | 18.0 | 36 | 0.2931 | 0.8636 |
| 0.3981 | 19.0 | 38 | 0.1822 | 0.9091 |
| 0.3257 | 20.0 | 40 | 0.2060 | 0.9091 |
| 0.3257 | 21.0 | 42 | 0.2195 | 0.8864 |
| 0.3257 | 22.0 | 44 | 0.2624 | 0.9091 |
| 0.3257 | 23.0 | 46 | 0.2384 | 0.9091 |
| 0.3257 | 24.0 | 48 | 0.1767 | 0.9318 |
| 0.2553 | 25.0 | 50 | 0.2040 | 0.9318 |
| 0.2553 | 26.0 | 52 | 0.1981 | 0.9091 |
| 0.2553 | 27.0 | 54 | 0.1835 | 0.9318 |
| 0.2553 | 28.0 | 56 | 0.1820 | 0.9318 |
| 0.2553 | 29.0 | 58 | 0.1466 | 0.9545 |
| 0.2083 | 30.0 | 60 | 0.1668 | 0.9318 |
| 0.2083 | 31.0 | 62 | 0.2229 | 0.9318 |
| 0.2083 | 32.0 | 64 | 0.1783 | 0.9545 |
| 0.2083 | 33.0 | 66 | 0.1944 | 0.8864 |
| 0.2083 | 34.0 | 68 | 0.3025 | 0.9091 |
| 0.2353 | 35.0 | 70 | 0.4457 | 0.8409 |
| 0.2353 | 36.0 | 72 | 0.2759 | 0.9318 |
| 0.2353 | 37.0 | 74 | 0.2179 | 0.9318 |
| 0.2353 | 38.0 | 76 | 0.3911 | 0.9091 |
| 0.2353 | 39.0 | 78 | 0.5785 | 0.8409 |
| 0.1782 | 40.0 | 80 | 0.2339 | 0.9318 |
| 0.1782 | 41.0 | 82 | 0.2302 | 0.9091 |
| 0.1782 | 42.0 | 84 | 0.3967 | 0.8864 |
| 0.1782 | 43.0 | 86 | 0.4447 | 0.8636 |
| 0.1782 | 44.0 | 88 | 0.2020 | 0.9091 |
| 0.2059 | 45.0 | 90 | 0.1911 | 0.9318 |
| 0.2059 | 46.0 | 92 | 0.2609 | 0.9091 |
| 0.2059 | 47.0 | 94 | 0.2925 | 0.9091 |
| 0.2059 | 48.0 | 96 | 0.2079 | 0.9318 |
| 0.2059 | 49.0 | 98 | 0.1853 | 0.9318 |
| 0.1706 | 50.0 | 100 | 0.2860 | 0.9318 |
| 0.1706 | 51.0 | 102 | 0.3735 | 0.8636 |
| 0.1706 | 52.0 | 104 | 0.1968 | 0.9318 |
| 0.1706 | 53.0 | 106 | 0.1722 | 0.9318 |
| 0.1706 | 54.0 | 108 | 0.3123 | 0.8636 |
| 0.1429 | 55.0 | 110 | 0.3297 | 0.8864 |
| 0.1429 | 56.0 | 112 | 0.1430 | 0.9773 |
| 0.1429 | 57.0 | 114 | 0.1134 | 0.9773 |
| 0.1429 | 58.0 | 116 | 0.2312 | 0.9091 |
| 0.1429 | 59.0 | 118 | 0.2826 | 0.9091 |
| 0.1325 | 60.0 | 120 | 0.2417 | 0.9091 |
| 0.1325 | 61.0 | 122 | 0.1393 | 0.9318 |
| 0.1325 | 62.0 | 124 | 0.2178 | 0.9318 |
| 0.1325 | 63.0 | 126 | 0.3991 | 0.9091 |
| 0.1325 | 64.0 | 128 | 0.3325 | 0.9091 |
| 0.1481 | 65.0 | 130 | 0.2327 | 0.9091 |
| 0.1481 | 66.0 | 132 | 0.2885 | 0.9091 |
| 0.1481 | 67.0 | 134 | 0.3576 | 0.9091 |
| 0.1481 | 68.0 | 136 | 0.2686 | 0.9318 |
| 0.1481 | 69.0 | 138 | 0.1717 | 0.9545 |
| 0.1237 | 70.0 | 140 | 0.1493 | 0.9545 |
| 0.1237 | 71.0 | 142 | 0.1429 | 0.9318 |
| 0.1237 | 72.0 | 144 | 0.1790 | 0.9318 |
| 0.1237 | 73.0 | 146 | 0.1590 | 0.9318 |
| 0.1237 | 74.0 | 148 | 0.1971 | 0.8864 |
| 0.105 | 75.0 | 150 | 0.2229 | 0.9318 |
| 0.105 | 76.0 | 152 | 0.1789 | 0.8864 |
| 0.105 | 77.0 | 154 | 0.1671 | 0.9545 |
| 0.105 | 78.0 | 156 | 0.2435 | 0.9318 |
| 0.105 | 79.0 | 158 | 0.2658 | 0.9318 |
| 0.0923 | 80.0 | 160 | 0.2092 | 0.9318 |
| 0.0923 | 81.0 | 162 | 0.1748 | 0.9318 |
| 0.0923 | 82.0 | 164 | 0.1727 | 0.9318 |
| 0.0923 | 83.0 | 166 | 0.1945 | 0.9091 |
| 0.0923 | 84.0 | 168 | 0.2429 | 0.9318 |
| 0.1033 | 85.0 | 170 | 0.2796 | 0.9318 |
| 0.1033 | 86.0 | 172 | 0.2548 | 0.9318 |
| 0.1033 | 87.0 | 174 | 0.2379 | 0.9091 |
| 0.1033 | 88.0 | 176 | 0.2409 | 0.9091 |
| 0.1033 | 89.0 | 178 | 0.2421 | 0.9091 |
| 0.1073 | 90.0 | 180 | 0.2332 | 0.9091 |
| 0.1073 | 91.0 | 182 | 0.2231 | 0.9091 |
| 0.1073 | 92.0 | 184 | 0.2153 | 0.9318 |
| 0.1073 | 93.0 | 186 | 0.2088 | 0.9318 |
| 0.1073 | 94.0 | 188 | 0.2058 | 0.9318 |
| 0.104 | 95.0 | 190 | 0.2040 | 0.9318 |
| 0.104 | 96.0 | 192 | 0.2046 | 0.9318 |
| 0.104 | 97.0 | 194 | 0.2043 | 0.9318 |
| 0.104 | 98.0 | 196 | 0.2056 | 0.9318 |
| 0.104 | 99.0 | 198 | 0.2081 | 0.9318 |
| 0.0896 | 100.0 | 200 | 0.2097 | 0.9318 |

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
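
As a quick sanity check, the versions above can be compared against a local environment (a minimal sketch; package names as published on PyPI):

```python
import datasets
import tokenizers
import torch
import transformers

# Print installed versions to compare against the ones listed above.
print("Transformers:", transformers.__version__)  # expected 4.40.2
print("PyTorch:", torch.__version__)              # expected 2.2.1+cu121
print("Datasets:", datasets.__version__)          # expected 2.19.1
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```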