
beit-base-patch16-224-65-fold2

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3734
  • Accuracy: 0.9014

Model description

BEiT is a Vision Transformer pre-trained in a self-supervised, BERT-style fashion on images. This checkpoint fine-tunes the base variant (patch size 16, 224x224 input) of microsoft/beit-base-patch16-224 for image classification on a custom imagefolder dataset; the "fold2" suffix indicates the second fold of a cross-validation split.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
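As a minimal sketch of what these settings imply (not Transformers' internal implementation): the effective batch size is train_batch_size × gradient_accumulation_steps = 32 × 4 = 128, and with a linear scheduler and warmup_ratio 0.1, the learning rate ramps up over the first 10% of optimizer steps and then decays linearly to zero. The 300-step total below is read off the training log; it is an assumption for illustration, not a listed hyperparameter.

```python
def linear_warmup_lr(step, base_lr=5e-05, total_steps=300, warmup_ratio=0.1):
    """Linear LR schedule with warmup, mirroring the hyperparameters above.

    total_steps=300 is taken from the training log; the other defaults
    are the listed hyperparameters.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 300 * 0.1 = 30
    if step < warmup_steps:
        # Warmup: ramp linearly from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay: linearly from base_lr at the end of warmup down to 0.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# Effective batch size: per-device batch * gradient accumulation steps.
effective_batch = 32 * 4  # = 128, matching total_train_batch_size
```

In the Trainer API these correspond to the `learning_rate`, `lr_scheduler_type`, `warmup_ratio`, and `gradient_accumulation_steps` arguments of `TrainingArguments`.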

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9231  | 3    | 0.6830          | 0.5493   |
| No log        | 1.8462  | 6    | 0.6407          | 0.6479   |
| No log        | 2.7692  | 9    | 0.6612          | 0.5352   |
| 0.7094        | 4.0     | 13   | 0.6175          | 0.6901   |
| 0.7094        | 4.9231  | 16   | 0.5915          | 0.6761   |
| 0.7094        | 5.8462  | 19   | 0.5677          | 0.7042   |
| 0.6444        | 6.7692  | 22   | 0.5177          | 0.7746   |
| 0.6444        | 8.0     | 26   | 0.4928          | 0.7887   |
| 0.6444        | 8.9231  | 29   | 0.5641          | 0.6901   |
| 0.5574        | 9.8462  | 32   | 0.4372          | 0.8169   |
| 0.5574        | 10.7692 | 35   | 0.5677          | 0.7606   |
| 0.5574        | 12.0    | 39   | 0.5032          | 0.7606   |
| 0.543         | 12.9231 | 42   | 0.4745          | 0.8028   |
| 0.543         | 13.8462 | 45   | 0.4057          | 0.8310   |
| 0.543         | 14.7692 | 48   | 0.4013          | 0.7746   |
| 0.4499        | 16.0    | 52   | 0.3670          | 0.8310   |
| 0.4499        | 16.9231 | 55   | 0.4215          | 0.8592   |
| 0.4499        | 17.8462 | 58   | 0.4862          | 0.7746   |
| 0.3902        | 18.7692 | 61   | 0.5781          | 0.7324   |
| 0.3902        | 20.0    | 65   | 0.5249          | 0.8028   |
| 0.3902        | 20.9231 | 68   | 0.3937          | 0.8732   |
| 0.4029        | 21.8462 | 71   | 0.4132          | 0.8451   |
| 0.4029        | 22.7692 | 74   | 0.4178          | 0.8451   |
| 0.4029        | 24.0    | 78   | 0.7273          | 0.7183   |
| 0.3163        | 24.9231 | 81   | 0.4221          | 0.8451   |
| 0.3163        | 25.8462 | 84   | 0.4086          | 0.8732   |
| 0.3163        | 26.7692 | 87   | 0.3946          | 0.8732   |
| 0.2786        | 28.0    | 91   | 0.5320          | 0.8028   |
| 0.2786        | 28.9231 | 94   | 0.4132          | 0.8451   |
| 0.2786        | 29.8462 | 97   | 0.5542          | 0.7746   |
| 0.2763        | 30.7692 | 100  | 0.3734          | 0.9014   |
| 0.2763        | 32.0    | 104  | 0.4479          | 0.8310   |
| 0.2763        | 32.9231 | 107  | 0.3482          | 0.8592   |
| 0.25          | 33.8462 | 110  | 0.5442          | 0.7887   |
| 0.25          | 34.7692 | 113  | 0.4211          | 0.8732   |
| 0.25          | 36.0    | 117  | 0.4860          | 0.8592   |
| 0.2125        | 36.9231 | 120  | 0.4654          | 0.8451   |
| 0.2125        | 37.8462 | 123  | 0.4779          | 0.8592   |
| 0.2125        | 38.7692 | 126  | 0.5692          | 0.8310   |
| 0.2225        | 40.0    | 130  | 0.4912          | 0.8451   |
| 0.2225        | 40.9231 | 133  | 0.4528          | 0.8310   |
| 0.2225        | 41.8462 | 136  | 0.4470          | 0.7887   |
| 0.2225        | 42.7692 | 139  | 0.4251          | 0.8028   |
| 0.1991        | 44.0    | 143  | 0.4864          | 0.8028   |
| 0.1991        | 44.9231 | 146  | 0.4652          | 0.8592   |
| 0.1991        | 45.8462 | 149  | 0.5949          | 0.8310   |
| 0.164         | 46.7692 | 152  | 1.0035          | 0.7465   |
| 0.164         | 48.0    | 156  | 0.6393          | 0.8310   |
| 0.164         | 48.9231 | 159  | 0.9222          | 0.6761   |
| 0.1974        | 49.8462 | 162  | 1.0633          | 0.6901   |
| 0.1974        | 50.7692 | 165  | 0.6050          | 0.8310   |
| 0.1974        | 52.0    | 169  | 0.7134          | 0.8310   |
| 0.213         | 52.9231 | 172  | 0.6649          | 0.8310   |
| 0.213         | 53.8462 | 175  | 0.7126          | 0.8310   |
| 0.213         | 54.7692 | 178  | 0.6906          | 0.8169   |
| 0.1642        | 56.0    | 182  | 0.6956          | 0.8310   |
| 0.1642        | 56.9231 | 185  | 0.5828          | 0.8028   |
| 0.1642        | 57.8462 | 188  | 0.5866          | 0.8028   |
| 0.1657        | 58.7692 | 191  | 0.6172          | 0.8169   |
| 0.1657        | 60.0    | 195  | 0.7428          | 0.8310   |
| 0.1657        | 60.9231 | 198  | 0.8981          | 0.8169   |
| 0.1347        | 61.8462 | 201  | 0.7168          | 0.8169   |
| 0.1347        | 62.7692 | 204  | 0.8026          | 0.8451   |
| 0.1347        | 64.0    | 208  | 0.8639          | 0.8451   |
| 0.1335        | 64.9231 | 211  | 0.7604          | 0.8169   |
| 0.1335        | 65.8462 | 214  | 0.7993          | 0.8169   |
| 0.1335        | 66.7692 | 217  | 0.8330          | 0.8592   |
| 0.145         | 68.0    | 221  | 0.8143          | 0.8592   |
| 0.145         | 68.9231 | 224  | 0.7520          | 0.8592   |
| 0.145         | 69.8462 | 227  | 0.7216          | 0.8451   |
| 0.1658        | 70.7692 | 230  | 0.7968          | 0.8592   |
| 0.1658        | 72.0    | 234  | 0.7730          | 0.8732   |
| 0.1658        | 72.9231 | 237  | 0.7450          | 0.8732   |
| 0.1381        | 73.8462 | 240  | 0.7855          | 0.8732   |
| 0.1381        | 74.7692 | 243  | 0.8253          | 0.8592   |
| 0.1381        | 76.0    | 247  | 0.8065          | 0.8592   |
| 0.1306        | 76.9231 | 250  | 0.7778          | 0.8592   |
| 0.1306        | 77.8462 | 253  | 0.7814          | 0.8592   |
| 0.1306        | 78.7692 | 256  | 0.7335          | 0.8310   |
| 0.1027        | 80.0    | 260  | 0.7372          | 0.8451   |
| 0.1027        | 80.9231 | 263  | 0.7618          | 0.8451   |
| 0.1027        | 81.8462 | 266  | 0.7891          | 0.8451   |
| 0.1027        | 82.7692 | 269  | 0.8287          | 0.8592   |
| 0.1296        | 84.0    | 273  | 0.8412          | 0.8451   |
| 0.1296        | 84.9231 | 276  | 0.8014          | 0.8451   |
| 0.1296        | 85.8462 | 279  | 0.7530          | 0.8451   |
| 0.1162        | 86.7692 | 282  | 0.7243          | 0.8451   |
| 0.1162        | 88.0    | 286  | 0.7247          | 0.8451   |
| 0.1162        | 88.9231 | 289  | 0.7354          | 0.8451   |
| 0.1166        | 89.8462 | 292  | 0.7391          | 0.8592   |
| 0.1166        | 90.7692 | 295  | 0.7390          | 0.8592   |
| 0.1166        | 92.0    | 299  | 0.7374          | 0.8592   |
| 0.1031        | 92.3077 | 300  | 0.7373          | 0.8592   |

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
Model size

  • 85.8M parameters (F32, Safetensors)

Model repository: BilalMuftuoglu/beit-base-patch16-224-65-fold2, fine-tuned from microsoft/beit-base-patch16-224.