
beit-base-patch16-224-hasta-85-fold1

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9074
  • Accuracy: 0.7273
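
A minimal usage sketch with the transformers image-classification pipeline is shown below; the model id is taken from this card, while the image path is a placeholder.

```python
# Minimal inference sketch (assumes transformers and Pillow are installed;
# "image.jpg" is a placeholder path).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="BilalMuftuoglu/beit-base-patch16-224-hasta-85-fold1",
)
predictions = classifier("image.jpg")  # list of {"label": ..., "score": ...} dicts
print(predictions)
```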

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
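
As a rough guide, the settings above map onto the Trainer API as follows. This is a hedged sketch, not the original training script: dataset loading, model instantiation, and metric computation are omitted. Note that the total train batch size of 128 is the per-device batch size of 32 multiplied by the 4 gradient-accumulation steps.

```python
# Hedged reproduction sketch of the hyperparameters above; the actual
# training script for this model is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-hasta-85-fold1",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    num_train_epochs=100,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the
    # default optimizer configuration.
)
```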

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 1    | 1.4041          | 0.2727   |
| No log        | 2.0   | 2    | 1.1167          | 0.5455   |
| No log        | 3.0   | 3    | 0.7944          | 0.6364   |
| No log        | 4.0   | 4    | 0.9074          | 0.7273   |
| No log        | 5.0   | 5    | 1.3600          | 0.7273   |
| No log        | 6.0   | 6    | 1.6643          | 0.7273   |
| No log        | 7.0   | 7    | 1.6410          | 0.7273   |
| No log        | 8.0   | 8    | 1.4549          | 0.7273   |
| No log        | 9.0   | 9    | 1.4814          | 0.3636   |
| 0.4133        | 10.0  | 10   | 1.2719          | 0.6364   |
| 0.4133        | 11.0  | 11   | 1.0360          | 0.7273   |
| 0.4133        | 12.0  | 12   | 1.0691          | 0.7273   |
| 0.4133        | 13.0  | 13   | 0.9528          | 0.7273   |
| 0.4133        | 14.0  | 14   | 1.0430          | 0.7273   |
| 0.4133        | 15.0  | 15   | 1.3221          | 0.7273   |
| 0.4133        | 16.0  | 16   | 1.5236          | 0.7273   |
| 0.4133        | 17.0  | 17   | 1.5914          | 0.7273   |
| 0.4133        | 18.0  | 18   | 1.5015          | 0.7273   |
| 0.4133        | 19.0  | 19   | 1.3859          | 0.7273   |
| 0.1601        | 20.0  | 20   | 1.4514          | 0.7273   |
| 0.1601        | 21.0  | 21   | 1.6169          | 0.7273   |
| 0.1601        | 22.0  | 22   | 1.5223          | 0.7273   |
| 0.1601        | 23.0  | 23   | 1.4172          | 0.7273   |
| 0.1601        | 24.0  | 24   | 1.4753          | 0.6364   |
| 0.1601        | 25.0  | 25   | 1.5207          | 0.7273   |
| 0.1601        | 26.0  | 26   | 1.7055          | 0.7273   |
| 0.1601        | 27.0  | 27   | 1.7901          | 0.7273   |
| 0.1601        | 28.0  | 28   | 1.9109          | 0.7273   |
| 0.1601        | 29.0  | 29   | 1.8261          | 0.7273   |
| 0.1015        | 30.0  | 30   | 1.5657          | 0.7273   |
| 0.1015        | 31.0  | 31   | 1.3513          | 0.7273   |
| 0.1015        | 32.0  | 32   | 1.3374          | 0.7273   |
| 0.1015        | 33.0  | 33   | 1.4942          | 0.7273   |
| 0.1015        | 34.0  | 34   | 1.7731          | 0.7273   |
| 0.1015        | 35.0  | 35   | 1.8369          | 0.7273   |
| 0.1015        | 36.0  | 36   | 1.7328          | 0.7273   |
| 0.1015        | 37.0  | 37   | 1.5451          | 0.7273   |
| 0.1015        | 38.0  | 38   | 1.4483          | 0.7273   |
| 0.1015        | 39.0  | 39   | 1.4289          | 0.7273   |
| 0.0819        | 40.0  | 40   | 1.3922          | 0.7273   |
| 0.0819        | 41.0  | 41   | 1.4002          | 0.7273   |
| 0.0819        | 42.0  | 42   | 1.3673          | 0.7273   |
| 0.0819        | 43.0  | 43   | 1.2194          | 0.7273   |
| 0.0819        | 44.0  | 44   | 1.2879          | 0.7273   |
| 0.0819        | 45.0  | 45   | 1.4545          | 0.7273   |
| 0.0819        | 46.0  | 46   | 1.6902          | 0.7273   |
| 0.0819        | 47.0  | 47   | 1.8415          | 0.7273   |
| 0.0819        | 48.0  | 48   | 1.8891          | 0.7273   |
| 0.0819        | 49.0  | 49   | 1.8339          | 0.7273   |
| 0.0551        | 50.0  | 50   | 1.8433          | 0.7273   |
| 0.0551        | 51.0  | 51   | 1.8432          | 0.7273   |
| 0.0551        | 52.0  | 52   | 1.8002          | 0.7273   |
| 0.0551        | 53.0  | 53   | 1.7276          | 0.7273   |
| 0.0551        | 54.0  | 54   | 1.5764          | 0.7273   |
| 0.0551        | 55.0  | 55   | 1.3951          | 0.7273   |
| 0.0551        | 56.0  | 56   | 1.4243          | 0.7273   |
| 0.0551        | 57.0  | 57   | 1.5620          | 0.7273   |
| 0.0551        | 58.0  | 58   | 1.6897          | 0.7273   |
| 0.0551        | 59.0  | 59   | 1.7995          | 0.7273   |
| 0.0413        | 60.0  | 60   | 1.8739          | 0.7273   |
| 0.0413        | 61.0  | 61   | 1.8486          | 0.7273   |
| 0.0413        | 62.0  | 62   | 1.8073          | 0.7273   |
| 0.0413        | 63.0  | 63   | 1.8193          | 0.7273   |
| 0.0413        | 64.0  | 64   | 1.7604          | 0.7273   |
| 0.0413        | 65.0  | 65   | 1.6327          | 0.7273   |
| 0.0413        | 66.0  | 66   | 1.5447          | 0.7273   |
| 0.0413        | 67.0  | 67   | 1.4243          | 0.7273   |
| 0.0413        | 68.0  | 68   | 1.3810          | 0.7273   |
| 0.0413        | 69.0  | 69   | 1.3641          | 0.7273   |
| 0.038         | 70.0  | 70   | 1.4688          | 0.7273   |
| 0.038         | 71.0  | 71   | 1.5677          | 0.7273   |
| 0.038         | 72.0  | 72   | 1.7174          | 0.7273   |
| 0.038         | 73.0  | 73   | 1.7920          | 0.7273   |
| 0.038         | 74.0  | 74   | 1.9000          | 0.7273   |
| 0.038         | 75.0  | 75   | 1.9468          | 0.7273   |
| 0.038         | 76.0  | 76   | 1.9872          | 0.7273   |
| 0.038         | 77.0  | 77   | 2.0208          | 0.7273   |
| 0.038         | 78.0  | 78   | 2.0135          | 0.7273   |
| 0.038         | 79.0  | 79   | 1.9762          | 0.7273   |
| 0.0365        | 80.0  | 80   | 1.9576          | 0.7273   |
| 0.0365        | 81.0  | 81   | 1.9310          | 0.7273   |
| 0.0365        | 82.0  | 82   | 1.8495          | 0.7273   |
| 0.0365        | 83.0  | 83   | 1.7683          | 0.7273   |
| 0.0365        | 84.0  | 84   | 1.7109          | 0.7273   |
| 0.0365        | 85.0  | 85   | 1.6438          | 0.7273   |
| 0.0365        | 86.0  | 86   | 1.6154          | 0.7273   |
| 0.0365        | 87.0  | 87   | 1.5715          | 0.7273   |
| 0.0365        | 88.0  | 88   | 1.5428          | 0.7273   |
| 0.0365        | 89.0  | 89   | 1.5164          | 0.7273   |
| 0.038         | 90.0  | 90   | 1.5008          | 0.7273   |
| 0.038         | 91.0  | 91   | 1.4730          | 0.7273   |
| 0.038         | 92.0  | 92   | 1.4493          | 0.7273   |
| 0.038         | 93.0  | 93   | 1.4728          | 0.7273   |
| 0.038         | 94.0  | 94   | 1.5033          | 0.7273   |
| 0.038         | 95.0  | 95   | 1.5346          | 0.7273   |
| 0.038         | 96.0  | 96   | 1.5556          | 0.7273   |
| 0.038         | 97.0  | 97   | 1.5643          | 0.7273   |
| 0.038         | 98.0  | 98   | 1.5759          | 0.7273   |
| 0.038         | 99.0  | 99   | 1.5807          | 0.7273   |
| 0.0191        | 100.0 | 100  | 1.5806          | 0.7273   |
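
The reported evaluation result (loss 0.9074, accuracy 0.7273) matches the epoch-4 row, which is consistent with keeping the checkpoint that first reached the best validation accuracy rather than the final epoch. "No log" in the Training Loss column means no training loss had been logged yet at that evaluation step; losses appear to be logged every 10 steps.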

Framework versions

  • Transformers 4.41.0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1