beit-base-patch16-224-hasta-85-fold5

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set (these figures correspond to epoch 61 in the training table below); a minimal inference sketch follows the metrics:

  • Loss: 1.6354
  • Accuracy: 0.8182
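
As a quick usage reference, here is a minimal inference sketch with the Transformers image-classification pipeline. The repo id is an assumption based on the model name (the hosting namespace is not stated in this card), and the label set comes from the undocumented imagefolder dataset:

```python
from transformers import pipeline
from PIL import Image

# Hypothetical repo id; replace "your-namespace" with the account hosting this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-namespace/beit-base-patch16-224-hasta-85-fold5",
)

image = Image.open("example.jpg").convert("RGB")
for prediction in classifier(image):
    print(prediction["label"], prediction["score"])
```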

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
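
While the dataset itself is undocumented here, the card's summary names the Hugging Face imagefolder builder, which infers class labels from directory names. A hedged loading sketch (the data_dir path is a placeholder, not taken from the card):

```python
from datasets import load_dataset

# Placeholder path; the actual data directory is not documented in this card.
dataset = load_dataset("imagefolder", data_dir="path/to/hasta-data")
print(dataset["train"].features["label"].names)  # class names inferred from folder names
```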

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
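
These values map directly onto transformers.TrainingArguments. A minimal sketch, in which the output_dir and the use of the standard Trainer are assumptions not stated in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-hasta-85-fold5",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 32 * 4 = 128 on one device
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```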

Training results

Training loss was logged every 10 steps, so the evaluations at steps 1–9 show "No log".

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.0103 | 0.5455 |
| No log | 2.0 | 2 | 0.8046 | 0.6364 |
| No log | 3.0 | 3 | 0.7378 | 0.7273 |
| No log | 4.0 | 4 | 1.0676 | 0.7273 |
| No log | 5.0 | 5 | 1.3956 | 0.7273 |
| No log | 6.0 | 6 | 1.5799 | 0.7273 |
| No log | 7.0 | 7 | 1.6061 | 0.7273 |
| No log | 8.0 | 8 | 1.4351 | 0.7273 |
| No log | 9.0 | 9 | 1.3296 | 0.7273 |
| 0.39 | 10.0 | 10 | 1.2816 | 0.7273 |
| 0.39 | 11.0 | 11 | 1.2655 | 0.7273 |
| 0.39 | 12.0 | 12 | 1.2040 | 0.7273 |
| 0.39 | 13.0 | 13 | 1.0664 | 0.7273 |
| 0.39 | 14.0 | 14 | 1.0846 | 0.7273 |
| 0.39 | 15.0 | 15 | 1.2145 | 0.7273 |
| 0.39 | 16.0 | 16 | 1.4682 | 0.7273 |
| 0.39 | 17.0 | 17 | 1.4473 | 0.7273 |
| 0.39 | 18.0 | 18 | 1.2699 | 0.7273 |
| 0.39 | 19.0 | 19 | 1.2467 | 0.7273 |
| 0.1832 | 20.0 | 20 | 1.2648 | 0.7273 |
| 0.1832 | 21.0 | 21 | 1.2914 | 0.7273 |
| 0.1832 | 22.0 | 22 | 1.3444 | 0.7273 |
| 0.1832 | 23.0 | 23 | 1.5325 | 0.7273 |
| 0.1832 | 24.0 | 24 | 1.6140 | 0.7273 |
| 0.1832 | 25.0 | 25 | 1.6262 | 0.7273 |
| 0.1832 | 26.0 | 26 | 1.6753 | 0.7273 |
| 0.1832 | 27.0 | 27 | 1.6531 | 0.7273 |
| 0.1832 | 28.0 | 28 | 1.7096 | 0.7273 |
| 0.1832 | 29.0 | 29 | 1.6662 | 0.7273 |
| 0.1194 | 30.0 | 30 | 1.5769 | 0.7273 |
| 0.1194 | 31.0 | 31 | 1.4447 | 0.7273 |
| 0.1194 | 32.0 | 32 | 1.2644 | 0.7273 |
| 0.1194 | 33.0 | 33 | 1.2033 | 0.7273 |
| 0.1194 | 34.0 | 34 | 1.2703 | 0.7273 |
| 0.1194 | 35.0 | 35 | 1.4492 | 0.7273 |
| 0.1194 | 36.0 | 36 | 1.5890 | 0.7273 |
| 0.1194 | 37.0 | 37 | 1.5691 | 0.7273 |
| 0.1194 | 38.0 | 38 | 1.4127 | 0.7273 |
| 0.1194 | 39.0 | 39 | 1.3179 | 0.7273 |
| 0.0783 | 40.0 | 40 | 1.2986 | 0.7273 |
| 0.0783 | 41.0 | 41 | 1.3181 | 0.7273 |
| 0.0783 | 42.0 | 42 | 1.4253 | 0.7273 |
| 0.0783 | 43.0 | 43 | 1.5179 | 0.7273 |
| 0.0783 | 44.0 | 44 | 1.5685 | 0.7273 |
| 0.0783 | 45.0 | 45 | 1.5696 | 0.7273 |
| 0.0783 | 46.0 | 46 | 1.7571 | 0.7273 |
| 0.0783 | 47.0 | 47 | 1.9122 | 0.7273 |
| 0.0783 | 48.0 | 48 | 2.1062 | 0.7273 |
| 0.0783 | 49.0 | 49 | 2.1661 | 0.7273 |
| 0.056 | 50.0 | 50 | 2.1833 | 0.7273 |
| 0.056 | 51.0 | 51 | 2.2402 | 0.7273 |
| 0.056 | 52.0 | 52 | 2.3007 | 0.7273 |
| 0.056 | 53.0 | 53 | 2.3692 | 0.7273 |
| 0.056 | 54.0 | 54 | 2.3821 | 0.7273 |
| 0.056 | 55.0 | 55 | 2.2716 | 0.7273 |
| 0.056 | 56.0 | 56 | 2.0482 | 0.7273 |
| 0.056 | 57.0 | 57 | 1.8783 | 0.7273 |
| 0.056 | 58.0 | 58 | 1.7967 | 0.7273 |
| 0.056 | 59.0 | 59 | 1.7036 | 0.7273 |
| 0.052 | 60.0 | 60 | 1.6389 | 0.7273 |
| 0.052 | 61.0 | 61 | 1.6354 | 0.8182 |
| 0.052 | 62.0 | 62 | 1.6852 | 0.8182 |
| 0.052 | 63.0 | 63 | 1.8189 | 0.7273 |
| 0.052 | 64.0 | 64 | 1.9683 | 0.7273 |
| 0.052 | 65.0 | 65 | 2.0166 | 0.7273 |
| 0.052 | 66.0 | 66 | 2.0855 | 0.7273 |
| 0.052 | 67.0 | 67 | 2.1359 | 0.7273 |
| 0.052 | 68.0 | 68 | 2.2465 | 0.7273 |
| 0.052 | 69.0 | 69 | 2.2680 | 0.7273 |
| 0.0276 | 70.0 | 70 | 2.2728 | 0.7273 |
| 0.0276 | 71.0 | 71 | 2.2820 | 0.7273 |
| 0.0276 | 72.0 | 72 | 2.2427 | 0.7273 |
| 0.0276 | 73.0 | 73 | 2.2066 | 0.7273 |
| 0.0276 | 74.0 | 74 | 2.2434 | 0.7273 |
| 0.0276 | 75.0 | 75 | 2.3206 | 0.7273 |
| 0.0276 | 76.0 | 76 | 2.4408 | 0.7273 |
| 0.0276 | 77.0 | 77 | 2.4810 | 0.7273 |
| 0.0276 | 78.0 | 78 | 2.5091 | 0.7273 |
| 0.0276 | 79.0 | 79 | 2.4862 | 0.7273 |
| 0.0411 | 80.0 | 80 | 2.4502 | 0.7273 |
| 0.0411 | 81.0 | 81 | 2.4204 | 0.7273 |
| 0.0411 | 82.0 | 82 | 2.3838 | 0.7273 |
| 0.0411 | 83.0 | 83 | 2.3431 | 0.7273 |
| 0.0411 | 84.0 | 84 | 2.2927 | 0.7273 |
| 0.0411 | 85.0 | 85 | 2.2181 | 0.7273 |
| 0.0411 | 86.0 | 86 | 2.1633 | 0.7273 |
| 0.0411 | 87.0 | 87 | 2.0966 | 0.7273 |
| 0.0411 | 88.0 | 88 | 2.0536 | 0.7273 |
| 0.0411 | 89.0 | 89 | 2.0427 | 0.7273 |
| 0.0317 | 90.0 | 90 | 2.0524 | 0.7273 |
| 0.0317 | 91.0 | 91 | 2.0489 | 0.7273 |
| 0.0317 | 92.0 | 92 | 2.0648 | 0.7273 |
| 0.0317 | 93.0 | 93 | 2.0946 | 0.7273 |
| 0.0317 | 94.0 | 94 | 2.1155 | 0.7273 |
| 0.0317 | 95.0 | 95 | 2.1469 | 0.7273 |
| 0.0317 | 96.0 | 96 | 2.1768 | 0.7273 |
| 0.0317 | 97.0 | 97 | 2.2026 | 0.7273 |
| 0.0317 | 98.0 | 98 | 2.2205 | 0.7273 |
| 0.0317 | 99.0 | 99 | 2.2304 | 0.7273 |
| 0.0394 | 100.0 | 100 | 2.2350 | 0.7273 |

Framework versions

  • Transformers 4.41.0
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
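
To check that a local environment matches these pins before loading the model, a small convenience sketch (not part of the original card):

```python
import transformers, torch, datasets, tokenizers

# Expected per this card: 4.41.0 / 2.3.0+cu121 / 2.19.1 / 0.19.1
for name, module in [("Transformers", transformers), ("PyTorch", torch),
                     ("Datasets", datasets), ("Tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```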