
vit-base-patch16-224-RU3-40

This model is a fine-tuned version of google/vit-base-patch16-224 on an image-classification dataset loaded with the 🤗 Datasets imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 0.5667
  • Accuracy: 0.8333
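
The checkpoint can be loaded like any other ViT image-classification model. The sketch below is only an illustrative example: the repo id (your-username/vit-base-patch16-224-RU3-40) and the image path are placeholders, not values taken from this card.

```python
# Minimal inference sketch; substitute the placeholder repo id with the actual
# Hub id or a local path containing this checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "your-username/vit-base-patch16-224-RU3-40"  # placeholder repo id

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```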

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 40
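
For reference, the hyperparameters above correspond roughly to the TrainingArguments sketch below. This is an assumption-based reconstruction, not the original training script: the output directory, evaluation/save strategies, and best-model selection are guesses; dataset loading, the image processor, and the Trainer setup are omitted.

```python
# Sketch of TrainingArguments matching the listed hyperparameters (Transformers 4.36.x).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-RU3-40",  # assumed output directory
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 total train batch size
    num_train_epochs=40,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    evaluation_strategy="epoch",     # assumption: metrics are reported per epoch
    save_strategy="epoch",           # assumption
    load_best_model_at_end=True,     # assumption: the card reports the best checkpoint
    metric_for_best_model="accuracy",
)
```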

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3821        | 0.99  | 19   | 1.3119          | 0.4833   |
| 1.2698        | 1.97  | 38   | 1.0852          | 0.6167   |
| 0.9819        | 2.96  | 57   | 0.8757          | 0.7      |
| 0.6671        | 4.0   | 77   | 0.7689          | 0.7333   |
| 0.4248        | 4.99  | 96   | 0.7294          | 0.7167   |
| 0.3005        | 5.97  | 115  | 0.6518          | 0.7833   |
| 0.2035        | 6.96  | 134  | 0.5667          | 0.8333   |
| 0.2195        | 8.0   | 154  | 0.6646          | 0.8333   |
| 0.1654        | 8.99  | 173  | 0.6294          | 0.8167   |
| 0.1581        | 9.97  | 192  | 0.7211          | 0.7833   |
| 0.1338        | 10.96 | 211  | 0.8129          | 0.7833   |
| 0.1188        | 12.0  | 231  | 0.7925          | 0.8167   |
| 0.1179        | 12.99 | 250  | 0.9588          | 0.7667   |
| 0.1017        | 13.97 | 269  | 1.0875          | 0.7167   |
| 0.0845        | 14.96 | 288  | 0.9355          | 0.7      |
| 0.1109        | 16.0  | 308  | 0.9387          | 0.8167   |
| 0.0711        | 16.99 | 327  | 1.1214          | 0.7333   |
| 0.0884        | 17.97 | 346  | 0.9688          | 0.7667   |
| 0.0668        | 18.96 | 365  | 1.0306          | 0.8      |
| 0.0716        | 20.0  | 385  | 1.2653          | 0.7167   |
| 0.0643        | 20.99 | 404  | 0.9894          | 0.7833   |
| 0.0517        | 21.97 | 423  | 1.0439          | 0.7667   |
| 0.0597        | 22.96 | 442  | 1.1470          | 0.7667   |
| 0.0533        | 24.0  | 462  | 1.0848          | 0.7833   |
| 0.0529        | 24.99 | 481  | 1.1481          | 0.75     |
| 0.0524        | 25.97 | 500  | 1.1322          | 0.7333   |
| 0.0525        | 26.96 | 519  | 1.1868          | 0.7333   |
| 0.0517        | 28.0  | 539  | 1.1561          | 0.7167   |
| 0.0309        | 28.99 | 558  | 1.0562          | 0.7833   |
| 0.0403        | 29.97 | 577  | 1.2901          | 0.7333   |
| 0.0392        | 30.96 | 596  | 1.1295          | 0.7667   |
| 0.0404        | 32.0  | 616  | 1.1198          | 0.7667   |
| 0.0381        | 32.99 | 635  | 1.2986          | 0.7167   |
| 0.0262        | 33.97 | 654  | 1.1655          | 0.75     |
| 0.0354        | 34.96 | 673  | 1.1223          | 0.7833   |
| 0.0224        | 36.0  | 693  | 1.1679          | 0.7833   |
| 0.0244        | 36.99 | 712  | 1.0999          | 0.8167   |
| 0.0368        | 37.97 | 731  | 1.1213          | 0.7833   |
| 0.0199        | 38.96 | 750  | 1.1003          | 0.8      |
| 0.028         | 39.48 | 760  | 1.0989          | 0.8      |
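
The accuracy column above is the kind of metric typically supplied to the Trainer through a compute_metrics hook. The card does not state how accuracy was computed, so the snippet below is only a sketch of the common setup based on the evaluate library.

```python
# Hedged sketch of an accuracy metric for Trainer.evaluate(); assumes the
# standard evaluate-based setup, which this card does not explicitly confirm.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```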

Framework versions

  • Transformers 4.36.2
  • PyTorch 2.1.2+cu118
  • Datasets 2.16.1
  • Tokenizers 0.15.0