
vit-base-patch16-224-U8-40c

This model is a fine-tuned version of google/vit-base-patch16-224 on an imagefolder-formatted image classification dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5609
  • Accuracy: 0.8235
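
The fine-tuned checkpoint can be loaded with the standard transformers image-classification classes. The snippet below is a minimal sketch rather than code from this repository; it assumes the Hub model id Augusto777/vit-base-patch16-224-U8-40c and a placeholder image path `example.jpg`.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed Hub id of this checkpoint.
model_id = "Augusto777/vit-base-patch16-224-U8-40c"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

# "example.jpg" is a placeholder for any RGB input image.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```

The `pipeline("image-classification", model=model_id)` helper gives the same prediction with less code.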

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch that mirrors them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 40
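
Below is a sketch of how these settings map onto transformers TrainingArguments. It is not the exact training script: output_dir, the evaluation/save strategies, and the best-checkpoint options are assumptions, while the numeric values mirror the list above (the Adam betas and epsilon listed are the Trainer defaults).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-U8-40c",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,             # 32 * 4 = total train batch size 128
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=40,
    # The following are assumptions, not listed on the card: per-epoch evaluation
    # matches the results table, and the best checkpoint is kept at the end.
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```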

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3495 | 1.0 | 20 | 1.3142 | 0.4706 |
| 1.1689 | 2.0 | 40 | 1.1153 | 0.5686 |
| 0.8673 | 3.0 | 60 | 0.8498 | 0.6667 |
| 0.5847 | 4.0 | 80 | 0.7220 | 0.7843 |
| 0.4029 | 5.0 | 100 | 0.8654 | 0.6275 |
| 0.2562 | 6.0 | 120 | 0.5609 | 0.8235 |
| 0.2352 | 7.0 | 140 | 0.7272 | 0.7843 |
| 0.2131 | 8.0 | 160 | 0.7581 | 0.7255 |
| 0.1616 | 9.0 | 180 | 0.5437 | 0.8235 |
| 0.1266 | 10.0 | 200 | 0.6345 | 0.8039 |
| 0.1557 | 11.0 | 220 | 0.8280 | 0.7647 |
| 0.0871 | 12.0 | 240 | 0.9016 | 0.7059 |
| 0.0879 | 13.0 | 260 | 0.8099 | 0.7647 |
| 0.0844 | 14.0 | 280 | 0.8791 | 0.7255 |
| 0.0865 | 15.0 | 300 | 0.9713 | 0.7843 |
| 0.1005 | 16.0 | 320 | 0.9966 | 0.7843 |
| 0.0718 | 17.0 | 340 | 1.0468 | 0.7647 |
| 0.0591 | 18.0 | 360 | 0.9471 | 0.7843 |
| 0.0641 | 19.0 | 380 | 0.9905 | 0.7451 |
| 0.0542 | 20.0 | 400 | 1.0300 | 0.7451 |
| 0.0813 | 21.0 | 420 | 1.0330 | 0.7647 |
| 0.059 | 22.0 | 440 | 0.9995 | 0.7647 |
| 0.0679 | 23.0 | 460 | 0.9327 | 0.7451 |
| 0.0611 | 24.0 | 480 | 1.0073 | 0.7647 |
| 0.0694 | 25.0 | 500 | 0.9348 | 0.7647 |
| 0.0454 | 26.0 | 520 | 0.8551 | 0.7843 |
| 0.0536 | 27.0 | 540 | 0.9782 | 0.7647 |
| 0.0429 | 28.0 | 560 | 0.9203 | 0.7843 |
| 0.0386 | 29.0 | 580 | 0.8732 | 0.8039 |
| 0.0433 | 30.0 | 600 | 0.9376 | 0.7647 |
| 0.0353 | 31.0 | 620 | 0.8532 | 0.7843 |
| 0.0332 | 32.0 | 640 | 0.9123 | 0.8039 |
| 0.0405 | 33.0 | 660 | 0.9603 | 0.8039 |
| 0.0423 | 34.0 | 680 | 0.9424 | 0.8039 |
| 0.0383 | 35.0 | 700 | 0.9687 | 0.8235 |
| 0.0245 | 36.0 | 720 | 0.9509 | 0.8235 |
| 0.0309 | 37.0 | 740 | 0.8950 | 0.8235 |
| 0.026 | 38.0 | 760 | 0.9082 | 0.8039 |
| 0.0192 | 39.0 | 780 | 0.8859 | 0.8235 |
| 0.0322 | 40.0 | 800 | 0.8968 | 0.8235 |
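
The Validation Loss and Accuracy columns above are standard Trainer evaluation metrics computed once per epoch. A compute_metrics hook of roughly the following shape (a sketch using the evaluate library, not code taken from this repository) produces the accuracy values:

```python
import numpy as np
import evaluate

# Hugging Face `evaluate` accuracy metric.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```

The headline metrics at the top of the card (loss 0.5609, accuracy 0.8235) match the epoch-6 row of this table.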

Framework versions

  • Transformers 4.36.2
  • PyTorch 2.1.2+cu118
  • Datasets 2.16.1
  • Tokenizers 0.15.0
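
For reproducibility, the installed versions can be checked against the pinned ones before training; the snippet below is a small sketch that uses only the version strings listed above.

```python
import datasets
import tokenizers
import torch
import transformers

# Version strings reported on this model card.
expected = {
    "transformers": "4.36.2",
    "torch": "2.1.2+cu118",
    "datasets": "2.16.1",
    "tokenizers": "0.15.0",
}

installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}

for name, version in expected.items():
    status = "OK" if installed[name] == version else f"differs (found {installed[name]})"
    print(f"{name}=={version}: {status}")
```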
