V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0411
  • Accuracy: 0.9927

Model description

More information needed

Intended uses & limitations

More information needed
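
Absent documented usage, the following is only a minimal inference sketch. It assumes the checkpoint is published under a Hub repo id like the title above (substitute the actual id) and that the fine-tuned label names are stored in the model config; the image path is a placeholder.

```python
# Minimal inference sketch (assumptions: the repo id below is a placeholder for
# the actual Hub id of this checkpoint; labels live in model.config.id2label).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k"  # placeholder repo id

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")        # placeholder image path
inputs = processor(images=image, return_tensors="pt")   # resizes/normalizes to 224x224

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```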

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (reproduced in the sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
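
A hedged sketch of how these settings map onto transformers TrainingArguments (4.30-era API). The output directory is a placeholder, the dataset and Trainer wiring are omitted because the training data is not documented, and evaluation once per epoch is inferred from the 15-step epochs in the results table below.

```python
# Hedged sketch: maps the listed hyperparameters onto TrainingArguments
# (transformers 4.30 API). output_dir is a placeholder; model, dataset, and
# Trainer setup are omitted because the training data is not documented.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-finetune",              # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,          # effective train batch size: 16 * 4 = 64
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",            # assumption: one eval per epoch, as in the results table
)
```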

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6667        | 1.0   | 15   | 0.5893          | 0.9121   |
| 0.4394        | 2.0   | 30   | 0.3294          | 0.9487   |
| 0.2685        | 3.0   | 45   | 0.1365          | 0.9707   |
| 0.0936        | 4.0   | 60   | 0.0752          | 0.9853   |
| 0.0517        | 5.0   | 75   | 0.0553          | 0.9890   |
| 0.0436        | 6.0   | 90   | 0.0556          | 0.9890   |
| 0.018         | 7.0   | 105  | 0.0557          | 0.9890   |
| 0.0189        | 8.0   | 120  | 0.0457          | 0.9890   |
| 0.013         | 9.0   | 135  | 0.0343          | 0.9927   |
| 0.0115        | 10.0  | 150  | 0.0270          | 0.9963   |
| 0.0101        | 11.0  | 165  | 0.0355          | 0.9927   |
| 0.0085        | 12.0  | 180  | 0.0356          | 0.9927   |
| 0.0079        | 13.0  | 195  | 0.0259          | 0.9963   |
| 0.0069        | 14.0  | 210  | 0.0345          | 0.9927   |
| 0.0066        | 15.0  | 225  | 0.0360          | 0.9927   |
| 0.0061        | 16.0  | 240  | 0.0359          | 0.9927   |
| 0.0059        | 17.0  | 255  | 0.0360          | 0.9927   |
| 0.0055        | 18.0  | 270  | 0.0368          | 0.9927   |
| 0.0054        | 19.0  | 285  | 0.0375          | 0.9927   |
| 0.0051        | 20.0  | 300  | 0.0375          | 0.9927   |
| 0.0049        | 21.0  | 315  | 0.0380          | 0.9927   |
| 0.0047        | 22.0  | 330  | 0.0380          | 0.9927   |
| 0.0046        | 23.0  | 345  | 0.0383          | 0.9927   |
| 0.0044        | 24.0  | 360  | 0.0386          | 0.9927   |
| 0.0043        | 25.0  | 375  | 0.0388          | 0.9927   |
| 0.0041        | 26.0  | 390  | 0.0388          | 0.9927   |
| 0.0041        | 27.0  | 405  | 0.0391          | 0.9927   |
| 0.0039        | 28.0  | 420  | 0.0392          | 0.9927   |
| 0.0038        | 29.0  | 435  | 0.0396          | 0.9927   |
| 0.0037        | 30.0  | 450  | 0.0397          | 0.9927   |
| 0.0037        | 31.0  | 465  | 0.0397          | 0.9927   |
| 0.0036        | 32.0  | 480  | 0.0399          | 0.9927   |
| 0.0035        | 33.0  | 495  | 0.0401          | 0.9927   |
| 0.0034        | 34.0  | 510  | 0.0402          | 0.9927   |
| 0.0034        | 35.0  | 525  | 0.0403          | 0.9927   |
| 0.0033        | 36.0  | 540  | 0.0403          | 0.9927   |
| 0.0033        | 37.0  | 555  | 0.0405          | 0.9927   |
| 0.0032        | 38.0  | 570  | 0.0406          | 0.9927   |
| 0.0032        | 39.0  | 585  | 0.0406          | 0.9927   |
| 0.0031        | 40.0  | 600  | 0.0407          | 0.9927   |
| 0.0031        | 41.0  | 615  | 0.0408          | 0.9927   |
| 0.0031        | 42.0  | 630  | 0.0408          | 0.9927   |
| 0.003         | 43.0  | 645  | 0.0409          | 0.9927   |
| 0.003         | 44.0  | 660  | 0.0410          | 0.9927   |
| 0.003         | 45.0  | 675  | 0.0410          | 0.9927   |
| 0.003         | 46.0  | 690  | 0.0410          | 0.9927   |
| 0.003         | 47.0  | 705  | 0.0410          | 0.9927   |
| 0.0029        | 48.0  | 720  | 0.0411          | 0.9927   |
| 0.0029        | 49.0  | 735  | 0.0411          | 0.9927   |
| 0.0029        | 50.0  | 750  | 0.0411          | 0.9927   |

Framework versions

  • Transformers 4.30.0
  • Pytorch 2.1.1
  • Datasets 2.15.0
  • Tokenizers 0.13.3
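
A trivial sketch for checking that a local environment matches these reported versions (exact pins may not be strictly required):

```python
# Print installed versions to compare against the pins listed above.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # card reports 4.30.0
print("PyTorch:", torch.__version__)              # card reports 2.1.1
print("Datasets:", datasets.__version__)          # card reports 2.15.0
print("Tokenizers:", tokenizers.__version__)      # card reports 0.13.3
```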