vit-base-patch16-224-in21k-v2025-2-20

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unspecified dataset (the dataset name was not recorded in the training run). It achieves the following results on the evaluation set (a minimal inference sketch follows the list):

  • Loss: 0.2318
  • Accuracy: 0.9143
  • F1: 0.8000
  • Precision: 0.8109
  • Recall: 0.7894
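
For quick use, here is a minimal inference sketch with the Transformers image-classification API. The repository id is taken from this page; the image path and the label names are placeholders, assuming they were saved in the model config during fine-tuning.

```python
# Minimal inference sketch; the repository id is the one shown on this page.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "liamxostrander/vit-base-patch16-224-in21k-v2025-2-20"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder: replace with your own image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```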

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reconstructing them follows the list):

  • learning_rate: 0.00025
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
  • mixed_precision_training: Native AMP
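
As a sketch, these settings map onto Transformers TrainingArguments roughly as follows. The output directory is an illustrative assumption, not a value from the original run.

```python
# Rough reconstruction of the listed hyperparameters as TrainingArguments.
# output_dir is an illustrative assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-v2025-2-20",
    learning_rate=2.5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,                  # "Native AMP" mixed precision
)
```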

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6069        | 0.6410  | 100  | 0.5681          | 0.7146   | 0.5533 | 0.4191    | 0.8137 |
| 0.4385        | 1.2821  | 200  | 0.4052          | 0.8384   | 0.6334 | 0.6241    | 0.6430 |
| 0.3415        | 1.9231  | 300  | 0.2995          | 0.8891   | 0.7233 | 0.7893    | 0.6674 |
| 0.3761        | 2.5641  | 400  | 0.2871          | 0.8809   | 0.6934 | 0.7863    | 0.6201 |
| 0.3066        | 3.2051  | 500  | 0.2877          | 0.8841   | 0.7072 | 0.7835    | 0.6445 |
| 0.3236        | 3.8462  | 600  | 0.2608          | 0.8937   | 0.7398 | 0.7901    | 0.6955 |
| 0.3360        | 4.4872  | 700  | 0.2619          | 0.8926   | 0.7301 | 0.8037    | 0.6689 |
| 0.3003        | 5.1282  | 800  | 0.2736          | 0.8865   | 0.7160 | 0.7843    | 0.6585 |
| 0.2756        | 5.7692  | 900  | 0.2584          | 0.8945   | 0.7443 | 0.7862    | 0.7066 |
| 0.2566        | 6.4103  | 1000 | 0.2574          | 0.8928   | 0.7319 | 0.8007    | 0.6741 |
| 0.2609        | 7.0513  | 1100 | 0.2506          | 0.8966   | 0.7500 | 0.7899    | 0.7140 |
| 0.2721        | 7.6923  | 1200 | 0.2282          | 0.9024   | 0.7599 | 0.8159    | 0.7110 |
| 0.2317        | 8.3333  | 1300 | 0.2425          | 0.9029   | 0.7613 | 0.8164    | 0.7132 |
| 0.2953        | 8.9744  | 1400 | 0.2284          | 0.9077   | 0.7758 | 0.8210    | 0.7354 |
| 0.2485        | 9.6154  | 1500 | 0.2320          | 0.9042   | 0.7669 | 0.8129    | 0.7258 |
| 0.2387        | 10.2564 | 1600 | 0.2352          | 0.9034   | 0.7672 | 0.8045    | 0.7332 |
| 0.2288        | 10.8974 | 1700 | 0.2178          | 0.9087   | 0.7816 | 0.8131    | 0.7524 |
| 0.1979        | 11.5385 | 1800 | 0.2283          | 0.9100   | 0.7881 | 0.8060    | 0.7709 |
| 0.1940        | 12.1795 | 1900 | 0.2298          | 0.9024   | 0.7704 | 0.7876    | 0.7539 |
| 0.2011        | 12.8205 | 2000 | 0.2204          | 0.9104   | 0.7882 | 0.8103    | 0.7672 |
| 0.2033        | 13.4615 | 2100 | 0.2149          | 0.9133   | 0.7951 | 0.8168    | 0.7746 |
| 0.1795        | 14.1026 | 2200 | 0.2278          | 0.9069   | 0.7815 | 0.7971    | 0.7664 |
| 0.2153        | 14.7436 | 2300 | 0.2177          | 0.9100   | 0.7853 | 0.8143    | 0.7583 |
| 0.1814        | 15.3846 | 2400 | 0.2169          | 0.9144   | 0.7991 | 0.8154    | 0.7834 |
| 0.1605        | 16.0256 | 2500 | 0.2127          | 0.9141   | 0.8000 | 0.8094    | 0.7908 |
| 0.1720        | 16.6667 | 2600 | 0.2147          | 0.9116   | 0.7942 | 0.8029    | 0.7857 |
| 0.1622        | 17.3077 | 2700 | 0.2259          | 0.9071   | 0.7837 | 0.7923    | 0.7753 |
| 0.1676        | 17.9487 | 2800 | 0.2165          | 0.9117   | 0.7915 | 0.8125    | 0.7716 |
| 0.1581        | 18.5897 | 2900 | 0.2204          | 0.9109   | 0.7919 | 0.8037    | 0.7805 |
| 0.1725        | 19.2308 | 3000 | 0.2196          | 0.9108   | 0.7919 | 0.8021    | 0.7820 |
| 0.1306        | 19.8718 | 3100 | 0.2161          | 0.9125   | 0.7936 | 0.8137    | 0.7746 |
| 0.1304        | 20.5128 | 3200 | 0.2252          | 0.9061   | 0.7813 | 0.7905    | 0.7724 |
| 0.1248        | 21.1538 | 3300 | 0.2302          | 0.9112   | 0.7928 | 0.8040    | 0.7820 |
| 0.1214        | 21.7949 | 3400 | 0.2315          | 0.9085   | 0.7856 | 0.8000    | 0.7716 |
| 0.0979        | 22.4359 | 3500 | 0.2298          | 0.9109   | 0.7911 | 0.8060    | 0.7768 |
| 0.1157        | 23.0769 | 3600 | 0.2284          | 0.9128   | 0.7964 | 0.8082    | 0.7849 |
| 0.1279        | 23.7179 | 3700 | 0.2327          | 0.9125   | 0.7933 | 0.8146    | 0.7731 |
| 0.1032        | 24.3590 | 3800 | 0.2316          | 0.9120   | 0.7932 | 0.8103    | 0.7768 |
| 0.0958        | 25.0    | 3900 | 0.2244          | 0.9156   | 0.8023 | 0.8164    | 0.7886 |
| 0.1156        | 25.6410 | 4000 | 0.2356          | 0.9127   | 0.7938 | 0.8148    | 0.7738 |
| 0.1060        | 26.2821 | 4100 | 0.2334          | 0.9100   | 0.7912 | 0.7969    | 0.7857 |
| 0.0966        | 26.9231 | 4200 | 0.2334          | 0.9132   | 0.7975 | 0.8080    | 0.7871 |
| 0.0746        | 27.5641 | 4300 | 0.2340          | 0.9117   | 0.7939 | 0.8053    | 0.7827 |
| 0.0905        | 28.2051 | 4400 | 0.2323          | 0.9130   | 0.7973 | 0.8070    | 0.7879 |
| 0.0899        | 28.8462 | 4500 | 0.2340          | 0.9138   | 0.7987 | 0.8105    | 0.7871 |
| 0.0804        | 29.4872 | 4600 | 0.2318          | 0.9143   | 0.8000 | 0.8109    | 0.7894 |
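
The accuracy/F1/precision/recall columns above are consistent with a two-class setup evaluated per step. As a hedged sketch, a compute_metrics callback along these lines would produce those four numbers from Trainer predictions; the binary averaging mode is an assumption.

```python
# Hedged sketch of a compute_metrics callback yielding the four metric
# columns in the table above; average="binary" is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```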

Framework versions

  • Transformers 4.48.3
  • Pytorch 2.5.1+cu124
  • Datasets 3.3.1
  • Tokenizers 0.21.0

Model size

  • 86.2M parameters (F32, Safetensors)