vit-base-patch32-384-finetuned-galaxy10-decals

This model is a fine-tuned version of google/vit-base-patch32-384 on the matthieulel/galaxy10_decals dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5542
  • Accuracy: 0.8326
  • Precision: 0.8324
  • Recall: 0.8326
  • F1: 0.8298
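
Recall equals accuracy here, which is what happens when the scores are support-weighted averages over the ten Galaxy10 classes (an assumption — the card does not state the averaging mode). A small pure-Python illustration of weighted precision/recall/F1 on toy labels:

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Support-weighted precision, recall, and F1 (assumed averaging mode)."""
    support = Counter(y_true)
    n = len(y_true)
    p_sum = r_sum = f_sum = 0.0
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)
        prec = tp / pred_c if pred_c else 0.0
        rec = tp / support[c]
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        w = support[c] / n
        p_sum += w * prec
        r_sum += w * rec
        f_sum += w * f1
    return p_sum, r_sum, f_sum

# Toy example: 4 of 5 predictions correct, so accuracy = 0.8.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 2, 2, 1]
p, r, f = weighted_prf(y_true, y_pred)
# Weighted recall is sum(tp_c)/n, i.e. exactly the accuracy — which is
# why Recall (0.8326) matches Accuracy (0.8326) in the table above.
```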

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 512
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
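
With gradient accumulation, the effective batch size is 128 × 4 = 512, which is where total_train_batch_size comes from, and the linear scheduler warms up over the first 10% of optimizer steps. A minimal sketch of the implied schedule (the 930-step total is read off the results table below; this is an illustration, not a training log):

```python
# Effective batch size implied by the hyperparameters above.
train_batch_size = 128
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 512

base_lr = 1e-4     # learning_rate
total_steps = 930  # final step in the training-results table (30 epochs)
warmup_steps = int(0.1 * total_steps)  # lr_scheduler_warmup_ratio = 0.1

def lr_at(step):
    """Linear warmup to base_lr, then linear decay to 0 (lr_scheduler_type=linear)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```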

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|---------------|-------|------|-----------------|----------|-----------|--------|--------|
| 1.68          | 0.99  | 31   | 1.3835          | 0.5259   | 0.5014    | 0.5259 | 0.4922 |
| 0.9395        | 1.98  | 62   | 0.8286          | 0.7120   | 0.7053    | 0.7120 | 0.6986 |
| 0.7814        | 2.98  | 93   | 0.7194          | 0.7604   | 0.7515    | 0.7604 | 0.7456 |
| 0.7227        | 4.0   | 125  | 0.6271          | 0.7818   | 0.7913    | 0.7818 | 0.7743 |
| 0.6309        | 4.99  | 156  | 0.5944          | 0.7959   | 0.7959    | 0.7959 | 0.7952 |
| 0.5754        | 5.98  | 187  | 0.5448          | 0.8112   | 0.8165    | 0.8112 | 0.8087 |
| 0.5519        | 6.98  | 218  | 0.5456          | 0.8010   | 0.7990    | 0.8010 | 0.7991 |
| 0.5077        | 8.0   | 250  | 0.5458          | 0.8191   | 0.8229    | 0.8191 | 0.8160 |
| 0.5086        | 8.99  | 281  | 0.5326          | 0.8174   | 0.8181    | 0.8174 | 0.8146 |
| 0.455         | 9.98  | 312  | 0.5379          | 0.8174   | 0.8179    | 0.8174 | 0.8143 |
| 0.4532        | 10.98 | 343  | 0.5239          | 0.8247   | 0.8238    | 0.8247 | 0.8225 |
| 0.4311        | 12.0  | 375  | 0.5290          | 0.8202   | 0.8197    | 0.8202 | 0.8169 |
| 0.4399        | 12.99 | 406  | 0.5355          | 0.8236   | 0.8269    | 0.8236 | 0.8213 |
| 0.4026        | 13.98 | 437  | 0.5132          | 0.8303   | 0.8288    | 0.8303 | 0.8268 |
| 0.3964        | 14.98 | 468  | 0.5101          | 0.8269   | 0.8290    | 0.8269 | 0.8247 |
| 0.3649        | 16.0  | 500  | 0.5296          | 0.8253   | 0.8242    | 0.8253 | 0.8222 |
| 0.3353        | 16.99 | 531  | 0.5319          | 0.8236   | 0.8212    | 0.8236 | 0.8198 |
| 0.3372        | 17.98 | 562  | 0.5203          | 0.8303   | 0.8315    | 0.8303 | 0.8300 |
| 0.3281        | 18.98 | 593  | 0.5428          | 0.8315   | 0.8319    | 0.8315 | 0.8289 |
| 0.3152        | 20.0  | 625  | 0.5453          | 0.8264   | 0.8283    | 0.8264 | 0.8262 |
| 0.3016        | 20.99 | 656  | 0.5464          | 0.8224   | 0.8252    | 0.8224 | 0.8192 |
| 0.2826        | 21.98 | 687  | 0.5473          | 0.8241   | 0.8214    | 0.8241 | 0.8213 |
| 0.2832        | 22.98 | 718  | 0.5596          | 0.8275   | 0.8281    | 0.8275 | 0.8255 |
| 0.2547        | 24.0  | 750  | 0.5768          | 0.8247   | 0.8260    | 0.8247 | 0.8243 |
| 0.2682        | 24.99 | 781  | 0.5693          | 0.8230   | 0.8244    | 0.8230 | 0.8226 |
| 0.245         | 25.98 | 812  | 0.5542          | 0.8326   | 0.8324    | 0.8326 | 0.8298 |
| 0.2575        | 26.98 | 843  | 0.5665          | 0.8241   | 0.8254    | 0.8241 | 0.8234 |
| 0.2386        | 28.0  | 875  | 0.5716          | 0.8309   | 0.8314    | 0.8309 | 0.8293 |
| 0.2452        | 28.99 | 906  | 0.5659          | 0.8303   | 0.8295    | 0.8303 | 0.8279 |
| 0.2394        | 29.76 | 930  | 0.5674          | 0.8315   | 0.8313    | 0.8315 | 0.8294 |
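
The headline metrics at the top of the card match the epoch-26 row (step 812), which has the best validation accuracy rather than the lowest validation loss (step 437) or the final checkpoint (step 930). This is consistent with, though not confirmed as, best-model selection on accuracy. A sketch using a few rows copied from the table:

```python
# (epoch, step, val_loss, accuracy) rows taken from the table above.
rows = [
    (13.98, 437, 0.5132, 0.8303),  # lowest validation loss
    (25.98, 812, 0.5542, 0.8326),  # highest validation accuracy
    (29.76, 930, 0.5674, 0.8315),  # final epoch
]
best_by_acc = max(rows, key=lambda r: r[3])
best_by_loss = min(rows, key=lambda r: r[2])
# Selecting on accuracy picks step 812, whose metrics match the
# "evaluation set" numbers reported at the top of this card.
```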

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.15.1
Model details

  • Model size: 87.5M params
  • Tensor type: F32 (Safetensors)
