
dinov2-s-201

This model is a fine-tuned version of facebook/dinov2-small-imagenet1k-1-layer on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5503
  • Accuracy: 0.8049
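
The checkpoint can be used for image classification with the Transformers auto classes. The snippet below is only a minimal usage sketch: the repository id `your-username/dinov2-s-201` and the image path are placeholders, not the actual published locations.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repository id -- replace with the actual location of this checkpoint.
checkpoint = "your-username/dinov2-s-201"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = model.config.id2label[logits.argmax(-1).item()]
print(predicted_class)
```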

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 60
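
As a rough reconstruction only (the original training script is not included in this card), these values map onto Transformers `TrainingArguments` roughly as follows; `output_dir` and the evaluation strategy are assumptions, the rest mirrors the list above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dinov2-s-201",          # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # effective train batch size: 8 * 4 = 32
    num_train_epochs=60,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",        # assumed; the results table reports one eval per epoch
)
```

With these arguments, Transformers 4.40 defaults to AdamW with betas=(0.9, 0.999) and epsilon=1e-08, which matches the optimizer bullet above, so no explicit optimizer setting is shown.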

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.7244 | 0.2195 |
| 1.4057 | 2.0 | 10 | 1.1285 | 0.5122 |
| 1.4057 | 3.0 | 15 | 0.6513 | 0.7561 |
| 0.8392 | 4.0 | 20 | 0.5946 | 0.8049 |
| 0.8392 | 5.0 | 25 | 0.6221 | 0.8293 |
| 0.6571 | 6.0 | 30 | 1.3668 | 0.4878 |
| 0.6571 | 7.0 | 35 | 0.6909 | 0.6585 |
| 0.7314 | 8.0 | 40 | 0.6185 | 0.7073 |
| 0.7314 | 9.0 | 45 | 1.1204 | 0.5122 |
| 0.6679 | 10.0 | 50 | 0.6920 | 0.7073 |
| 0.6679 | 11.0 | 55 | 0.5515 | 0.7561 |
| 0.5023 | 12.0 | 60 | 0.8328 | 0.6829 |
| 0.5023 | 13.0 | 65 | 0.5849 | 0.7805 |
| 0.5507 | 14.0 | 70 | 0.4574 | 0.8293 |
| 0.5507 | 15.0 | 75 | 0.7229 | 0.7317 |
| 0.4605 | 16.0 | 80 | 0.6463 | 0.6829 |
| 0.4605 | 17.0 | 85 | 0.5158 | 0.7805 |
| 0.3592 | 18.0 | 90 | 0.5429 | 0.7317 |
| 0.3592 | 19.0 | 95 | 0.4544 | 0.8293 |
| 0.3719 | 20.0 | 100 | 0.5683 | 0.7805 |
| 0.3719 | 21.0 | 105 | 0.7423 | 0.7073 |
| 0.4792 | 22.0 | 110 | 0.6053 | 0.7561 |
| 0.4792 | 23.0 | 115 | 0.5218 | 0.8049 |
| 0.3421 | 24.0 | 120 | 0.5553 | 0.8049 |
| 0.3421 | 25.0 | 125 | 0.6367 | 0.7805 |
| 0.3528 | 26.0 | 130 | 0.3843 | 0.8049 |
| 0.3528 | 27.0 | 135 | 0.6923 | 0.7317 |
| 0.3335 | 28.0 | 140 | 0.6799 | 0.7073 |
| 0.3335 | 29.0 | 145 | 1.0437 | 0.6098 |
| 0.2933 | 30.0 | 150 | 0.8362 | 0.7073 |
| 0.2933 | 31.0 | 155 | 0.6174 | 0.7073 |
| 0.2902 | 32.0 | 160 | 0.5487 | 0.8780 |
| 0.2902 | 33.0 | 165 | 0.6631 | 0.8049 |
| 0.3046 | 34.0 | 170 | 0.7015 | 0.7561 |
| 0.3046 | 35.0 | 175 | 0.5250 | 0.8049 |
| 0.2355 | 36.0 | 180 | 0.6684 | 0.8537 |
| 0.2355 | 37.0 | 185 | 0.5820 | 0.7805 |
| 0.21 | 38.0 | 190 | 0.7903 | 0.7805 |
| 0.21 | 39.0 | 195 | 0.4358 | 0.9024 |
| 0.1833 | 40.0 | 200 | 0.8039 | 0.8293 |
| 0.1833 | 41.0 | 205 | 0.6242 | 0.8537 |
| 0.2227 | 42.0 | 210 | 0.7574 | 0.7073 |
| 0.2227 | 43.0 | 215 | 0.8873 | 0.7561 |
| 0.1831 | 44.0 | 220 | 0.9501 | 0.7561 |
| 0.1831 | 45.0 | 225 | 0.8774 | 0.8293 |
| 0.1815 | 46.0 | 230 | 0.7826 | 0.8049 |
| 0.1815 | 47.0 | 235 | 1.1516 | 0.6829 |
| 0.1615 | 48.0 | 240 | 0.6514 | 0.8537 |
| 0.1615 | 49.0 | 245 | 0.5799 | 0.8049 |
| 0.1381 | 50.0 | 250 | 0.7545 | 0.7805 |
| 0.1381 | 51.0 | 255 | 0.5452 | 0.8049 |
| 0.1462 | 52.0 | 260 | 0.7610 | 0.8049 |
| 0.1462 | 53.0 | 265 | 0.7827 | 0.8049 |
| 0.1096 | 54.0 | 270 | 0.6393 | 0.8537 |
| 0.1096 | 55.0 | 275 | 0.5902 | 0.8293 |
| 0.0914 | 56.0 | 280 | 0.7998 | 0.8537 |
| 0.0914 | 57.0 | 285 | 0.9032 | 0.7805 |
| 0.1674 | 58.0 | 290 | 0.5467 | 0.8537 |
| 0.1674 | 59.0 | 295 | 0.9872 | 0.7805 |
| 0.086 | 60.0 | 300 | 0.6481 | 0.8537 |

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1
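
To approximate this environment, the versions above can be pinned at install time; the command below is only an illustration and assumes a PyTorch build matching CUDA 12.1 is available for your platform.

```bash
pip install transformers==4.40.1 torch==2.2.1 datasets==2.19.0 tokenizers==0.19.1
```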