
dino-base-2023_11_17-original_head

This model is a fine-tuned version of facebook/dinov2-base on an undisclosed dataset (not documented in this card). It achieves the following results on the evaluation set:

  • Loss: 0.1307
  • F1 Micro: 0.8332
  • F1 Macro: 0.7987
  • ROC AUC: 0.8961
  • Accuracy: 0.5248
  • Learning Rate (final): 0.0001
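
The combination of micro/macro F1, ROC AUC, and a noticeably lower exact-match accuracy suggests a multi-label classification setup. A minimal sketch of how such metrics could be computed with scikit-learn follows; the sigmoid activation and the 0.5 decision threshold are assumptions, not confirmed by this card:

```python
# Hypothetical evaluation sketch; assumes a multi-label setup with
# sigmoid outputs thresholded at 0.5 (not confirmed by this card).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(logits: np.ndarray, labels: np.ndarray) -> dict:
    """logits: (n_samples, n_labels) raw scores; labels: binary indicator matrix."""
    probs = 1.0 / (1.0 + np.exp(-logits))      # per-label sigmoid probabilities
    preds = (probs >= 0.5).astype(int)         # assumed threshold
    return {
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
```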

Model description

More information needed

Intended uses & limitations

More information needed
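
Pending more detail from the authors, here is a minimal inference sketch. It assumes the checkpoint exposes a standard Transformers image-classification head; the repository id is inferred from the card title, and the sigmoid decoding with a 0.5 threshold follows the multi-label assumption above:

```python
# Minimal inference sketch; the repo id is a guess based on the card
# title, and sigmoid decoding assumes a multi-label head.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "dino-base-2023_11_17-original_head"  # assumed repository id
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]                # per-label probabilities
predicted = (probs >= 0.5).nonzero().flatten()  # assumed 0.5 threshold
print([model.config.id2label[i.item()] for i in predicted])
```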

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.01
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
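
These settings could be expressed with the Hugging Face Trainer roughly as follows. This is a sketch only: the output directory and evaluation strategy are assumptions, and the step-wise learning-rate drops visible in the results table (0.01 → 0.001 → 0.0001) suggest additional scheduling or restarts not captured by the listed linear scheduler:

```python
# Sketch of TrainingArguments mirroring the listed hyperparameters;
# output_dir and evaluation_strategy are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dino-base-2023_11_17-original_head",  # assumed
    learning_rate=1e-2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
)
```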

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Micro | F1 Macro | ROC AUC | Accuracy | Learning Rate |
|---------------|-------|-------|-----------------|----------|----------|---------|----------|---------------|
| 0.4512        | 1.0   | 536   | 0.5206          | 0.7341   | 0.6738   | 0.8351  | 0.3994   | 0.01          |
| 0.4039        | 2.0   | 1072  | 0.3435          | 0.7977   | 0.7470   | 0.8819  | 0.4630   | 0.01          |
| 0.4097        | 3.0   | 1608  | 0.4639          | 0.7760   | 0.7255   | 0.8642  | 0.4237   | 0.01          |
| 0.3939        | 4.0   | 2144  | 0.3950          | 0.7937   | 0.7209   | 0.8757  | 0.4627   | 0.01          |
| 0.373         | 5.0   | 2680  | 0.4402          | 0.7570   | 0.7277   | 0.8610  | 0.4205   | 0.01          |
| 0.3838        | 6.0   | 3216  | 0.5527          | 0.7291   | 0.6525   | 0.8146  | 0.4101   | 0.01          |
| 0.3668        | 7.0   | 3752  | 0.4480          | 0.7590   | 0.7117   | 0.8412  | 0.4287   | 0.01          |
| 0.358         | 8.0   | 4288  | 0.4486          | 0.7743   | 0.7346   | 0.8785  | 0.4094   | 0.01          |
| 0.2861        | 9.0   | 4824  | 0.2277          | 0.8197   | 0.7896   | 0.8881  | 0.5002   | 0.001         |
| 0.1352        | 10.0  | 5360  | 0.2217          | 0.8174   | 0.7894   | 0.8949  | 0.4916   | 0.001         |
| 0.1151        | 11.0  | 5896  | 0.2070          | 0.8171   | 0.7840   | 0.8829  | 0.5091   | 0.001         |
| 0.106         | 12.0  | 6432  | 0.1962          | 0.8204   | 0.7974   | 0.9027  | 0.4995   | 0.001         |
| 0.1018        | 13.0  | 6968  | 0.1928          | 0.8178   | 0.7898   | 0.8933  | 0.4905   | 0.001         |
| 0.0925        | 14.0  | 7504  | 0.1798          | 0.8245   | 0.7847   | 0.8949  | 0.5002   | 0.001         |
| 0.0902        | 15.0  | 8040  | 0.1771          | 0.8159   | 0.7764   | 0.8798  | 0.5095   | 0.001         |
| 0.0871        | 16.0  | 8576  | 0.1733          | 0.8170   | 0.7821   | 0.8875  | 0.5055   | 0.001         |
| 0.084         | 17.0  | 9112  | 0.1710          | 0.8228   | 0.7924   | 0.9015  | 0.4930   | 0.001         |
| 0.0853        | 18.0  | 9648  | 0.1692          | 0.8218   | 0.7850   | 0.8905  | 0.4952   | 0.001         |
| 0.0841        | 19.0  | 10184 | 0.1660          | 0.8179   | 0.7836   | 0.8945  | 0.4945   | 0.001         |
| 0.0821        | 20.0  | 10720 | 0.1736          | 0.8107   | 0.7817   | 0.8831  | 0.4912   | 0.001         |
| 0.083         | 21.0  | 11256 | 0.1595          | 0.8178   | 0.7888   | 0.8955  | 0.4980   | 0.001         |
| 0.0801        | 22.0  | 11792 | 0.1613          | 0.8226   | 0.7895   | 0.8997  | 0.4991   | 0.001         |
| 0.0815        | 23.0  | 12328 | 0.1583          | 0.8177   | 0.7862   | 0.8899  | 0.5080   | 0.001         |
| 0.0822        | 24.0  | 12864 | 0.1555          | 0.8202   | 0.7782   | 0.8822  | 0.5134   | 0.001         |
| 0.0793        | 25.0  | 13400 | 0.1554          | 0.8207   | 0.7883   | 0.8986  | 0.5023   | 0.001         |
| 0.0788        | 26.0  | 13936 | 0.1543          | 0.8147   | 0.7822   | 0.8831  | 0.5016   | 0.001         |
| 0.0797        | 27.0  | 14472 | 0.1511          | 0.8230   | 0.7831   | 0.8886  | 0.5116   | 0.001         |
| 0.0795        | 28.0  | 15008 | 0.1510          | 0.8197   | 0.7831   | 0.8860  | 0.5038   | 0.001         |
| 0.079         | 29.0  | 15544 | 0.1465          | 0.8225   | 0.7879   | 0.8844  | 0.5120   | 0.001         |
| 0.0802        | 30.0  | 16080 | 0.1473          | 0.8229   | 0.7885   | 0.8966  | 0.5030   | 0.001         |
| 0.0786        | 31.0  | 16616 | 0.1627          | 0.8000   | 0.7544   | 0.8594  | 0.4955   | 0.001         |
| 0.0806        | 32.0  | 17152 | 0.1465          | 0.8221   | 0.7911   | 0.8916  | 0.4970   | 0.001         |
| 0.0776        | 33.0  | 17688 | 0.1477          | 0.8230   | 0.7925   | 0.9010  | 0.4998   | 0.001         |
| 0.0801        | 34.0  | 18224 | 0.1436          | 0.8221   | 0.7891   | 0.8961  | 0.5041   | 0.001         |
| 0.0797        | 35.0  | 18760 | 0.1497          | 0.8198   | 0.7843   | 0.8900  | 0.4905   | 0.001         |
| 0.0781        | 36.0  | 19296 | 0.1407          | 0.8254   | 0.7936   | 0.8924  | 0.5098   | 0.001         |
| 0.079         | 37.0  | 19832 | 0.1465          | 0.8229   | 0.7735   | 0.8898  | 0.5152   | 0.001         |
| 0.082         | 38.0  | 20368 | 0.1536          | 0.8102   | 0.7882   | 0.8861  | 0.4855   | 0.001         |
| 0.0781        | 39.0  | 20904 | 0.1463          | 0.8200   | 0.7856   | 0.8917  | 0.5052   | 0.001         |
| 0.0811        | 40.0  | 21440 | 0.1465          | 0.8159   | 0.7798   | 0.8885  | 0.5016   | 0.001         |
| 0.0786        | 41.0  | 21976 | 0.1521          | 0.8154   | 0.7669   | 0.8864  | 0.5027   | 0.001         |
| 0.0775        | 42.0  | 22512 | 0.1418          | 0.8256   | 0.7908   | 0.8961  | 0.5127   | 0.001         |
| 0.0641        | 43.0  | 23048 | 0.1318          | 0.8344   | 0.7996   | 0.8963  | 0.5259   | 0.0001        |
| 0.0633        | 44.0  | 23584 | 0.1312          | 0.8329   | 0.7964   | 0.8931  | 0.5313   | 0.0001        |
| 0.0627        | 45.0  | 24120 | 0.1313          | 0.8327   | 0.7981   | 0.8957  | 0.5277   | 0.0001        |
| 0.0627        | 46.0  | 24656 | 0.1307          | 0.8332   | 0.8015   | 0.8960  | 0.5270   | 0.0001        |
| 0.0619        | 47.0  | 25192 | 0.1307          | 0.8337   | 0.7990   | 0.8959  | 0.5273   | 0.0001        |
| 0.0626        | 48.0  | 25728 | 0.1309          | 0.8333   | 0.8008   | 0.8967  | 0.5234   | 0.0001        |
| 0.063         | 49.0  | 26264 | 0.1310          | 0.8324   | 0.7976   | 0.8954  | 0.5213   | 0.0001        |
| 0.0623        | 50.0  | 26800 | 0.1307          | 0.8332   | 0.7987   | 0.8961  | 0.5248   | 0.0001        |

Framework versions

  • Transformers 4.34.1
  • PyTorch 2.1.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1