
dino-base-2023_10_31-demo-v5

This model is a fine-tuned version of facebook/dinov2-base on an unspecified dataset. It achieves the following results on the evaluation set (see the sketch after this list for how such metrics are commonly computed):

  • Loss: 0.0993
  • F1 Micro: 0.8523
  • F1 Macro: 0.7900
  • ROC AUC: 0.9054
  • Accuracy: 0.5712

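The gap between F1 micro (0.8523) and accuracy (0.5712) is consistent with a multi-label task, where accuracy means exact-match (subset) accuracy. The card does not state the task type, threshold, or averaging, so the following is only a minimal sketch of how such metrics are commonly computed with scikit-learn:

```python
# Minimal sketch of multi-label metric computation; the sigmoid activation,
# 0.5 threshold, and micro-averaged ROC AUC are assumptions, not details
# confirmed by this model card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(logits: np.ndarray, labels: np.ndarray) -> dict:
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid over per-label logits
    preds = (probs >= 0.5).astype(int)     # assumed decision threshold
    return {
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
```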
Model description

More information needed

Intended uses & limitations

More information needed
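Although the card leaves intended uses unspecified, a checkpoint fine-tuned from facebook/dinov2-base for classification can typically be loaded with the Transformers Auto classes. The repo id, image path, and multi-label thresholding below are placeholders and assumptions, not confirmed details of this model:

```python
# Hypothetical inference sketch; repo id and image path are placeholders.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "your-username/dino-base-2023_10_31-demo-v5"  # placeholder repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# If the task is multi-label (as the metrics above suggest), sigmoid plus a
# threshold is the usual decoding; for single-label classification, use argmax.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```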

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 0.01
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100

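A TrainingArguments configuration mirroring the values above might look like the following sketch; the output path and per-epoch evaluation strategy are assumptions (the results table reports one evaluation per epoch), and the rest restates the listed hyperparameters:

```python
# Sketch of a Transformers TrainingArguments matching the listed values;
# output_dir and evaluation_strategy are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dino-base-2023_10_31-demo-v5",  # assumed output path
    learning_rate=1e-2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumed; the table reports per-epoch eval
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Transformers
# defaults (adam_beta1, adam_beta2, adam_epsilon), so no override is needed.
```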
Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Micro | F1 Macro | ROC AUC | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:-------:|:--------:|
| 0.1686        | 1.0   | 592   | 0.1334          | 0.8014   | 0.7001   | 0.8691  | 0.5017   |
| 0.1255        | 2.0   | 1184  | 0.1214          | 0.8129   | 0.7228   | 0.8768  | 0.5180   |
| 0.1167        | 3.0   | 1776  | 0.1210          | 0.8167   | 0.7202   | 0.8714  | 0.5259   |
| 0.1087        | 4.0   | 2368  | 0.1209          | 0.8215   | 0.7468   | 0.8967  | 0.5136   |
| 0.1078        | 5.0   | 2960  | 0.1179          | 0.8135   | 0.7248   | 0.8640  | 0.5323   |
| 0.0992        | 6.0   | 3552  | 0.1169          | 0.8280   | 0.7482   | 0.9010  | 0.5224   |
| 0.0961        | 7.0   | 4144  | 0.1144          | 0.8311   | 0.7605   | 0.9004  | 0.5209   |
| 0.0939        | 8.0   | 4736  | 0.1107          | 0.8393   | 0.7503   | 0.9092  | 0.5298   |
| 0.0942        | 9.0   | 5328  | 0.1157          | 0.8249   | 0.7416   | 0.8762  | 0.5515   |
| 0.0922        | 10.0  | 5920  | 0.1072          | 0.8364   | 0.7776   | 0.8973  | 0.5481   |
| 0.0895        | 11.0  | 6512  | 0.1102          | 0.8310   | 0.7631   | 0.8890  | 0.5328   |
| 0.0866        | 12.0  | 7104  | 0.1054          | 0.8473   | 0.7701   | 0.9099  | 0.5451   |
| 0.0872        | 13.0  | 7696  | 0.1055          | 0.8454   | 0.7851   | 0.9097  | 0.5436   |
| 0.0850        | 14.0  | 8288  | 0.1069          | 0.8422   | 0.7684   | 0.9008  | 0.5559   |
| 0.0854        | 15.0  | 8880  | 0.1106          | 0.8316   | 0.7666   | 0.8926  | 0.5456   |
| 0.0841        | 16.0  | 9472  | 0.1068          | 0.8405   | 0.7681   | 0.8922  | 0.5702   |
| 0.0807        | 17.0  | 10064 | 0.1041          | 0.8460   | 0.7814   | 0.9051  | 0.5594   |
| 0.0819        | 18.0  | 10656 | 0.1053          | 0.8431   | 0.7822   | 0.9046  | 0.5466   |
| 0.0801        | 19.0  | 11248 | 0.1081          | 0.8395   | 0.7683   | 0.9063  | 0.5436   |
| 0.0795        | 20.0  | 11840 | 0.1077          | 0.8451   | 0.7721   | 0.8980  | 0.5520   |
| 0.0798        | 21.0  | 12432 | 0.1069          | 0.8390   | 0.7721   | 0.8839  | 0.5742   |
| 0.0784        | 22.0  | 13024 | 0.1050          | 0.8442   | 0.7847   | 0.9059  | 0.5461   |
| 0.0775        | 23.0  | 13616 | 0.1065          | 0.8443   | 0.7904   | 0.9072  | 0.5476   |
| 0.0727        | 24.0  | 14208 | 0.1010          | 0.8493   | 0.7910   | 0.9051  | 0.5678   |
| 0.0707        | 25.0  | 14800 | 0.1002          | 0.8496   | 0.7877   | 0.9058  | 0.5643   |
| 0.0697        | 26.0  | 15392 | 0.1006          | 0.8489   | 0.7886   | 0.9024  | 0.5692   |
| 0.0699        | 27.0  | 15984 | 0.1005          | 0.8531   | 0.7897   | 0.9054  | 0.5702   |
| 0.0692        | 28.0  | 16576 | 0.1001          | 0.8499   | 0.7894   | 0.9059  | 0.5663   |
| 0.0719        | 29.0  | 17168 | 0.0998          | 0.8524   | 0.7854   | 0.9058  | 0.5737   |
| 0.0686        | 30.0  | 17760 | 0.1006          | 0.8503   | 0.7897   | 0.9033  | 0.5663   |
| 0.0692        | 31.0  | 18352 | 0.1000          | 0.8519   | 0.7928   | 0.9055  | 0.5717   |
| 0.0707        | 32.0  | 18944 | 0.1000          | 0.8517   | 0.7862   | 0.9056  | 0.5737   |
| 0.0695        | 33.0  | 19536 | 0.1002          | 0.8517   | 0.7850   | 0.9012  | 0.5781   |
| 0.0690        | 34.0  | 20128 | 0.1008          | 0.8477   | 0.7849   | 0.9003  | 0.5658   |
| 0.0686        | 35.0  | 20720 | 0.1004          | 0.8523   | 0.7866   | 0.9009  | 0.5732   |
| 0.0688        | 36.0  | 21312 | 0.0994          | 0.8517   | 0.7902   | 0.9058  | 0.5673   |
| 0.0688        | 37.0  | 21904 | 0.0994          | 0.8523   | 0.7900   | 0.9048  | 0.5732   |
| 0.0677        | 38.0  | 22496 | 0.0994          | 0.8520   | 0.7905   | 0.9051  | 0.5697   |
| 0.0678        | 39.0  | 23088 | 0.0995          | 0.8516   | 0.7911   | 0.9035  | 0.5747   |
| 0.0680        | 40.0  | 23680 | 0.0994          | 0.8520   | 0.7888   | 0.9039  | 0.5712   |
| 0.0679        | 41.0  | 24272 | 0.0994          | 0.8535   | 0.7908   | 0.9056  | 0.5757   |
| 0.0682        | 42.0  | 24864 | 0.0993          | 0.8517   | 0.7883   | 0.9054  | 0.5707   |
| 0.0677        | 43.0  | 25456 | 0.0994          | 0.8516   | 0.7908   | 0.9052  | 0.5707   |
| 0.0678        | 44.0  | 26048 | 0.0995          | 0.8518   | 0.7916   | 0.9066  | 0.5673   |
| 0.0677        | 45.0  | 26640 | 0.0993          | 0.8519   | 0.7886   | 0.9054  | 0.5702   |
| 0.0684        | 46.0  | 27232 | 0.0995          | 0.8519   | 0.7909   | 0.9060  | 0.5697   |
| 0.0675        | 47.0  | 27824 | 0.0994          | 0.8524   | 0.7908   | 0.9048  | 0.5757   |
| 0.0670        | 48.0  | 28416 | 0.0995          | 0.8521   | 0.7893   | 0.9044  | 0.5717   |
| 0.0675        | 49.0  | 29008 | 0.0994          | 0.8524   | 0.7902   | 0.9056  | 0.5707   |
| 0.0674        | 50.0  | 29600 | 0.0994          | 0.8517   | 0.7893   | 0.9051  | 0.5692   |
| 0.0679        | 51.0  | 30192 | 0.0993          | 0.8519   | 0.7898   | 0.9052  | 0.5697   |
| 0.0667        | 52.0  | 30784 | 0.0993          | 0.8523   | 0.7900   | 0.9054  | 0.5712   |

Framework versions

  • Transformers 4.34.1
  • PyTorch 2.1.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1