
dino-large-2023_12_06-with_custom_head

This model is a fine-tuned version of facebook/dinov2-large on the multilabel_complete_dataset dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2014
  • F1 Micro: 0.8291
  • F1 Macro: 0.8015
  • Roc Auc: 0.9029
  • Accuracy: 0.5132
  • Learning Rate: 0.001
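
The snippet below is a minimal inference sketch for this checkpoint. It assumes the repository loads with the standard transformers image-classification classes and that the custom head emits one logit per label, scored with a sigmoid as is usual for multilabel models; the repo id, image path, and 0.5 threshold are illustrative placeholders, not taken from this card.

```python
# Minimal multilabel inference sketch; repo id and threshold are placeholders.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "your-username/dino-large-2023_12_06-with_custom_head"  # placeholder

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # any RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel head: sigmoid each logit and threshold labels independently.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```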

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.01
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 14
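
For reference, these hyperparameters map onto transformers TrainingArguments roughly as sketched below. The output_dir and the surrounding Trainer setup (model, datasets, metrics) are assumptions not stated in this card; the listed Adam betas and epsilon match the TrainingArguments defaults, so they need no explicit arguments.

```python
# Sketch of the listed hyperparameters as TrainingArguments; output_dir is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dino-large-2023_12_06-with_custom_head",
    learning_rate=1e-2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=14,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are already the defaults.
)
```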

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Roc Auc | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-------:|:--------:|:-------------:|
| 0.4706 | 1.0 | 536 | 0.4533 | 0.7389 | 0.6876 | 0.8316 | 0.4269 | 0.01 |
| 0.4045 | 2.0 | 1072 | 0.4262 | 0.7669 | 0.7188 | 0.8634 | 0.4391 | 0.01 |
| 0.3973 | 3.0 | 1608 | 0.4722 | 0.7601 | 0.7176 | 0.8372 | 0.4537 | 0.01 |
| 0.3961 | 4.0 | 2144 | 0.6075 | 0.7528 | 0.6913 | 0.8724 | 0.3762 | 0.01 |
| 0.3751 | 5.0 | 2680 | 0.3916 | 0.7884 | 0.7511 | 0.8925 | 0.4352 | 0.01 |
| 0.365 | 6.0 | 3216 | 0.5256 | 0.7660 | 0.7066 | 0.8535 | 0.4105 | 0.01 |
| 0.3565 | 7.0 | 3752 | 0.5708 | 0.7293 | 0.6947 | 0.8254 | 0.4101 | 0.01 |
| 0.3807 | 8.0 | 4288 | 0.4770 | 0.7811 | 0.7145 | 0.8609 | 0.4591 | 0.01 |
| 0.3462 | 9.0 | 4824 | 0.4612 | 0.7880 | 0.7522 | 0.8775 | 0.4452 | 0.01 |
| 0.38 | 10.0 | 5360 | 0.4559 | 0.7943 | 0.7517 | 0.8747 | 0.4612 | 0.01 |
| 0.3472 | 11.0 | 5896 | 0.5081 | 0.7709 | 0.7315 | 0.8980 | 0.4041 | 0.01 |
| 0.3167 | 12.0 | 6432 | 0.2364 | 0.8268 | 0.7990 | 0.8945 | 0.5141 | 0.001 |
| 0.1322 | 13.0 | 6968 | 0.2222 | 0.8209 | 0.7931 | 0.8951 | 0.4923 | 0.001 |
| 0.0958 | 14.0 | 7504 | 0.2089 | 0.8287 | 0.7975 | 0.8985 | 0.5052 | 0.001 |
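
The metrics in this table follow the usual multilabel recipe. The sketch below shows one common way to compute them with scikit-learn; the sigmoid + 0.5 threshold is an assumption rather than something stated in this card, and the accuracy here is exact-match (subset) accuracy, which is stricter and typically lower than the per-label F1 scores.

```python
# Sketch of multilabel metric computation; the 0.5 threshold is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(logits: np.ndarray, labels: np.ndarray) -> dict:
    probs = 1 / (1 + np.exp(-logits))   # sigmoid over per-label logits
    preds = (probs >= 0.5).astype(int)  # threshold each label independently
    return {
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match subset accuracy
    }
```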

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1
