---
language:
  - eng
license: apache-2.0
base_model: facebook/dinov2-large
tags:
  - multilabel-image-classification
  - multilabel
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: dinov2-large-2024_01_15-with_data_aug_batch-size32_epochs20_freeze
    results: []
---

# dinov2-large-2024_01_15-with_data_aug_batch-size32_epochs20_freeze

This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the multilabel_complete_dataset dataset. It achieves the following results on the evaluation set:

- Loss: 0.0891
- F1 Micro: 0.8422
- F1 Macro: 0.7067
- Roc Auc: 0.8958
- Accuracy: 0.5445
- Learning Rate: 0.0001
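
The card does not include a usage example, so here is a minimal multilabel inference sketch. The repo id, the image path, and the 0.5 decision threshold are assumptions, not details taken from this card.

```python
# Hedged sketch: multilabel inference with a DINOv2 classification checkpoint.
# The repo id, image path, and 0.5 threshold are assumptions.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "lombardata/dinov2-large-2024_01_15-with_data_aug_batch-size32_epochs20_freeze"  # assumed repo id

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel classification: apply a sigmoid per class rather than a softmax over classes.
probs = torch.sigmoid(logits)[0]
predicted_labels = [
    model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5
]
print(predicted_labels)
```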

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.01
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
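
A rough reconstruction of this configuration with the Trainer API is sketched below. The output directory, the label count, and the frozen-backbone step (suggested only by "freeze" in the run name) are assumptions; the card does not describe the dataset or the training script.

```python
# Hedged sketch of the listed hyperparameters with the Hugging Face Trainer API.
# output_dir, num_labels, and the backbone freeze are assumptions.
from transformers import AutoModelForImageClassification, TrainingArguments

num_labels = 10  # placeholder: the real number of classes is not stated in this card

model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-large",
    num_labels=num_labels,
    problem_type="multi_label_classification",  # BCE-with-logits loss, one sigmoid per label
)

# "freeze" in the run name suggests the backbone was frozen; one common way to do it:
for param in model.dinov2.parameters():
    param.requires_grad = False

training_args = TrainingArguments(
    output_dir="dinov2-large-multilabel",  # assumed output path
    learning_rate=1e-2,                    # as listed above (the results table logs 1e-3, then 1e-4)
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                        # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```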

### Training results

| Training Loss | Epoch | Step | Accuracy | F1 Macro | F1 Micro | Validation Loss | Roc Auc | Learning Rate |
|:-------------:|:-----:|:----:|:--------:|:--------:|:--------:|:---------------:|:-------:|:-------------:|
| No log        | 1.0   | 274  | 0.4728   | 0.5944   | 0.7681   | 0.1364          | 0.8536  | 0.001         |
| 0.2417        | 2.0   | 548  | 0.5028   | 0.6565   | 0.8040   | 0.1118          | 0.8701  | 0.001         |
| 0.2417        | 3.0   | 822  | 0.5122   | 0.6697   | 0.8124   | 0.1061          | 0.8763  | 0.001         |
| 0.1314        | 4.0   | 1096 | 0.5003   | 0.6643   | 0.8168   | 0.1062          | 0.8832  | 0.001         |
| 0.1314        | 5.0   | 1370 | 0.5178   | 0.6736   | 0.8176   | 0.1032          | 0.8783  | 0.001         |
| 0.1235        | 6.0   | 1644 | 0.5335   | 0.6928   | 0.8255   | 0.1027          | 0.8906  | 0.001         |
| 0.1235        | 7.0   | 1918 | 0.5237   | 0.6767   | 0.8205   | 0.1027          | 0.8774  | 0.001         |
| 0.1196        | 8.0   | 2192 | 0.5181   | 0.6758   | 0.8176   | 0.1027          | 0.8775  | 0.001         |
| 0.1196        | 9.0   | 2466 | 0.5335   | 0.6807   | 0.8224   | 0.0994          | 0.8765  | 0.001         |
| 0.117         | 10.0  | 2740 | 0.5167   | 0.6870   | 0.8283   | 0.1007          | 0.8937  | 0.001         |
| 0.1163        | 11.0  | 3014 | 0.5195   | 0.6925   | 0.8298   | 0.0971          | 0.8898  | 0.001         |
| 0.1163        | 12.0  | 3288 | 0.5230   | 0.7006   | 0.8282   | 0.0987          | 0.8861  | 0.001         |
| 0.1156        | 13.0  | 3562 | 0.5342   | 0.7065   | 0.8275   | 0.1017          | 0.8903  | 0.001         |
| 0.1156        | 14.0  | 3836 | 0.5276   | 0.6968   | 0.8243   | 0.1224          | 0.8851  | 0.001         |
| 0.1137        | 15.0  | 4110 | 0.5300   | 0.6958   | 0.8295   | 0.0981          | 0.8904  | 0.001         |
| 0.1137        | 16.0  | 4384 | 0.5398   | 0.7179   | 0.8412   | 0.0919          | 0.8981  | 0.0001        |
| 0.1091        | 17.0  | 4658 | 0.5433   | 0.7235   | 0.8431   | 0.0924          | 0.8997  | 0.0001        |
| 0.1091        | 18.0  | 4932 | 0.5447   | 0.7166   | 0.8448   | 0.0904          | 0.8984  | 0.0001        |
| 0.1026        | 19.0  | 5206 | 0.5443   | 0.7248   | 0.8448   | 0.0903          | 0.8963  | 0.0001        |
| 0.1026        | 20.0  | 5480 | 0.5422   | 0.7176   | 0.8439   | 0.0887          | 0.8971  | 0.0001        |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.15.0
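
For reproducing results against these versions, a minimal sketch that compares the installed packages with the pins listed above:

```python
# Minimal environment check against the framework versions listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.36.2",
    "torch": "2.1.0+cu118",
    "datasets": "2.14.5",
    "tokenizers": "0.15.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, card lists {want}")
```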