---
language:
  - eng
license: apache-2.0
base_model: facebook/dinov2-large
tags:
  - multilabel-image-classification
  - multilabel
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: dino-large-2023_12_19-kornia_img-size518_batch-size16_epochs20
    results: []
---

# dino-large-2023_12_19-kornia_img-size518_batch-size16_epochs20

This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the multilabel_complete_dataset dataset. It achieves the following results on the evaluation set:

- Loss: 0.1199
- F1 Micro: 0.8367
- F1 Macro: 0.8026
- Roc Auc: 0.9072
- Accuracy: 0.5354
- Learning Rate: 0.001
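A minimal inference sketch in Python (an illustration, not code from this card): it loads the checkpoint with the `transformers` Auto classes and applies a per-label sigmoid, as is standard for multilabel classification. The repository id, image path, and 0.5 decision threshold are assumptions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed repository id; replace with the actual hub id of this checkpoint.
repo = "lombardata/dino-large-2023_12_19-kornia_img-size518_batch-size16_epochs20"

processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)
model.eval()

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel: each class gets an independent sigmoid score;
# 0.5 is an assumed decision threshold, not stated in the card.
scores = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, s in enumerate(scores.tolist()) if s > 0.5]
print(predicted)
```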

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
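These hyperparameters map directly onto `transformers.TrainingArguments`. The sketch below is a hedged reconstruction, not the authors' actual training script; `output_dir` is an assumption.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="dino-large-2023_12_19-kornia_img-size518_batch-size16_epochs20",
    learning_rate=1e-2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the library's
    # default optimizer settings, so no explicit optimizer override is needed.
)
```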

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Micro | F1 Macro | Roc Auc | Accuracy | Learning Rate |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:-------:|:--------:|:-------------:|
| 0.2545        | 1.0   | 536   | 0.1936          | 0.7016   | 0.5553   | 0.7908  | 0.4134   | 0.01          |
| 0.2163        | 2.0   | 1072  | 0.1644          | 0.7672   | 0.6940   | 0.8669  | 0.4241   | 0.01          |
| 0.2142        | 3.0   | 1608  | 0.1720          | 0.7264   | 0.6226   | 0.8210  | 0.4259   | 0.01          |
| 0.2107        | 4.0   | 2144  | 0.1779          | 0.7311   | 0.6056   | 0.8442  | 0.4019   | 0.01          |
| 0.2117        | 5.0   | 2680  | 0.1835          | 0.7542   | 0.6745   | 0.8724  | 0.3834   | 0.01          |
| 0.2171        | 6.0   | 3216  | 0.1732          | 0.7347   | 0.5959   | 0.8236  | 0.4209   | 0.01          |
| 0.2178        | 7.0   | 3752  | 0.2698          | 0.7253   | 0.5932   | 0.8165  | 0.3905   | 0.01          |
| 0.2177        | 8.0   | 4288  | 0.1940          | 0.7360   | 0.6280   | 0.8286  | 0.4119   | 0.01          |
| 0.212         | 9.0   | 4824  | 0.1455          | 0.7993   | 0.7491   | 0.8757  | 0.4898   | 0.001         |
| 0.1761        | 10.0  | 5360  | 0.1357          | 0.8116   | 0.7661   | 0.8733  | 0.5123   | 0.001         |
| 0.1681        | 11.0  | 5896  | 0.1386          | 0.8152   | 0.7753   | 0.8791  | 0.5166   | 0.001         |
| 0.1579        | 12.0  | 6432  | 0.1820          | 0.8220   | 0.7827   | 0.8919  | 0.5163   | 0.001         |
| 0.1553        | 13.0  | 6968  | 0.1228          | 0.8297   | 0.7908   | 0.8898  | 0.5327   | 0.001         |
| 0.1512        | 14.0  | 7504  | 0.1233          | 0.8258   | 0.7815   | 0.8845  | 0.5302   | 0.001         |
| 0.1508        | 15.0  | 8040  | 0.1248          | 0.8179   | 0.7682   | 0.8740  | 0.5305   | 0.001         |
| 0.1499        | 16.0  | 8576  | 0.1193          | 0.8277   | 0.7903   | 0.8806  | 0.5395   | 0.001         |
| 0.1435        | 17.0  | 9112  | 0.1159          | 0.8381   | 0.7996   | 0.9037  | 0.5380   | 0.001         |
| 0.1463        | 18.0  | 9648  | 0.1166          | 0.8393   | 0.8033   | 0.8957  | 0.5481   | 0.001         |
| 0.1423        | 19.0  | 10184 | 0.1216          | 0.8327   | 0.8009   | 0.8865  | 0.5459   | 0.001         |
| 0.1444        | 20.0  | 10720 | 0.1171          | 0.8383   | 0.8020   | 0.8908  | 0.5509   | 0.001         |
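For context on the columns above, the sketch below shows how such metrics are typically computed in `generated_from_trainer` scripts for multilabel models: a per-label sigmoid and a 0.5 threshold (both assumptions here), then scikit-learn scores. Note that `accuracy_score` on multilabel indicator arrays is exact-match (subset) accuracy, which is why Accuracy sits well below F1 Micro.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(eval_pred):
    """Assumed metric function: sigmoid + 0.5 threshold, then sklearn scores."""
    logits, labels = eval_pred
    probs = 1.0 / (1.0 + np.exp(-logits))   # per-label sigmoid
    preds = (probs >= 0.5).astype(int)      # assumed decision threshold
    return {
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        # On multilabel indicators this is exact-match (subset) accuracy.
        "accuracy": accuracy_score(labels, preds),
    }
```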

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1