
pairs-classifier-20x-large-516-line100

This model is a fine-tuned version of facebook/dinov2-large-imagenet1k-1-layer on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1753
  • Accuracy: 0.6791
  • F1: 0.6790
  • Recall: 0.7219
  • Precision: 0.7241
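
Because the card itself gives no usage instructions, the sketch below shows one plausible way to load this checkpoint for inference with the Transformers image-classification API. The repository id, the input file name, and the label mapping are assumptions for illustration and are not confirmed by this card.

```python
# Minimal inference sketch (repo id, image path, and labels are placeholders,
# not taken from this model card).
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "pairs-classifier-20x-large-516-line100"  # hypothetical repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```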

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
  • mixed_precision_training: Native AMP
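
The snippet below is a sketch of a Transformers `TrainingArguments` configuration that mirrors the hyperparameters listed above. The output directory, evaluation/logging strategies, and the choice of `fp16` for Native AMP are assumptions; everything else follows the list directly (Adam betas and epsilon are the Trainer defaults).

```python
# Sketch of a TrainingArguments setup matching the listed hyperparameters
# (output_dir, evaluation/logging strategies, and fp16 are assumptions).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pairs-classifier-20x-large-516-line100",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,   # effective train batch size 64 * 4 = 256
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision (assumed fp16)
    evaluation_strategy="epoch",     # assumed; eval steps in the table fall roughly per epoch
    logging_strategy="epoch",
)
```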

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7285        | 0.95  | 13   | 0.6921          | 0.6279   | 0.3857 | 0.3140 | 0.5       |
| 0.6882        | 1.96  | 27   | 0.6537          | 0.6372   | 0.5580 | 0.5947 | 0.5660    |
| 0.5771        | 2.98  | 41   | 0.6030          | 0.7070   | 0.7039 | 0.7141 | 0.7285    |
| 0.5086        | 4.0   | 55   | 0.5027          | 0.7395   | 0.7365 | 0.7453 | 0.7620    |
| 0.4418        | 4.95  | 68   | 0.6439          | 0.7023   | 0.7022 | 0.7652 | 0.7579    |
| 0.3432        | 5.96  | 82   | 0.6873          | 0.7116   | 0.7115 | 0.7755 | 0.7678    |
| 0.3346        | 6.98  | 96   | 0.4600          | 0.7721   | 0.7579 | 0.7563 | 0.7600    |
| 0.2755        | 8.0   | 110  | 0.8594          | 0.6884   | 0.6881 | 0.7528 | 0.7442    |
| 0.2676        | 8.95  | 123  | 0.9163          | 0.6884   | 0.6879 | 0.7588 | 0.7468    |
| 0.2658        | 9.96  | 137  | 0.5970          | 0.7442   | 0.7424 | 0.7575 | 0.7734    |
| 0.24          | 10.98 | 151  | 0.5749          | 0.7488   | 0.7454 | 0.7519 | 0.7694    |
| 0.234         | 12.0  | 165  | 0.8778          | 0.6977   | 0.6976 | 0.7517 | 0.7491    |
| 0.2154        | 12.95 | 178  | 0.8210          | 0.7023   | 0.7023 | 0.7490 | 0.7502    |
| 0.203         | 13.96 | 192  | 0.7305          | 0.7163   | 0.7154 | 0.7397 | 0.7512    |
| 0.1765        | 14.98 | 206  | 0.8578          | 0.6977   | 0.6976 | 0.7416 | 0.7440    |
| 0.1607        | 16.0  | 220  | 0.6878          | 0.7581   | 0.7543 | 0.7587 | 0.7769    |
| 0.1783        | 16.95 | 233  | 0.6481          | 0.7814   | 0.7764 | 0.7765 | 0.7954    |
| 0.1767        | 17.96 | 247  | 0.9805          | 0.6791   | 0.6790 | 0.7174 | 0.7215    |
| 0.1596        | 18.98 | 261  | 1.0490          | 0.6977   | 0.6976 | 0.7517 | 0.7491    |
| 0.152         | 20.0  | 275  | 1.1249          | 0.6744   | 0.6741 | 0.7402 | 0.7306    |
| 0.1807        | 20.95 | 288  | 0.8215          | 0.7302   | 0.7281 | 0.7419 | 0.7572    |
| 0.1441        | 21.96 | 302  | 1.2401          | 0.6791   | 0.6788 | 0.7425 | 0.7343    |
| 0.1494        | 22.98 | 316  | 0.9281          | 0.7023   | 0.7015 | 0.7273 | 0.7375    |
| 0.1303        | 24.0  | 330  | 0.8422          | 0.7256   | 0.7236 | 0.7387 | 0.7535    |
| 0.1324        | 24.95 | 343  | 0.9261          | 0.7070   | 0.7061 | 0.7302 | 0.7412    |
| 0.1214        | 25.96 | 357  | 1.1154          | 0.6791   | 0.6791 | 0.7266 | 0.7266    |
| 0.1184        | 26.98 | 371  | 1.2263          | 0.6698   | 0.6697 | 0.7267 | 0.7218    |
| 0.14          | 28.0  | 385  | 0.9725          | 0.7163   | 0.7154 | 0.7397 | 0.7512    |
| 0.107         | 28.95 | 398  | 0.9908          | 0.7163   | 0.7154 | 0.7397 | 0.7512    |
| 0.1082        | 29.96 | 412  | 1.1253          | 0.6698   | 0.6697 | 0.7216 | 0.7192    |
| 0.1477        | 30.98 | 426  | 0.7786          | 0.7581   | 0.7537 | 0.7564 | 0.7743    |
| 0.1296        | 32.0  | 440  | 1.5284          | 0.6558   | 0.6546 | 0.7377 | 0.7183    |
| 0.1122        | 32.95 | 453  | 1.0805          | 0.7070   | 0.7063 | 0.734  | 0.7437    |
| 0.0994        | 33.96 | 467  | 1.2907          | 0.6651   | 0.6651 | 0.7190 | 0.7155    |
| 0.1046        | 34.98 | 481  | 0.8528          | 0.7488   | 0.7454 | 0.7519 | 0.7694    |
| 0.0942        | 36.0  | 495  | 1.5078          | 0.6651   | 0.6645 | 0.7357 | 0.7231    |
| 0.1089        | 36.95 | 508  | 0.9448          | 0.7442   | 0.7415 | 0.7514 | 0.7683    |
| 0.096         | 37.96 | 522  | 1.2000          | 0.6651   | 0.6651 | 0.7094 | 0.7104    |
| 0.0903        | 38.98 | 536  | 0.9947          | 0.7163   | 0.7147 | 0.7326 | 0.7461    |
| 0.079         | 40.0  | 550  | 1.0776          | 0.7116   | 0.7106 | 0.7331 | 0.7449    |
| 0.0787        | 40.95 | 563  | 1.1923          | 0.6744   | 0.6744 | 0.7193 | 0.7204    |
| 0.0882        | 41.96 | 577  | 1.1516          | 0.6791   | 0.6788 | 0.7132 | 0.7190    |
| 0.0937        | 42.98 | 591  | 1.1568          | 0.6884   | 0.6881 | 0.7229 | 0.7289    |
| 0.0837        | 44.0  | 605  | 1.1792          | 0.6791   | 0.6790 | 0.7219 | 0.7241    |
| 0.0838        | 44.95 | 618  | 1.1944          | 0.6791   | 0.6790 | 0.7219 | 0.7241    |
| 0.085         | 45.96 | 632  | 1.0771          | 0.7163   | 0.7154 | 0.7397 | 0.7512    |
| 0.0781        | 46.98 | 646  | 1.1780          | 0.6791   | 0.6790 | 0.7219 | 0.7241    |
| 0.0828        | 47.27 | 650  | 1.1753          | 0.6791   | 0.6790 | 0.7219 | 0.7241    |
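
The card does not state how the accuracy, F1, recall, and precision columns were computed. The sketch below shows one plausible `compute_metrics` function that could produce such values with a Transformers Trainer; the use of scikit-learn and the "weighted" averaging mode are assumptions, not confirmed by this card.

```python
# Hedged sketch of a compute_metrics callback for a Trainer.
# The "weighted" averaging mode is an assumption; the card does not specify it.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "recall": recall,
        "precision": precision,
    }
```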

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.1