
012-microsoft-deberta-v3-base-finetuned-yahoo-8000_2000

This model is a fine-tuned version of microsoft/deberta-v3-base. The dataset field of this auto-generated card is unset (None); the model name suggests a Yahoo topic-classification dataset with 8,000 training and 2,000 evaluation examples. It achieves the following results on the evaluation set:

  • Loss: 0.9425
  • F1: 0.7138
  • Accuracy: 0.718
  • Precision: 0.7184
  • Recall: 0.718
  • System RAM used: 4.1370 GB
  • System RAM total: 83.4807 GB
  • GPU RAM allocated: 2.0897 GB
  • GPU RAM cached: 25.8555 GB
  • GPU RAM total: 39.5640 GB
  • GPU utilization: 46%
  • Disk space used: 29.6434 GB
  • Disk space total: 78.1898 GB
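
The card does not state how F1, precision, and recall were aggregated across classes; weighted averaging is a plausible assumption, since weighted recall equals plain accuracy and the two match in every row of the results table below. A minimal sketch of a Trainer-compatible `compute_metrics` function under that assumption:

```python
# Sketch of a compute_metrics function that would yield the metric set above.
# Weighted averaging is an ASSUMPTION; the card does not say how the
# per-class scores were aggregated.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "f1": f1,
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
    }
```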

Model description

More information needed

Intended uses & limitations

More information needed
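
Pending details from the author, a minimal inference sketch follows. The repo id and example question are placeholders, and a populated `id2label` mapping in the config is an assumption.

```python
# Minimal inference sketch. The repo id is a placeholder and the id2label
# mapping is assumed to be populated; adjust both for the actual repo.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "012-microsoft-deberta-v3-base-finetuned-yahoo-8000_2000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "How do I change a flat tire on a bicycle?"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```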

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
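
As a minimal sketch, these settings map onto Hugging Face TrainingArguments as follows. The output directory is a placeholder, the 50-step evaluation and logging cadence is inferred from the results table, and dataset loading and the Trainer call are omitted because the card does not describe them.

```python
# Sketch of TrainingArguments matching the listed hyperparameters
# (transformers 4.31). output_dir is a placeholder; eval_steps=50 is
# inferred from the step spacing in the results table below.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-base-finetuned-yahoo",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=50,
    logging_steps=50,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the transformers
    # default optimizer configuration, so no explicit optimizer args are set.
)
```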

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | Sys RAM Used (GB) | Sys RAM Total (GB) | GPU RAM Alloc (GB) | GPU RAM Cached (GB) | GPU RAM Total (GB) | GPU Util (%) | Disk Used (GB) | Disk Total (GB) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.2963 | 0.2 | 50 | 2.2150 | 0.1298 | 0.2015 | 0.2090 | 0.2015 | 3.9807 | 83.4807 | 2.0898 | 25.8457 | 39.5640 | 48 | 24.8073 | 78.1898 |
| 1.8843 | 0.4 | 100 | 1.4590 | 0.5588 | 0.592 | 0.6418 | 0.592 | 3.9979 | 83.4807 | 2.0898 | 25.8477 | 39.5640 | 49 | 24.8074 | 78.1898 |
| 1.3348 | 0.6 | 150 | 1.1809 | 0.6613 | 0.668 | 0.6736 | 0.668 | 3.9836 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 24.8074 | 78.1898 |
| 1.1501 | 0.8 | 200 | 1.0484 | 0.6929 | 0.695 | 0.6981 | 0.695 | 3.9695 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 51 | 24.8074 | 78.1898 |
| 1.0842 | 1.0 | 250 | 1.0265 | 0.6825 | 0.6905 | 0.6894 | 0.6905 | 3.9755 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 24.8075 | 78.1898 |
| 0.8618 | 1.2 | 300 | 0.9904 | 0.7024 | 0.704 | 0.7048 | 0.704 | 3.9708 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 24.8075 | 78.1898 |
| 0.9329 | 1.4 | 350 | 0.9927 | 0.6825 | 0.686 | 0.6939 | 0.686 | 3.9595 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8076 | 78.1898 |
| 0.9053 | 1.6 | 400 | 0.9795 | 0.7021 | 0.705 | 0.7048 | 0.705 | 3.9837 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8076 | 78.1898 |
| 0.9173 | 1.8 | 450 | 0.9749 | 0.7024 | 0.709 | 0.7140 | 0.709 | 3.9851 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8077 | 78.1898 |
| 0.9189 | 2.0 | 500 | 0.9425 | 0.7138 | 0.718 | 0.7184 | 0.718 | 3.9949 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 24.8077 | 78.1898 |
| 0.7727 | 2.2 | 550 | 0.9590 | 0.7101 | 0.7155 | 0.7150 | 0.7155 | 4.1847 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 45 | 29.6429 | 78.1898 |
| 0.7092 | 2.4 | 600 | 0.9389 | 0.7180 | 0.7215 | 0.7177 | 0.7215 | 4.1798 | 83.4807 | 2.0901 | 25.8555 | 39.5640 | 47 | 29.6429 | 78.1898 |
| 0.737 | 2.6 | 650 | 0.9606 | 0.7074 | 0.715 | 0.7144 | 0.715 | 4.1766 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 51 | 29.6430 | 78.1898 |
| 0.7334 | 2.8 | 700 | 0.9348 | 0.7175 | 0.72 | 0.7180 | 0.72 | 4.1699 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 29.6430 | 78.1898 |
| 0.7316 | 3.0 | 750 | 0.9407 | 0.7230 | 0.7275 | 0.7238 | 0.7275 | 4.1785 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 29.6431 | 78.1898 |
| 0.6045 | 3.2 | 800 | 0.9300 | 0.7208 | 0.721 | 0.7253 | 0.721 | 4.1864 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 48 | 29.6431 | 78.1898 |
| 0.6262 | 3.4 | 850 | 0.9416 | 0.7165 | 0.7175 | 0.7184 | 0.7175 | 4.1847 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 29.6431 | 78.1898 |
| 0.5999 | 3.6 | 900 | 0.9542 | 0.7155 | 0.718 | 0.7156 | 0.718 | 4.1891 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 47 | 29.6431 | 78.1898 |
| 0.6436 | 3.8 | 950 | 0.9580 | 0.7085 | 0.7115 | 0.7127 | 0.7115 | 4.1644 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 29.6431 | 78.1898 |
| 0.59 | 4.0 | 1000 | 0.9476 | 0.7209 | 0.723 | 0.7208 | 0.723 | 4.1608 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 47 | 29.6432 | 78.1898 |
| 0.5422 | 4.2 | 1050 | 0.9658 | 0.7201 | 0.7205 | 0.7224 | 0.7205 | 4.1462 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 46 | 31.7150 | 78.1898 |
| 0.5205 | 4.4 | 1100 | 0.9674 | 0.7122 | 0.7155 | 0.7128 | 0.7155 | 4.1598 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 31.7151 | 78.1898 |
| 0.5253 | 4.6 | 1150 | 0.9563 | 0.7175 | 0.7195 | 0.7185 | 0.7195 | 4.1854 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 31.7151 | 78.1898 |
| 0.5109 | 4.8 | 1200 | 0.9621 | 0.7201 | 0.722 | 0.7192 | 0.722 | 4.1908 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 49 | 31.7151 | 78.1898 |
| 0.5216 | 5.0 | 1250 | 0.9635 | 0.7190 | 0.7215 | 0.7189 | 0.7215 | 4.1862 | 83.4807 | 2.0898 | 25.8555 | 39.5640 | 50 | 31.7151 | 78.1898 |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3