---
license: mit
base_model: microsoft/deberta-v3-base
tags:
  - multi-label text classification
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: deberta_classifier
    results: []
---

# deberta_classifier

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0183
- Accuracy: 0.9955
- F1: 0.6062
- Precision: 0.8225
- Recall: 0.4799
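
The card does not include usage code, so here is a minimal inference sketch for a multi-label classifier built on this checkpoint. The model path, example text, and the 0.5 decision threshold are illustrative assumptions, not values taken from the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint is available locally or on the Hub under this path.
model_path = "./deberta_classifier"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

text = "Example input text to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: one sigmoid per label, then threshold.
# The 0.5 threshold is an assumption, not stated in the model card.
probs = torch.sigmoid(logits)[0]
predicted = [
    model.config.id2label[i]
    for i, p in enumerate(probs)
    if p > 0.5
]
print(predicted)
```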

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
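
For readers who want to reproduce this setup, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below. The `output_dir` and the evaluation cadence (every 100 steps, consistent with the results table) are assumptions; model loading, dataset preparation, and the `Trainer` call are omitted:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir and eval_steps are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="deberta_classifier",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",           # evaluate periodically during training
    eval_steps=100,
)
```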

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6159        | 0.1169 | 100  | 0.5955          | 0.7621   | 0.0288 | 0.0148    | 0.4839 |
| 0.3536        | 0.2338 | 200  | 0.3085          | 0.9753   | 0.1645 | 0.1091    | 0.3341 |
| 0.1166        | 0.3507 | 300  | 0.0917          | 0.9931   | 0.4124 | 0.5429    | 0.3325 |
| 0.0456        | 0.4676 | 400  | 0.0375          | 0.9931   | 0.4124 | 0.5429    | 0.3325 |
| 0.0308        | 0.5845 | 500  | 0.0270          | 0.9931   | 0.4124 | 0.5429    | 0.3325 |
| 0.0249        | 0.7013 | 600  | 0.0234          | 0.9942   | 0.4459 | 0.7407    | 0.3189 |
| 0.0231        | 0.8182 | 700  | 0.0211          | 0.9953   | 0.5983 | 0.7970    | 0.4789 |
| 0.0213        | 0.9351 | 800  | 0.0196          | 0.9953   | 0.5989 | 0.7998    | 0.4787 |
| 0.0197        | 1.0520 | 900  | 0.0187          | 0.9954   | 0.6029 | 0.8168    | 0.4778 |
| 0.0205        | 1.1689 | 1000 | 0.0183          | 0.9955   | 0.6062 | 0.8225    | 0.4799 |
| 0.017         | 1.2858 | 1100 | 0.0175          | 0.9959   | 0.6610 | 0.8426    | 0.5437 |
| 0.018         | 1.4027 | 1200 | 0.0170          | 0.9960   | 0.6653 | 0.8685    | 0.5392 |
| 0.0177        | 1.5196 | 1300 | 0.0165          | 0.9961   | 0.6722 | 0.8732    | 0.5464 |
| 0.0189        | 1.6365 | 1400 | 0.0162          | 0.9962   | 0.6752 | 0.8910    | 0.5435 |
| 0.0179        | 1.7534 | 1500 | 0.0159          | 0.9964   | 0.6898 | 0.9151    | 0.5535 |
| 0.0169        | 1.8703 | 1600 | 0.0158          | 0.9964   | 0.6928 | 0.9030    | 0.5620 |
| 0.0172        | 1.9871 | 1700 | 0.0156          | 0.9964   | 0.6909 | 0.9130    | 0.5557 |
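
The card does not state how these metrics were computed. A common recipe for multi-label evaluation, and one consistent with accuracy being much higher than F1, is element-wise label accuracy plus micro-averaged F1/precision/recall over sigmoid outputs thresholded at 0.5; the `compute_metrics` sketch below illustrates that assumption:

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    """Multi-label metrics sketch: sigmoid + 0.5 threshold, micro averaging.
    The threshold and averaging mode are assumptions, not stated in the card."""
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))          # sigmoid over raw logits
    preds = (probs > 0.5).astype(int)
    labels = labels.astype(int)
    return {
        "accuracy": (preds == labels).mean(),  # element-wise label accuracy
        "f1": f1_score(labels, preds, average="micro", zero_division=0),
        "precision": precision_score(labels, preds, average="micro", zero_division=0),
        "recall": recall_score(labels, preds, average="micro", zero_division=0),
    }
```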

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1