---
language:
  - en
license: mit
base_model: microsoft/deberta-v3-large
tags:
  - nycu-112-2-datamining-hw2
  - generated_from_trainer
datasets:
  - DandinPower/review_onlytitleandtext
metrics:
  - accuracy
model-index:
  - name: deberta-v3-large-otat-recommened-hp
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: DandinPower/review_onlytitleandtext
          type: DandinPower/review_onlytitleandtext
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6685714285714286
---

# deberta-v3-large-otat-recommened-hp

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [DandinPower/review_onlytitleandtext](https://huggingface.co/datasets/DandinPower/review_onlytitleandtext) dataset. It achieves the following results on the evaluation set:

- Loss: 0.8169
- Accuracy: 0.6686
- Macro F1: 0.6662
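
The card does not include usage notes, but a fine-tuned DeBERTa-v3 sequence classifier is normally loadable through the standard `transformers` pipeline. A minimal sketch, assuming the checkpoint is published under the repo id `DandinPower/deberta-v3-large-otat-recommened-hp` and that inputs are a review's title and text (both assumptions; the card does not confirm them):

```python
from transformers import pipeline

# Repo id assumed from the model name in this card; adjust if the
# checkpoint lives elsewhere (e.g. a local output directory).
classifier = pipeline(
    "text-classification",
    model="DandinPower/deberta-v3-large-otat-recommened-hp",
)

# The exact title/text formatting used during fine-tuning is not
# documented here; this is an illustrative input only.
print(classifier("Great headphones. Comfortable fit and solid battery life."))
```

Without a documented label mapping, predictions surface as generic `LABEL_N` ids unless the checkpoint's config defines `id2label`.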

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
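
Until this section is filled in, the dataset id in the metadata is the only pointer. A quick way to inspect it with `datasets` (the split and column names are whatever the Hub dataset actually defines; nothing below is confirmed by the card):

```python
from datasets import load_dataset

# The dataset id comes from this card's metadata; its splits and
# columns must be inspected, since they are not documented here.
ds = load_dataset("DandinPower/review_onlytitleandtext")
print(ds)              # shows splits and column names
print(ds["train"][0])  # shows one raw example (assumes a "train" split)
```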

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
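
These values map one-to-one onto `transformers.TrainingArguments`; the sketch below mirrors them. `output_dir` and anything not listed above (evaluation cadence, mixed precision, and so on) are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-large-otat-recommened-hp",  # assumed name
    learning_rate=6e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 x 8 = total train batch size of 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```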

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.7726        | 1.14  | 500  | 0.8107          | 0.6613   | 0.6602   |
| 0.6983        | 2.29  | 1000 | 0.7739          | 0.6690   | 0.6662   |
| 0.6504        | 3.43  | 1500 | 0.7891          | 0.6726   | 0.6725   |
| 0.6067        | 4.57  | 2000 | 0.8169          | 0.6686   | 0.6662   |
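
The accuracy and macro F1 columns above are consistent with a standard `compute_metrics` callback passed to the `Trainer`; the sketch below is one plausible implementation using `scikit-learn`, not the card author's confirmed code:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Return accuracy and macro-averaged F1 for a Trainer evaluation."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "macro_f1": f1_score(labels, predictions, average="macro"),
    }
```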

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2