---
language:
- en
license: mit
tags:
- nycu-112-2-datamining-hw2
- generated_from_trainer
base_model: microsoft/deberta-v2-xxlarge
datasets:
- DandinPower/review_onlytitleandtext
metrics:
- accuracy
model-index:
- name: deberta-v2-xxlarge-otat-recommened-hp
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: DandinPower/review_onlytitleandtext
      type: DandinPower/review_onlytitleandtext
    metrics:
    - type: accuracy
      value: 0.6741428571428572
      name: Accuracy
---

# deberta-v2-xxlarge-otat-recommened-hp

This model is a fine-tuned version of [microsoft/deberta-v2-xxlarge](https://huggingface.co/microsoft/deberta-v2-xxlarge) on the DandinPower/review_onlytitleandtext dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7864
- Accuracy: 0.6741
- Macro F1: 0.6719

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch appears at the end of this card):
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.9641        | 0.46  | 200  | 0.8451          | 0.6327   | 0.6341   |
| 0.8263        | 0.91  | 400  | 0.7768          | 0.6651   | 0.6650   |
| 0.7605        | 1.37  | 600  | 0.7842          | 0.6670   | 0.6667   |
| 0.7496        | 1.83  | 800  | 0.7790          | 0.6659   | 0.6650   |
| 0.7034        | 2.29  | 1000 | 0.7738          | 0.6700   | 0.6639   |
| 0.7134        | 2.74  | 1200 | 0.7671          | 0.6694   | 0.6698   |
| 0.6839        | 3.20  | 1400 | 0.7754          | 0.6743   | 0.6770   |
| 0.6699        | 3.66  | 1600 | 0.7853          | 0.6711   | 0.6666   |
| 0.6502        | 4.11  | 1800 | 0.7789          | 0.6710   | 0.6692   |
| 0.6431        | 4.57  | 2000 | 0.7864          | 0.6741   | 0.6719   |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
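
For reference, the hyperparameters listed above map roughly onto the `TrainingArguments` sketch below. This is a reconstruction from the list, not the original training script; the `output_dir` is a placeholder, and dataset loading, tokenization, and the `Trainer` itself are omitted.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the
# hyperparameters listed in this card (reconstructed, not the
# original script; output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="deberta-v2-xxlarge-otat-recommened-hp",
    learning_rate=3e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=64,  # effective train batch size: 1 * 64 = 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```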
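
The card does not yet include a usage snippet, so here is a minimal inference sketch. The repository id below is an assumption inferred from the dataset owner and the model name; point it at wherever this checkpoint is actually hosted.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id (inferred from the dataset owner and model name);
# replace with the actual checkpoint location if it differs.
model_id = "DandinPower/deberta-v2-xxlarge-otat-recommened-hp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a review (title and text concatenated, matching the dataset format).
text = "Great product. Works exactly as described and arrived on time."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # index of the predicted review label
```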