# XLM-R-BASE-Finetune-step2-finetune-and-eval-may31-deft-sea-5-D-06-01-T-07-56
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how such metrics can be computed follows the list):
- Loss: 0.4138
- Precision 0: 0.8686
- Precision 1: 0.7973
- Recall 0: 0.8593
- Recall 1: 0.8099
- F1 0: 0.8639
- F1 1: 0.8035
- Precision Weighted: 0.8397
- Recall Weighted: 0.8392
- F1 Weighted: 0.8394
- Accuracy: 0.8392
- F1 Macro: 0.8337
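The per-class, weighted, and macro scores above follow the naming of a standard `compute_metrics` callback. Below is a minimal sketch of how metrics like these can be computed from `Trainer` predictions with scikit-learn, assuming a binary classification head; the function name and the use of scikit-learn are illustrative assumptions, not the exact script behind this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) as passed by transformers.Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)

    # Per-class precision/recall/F1 for labels 0 and 1
    p, r, f1, _ = precision_recall_fscore_support(
        labels, preds, labels=[0, 1], zero_division=0
    )
    # Weighted and macro averages, matching the reported metric names
    p_w, r_w, f1_w, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    f1_macro = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )[2]

    return {
        "precision_0": p[0], "precision_1": p[1],
        "recall_0": r[0], "recall_1": r[1],
        "f1_0": f1[0], "f1_1": f1[1],
        "precision_weighted": p_w, "recall_weighted": r_w,
        "f1_weighted": f1_w,
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_macro,
    }
```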
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
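A minimal sketch of a `TrainingArguments`/`Trainer` setup matching these hyperparameters, assuming a two-label classification head and pre-tokenized `train_ds`/`eval_ds` datasets (hypothetical placeholders); the actual training script for this run is not included with the card.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

args = TrainingArguments(
    output_dir="xlmr-finetune-step2",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",              # per-epoch evaluation, as in the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,             # hypothetical pre-tokenized datasets
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,    # e.g. the sketch shown earlier
)
trainer.train()
```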
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision 0 | Precision 1 | Recall 0 | Recall 1 | F1 0 | F1 1 | Precision Weighted | Recall Weighted | F1 Weighted | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-----------:|:--------:|:--------:|:------:|:------:|:------------------:|:---------------:|:-----------:|:--------:|:--------:|
| 0.5706        | 1.0   | 235  | 0.4284          | 0.8281      | 0.7829      | 0.8599   | 0.7389   | 0.8437 | 0.7603 | 0.8098             | 0.8108          | 0.8098      | 0.8108   | 0.8020   |
| 0.4239        | 2.0   | 470  | 0.3919          | 0.7991      | 0.8685      | 0.9320   | 0.6571   | 0.8604 | 0.7482 | 0.8273             | 0.8204          | 0.8149      | 0.8204   | 0.8043   |
| 0.3552        | 3.0   | 705  | 0.3774          | 0.8168      | 0.8416      | 0.9098   | 0.7015   | 0.8608 | 0.7652 | 0.8269             | 0.8252          | 0.8220      | 0.8252   | 0.8130   |
| 0.292         | 4.0   | 940  | 0.4014          | 0.8627      | 0.7998      | 0.8633   | 0.7990   | 0.8630 | 0.7994 | 0.8372             | 0.8372          | 0.8372      | 0.8372   | 0.8312   |
| 0.2404        | 5.0   | 1175 | 0.4138          | 0.8686      | 0.7973      | 0.8593   | 0.8099   | 0.8639 | 0.8035 | 0.8397             | 0.8392          | 0.8394      | 0.8392   | 0.8337   |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1