---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm_r_base-finetuned_after_mrp-v2-royal-lake-9
  results: []
---
# xlm_r_base-finetuned_after_mrp-v2-royal-lake-9

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4304
- Precision 0: 0.8589
- Precision 1: 0.8054
- Recall 0: 0.8694
- Recall 1: 0.7911
- F1 0: 0.8641
- F1 1: 0.7982
- Precision Weighted: 0.8372
- Recall Weighted: 0.8376
- F1 Weighted: 0.8374
- F1 Macro: 0.8312
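
The metric names above follow a common convention: "Precision 0" / "Recall 0" / "F1 0" are the per-class scores for label 0 (and likewise for label 1), while the weighted and macro variants aggregate across both classes. As a minimal sketch (not the original evaluation code), these can be reproduced with scikit-learn; the `y_true` / `y_pred` arrays below are hypothetical stand-ins for the undocumented evaluation set:

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical binary labels standing in for the undocumented eval set.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]

# Per-class scores: index 0 -> "Precision 0" / "Recall 0" / "F1 0", index 1 -> the "1" metrics.
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1])

# Weighted averages weight each class by its support; macro treats both classes equally.
p_w, r_w, f1_w, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
_, _, f1_macro, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
```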
## Model description
More information needed
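
As a minimal loading sketch only: the task head is not documented in this card, so the snippet below assumes a two-label sequence classification head and a hypothetical Hub id matching the model name.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical Hub id; adjust to the actual repository path.
model_id = "xlm_r_base-finetuned_after_mrp-v2-royal-lake-9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2), one logit per label (0 and 1)
```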
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
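
These settings map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction, not the original training script; the `output_dir` and per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch rows in the results table below).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm_r_base-finetuned_after_mrp-v2-royal-lake-9",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: results table reports one eval per epoch
)
```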
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision 0 | Precision 1 | Recall 0 | Recall 1 | F1 0   | F1 1   | Precision Weighted | Recall Weighted | F1 Weighted | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-----------:|:--------:|:--------:|:------:|:------:|:------------------:|:---------------:|:-----------:|:--------:|
| 0.5351        | 1.0   | 469  | 0.4344          | 0.8486      | 0.7872      | 0.8566   | 0.7764   | 0.8525 | 0.7817 | 0.8237             | 0.824           | 0.8238      | 0.8171   |
| 0.3729        | 2.0   | 938  | 0.4086          | 0.8931      | 0.7449      | 0.7987   | 0.8601   | 0.8432 | 0.7984 | 0.8329             | 0.8236          | 0.8250      | 0.8208   |
| 0.3453        | 3.0   | 1407 | 0.3670          | 0.8665      | 0.7892      | 0.8525   | 0.8079   | 0.8595 | 0.7984 | 0.8351             | 0.8344          | 0.8347      | 0.8290   |
| 0.2512        | 4.0   | 1876 | 0.4304          | 0.8589      | 0.8054      | 0.8694   | 0.7911   | 0.8641 | 0.7982 | 0.8372             | 0.8376          | 0.8374      | 0.8312   |
| 0.214         | 5.0   | 2345 | 0.5356          | 0.8703      | 0.7869      | 0.8492   | 0.8148   | 0.8596 | 0.8006 | 0.8364             | 0.8352          | 0.8356      | 0.8301   |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1