---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_gamma
  results: []
---

# scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_gamma

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 5.4088
- Accuracy: 0.4452
- F1: 0.4432
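
A minimal usage sketch, assuming this checkpoint is a sequence-classification model published on the Hugging Face Hub under this repository's name (the repository path, example text, and label handling below are illustrative assumptions):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: replace with the full "<namespace>/<repo-name>" path of this model on the Hub.
model_id = "scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_gamma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Tokenize an example sentence and take the highest-scoring class.
inputs = tokenizer("I really enjoyed this!", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()

# The human-readable label depends on the id2label mapping saved with the checkpoint.
print(predicted_id, model.config.id2label.get(predicted_id, str(predicted_id)))
```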

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 11423
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
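
These settings would roughly correspond to the following `TrainingArguments` sketch (argument names match Transformers 4.33; the output directory, single-device batch-size mapping, and evaluation cadence are assumptions rather than values taken from the original run):

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; everything marked "assumption" was not stated in this card.
training_args = TrainingArguments(
    output_dir="./results",          # assumption
    learning_rate=5e-05,
    per_device_train_batch_size=32,  # assumption: listed batch size, single device
    per_device_eval_batch_size=32,   # assumption: listed batch size, single device
    seed=11423,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",     # assumption: the table below reports eval every 250 steps
    eval_steps=250,                  # assumption
)
```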

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.09  | 250  | 1.1861          | 0.4599   | 0.4461 |
| 0.8893        | 2.17  | 500  | 1.2483          | 0.4753   | 0.4682 |
| 0.8893        | 3.26  | 750  | 1.4640          | 0.4877   | 0.4872 |
| 0.5435        | 4.35  | 1000 | 1.9901          | 0.4529   | 0.4440 |
| 0.5435        | 5.43  | 1250 | 2.1858          | 0.4398   | 0.4357 |
| 0.2767        | 6.52  | 1500 | 2.2484          | 0.4653   | 0.4643 |
| 0.2767        | 7.61  | 1750 | 2.7287          | 0.4653   | 0.4642 |
| 0.1584        | 8.7   | 2000 | 2.7996          | 0.4637   | 0.4616 |
| 0.1584        | 9.78  | 2250 | 3.2599          | 0.4684   | 0.4684 |
| 0.1119        | 10.87 | 2500 | 3.7690          | 0.4344   | 0.4244 |
| 0.1119        | 11.96 | 2750 | 3.5578          | 0.4591   | 0.4584 |
| 0.0771        | 13.04 | 3000 | 3.9089          | 0.4483   | 0.4490 |
| 0.0771        | 14.13 | 3250 | 4.1349          | 0.4637   | 0.4587 |
| 0.054         | 15.22 | 3500 | 4.4418          | 0.4506   | 0.4435 |
| 0.054         | 16.3  | 3750 | 4.4987          | 0.4522   | 0.4511 |
| 0.04          | 17.39 | 4000 | 4.5234          | 0.4514   | 0.4511 |
| 0.04          | 18.48 | 4250 | 4.7455          | 0.4529   | 0.4517 |
| 0.0241        | 19.57 | 4500 | 5.0606          | 0.4329   | 0.4238 |
| 0.0241        | 20.65 | 4750 | 5.0820          | 0.4414   | 0.4394 |
| 0.0243        | 21.74 | 5000 | 5.2753          | 0.4360   | 0.4304 |
| 0.0243        | 22.83 | 5250 | 5.1224          | 0.4660   | 0.4666 |
| 0.0155        | 23.91 | 5500 | 5.2712          | 0.4437   | 0.4407 |
| 0.0155        | 25.0  | 5750 | 5.3846          | 0.4421   | 0.4393 |
| 0.0156        | 26.09 | 6000 | 5.4060          | 0.4398   | 0.4352 |
| 0.0156        | 27.17 | 6250 | 5.3914          | 0.4383   | 0.4344 |
| 0.0105        | 28.26 | 6500 | 5.3427          | 0.4421   | 0.4413 |
| 0.0105        | 29.35 | 6750 | 5.4088          | 0.4452   | 0.4432 |
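
For reference, accuracy and F1 of this kind can be computed with a `Trainer`-style metrics hook like the sketch below (the averaging mode behind the reported F1 is not documented here, so `macro` is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Trainer-compatible hook returning accuracy and (assumed macro-averaged) F1."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="macro"),
    }
```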

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3