---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_delta
  results: []
---

# scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_delta

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4388
- Accuracy: 0.4390
- F1: 0.4380

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 11213
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.09  | 250  | 1.1982          | 0.4576   | 0.4521 |
| 0.9123        | 2.17  | 500  | 1.4363          | 0.4722   | 0.4650 |
| 0.9123        | 3.26  | 750  | 1.5173          | 0.4715   | 0.4672 |
| 0.5863        | 4.35  | 1000 | 1.8168          | 0.4545   | 0.4539 |
| 0.5863        | 5.43  | 1250 | 2.0339          | 0.4645   | 0.4643 |
| 0.3146        | 6.52  | 1500 | 2.0526          | 0.4753   | 0.4739 |
| 0.3146        | 7.61  | 1750 | 2.5574          | 0.4560   | 0.4543 |
| 0.1799        | 8.7   | 2000 | 2.7053          | 0.4537   | 0.4542 |
| 0.1799        | 9.78  | 2250 | 3.2816          | 0.4468   | 0.4462 |
| 0.1166        | 10.87 | 2500 | 3.5971          | 0.4414   | 0.4418 |
| 0.1166        | 11.96 | 2750 | 3.5830          | 0.4491   | 0.4477 |
| 0.0894        | 13.04 | 3000 | 3.7770          | 0.4537   | 0.4537 |
| 0.0894        | 14.13 | 3250 | 4.0171          | 0.4475   | 0.4464 |
| 0.0625        | 15.22 | 3500 | 4.3230          | 0.4383   | 0.4350 |
| 0.0625        | 16.3  | 3750 | 4.4061          | 0.4421   | 0.4402 |
| 0.0419        | 17.39 | 4000 | 4.5390          | 0.4468   | 0.4460 |
| 0.0419        | 18.48 | 4250 | 4.7343          | 0.4452   | 0.4445 |
| 0.0328        | 19.57 | 4500 | 4.5586          | 0.4514   | 0.4527 |
| 0.0328        | 20.65 | 4750 | 4.9107          | 0.4437   | 0.4424 |
| 0.0225        | 21.74 | 5000 | 5.1509          | 0.4313   | 0.4276 |
| 0.0225        | 22.83 | 5250 | 4.8634          | 0.4444   | 0.4436 |
| 0.0209        | 23.91 | 5500 | 5.1513          | 0.4352   | 0.4318 |
| 0.0209        | 25.0  | 5750 | 5.0801          | 0.4552   | 0.4555 |
| 0.0117        | 26.09 | 6000 | 5.2642          | 0.4468   | 0.4444 |
| 0.0117        | 27.17 | 6250 | 5.3801          | 0.4367   | 0.4342 |
| 0.0092        | 28.26 | 6500 | 5.4445          | 0.4367   | 0.4343 |
| 0.0092        | 29.35 | 6750 | 5.4388          | 0.4390   | 0.4380 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
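
### Reproducing the training configuration (sketch)

The hyperparameters listed above map directly onto `transformers.TrainingArguments`. Below is a minimal sketch of that mapping, assuming the listed Trainer setup; only the values named in the card are taken from it. The `output_dir`, and the evaluation/logging cadence (inferred from the 250-step evaluation rows and the 500-step training-loss entries in the results table), are assumptions, and all other arguments are left at their defaults.

```python
from transformers import TrainingArguments

# Sketch only: values not listed in the card are assumptions.
training_args = TrainingArguments(
    output_dir="scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_delta",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=11213,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # inferred from the eval rows every 250 steps
    eval_steps=250,               # inferred from the results table
    logging_steps=500,            # inferred from the training-loss column
)
```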
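
## How to use

The card does not include a usage example. Below is a minimal inference sketch with the `transformers` library, assuming the checkpoint is a sequence-classification head on top of xlm-roberta-base; the Hub repository path (shown here without an organization prefix) and the label set are assumptions, since the card does not state them.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical Hub path: the card does not state the owning user or organization.
model_id = "scenario-NON-KD-PR-COPY-CDF-CL-D2_data-cl-cardiff_cl_only_delta"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Replace this with an input sentence."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Label names come from the checkpoint's config; the card does not list them.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```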