---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-KD-PR-MSV-D2_data-cl-cardiff_cl_only_delta-jason
  results: []
---

# scenario-KD-PR-MSV-D2_data-cl-cardiff_cl_only_delta-jason

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 17.1445
- Accuracy: 0.4005
- F1: 0.3981

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7777
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.09  | 250  | 11.9607         | 0.3603   | 0.3165 |
| 13.9882       | 2.17  | 500  | 11.5568         | 0.3750   | 0.3692 |
| 13.9882       | 3.26  | 750  | 11.3711         | 0.4090   | 0.4091 |
| 11.3923       | 4.35  | 1000 | 11.5830         | 0.3974   | 0.3972 |
| 11.3923       | 5.43  | 1250 | 11.6052         | 0.4074   | 0.4019 |
| 9.8435        | 6.52  | 1500 | 11.6147         | 0.4128   | 0.4100 |
| 9.8435        | 7.61  | 1750 | 12.7909         | 0.3966   | 0.3837 |
| 8.6495        | 8.7   | 2000 | 11.8204         | 0.4128   | 0.4046 |
| 8.6495        | 9.78  | 2250 | 12.4905         | 0.3889   | 0.3824 |
| 7.3906        | 10.87 | 2500 | 13.3672         | 0.4113   | 0.4097 |
| 7.3906        | 11.96 | 2750 | 15.0103         | 0.4020   | 0.3925 |
| 6.4797        | 13.04 | 3000 | 14.2666         | 0.3789   | 0.3791 |
| 6.4797        | 14.13 | 3250 | 14.9095         | 0.3904   | 0.3825 |
| 5.5862        | 15.22 | 3500 | 14.1357         | 0.4028   | 0.3995 |
| 5.5862        | 16.3  | 3750 | 15.0102         | 0.4005   | 0.3997 |
| 5.0029        | 17.39 | 4000 | 15.1376         | 0.4051   | 0.4035 |
| 5.0029        | 18.48 | 4250 | 15.2697         | 0.3958   | 0.3935 |
| 4.4091        | 19.57 | 4500 | 15.8735         | 0.3935   | 0.3905 |
| 4.4091        | 20.65 | 4750 | 15.5799         | 0.4028   | 0.4009 |
| 3.9734        | 21.74 | 5000 | 16.0068         | 0.4151   | 0.4137 |
| 3.9734        | 22.83 | 5250 | 15.9701         | 0.3796   | 0.3797 |
| 3.569         | 23.91 | 5500 | 16.0636         | 0.3850   | 0.3783 |
| 3.569         | 25.0  | 5750 | 16.2960         | 0.3904   | 0.3878 |
| 3.2517        | 26.09 | 6000 | 16.5689         | 0.3897   | 0.3850 |
| 3.2517        | 27.17 | 6250 | 17.1440         | 0.3858   | 0.3843 |
| 3.1047        | 28.26 | 6500 | 17.0026         | 0.4074   | 0.4028 |
| 3.1047        | 29.35 | 6750 | 17.1445         | 0.4005   | 0.3981 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
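
### Training configuration (sketch)

The hyperparameters listed above follow the standard 🤗 Transformers `Trainer` conventions. Below is a minimal `TrainingArguments` sketch that mirrors the reported values; the dataset preparation, the training objective (the model name suggests knowledge distillation, which is not documented here), and the `output_dir` are assumptions, not part of the original training script.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters reported in this card.
# output_dir is an assumed placeholder; the actual training pipeline
# (data loading, distillation loss, Trainer subclass) is not documented.
training_args = TrainingArguments(
    output_dir="scenario-KD-PR-MSV-D2_data-cl-cardiff_cl_only_delta-jason",
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=7777,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```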
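
## How to use

No usage instructions were provided with this card. The sketch below shows one way to load the checkpoint for inference, assuming it carries a sequence-classification head (suggested by the accuracy/F1 metrics); the repo id is a placeholder and the label set is whatever was saved in the checkpoint's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; substitute the actual Hub path or local directory
# of this checkpoint.
model_id = "<namespace>/scenario-KD-PR-MSV-D2_data-cl-cardiff_cl_only_delta-jason"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example input sentence."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its label, if labels were saved.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```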