---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xnli_en_lora_alpha_64_drop_0.1_rank_32_seed_456
  results: []
---

# xnli_en_lora_alpha_64_drop_0.1_rank_32_seed_456

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), most likely on the English portion of the XNLI dataset (inferred from the model name; the training data is not otherwise documented).
It achieves the following results on the evaluation set:
- Loss: 0.4781
- Accuracy: 0.8361

## Model description

The model name suggests a LoRA fine-tune with rank 32, alpha 64, dropout 0.1, and seed 456; beyond that, more information is needed.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 456
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5401        | 1.0   | 12272  | 0.4940          | 0.8012   |
| 0.5153        | 2.0   | 24544  | 0.4847          | 0.8185   |
| 0.4912        | 3.0   | 36816  | 0.4560          | 0.8281   |
| 0.464         | 4.0   | 49088  | 0.4341          | 0.8305   |
| 0.4353        | 5.0   | 61360  | 0.4341          | 0.8293   |
| 0.4222        | 6.0   | 73632  | 0.4421          | 0.8353   |
| 0.4029        | 7.0   | 85904  | 0.4692          | 0.8181   |
| 0.383         | 8.0   | 98176  | 0.4453          | 0.8289   |
| 0.3747        | 9.0   | 110448 | 0.4696          | 0.8273   |
| 0.358         | 10.0  | 122720 | 0.4697          | 0.8217   |
| 0.3303        | 11.0  | 134992 | 0.4648          | 0.8317   |
| 0.3217        | 12.0  | 147264 | 0.4618          | 0.8386   |
| 0.31          | 13.0  | 159536 | 0.4796          | 0.8333   |
| 0.2831        | 14.0  | 171808 | 0.4702          | 0.8329   |
| 0.2847        | 15.0  | 184080 | 0.4781          | 0.8361   |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
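
## Configuration and usage sketches

The snippets below are illustrative reconstructions, not configuration recovered from the training run. First, a minimal PEFT `LoraConfig` matching the hyperparameters encoded in the model name (rank 32, alpha 64, dropout 0.1); the target modules are an assumption, as the card does not record them:

```python
from peft import LoraConfig, TaskType

# Adapter settings implied by the model name. target_modules is a guess:
# query/value projections are a common LoRA target for RoBERTa-style models.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
```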
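
The training hyperparameters above translate into roughly the following `TrainingArguments`; `output_dir` is a placeholder, the per-epoch evaluation is inferred from the results table, and anything not listed in the card keeps its Transformers default (which already matches the stated Adam betas and epsilon):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xnli_en_lora_alpha_64_drop_0.1_rank_32_seed_456",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=456,
    num_train_epochs=15.0,
    lr_scheduler_type="linear",
    eval_strategy="epoch",  # inferred from the per-epoch rows in the results table
)
```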
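
Finally, a minimal inference sketch that loads the base model and applies the adapter with PEFT. The repository id is a placeholder for wherever these adapter weights are hosted, and the label order assumes the standard XNLI convention (entailment / neutral / contradiction):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ADAPTER_ID = "your-namespace/xnli_en_lora_alpha_64_drop_0.1_rank_32_seed_456"  # placeholder

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
base = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

labels = ["entailment", "neutral", "contradiction"]  # assumed XNLI label order
print(labels[logits.argmax(dim=-1).item()])
```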