---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xnli_en_lora_alpha_32_drop_02_rank_16
  results: []
---

# xnli_en_lora_alpha_32_drop_02_rank_16

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the English portion of the XNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4319
- Accuracy: 0.8450

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5318        | 1.0   | 12272  | 0.4882          | 0.8032   |
| 0.4972        | 2.0   | 24544  | 0.4556          | 0.8277   |
| 0.4683        | 3.0   | 36816  | 0.5099          | 0.7928   |
| 0.4599        | 4.0   | 49088  | 0.4357          | 0.8325   |
| 0.4332        | 5.0   | 61360  | 0.4250          | 0.8402   |
| 0.4159        | 6.0   | 73632  | 0.4293          | 0.8333   |
| 0.3916        | 7.0   | 85904  | 0.4273          | 0.8382   |
| 0.3895        | 8.0   | 98176  | 0.4222          | 0.8410   |
| 0.3813        | 9.0   | 110448 | 0.4269          | 0.8454   |
| 0.3603        | 10.0  | 122720 | 0.4319          | 0.8450   |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
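
### Training configuration sketch

The card does not include the training script, so the following is only a sketch of how the hyperparameters listed above could map onto a PEFT + `Trainer` setup. The `LoraConfig` values are read off the model name (`alpha_32`, `drop_02`, `rank_16`); `target_modules` and every setting not listed in the card are assumptions, not the exact configuration used.

```python
# Sketch of the reported hyperparameters as a PEFT + Trainer configuration.
# LoRA values are inferred from the model name (alpha_32, drop_02, rank_16);
# target_modules and all unlisted settings are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

base = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3
)

lora_config = LoraConfig(
    r=16,                      # rank_16
    lora_alpha=32,             # alpha_32
    lora_dropout=0.2,          # drop_02
    target_modules=["query", "value"],  # assumption: common choice for RoBERTa-style models
    task_type="SEQ_CLS",
)
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="xnli_en_lora_alpha_32_drop_02_rank_16",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
# model and training_args would then be passed to transformers.Trainer
# together with the tokenized XNLI (English) splits.
```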
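
### Inference example

Since the intended-uses section is empty, here is a minimal inference sketch for three-way NLI classification. It assumes the repository holds a full sequence-classification checkpoint; if only LoRA adapter weights are published, the adapters would instead need to be attached to `xlm-roberta-base` with `peft.PeftModel.from_pretrained`. The label mapping should be taken from `model.config.id2label` rather than assumed.

```python
# Minimal inference sketch (assumption: the repo contains a full
# sequence-classification checkpoint, not adapter-only weights).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "xnli_en_lora_alpha_32_drop_02_rank_16"  # local path or Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# XNLI labels are entailment / neutral / contradiction; read the exact
# id-to-label mapping from the model config instead of hard-coding it.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```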