---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xnli_en_adalora_alpha_32_drop_02_rank_16
  results: []
---

# xnli_en_adalora_alpha_32_drop_02_rank_16

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4329
- Accuracy: 0.8317

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5524        | 1.0   | 12272  | 0.5105          | 0.7980   |
| 0.5069        | 2.0   | 24544  | 0.4791          | 0.8153   |
| 0.484         | 3.0   | 36816  | 0.5075          | 0.8000   |
| 0.4782        | 4.0   | 49088  | 0.4453          | 0.8249   |
| 0.4577        | 5.0   | 61360  | 0.4433          | 0.8305   |
| 0.448         | 6.0   | 73632  | 0.4361          | 0.8277   |
| 0.4223        | 7.0   | 85904  | 0.4379          | 0.8261   |
| 0.428         | 8.0   | 98176  | 0.4338          | 0.8361   |
| 0.4224        | 9.0   | 110448 | 0.4337          | 0.8353   |
| 0.4105        | 10.0  | 122720 | 0.4329          | 0.8317   |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
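
### Training configuration (sketch)

The training script is not included in this card. The sketch below is a hypothetical reconstruction inferred from the model name (AdaLoRA, alpha 32, dropout 0.2, rank 16) and the hyperparameters listed above, using the PEFT and Transformers libraries; the mapping of "rank_16" to `init_r`, the 3-label XNLI head, and the output directory name are assumptions.

```python
# Hypothetical reconstruction of the training setup, inferred from the model name
# (AdaLoRA, alpha=32, dropout=0.2, rank=16) and the hyperparameters listed in this card.
# The actual training script is not published; treat this as a sketch, not ground truth.
from transformers import AutoModelForSequenceClassification, TrainingArguments
from peft import AdaLoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=3,  # XNLI: entailment / neutral / contradiction (assumed)
)

peft_config = AdaLoraConfig(
    task_type="SEQ_CLS",
    init_r=16,            # "rank_16" in the model name (assumed to map to the initial rank)
    lora_alpha=32,        # "alpha_32"
    lora_dropout=0.2,     # "drop_02"
    total_step=122720,    # 10 epochs x 12272 steps/epoch, from the results table
)
model = get_peft_model(base, peft_config)

# Mirrors the hyperparameters reported above; Adam betas/epsilon match the
# Transformers AdamW defaults.
args = TrainingArguments(
    output_dir="xnli_en_adalora_alpha_32_drop_02_rank_16",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
)
```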
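
### Inference example (sketch)

A minimal inference sketch, assuming this repository hosts a PEFT adapter that is applied on top of xlm-roberta-base and that the adapter id matches the model name. If the weights were instead merged and saved as a full model, load the repository directly with `AutoModelForSequenceClassification` and skip the `PeftModel` step. The example sentences and the label-id interpretation are illustrative only.

```python
# Minimal inference sketch (assumes a PEFT adapter on top of xlm-roberta-base).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
base = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)
# Local path or Hub id of this adapter (assumed to match the model name).
model = PeftModel.from_pretrained(base, "xnli_en_adalora_alpha_32_drop_02_rank_16")
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted label id; the id-to-label mapping depends on the training setup
```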