---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
base_model: xlm-roberta-large
model-index:
- name: fine-tuned-NLI-indonesian-with-xlm-roberta-large
  results: []
---

# fine-tuned-NLI-indonesian-with-xlm-roberta-large

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an Indonesian natural language inference (NLI) dataset (the auto-generated card did not record the dataset name).
It achieves the following results on the evaluation set:
- Loss: 0.2112
- Accuracy: 0.9463
- F1: 0.9463

## Model description

This is xlm-roberta-large fine-tuned as a sequence-pair classifier for Indonesian NLI. Beyond what the model name and training configuration imply, more information is needed.

## Intended uses & limitations

More information needed. A minimal inference sketch is given at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch mirroring these values appears at the end of this card):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7348        | 0.49  | 72   | 0.6119          | 0.6584   | 0.6544 |
| 0.5955        | 0.99  | 144  | 0.2496          | 0.8959   | 0.8959 |
| 0.2352        | 1.49  | 216  | 0.1968          | 0.9169   | 0.9169 |
| 0.1987        | 1.98  | 288  | 0.1773          | 0.9267   | 0.9265 |
| 0.1315        | 2.48  | 360  | 0.1585          | 0.9437   | 0.9437 |
| 0.1206        | 2.97  | 432  | 0.1540          | 0.9411   | 0.9411 |
| 0.0821        | 3.47  | 504  | 0.1861          | 0.9470   | 0.9470 |
| 0.0782        | 3.97  | 576  | 0.1791          | 0.9503   | 0.9503 |
| 0.0743        | 4.47  | 648  | 0.1801          | 0.9476   | 0.9476 |
| 0.0691        | 4.96  | 720  | 0.1902          | 0.9463   | 0.9463 |
| 0.0569        | 5.46  | 792  | 0.2112          | 0.9463   | 0.9463 |

### Framework versions

- Transformers 4.26.1
- PyTorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
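
## How to use

A minimal inference sketch. Assumptions not stated in this card: the model id below (use the actual Hub id or a local checkpoint path), the example premise/hypothesis pair, and that the checkpoint's `id2label` was customized during fine-tuning (if it was not, predictions will print as `LABEL_0`/`LABEL_1`/`LABEL_2`).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical model id; replace with the real Hub id or a local path.
model_id = "fine-tuned-NLI-indonesian-with-xlm-roberta-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI takes a premise-hypothesis pair, encoded as a single sequence pair.
premise = "Budi sedang membaca buku di perpustakaan."   # example premise
hypothesis = "Budi berada di dalam sebuah gedung."      # example hypothesis

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label name.
probs = logits.softmax(dim=-1).squeeze()
pred = model.config.id2label[int(probs.argmax())]
print(pred, probs.tolist())
```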
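
## Reproducing the training setup

A hedged `TrainingArguments` sketch that mirrors the hyperparameters listed above, for Transformers 4.26.1. The dataset loading, tokenization, and metric wiring are omitted because the card does not specify them; the evaluation cadence is inferred from the results table (one eval every 72 optimizer steps), not recorded in the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fine-tuned-NLI-indonesian-with-xlm-roberta-large",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # effective train batch size: 8 * 16 = 128
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # assumption, inferred from the results table
    eval_steps=72,                   # evals logged at steps 72, 144, ..., 792
)
```

Note that the results table ends near epoch 5.5 of the configured 10 epochs, so training appears to have stopped early; the card does not record why (early stopping on the validation loss, which bottoms out around epoch 2.5-3, is one possibility).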