---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 16class_11k_newtest_xlm_roberta_base_25nov_v2_8epoch
  results: []
---

# 16class_11k_newtest_xlm_roberta_base_25nov_v2_8epoch

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1142
- Accuracy: 0.9706

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7244        | 1.0   | 826  | 0.6693          | 0.8036   |
| 0.6119        | 2.0   | 1652 | 0.4189          | 0.8734   |
| 0.5004        | 3.0   | 2478 | 0.3088          | 0.9141   |
| 0.3626        | 4.0   | 3304 | 0.2287          | 0.9339   |
| 0.2776        | 5.0   | 4130 | 0.1735          | 0.9513   |
| 0.2445        | 6.0   | 4956 | 0.1446          | 0.9606   |
| 0.1944        | 7.0   | 5782 | 0.1192          | 0.9682   |
| 0.1633        | 8.0   | 6608 | 0.1142          | 0.9706   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
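
## How to use

The card does not yet document intended usage, so below is a minimal inference sketch, assuming the checkpoint is a 16-way sequence classifier (inferred from the model name) published on the Hub; the repo id shown is a placeholder and the label names in `id2label` may simply be `LABEL_0` through `LABEL_15`.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
model_id = "your-username/16class_11k_newtest_xlm_roberta_base_25nov_v2_8epoch"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# XLM-RoBERTa is multilingual, so the input text need not be English.
inputs = tokenizer("Example input text", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()
# Label names are not documented in this card; check model.config.id2label.
print(model.config.id2label[predicted_class_id])
```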