---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm_70k_co_vn
  results: []
---

# xlm_70k_co_vn

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1934
- Accuracy: 0.9679
- F1: 0.9680

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2054        | 1.0   | 1719  | 0.1449          | 0.9595   | 0.9599 |
| 0.1226        | 2.0   | 3438  | 0.1261          | 0.9638   | 0.9641 |
| 0.1038        | 3.0   | 5157  | 0.1055          | 0.9682   | 0.9684 |
| 0.0856        | 4.0   | 6876  | 0.1107          | 0.9676   | 0.9678 |
| 0.0732        | 5.0   | 8595  | 0.1240          | 0.9680   | 0.9680 |
| 0.0595        | 6.0   | 10314 | 0.1457          | 0.9679   | 0.9679 |
| 0.0494        | 7.0   | 12033 | 0.1931          | 0.9667   | 0.9668 |
| 0.0425        | 8.0   | 13752 | 0.1772          | 0.9672   | 0.9674 |
| 0.0375        | 9.0   | 15471 | 0.1867          | 0.9683   | 0.9684 |
| 0.034         | 10.0  | 17190 | 0.1934          | 0.9679   | 0.9680 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
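
The card does not include a usage snippet. Because the reported metrics (accuracy and F1) suggest a sequence-classification head, the following is a minimal, hypothetical sketch of loading the checkpoint for inference. The repository id `your-username/xlm_70k_co_vn`, the example input, and the label mapping are placeholders, not details confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id: replace with the actual location of this checkpoint.
model_id = "your-username/xlm_70k_co_vn"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example input sentence."  # the training/evaluation data is not documented above
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```

The hyperparameters listed above map onto Hugging Face `TrainingArguments` roughly as sketched below. The output directory, per-epoch evaluation cadence, and the use of `Trainer` itself are assumptions drawn from the `generated_from_trainer` tag and the results table, not from a published training script.

```python
from transformers import TrainingArguments

# Sketch only: values mirror the "Training hyperparameters" list above;
# everything else (output_dir, eval cadence) is assumed.
training_args = TrainingArguments(
    output_dir="xlm_70k_co_vn",        # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    num_train_epochs=10,
    eval_strategy="epoch",             # per-epoch validation, as in the results table
)
```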