---
license: mit
language:
- multilingual
tags:
- generated_from_trainer
- xnli
datasets:
- xglue
metrics:
- accuracy
model-index:
- name: xlm-v-base-finetuned-xglue-xnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: xglue
      type: xglue
      config: xnli
      split: validation.en+validation.ar+validation.bg+validation.de+validation.el+validation.es+validation.fr+validation.hi+validation.ru+validation.sw+validation.th+validation.tr+validation.ur+validation.vi+validation.zh
      args: xnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7402677376171352
---

# XLM-V (base) fine-tuned on XNLI

This model is a fine-tuned version of [XLM-V (base)](https://huggingface.co/facebook/xlm-v-base) on the XNLI task of the XGLUE benchmark. It achieves the following results on the evaluation set:
- Loss: 0.6511
- Accuracy: 0.7403

A minimal inference sketch is given under "How to use" below.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on the `xnli` configuration of the `xglue` dataset. The accuracy above is measured on the concatenation of the validation splits of all fifteen XNLI languages (en, ar, bg, de, el, es, fr, hi, ru, sw, th, tr, ur, vi, zh), as listed in the metadata; see "Loading the evaluation data" below.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see "Reproducing the training setup" below for how they map onto `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0994        | 0.08  | 1000  | 1.0966          | 0.3697   |
| 1.0221        | 0.16  | 2000  | 1.0765          | 0.4560   |
| 0.8437        | 0.24  | 3000  | 0.8472          | 0.6179   |
| 0.6997        | 0.33  | 4000  | 0.7650          | 0.6804   |
| 0.6304        | 0.41  | 5000  | 0.7227          | 0.7007   |
| 0.5972        | 0.49  | 6000  | 0.7430          | 0.6977   |
| 0.5886        | 0.57  | 7000  | 0.7365          | 0.7066   |
| 0.5585        | 0.65  | 8000  | 0.6819          | 0.7223   |
| 0.5464        | 0.73  | 9000  | 0.7222          | 0.7046   |
| 0.5289        | 0.81  | 10000 | 0.7290          | 0.7054   |
| 0.5298        | 0.9   | 11000 | 0.6824          | 0.7221   |
| 0.5241        | 0.98  | 12000 | 0.6650          | 0.7268   |
| 0.4806        | 1.06  | 13000 | 0.6861          | 0.7308   |
| 0.4715        | 1.14  | 14000 | 0.6619          | 0.7304   |
| 0.4645        | 1.22  | 15000 | 0.6656          | 0.7284   |
| 0.4443        | 1.3   | 16000 | 0.7026          | 0.7270   |
| 0.4582        | 1.39  | 17000 | 0.7055          | 0.7225   |
| 0.4456        | 1.47  | 18000 | 0.6592          | 0.7361   |
| 0.44          | 1.55  | 19000 | 0.6816          | 0.7329   |
| 0.4419        | 1.63  | 20000 | 0.6772          | 0.7357   |
| 0.4403        | 1.71  | 21000 | 0.6745          | 0.7319   |
| 0.4348        | 1.79  | 22000 | 0.6678          | 0.7338   |
| 0.4355        | 1.87  | 23000 | 0.6614          | 0.7365   |
| 0.4295        | 1.96  | 24000 | 0.6511          | 0.7403   |

### Framework versions

- Transformers 4.26.0
- PyTorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
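
## How to use

A minimal inference sketch, assuming the standard `transformers` sequence-classification API. The repo id and the example sentences are illustrative, and the id-to-label mapping should be checked against this checkpoint's `config.json` (the usual XNLI convention is entailment / neutral / contradiction):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id; substitute the actual Hub path of this checkpoint.
model_id = "xlm-v-base-finetuned-xglue-xnli"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Two men are playing football in the park."
hypothesis = "Some people are doing sport outdoors."

# XNLI inputs are (premise, hypothesis) pairs joined by the model's separator token.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1).squeeze()
pred_id = int(probs.argmax())
# id2label is read from the checkpoint config; verify it matches the XNLI labels.
print(model.config.id2label[pred_id], float(probs[pred_id]))
```

Because the XNLI labels are language-agnostic, the same call works for premise/hypothesis pairs in any of the fifteen evaluation languages.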
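
## Loading the evaluation data

The reported accuracy is computed over the concatenated per-language validation splits. A loading sketch with `datasets`, assuming the `xglue` loader exposes splits named `validation.<lang>` as in the metadata (recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets such as `xglue`):

```python
from datasets import concatenate_datasets, load_dataset

# The fifteen XNLI languages from the metadata's `split` field.
LANGS = ["en", "ar", "bg", "de", "el", "es", "fr", "hi",
         "ru", "sw", "th", "tr", "ur", "vi", "zh"]

xnli = load_dataset("xglue", "xnli")

# Rebuild the evaluation set: validation.en + validation.ar + ... + validation.zh
validation = concatenate_datasets([xnli[f"validation.{lang}"] for lang in LANGS])
print(validation)
```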
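
## Reproducing the training setup

A sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`, assuming a standard `Trainer` fine-tuning loop. The original training script is not part of this card, and `output_dir` plus the 1000-step evaluation cadence (inferred from the results table) are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-v-base-finetuned-xglue-xnli",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=1000,
    logging_steps=1000,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
# optimizer configuration, so it needs no explicit arguments here.
```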