---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- tweet_sentiment_multilingual
metrics:
- accuracy
- f1
model-index:
- name: scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: tweet_sentiment_multilingual
      type: tweet_sentiment_multilingual
      config: all
      split: validation
      args: all
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6439043209876543
    - name: F1
      type: f1
      value: 0.6443757148090576
---

# scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tweet_sentiment_multilingual dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6822
- Accuracy: 0.6439
- F1: 0.6444

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.9412        | 1.09  | 500   | 0.8062          | 0.6389   | 0.6335 |
| 0.7943        | 2.17  | 1000  | 0.8448          | 0.6451   | 0.6394 |
| 0.7026        | 3.26  | 1500  | 0.8509          | 0.6497   | 0.6438 |
| 0.6019        | 4.35  | 2000  | 0.8999          | 0.6478   | 0.6468 |
| 0.5379        | 5.43  | 2500  | 0.9424          | 0.6312   | 0.6222 |
| 0.4635        | 6.52  | 3000  | 1.0401          | 0.6431   | 0.6439 |
| 0.3985        | 7.61  | 3500  | 1.0584          | 0.6397   | 0.6390 |
| 0.3506        | 8.7   | 4000  | 1.1607          | 0.6443   | 0.6432 |
| 0.3105        | 9.78  | 4500  | 1.1806          | 0.6408   | 0.6423 |
| 0.2712        | 10.87 | 5000  | 1.3112          | 0.6316   | 0.6304 |
| 0.2361        | 11.96 | 5500  | 1.3772          | 0.6466   | 0.6454 |
| 0.2111        | 13.04 | 6000  | 1.4492          | 0.6385   | 0.6396 |
| 0.1885        | 14.13 | 6500  | 1.6604          | 0.6335   | 0.6347 |
| 0.1658        | 15.22 | 7000  | 1.7153          | 0.6358   | 0.6353 |
| 0.1501        | 16.3  | 7500  | 1.7849          | 0.6412   | 0.6427 |
| 0.135         | 17.39 | 8000  | 1.9749          | 0.6416   | 0.6394 |
| 0.1217        | 18.48 | 8500  | 2.0530          | 0.6439   | 0.6431 |
| 0.1112        | 19.57 | 9000  | 2.1378          | 0.6439   | 0.6448 |
| 0.1018        | 20.65 | 9500  | 2.2656          | 0.6393   | 0.6390 |
| 0.0885        | 21.74 | 10000 | 2.3568          | 0.6431   | 0.6438 |
| 0.0897        | 22.83 | 10500 | 2.3852          | 0.6435   | 0.6446 |
| 0.0854        | 23.91 | 11000 | 2.4019          | 0.6327   | 0.6329 |
| 0.0734        | 25.0  | 11500 | 2.5260          | 0.6331   | 0.6333 |
| 0.067         | 26.09 | 12000 | 2.5368          | 0.6470   | 0.6465 |
| 0.0546        | 27.17 | 12500 | 2.6255          | 0.6431   | 0.6441 |
| 0.0581        | 28.26 | 13000 | 2.6467          | 0.6458   | 0.6456 |
| 0.0564        | 29.35 | 13500 | 2.6822          | 0.6439   | 0.6444 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
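
## Example usage

A minimal inference sketch. The Hub repo id below is a placeholder (the card does not state where the checkpoint is published), and the id-to-label order is assumed to follow the tweet_sentiment_multilingual convention (0 = negative, 1 = neutral, 2 = positive); verify both against this model's `config.json` before relying on them.

```python
# Minimal inference sketch. The repo id is a placeholder and the label order
# is an assumption based on the tweet_sentiment_multilingual convention;
# check this model's config.json for the actual id2label mapping.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

labels = ["negative", "neutral", "positive"]  # assumed id -> label order

text = "I love this phone, the battery lasts forever!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])
```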
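
## Reproducing the training setup

The hyperparameters above map onto a standard `Trainer` run; the sketch below is an approximation under assumptions, not the original training script. The dataset id, the `text`/`label` column names, and the 500-step evaluation cadence (inferred from the results table) are assumptions; the Adam betas and epsilon listed above match the `TrainingArguments` defaults, so they are not set explicitly.

```python
# Approximate reproduction of the training setup described in this card.
# Dataset id, column names, and eval cadence are assumptions; the exact
# training script is not part of the card.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("cardiffnlp/tweet_sentiment_multilingual", "all")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# Three sentiment classes: negative, neutral, positive.
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3
)

args = TrainingArguments(
    output_dir="scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_a",
    learning_rate=1e-5,
    per_device_train_batch_size=32,  # assumes a single device, matching batch size 32
    per_device_eval_batch_size=32,
    seed=1234,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",  # evaluate every 500 steps, as in the results table
    eval_steps=500,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```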