---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- tweet_sentiment_multilingual
metrics:
- accuracy
- f1
model-index:
- name: scenario-NON-KD-SCR-COPY-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: tweet_sentiment_multilingual
      type: tweet_sentiment_multilingual
      config: all
      split: validation
      args: all
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4972993827160494
    - name: F1
      type: f1
      value: 0.49564146924204383
---

# scenario-NON-KD-SCR-COPY-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tweet_sentiment_multilingual dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7429
- Accuracy: 0.4973
- F1: 0.4956

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 222
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
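The training script itself is not included in this card. As a rough sketch, the hyperparameters above map onto `transformers.TrainingArguments` as shown below; the `output_dir` is a hypothetical placeholder, the 500-step evaluation cadence is read off the results table that follows, and the 3-way label count reflects tweet_sentiment_multilingual's negative/neutral/positive classes. The Adam betas and epsilon listed above are the `TrainingArguments` defaults, so they need no explicit arguments.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

# Hyperparameters copied from the list above; output_dir and the
# eval/logging cadence are assumptions for illustration.
training_args = TrainingArguments(
    output_dir="./scenario-NON-KD-SCR-COPY-CDF-ALL-D2",  # hypothetical path
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=222,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",  # the results table logs eval every 500 steps
    eval_steps=500,
    logging_steps=500,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08 are the defaults,
    # matching the optimizer settings listed above.
)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=3,  # assumption: negative/neutral/positive sentiment classes
)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Dataset wiring is not documented in this card and is omitted here:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)
```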
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.1068        | 1.09  | 500   | 1.0681          | 0.4352   | 0.4321 |
| 0.9081        | 2.17  | 1000  | 1.2631          | 0.5046   | 0.5021 |
| 0.5532        | 3.26  | 1500  | 1.5304          | 0.5108   | 0.5089 |
| 0.2998        | 4.35  | 2000  | 2.0584          | 0.4884   | 0.4858 |
| 0.1717        | 5.43  | 2500  | 2.7362          | 0.5      | 0.4939 |
| 0.1242        | 6.52  | 3000  | 3.0470          | 0.4969   | 0.4938 |
| 0.0874        | 7.61  | 3500  | 2.7990          | 0.5046   | 0.5037 |
| 0.0669        | 8.7   | 4000  | 3.2793          | 0.4942   | 0.4940 |
| 0.056         | 9.78  | 4500  | 3.2094          | 0.5027   | 0.5028 |
| 0.0487        | 10.87 | 5000  | 3.5054          | 0.4992   | 0.4972 |
| 0.0539        | 11.96 | 5500  | 3.2798          | 0.5008   | 0.5003 |
| 0.0317        | 13.04 | 6000  | 3.4251          | 0.5004   | 0.4994 |
| 0.0449        | 14.13 | 6500  | 4.0353          | 0.4969   | 0.4923 |
| 0.0303        | 15.22 | 7000  | 4.3157          | 0.4850   | 0.4733 |
| 0.0285        | 16.3  | 7500  | 3.8740          | 0.4985   | 0.4987 |
| 0.0214        | 17.39 | 8000  | 4.5553          | 0.4842   | 0.4828 |
| 0.0228        | 18.48 | 8500  | 4.7444          | 0.4946   | 0.4903 |
| 0.0177        | 19.57 | 9000  | 4.5373          | 0.4969   | 0.4939 |
| 0.0167        | 20.65 | 9500  | 4.4792          | 0.4927   | 0.4859 |
| 0.0144        | 21.74 | 10000 | 4.6491          | 0.4896   | 0.4897 |
| 0.0164        | 22.83 | 10500 | 4.8310          | 0.4934   | 0.4926 |
| 0.0116        | 23.91 | 11000 | 4.6267          | 0.4996   | 0.4965 |
| 0.0102        | 25.0  | 11500 | 5.0420          | 0.4904   | 0.4808 |
| 0.0053        | 26.09 | 12000 | 5.2202          | 0.4915   | 0.4824 |
| 0.01          | 27.17 | 12500 | 4.8786          | 0.4900   | 0.4868 |
| 0.0076        | 28.26 | 13000 | 4.8830          | 0.4919   | 0.4906 |
| 0.0064        | 29.35 | 13500 | 5.2319          | 0.4934   | 0.4890 |
| 0.0055        | 30.43 | 14000 | 5.4810          | 0.4973   | 0.4953 |
| 0.0057        | 31.52 | 14500 | 5.4109          | 0.5035   | 0.5019 |
| 0.0032        | 32.61 | 15000 | 5.3979          | 0.5054   | 0.5041 |
| 0.0092        | 33.7  | 15500 | 5.3848          | 0.4942   | 0.4940 |
| 0.0053        | 34.78 | 16000 | 5.2937          | 0.5066   | 0.5046 |
| 0.0029        | 35.87 | 16500 | 5.5430          | 0.5012   | 0.4971 |
| 0.0011        | 36.96 | 17000 | 5.6338          | 0.4919   | 0.4905 |
| 0.0027        | 38.04 | 17500 | 5.6234          | 0.4958   | 0.4960 |
| 0.0042        | 39.13 | 18000 | 5.5802          | 0.4988   | 0.4991 |
| 0.0012        | 40.22 | 18500 | 5.6464          | 0.4988   | 0.4993 |
| 0.0037        | 41.3  | 19000 | 5.6227          | 0.4965   | 0.4945 |
| 0.0007        | 42.39 | 19500 | 5.6263          | 0.4958   | 0.4939 |
| 0.0003        | 43.48 | 20000 | 5.6946          | 0.4934   | 0.4937 |
| 0.0016        | 44.57 | 20500 | 5.6654          | 0.4973   | 0.4977 |
| 0.0018        | 45.65 | 21000 | 5.6725          | 0.4965   | 0.4952 |
| 0.0012        | 46.74 | 21500 | 5.6500          | 0.4873   | 0.4869 |
| 0.0008        | 47.83 | 22000 | 5.6626          | 0.4992   | 0.4985 |
| 0.0006        | 48.91 | 22500 | 5.7378          | 0.4985   | 0.4968 |
| 0.0004        | 50.0  | 23000 | 5.7429          | 0.4973   | 0.4956 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
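## How to use

A minimal inference sketch with the `transformers` pipeline. The repo id is assumed from the model name above (replace `your-namespace` with the actual Hub owner), and this card does not document a label mapping:

```python
from transformers import pipeline

# Assumption: repo id derived from the model name above; swap in the
# actual Hub namespace before running.
classifier = pipeline(
    "text-classification",
    model="your-namespace/scenario-NON-KD-SCR-COPY-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual",
)

# Returns a list of {"label": ..., "score": ...} dicts. Without an
# id2label mapping in the config, labels show up as LABEL_0/1/2;
# tweet_sentiment_multilingual's classes are negative/neutral/positive.
print(classifier("This multilingual model is surprisingly easy to use!"))
```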