---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-pysentimiento-war-tweets
  results: []
---

# finetuning-pysentimiento-war-tweets

This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on a dataset of 1500 tweets from Peruvian accounts.
It achieves the following results on the evaluation set:
- Loss: 1.7689
- Accuracy: 0.7378
- F1: 0.7456

## Model description

This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) that classifies text into five labels: **pro_russia**, **against_ukraine**, **neutral**, **against_russia**, **pro_ukraine**.

## Intended uses & limitations

This model is intended to classify text (specifically, Spanish-language tweets) according to the stance it expresses on the Russo-Ukrainian war; a usage sketch is given at the end of this card.

## Training and evaluation data

We used an 80/20 train/test split of the aforementioned dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
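
The hyperparameters listed above map directly onto the Hugging Face `TrainingArguments` API. The sketch below is only an illustration of that mapping, not the exact training script; the `output_dir` value is an assumption, and the Adam betas/epsilon are left at the Transformers defaults, which already match the values reported above.

```python
from transformers import TrainingArguments

# Hyperparameters reported in this card, expressed as TrainingArguments.
# The default optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08,
# so they are not set explicitly here.
args = TrainingArguments(
    output_dir="finetuning-pysentimiento-war-tweets",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
)
```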
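
For inference, the model can be loaded with the standard `text-classification` pipeline. This is a minimal sketch: the repository path passed to `model=` is an assumption and should be replaced with the actual model id under which this checkpoint is published.

```python
from transformers import pipeline

# Load the fine-tuned stance classifier.
# NOTE: the repo id below is assumed; substitute the published model path.
classifier = pipeline(
    "text-classification",
    model="finetuning-pysentimiento-war-tweets",
)

# Classify a Spanish tweet; the pipeline returns the predicted label
# (one of the five stance labels) together with its score.
print(classifier("Todo mi apoyo al pueblo ucraniano, la invasión debe terminar."))
```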