---
license: apache-2.0
datasets:
- EXIST Dataset
metrics:
- accuracy
model-index:
- name: twitter_sexismo-finetuned-exist2021
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: EXIST Dataset
      type: EXIST Dataset
      args: es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.80
---

# twitter_sexismo-finetuned-exist2021

This model is a fine-tuned version of [pysentimiento/robertuito-hate-speech](https://huggingface.co/pysentimiento/robertuito-hate-speech) on the EXIST dataset. It achieves the following results on the evaluation set:
- Loss: 0.12
- Accuracy: 0.80

## Model description

Model built for the Somos NLP Hackathon to detect sexism in Spanish-language tweets.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a sketch of how they might be wired together follows the list:
- learning_rate: 2e-6
- adam_epsilon: 1e-8
- warmup: 3
- train_batch_size: 32
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-8
- lr_scheduler_type: linear
- num_epochs: 30
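
The card does not include the training script, so the following is a minimal sketch of how these hyperparameters could be wired together with `transformers` and PyTorch. The variable names, the binary `num_labels=2` head, and the interpretation of `warmup: 3` as warmup steps are assumptions, not the authors' code.

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    get_linear_schedule_with_warmup,
)

# Hyperparameters from the list above.
learning_rate = 2e-6
adam_epsilon = 1e-8
warmup_steps = 3  # assumption: "warmup: 3" read as warmup steps
num_epochs = 30
steps_per_epoch = 143  # batches per epoch, as reported in the training log

# Assumption: the base model's hate-speech head is replaced with a binary
# (sexist / not sexist) classification head, hence ignore_mismatched_sizes.
model = AutoModelForSequenceClassification.from_pretrained(
    "pysentimiento/robertuito-hate-speech",
    num_labels=2,
    ignore_mismatched_sizes=True,
)

# AdamW with the betas/epsilon listed above, plus a linear schedule with
# warmup, matching lr_scheduler_type: linear.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=learning_rate,
    betas=(0.9, 0.999),
    eps=adam_epsilon,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=steps_per_epoch * num_epochs,
)
```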

### Training results

Each epoch ran 143 training batches and took roughly 2 minutes 29 seconds.

| Epoch | Avg. training loss | Epoch time |
|------:|-------------------:|-----------:|
| 1 | 0.39 | 0:02:29 |
| 2 | 0.36 | 0:02:29 |
| 3 | 0.34 | 0:02:29 |
| 4 | 0.33 | 0:02:29 |
| 5 | 0.31 | 0:02:29 |
| 6 | 0.29 | 0:02:29 |
| 7 | 0.28 | 0:02:29 |
| 8 | 0.27 | 0:02:29 |
| 9 | 0.25 | 0:02:28 |
| 10 | 0.24 | 0:02:29 |
| 11 | 0.23 | 0:02:28 |
| 12 | 0.22 | 0:02:29 |
| 13 | 0.21 | 0:02:29 |
| 14 | 0.20 | 0:02:29 |
| 15 | 0.19 | 0:02:29 |
| 16 | 0.18 | 0:02:29 |
| 17 | 0.17 | 0:02:29 |
| 18 | 0.17 | 0:02:29 |
| 19 | 0.16 | 0:02:29 |
| 20 | 0.15 | 0:02:29 |
| 21 | 0.15 | 0:02:29 |
| 22 | 0.15 | 0:02:29 |
| 23 | 0.14 | 0:02:29 |
| 24 | 0.14 | 0:02:29 |
| 25 | 0.14 | 0:02:29 |
| 26 | 0.13 | 0:02:29 |
| 27 | 0.13 | 0:02:29 |
| 28 | 0.13 | 0:02:29 |
| 29 | 0.12 | 0:02:29 |
| 30 | 0.13 | 0:02:29 |

Classification report on the evaluation set:

|              | precision | recall | f1-score | support |
|--------------|----------:|-------:|---------:|--------:|
| 0            | 0.78      | 0.82   | 0.80     | 551     |
| 1            | 0.82      | 0.79   | 0.81     | 590     |
| accuracy     |           |        | 0.80     | 1141    |
| macro avg    | 0.80      | 0.80   | 0.80     | 1141    |
| weighted avg | 0.80      | 0.80   | 0.80     | 1141    |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
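
## How to use

A minimal inference sketch using the `transformers` pipeline API. The Hub repository id below is a placeholder for this model's full path, and the returned label names depend on the model config's `id2label` mapping:

```python
from transformers import pipeline

# Placeholder repo id: substitute the full Hub path of this model.
classifier = pipeline(
    "text-classification",
    model="twitter_sexismo-finetuned-exist2021",
)

# Returns the predicted label and a confidence score.
print(classifier("Este es un ejemplo de tuit para clasificar"))
```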