---
license: apache-2.0
tags:
datasets:
- EXIST Dataset
metrics:
- accuracy
model-index:
- name: twitter_sexismo-finetuned-exist2021
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: EXIST Dataset
      type: EXIST Dataset
      args: es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.79
---

# twitter_sexismo-finetuned-exist2021

This model is a fine-tuned version of [pysentimiento/robertuito-hate-speech](https://huggingface.co/pysentimiento/robertuito-hate-speech) on the EXIST dataset. It achieves the following results on the evaluation set:
- Loss: 0.40
- Accuracy: 0.79

## Model description

Model built for the Somos NLP Hackathon to detect sexism in Spanish-language tweets.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-6
- adam_epsilon: 1e-8
- num_epochs: 15
- warmup_steps: 3
- train_batch_size: 32
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear

### Training results

Average training loss and epoch duration for the last seven epochs (143 batches per epoch):

| Epoch | Average training loss | Epoch time |
|------:|----------------------:|-----------:|
| 9     | 0.43                  | 0:02:18    |
| 10    | 0.42                  | 0:02:18    |
| 11    | 0.42                  | 0:02:18    |
| 12    | 0.41                  | 0:02:18    |
| 13    | 0.40                  | 0:02:18    |
| 14    | 0.40                  | 0:02:18    |
| 15    | 0.40                  | 0:02:18    |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
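
## How to use

A minimal inference sketch using the `transformers` pipeline, assuming the checkpoint is published on the Hugging Face Hub; the `model_id` below is a placeholder and should be replaced with the full `organization/model` path of this repository.

```python
from transformers import pipeline

# Placeholder id (assumption): replace with the full "organization/model" path
# under which this fine-tuned checkpoint is published on the Hugging Face Hub.
model_id = "twitter_sexismo-finetuned-exist2021"

# Text-classification pipeline built on the fine-tuned RoBERTuito checkpoint.
classifier = pipeline("text-classification", model=model_id)

# Classify a Spanish-language tweet; the output is a list of {label, score} dicts.
print(classifier("Texto de un tuit en español"))
```

Because the base model is RoBERTuito, tweets are usually normalized (user mentions, hashtags, emoji) before classification, e.g. with pysentimiento's `preprocess_tweet` helper; whether such preprocessing was applied during fine-tuning is not documented in this card.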