---
tags:
  - text-classification
  - toxicity
  - Twitter
base_model: cardiffnlp/twitter-roberta-base-sentiment
widget:
  - text: I love AutoTrain
license: mit
language:
  - es
pipeline_tag: text-classification
library_name: transformers
---

# Fine-tuned roBERTa for Toxicity Classification in Spanish

This is a fine-tuned roBERTa model that uses Twitter-roBERTa base-sized for Sentiment Analysis (cardiffnlp/twitter-roberta-base-sentiment), which was trained on ~58M tweets, as its base model. The dataset used for fine-tuning is a gold standard for protest events annotated for toxicity and incivility in Spanish.

The dataset comprises ~5M data points from three Latin American protest events: (a) protests against coronavirus and judicial reform measures in Argentina in August 2020; (b) protests against education budget cuts in Brazil in May 2019; and (c) the social outburst in Chile stemming from protests against the underground fare hike in October 2019. We focus on interactions in Spanish to elaborate a gold standard for digital interactions in this language; therefore, we prioritise Argentinian and Chilean data.

Labels: NONTOXIC and TOXIC.
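
Below is a minimal usage sketch with the 🤗 Transformers `pipeline`. It assumes the repository id `bgonzalezbustamante/ft-roberta-toxicity`, and the input text and score shown are purely illustrative.

```python
# Minimal sketch; assumes the repo id bgonzalezbustamante/ft-roberta-toxicity.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bgonzalezbustamante/ft-roberta-toxicity",
)

# Returns a list of dicts with a label (NONTOXIC or TOXIC) and a confidence score.
print(classifier("Este es un ejemplo de texto."))
# e.g. [{'label': 'NONTOXIC', 'score': 0.98}]  (illustrative output)
```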

We suggest using bert-spanish-toxicity or ft-xlm-roberta-toxicity instead of this model.

## Validation Metrics

- Accuracy: 0.790
- Precision: 0.920
- Recall: 0.657
- F1-Score: 0.767
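
For reference, the sketch below shows one way metrics like these can be reproduced with scikit-learn. The arrays `y_true` and `y_pred` are hypothetical placeholders for the held-out gold labels and the model's predictions, which are not included in this card.

```python
# Illustrative sketch only: y_true and y_pred are hypothetical placeholders
# for the validation gold labels and model predictions.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0]  # 1 = TOXIC, 0 = NONTOXIC (made-up labels)
y_pred = [1, 0, 0, 1, 0]

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1-Score: ", f1_score(y_true, y_pred))
```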