
xlm-r-large-arabic-toxic (toxic/hate speech classifier)

Toxic (hate speech) classification of Arabic comments (Label_0: non-toxic, Label_1: toxic), obtained by fine-tuning XLM-RoBERTa-Large. The model can also be used zero-shot on other languages and on mixed-language text (e.g. Arabic and English).
Usage and further info: see the last section of this Colab notebook.
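As a quick-start alternative to the notebook, the checkpoint can be loaded with the standard transformers text-classification pipeline. The sketch below is a minimal usage example, not taken from the notebook itself; the label strings in the output may appear as LABEL_0 / LABEL_1 depending on the model config, with the same mapping described above.

```python
# Minimal sketch: classify Arabic (or mixed-language) text with the
# fine-tuned checkpoint using the standard text-classification pipeline.
# Label mapping assumed from the card: Label_0 = non-toxic, Label_1 = toxic.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="akhooli/xlm-r-large-arabic-toxic",
)

# One Arabic example and one mixed Arabic/English example
# (zero-shot use on other or mixed languages, as noted above).
samples = [
    "هذا المنتج رائع وأنصح به",        # "This product is great, I recommend it"
    "Stop talking, كلامك لا يهم أحد",   # mixed-language example
]

for text, result in zip(samples, classifier(samples)):
    # Each result is a dict with 'label' and 'score' keys.
    print(f"{result['label']} ({result['score']:.3f}): {text}")
```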

