xlm-r-large-arabic-toxic (toxic/hate speech classifier)

Toxic (hate speech) classification of Arabic comments (LABEL_0: non-toxic, LABEL_1: toxic), obtained by fine-tuning XLM-RoBERTa-Large. The model also supports zero-shot classification of other languages, including mixed-language text (e.g. Arabic and English).
Usage and further info: see the last section of this Colab notebook.
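
A minimal usage sketch with the `transformers` text-classification pipeline. The exact hub repo id is an assumption (the published path may include a user or org prefix); the LABEL_0/LABEL_1 mapping follows the card above.

```python
# Human-readable names for the raw pipeline labels, per the card.
LABELS = {"LABEL_0": "non-toxic", "LABEL_1": "toxic"}

def classify(texts, model_id="xlm-r-large-arabic-toxic"):
    """Classify comments, returning (text, readable label, score) tuples.

    model_id is assumed here; substitute the model's full hub path.
    """
    from transformers import pipeline  # deferred import; heavy dependency

    clf = pipeline("text-classification", model=model_id)
    return [
        (text, LABELS.get(pred["label"], pred["label"]), pred["score"])
        for text, pred in zip(texts, clf(texts))
    ]
```

Because the classifier was fine-tuned from a multilingual base model, the same call works on non-Arabic or code-switched input without any changes.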
