
Contributed by Abed Khooli (akhooli), 7 models

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("akhooli/xlm-r-large-arabic-toxic")
model = AutoModelForSequenceClassification.from_pretrained("akhooli/xlm-r-large-arabic-toxic")

xlm-r-large-arabic-toxic (toxic/hate speech classifier)

Toxic (hate speech) classification of Arabic comments (LABEL_0: non-toxic, LABEL_1: toxic), obtained by fine-tuning XLM-RoBERTa-Large. The model also supports zero-shot classification of other languages, including mixed-language input (e.g. Arabic and English).
Usage and further information: see the last section of this Colab notebook
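The loading snippet above only instantiates the tokenizer and model; a minimal inference sketch is shown below. It assumes (per the label description above) that the checkpoint's two output indices map to non-toxic (0) and toxic (1); the `classify` helper and the `ID2LABEL` mapping are illustrative names, not part of the released model.

```python
# Hedged sketch: classifying comments with the fine-tuned checkpoint.
# Assumption: output index 0 = non-toxic, index 1 = toxic, as described
# in the model card ("Label_0: non-toxic, Label_1: toxic").
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ID2LABEL = {0: "non-toxic", 1: "toxic"}

def classify(texts, tokenizer, model):
    """Return a (label, probability) pair for each input string."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (batch, 2)
    probs = torch.softmax(logits, dim=-1)        # normalize to probabilities
    preds = probs.argmax(dim=-1)                 # pick the higher-scoring class
    return [(ID2LABEL[int(p)], float(probs[i, p])) for i, p in enumerate(preds)]

if __name__ == "__main__":
    name = "akhooli/xlm-r-large-arabic-toxic"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    model.eval()
    for text, (label, prob) in zip(["مثال"], classify(["مثال"], tokenizer, model)):
        print(f"{text!r} -> {label} ({prob:.2f})")
```

Mixed-language input (e.g. an Arabic sentence containing English words) can be passed in exactly the same way, since XLM-RoBERTa's tokenizer is multilingual.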