Danish ELECTRA for hate speech (offensive language) detection

The ELECTRA Offensive model classifies whether a Danish text is offensive or not. It is fine-tuned from the pretrained Danish Ælæctra model.

See the DaNLP documentation for more details.

Here is how to use the model:

from transformers import ElectraTokenizer, ElectraForSequenceClassification

# Load the matching tokenizer and the fine-tuned classification head
tokenizer = ElectraTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-small")
model = ElectraForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-small")
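Once the model and tokenizer are loaded, the sequence classification head returns raw logits that need to be converted into a prediction. The sketch below shows that post-processing step in plain Python; the binary label order (index 0 = "not offensive", index 1 = "offensive") is an assumption for illustration — check `model.config.id2label` on the real model for the actual mapping.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from the classification head for one input text
logits = [2.1, -1.3]
probs = softmax(logits)

# Assumed label order for this sketch; verify via model.config.id2label
label = "offensive" if probs[1] > probs[0] else "not offensive"
```

In practice you would obtain the logits by tokenizing a text with `return_tensors="pt"`, passing the encoding to the model, and applying the same argmax/softmax step to `outputs.logits`.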

Training data

The training data has not been made publicly available. It consists of social media posts manually annotated in collaboration with Danmarks Radio.
