---
language:
  - en
metrics:
  - f1
  - accuracy
pipeline_tag: text-classification
widget:
  - text: >-
      Every woman wants to be a model. It's codeword for 'I get everything for
      free and people want me'
---

# distilbert-base-sexism-detector

This is a fine-tuned version of distilbert-base on the Explainable Detection of Online Sexism (EDOS) dataset. It is intended to be used as a classification model for identifying tweets (0 - not sexist; 1 - sexist).

This is a lightweight model with an 81.2 F1 score. Use this model for fast prediction via the online API; if you would like to use our best model, with an 86.3 F1 score, use this link.
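
Beyond the widget, the hosted model can also be queried with a plain HTTP request. The sketch below follows the standard Hugging Face Inference API URL pattern; the token placeholder is yours to fill in:

```python
import requests

# Standard Hugging Face Inference API endpoint pattern for this repository
API_URL = "https://api-inference.huggingface.co/models/NLP-LTU/distilbert-sexism-detector"
headers = {"Authorization": "Bearer <your-hf-token>"}  # replace with your own token

response = requests.post(API_URL, headers=headers, json={"inputs": "Every woman wants to be a model."})
print(response.json())  # e.g. [[{'label': ..., 'score': ...}, ...]]
```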

Classification examples (use these examples in the Hosted Inference API in the right panel):

| Prediction | Tweet |
|------------|-------|
| sexist | Every woman wants to be a model. It's codeword for "I get everything for free and people want me" |
| not sexist | basically I placed more value on her than I should then? |

## More Details

For more details about the dataset and evaluation results, see (we will update this page with our paper link).

## How to use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Load the fine-tuned classifier and the base DistilBERT tokenizer
model = AutoModelForSequenceClassification.from_pretrained('NLP-LTU/distilbert-sexism-detector')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# The pipeline returns a list with one dict per input,
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
prediction = classifier("Every woman wants to be a model. It's codeword for 'I get everything for free and people want me'")

# Map the raw label to a readable name (0 - not sexist; 1 - sexist)
label_pred = 'not sexist' if prediction[0]['label'] == 'LABEL_0' else 'sexist'

print(label_pred)
```
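
The pipeline also accepts a list of texts, so the two example tweets from the table above can be classified in one call (a small usage sketch reusing the `classifier` defined above):

```python
# Classify both example tweets from the table above in one batch
examples = [
    'Every woman wants to be a model. It\'s codeword for "I get everything for free and people want me"',
    "basically I placed more value on her than I should then?",
]
for text, pred in zip(examples, classifier(examples)):
    print(f"{pred['label']} ({pred['score']:.3f}): {text}")
```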
Classification report:

```
              precision    recall  f1-score   support

  not sexist     0.9000    0.9264    0.9130      3030
      sexist     0.7469    0.6784    0.7110       970

    accuracy                         0.8662      4000
   macro avg     0.8234    0.8024    0.8120      4000
weighted avg     0.8628    0.8662    0.8640      4000
```
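
For reference, a report in this format can be reproduced with scikit-learn's `classification_report` (a minimal sketch; `y_true` and `y_pred` are illustrative placeholders, not data shipped with this repository):

```python
from sklearn.metrics import classification_report

# y_true: gold labels (0 - not sexist; 1 - sexist) for your evaluation split
# y_pred: labels predicted by the classifier, mapped back to 0/1
y_true = [0, 0, 1, 1]  # placeholder values for illustration
y_pred = [0, 1, 1, 1]  # placeholder values for illustration

print(classification_report(
    y_true, y_pred,
    target_names=['not sexist', 'sexist'],
    digits=4,
))
```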