Tyler Ashworth committed
Commit b8d916a
1 Parent(s): e899a89
Remove README
README.md DELETED
@@ -1,47 +0,0 @@
---
language:
- en
metrics:
- f1
- accuracy
pipeline_tag: text-classification
widget:
- text: "Every woman wants to be a model. It's codeword for 'I get everything for free and people want me'"
---

### distilbert-base-sexism-detector
This is a fine-tuned distilbert-base model trained on the Explainable Detection of Online Sexism (EDOS) dataset. It is intended to be used as a classification model for identifying tweets (0 - not sexist; 1 - sexist).

**This is a lightweight model with an 81.2 F1 score. Use it for fast prediction via the online API; if you would like to use our best model (86.3 F1 score), see this [link](https://huggingface.co/NLP-LTU/BERTweet-large-sexism-detector).**
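For quick predictions without downloading the model, the online API mentioned above can be called over HTTP. The sketch below assumes the standard Hugging Face Inference API endpoint pattern, and `YOUR_HF_TOKEN` is a placeholder for your own access token.

```python
import requests

# Assumed standard Inference API endpoint for this repository.
API_URL = "https://api-inference.huggingface.co/models/NLP-LTU/distilbert-sexism-detector"
HEADERS = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token

def query(text):
    # The API takes a JSON payload with an "inputs" field and returns label/score pairs.
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    return response.json()

print(query("basically I placed more value on her than I should then?"))
```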

Classification examples (try these examples in the Hosted Inference API panel on the right):

| Prediction | Tweet |
|-----|--------|
| sexist | Every woman wants to be a model. It's codeword for "I get everything for free and people want me" |
| not sexist | basically I placed more value on her than I should then? |
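The two table rows above can be reproduced locally by passing a list of texts to the pipeline described under "How to use" below; this is a minimal sketch, and the exact label strings depend on the model's configured `id2label` mapping.

```python
from transformers import pipeline

# Load the classifier; the tokenizer is taken from distilbert-base-uncased,
# as in the usage example below.
classifier = pipeline(
    "text-classification",
    model="NLP-LTU/distilbert-sexism-detector",
    tokenizer="distilbert-base-uncased",
)

tweets = [
    "Every woman wants to be a model. It's codeword for 'I get everything for free and people want me'",
    "basically I placed more value on her than I should then?",
]

# The pipeline accepts a list and returns one {'label': ..., 'score': ...} dict per input.
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']} ({result['score']:.3f}): {tweet}")
```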
# More Details
For more details about the datasets and evaluation results, see (we will update this page with a link to our paper).
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained('NLP-LTU/distilbert-sexism-detector')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# The pipeline returns a list with one dict per input, e.g. [{'label': ..., 'score': ...}].
prediction = classifier("Every woman wants to be a model. It's codeword for 'I get everything for free and people want me'")

# Map the raw label id to a readable name (0 - not sexist, 1 - sexist); if the model config
# already stores readable label names, fall back to the returned label as-is.
label_map = {'LABEL_0': 'not sexist', 'LABEL_1': 'sexist'}
label_pred = label_map.get(prediction[0]['label'], prediction[0]['label'])

print(label_pred)
```
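If the pipeline helper is not wanted, the same prediction can be made with a direct forward pass. This is a rough sketch assuming the 0 - not sexist / 1 - sexist mapping stated above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('NLP-LTU/distilbert-sexism-detector')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')

inputs = tokenizer("basically I placed more value on her than I should then?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Argmax over the two classes gives the predicted id (0 - not sexist, 1 - sexist).
pred_id = int(logits.argmax(dim=-1))
print('not sexist' if pred_id == 0 else 'sexist')
```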
Evaluation results:
```
              precision    recall  f1-score   support

  not sexist     0.9000    0.9264    0.9130      3030
      sexist     0.7469    0.6784    0.7110       970

    accuracy                         0.8662      4000
   macro avg     0.8234    0.8024    0.8120      4000
weighted avg     0.8628    0.8662    0.8640      4000
```
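A report like the one above can be regenerated from a labeled test set with scikit-learn's `classification_report`. The sketch below assumes hypothetical `test_texts` and `test_labels` (0 - not sexist, 1 - sexist) already loaded from the EDOS data.

```python
from sklearn.metrics import classification_report
from transformers import pipeline

# Hypothetical evaluation data: a list of tweets and their 0/1 gold labels from EDOS.
test_texts = ["...", "..."]
test_labels = [0, 1]

classifier = pipeline(
    "text-classification",
    model="NLP-LTU/distilbert-sexism-detector",
    tokenizer="distilbert-base-uncased",
)

# Convert each predicted label string back to a 0/1 id before scoring.
pred_ids = [0 if p["label"] in ("LABEL_0", "not sexist") else 1 for p in classifier(test_texts)]

print(classification_report(test_labels, pred_ids, target_names=["not sexist", "sexist"], digits=4))
```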