---
language: "it"
tags:
- bert
- sarcasm-detection
- text-classification
widget:
- text: "Auto, stop a diesel e benzina dal 2035. Ecco cosa cambia per i consumatori"
---

# Italian Sarcasm Detector

Italian Sarcasm Detector is a text classification model built to detect sarcasm in news article titles. It is fine-tuned from dbmdz/bert-base-italian-uncased, and the training data consists of titles scraped from the non-sarcastic Italian newspaper Il Giornale and the sarcastic newspaper Lercio.
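
The card does not include the training script; below is a minimal, illustrative sketch of how such a fine-tuning setup could look with the Hugging Face Trainer. The file names, the "text"/"label" column names, and the hyperparameters are assumptions for illustration, not the authors' actual configuration.

```python
# Illustrative fine-tuning sketch (not the authors' script). The CSV files,
# the "text"/"label" column names, and all hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "dbmdz/bert-base-italian-uncased"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=2)

# Hypothetical CSVs of scraped titles: label 0 = Il Giornale (not sarcastic), label 1 = Lercio (sarcastic)
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="italian-sarcasm-detector", num_train_epochs=3, per_device_train_batch_size=32),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```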

## Training Data

Scraped data:
- Italian non-sarcastic news from [Il Giornale](https://www.ilgiornale.it)
- Italian sarcastic news from [Lercio](https://www.lercio.it)
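
The card does not describe how the scraped titles were assembled into a training set; the following is a hypothetical sketch in which each title is labeled by its source (the train.csv file and the "text"/"label" columns are assumptions, matching the fine-tuning sketch above).

```python
# Hypothetical dataset assembly: titles from Il Giornale get label 0 (not sarcastic),
# titles from Lercio get label 1 (sarcastic). File and column names are assumptions.
import csv

il_giornale_titles = ["..."]  # placeholder for scraped Il Giornale titles
lercio_titles = ["..."]       # placeholder for scraped Lercio titles

with open("train.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])
    writer.writerows((title, 0) for title in il_giornale_titles)
    writer.writerows((title, 1) for title in lercio_titles)
```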

Codebase:
- Git Repo: [Official repository](https://github.com/helinivan/multilingual-sarcasm-detector)

---

## Example of classification

```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import string

def preprocess_data(text: str) -> str:
    # Lowercase, strip punctuation, and trim surrounding whitespace before tokenization
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()

MODEL_PATH = "helinivan/italian-sarcasm-detector"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)

text = "Auto, stop a diesel e benzina dal 2035. Ecco cosa cambia per i consumatori"
tokenized_text = tokenizer([preprocess_data(text)], padding=True, truncation=True, max_length=512, return_tensors="pt")
output = model(**tokenized_text)

# Convert logits to probabilities; the prediction is the index of the highest probability (1 = sarcastic)
probs = output.logits.softmax(dim=-1).tolist()[0]
confidence = max(probs)
prediction = probs.index(confidence)
results = {"is_sarcastic": prediction, "confidence": confidence}
```

Output:

```
{'is_sarcastic': 1, 'confidence': 0.9999909400939941}
```
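
For quick experiments, the same checkpoint can also be wrapped in the `transformers` text-classification pipeline. This is a minimal sketch, not part of the original card: the `LABEL_0`/`LABEL_1` names are an assumption about the model config, and the same `preprocess_data` normalization as above is applied for consistency.

```python
from transformers import pipeline
import string

# Sketch only: assumes default LABEL_0 (not sarcastic) / LABEL_1 (sarcastic) label names.
def preprocess_data(text: str) -> str:
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()

classifier = pipeline("text-classification", model="helinivan/italian-sarcasm-detector")
print(classifier(preprocess_data("Auto, stop a diesel e benzina dal 2035. Ecco cosa cambia per i consumatori")))
# e.g. [{'label': 'LABEL_1', 'score': 0.99...}]
```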

## Performance

| Model-Name | F1 | Precision | Recall | Accuracy |
| ------------- | ----- | ----- | ----- | ----- |
| helinivan/english-sarcasm-detector | 94.48 | 94.46 | 94.51 | 94.48 |
| helinivan/italian-sarcasm-detector | 92.99 | 92.77 | 93.24 | 93.42 |
| helinivan/multilingual-sarcasm-detector | 90.91 | 91.51 | 90.44 | 91.55 |