Update README.md
README.md
````diff
@@ -7,7 +7,7 @@ tags:
 - hate speech
 - offensive language
 widget:
-- text:
+- text: Apprezzo il lavoro delle donne nella nostra comunità.
   example_title: Misogyny Detection Example 1
 pipeline_tag: text-classification
 ---
@@ -59,5 +59,10 @@ Please use the following BibTeX entry if you use this model in your project:
 }
 ```
 
+# Limitations
+
+Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context. \
+Please refer to our paper for a quantitative evaluation of this mitigation.
+
 ## License
 [GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/)
````
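Since the model card's front matter declares `pipeline_tag: text-classification`, the checkpoint can be driven through the standard `transformers` pipeline API. A minimal sketch, assuming the model is published on the Hugging Face Hub; the repository ID `your-namespace/misogyny-detection-ear` is a placeholder for illustration, not the model's real name:

```python
def classify(texts, model_id="your-namespace/misogyny-detection-ear"):
    """Score texts with the (hypothetical) misogyny-detection checkpoint.

    Returns one {"label": ..., "score": ...} dict per input text.
    """
    # Imported lazily so this module loads even without transformers installed.
    from transformers import pipeline

    classifier = pipeline("text-classification", model=model_id)
    return classifier(list(texts))


# Example call, mirroring the widget example added in this commit:
# classify(["Apprezzo il lavoro delle donne nella nostra comunità."])
```

As the new Limitations section warns, individual keywords may still dominate a prediction, so scores on short, keyword-heavy inputs should be read with care.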