patriciacarla committed
Commit 807dab1 · Parent: 20f382b
Rename README.md to updated model card

README.md → updated model card
RENAMED
@@ -19,7 +19,7 @@ metrics:
 
 ## Model Description
 
-This model is a multilingual hate speech classifier based on the XLM-R architecture. It is trained to detect hate speech in English
+This model is a multilingual hate speech classifier based on the XLM-R architecture. It is trained to detect hate speech in English, Italian, and Slovene. The model leverages multilingual datasets and incorporates techniques to learn from disagreement among annotators, making it robust in understanding and identifying nuanced hate speech across different languages. It has been developed as part of my Master's thesis, and the training methodology follows the approach outlined by Kralj Novak et al. (2022) in their paper ["Handling Disagreement in Hate Speech Modelling"](https://link.springer.com/chapter/10.1007/978-3-031-08974-9_54).
 
 ## Model Details
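A minimal inference sketch for the model described above, assuming it is published as a standard `transformers` sequence-classification checkpoint; the repository id and the example texts are placeholders, not taken from the card.

```python
# Hypothetical usage sketch: the repo id below is a placeholder, not the real one.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "patriciacarla/xlm-r-hate-speech"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

texts = [
    "Have a great day!",           # English
    "Non sono d'accordo con te.",  # Italian ("I don't agree with you.")
    "To je samo moje mnenje.",     # Slovene ("That's just my opinion.")
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

for text, p in zip(texts, probs):
    print(model.config.id2label[int(p.argmax())], "<-", text)
```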
@@ -36,6 +36,13 @@ The model is trained using a multilingual dataset consisting of Twitter and YouT
 - **Multilingual Training:** The model is trained on datasets in multiple languages, allowing it to generalize well across different languages.
 - **Learning from Disagreement:** The model incorporates techniques to learn from annotator disagreement, improving its ability to handle ambiguous and nuanced cases of hate speech.
 
+### Hate Speech Classes
+
+- **Acceptable**: does not present inappropriate, offensive or violent elements.
+- **Inappropriate**: contains terms that are obscene or vulgar, but the text is not directed at any specific target.
+- **Offensive**: includes offensive generalizations, contempt, dehumanization, or indirect offensive remarks.
+- **Violent**: threatens, indulges, desires or calls for physical violence against a target; it also includes calling for, denying or glorifying war crimes and crimes against humanity.
+
 ## Evaluation Metrics
 
 The model's performance is evaluated using the following metrics:
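The "Learning from Disagreement" bullet and the four-class taxonomy added above can be made concrete with a short sketch: per-example annotator votes over the classes become soft targets for the classifier instead of being collapsed to a majority label. The vote counts and loss choice here are illustrative assumptions, not the thesis's exact recipe.

```python
# Sketch of soft-label ("learning from disagreement") training: each example
# keeps the empirical distribution of annotator votes over the four classes
# from the card, rather than a single majority-vote label. Counts are invented.
import torch
import torch.nn.functional as F

CLASSES = ["acceptable", "inappropriate", "offensive", "violent"]

# one post annotated by 8 people: 3 said "acceptable", 5 said "offensive"
votes = torch.tensor([[3.0, 0.0, 5.0, 0.0]])
targets = votes / votes.sum(dim=-1, keepdim=True)  # [[0.375, 0., 0.625, 0.]]

logits = torch.randn(1, len(CLASSES), requires_grad=True)  # stand-in for XLM-R output
# PyTorch's cross_entropy accepts probability targets, giving a soft-label loss
loss = F.cross_entropy(logits, targets)
loss.backward()
print(f"soft-label loss: {loss.item():.3f}")
```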
@@ -52,9 +59,3 @@ These metrics are computed for each language separately, as well as across the e
 ### Primary Use Case
 
 The primary use case for this model is to automatically detect and moderate hate speech on social media platforms, online forums, and other digital content platforms. This can help in reducing the spread of harmful content and maintaining a safe online environment.
-
-### Limitations
-
-- The model may struggle with extremely nuanced cases where context is critical.
-- False positives can occur, where non-hate speech content is incorrectly classified as hate speech.
-- The performance may vary for languages not included in the training data.
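The per-language evaluation referenced in the last hunk's context line can be reproduced with standard tooling; a sketch assuming accuracy and macro-F1, since the card's full metric list is elided in this diff.

```python
# Sketch of the per-language evaluation: metrics computed for each language
# separately and for the pooled test set. The metric choice (accuracy,
# macro-F1) and the sample data are assumptions for illustration.
from collections import defaultdict
from sklearn.metrics import accuracy_score, f1_score

def evaluate(rows):
    """rows: iterable of (language, gold_label, predicted_label) triples."""
    by_lang = defaultdict(lambda: ([], []))
    for lang, gold, pred in rows:
        by_lang[lang][0].append(gold)
        by_lang[lang][1].append(pred)
    pooled_gold, pooled_pred = [], []
    for lang, (gold, pred) in sorted(by_lang.items()):
        print(f"{lang}: acc={accuracy_score(gold, pred):.3f}, "
              f"macro-F1={f1_score(gold, pred, average='macro'):.3f}")
        pooled_gold += gold
        pooled_pred += pred
    print(f"all: acc={accuracy_score(pooled_gold, pooled_pred):.3f}, "
          f"macro-F1={f1_score(pooled_gold, pooled_pred, average='macro'):.3f}")

evaluate([
    ("en", "offensive", "offensive"),
    ("it", "acceptable", "inappropriate"),
    ("sl", "violent", "violent"),
])
```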