lheuveline committed · Commit a45b6a6 · 1 Parent(s): 89eaa81 · readme

README.md CHANGED
@@ -33,3 +33,17 @@ Since MT models are not perfect, some messages are not entirely translated or no
To check for obvious errors in the pipeline, a general language detection model is used to prune non-French texts.

Language detection model : papluca/xlm-roberta-base-language-detection
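A minimal sketch of this pruning step, running the papluca/xlm-roberta-base-language-detection checkpoint through the `transformers` text-classification pipeline and keeping only texts predicted as French; the 0.9 confidence threshold is an assumption, not a value taken from this repository:

```python
# Sketch of the pruning step: keep only texts predicted as French ("fr").
# The 0.9 confidence threshold is an assumption, not a value from this repo.
from transformers import pipeline

lang_detector = pipeline(
    "text-classification",
    model="papluca/xlm-roberta-base-language-detection",
)

def keep_french(texts, threshold=0.9):
    """Return the subset of texts detected as French with enough confidence."""
    preds = lang_detector(texts, truncation=True)
    return [
        text
        for text, pred in zip(texts, preds)
        if pred["label"] == "fr" and pred["score"] >= threshold
    ]

print(keep_french(["Bonjour tout le monde", "Hello world"]))
```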
+
+### Annotation
+
+Since the "hate speech" dimension is highly subjective and the datasets come with different annotation types, a common labeling strategy is required.
+
+Each sample is annotated with "0" for a negative sample and "1" for a positive sample.
+
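In practice this amounts to a per-dataset mapping of the original annotations into {0, 1}. The sketch below only illustrates the idea; the source label names are hypothetical and do not come from the actual annotation schemes of the datasets listed further down:

```python
# Hypothetical mapping only: these source labels are illustrative, not the
# real annotation schemes of the underlying datasets.
HYPOTHETICAL_LABEL_MAP = {"normal": 0, "offensive": 1, "hateful": 1}

def to_binary_label(source_label: str) -> int:
    """Map a dataset-specific annotation to the common 0/1 convention."""
    return HYPOTHETICAL_LABEL_MAP[source_label]
```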
+### Filtering rules :
+
+- FTR dataset :
+- MLMA dataset :
+- CAA dataset :
+- "Annotated Corpus" dataset :
+- UC-Berkeley Measuring Hate Speech dataset : average hate_speech_score > 0 => 1
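The only rule spelled out above is the UC-Berkeley one. A sketch of it, assuming the public ucberkeley-dlab/measuring-hate-speech release on the Hugging Face Hub with comment_id, text and hate_speech_score columns (the hub id and column names are assumptions, not taken from this repository):

```python
# Sketch of the UC-Berkeley rule: mean hate_speech_score > 0 => label 1, else 0.
# Hub id and column names (comment_id, text, hate_speech_score) are assumptions
# based on the public ucberkeley-dlab/measuring-hate-speech release.
from datasets import load_dataset

ds = load_dataset("ucberkeley-dlab/measuring-hate-speech", split="train")
df = ds.to_pandas()

labels = (
    df.groupby("comment_id")
      .agg(text=("text", "first"), mean_score=("hate_speech_score", "mean"))
      .assign(label=lambda d: (d["mean_score"] > 0).astype(int))
      .reset_index()[["text", "label"]]
)
print(labels.head())
```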