---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Portuguese HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online

## Dataset Structure

The CSV format mostly matches the original HateCheck data, with some adjustments for specific languages.

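As a quick sketch of what a row looks like, the snippet below parses a single-row CSV with a subset of the columns described in the field list. The sample values (case ID, functionality name, text, target group) are invented placeholders, not taken from the dataset:

```python
import csv
import io

# Minimal sketch of the per-language CSV layout. The row values here are
# illustrative placeholders; the real files contain the full test suites.
sample = io.StringIO(
    "mhc_case_id,functionality,test_case,label_gold,target_ident\n"
    "portuguese-1,derog_neg_emote_h,Texto de exemplo.,hateful,women\n"
)

rows = list(csv.DictReader(sample))
print(rows[0]["mhc_case_id"])  # portuguese-1
print(rows[0]["label_gold"])   # hateful
```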
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").

**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

**test_case**
The test case text.

**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

**ref_templ_id**
The equivalent of ref_case_id, but for template IDs.

**templ_id**
The ID of the template from which the test case was generated.

**case_templ**
The template from which the test case was generated (where applicable).

**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

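A rough sketch of how the two gendered template variants relate: the placeholder syntax (`[IDENT_P]`) and the Portuguese phrasing below are invented for illustration, not copied from the dataset.

```python
# Illustrative only: placeholder name and phrasing are hypothetical.
# For gender-inflected languages, a case carries two template variants
# instead of a single case_templ.
row = {
    "gender_male": "Adoro todos os [IDENT_P].",
    "gender_female": "Adoro todas as [IDENT_P].",
}

def fill(template: str, group: str) -> str:
    """Substitute the (hypothetical) [IDENT_P] placeholder with a group term."""
    return template.replace("[IDENT_P]", group)

print(fill(row["gender_female"], "mulheres"))  # Adoro todas as mulheres.
```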
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.

**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
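The annotator-derived fields above can be recomputed from label_annotated. The sketch below does this on two toy rows with invented template IDs and votes; note it simplifies disagreement_in_template by flagging any template with a disagreeing case, ignoring the IDENT-template condition described above.

```python
from ast import literal_eval
from collections import Counter

# Toy rows with invented values, mirroring the fields described above.
rows = [
    {"templ_id": "t1", "label_gold": "hateful",
     "label_annotated": "['non-hateful', 'non-hateful', 'hateful']"},
    {"templ_id": "t1", "label_gold": "hateful",
     "label_annotated": "['hateful', 'hateful', 'hateful']"},
]

for row in rows:
    votes = literal_eval(row["label_annotated"])  # stringified list -> list
    row["label_annotated_maj"] = Counter(votes).most_common(1)[0][0]
    row["disagreement_in_case"] = row["label_annotated_maj"] != row["label_gold"]

# Simplification: flag a template if any of its cases has a disagreement.
flagged = {r["templ_id"] for r in rows if r["disagreement_in_case"]}
for row in rows:
    row["disagreement_in_template"] = row["templ_id"] in flagged
```

Filtering on `disagreement_in_template` then drops every case generated from a contested template, as suggested above.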