Commit 2c44b4f by Nayeon Lee (parent: 5607918): Update README.md

README.md CHANGED
---
license: cc-by-sa-4.0
---

# CREHate

Repository for the CREHate dataset, presented in the NAACL 2024 paper "[Exploring Cross-cultural Differences in English Hate Speech Annotations: From Dataset Construction to Analysis](https://arxiv.org/abs/2308.16705)".

## About CREHate (Paper Abstract)

<img src="https://github.com/nlee0212/CREHate/blob/main/CREHate_Dataset_Construction.png" width="400">

Most hate speech datasets neglect the cultural diversity within a single language, resulting in a critical shortcoming in hate speech detection.
To address this, we introduce **CREHate**, a **CR**oss-cultural **E**nglish **Hate** speech dataset.
To construct CREHate, we follow a two-step procedure: 1) cultural post collection and 2) cross-cultural annotation.
We sample posts from the SBIC dataset, which predominantly represents North America, and collect posts from four geographically diverse English-speaking countries (Australia, the United Kingdom, Singapore, and South Africa) using culturally hateful keywords retrieved from our survey.
Annotations are collected from the four countries plus the United States to establish representative labels for each country.
Our analysis highlights statistically significant disparities across countries in hate speech annotations.
Only 56.2% of the posts in CREHate achieve consensus among all countries, with a highest pairwise label difference rate of 26%.
Qualitative analysis shows that label disagreement occurs mostly due to differing interpretations of sarcasm and the personal bias of annotators on divisive topics.
Lastly, we evaluate large language models (LLMs) in a zero-shot setting and show that current LLMs tend to achieve higher accuracy on labels from Anglosphere countries in CREHate.

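The two agreement statistics above can be illustrated with a toy sketch. The labels below are invented for illustration (they are not taken from CREHate): a post reaches consensus when all five country labels agree, and the pairwise label difference rate between two countries is the fraction of posts on which their labels differ.

```python
# Toy illustration of the agreement statistics; labels are hypothetical.
# 1 = hate, 0 = non-hate, one binary label per annotating country.
labels = {
    "post_1": {"US": 1, "AU": 1, "GB": 1, "SG": 1, "ZA": 1},  # full consensus
    "post_2": {"US": 1, "AU": 0, "GB": 1, "SG": 0, "ZA": 1},  # disagreement
    "post_3": {"US": 0, "AU": 0, "GB": 0, "SG": 0, "ZA": 0},  # full consensus
    "post_4": {"US": 1, "AU": 1, "GB": 0, "SG": 0, "ZA": 0},  # disagreement
}

# Share of posts where all five countries assign the same label.
consensus = sum(len(set(v.values())) == 1 for v in labels.values()) / len(labels)

def pairwise_diff_rate(a: str, b: str) -> float:
    """Fraction of posts on which countries a and b disagree."""
    return sum(v[a] != v[b] for v in labels.values()) / len(labels)

print(consensus)                       # 0.5
print(pairwise_diff_rate("US", "SG"))  # 0.5
```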
## Dataset Statistics

<div id="tab:3_1_stats">

| **Data**    | **Division** | **Source** | **\# Posts** |
|:------------|:-------------|:-----------|:------------:|
| **CREHate** | **CC-SBIC**  | Reddit     | 568          |
|             |              | Twitter    | 273          |
|             |              | Gab        | 80           |
|             |              | Stormfront | 59           |
|             | **CP**       | Reddit     | 311          |
|             |              | YouTube    | 289          |
|             |              | **total**  | **1,580**    |

Data statistics and sources of CREHate. CC-SBIC refers to cross-culturally re-annotated SBIC posts. CP refers to additionally collected cultural posts from four countries (AU, GB, SG, and ZA), which are also cross-culturally annotated.

</div>

All 1,580 posts have been annotated by annotators from the United States, Australia, the United Kingdom, Singapore, and South Africa, resulting in a total of 7,900 labels.
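As a quick sanity check, the per-source counts in the table above reproduce the stated totals (a minimal sketch using only numbers from the table):

```python
# Post counts per source, copied from the statistics table above.
cc_sbic = {"Reddit": 568, "Twitter": 273, "Gab": 80, "Stormfront": 59}
cp = {"Reddit": 311, "YouTube": 289}

total_posts = sum(cc_sbic.values()) + sum(cp.values())

# Each post is labeled once per annotating country (US, AU, GB, SG, ZA).
countries = ["US", "AU", "GB", "SG", "ZA"]
total_labels = total_posts * len(countries)

print(total_posts)   # 1580
print(total_labels)  # 7900
```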