manueltonneau committed · Commit a1b6ac0 · Parent(s): e5c08e6
Update README.md

README.md CHANGED
````diff
@@ -10,7 +10,7 @@ pipeline_tag: fill-mask
 
 
 # NaijaXLM-T-base
-This is a XLM-Roberta-base model further pretrained on 2.2 billion Nigerian tweets, described and evaluated in the [reference paper](https://
+This is a XLM-Roberta-base model further pretrained on 2.2 billion Nigerian tweets, described and evaluated in the [reference paper](https://aclanthology.org/2024.acl-long.488/). This model was developed by [@pvcastro](https://huggingface.co/pvcastro) and [@manueltonneau](https://huggingface.co/manueltonneau).
 
 ## Model Details
 
@@ -29,8 +29,8 @@ This is a XLM-Roberta-base model further pretrained on 2.2 billion Nigerian twee
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** https://github.com/
-- **Paper:** https://
+- **Repository:** https://github.com/worldbank/NaijaHate
+- **Paper:** https://aclanthology.org/2024.acl-long.488/
 
 
 
@@ -57,13 +57,29 @@ We kept the same vocabulary as XLM-R and trained the model until convergence for
 ## BibTeX entry and citation information
 
 
-Please cite the [reference paper](https://
+Please cite the [reference paper](https://aclanthology.org/2024.acl-long.488/) if you use this model.
 
 ```bibtex
-@
-
-
-
-
+@inproceedings{tonneau-etal-2024-naijahate,
+    title = "{N}aija{H}ate: Evaluating Hate Speech Detection on {N}igerian {T}witter Using Representative Data",
+    author = "Tonneau, Manuel and
+      Quinta De Castro, Pedro and
+      Lasri, Karim and
+      Farouq, Ibrahim and
+      Subramanian, Lakshmi and
+      Orozco-Olvera, Victor and
+      Fraiberger, Samuel",
+    editor = "Ku, Lun-Wei and
+      Martins, Andre and
+      Srikumar, Vivek",
+    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = aug,
+    year = "2024",
+    address = "Bangkok, Thailand",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.acl-long.488",
+    pages = "9020--9040",
+    abstract = "To address the global issue of online hate, hate speech detection (HSD) systems are typically developed on datasets from the United States, thereby failing to generalize to English dialects from the Majority World. Furthermore, HSD models are often evaluated on non-representative samples, raising concerns about overestimating model performance in real-world settings. In this work, we introduce NaijaHate, the first dataset annotated for HSD which contains a representative sample of Nigerian tweets. We demonstrate that HSD evaluated on biased datasets traditionally used in the literature consistently overestimates real-world performance by at least two-fold. We then propose NaijaXLM-T, a pretrained model tailored to the Nigerian Twitter context, and establish the key role played by domain-adaptive pretraining and finetuning in maximizing HSD performance. Finally, owing to the modest performance of HSD systems in real-world conditions, we find that content moderators would need to review about ten thousand Nigerian tweets flagged as hateful daily to moderate 60{\%} of all hateful content, highlighting the challenges of moderating hate speech at scale as social media usage continues to grow globally. Taken together, these results pave the way towards robust HSD systems and a better protection of social media users from hateful content in low-resource settings.",
 }
+
 ```
````
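The card's `pipeline_tag: fill-mask` metadata marks this as a masked-language model: domain-adaptive pretraining corrupts tweets by hiding random tokens and trains the model to recover them. The sketch below is illustrative only, assuming simple whitespace tokens and an XLM-R-style `<mask>` symbol; the actual model masks subword pieces produced by its tokenizer, not whole words.

```python
import random

MASK = "<mask>"  # XLM-R-style mask token (assumed for illustration)

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Randomly replace tokens with a mask symbol.

    Returns the corrupted sequence and a {position: original token}
    mapping that a fill-mask model would be trained to predict.
    Sketch of the pretraining objective only, not the model's code.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets[i] = tok
        else:
            corrupted.append(tok)
    return corrupted, targets

# Example: corrupt a sentence and show which tokens must be recovered.
tokens = "Lagos is the commercial hub of Nigeria".split()
corrupted, targets = mask_tokens(tokens, mask_prob=0.3, seed=42)
print(" ".join(corrupted))
print(targets)
```

At inference time, a fill-mask pipeline inverts this corruption: given a sentence containing `<mask>`, the model scores candidate tokens for each masked position.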