---
language:
- en
- yo
- ha
- ig
- pcm
size_categories:
- 10K<n<100K
task_categories:
- text-classification
---
|
# Dataset Card for NaijaHate
|
|
|
<!-- Provide a quick summary of the dataset. -->
|
|
|
NaijaHate is a hate speech dataset tailored to the Nigerian context. It contains 35,976 annotated Nigerian tweets, including 29,999 tweets randomly sampled from Nigerian Twitter. For a complete description of the data, please refer to the reference paper (TODO).
|
|
|
## Dataset Structure
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
|
|
|
The dataset comprises four components, identified by the `dataset` column: two components used for training a hate speech model (`stratified` and `al`) and two components used for model evaluation (`eval` and `random`). We detail each component below:

- `stratified`:

- `al`:

- `eval`:

- `random`:
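Assuming the schema described above (a `dataset` column holding one of the four component names), the components can be split into training and evaluation sets along the lines of this sketch; the example rows are invented for illustration:

```python
# Toy rows mimicking the schema described in this card
# (column names per the card; the values are invented).
rows = [
    {"text": "tweet a", "dataset": "stratified", "class": 0},
    {"text": "tweet b", "dataset": "al", "class": 2},
    {"text": "tweet c", "dataset": "eval", "class": 1},
    {"text": "tweet d", "dataset": "random", "class": 0},
]

# Component groupings as described in the card.
TRAIN_COMPONENTS = {"stratified", "al"}
EVAL_COMPONENTS = {"eval", "random"}

train_rows = [r for r in rows if r["dataset"] in TRAIN_COMPONENTS]
eval_rows = [r for r in rows if r["dataset"] in EVAL_COMPONENTS]
```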
|
|
|
## Dataset Creation

### Source Data

This dataset was sourced from a large Twitter dataset of 2.2 billion tweets posted between March 2007 and July 2023, comprising the timelines of 2.8 million Twitter users with a profile location in Nigeria.
|
|
|
### Annotation

We recruited a team of four Nigerian annotators, two female and two male, each from one of the four most populous Nigerian ethnic groups: Hausa, Yoruba, Igbo, and Fulani.
We followed a prescriptive approach, instructing annotators to strictly adhere to extensive annotation guidelines describing our taxonomy of hate speech (see the reference paper for the full guidelines).
Each tweet is annotated as belonging to one of three classes:

- hateful (`2` in the `class` column), if it contains an attack on an individual or a group based on the perceived possession of a certain characteristic (e.g., gender, race);

- offensive (`1` in the `class` column), if it contains a personal attack or an insult that does not target an individual based on their identity;

- neutral (`0` in the `class` column), if it is neither hateful nor offensive.
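The class encoding above can be captured in a small helper; the mapping follows this card, while the function name is ours:

```python
# Integer-to-label mapping for the `class` column, per this card.
CLASS_LABELS = {0: "neutral", 1: "offensive", 2: "hateful"}

def decode_class(class_id: int) -> str:
    """Map an integer from the `class` column to its label."""
    try:
        return CLASS_LABELS[class_id]
    except KeyError:
        raise ValueError(f"unknown class id: {class_id}")
```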
|
|
|
If a tweet is labeled as hateful, it is also annotated for the communities being targeted. The possible target communities in our dataset are:
|
- Christians (`christian` column)

- Muslims (`muslim`)

- Northerners (`northerner`)

- Southerners (`southerner`)

- Hausas (`hausa`)

- Fulanis (`fulani`)

- Yorubas (`yoruba`)

- Igbos (`igbo`)

- Women (`women`)

- LGBTQ+ (`lgbtq+`)

- Herdsmen (`herdsmen`)

- Biafra (`biafra`)
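Assuming each target column holds a binary flag (the card does not state the encoding, so this is an assumption), the communities targeted by a hateful tweet could be collected like this:

```python
# Target columns listed in this card; we ASSUME each holds a 0/1 flag
# indicating whether that community is targeted.
TARGET_COLUMNS = [
    "christian", "muslim", "northerner", "southerner",
    "hausa", "fulani", "yoruba", "igbo",
    "women", "lgbtq+", "herdsmen", "biafra",
]

def targeted_communities(row: dict) -> list:
    """Return the target columns flagged in one annotated row."""
    return [col for col in TARGET_COLUMNS if row.get(col) == 1]
```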
|
|
|
Each tweet was labeled by three annotators. For the three-class annotation task, all three annotators agreed on 90% of labeled tweets, two out of three agreed in 9.5% of cases, and all three disagreed in 0.5% of cases (Krippendorff's alpha = 0.7).
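With three labels per tweet, the agreement breakdown reported above (full agreement / two-of-three / full disagreement) can be computed with a simple tally; this is a generic sketch, not the authors' code:

```python
from collections import Counter

def agreement_breakdown(labels_per_tweet):
    """Fractions of tweets with full, partial (2/3), and no agreement.

    `labels_per_tweet` is a list of 3-tuples of class labels,
    one tuple per tweet.
    """
    full = partial = none = 0
    for labels in labels_per_tweet:
        # Size of the largest group of matching labels: 3, 2, or 1.
        top = Counter(labels).most_common(1)[0][1]
        if top == 3:
            full += 1
        elif top == 2:
            partial += 1
        else:
            none += 1
    n = len(labels_per_tweet)
    return full / n, partial / n, none / n
```

Note that this tally only summarizes raw agreement; chance-corrected measures such as Krippendorff's alpha require a dedicated implementation.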
|
|
|
## BibTeX entry and citation information

TODO

Please cite the [reference paper](https://aclanthology.org/2022.lrec-1.27/) if you use this dataset.

```bibtex
@inproceedings{XXX}
```