Dataset Card for NaijaHate

NaijaHate is a hate speech dataset tailored to the Nigerian context. It contains 35,976 annotated Nigerian tweets, including 29,999 tweets randomly sampled from Nigerian Twitter. For a complete description of the data, please refer to the reference paper.

Source Data

This dataset was sourced from a large Twitter corpus of 2.2 billion tweets posted between March 2007 and July 2023, forming the timelines of 2.8 million Twitter users with a profile location in Nigeria.

Dataset Structure

The dataset is made up of four components, indicated in the dataset column: two components used to train a hate speech model (stratified and al) and two components used for model evaluation (eval and random). We detail each component below:

  • stratified: 1,607 tweets collected through stratified sampling, using both hate-related and community-related keywords as seeds
  • al: 2,405 tweets sampled through active learning
  • eval: 1,965 tweets from the general Nigerian Twitter dataset with a high likelihood of being hateful according to 10 benchmarked hate speech classifiers
  • random: 29,999 tweets randomly sampled from the general Nigerian Twitter dataset
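
As an illustration, below is a minimal sketch of how these components could be separated after loading the dataset with the Hugging Face datasets library. The repository id matches this card, but the split name and exact schema are assumptions and may differ from the released files.

from datasets import load_dataset

# Access is gated: accept the conditions on the dataset page and authenticate
# first (e.g., via `huggingface-cli login`).
ds = load_dataset("manueltonneau/NaijaHate", split="train")  # split name assumed

# Separate training and evaluation components using the `dataset` column.
train_pool = ds.filter(lambda x: x["dataset"] in ("stratified", "al"))
eval_topk = ds.filter(lambda x: x["dataset"] == "eval")
eval_random = ds.filter(lambda x: x["dataset"] == "random")

print(len(train_pool), len(eval_topk), len(eval_random))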

Annotation

We recruited a team of four Nigerian annotators, two female and two male, each belonging to one of the four most populous Nigerian ethnic groups: Hausa, Yoruba, Igbo and Fulani. We followed a prescriptive approach, instructing annotators to strictly adhere to extensive annotation guidelines describing our taxonomy of hate speech (see the reference paper for the full guidelines). Tweets are annotated as belonging to one of three classes:

  • hateful (2 in the class column) if it contains an attack on an individual or a group based on the perceived possession of a certain characteristic (e.g., gender, race)
  • offensive (1 in the class column) if it contains a personal attack or an insult that does not target an individual based on their identity
  • neutral (0 in the class column) if it is neither hateful nor offensive
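
The numeric codes in the class column can be mapped back to label names for downstream use. Here is a minimal sketch reusing the ds object from the loading example above; the mapping follows the list above, everything else is illustrative.

CLASS_NAMES = {0: "neutral", 1: "offensive", 2: "hateful"}

def add_label_name(example):
    # Map the integer code in the `class` column to a readable label.
    example["label_name"] = CLASS_NAMES[example["class"]]
    return example

labeled = ds.map(add_label_name)
hateful_tweets = labeled.filter(lambda x: x["class"] == 2)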

If a tweet is labeled as hateful, it is also annotated for the communities being targeted. The possible target communities in our dataset are:

  • Christians (christian column)
  • Muslims (muslim)
  • Northerners (northerner)
  • Southerners (southerner)
  • Hausas (hausa)
  • Fulanis (fulani)
  • Yorubas (yoruba)
  • Igbos (igbo)
  • Women (women)
  • LGBTQ+ (lgbtq+)
  • Herdsmen (herdsmen)
  • Biafra (biafra)
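
Assuming each target column is a binary indicator set to 1 when the corresponding community is targeted (check the released schema before relying on this), the targets of a hateful tweet could be collected as follows, reusing hateful_tweets from the sketch above.

TARGET_COLUMNS = [
    "christian", "muslim", "northerner", "southerner", "hausa", "fulani",
    "yoruba", "igbo", "women", "lgbtq+", "herdsmen", "biafra",
]

def targeted_communities(example):
    # Assumes binary indicator columns; returns the list of targeted groups.
    return [col for col in TARGET_COLUMNS if example.get(col) == 1]

for tweet in hateful_tweets.select(range(min(5, len(hateful_tweets)))):
    print(targeted_communities(tweet))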

Each tweet was labeled by three annotators. For the three-class annotation task, all three annotators agreed on 90% of labeled tweets, two out of three agreed in 9.5% of cases, and all three disagreed in 0.5% of cases (Krippendorff's alpha = 0.7).

Language Composition

We further detail the share of each language by dataset component below:

Language                      Stratified + active learning sets (%)    Random set (%)
English                       74.2                                     77
English & Nigerian Pidgin     11                                       1.5
English & Yoruba              4.2                                      -
Nigerian Pidgin               3.6                                      7.3
English & Hausa               2.2                                      -
Hausa                         1                                        1.2
Yoruba                        -                                        1
URLs                          -                                        6
Emojis                        -                                        2.3

BibTeX entry and citation information

Please cite the reference paper if you use this dataset.

@article{tonneau2024naijahate,
  title={NaijaHate: Evaluating Hate Speech Detection on Nigerian Twitter Using Representative Data},
  author={Tonneau, Manuel and de Castro, Pedro Vitor Quinta and Lasri, Karim and Farouq, Ibrahim and Subramanian, Lakshminarayanan and Orozco-Olvera, Victor and Fraiberger, Samuel},
  journal={arXiv preprint arXiv:2403.19260},
  year={2024}
}