---
annotations_creators:
  - crowdsourced
  - expert-generated
language_creators:
  - crowdsourced
language:
  - tr
license:
  - cc-by-2.0
multilinguality:
  - monolingual
pretty_name: Turkish Offensive Language Detection Dataset
size_categories:
  - 10K<n<100K
source_datasets: []
task_categories:
  - text-classification
task_ids:
  - text-classification-other-offensive-language-classification
---

## Dataset Summary

This dataset is an enhanced version of existing Turkish offensive language datasets. Existing datasets are highly imbalanced, and fixing this by collecting and annotating more data is costly. To address the problem, we propose a contextual data mining method for dataset augmentation. Instead of retrieving random tweets and labeling them one by one, the method directly retrieves tweets that are almost certainly hate-related, so they can be labeled without further human interaction, which alleviates the label imbalance problem.

In addition, existing studies (see the References section) are merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task.

The files train.csv, test.csv, and valid.csv contain 42,398, 8,851, and 1,756 annotated tweets, respectively.

## Dataset Structure

A binary classification dataset with (0) Not Offensive and (1) Offensive tweets.
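
As an illustration, the three CSV splits can be loaded with the `datasets` library. This is only a sketch: the column names mentioned in the comments (`text`, `label`) are assumptions, since the CSV schema is not documented on this card.

```python
from datasets import load_dataset

# Minimal loading sketch. The exact column names (e.g. "text", "label")
# are assumptions; check the CSV headers of the actual files.
dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv", "test": "test.csv", "validation": "valid.csv"},
)

print(dataset)              # expected split sizes: 42,398 / 8,851 / 1,756
print(dataset["train"][0])  # a single annotated tweet with its binary label
```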

## Task and Labels

Offensive language identification:

- (0) Not Offensive - Tweet does not contain offense or profanity.
- (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense.
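
If readable label names are preferred over the raw 0/1 integers, a `ClassLabel` feature can be attached after loading, continuing from the loading sketch above. The column name `label` is again an assumption about the CSV schema.

```python
from datasets import ClassLabel

# Attach human-readable names to the binary labels
# (assumes the label column is called "label").
labels = ClassLabel(names=["Not Offensive", "Offensive"])
dataset = dataset.cast_column("label", labels)

print(dataset["train"].features["label"].int2str(1))  # -> "Offensive"
```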

## Data Splits

|                   | train  | test  | dev   |
|-------------------|--------|-------|-------|
| 0 (Not Offensive) | 22,589 | 4,436 | 1,402 |
| 1 (Offensive)     | 19,809 | 4,415 | 354   |
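
The dev split in particular is skewed toward the Not Offensive class. As a small sanity check, the snippet below recomputes the split totals from the table and derives standard inverse-frequency ("balanced") class weights that could be passed to a weighted loss; the counts are taken directly from the table above.

```python
# Recompute split totals and "balanced" class weights from the table above.
split_counts = {
    "train": {0: 22_589, 1: 19_809},
    "test":  {0: 4_436,  1: 4_415},
    "dev":   {0: 1_402,  1: 354},
}

for split, counts in split_counts.items():
    total = sum(counts.values())
    # weight_c = total / (num_classes * count_c)
    weights = {c: round(total / (len(counts) * n), 3) for c, n in counts.items()}
    print(f"{split}: {total} tweets, class weights {weights}")
```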

## Citation Information

BibTeX will be provided after publication.
UBMK 2022 Paper: "Linguistic-based Data Augmentation Approach for Offensive Language Detection"

## References

Before applying our method, we merged open-source Turkish offensive language datasets to further increase the contextual coverage of the existing data.