---
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tags
    dtype: float64
  splits:
  - name: train
    num_bytes: 2105604
    num_examples: 12682
  - name: validation
    num_bytes: 705759
    num_examples: 4227
  - name: test
    num_bytes: 710408
    num_examples: 4214
  download_size: 2073133
  dataset_size: 3521771
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

## Ukrainian Toxicity Dataset (Semi-natural)

This is the first toxicity classification dataset of its kind for the Ukrainian language. The dataset was obtained semi-automatically by filtering with toxic keywords. For a manually collected dataset annotated via crowdsourcing, please check [textdetox/multilingual_toxicity_dataset](https://huggingface.co/datasets/textdetox/multilingual_toxicity_dataset).

Due to the subjective nature of toxicity, definitions of toxic language will vary. We include items that are commonly referred to as vulgar or profane language. ([NLLB paper](https://arxiv.org/pdf/2207.04672.pdf))

## Dataset formation:
1. Ukrainian tweets were filtered with toxic keywords so that only tweets containing toxic language remained (a sketch of such keyword filtering is shown below). Source data: https://github.com/saganoren/ukr-twi-corpus
2. Non-toxic sentences were obtained from the same tweet corpus as well as from news and fiction sentences in UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
3. After that, the dataset was split into train/validation/test, with all data balanced both by the toxic/non-toxic criterion and by data source (see the split sketch after the label definitions).
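
A minimal sketch of the keyword filtering in step 1, assuming a placeholder keyword list and toy tweets (neither is the actual resource used to build this dataset):

```python
import re

# Placeholder keyword lexicon; the actual list used for this dataset is not reproduced here.
TOXIC_KEYWORDS = ["badword1", "badword2"]
keyword_re = re.compile("|".join(map(re.escape, TOXIC_KEYWORDS)), re.IGNORECASE)

def contains_toxic_keyword(text: str) -> bool:
    """Return True if the text matches at least one keyword from the lexicon."""
    return keyword_re.search(text) is not None

# Tweets matching a keyword become toxic candidates (label 1); the rest can
# serve as non-toxic candidates (label 0) alongside the UD Ukrainian IU sentences.
tweets = ["звичайний твіт про погоду", "твіт, що містить badword1"]
toxic_candidates = [t for t in tweets if contains_toxic_keyword(t)]
non_toxic_candidates = [t for t in tweets if not contains_toxic_keyword(t)]
print(toxic_candidates, non_toxic_candidates)
```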

Labels: 0 - non-toxic, 1 - toxic.
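
The balanced split of step 3 can be approximated with a stratified split; the column names (`label`, `source`), split ratios, and use of scikit-learn below are assumptions for illustration, not the exact procedure used by the authors:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def balanced_split(df: pd.DataFrame, seed: int = 42):
    """Split into train/validation/test while keeping the joint proportions
    of label (toxic/non-toxic) and data source in every split."""
    strata = df["label"].astype(str) + "_" + df["source"]
    train, rest = train_test_split(df, test_size=0.4, stratify=strata, random_state=seed)
    val, test = train_test_split(rest, test_size=0.5,
                                 stratify=strata.loc[rest.index], random_state=seed)
    return train, val, test
```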

## Load dataset:
```python
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
```
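
A quick way to inspect the loaded splits; per the dataset card above, each example has a `text` field and a `tags` label (0 = non-toxic, 1 = toxic, stored as a float):

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")

print(dataset)                            # DatasetDict with train / validation / test
print(dataset["train"][0])                # {"text": ..., "tags": 0.0 or 1.0}
print(Counter(dataset["train"]["tags"]))  # label distribution in the train split
```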

## Citation

```
@inproceedings{dementieva-etal-2025-cross,
    title = "Cross-lingual Text Classification Transfer: The Case of {U}krainian",
    author = "Dementieva, Daryna  and
      Khylenko, Valeriia  and
      Groh, Georg",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.97/",
    pages = "1451--1464",
    abstract = "Despite the extensive amount of labeled datasets in the NLP text classification field, the persistent imbalance in data availability across various languages remains evident. To support further fair development of NLP models, exploring the possibilities of effective knowledge transfer to new languages is crucial. Ukrainian, in particular, stands as a language that still can benefit from the continued refinement of cross-lingual methodologies. Due to our knowledge, there is a tremendous lack of Ukrainian corpora for typical text classification tasks, i.e., different types of style, or harmful speech, or texts relationships. However, the amount of resources required for such corpora collection from scratch is understandable. In this work, we leverage the state-of-the-art advances in NLP, exploring cross-lingual knowledge transfer methods avoiding manual data curation: large multilingual encoders and translation systems, LLMs, and language adapters. We test the approaches on three text classification tasks{---}toxicity classification, formality classification, and natural language inference (NLI){---}providing the {\textquotedblleft}recipe{\textquotedblright} for the optimal setups for each task."
}
```

and

```
@inproceedings{dementieva-etal-2024-toxicity,
    title = "Toxicity Classification in {U}krainian",
    author = "Dementieva, Daryna  and
      Khylenko, Valeriia  and
      Babakov, Nikolay  and
      Groh, Georg",
    editor = {Chung, Yi-Ling  and
      Talat, Zeerak  and
      Nozza, Debora  and
      Plaza-del-Arco, Flor Miriam  and
      R{\"o}ttger, Paul  and
      Mostafazadeh Davani, Aida  and
      Calabrese, Agostina},
    booktitle = "Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.woah-1.19",
    doi = "10.18653/v1/2024.woah-1.19",
    pages = "244--255",
    abstract = "The task of toxicity detection is still a relevant task, especially in the context of safe and fair LMs development. Nevertheless, labeled binary toxicity classification corpora are not available for all languages, which is understandable given the resource-intensive nature of the annotation process. Ukrainian, in particular, is among the languages lacking such resources. To our knowledge, there has been no existing toxicity classification corpus in Ukrainian. In this study, we aim to fill this gap by investigating cross-lingual knowledge transfer techniques and creating labeled corpora by: (i){\textasciitilde}translating from an English corpus, (ii){\textasciitilde}filtering toxic samples using keywords, and (iii){\textasciitilde}annotating with crowdsourcing. We compare LLMs prompting and other cross-lingual transfer approaches with and without fine-tuning offering insights into the most robust and efficient baselines.",
}
```