---
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tags
    dtype: float64
  splits:
  - name: train
    num_bytes: 2105604
    num_examples: 12682
  - name: validation
    num_bytes: 705759
    num_examples: 4227
  - name: test
    num_bytes: 710408
    num_examples: 4214
  download_size: 2073133
  dataset_size: 3521771
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

## Ukrainian Toxicity Dataset

This is the first toxicity classification dataset of its kind for the Ukrainian language.

Due to the subjective nature of toxicity, definitions of toxic language will vary. We include items that are commonly referred to as vulgar or profane language. ([NLLB paper](https://arxiv.org/pdf/2207.04672.pdf))

## Dataset formation:
1. Ukrainian tweets were filtered with toxic keywords so that only tweets containing toxic language remained. Source data: https://github.com/saganoren/ukr-twi-corpus
2. Non-toxic sentences were taken from the same tweet corpus, as well as from news and fiction sentences in UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
3. The data was then split into train/validation/test sets and balanced both by the toxic/non-toxic label and by data source (a minimal sketch of this pipeline is given below).
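Below is a minimal sketch of the formation steps described above. The keyword lexicon, input file names, column names, and split ratios are illustrative assumptions, not the exact resources or settings used to build this dataset.

```python
# Illustrative sketch of the keyword filtering, balancing, and splitting steps.
# File names and the keyword list are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split

TOXIC_KEYWORDS = ["..."]  # placeholder for the toxic-keyword lexicon

def is_toxic(text: str) -> bool:
    text = text.lower()
    return any(kw in text for kw in TOXIC_KEYWORDS)

# Hypothetical inputs: raw tweets and non-toxic sentences from news/fiction.
tweets = pd.read_csv("ukr_tweets.csv")            # column: text
neutral = pd.read_csv("ud_uk_iu_sentences.csv")   # column: text

toxic_df = tweets[tweets["text"].apply(is_toxic)].assign(tags=1.0)
nontoxic_df = neutral.assign(tags=0.0)

# Balance the two classes by downsampling the larger one.
n = min(len(toxic_df), len(nontoxic_df))
balanced = pd.concat([
    toxic_df.sample(n, random_state=42),
    nontoxic_df.sample(n, random_state=42),
])

# Stratified train/validation/test split (roughly 60/20/20 here).
train, rest = train_test_split(
    balanced, test_size=0.4, stratify=balanced["tags"], random_state=42
)
validation, test = train_test_split(
    rest, test_size=0.5, stratify=rest["tags"], random_state=42
)
```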

Labels (stored in the `tags` column): 0 - non-toxic, 1 - toxic.

## Load dataset:
```python
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
```
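
Once loaded, the splits and the label column can be inspected directly; this snippet only assumes the schema declared in the dataset card above:

```python
# The loaded object is a DatasetDict with train/validation/test splits;
# each example has a "text" string and a "tags" label (0.0 = non-toxic, 1.0 = toxic).
print(dataset)
print(dataset["train"][0])
```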