---
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tags
    dtype: float64
  splits:
  - name: train
    num_bytes: 2105604
    num_examples: 12682
  - name: validation
    num_bytes: 705759
    num_examples: 4227
  - name: test
    num_bytes: 710408
    num_examples: 4214
  download_size: 2073133
  dataset_size: 3521771
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
|
|
|
## Ukrainian Toxicity Dataset
|
|
|
This is the first toxicity classification dataset of its kind for the Ukrainian language.
|
|
|
Due to the subjective nature of toxicity, definitions of toxic language will vary. We include items that are commonly referred to as vulgar or profane language ([NLLB paper](https://arxiv.org/pdf/2207.04672.pdf)).
|
|
|
## Dataset formation:

1. Ukrainian tweets were filtered by toxic keywords so that only tweets containing toxic language remained. Source data: https://github.com/saganoren/ukr-twi-corpus
2. Non-toxic sentences were taken from the same tweet corpus, as well as from news and fiction sentences in UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
3. Finally, the data were split into train, validation, and test sets, with every split balanced both by the toxic/non-toxic label and by data source (a minimal sketch of such a stratified split is shown below).
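
The exact splitting script is not published here; the following is only a minimal sketch of the kind of stratified split described in step 3, using synthetic stand-in data and illustrative `label` and `source` column names:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for the combined corpus: "label" is 0/1 toxicity and "source"
# marks where a sentence came from; both column names are illustrative.
df = pd.DataFrame({
    "text": [f"sentence {i}" for i in range(1000)],
    "label": [i % 2 for i in range(1000)],
    "source": ["tweets" if i % 4 < 2 else "ud_iu" for i in range(1000)],
})

# Stratify on the joint (label, source) key so each split stays balanced
# both by toxicity and by data source.
strata = df["label"].astype(str) + "_" + df["source"]
train_df, rest_df = train_test_split(df, test_size=0.4, stratify=strata, random_state=0)
val_df, test_df = train_test_split(
    rest_df,
    test_size=0.5,
    stratify=rest_df["label"].astype(str) + "_" + rest_df["source"],
    random_state=0,
)
```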
|
|
|
Labels: 0 - non-toxic, 1 - toxic.
|
|
|
## Load dataset:

```python
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
```
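
Per the metadata above, each example has a `text` string and a numeric `tags` label, with 0 meaning non-toxic and 1 meaning toxic. A minimal usage sketch for inspecting the splits and label balance:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")

# Number of examples per split
print({split: ds.num_rows for split, ds in dataset.items()})

# One training example: a "text" string with its "tags" label
print(dataset["train"][0])

# Label balance in the training split (0 = non-toxic, 1 = toxic)
print(Counter(dataset["train"]["tags"]))
```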