---
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tags
    dtype: float64
  splits:
  - name: train
    num_bytes: 2105604
    num_examples: 12682
  - name: validation
    num_bytes: 705759
    num_examples: 4227
  - name: test
    num_bytes: 710408
    num_examples: 4214
  download_size: 2073133
  dataset_size: 3521771
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

## Ukrainian Toxicity Dataset

This is the first toxicity classification dataset of its kind for the Ukrainian language. The dataset was obtained semi-automatically by filtering on toxic keywords. For a manually collected, crowdsourced dataset, please check [textdetox/multilingual_toxicity_dataset](https://huggingface.co/datasets/textdetox/multilingual_toxicity_dataset).

Due to the subjective nature of toxicity, definitions of toxic language vary. We include items that are commonly referred to as vulgar or profane language ([NLLB paper](https://arxiv.org/pdf/2207.04672.pdf)).

## Dataset formation:

1. Ukrainian tweets were filtered with toxic keywords so that only tweets containing toxic language remained. Source data: https://github.com/saganoren/ukr-twi-corpus
2. Non-toxic sentences were obtained from the tweet corpus above, as well as from news and fiction sentences in UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
3. The data were then split into train, validation, and test sets, with each split balanced both by the toxic/non-toxic criterion and by data source.

Labels: 0 - non-toxic, 1 - toxic.

## Load dataset:

```python
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
```

## Citation

```
@article{dementieva2024toxicity,
  title={Toxicity Classification in Ukrainian},
  author={Dementieva, Daryna and Khylenko, Valeriia and Babakov, Nikolay and Groh, Georg},
  journal={arXiv preprint arXiv:2404.17841},
  year={2024}
}
```
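
## Example: checking label balance

A minimal sketch of inspecting the loaded splits, assuming only what the card above states: the splits are `train`/`validation`/`test`, the label column is named `tags`, it is stored as float64, and 0 means non-toxic while 1 means toxic.

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")

# The label column is "tags" and is stored as float64 (see the card metadata),
# so cast to int before counting (0 - non-toxic, 1 - toxic).
for split in ("train", "validation", "test"):
    labels = [int(t) for t in dataset[split]["tags"]]
    print(split, Counter(labels))
```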