---
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tags
    dtype: float64
  splits:
  - name: train
    num_bytes: 2105604
    num_examples: 12682
  download_size: 1236621
  dataset_size: 2105604
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

Dataset formation:

1. Ukrainian tweets were filtered so that only tweets containing toxic language remained. Source of the Ukrainian data: https://github.com/saganoren/ukr-twi-corpus
2. Non-toxic sentences were obtained from the tweet corpus above, as well as from news and fiction sentences in UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
3. The dataset was then split into train/test/val, with the data balanced both by the toxic/non-toxic label and by data source.
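
The exact balancing procedure for step 3 is not documented here; the sketch below only illustrates one common way to do it, using scikit-learn's `train_test_split` stratified on a combined label/source key. The column names `toxic` and `source` are hypothetical and not part of this dataset's schema.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def balanced_split(df: pd.DataFrame, seed: int = 42):
    """Split a frame into train/val/test (80/10/10), stratified on the
    combined toxic-label + source key so every split keeps roughly the
    same proportion of toxic/non-toxic examples and of data sources.

    Assumes columns 'toxic' (0/1) and 'source' (e.g. 'tweets', 'uk_iu');
    these names are illustrative, not taken from the dataset card.
    """
    strata = df["toxic"].astype(str) + "_" + df["source"]
    train_df, rest_df = train_test_split(
        df, test_size=0.2, stratify=strata, random_state=seed
    )
    rest_strata = rest_df["toxic"].astype(str) + "_" + rest_df["source"]
    val_df, test_df = train_test_split(
        rest_df, test_size=0.5, stratify=rest_strata, random_state=seed
    )
    return train_df, val_df, test_df
```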
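
To load the published `train` split with the `datasets` library (the repo id below is a placeholder; replace it with this dataset's actual Hub path):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the real Hub path of this dataset.
ds = load_dataset("<user>/<dataset-name>", split="train")

print(ds)              # features: text (string), tags (float64); 12,682 examples
print(ds[0]["text"])   # a Ukrainian sentence or tweet
print(ds[0]["tags"])   # label stored as a float (assumed to mark toxicity)
```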