---
language:
- en
- ru
- uk
- es
- de
- ar
- am
- hi
- zh
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: am
    num_bytes: 3540
    num_examples: 245
  - name: es
    num_bytes: 14683
    num_examples: 1195
  - name: ru
    num_bytes: 4174135
    num_examples: 140517
  - name: uk
    num_bytes: 153865
    num_examples: 7356
  - name: en
    num_bytes: 39323
    num_examples: 3386
  - name: zh
    num_bytes: 45303
    num_examples: 3839
  - name: ar
    num_bytes: 6050
    num_examples: 430
  - name: hi
    num_bytes: 2771
    num_examples: 133
  - name: de
    num_bytes: 3036
    num_examples: 247
  download_size: 2071857
  dataset_size: 4442706
configs:
- config_name: default
  data_files:
  - split: am
    path: data/am-*
  - split: es
    path: data/es-*
  - split: ru
    path: data/ru-*
  - split: uk
    path: data/uk-*
  - split: en
    path: data/en-*
  - split: zh
    path: data/zh-*
  - split: ar
    path: data/ar-*
  - split: hi
    path: data/hi-*
  - split: de
    path: data/de-*
---

This is a compilation of toxic word lists for 9 languages (English, Russian, Ukrainian, Spanish, German, Amharic, Arabic, Chinese, Hindi), used for the [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.

The original sources:

* English: [link](https://github.com/coffee-and-fun/google-profanity-words/blob/main/data/en.txt)
* Russian: [link](https://github.com/s-nlp/rudetoxifier/blob/main/data/train/MAT_FINAL_with_unigram_inflections.txt)
* Ukrainian: [link](https://github.com/saganoren/obscene-ukr)
* Spanish: [link](https://github.com/facebookresearch/flores/blob/main/toxicity/README.md)
* German: [link](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words)
* Amharic: ours
* Arabic: ours
* Hindi: [link](https://github.com/facebookresearch/flores/blob/main/toxicity/README.md)

For all languages, we also added toxic words from Facebook Research's Toxicity-200 [corpus](https://github.com/facebookresearch/flores/blob/main/toxicity/README.md).
All credit goes to the authors of the original toxic word lists.
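A minimal sketch of how lexicons like these might be used to flag toxic tokens in text. The word lists below are hypothetical placeholders, not entries from this dataset; in practice you would populate them from the per-language splits above.

```python
# Hypothetical sketch: flagging toxic tokens with per-language word lists.
# The entries below are placeholders, NOT taken from the actual dataset.
TOXIC_LEXICONS = {
    "en": {"badword", "slur"},   # placeholder entries
    "es": {"palabrota"},         # placeholder entries
}

def flag_toxic(text: str, lang: str) -> list:
    """Return the lexicon entries found in `text` (case-insensitive,
    whitespace tokenization). Unknown languages yield an empty list."""
    lexicon = TOXIC_LEXICONS.get(lang, set())
    tokens = text.lower().split()
    return [tok for tok in tokens if tok in lexicon]
```

Real usage would likely need language-appropriate tokenization (e.g. character-level matching for Chinese) and lemmatization for morphologically rich languages such as Russian or Ukrainian, where a single lemma has many inflected forms.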