---
language:
- en
- ru
- uk
- es
- de
- ar
- am
- hi
- zh
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: am
    num_bytes: 4573
    num_examples: 261
  - name: es
    num_bytes: 14683
    num_examples: 1195
  - name: ru
    num_bytes: 4169218
    num_examples: 140296
  - name: uk
    num_bytes: 153865
    num_examples: 7356
  - name: en
    num_bytes: 39323
    num_examples: 3386
  - name: zh
    num_bytes: 9031
    num_examples: 823
  - name: ar
    num_bytes: 6050
    num_examples: 430
  - name: hi
    num_bytes: 2771
    num_examples: 133
  - name: de
    num_bytes: 3497
    num_examples: 272
  download_size: 2040710
  dataset_size: 4403011
configs:
- config_name: default
  data_files:
  - split: am
    path: data/am-*
  - split: es
    path: data/es-*
  - split: ru
    path: data/ru-*
  - split: uk
    path: data/uk-*
  - split: en
    path: data/en-*
  - split: zh
    path: data/zh-*
  - split: ar
    path: data/ar-*
  - split: hi
    path: data/hi-*
  - split: de
    path: data/de-*
---

This is a compilation of toxic word lists for nine languages (English, Russian, Ukrainian, Spanish, German, Amharic, Arabic, Chinese, Hindi) used in the [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.

The original sources:
* English: [link](https://github.com/coffee-and-fun/google-profanity-words/blob/main/data/en.txt)
* Russian: [link](https://github.com/s-nlp/rudetoxifier/blob/main/data/train/MAT_FINAL_with_unigram_inflections.txt)
* Ukrainian: [link](https://github.com/saganoren/obscene-ukr)
* Spanish: [link](https://github.com/facebookresearch/flores/blob/main/toxicity/README.md)
* German: [link](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words)
* Amharic: ours
* Arabic: ours
* Hindi: [link](https://github.com/facebookresearch/flores/blob/main/toxicity/README.md)

All credits go to the authors of the original toxic word lists.
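
A minimal loading sketch with the `datasets` library, assuming this card's repository ID (`textdetox/multilingual_toxic_lexicon` below is a placeholder; substitute the actual ID). Each language code is exposed as its own split with a single string column `text`:

```python
from datasets import load_dataset

# Placeholder repository ID; replace with the actual ID of this dataset.
REPO_ID = "textdetox/multilingual_toxic_lexicon"

# Each language code (en, ru, uk, es, de, ar, am, hi, zh) is a separate split
# with one column "text" holding one toxic word per row.
en_words = load_dataset(REPO_ID, split="en")

print(len(en_words))        # 3386 entries, per the split metadata above
print(en_words[0]["text"])  # first English entry
```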