---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- zero-shot-classification
size_categories:
- 1M<n<10M
---

### Original Source?

Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original sources of these datasets... All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API. Recently, I came across 6 of these datasets again, so I can credit them below.

Known datasets:
- tomekkorbak/pile-toxicity-balanced2 (HuggingFace)
- datasets/thai_toxicity_tweet (HuggingFace)
- datasets/ethos (HuggingFace)
- inspection-ai/japanese-toxic-dataset (GitHub)
- mathigatti/sexting-dataset (GitHub)
- omar-sharif03/BAD-Bangla-Aggressive-Text-Dataset (GitHub)

I also manually collected and wrote 100 rows of data.
### Limitations

Limitations include:
- All labels were rounded to the nearest integer. A text classified as 46-54% toxic may not read as clearly toxic or clearly neutral.
- Moderators disagreed on some labels, due to ambiguity and lack of context.
- When the "text" column contains only URL(s), emojis, or anything else unrecognizable as natural language, the corresponding "lang" is "unknown".
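For example, here is a minimal sketch of how you might load the data and drop the rows described in the last bullet, assuming the "text" and "lang" columns above; the repo id is a placeholder, so substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id — replace with this dataset's actual Hub path.
ds = load_dataset("username/toxicity-dataset", split="train")

# Drop rows whose language could not be identified (URL-only, emoji-only, etc.),
# which this card marks with lang == "unknown".
ds = ds.filter(lambda row: row["lang"] != "unknown")

print(ds)
```

Note that because labels were rounded to the nearest integer, the borderline (46-54%) cases can't be filtered out after the fact: the fractional scores are not shipped with the dataset.

Have fun modelling!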