|
---
license: mit
task_categories:
- text-classification
- token-classification
size_categories:
- 1M<n<10M
datasets:
- tomekkorbak/pile-toxicity-balanced2
- thai_toxicity_tweet
---
|
|
|
|
|
About 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original sources of these datasets...
|
All I know is that I looked everywhere: HuggingFace, research papers, GitHub, Kaggle, and Google search. I even fetched 20K+ tweets using the Twitter API. |
|
Today (6/28/2023) I came across three newer HuggingFace datasets, so I added them to this dataset. |
|
|
|
|
|
The deduplicated training data alone consists of 2,880,230 rows of comments and messages. Of these, 416,457 rows are classified as toxic and the remaining 2,463,773 are neutral. The table below shows the composition of each split:
|
| File | Toxic | Neutral | Total |
|------|-------|---------|-------|
| [multilingual-train-deduplicated.csv](./multilingual-train-deduplicated.csv) | 416,457 | 2,463,773 | 2,880,230 |
| [multilingual-validation.csv](./multilingual-validation.csv) | 1,230 | 6,770 | 8,000 |
| [multilingual-test.csv](./multilingual-test.csv) | 14,410 | 49,402 | 63,812 |

|
Each CSV file has two columns: `text` and `is_toxic`. |
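
For a quick start, here is a minimal loading sketch using the generic CSV loader from the 🤗 `datasets` library. The file names come from the table above; the sketch assumes you have downloaded the three CSVs into your working directory (otherwise, point `data_files` at the full paths or URLs).

```python
from collections import Counter

from datasets import load_dataset

# File names taken from the table above; adjust the paths if the CSVs
# live somewhere other than the current working directory.
data_files = {
    "train": "multilingual-train-deduplicated.csv",
    "validation": "multilingual-validation.csv",
    "test": "multilingual-test.csv",
}
ds = load_dataset("csv", data_files=data_files)

# Each split exposes the two columns described above: `text` and `is_toxic`.
print(ds)
print(ds["train"][0])

# Compare the label distribution against the counts in the table
# (e.g. 416,457 toxic vs. 2,463,773 neutral rows in the training split).
print(Counter(ds["train"]["is_toxic"]))
```
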
|
|
|
Have fun modelling! |
|
|