---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- token-classification
size_categories:
- 1M<n<10M
language:
- ar
- es
- pa
- th
- et
- fr
- fi
- no
- hu
- lt
- ur
- so
- pl
- el
- mr
- sk
- gu
- he
- af
- te
- ro
- lv
- sv
- ne
- kn
- it
- mk
- cs
- en
- de
- da
- ta
- bn
- pt
- sq
- tl
- uk
- bg
- ca
- sw
- hi
- zh
- ja
- hr
- ru
- vi
- id
- sl
- cy
- ko
- nl
- ml
- tr
- fa
tags:
- nlp
---
This is a large multilingual toxicity dataset containing nearly 3M rows of text in 55 natural languages, all written by humans rather than produced by machine-translation models.
The preprocessed training split alone consists of 2,880,230 rows of comments, tweets, and messages; 416,457 of these are labeled toxic and the remaining 2,463,773 are labeled neutral. The table below shows the data composition:

| File | Toxic | Neutral | Total |
|-------|----------|----------|----------|
| [multilingual-train-deduplicated.csv](./multilingual-train-deduplicated.csv) | 416,457 | 2,463,773 | 2,880,230 |
| [multilingual-validation.csv](./multilingual-validation.csv) | 1,230 | 6,770 | 8,000 |
| [multilingual-test.csv](./multilingual-test.csv) | 14,410 | 49,402 | 63,812 |

Each CSV file has two columns: `text` and `is_toxic`.
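As a quick sanity check, here is a minimal sketch that loads the training split with pandas and reproduces the counts above, assuming the CSV files have been downloaded locally from this repository. The 0/1 encoding of `is_toxic` is an assumption based on the Limitations section below; verify it against your copy.

```python
import pandas as pd

# Load the deduplicated training split. Each row holds a `text` string and
# an `is_toxic` label (assumed 0 = neutral, 1 = toxic after rounding).
df = pd.read_csv("multilingual-train-deduplicated.csv")

print(df.columns.tolist())            # expected: ['text', 'is_toxic']
print(len(df))                        # expected: 2880230
print(df["is_toxic"].value_counts())  # expected: ~2,463,773 neutral vs. ~416,457 toxic
```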
Supported languages:
- Afrikaans
- Albanian
- Arabic
- Bengali
- Bulgarian
- Catalan
- Chinese (Simplified)
- Chinese (Traditional)
- Croatian
- Czech
- Danish
- Dutch
- English
- Estonian
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Kannada
- Korean
- Latvian
- Lithuanian
- Macedonian
- Malayalam
- Marathi
- Nepali
- Norwegian
- Persian
- Polish
- Portuguese
- Punjabi
- Romanian
- Russian
- Slovak
- Slovenian
- Somali
- Spanish
- Swahili
- Swedish
- Tagalog
- Tamil
- Telugu
- Thai
- Turkish
- Ukrainian
- Urdu
- Vietnamese
- Welsh
<br>
### Original Source?
Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but I have completely forgotten the original sources of these datasets...
All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
Today (6/28/2023) I came across two newer HuggingFace datasets, so I remembered to credit them below.
Known datasets:
- tomekkorbak/pile-toxicity-balanced2
- datasets/thai_toxicity_tweet
<br>
### Limitations
Some limitations include:
- All labels were rounded to the nearest integer, so a text originally scored as 46%-54% toxic may not read as noticeably toxic or noticeably neutral.
- Moderators disagreed on some labels due to ambiguity and lack of context.
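If you want to work with all three splits together, here is a hedged sketch using the `datasets` library; the split-to-file mapping mirrors the composition table above, and the paths assume the CSVs sit in your working directory.

```python
from datasets import load_dataset

# Build a DatasetDict from the three CSV splits; adjust paths as needed.
dataset = load_dataset(
    "csv",
    data_files={
        "train": "multilingual-train-deduplicated.csv",
        "validation": "multilingual-validation.csv",
        "test": "multilingual-test.csv",
    },
)

print(dataset)              # DatasetDict with train / validation / test splits
print(dataset["train"][0])  # {'text': ..., 'is_toxic': ...}
```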
Have fun modelling!