FredZhang7 committed
Commit 613b266 (1 parent: e0e5b16)

adding more datasets...

Files changed (1): README.md (+11 -6)
README.md CHANGED
@@ -71,7 +71,7 @@ tags:
 
 <br>
 
-This is a large multilingual toxicity dataset with nearly 3M rows of text data from 55 natural languages, all of which are written/sent by humans, not machine translation models.
+This is a large multilingual toxicity dataset with 3M rows of text data from 55 natural languages, all of which are written/sent by humans, not machine translation models.
 
 The preprocessed training data alone consists of 2,880,667 rows of comments, tweets, and messages. Among these rows, 416,529 are classified as toxic, while the remaining 2,463,773 are considered neutral. Below is a table to illustrate the data composition:
 | | Toxic | Neutral | Total |
@@ -86,6 +86,7 @@ Supported types of toxicity:
 - Violent Extremism
 - Hate Speech
 - Serious Insults
+- Sexting
 - Obscene
 - Threats
 - Harassment
@@ -158,12 +159,17 @@ Supported languages:
 ### Original Source?
 Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
 All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
-Recently, I came across two newer HuggingFace datasets, so I remembered to credit them below.
+Recently, I came across 6 datasets, so I remembered to credit them below.
 
 Known datasets:
-- tomekkorbak/pile-toxicity-balanced2
-- datasets/thai_toxicity_tweet
+- tomekkorbak/pile-toxicity-balanced2 (HuggingFace)
+- datasets/thai_toxicity_tweet (HuggingFace)
+- datasets/ethos (HuggingFace)
 - inspection-ai/japanese-toxic-dataset (GitHub)
+- mathigatti/sexting-dataset (GitHub)
+- omar-sharif03/BAD-Bangla-Aggressive-Text-Dataset (GitHub)
+
+I manually collected and wrote 100 rows of data.
 
 <br>
 
@@ -171,7 +177,6 @@ Known datasets:
 Limitations include:
 - All labels were rounded to the nearest integer. If a text was classified as 46%-54% toxic, the text itself might not be noticeably toxic or neutral.
 - There were disagreements among moderators on some labels, due to ambiguity and lack of context.
-- When there're only URL(s), emojis, or anything that's unrecognizable as natural language in the "text" column, the corresponding "lang" is "unkown".
-- The validation data is not representative of the training data.
+- When there're only URL(s), emojis, or anything that's unrecognizable as natural language in the "text" column, the corresponding "lang" is "unknown".
 
 Have fun modelling!
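
For reference, here is a minimal sketch of how one might sanity-check the counts quoted in the README after this change, using the Hugging Face `datasets` library. The dataset id (`FredZhang7/toxi-text-3M`) and the column names (`lang`, `is_toxic`) are assumptions not confirmed by this commit; substitute the actual repo id and schema.

```python
# Minimal sketch: verify the class balance and language coverage quoted above.
# ASSUMPTIONS (not confirmed by this commit): the dataset id
# "FredZhang7/toxi-text-3M" and the column names "lang" and "is_toxic".
from collections import Counter

from datasets import load_dataset

train = load_dataset("FredZhang7/toxi-text-3M", split="train")

# Labels were rounded to the nearest integer, so each row is 0 (neutral) or 1 (toxic).
labels = Counter(train["is_toxic"])
print(f"toxic:   {labels[1]:,}")   # README states 416,529
print(f"neutral: {labels[0]:,}")   # README states 2,463,773
print(f"total:   {len(train):,}")  # README states 2,880,667

# Rows that contain only URLs/emojis carry lang == "unknown" (see Limitations).
langs = Counter(train["lang"])
print(f"distinct lang values: {len(langs)}")  # 55 languages plus "unknown"
```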