FredZhang7 committed
Commit 81606aa
1 Parent(s): fafef7a

Update README.md

Files changed (1): README.md (+12, -2)
README.md CHANGED
@@ -11,6 +11,16 @@ datasets:
 ---
 
 
-About 11 months ago, I downloaded and preprocessed 2.7M rows of text/toxicity data, but completely forgot the original source of these datasets...
+About 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
 All I know is that I looked everywhere: HuggingFace, research papers, GitHub, Kaggle, and Google search. I even fetched 20K+ tweets using the Twitter API.
-Today (6/28/2023) I came across three newer HuggingFace datasets, so I added them to this dataset.
+Today (6/28/2023) I came across three newer HuggingFace datasets, so I added them to this dataset.
+
+
+The deduplicated training data alone consists of 2,880,230 rows of comments and messages. Among these rows, 416,457 are classified as toxic, while the remaining 2,463,773 are considered neutral. Below is a table to illustrate the data composition:
+
+| | Toxic | Neutral | Total |
+|-------|----------|----------|----------|
+| [multilingual-train-deduplicated.csv](./multilingual-train-deduplicated.csv) | 416,457 | 2,463,773 | 2,880,230 |
+| [multilingual-validation.csv](./multilingual-validation.csv) | 1,230 | 6,770 | 8,000 |
+| [multilingual-test.csv](./multilingual-test.csv) | 14,410 | 49,402 | 63,812 |
+
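The table added above only lists aggregate counts per split. The snippet below is a minimal sketch of how those counts could be reproduced locally; it assumes pandas is installed, the three CSV files sit in the working directory, and each row carries a binary toxicity label in a column named `is_toxic` (the column name is an assumption, not something stated in this commit).

```python
# Minimal sketch: tally toxic vs. neutral rows for each split listed in the README table.
# Assumptions (not stated in this commit): pandas is available, the CSVs are in the
# current directory, and the toxicity label lives in a binary column named "is_toxic".
import pandas as pd

FILES = [
    "multilingual-train-deduplicated.csv",
    "multilingual-validation.csv",
    "multilingual-test.csv",
]

for path in FILES:
    df = pd.read_csv(path)
    total = len(df)                            # e.g. 2,880,230 for the training split
    toxic = int((df["is_toxic"] == 1).sum())   # e.g. 416,457 for the training split
    neutral = total - toxic                    # e.g. 2,463,773 for the training split
    print(f"{path}: total={total:,} toxic={toxic:,} neutral={neutral:,}")
```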