NoisyHate committed on
Commit
f8b8a67
1 Parent(s): 6668863

upload data

README.md ADDED
# toxic-detection-testset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Citation Information](#citation-information)

## Dataset Description

### Dataset Summary

This dataset is a test set for toxicity detection. It contains clean data paired with perturbed versions that carry human-written perturbations collected online. In addition, the dataset can be used to benchmark misspelling correctors.

### Languages

English

## Dataset Structure

### Data Instances
```
{
  "clean_version": "this is pretty much exactly how i feel damn",
  "perturbed_version": "this is pretty much exactly how i feel daaammnn",
  "toxicity": 0.7,
  "obscene": 0.7,
  "sexual_explicit": 0,
  "identity_attack": 0,
  ...
  "insult": 0.2,
  "quality_mean": 4
}
```
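As a minimal sketch of how such an instance might be consumed, the snippet below pairs the clean and perturbed versions for a robustness check. The field names follow the example above; the 0.5 toxicity threshold is an illustrative assumption, not something the dataset prescribes.

```python
# Sketch: comparing a clean/perturbed pair from one instance.
# Field names follow the example instance; the 0.5 threshold is
# an assumption for illustration, not part of the dataset.

instance = {
    "clean_version": "this is pretty much exactly how i feel damn",
    "perturbed_version": "this is pretty much exactly how i feel daaammnn",
    "toxicity": 0.7,
    "insult": 0.2,
    "quality_mean": 4,
}

def is_toxic(score: float, threshold: float = 0.5) -> bool:
    """Binarize a Jigsaw-style toxicity score at an assumed threshold."""
    return score >= threshold

# A robust toxicity classifier should assign the same label to both
# versions; the gold label here comes from the Jigsaw annotations.
gold_label = is_toxic(instance["toxicity"])
pair = (instance["clean_version"], instance["perturbed_version"])
```

A classifier evaluated on this set would be scored on whether its prediction for `perturbed_version` still matches `gold_label`.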
### Data Fields

This dataset is derived from the [Jigsaw data](https://www.kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification/data). Hence, it keeps all the useful metrics and attributes.

**Main**
* clean_version
* perturbed_version

**Metrics**
* toxicity
* severe_toxicity
* obscene
* threat
* insult
* identity_attack
* sexual_explicit

**Identity attributes**
* male
* female
* transgender
* other_gender
* heterosexual
* homosexual_gay_or_lesbian
* bisexual
* other_sexual_orientation
* christian
* jewish
* muslim
* hindu
* buddhist
* atheist
* other_religion
* black
* white
* asian
* latino
* other_race_or_ethnicity
* physical_disability
* intellectual_or_learning_disability
* psychiatric_or_mental_illness
* other_disability

### Data Splits

test: 1339
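The repository ships the split as CSV files (`data/clean.csv` and `data/pert.csv`). A sketch of parsing rows shaped like the field list above follows; the exact column layout is an assumption for illustration, so the example parses an in-memory sample rather than the real files.

```python
import csv
import io

# Sketch: parsing rows with the fields listed above. In real use you
# would open data/clean.csv / data/pert.csv from this repository; the
# column layout shown here is an assumption for illustration.
sample = io.StringIO(
    "clean_version,perturbed_version,toxicity\n"
    "this is pretty much exactly how i feel damn,"
    "this is pretty much exactly how i feel daaammnn,0.7\n"
)

rows = list(csv.DictReader(sample))
pairs = [(r["clean_version"], r["perturbed_version"]) for r in rows]
```

Note that `csv.DictReader` yields every value as a string, so numeric fields such as `toxicity` need an explicit `float(...)` conversion.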
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Jigsaw is a widely used toxic-speech classification dataset containing approximately 2 million public comments from the Civil Comments platform. In addition to the toxicity score labels for toxicity classification, the Jigsaw dataset provides several toxicity sub-type dimensions that indicate a particular comment's target groups, such as male, female, black, and Asian. Because of these rich identity annotations and the significant data volume, we adopt this dataset as our raw data source. Since the dataset is a standard benchmark for content moderation tasks, this choice also lowers the barrier for the community to adopt NoisyHate.

Since the comments in the Jigsaw dataset contain many special characters, emojis, and informal language, data cleaning was necessary to ensure data quality. Following a typical text-processing pipeline, we removed duplicated texts, special characters, special punctuation, hyperlinks, and numbers. Since we only focus on English texts, sentences containing non-standard English words were filtered out. 131,982 comments remained after this cleaning step.
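The cleaning steps described above can be sketched as a small pipeline. The exact rules used for the dataset are not published, so the regexes below are assumptions that illustrate the listed steps (hyperlinks, numbers, special characters, deduplication).

```python
import re

def clean_comment(text: str) -> str:
    """Illustrative version of the cleaning steps described above.
    The real rules are not published; these regexes are assumptions."""
    text = re.sub(r"https?://\S+", " ", text)   # strip hyperlinks
    text = re.sub(r"\d+", " ", text)            # strip numbers
    text = re.sub(r"[^a-zA-Z\s']", " ", text)   # strip special characters
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

def deduplicate(comments):
    """Drop exact duplicate texts while preserving order."""
    return list(dict.fromkeys(comments))
```

Filtering out sentences with non-standard English words would require an additional dictionary lookup step, which is omitted here.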
#### Who are the source language producers?

The source data is provided by the Conversation AI team, a research initiative founded by Jigsaw and Google.

### Annotations

#### Annotation process

During annotation, we display a guideline that explains the definition of a human-generated perturbation and provides examples of both high-quality and low-quality perturbations. Such a training phase has been shown to elicit high-quality responses from human workers, especially for labeling tasks. Each MTurk worker is then presented with a pair consisting of a perturbed sentence and its clean version and is asked to rate the quality of the perturbed one (the guideline and UI can be found in our [paper](#citation-information)).

We recruited five different workers from the North America region through five assignments to assess each pair. A five-second countdown timer was also set for each task to ensure workers spent enough time on it. To verify the quality of their responses, we designed an attention question that asks workers to click on the perturbed word in the given sentence before they provide their quality ratings. Workers who cannot correctly identify the perturbation's location in the given sentence are blocked from future batches. We aimed to pay the workers at an average rate of \$10 per hour, which is well above the federal minimum wage (\$7.25 per hour). The payment for each task was estimated from the average length of the sentences (around 25 words per pair) and the average reading speed of native speakers (around 228 words per minute).
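The timing implied by those numbers can be checked with a quick calculation. The per-pair payment shown is only what the stated rates imply, not a reported figure, and it ignores the extra time spent on the attention question and rating itself.

```python
# Back-of-the-envelope check of the rates stated above.
WORDS_PER_PAIR = 25      # average sentence-pair length
WORDS_PER_MINUTE = 228   # average native-speaker reading speed
HOURLY_RATE_USD = 10.0   # target average pay

seconds_per_pair = WORDS_PER_PAIR / WORDS_PER_MINUTE * 60
# Payment implied if a task took exactly the reading time.
implied_pay_per_pair = HOURLY_RATE_USD / 3600 * seconds_per_pair
```

Reading alone takes roughly 6.6 seconds per pair, so the \$10/hour target implies on the order of two cents per pair before accounting for rating time.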
#### Who are the annotators?

US Amazon MTurk workers with a HIT Approval Rate greater than 98% and more than 1,000 approved HITs.

### Personal and Sensitive Information

N/A

## Additional Information

### Dataset Curators

[More Information Needed]

### Citation Information

Paper is coming soon.

Hugging Face link: https://huggingface.co/datasets/yiran223/toxic-detection-testset-perturbations
data/clean.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/pert.csv ADDED
The diff for this file is too large to render. See raw diff
 
toxic detection model test set with perturbations.csv ADDED
The diff for this file is too large to render. See raw diff