Commit 48a7521 (parent: 6df319d), committed by system (HF staff)

Update files from the datasets library (from 1.9.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.9.0

Files changed (1): README.md (+30 −18)
README.md CHANGED
@@ -52,7 +52,7 @@ paperswithcode_id: hatexplain
 - **Repository:** https://github.com/punyajoy/HateXplain/
 - **Paper:** https://arxiv.org/abs/2012.10289
 - **Leaderboard:** [Needs More Information]
-- **Point of Contact:** [Needs More Information]
+- **Point of Contact:** Punyajoy Saha (punyajoys@iitkgp.ac.in)
 
 ### Dataset Summary
 
@@ -115,7 +115,7 @@ Sample Entry:
 
 ### Data Splits
 
-[Post_id_divisions](https://github.com/punyajoy/HateXplain/blob/master/Data/post_id_divisions.json) has a dictionary with the train, valid, and test post ids used to divide the dataset into train, val, and test sets in the ratio 8:1:1.
+[Post_id_divisions](https://github.com/hate-alert/HateXplain/blob/master/Data/post_id_divisions.json) has a dictionary with the train, valid, and test post ids used to divide the dataset into train, val, and test sets in the ratio 8:1:1.
 
 
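The 8:1:1 division described in this hunk can be checked directly once the dataset is loaded through the `datasets` library; a minimal sketch, assuming the standard train/validation/test split names:

```python
# Minimal sketch: inspect the 8:1:1 split described above.
# Assumes the `datasets` library (>= 1.9.0) and network access.
from datasets import load_dataset

dataset = load_dataset("hatexplain")
total = sum(len(split) for split in dataset.values())
for name, split in dataset.items():
    # Each split's share of the corpus should be roughly 80% / 10% / 10%.
    print(f"{name}: {len(split)} posts ({len(split) / total:.1%})")
```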
 
@@ -123,37 +123,46 @@ Sample Entry:
 
 ### Curation Rationale
 
-[Needs More Information]
+Existing hate speech datasets do not provide human rationales that could justify the reasoning behind their annotation process. This dataset allows researchers to move a step in this direction: it provides token-level annotations for each annotation decision.
 
 ### Source Data
 
+We collected the data from Twitter and Gab.
+
 #### Initial Data Collection and Normalization
 
-[Needs More Information]
+We combined the lexicon sets provided by [Davidson 2017](https://arxiv.org/abs/1703.04009), [Ousidhoum 2019](https://arxiv.org/abs/1908.11049), and [Mathew 2019](https://arxiv.org/abs/1812.01693) to generate a single lexicon. We do not consider reposts, and we remove duplicates. We also ensure that the posts do not contain links, pictures, or videos, as these indicate additional information that might not be available to the annotators. However, we do not exclude emojis from the text, as they might carry important information for the hate and offensive speech labeling task.
 
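The sampling rules in the paragraph above (lexicon match, no duplicates, no links, emojis kept) amount to a simple filter; a minimal sketch, where the lexicon contents and the `keep_post` helper are hypothetical illustrations rather than code from the HateXplain repository:

```python
import re

# Hypothetical mini-lexicon; the real one merges the Davidson 2017,
# Ousidhoum 2019, and Mathew 2019 lexicons cited above.
LEXICON = {"slur_a", "slur_b"}
URL_RE = re.compile(r"https?://\S+")

def keep_post(text: str, seen: set) -> bool:
    """Keep a post only if it matches the lexicon, is not a duplicate,
    and contains no links. Picture/video checks would need platform
    metadata; emojis are deliberately left in the text."""
    if text in seen or URL_RE.search(text):
        return False
    seen.add(text)
    return any(token in LEXICON for token in text.lower().split())
```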
 #### Who are the source language producers?
 
-[Needs More Information]
+The dataset is human generated using Amazon Mechanical Turk (AMT).
 
 ### Annotations
 
 #### Annotation process
 
-[Needs More Information]
+Each post in our dataset contains three types of annotations. First, whether the text is hate speech, offensive speech, or normal. Second, the target communities mentioned in the text. Third, if the text is considered hate speech or offensive by the majority of the annotators, we further ask the annotators to annotate the parts of the text, i.e. the words or phrases that could be a potential reason for the given annotation.
+
+Before starting the annotation task, workers are explicitly warned that the task displays some hateful or offensive content. We prepare instructions for workers that clearly explain the goal of the annotation task and how to annotate spans, and that include a definition for each category. We provide multiple examples with classification, target community, and span annotations to help the annotators understand the task.
 
 #### Who are the annotators?
 
-[Needs More Information]
+To ensure a high quality dataset, we use built-in MTurk qualification requirements, namely the HIT Approval Rate (95%) for all Requesters’ HITs and the Number of HITs Approved (5,000) requirements.
+
+Pilot annotation: In the pilot task, each annotator was provided with 20 posts and was required to do the hate/offensive speech classification as well as identify the target community (if any). In order to have a clear understanding of the task, they were provided with multiple examples along with explanations of the labelling process. The main purpose of the pilot task was to shortlist those annotators who were able to do the classification accurately. We also collected feedback from the annotators to improve the main annotation task. A total of 621 annotators took part in the pilot task. Out of these, 253 were selected for the main task.
+
+Main annotation: After the pilot annotation, once we had ascertained the quality of the annotators, we started with the main annotation task. In each round, we selected a batch of around 200 posts. Each post was annotated by three annotators, and majority voting was then applied to decide the final label. The final dataset is composed of 9,055 posts from Twitter and 11,093 posts from Gab. Krippendorff's alpha for the inter-annotator agreement is 0.46, which is higher than that of other hate speech datasets.
 
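The majority voting mentioned above reduces the three annotator labels on each post to one final label, or to no label when all three disagree; a minimal sketch, with the exact label strings assumed:

```python
from collections import Counter

def majority_label(labels):
    """Return the label at least two of the three annotators agree on,
    or None when all three disagree (no majority exists)."""
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes >= 2 else None

print(majority_label(["hatespeech", "hatespeech", "normal"]))  # hatespeech
print(majority_label(["hatespeech", "offensive", "normal"]))   # None
```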
 ### Personal and Sensitive Information
 
-[Needs More Information]
+The posts were anonymized by replacing the usernames with the <user> token.
 
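The anonymization described above is a straightforward substitution; a minimal sketch, assuming Twitter/Gab-style `@handle` mentions:

```python
import re

# Assumed mention pattern; the actual preprocessing may differ.
MENTION_RE = re.compile(r"@\w+")

def anonymize(text: str) -> str:
    """Replace user mentions with the <user> token, as the card describes."""
    return MENTION_RE.sub("<user>", text)

print(anonymize("@someuser this is a test"))  # -> <user> this is a test
```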
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
-[Needs More Information]
+The dataset could prove beneficial for developing models that are more explainable and less biased.
 
 ### Discussion of Biases
 
@@ -161,30 +170,33 @@ Sample Entry:
 
 ### Other Known Limitations
 
-[Needs More Information]
+The dataset has some limitations. The first is the lack of external context: profile bios, user gender, post history, and similar signals that might be helpful in the classification task are not part of the dataset. Another issue is the focus on the English language and the lack of multilingual hate speech.
 
 ## Additional Information
 
 ### Dataset Curators
 
-[Needs More Information]
+Binny Mathew - IIT Kharagpur, India
+Punyajoy Saha - IIT Kharagpur, India
+Seid Muhie Yimam - Universität Hamburg, Germany
+Chris Biemann - Universität Hamburg, Germany
+Pawan Goyal - IIT Kharagpur, India
+Animesh Mukherjee - IIT Kharagpur, India
 
 ### Licensing Information
 
-[Needs More Information]
+MIT License
 
 ### Citation Information
 
 ```bibtex
-@misc{mathew2020hatexplain,
+@article{mathew2020hatexplain,
       title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
       author={Binny Mathew and Punyajoy Saha and Seid Muhie Yimam and Chris Biemann and Pawan Goyal and Animesh Mukherjee},
-      year={2020},
-      eprint={2012.10289},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
+      year={2021},
+      conference={AAAI conference on artificial intelligence}
 }
 ```
 
 ### Contributions
 
-Thanks to [@kushal2000](https://github.com/kushal2000) for adding this dataset.
+Thanks to [@kushal2000](https://github.com/kushal2000) for adding this dataset.