João Augusto Leite committed
Commit 07e4878
1 Parent(s): 6557f49

added told-br (brazilian hate speech) dataset (#3683)

* added told-br

* adding contributions section to readme

* adding size_categories

* Update datasets/told-br/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/told-br/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/told-br/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/told-br/told-br.py

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Changing labels from int32 to classlabels

* changing pandas import

* Update datasets/told-br/told-br.py

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/told-br/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/told-br/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* improving readme

* styling

* quickfix

* fixing ClassLabel for the multilabel version

* multilabel version not returning text quickfix

* improving readme readability

* readme quickfix

* updating dataset_infos

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/4809d9ffd2fc78b05d9a8cef7c377f0efdd633f3

README.md ADDED
@@ -0,0 +1,235 @@
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- pt-BR
licenses:
- cc-by-sa-4-0
multilinguality:
- monolingual
pretty_name: ToLD-Br
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-classification-other-hate-speech-detection
paperswithcode_id: told-br
---

# Dataset Card for "ToLD-Br"

## Table of Contents
- [Dataset Card for "ToLD-Br"](#dataset-card-for-told-br)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://paperswithcode.com/dataset/told-br
- **Repository:** https://github.com/JAugusto97/ToLD-Br
- **Paper:** https://arxiv.org/abs/2010.04543
- **Leaderboard:** https://paperswithcode.com/sota/hate-speech-detection-on-told-br
- **Point of Contact:** joao.leite@estudante.ufscar.br

### Dataset Summary

ToLD-Br is the largest dataset of toxic tweets in Brazilian Portuguese, crowdsourced by 42 annotators selected from a pool of 129 volunteers. Annotators were selected with the aim of creating a demographically plural group (ethnicity, sexual orientation, age, gender). Each tweet was labeled by three annotators across 6 categories: LGBTQ+phobia, Xenophobia, Obscene, Insult, Misogyny and Racism.

### Supported Tasks and Leaderboards

- `text-classification-other-hate-speech-detection`: The dataset can be used to train a model for hate speech detection, either using its multi-label classes or by grouping them into a binary Hate vs. Non-Hate class. A [BERT](https://huggingface.co/docs/transformers/model_doc/bert) model can be fine-tuned for this task, achieving a 0.75 F1-score on the binary version.

### Languages

The text in the dataset is in Brazilian Portuguese, as spoken by Twitter users. The associated BCP-47 code is `pt-BR`.

## Dataset Structure

### Data Instances
ToLD-Br has two versions: binary and multilabel.

Multilabel:
A data point consists of the tweet text (string) followed by six category fields whose values range from 0 to 3, giving the number of annotator votes for that tweet in each class: homophobia, obscene, insult, racism, misogyny and xenophobia.

An example from multilabel ToLD-Br looks as follows:
```
{'text': '@user bandido dissimulado. esse sérgio moro é uma espécie de mal carater com ditadura e pitadas de atraso',
 'homophobia': 0,
 'obscene': 0,
 'insult': 2,
 'racism': 0,
 'misogyny': 0,
 'xenophobia': 0}
```

Binary:
A data point consists of the tweet text (string) followed by a binary "toxic" class with value 0 or 1.

An example from binary ToLD-Br looks as follows:
```
{'text': '@user bandido dissimulado. esse sérgio moro é uma espécie de mal carater com ditadura e pitadas de atraso',
 'toxic': 1}
```
### Data Fields

Multilabel:
- text: A string representing the tweet posted by a user. Mentions of other users are anonymized by replacing them with a @user tag.
- homophobia: numerical value {0, 1, 2, 3} giving the number of annotator votes flagging the tweet as homophobic.
- obscene: numerical value {0, 1, 2, 3} giving the number of annotator votes flagging the tweet as obscene.
- insult: numerical value {0, 1, 2, 3} giving the number of annotator votes flagging the tweet as an insult.
- racism: numerical value {0, 1, 2, 3} giving the number of annotator votes flagging the tweet as racist.
- misogyny: numerical value {0, 1, 2, 3} giving the number of annotator votes flagging the tweet as misogynistic.
- xenophobia: numerical value {0, 1, 2, 3} giving the number of annotator votes flagging the tweet as xenophobic.

Binary:
- text: A string representing the tweet posted by a user. Mentions of other users are anonymized by replacing them with a @user tag.
- label: numerical binary value {0, 1} indicating whether the text is toxic/abusive.

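As described for the binary configuration in this PR, a text is considered toxic if at least one annotator flagged at least one of the six categories. A minimal sketch of that grouping rule in plain Python (`to_binary_label` is a hypothetical helper, not part of the dataset script):

```python
def to_binary_label(votes):
    """Collapse multilabel vote counts into a binary toxic/not-toxic label.

    A tweet counts as toxic when any category received at least one
    annotator vote (the grouping rule described for the binary config).
    """
    categories = ("homophobia", "obscene", "insult", "racism",
                  "misogyny", "xenophobia")
    return 1 if any(votes[c] > 0 for c in categories) else 0


# the multilabel example instance shown above maps to toxic = 1
example = {"homophobia": 0, "obscene": 0, "insult": 2,
           "racism": 0, "misogyny": 0, "xenophobia": 0}
print(to_binary_label(example))  # 1
```

Note that the published binary splits are built from per-annotator files in the source repository, so this sketch only illustrates the stated rule, not the exact file-level derivation.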
### Data Splits
Multilabel:
The entire dataset consists of 21,000 examples in a single train split.

Binary:
The train set consists of 16,800 examples, the validation set of 2,100 examples and the test set of 2,100 examples.

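The binary split sizes correspond to an 80/10/10 partition of the 21,000 tweets; a quick sanity check of the arithmetic:

```python
total = 21_000
train, validation, test = 16_800, 2_100, 2_100

# the three splits exactly cover the dataset, in an 80/10/10 ratio
assert train + validation + test == total
print(train / total, validation / total, test / total)  # 0.8 0.1 0.1
```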

## Dataset Creation

### Curation Rationale

Despite Portuguese being the 5th most spoken language in the world and Brazil the 4th country with the most unique Twitter users, Brazilian Portuguese was underrepresented in the hate speech detection task: only two other datasets were available, one of them in European Portuguese. ToLD-Br is 4x bigger than both of these datasets combined, and neither of them had multiple annotators per instance. In addition, this work proposes a plural and diverse group of annotators, carefully selected to avoid introducing bias into the annotation.

### Source Data

#### Initial Data Collection and Normalization

Data was collected over 15 days in August 2019 using GATE Cloud's Tweet Collector. Ten million tweets were collected using two methods: a keyword-based method and a user-mention method. The first method collected tweets mentioning the following keywords:

viado,veado,viadinho,veadinho,viadao,veadao,bicha,bixa,bichinha,bixinha,bichona,bixona,baitola,sapatão,sapatao,traveco,bambi,biba,boiola,marica,gayzão,gayzao,flor,florzinha,vagabundo,vagaba,desgraçada,desgraçado,desgracado,arrombado,arrombada,foder,fuder,fudido,fodido,cú,cu,pinto,pau,pal,caralho,caraio,carai,pica,cacete,rola,porra,escroto,buceta,fdp,pqp,vsf,tnc,vtnc,puto,putinho,acéfalo,acefalo,burro,idiota,trouxa,estúpido,estupido,estúpida,canalha,demente,retardado,retardada,verme,maldito,maldita,ridículo,ridiculo,ridícula,ridicula,morfético,morfetico,morfética,morfetica,lazarento,lazarenta,lixo,mongolóide,mongoloide,mongol,asqueroso,asquerosa,cretino,cretina,babaca,pilantra,neguinho,neguinha,pretinho,pretinha,escurinho,escurinha,pretinha,pretinho,crioulo,criolo,crioula,criola,macaco,macaca,gorila,puta,vagabunda,vagaba,mulherzinha,piranha,feminazi,putinha,piriguete,vaca,putinha,bahiano,baiano,baianagem,xingling,xing ling,xing-ling,carioca,paulista,sulista,mineiro,gringo

The list of the most followed Brazilian Twitter accounts, used by the user-mention method, can be found [here](https://assuperlistas.com/2022/01/21/os-100-brasileiros-mais-seguidos-do-twitter/).

#### Who are the source language producers?

The language producers are Twitter users from Brazil, speakers of Portuguese.

### Annotations

#### Annotation process

A form was circulated at the Federal University of São Carlos asking for volunteers to annotate the dataset. 129 people volunteered, and 42 were selected according to their demographics so as to form a diverse and plural annotation group. Guidelines were produced and presented to the annotators. Because of the Covid-19 pandemic, the entire process was carried out asynchronously, using Google Sheets as the annotation tool. Annotators were grouped into 14 teams of three, and each team annotated a file of 1,500 tweets. Annotators had no contact with each other and did not know that other annotators were labelling the same tweets.

#### Who are the annotators?

Annotators were people from the Federal University of São Carlos' Facebook group. Their demographics are described below:

| Gender | Count |
|--------|-------|
| Male   | 18    |
| Female | 24    |

| Sexual Orientation | Count |
|--------------------|-------|
| Heterosexual       | 22    |
| Bisexual           | 12    |
| Homosexual         | 5     |
| Pansexual          | 3     |

| Ethnicity    | Count |
|--------------|-------|
| White        | 25    |
| Brown        | 9     |
| Black        | 5     |
| Asian        | 2     |
| Non-Declared | 1     |

Ages range from 18 to 37 years old.

Annotators were paid R$50 ($10) to label 1,500 examples each.

### Personal and Sensitive Information

The dataset contains sensitive content in its annotated categories: homophobia, obscenity, insults, racism, misogyny and xenophobia.

Tweets were anonymized by replacing user mentions with a @user tag.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop better hate speech detection systems.

A system that succeeds at this task would be able to identify hate speech tweets associated with the classes available in the dataset.

### Discussion of Biases

An effort was made to reduce annotation bias by selecting annotators with diverse demographic backgrounds. On the data collection side, using keywords and user mentions introduces some bias, restricting the scope of the data to the lists of keywords and users that were created.

### Other Known Limitations

Because of the massive class skew in the multilabel version, it is extremely hard to train a robust model on it; we advise using it for analysis and experimentation only. The binary version of the dataset can be used to train a classifier reaching up to a 76% F1-score.

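The F1-score cited above is the usual harmonic mean of precision and recall on the toxic class; a minimal illustration with hypothetical confusion-matrix counts (not results from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Binary F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# hypothetical counts chosen only to illustrate the computation
print(round(f1_score(tp=76, fp=24, fn=24), 2))  # 0.76
```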
## Additional Information

### Dataset Curators

The dataset was created by João Augusto Leite and Diego Furtado Silva, both from the Federal University of São Carlos (Brazil), and by Carolina Scarton and Kalina Bontcheva, both from the University of Sheffield (UK).

### Licensing Information

ToLD-Br is licensed under a Creative Commons BY-SA 4.0 license.

### Citation Information

```
@article{DBLP:journals/corr/abs-2010-04543,
  author     = {Joao Augusto Leite and
                Diego F. Silva and
                Kalina Bontcheva and
                Carolina Scarton},
  title      = {Toxic Language Detection in Social Media for Brazilian Portuguese:
                New Dataset and Multilingual Analysis},
  journal    = {CoRR},
  volume     = {abs/2010.04543},
  year       = {2020},
  url        = {https://arxiv.org/abs/2010.04543},
  eprinttype = {arXiv},
  eprint     = {2010.04543},
  timestamp  = {Tue, 15 Dec 2020 16:10:16 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2010-04543.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions

Thanks to [@JAugusto97](https://github.com/JAugusto97) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
 
{"multilabel": {"description": "ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced\nby 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming\nto create a plural group in terms of demographics (ethnicity, sexual orientation, age, gender).\nEach tweet was labeled by three annotators in 6 possible categories:\nLGBTQ+phobia,Xenophobia, Obscene, Insult, Misogyny and Racism.\n", "citation": "@article{DBLP:journals/corr/abs-2010-04543,\n author = {Joao Augusto Leite and\n Diego F. Silva and\n Kalina Bontcheva and\n Carolina Scarton},\n title = {Toxic Language Detection in Social Media for Brazilian Portuguese:\n New Dataset and Multilingual Analysis},\n journal = {CoRR},\n volume = {abs/2010.04543},\n year = {2020},\n url = {https://arxiv.org/abs/2010.04543},\n eprinttype = {arXiv},\n eprint = {2010.04543},\n timestamp = {Tue, 15 Dec 2020 16:10:16 +0100},\n biburl = {https://dblp.org/rec/journals/corr/abs-2010-04543.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/JAugusto97/ToLD-Br", "license": "https://github.com/JAugusto97/ToLD-Br/blob/main/LICENSE_ToLD-Br.txt ", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "homophobia": {"num_classes": 4, "names": ["zero_votes", "one_vote", "two_votes", "three_votes"], "names_file": null, "id": null, "_type": "ClassLabel"}, "obscene": {"num_classes": 4, "names": ["zero_votes", "one_vote", "two_votes", "three_votes"], "names_file": null, "id": null, "_type": "ClassLabel"}, "insult": {"num_classes": 4, "names": ["zero_votes", "one_vote", "two_votes", "three_votes"], "names_file": null, "id": null, "_type": "ClassLabel"}, "racism": {"num_classes": 4, "names": ["zero_votes", "one_vote", "two_votes", "three_votes"], "names_file": null, "id": null, "_type": "ClassLabel"}, "misogyny": {"num_classes": 4, "names": ["zero_votes", "one_vote", "two_votes", "three_votes"], "names_file": null, "id": null, "_type": "ClassLabel"}, "xenophobia": {"num_classes": 4, "names": ["zero_votes", "one_vote", "two_votes", "three_votes"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "told_br", "config_name": "multilabel", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2978006, "num_examples": 21000, "dataset_name": "told_br"}}, "download_checksums": {"https://raw.githubusercontent.com/JAugusto97/ToLD-Br/main/ToLD-BR.csv": {"num_bytes": 2430416, "checksum": "a905b90d5886b9c80d737aa26c41dd29077155f23b9b857080634a78f8203c90"}}, "download_size": 2430416, "post_processing_size": null, "dataset_size": 2978006, "size_in_bytes": 5408422}, "binary": {"description": "ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced\nby 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming\nto create a plural group in terms of demographics (ethnicity, sexual orientation, age, gender).\nEach tweet was labeled by three annotators in 6 possible categories:\nLGBTQ+phobia,Xenophobia, Obscene, Insult, Misogyny and Racism.\n", "citation": "@article{DBLP:journals/corr/abs-2010-04543,\n author = {Joao Augusto Leite and\n Diego F. Silva and\n Kalina Bontcheva and\n Carolina Scarton},\n title = {Toxic Language Detection in Social Media for Brazilian Portuguese:\n New Dataset and Multilingual Analysis},\n journal = {CoRR},\n volume = {abs/2010.04543},\n year = {2020},\n url = {https://arxiv.org/abs/2010.04543},\n eprinttype = {arXiv},\n eprint = {2010.04543},\n timestamp = {Tue, 15 Dec 2020 16:10:16 +0100},\n biburl = {https://dblp.org/rec/journals/corr/abs-2010-04543.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/JAugusto97/ToLD-Br", "license": "https://github.com/JAugusto97/ToLD-Br/blob/main/LICENSE_ToLD-Br.txt ", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["not-toxic", "toxic"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "told_br", "config_name": "binary", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1709560, "num_examples": 16800, "dataset_name": "told_br"}, "test": {"name": "test", "num_bytes": 216297, "num_examples": 2100, "dataset_name": "told_br"}, "validation": {"name": "validation", "num_bytes": 212153, "num_examples": 2100, "dataset_name": "told_br"}}, "download_checksums": {"https://github.com/JAugusto97/ToLD-Br/raw/main/experiments/data/1annotator.zip": {"num_bytes": 853322, "checksum": "dd470ff58f7bfb05ad80e1c97cdaf00147b6da4684c8518b8120186bee91aa4d"}}, "download_size": 853322, "post_processing_size": null, "dataset_size": 2138010, "size_in_bytes": 2991332}}
dummy/binary/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:388e2324de0b4eb671dcad45cb4599d787fe5e07a4300dfe90dfddc27be428eb
size 3887
dummy/multilabel/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d67dcabc4f1b3fa7785df3ebac4736ac51c45aa82dc855bd830b2289a56e006
size 614
told-br.py ADDED
@@ -0,0 +1,178 @@
# coding=utf-8
# Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""Toxic/Abusive Tweets Multilabel Classification Dataset for Brazilian Portuguese."""


import os

import pandas as pd

import datasets


_CITATION = """\
@article{DBLP:journals/corr/abs-2010-04543,
  author     = {Joao Augusto Leite and
                Diego F. Silva and
                Kalina Bontcheva and
                Carolina Scarton},
  title      = {Toxic Language Detection in Social Media for Brazilian Portuguese:
                New Dataset and Multilingual Analysis},
  journal    = {CoRR},
  volume     = {abs/2010.04543},
  year       = {2020},
  url        = {https://arxiv.org/abs/2010.04543},
  eprinttype = {arXiv},
  eprint     = {2010.04543},
  timestamp  = {Tue, 15 Dec 2020 16:10:16 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2010-04543.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
"""

_DESCRIPTION = """\
ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced
by 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming
to create a plural group in terms of demographics (ethnicity, sexual orientation, age, gender).
Each tweet was labeled by three annotators in 6 possible categories:
LGBTQ+phobia,Xenophobia, Obscene, Insult, Misogyny and Racism.
"""

_HOMEPAGE = "https://github.com/JAugusto97/ToLD-Br"

_LICENSE = "https://github.com/JAugusto97/ToLD-Br/blob/main/LICENSE_ToLD-Br.txt "

# The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
_URLS = {
    "multilabel": "https://raw.githubusercontent.com/JAugusto97/ToLD-Br/main/ToLD-BR.csv",
    "binary": "https://github.com/JAugusto97/ToLD-Br/raw/main/experiments/data/1annotator.zip",
}


class ToldBr(datasets.GeneratorBasedBuilder):
    """Toxic/Abusive Tweets Classification Dataset for Brazilian Portuguese."""

    VERSION = datasets.Version("1.0.0")

    # Two configurations are available:
    # data = datasets.load_dataset("told_br", "multilabel")
    # data = datasets.load_dataset("told_br", "binary")
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="multilabel",
            version=VERSION,
            description="""
            Full multilabel dataset with target values ranging
            from 0 to 3 representing the votes from each annotator.
            """,
        ),
        datasets.BuilderConfig(
            name="binary",
            version=VERSION,
            description="""
            Binary classification dataset version split into train, validation and test sets.
            A text is considered toxic if at least one of the multilabel classes was flagged
            by at least one annotator.
            """,
        ),
    ]

    DEFAULT_CONFIG_NAME = "binary"

    def _info(self):
        if self.config.name == "binary":
            features = datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["not-toxic", "toxic"]),
                }
            )
        else:
            features = datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "homophobia": datasets.ClassLabel(names=["zero_votes", "one_vote", "two_votes", "three_votes"]),
                    "obscene": datasets.ClassLabel(names=["zero_votes", "one_vote", "two_votes", "three_votes"]),
                    "insult": datasets.ClassLabel(names=["zero_votes", "one_vote", "two_votes", "three_votes"]),
                    "racism": datasets.ClassLabel(names=["zero_votes", "one_vote", "two_votes", "three_votes"]),
                    "misogyny": datasets.ClassLabel(names=["zero_votes", "one_vote", "two_votes", "three_votes"]),
                    "xenophobia": datasets.ClassLabel(names=["zero_votes", "one_vote", "two_votes", "three_votes"]),
                }
            )

        return datasets.DatasetInfo(
            description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION
        )

    def _split_generators(self, dl_manager):
        urls = _URLS[self.config.name]
        data_dir = dl_manager.download_and_extract(urls)
        if self.config.name == "binary":
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={"filepath": os.path.join(data_dir, "1annotator/ptbr_train_1annotator.csv")},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={"filepath": os.path.join(data_dir, "1annotator/ptbr_test_1annotator.csv")},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    gen_kwargs={"filepath": os.path.join(data_dir, "1annotator/ptbr_validation_1annotator.csv")},
                ),
            ]
        else:
            # the multilabel config downloads a single CSV file, exposed as one train split
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={"filepath": data_dir},
                )
            ]

    def _generate_examples(self, filepath):
        df = pd.read_csv(filepath, engine="python")
        for key, row in enumerate(df.itertuples()):
            if self.config.name == "multilabel":
                # vote counts may be serialized as floats (e.g. "2.0"),
                # so parse through float before casting to int
                yield key, {
                    "text": row.text,
                    "homophobia": int(float(row.homophobia)),
                    "obscene": int(float(row.obscene)),
                    "insult": int(float(row.insult)),
                    "racism": int(float(row.racism)),
                    "misogyny": int(float(row.misogyny)),
                    "xenophobia": int(float(row.xenophobia)),
                }
            else:
                yield key, {"text": row.text, "label": int(row.toxic)}
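The same `itertuples` pattern used in `_generate_examples` can be exercised on a tiny in-memory CSV (a standalone sketch, not the builder itself; the column layout mirrors the binary CSVs' `text` and `toxic` columns):

```python
import io

import pandas as pd

# two-row stand-in for the binary CSV layout (columns: text, toxic)
csv_blob = io.StringIO("text,toxic\nhello,0\n@user some toxic tweet,1\n")
df = pd.read_csv(csv_blob, engine="python")

# mirror the builder's binary branch: one dict per row
examples = [
    {"text": row.text, "label": int(row.toxic)}
    for row in df.itertuples()
]
print(examples[1])  # {'text': '@user some toxic tweet', 'label': 1}
```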