---
pretty_name: TuPiHateSpeech
license: mit
task_categories:
  - text-classification

language:
  - pt

size_categories:
  - 10K<n<100K

tags:
  - hate-speech-detection
  - brazilian-portuguese

splits:
  - name: train
    num_bytes: 826130
    num_examples: 5670
    download_size: 763846
    dataset_size: 826130


---
# TuPi dataset

The TuPi dataset is a large collection of annotated texts for detecting hate speech across diverse social networks.
It comprises 10,000 previously unpublished documents collected from Twitter, presented here in a form suited to
both binary and multiclass classification tasks. For additional details on the dataset's construction, see the Dataset creation section below.

Table 1 gives a detailed breakdown of the dataset, showing document counts by the occurrence of aggressive speech and of hate speech.

**Table 1 - Count of documents for categories non-aggressive and aggressive.**

| Label                | Count  |
|----------------------|--------|
| Non-aggressive       | 31121  |
| Aggressive - Not hate| 3180   |
| Aggressive - Hate    | 9367   |
| Total                | 43668  |
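
Taking the counts in Table 1 at face value, the hate class accounts for roughly a fifth of the corpus, a class imbalance worth keeping in mind when training classifiers. Below is a small sanity check on those figures (counts copied from Table 1; the dictionary keys are illustrative, not the dataset's actual label names):

```python
# Document counts taken from Table 1 of this card.
counts = {
    "non-aggressive": 31121,
    "aggressive-not-hate": 3180,
    "aggressive-hate": 9367,
}

total = sum(counts.values())
assert total == 43668  # matches the stated total

# Share of each class in the corpus.
shares = {label: n / total for label, n in counts.items()}
print({label: round(s, 3) for label, s in shares.items()})
```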


Table 2 breaks down the data volume by the distinct categories of hate speech that occur.

**Table 2 - Count of documents per hate speech category.**

| Label                    | Count |
|--------------------------|-------|
| Ageism                   | 57    |
| Aporophobia              | 66    |
| Body shame               | 285   |
| Capacitism               | 99    |
| LGBTphobia               | 805   |
| Political                | 1149  |
| Racism                   | 290   |
| Religious intolerance    | 108   |
| Misogyny                 | 1675  |
| Xenophobia               | 357   |
| Other                    | 4476  |
| Total                    | 9367  |


# Dataset creation

To address notable gaps in existing Portuguese repositories of hate speech instances, we present the TuPi dataset.
Recognizing the importance of prior research in this domain and the scarcity of annotated datasets for automated hate speech detection,
we consolidated this dataset by merging the corpora of [Fortuna et al. (2019)](https://aclanthology.org/W19-3510/), [Leite et al. (2020)](https://arxiv.org/abs/2010.04543), and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972)
with a new, proprietary dataset.
For the unpublished part of the TuPi dataset, we spent about seven months, from March 2023 to September 2023, building the corpus. We collaborated with a team of experts including a linguist, a human rights lawyer, several behavioral psychologists with master's degrees, and NLP and machine learning researchers.
Following a framework inspired by [Vargas et al. (2022)](https://github.com/franciellevargas/HateBR/tree/main) and [Fortuna (2017)](https://github.com/paulafortuna/Portuguese-Hate-Speech-Dataset), we established a strict set of criteria for selecting annotators, covering the following key attributes:
* Diverse political orientations, including individuals from the right-wing, liberal, and far-left spectrums.
* A high level of academic attainment, comprising master's degree holders, doctoral candidates, and holders of doctoral degrees.
* Expertise in fields of study closely aligned with the focus and objectives of our research.

Table 3 summarizes the annotators' profiles and qualifications.

**Table 3 – Annotators’ profiles and qualifications.**

| Annotator    | Gender | Education                                     | Political  | Color  |
|--------------|--------|-----------------------------------------------|------------|--------|
| Annotator 1  | Female | Ph.D. Candidate in civil engineering           | Far-left   | White  |
| Annotator 2  | Male   | Master's candidate in human rights             | Far-left   | Black  |
| Annotator 3  | Female | Master's degree in behavioral psychology       | Liberal    | White  |
| Annotator 4  | Male   | Master's degree in behavioral psychology       | Right-wing | Black  |
| Annotator 5  | Female | Ph.D. Candidate in behavioral psychology       | Liberal    | Black  |
| Annotator 6  | Male   | Ph.D. Candidate in linguistics                 | Far-left   | White  |
| Annotator 7  | Female | Ph.D. Candidate in civil engineering           | Liberal    | White  |
| Annotator 8  | Male   | Ph.D. Candidate in civil engineering           | Liberal    | Black  |
| Annotator 9  | Male   | Master's degree in behavioral psychology       | Far-left   | White  |


To consolidate data from the prominent works on automatic hate speech detection in Portuguese, we built a database by merging the labeled document sets of [Fortuna et al. (2019)](https://aclanthology.org/W19-3510/), [Leite et al. (2020)](https://arxiv.org/abs/2010.04543), and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:

* Fortuna et al. (2019) constructed a database comprising 5,670 tweets, each labeled by three distinct annotators, to determine the presence or absence of hate speech. To maintain consistency, we employed a simple majority-voting process for document classification.
* The corpus compiled by Leite et al. (2020) consists of 21,000 tweets labeled by 129 volunteers, with each text assessed by three different evaluators. This study encompassed six types of toxic speech: homophobia, racism, xenophobia, offensive language, obscene language, and misogyny. Texts containing offensive and obscene language were excluded from the hate speech categorization. Following this criterion, we applied a straightforward majority-voting process for classification.
* Vargas et al. (2022) compiled a collection of 7,000 comments extracted from the Instagram platform, each labeled by three annotators. These data had previously undergone a simple majority-voting process, eliminating the need for additional classification steps.
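
The simple majority-voting step used throughout the consolidation can be sketched as follows (a minimal illustration, not the authors' original code; the function and variable names are ours):

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by most annotators.

    With three annotators and binary labels, a strict majority
    always exists, so ties cannot occur in this setting.
    """
    (label, _count), = Counter(labels).most_common(1)
    return label

# Three annotators judge one tweet for the presence of hate speech.
annotations = ["hate", "not-hate", "hate"]
print(majority_vote(annotations))  # "hate" wins 2-1
```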

After completing the previous steps, the corpus was annotated using two different classification levels. 
The initial level involves a binary classification, distinguishing between aggressive and non-aggressive language. 
The second classification level assigned a hate speech category to each tweet marked as aggressive at the first level.
The categories used included ageism, aporophobia, body shame, capacitism, LGBTphobia, political, racism, religious intolerance, misogyny, and xenophobia. 
It is important to note that a single tweet could fall under one or more of these categories.
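
Under this scheme, each annotated tweet carries a binary aggressiveness flag plus a possibly empty, possibly multi-valued set of hate speech categories. A hypothetical record might look like the following (field names and validation logic are illustrative assumptions, not the dataset's actual schema):

```python
from dataclasses import dataclass, field

# The ten hate speech categories listed in the text above.
CATEGORIES = {
    "ageism", "aporophobia", "body shame", "capacitism",
    "LGBTphobia", "political", "racism",
    "religious intolerance", "misogyny", "xenophobia",
}

@dataclass
class AnnotatedTweet:
    text: str
    aggressive: bool                                    # level 1: binary label
    hate_categories: set = field(default_factory=set)   # level 2: multi-label

    def __post_init__(self):
        # Categories must come from the known set, and they only
        # apply to tweets already marked as aggressive. A tweet may
        # fall under more than one category at once.
        assert self.hate_categories <= CATEGORIES
        if not self.aggressive:
            assert not self.hate_categories

tweet = AnnotatedTweet(
    text="(omitted)",
    aggressive=True,
    hate_categories={"misogyny", "political"},
)
print(sorted(tweet.hate_categories))
```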

## References 
[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, “A Hierarchically-Labeled Portuguese Hate Speech Dataset,” 2019. [Online]. Available: https://aclanthology.org/W19-3510/

[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, “Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis,” Oct. 2020, [Online]. Available: http://arxiv.org/abs/2010.04543

[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, “HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection,” 2022. [Online]. Available: https://aclanthology.org/2022.lrec-1.777/