FpOliveira committed
Commit 2dd090a
1 Parent(s): 90c1300

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -80,7 +80,7 @@ The subsequent table provides a concise summary of the annotators' profiles and
 | Annotator 9 | Male | Master's degree in behavioral psychology | Far-left | White |
 
 
-To consolidate data from the prominent works in the domain of automatic hate speech detection in Portuguese, we established a database by merging labeled document sets from Fortuna et al. (2019)](https://aclanthology.org/W19-3510/); [Leite et al. (2020)](https://arxiv.org/abs/2010.04543); [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:
+To consolidate data from the prominent works in the domain of automatic hate speech detection in Portuguese, we established a database by merging labeled document sets from [Fortuna et al. (2019)](https://aclanthology.org/W19-3510/); [Leite et al. (2020)](https://arxiv.org/abs/2010.04543); [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:
 
 * Fortuna et al. (2019) constructed a database comprising 5,670 tweets, each labeled by three distinct annotators, to determine the presence or absence of hate speech. To maintain consistency, we employed a simple majority-voting process for document classification.
 * The corpus compiled by Leite et al. (2020) consists of 21,000 tweets labeled by 129 volunteers, with each text assessed by three different evaluators. This study encompassed six types of toxic speech: homophobia, racism, xenophobia, offensive language, obscene language, and misogyny. Texts containing offensive and obscene language were excluded from the hate speech categorization. Following this criterion, we applied a straightforward majority-voting process for classification.
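The README text in this diff describes a simple majority-voting step for consolidating the three annotations each document received. A minimal sketch of that aggregation step might look as follows; the function name, label values, and example annotations are hypothetical and not taken from the dataset itself, and it assumes an odd number of annotators (three, as stated above), so no tie can occur:

```python
from collections import Counter

def majority_vote(labels):
    """Return the label assigned by a simple majority of annotators.

    With three annotators and binary labels, a strict majority
    always exists, so no tie-breaking rule is needed.
    """
    label, _count = Counter(labels).most_common(1)[0]
    return label

# Hypothetical annotations: three binary hate-speech labels per tweet.
annotations = [
    ["hate", "hate", "no-hate"],
    ["no-hate", "no-hate", "no-hate"],
]
print([majority_vote(a) for a in annotations])  # → ['hate', 'no-hate']
```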