FpOliveira committed on
Commit
6b5c1d6
1 Parent(s): 49186e0

Update README.md

Files changed (1): README.md +3 -0
README.md CHANGED
@@ -18,10 +18,13 @@ A framework inspired by Vargas et al. (2022) and Fortuna (2017) was adhered to b
  * Expertise in fields of study closely aligned with the focus and objectives of our research.

  The subsequent table provides a concise summary of the annotators' profiles and qualifications (Table 1).

  (TABLE)

  To consolidate data from the prominent works in the domain of automatic hate speech detection in Portuguese, we established a database by merging labeled document sets from Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:

  * Fortuna et al. (2019) constructed a database comprising 5,670 tweets, each labeled by three distinct annotators, to determine the presence or absence of hate speech. To maintain consistency, we employed a simple majority-voting process for document classification.
  * The corpus compiled by Leite et al. (2020) consists of 21,000 tweets labeled by 129 volunteers, with each text assessed by three different evaluators. This study encompassed six types of toxic speech: homophobia, racism, xenophobia, offensive language, obscene language, and misogyny. Texts containing offensive and obscene language were excluded from the hate speech categorization. Following this criterion, we applied a straightforward majority-voting process for classification.
  * Vargas et al. (2022) compiled a collection of 7,000 comments extracted from the Instagram platform, each labeled by three annotators. These data had previously undergone a simple majority-voting process, eliminating the need for additional text classification procedures.
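The majority-voting step applied to the Fortuna et al. (2019) and Leite et al. (2020) corpora can be sketched as follows. This is a minimal illustration, assuming each document carries three binary annotations (1 = hate speech, 0 = not); the record layout and field names are illustrative, not the authors' actual schema.

```python
# Sketch of a simple majority vote over three binary annotations.
# The `corpus` records below are hypothetical examples, not real data.
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by the majority of the annotators."""
    label, _count = Counter(labels).most_common(1)[0]
    return label

corpus = [
    {"text": "exemplo 1", "labels": [1, 1, 0]},
    {"text": "exemplo 2", "labels": [0, 0, 0]},
]

# Collapse the three annotations into a single consolidated label per text.
merged = [{"text": d["text"], "label": majority_vote(d["labels"])} for d in corpus]
```

With three annotators and a binary label, a tie is impossible, so a simple majority always yields a unique decision.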

  After completing the previous steps, the corpus was annotated at two classification levels. The first level involved a binary classification, distinguishing between aggressive and non-aggressive language. The second level involved assigning a hate speech category to each tweet marked as aggressive in the first step. The categories used included ageism, aporophobia, body shame, capacitism, LGBTphobia, political, racism, religious intolerance, misogyny, and xenophobia. It is important to note that a single tweet could fall under one or more of these categories.
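The two-level scheme above can be sketched as a small validity check: a binary aggressive/non-aggressive flag, and, for aggressive tweets only, one or more categories drawn from the fixed label set. The category names follow the list above; the record layout itself is an assumption for illustration.

```python
# Sketch of the two-level annotation scheme described above.
# Category names come from the text; the record format is hypothetical.
CATEGORIES = {
    "ageism", "aporophobia", "body shame", "capacitism", "LGBTphobia",
    "political", "racism", "religious intolerance", "misogyny", "xenophobia",
}

def validate(record):
    """A non-aggressive tweet carries no categories; an aggressive tweet
    carries at least one category, all drawn from the fixed label set."""
    if not record["aggressive"]:
        return record["categories"] == set()
    return record["categories"] <= CATEGORIES and len(record["categories"]) >= 1

# An aggressive tweet may fall under several categories at once.
rec = {"text": "...", "aggressive": True, "categories": {"racism", "xenophobia"}}
```

Representing the second level as a set (rather than a single field) captures the multi-label nature of the scheme: one tweet can belong to several hate speech categories simultaneously.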