FpOliveira committed
Commit
c8313d3
1 Parent(s): 6a1d648

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -13,7 +13,7 @@ The TuPi dataset stands as an extensive compilation of annotated texts meticulou
 Comprising 10,000 unpublished documents sourced from Twitter, this repository offers a refined presentation of the TuPi
 dataset tailored for adept handling in both binary and multiclass classification tasks.For additional insights into the dataset's construction, refer to the following section.
 
-(Table 1) provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents
+Table 1 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents
 
 **Table 1 - Count of documents for categories non-aggressive and aggressive.**
 
@@ -25,7 +25,7 @@ dataset tailored for adept handling in both binary and multiclass classification
 | Total | 43668 |
 
 
-(Table 2) provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.
+Table 2 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.
 
 **Table 2 - Count of documents for categories non-aggressive and aggressive.**
 
@@ -57,7 +57,7 @@ A framework inspired by [Vargas et al. (2022)](https://github.com/franciellevarg
 * A high level of academic attainment comprising individuals with master’s degrees, doctoral candidates, and holders of doctoral degrees.
 * Expertise in fields of study closely aligned with the focus and objectives of our research.
 
-The subsequent table provides a concise summary of the annotators' profiles and qualifications (Table 3).
+The subsequent table provides a concise summary of the annotators' profiles and qualifications Table 3.
 
 **Table 3 – Annotators’ profiles and qualifications.**
 
@@ -74,7 +74,7 @@ The subsequent table provides a concise summary of the annotators' profiles and
 | Annotator 9 | Male | Master's degree in behavioral psychology | Far-left | White |
 
 
-To consolidate data from the prominent works in the domain of automatic hate speech detection in Portuguese, we established a database by merging labeled document sets from Fortuna et al. (2019); Leite et al. (2020); Vargas et al. (2022). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:
+To consolidate data from the prominent works in the domain of automatic hate speech detection in Portuguese, we established a database by merging labeled document sets from Fortuna et al. (2019)](https://aclanthology.org/W19-3510/); [Leite et al. (2020)](https://arxiv.org/abs/2010.04543); [Vargas et al. (2022)](https://aclanthology.org/2022.lrec-1.777/). To ensure consistency and compatibility in our dataset, we applied the following guidelines for text integration:
 
 * Fortuna et al. (2019) constructed a database comprising 5,670 tweets, each labeled by three distinct annotators, to determine the presence or absence of hate speech. To maintain consistency, we employed a simple majority-voting process for document classification.
 * The corpus compiled by Leite et al. (2020) consists of 21,000 tweets labeled by 129 volunteers, with each text assessed by three different evaluators. This study encompassed six types of toxic speech: homophobia, racism, xenophobia, offensive language, obscene language, and misogyny. Texts containing offensive and obscene language were excluded from the hate speech categorization. Following this criterion, we applied a straightforward majority-voting process for classification.
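The consolidation step described in the last hunk reduces to a simple per-document majority vote over annotator labels. The sketch below is illustrative only, not the authors' pipeline: the function name `majority_vote`, the string labels, and the example annotations are assumptions made for the example.

```python
from collections import Counter

def majority_vote(labels):
    """Return the label assigned by a simple majority of annotators.

    `labels` holds one document's annotations, e.g. ["hate", "hate", "not-hate"].
    Returns None when there is no strict majority (a tie), so such documents
    can be reviewed or dropped separately.
    """
    counts = Counter(labels)
    ranked = counts.most_common()          # [(label, count), ...] sorted by count
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None                        # tie: no majority label
    return ranked[0][0]

# Illustrative usage: three annotators per tweet, as in Fortuna et al. (2019).
annotations = {
    "tweet_1": ["hate", "hate", "not-hate"],
    "tweet_2": ["not-hate", "not-hate", "not-hate"],
}
consolidated = {doc: majority_vote(votes) for doc, votes in annotations.items()}
print(consolidated)   # {'tweet_1': 'hate', 'tweet_2': 'not-hate'}
```

For the Leite et al. (2020) corpus, the same vote would be applied after leaving the offensive and obscene categories out of the hate-speech grouping, per the criterion quoted above.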