Tasks: Text Classification
Formats: csv
Languages: Portuguese
Size: 10K - 100K
Tags: hate-speech-detection
FpOliveira committed • 00bf85b
Parent(s): b71d8ba
Update README.md

README.md CHANGED
@@ -49,12 +49,26 @@ root.
 ├── multilabel : multilabel dataset (including training and testing split)
 └── README.md : documentation and card metadata
 ```
-##
+## Annotation and voting process
 To generate the binary matrices, we employed a straightforward voting process. Each document received three independent evaluations; when two or more of them assigned the same classification, the adopted value was set to 1, and otherwise it was marked as 0. The raw data can be found in the [project repository](https://github.com/Silly-Machine/TuPy-Dataset).
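The voting rule above can be sketched in a few lines of Python (a minimal illustration under stated assumptions; the function and variable names are hypothetical and not part of the TuPy tooling):

```python
# Sketch of the majority-vote rule described above: three annotations per
# document, and a category is set to 1 when two or more annotators agree.
# Names here are illustrative, not taken from the TuPy codebase.

def majority_vote(annotations):
    """Collapse three binary annotations (0/1) into a single label."""
    return 1 if sum(annotations) >= 2 else 0

# Example: one document, one category, three annotators.
labels = [1, 1, 0]
print(majority_vote(labels))  # 1 (two of the three annotators agree)
```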
+
+The following table provides a concise summary of the annotators' profiles and qualifications:
+
+**Table 3 – Annotators’ profiles and qualifications.**
+
+| Annotator   | Gender | Education                                | Political orientation | Color |
+|-------------|--------|------------------------------------------|-----------------------|-------|
+| Annotator 1 | Female | Ph.D. candidate in civil engineering     | Far-left              | White |
+| Annotator 2 | Male   | Master's candidate in human rights       | Far-left              | Black |
+| Annotator 3 | Female | Master's degree in behavioral psychology | Liberal               | White |
+| Annotator 4 | Male   | Master's degree in behavioral psychology | Right-wing            | Black |
+| Annotator 5 | Female | Ph.D. candidate in behavioral psychology | Liberal               | Black |
+| Annotator 6 | Male   | Ph.D. candidate in linguistics           | Far-left              | White |
+| Annotator 7 | Female | Ph.D. candidate in civil engineering     | Liberal               | White |
+| Annotator 8 | Male   | Ph.D. candidate in civil engineering     | Liberal               | Black |
+| Annotator 9 | Male   | Master's degree in behavioral psychology | Far-left              | White |
 
 ## Data structure
-A data point comprises the tweet text (a string) along with thirteen categories, each category is assigned a value of 0 when there is an absence of aggressive or hateful content and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration from the multilabel
+A data point comprises the tweet text (a string) along with thirteen categories; each category is assigned a value of 0 when aggressive or hateful content is absent and a value of 1 when it is present. These values represent the annotators' consensus on the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and other content. An illustration from the multilabel TuPy dataset is depicted below:
 
 ```python
 {
|