---
license: cc-by-4.0
annotations_creators:
  - crowdsourced
language_creators:
  - Brazilian-Portuguese
language:
  - pt
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-classification
task_ids: []
pretty_name: TuPy-Dataset
language_bcp47:
  - pt-BR
tags:
  - hate-speech-detection
configs:
  - config_name: multilabel
    data_files:
      - split: train
        path: multilabel/multilabel_train.csv
      - split: test
        path: multilabel/multilabel_test.csv
  - config_name: binary
    data_files:
      - split: train
        path: binary/binary_train.csv
      - split: test
        path: binary/binary_test.csv
---

# Portuguese Hate Speech Dataset (TuPy)

The Portuguese hate speech dataset (TuPy) is an annotated corpus designed to support the development of hate speech detection models using machine learning (ML) and natural language processing (NLP) techniques. TuPy comprises 10,000 (ten thousand) previously unpublished, annotated, and anonymized documents collected on Twitter (now known as X) in 2023. This repository is organized as follows:

```
root.
    ├── binary     : binary dataset (including training and testing split)
    ├── multilabel : multilabel dataset (including training and testing split)
    └── README.md  : documentation and card metadata
```
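Both configurations can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, using the configuration names and splits declared in the card metadata above:

```python
from datasets import load_dataset

# "binary" and "multilabel" are the two configurations declared in this card;
# each ships a "train" and a "test" split stored as CSV files.
binary = load_dataset("Silly-Machine/TuPy-Dataset", "binary")
multilabel = load_dataset("Silly-Machine/TuPy-Dataset", "multilabel")

print(binary)                   # DatasetDict with "train" and "test" splits
print(multilabel["train"][0])   # first annotated document of the multilabel train split
```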

TuPy is one of the datasets that make up the expanded corpus TuPy-E, both maintained by Silly Machine. We highly recommend reading the associated research paper for a comprehensive view of the advancements introduced in this extension.

## Security measures

To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user" and all references to external websites have been removed.

## Annotation and voting process

To generate the binary matrices, we used a simple majority-voting process: each document was evaluated independently three times, and a value was set to 1 if the document received two or more identical positive classifications, and to 0 otherwise (a small code sketch of this rule follows Table 1). The annotated raw data can be accessed in the project repository. The following table offers a brief summary of the annotators' profiles and qualifications:

Table 1 – Annotators

| Annotator   | Gender | Education                                | Political orientation | Color |
|-------------|--------|------------------------------------------|-----------------------|-------|
| Annotator 1 | Female | Ph.D. candidate in civil engineering     | Far-left              | White |
| Annotator 2 | Male   | Master's candidate in human rights       | Far-left              | Black |
| Annotator 3 | Female | Master's degree in behavioral psychology | Liberal               | White |
| Annotator 4 | Male   | Master's degree in behavioral psychology | Right-wing            | Black |
| Annotator 5 | Female | Ph.D. candidate in behavioral psychology | Liberal               | Black |
| Annotator 6 | Male   | Ph.D. candidate in linguistics           | Far-left              | White |
| Annotator 7 | Female | Ph.D. candidate in civil engineering     | Liberal               | White |
| Annotator 8 | Male   | Ph.D. candidate in civil engineering     | Liberal               | Black |
| Annotator 9 | Male   | Master's degree in behavioral psychology | Far-left              | White |
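As a concrete illustration of the voting rule described above, here is a minimal sketch. The function name and the list-of-annotations input are illustrative only; the released files already contain the aggregated labels:

```python
def majority_label(annotations):
    """Collapse three binary annotations (each 0 or 1) into a single label.

    Returns 1 when two or more annotators marked the content as present,
    otherwise 0, mirroring the voting rule used to build the binary matrices.
    """
    return 1 if sum(annotations) >= 2 else 0

# Hypothetical judgments from the three annotators assigned to one document.
print(majority_label([1, 1, 0]))  # -> 1 (two of three agree)
print(majority_label([0, 1, 0]))  # -> 0
```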

## Data structure

A data point comprises the tweet text (a string) along with thirteen categories; each category is assigned a value of 0 when aggressive or hateful content is absent and a value of 1 when it is present. These values represent the consensus of the annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and other content. An illustration from the multilabel TuPy dataset is depicted below:

```python
{
    "text": "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
    "aggressive": 1, "hate": 1, "ageism": 0, "aporophobia": 1, "body shame": 0, "capacitism": 0, "lgbtphobia": 0,
    "political": 1, "racism": 0, "religious intolerance": 0, "misogyny": 0, "xenophobia": 0, "other": 0
}
```
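For multilabel training it is often convenient to collapse these fields into a fixed-order multi-hot vector. A minimal sketch, assuming the field names match the example above (the exact column names in the released CSVs, e.g. whether they use spaces or underscores, should be verified):

```python
# Hypothetical helper that turns one multilabel record (as in the example above)
# into an ordered multi-hot vector. The field names below are assumed to match
# the released CSV columns and should be checked against them.
LABELS = [
    "aggressive", "hate", "ageism", "aporophobia", "body shame", "capacitism",
    "lgbtphobia", "political", "racism", "religious intolerance", "misogyny",
    "xenophobia", "other",
]

def to_multihot(record):
    """Return a list of 0/1 values following the LABELS order."""
    return [int(record.get(label, 0)) for label in LABELS]

example = {"text": "...", "aggressive": 1, "hate": 1, "aporophobia": 1, "political": 1}
print(to_multihot(example))  # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
```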

## Dataset content

Table 2 provides a detailed breakdown of the dataset, delineating the volume of data according to the occurrence of aggressive speech and hate speech within the documents.

Table 2 - Count of non-aggressive and aggressive documents

| Label                 | Count |
|-----------------------|-------|
| Non-aggressive        | 8013  |
| Aggressive - Not hate | 689   |
| Aggressive - Hate     | 1298  |
| Total                 | 10000 |
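The counts in Table 2 show a pronounced imbalance between non-aggressive and aggressive documents. If that imbalance matters for your model, inverse-frequency class weighting is one common (and entirely optional) mitigation; a minimal sketch using the counts above and grouping the two aggressive rows together:

```python
# Illustrative class-weight computation using the counts from Table 2
# (8013 non-aggressive vs. 689 + 1298 aggressive documents).
# Inverse-frequency weighting is one common choice; it is not prescribed by the dataset,
# and the exact label scheme of the "binary" configuration should be checked in the CSVs.
counts = {"non-aggressive": 8013, "aggressive": 689 + 1298}
total = sum(counts.values())
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
print(weights)  # roughly {'non-aggressive': 0.62, 'aggressive': 2.52}
```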

Table 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.

Table 3 - Hate categories count

| Label                 | Count |
|-----------------------|-------|
| Ageism                | 53    |
| Aporophobia           | 61    |
| Body shame            | 120   |
| Capacitism            | 92    |
| LGBTphobia            | 96    |
| Political             | 532   |
| Racism                | 38    |
| Religious intolerance | 28    |
| Misogyny              | 207   |
| Xenophobia            | 70    |
| Other                 | 1     |
| Total                 | 1298  |

## BibTeX citation

This dataset can be cited as follows:

```bibtex
@misc{silly-machine_2023,
    author    = { {Silly-Machine} },
    title     = { TuPy-Dataset (Revision de6b18c) },
    year      = 2023,
    url       = { https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset },
    doi       = { 10.57967/hf/1529 },
    publisher = { Hugging Face }
}
```

## Acknowledgements

The TuPy project grew out of Felipe Oliveira's thesis and the work of several collaborators. It is funded by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE).