Update README.md
README.md
CHANGED
@@ -31,33 +31,71 @@ Changes:
````diff
 - Added Polarized column
 ```

-IndoToxic2024
-The data are obtained from social media and are annotated by 29 annotators of diverse backgrounds.
-The tasks supported by this dataset are text classification tasks around hate speech, toxic, and polarizing content.
-
-12700 out of 28448 (44.64%) entries are annotated by more than 1 annotator.
-
-```py
-from datasets import load_dataset
-```
-
-# PENDING Baseline Performance
-<!-- The table below is an excerpt from the paper, listing the baseline performance of each task:
- -->
````
# IndoToxic2024: A Multi-Labeled Indonesian Discourse Dataset

## Dataset Overview

IndoToxic2024 is a multi-labeled dataset for analyzing online discourse in Indonesia, combining labels for **toxicity and polarization** with **annotator demographic information**. The dataset provides insight into the growing political and social divisions in Indonesia, particularly in the context of the **2024 presidential election**. Unlike previous datasets, IndoToxic2024 offers a **multi-label annotation** framework, enabling nuanced research on the interplay between toxicity and polarization.

## Dataset Statistics

- **Total annotated texts:** 28,477
- **Platforms:** X (formerly Twitter), Facebook, Instagram, and news articles
- **Timeframe:** September 2023 – January 2024
- **Annotators:** 29 individuals from diverse demographic backgrounds

### Label Distribution

| Label | Count |
|-------|-------|
| **Toxic** | 2,156 (balanced) |
| **Non-Toxic** | 6,468 (balanced) |
| **Polarized** | 3,811 (balanced) |
| **Non-Polarized** | 11,433 (balanced) |

## Dataset Structure
The dataset consists of texts labeled for **toxicity and polarization**, along with **annotator demographics**. Each text is annotated by at least one annotator, and **44.6% of texts received multiple annotations**. Annotations were aggregated by majority voting, and texts with perfect disagreement were excluded.
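For illustration, here is a minimal sketch of such a majority-voting step in plain Python (this is not the authors' released preprocessing code; the tie handling simply mirrors the exclusion of perfect-disagreement texts described above):

```python
from collections import Counter
from typing import Optional, Sequence

def majority_vote(annotations: Sequence[int]) -> Optional[int]:
    """Aggregate binary annotations (1 = positive, 0 = negative) by majority vote.

    Returns None when the votes are perfectly split, so callers can drop
    such texts, as described above.
    """
    if not annotations:
        return None
    counts = Counter(annotations)
    if counts[0] == counts[1]:
        return None  # perfect disagreement -> excluded
    return counts.most_common(1)[0][0]

print(majority_vote([1, 1, 0]))  # 1 (two of three annotators said toxic)
print(majority_vote([1, 0]))     # None (tie -> excluded)
```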
### Features:
- `text`: The Indonesian social media or news text
- `toxicity`: List of toxicity annotations (1 = Toxic, 0 = Non-Toxic)
- `polarization`: List of polarization annotations (1 = Polarized, 0 = Non-Polarized)
- `annotators_id`: List of anonymized annotator IDs for the annotators of the text; refer to the `annotator` subset for each annotator ID's demographic information
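A minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository ID, split name, and the way the `annotator` subset is exposed are assumptions here, not confirmed identifiers; check the dataset page for the exact values.

```python
from datasets import load_dataset

# Placeholder repository ID (assumption) -- replace with the dataset's actual Hub ID.
REPO_ID = "<org-or-user>/IndoToxic2024"

# Assumes the main data is exposed as a "train" split.
ds = load_dataset(REPO_ID, split="train")

example = ds[0]
print(example["text"])           # the Indonesian post or news text
print(example["toxicity"])       # per-annotator toxicity votes, e.g. [1, 0, 1]
print(example["polarization"])   # per-annotator polarization votes, e.g. [0, 0, 1]
print(example["annotators_id"])  # anonymized annotator IDs, aligned with the votes

# If the annotator demographics are exposed as a named config, they could be
# loaded along these lines (the name "annotator" is taken from the card above):
# annotators = load_dataset(REPO_ID, "annotator", split="train")
```

Combined with the `majority_vote` helper sketched earlier, `majority_vote(example["toxicity"])` would yield a single aggregated toxicity label per text.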
## Baseline Model Performance

### Key Results:
We benchmarked IndoToxic2024 using **BERT-based models** and **large language models (LLMs)**. The results indicate that:

- **BERT-based models outperform LLMs**, with **IndoBERTweet** achieving the highest accuracy.
- **Polarization detection is harder than toxicity detection**, as evidenced by lower recall scores.
- **Demographic information improves classification**, especially for polarization detection.
### Additional Findings:

- **Polarization and toxicity are correlated**: Using polarization as a feature improves toxicity detection, and vice versa (see the sketch after this list).
- **Demographic-aware models perform better for polarization detection**: Including annotator demographics boosts classification performance.
- **Wisdom of the crowd**: Texts labeled by multiple annotators lead to higher recall in toxicity detection.
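As a rough illustration of how such auxiliary signals could be fed to a text classifier, the sketch below simply prepends them to the input string before tokenization. This is an illustrative baseline, not necessarily the exact setup used in the paper, and the demographic field names are invented for the example.

```python
from typing import Mapping, Optional

def build_input(text: str,
                polarized: Optional[int] = None,
                demographics: Optional[Mapping[str, str]] = None) -> str:
    """Prepend auxiliary signals (annotator demographics, polarization label)
    to the raw text so a BERT-style classifier can condition on them."""
    parts = []
    if demographics:
        parts.append("[ANNOTATOR] " + " ".join(f"{k}={v}" for k, v in sorted(demographics.items())))
    if polarized is not None:
        parts.append(f"[POLARIZED={polarized}]")
    parts.append(text)
    return " ".join(parts)

# Hypothetical demographic fields, purely for illustration.
print(build_input("contoh teks", polarized=1,
                  demographics={"gender": "female", "region": "Jawa Barat"}))
# -> "[ANNOTATOR] gender=female region=Jawa Barat [POLARIZED=1] contoh teks"
```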
## Ethical Considerations
- **Data Privacy**: All annotator demographic data is anonymized.
- **Use Case**: This dataset is released **for research purposes only** and should not be used for surveillance or profiling.
## Citation
If you use IndoToxic2024, please cite:

```bibtex
@misc{susanto2025multilabeleddatasetindonesiandiscourse,
      title={A Multi-Labeled Dataset for Indonesian Discourse: Examining Toxicity, Polarization, and Demographics Information},
      author={Lucky Susanto and Musa Wijanarko and Prasetia Pratama and Zilu Tang and Fariz Akyas and Traci Hong and Ika Idris and Alham Aji and Derry Wijaya},
      year={2025},
      eprint={2503.00417},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.00417},
}
```