Tasks: Text Classification
Modalities: Text
Formats: csv
Languages: Indonesian
Size: 1K - 10K
We no longer maintain this repository. For the most recent version of the Indonesian Fake News dataset that we created, please visit BRIN's dataverse: https://data.brin.go.id/dataset.xhtml?persistentId=hdl:20.500.12690/RIN/7QBRKQ
Dataset for "Fact-Aware Fake-news Classification for Indonesian Language"
The data originates from https://saberhoaks.jabarprov.go.id/v2/ ; https://opendata.jabarprov.go.id/id/dataset/ ; https://klinikhoaks.jatimprov.go.id/
The attributes of the data are:
- Label_id: Binary class label ("HOAX" == 1; "NON-HOAX" == 0).
- Label: Binary class label ("HOAX" or "NON-HOAX").
- Title: Claim or headline of the news article.
- Content: The content of the news article.
- Fact: A summary of the factual evidence that either supports or contradicts the corresponding claim.
- References: URL of the news article and the corresponding verdict or factual evidence used as justification for the label.
- Classification: Fine-grained classification labels for the news article (see the sketch after this list):
  Class labels for saberhoax_data.csv: 'DISINFORMASI', 'MISINFORMASI', 'FABRICATED CONTENT', 'FALSE CONNECTION', 'FALSE CONTEXT', 'IMPOSTER CONTENT', 'MANIPULATED CONTENT', 'MISLEADING CONTENT', 'SATIRE OR PARODI', 'BENAR'.
  Class labels for opendata_jabar.csv: 'BENAR', 'DISINFORMASI (HOAKS)', 'FABRICATED CONTENT', 'FALSE CONNECTION', 'FALSE CONTEXT', 'IMPOSTER CONTENT', 'MANIPULATED CONTENT', 'MISINFORMASI (HOAKS)', 'MISLEADING CONTENT'.
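For a quick look at these fine-grained labels, the minimal sketch below counts how often each Classification value occurs in the training split. It assumes the hosted split exposes a column named exactly 'Classification', as in the attribute list above; adjust the name if the split differs.
>>> from collections import Counter
>>> from datasets import load_dataset
>>> ds = load_dataset(
...     "nlp-brin-id/id-hoax-report",
...     split="train",
...     keep_default_na=False,
... )
>>> Counter(ds["Classification"]).most_common()  # frequency of each fine-grained class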
Example of usage:
>>> from datasets import load_dataset
>>> train_dataset = load_dataset(
... "nlp-brin-id/id-hoax-report",
... split="train",
... keep_default_na=False,
... ).select_columns(['Label_id', 'Title', 'Content', 'Fact'])
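Building on the example above, the sketch below shows one way to turn the selected columns into text/label pairs for a classifier. Joining Title, Content, and Fact with a separator is only an illustration of feeding the factual evidence to a model alongside the article text; it is not necessarily the preprocessing used in the paper, and the "[SEP]" string is a placeholder for whatever separator your tokenizer expects.
>>> def to_example(row):
...     # Concatenate claim, article body, and factual evidence into one input string.
...     text = " [SEP] ".join([row["Title"], row["Content"], row["Fact"]])
...     return {"text": text, "label": row["Label_id"]}
>>> train_dataset = train_dataset.map(to_example)
>>> train_dataset[0]["label"]  # 1 == HOAX, 0 == NON-HOAX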