---
license: apache-2.0
language:
- ar
task_categories:
- text-classification
- zero-shot-classification
tags:
- nlp
- moderation
size_categories:
- 10K<n<100K
---
This is a large corpus of 42,619 preprocessed text messages and emails sent by humans in 43 languages. `is_spam=1` means spam and `is_spam=0` means ham.
1040 rows of balanced data, consisting of casual conversations and scam emails in ≈10 languages, were manually collected and annotated by me, with some help from ChatGPT.
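Below is a minimal sketch of loading the data and splitting it by label with the `datasets` library. The repository id and the `text` column name are assumptions; only the `is_spam` column is described above.

```python
from datasets import load_dataset

# Assumed repository id — replace with this dataset's actual path on the Hub.
dataset = load_dataset("FredZhang7/all-scam-spam", split="train")

# is_spam=1 marks spam, is_spam=0 marks ham.
# If the label is stored as a string, compare against "1"/"0" instead.
spam = dataset.filter(lambda row: row["is_spam"] == 1)
ham = dataset.filter(lambda row: row["is_spam"] == 0)

print(f"{len(spam)} spam rows, {len(ham)} ham rows out of {len(dataset)} total")
```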
## Preprocessing

Some of the preprocessing algorithms used:
- `spam_assassin.js`, followed by `spam_assassin.py`
- `enron_spam.py`
## Data Composition

## Description
To keep the text format consistent between SMS messages and emails, each email's subject and content are joined with two newlines:
```python
text = email.subject + "\n\n" + email.content
```
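For reference, here is a small sketch of undoing that join, e.g. to recover the subject line from an email row. `split_subject` is a hypothetical helper; rows without a double newline (such as SMS messages) come back with an empty subject.

```python
def split_subject(text: str) -> tuple[str, str]:
    """Split a combined row back into (subject, content).

    Rows without a "\n\n" separator are returned with an empty subject.
    """
    subject, sep, content = text.partition("\n\n")
    if sep == "":
        return "", text
    return subject, content

subject, content = split_subject("Win a free prize\n\nClick the link to claim it now.")
```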
## Suggestions

- If you plan to train a model on this dataset alone, I recommend adding some rows with `is_toxic=0` from FredZhang7/toxi-text-3M. Make sure the added rows aren't spam. A rough sketch of this mixing step is shown below.
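The sketch below mixes non-toxic rows in as ham. The repository ids, the toxi-text-3M column names (`text`, `is_toxic`), and the integer label type are assumptions; adjust them to the actual schemas and spot-check that the added rows really aren't spam.

```python
from datasets import load_dataset, concatenate_datasets

spam_ds = load_dataset("FredZhang7/all-scam-spam", split="train")   # assumed repo id
toxic_ds = load_dataset("FredZhang7/toxi-text-3M", split="train")   # assumed split name

# Keep only non-toxic rows and relabel them as ham (is_spam=0).
extra_ham = toxic_ds.filter(lambda row: row["is_toxic"] == 0).map(
    lambda row: {"text": row["text"], "is_spam": 0},
    remove_columns=toxic_ds.column_names,
)

# Combine with the spam corpus, keeping only the shared columns.
combined = concatenate_datasets(
    [spam_ds.select_columns(["text", "is_spam"]), extra_ham]
)
```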