---
license: cc-by-4.0
language:
  - ar
---

# Dataset Card for Arabic-Tweets

## Dataset Description

### Dataset Summary

This dataset was collected from Twitter and comprises more than 41 GB of clean Arabic tweet text, containing nearly 4 billion Arabic words (12 million unique Arabic words).

### Languages

Arabic

### Source Data

Twitter

Example of loading the data with streaming:

```python
from datasets import load_dataset

dataset = load_dataset("pain/Arabic-Tweets", split="train", streaming=True)
print(next(iter(dataset)))
```

Example of loading the data without streaming (the dataset will be downloaded locally):

```python
from datasets import load_dataset

# With split="train", load_dataset returns the train split directly,
# so examples are indexed as dataset[0], not dataset["train"][0].
dataset = load_dataset("pain/Arabic-Tweets", split="train")
print(dataset[0])
```

### Initial Data Collection and Normalization

The collected data comprises 100 GB of raw Twitter data; only tweets containing Arabic characters were crawled. The raw data was observed to contain a large number of Persian tweets as well as many Arabic words with repeated characters. To improve data quality, the raw data was processed as follows:

- hashtags, mentions, and links were removed;
- tweets containing Persian characters, runs of 3 consecutive repeated characters, or single-character words were dropped;
- Arabic letters were normalized.

This resulted in more than 41 GB of clean data with nearly 4 billion Arabic words (12 million unique Arabic words).

## Considerations for Using the Data

  • This data was collected to train a language model. The tweets were published by their authors and have not been reviewed; therefore, we are not responsible for the content of any tweet.

## Licensing Information

Creative Commons Attribution 4.0 International (CC BY 4.0)

## Citation Information

```bibtex
@INPROCEEDINGS{10022652,
  author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
  title={MASC: Massive Arabic Speech Corpus},
  year={2023},
  pages={1006-1013},
  doi={10.1109/SLT54892.2023.10022652}}
```