Dataset: arabic_billion_words

Task Categories: sequence-modeling
Languages: ar
Multilinguality: monolingual
Size Categories: 10K<n<100K, 1K<n<10K
Licenses: unknown
Language Creators: found
Annotations Creators: found
Source Datasets: original

Dataset Card for Arabic Billion Words Corpus

Dataset Summary

The Abu El-Khair Corpus is an Arabic text corpus of more than five million newspaper articles. It contains over a billion and a half words in total, of which about three million are unique. The corpus is provided in two encodings, UTF-8 and Windows CP-1256, and in two markup formats, SGML and XML.
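
A minimal loading sketch, assuming the corpus is accessed through the Hugging Face datasets library under the identifier arabic_billion_words with one configuration per source newspaper ("Alittihad" below is only an example configuration name; check the dataset page for the actual list):

from datasets import load_dataset

# Load one newspaper configuration of the corpus.
# Depending on the datasets version, trust_remote_code=True may be required
# for script-based datasets.
dataset = load_dataset("arabic_billion_words", "Alittihad", split="train")

print(dataset)                   # number of articles and column names
print(dataset[0]["text"][:200])  # first 200 characters of the first article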

Supported Tasks and Leaderboards

[More Information Needed]

Languages

Arabic

Dataset Structure

Data Instances

[More Information Needed]

Data Fields

The data fields are listed below; a short access sketch follows the list.

  • "url": string, the original URL of the article
  • "head_line": string, the headline of the article
  • "date": string, the publication date of the article
  • "text": string, the text content of the article

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@article{el20161,
  title={1.5 billion words Arabic Corpus},
  author={El-Khair, Ibrahim Abu},
  journal={arXiv preprint arXiv:1611.04033},
  year={2016}
}

Contributions

Thanks to @zaidalyafeai for adding this dataset.
