---
pretty_name: Wikipedia Arabic dumps dataset.
language:
- ar
license:
- unknown
multilinguality:
- monolingual
---

# Arabic Wiki Dataset

## Dataset Summary

This dataset was extracted from Arabic Wikipedia pages using the wikiextractor tool.

## Supported Tasks and Leaderboards

The dataset is intended for training Arabic language models on Modern Standard Arabic (MSA).

## Dataset Structure

The dataset is organized into two folders:

- `arwiki_20211213_txt`: the dataset split into subfolders, each containing no more than 100 documents (see the loading sketch after this list).
- `arwiki_20211213_txt_single`: all documents merged into a single txt file.
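
As an illustration of how the sharded layout can be consumed, here is a minimal sketch that loads every shard at once with a glob pattern. The pattern is an assumption based on the folder names above and is not part of the original card; `data_files` accepts wildcard patterns including `**`.

```python
from datasets import load_dataset

# Hypothetical glob over the sharded folder layout described above.
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_20211213_txt/**/*.txt')
```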

## Dataset Statistics

Statistics for the extracts from the December 13, 2021 dump:

| Documents | Vocabulary | Words |
| --- | --- | --- |
| 1,136,455 | 5,446,560 | 175,566,016 |

## Usage

Load the full dataset from the single txt file:

```python
from datasets import load_dataset

dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt_single/arwiki_20211213.txt')

# Or stream the data instead of downloading it all up front:
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
                       streaming=True)
```
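
With `streaming=True`, `load_dataset` returns an `IterableDataset` that is consumed lazily. A minimal sketch of reading a few records, assuming the default `text` builder (one record per line, exposed under a `text` field):

```python
# Peek at the first few records without downloading the whole file.
for i, example in enumerate(dataset['train']):
    print(example['text'])
    if i == 2:
        break
```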

Load a smaller subset from the individual txt files:

```python
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt')

# Or stream the same file:
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt',
                       streaming=True)
```
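
`data_files` also accepts a list, so several shard files can be combined into one split. A sketch, where the second filename is hypothetical and only illustrates the pattern:

```python
# The second path below is hypothetical; substitute real shard names.
dataset = load_dataset('CALM/arwiki',
                       data_files=['arwiki_2021_txt/AA/arwiki_20211213_1208.txt',
                                   'arwiki_2021_txt/AA/arwiki_20211213_1209.txt'])
```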