pr_Mais committed
Commit 42bba5d, 1 parent: 7161f6e

docs: update README

Files changed (1): README.md (+42 -1)
README.md CHANGED
@@ -1,3 +1,44 @@
  # Arabic Wiki Dataset
 
- This dataset is extracted using [wikiextractor](https://github.com/attardi/wikiextractor) tool, from [Wikipedia Arabic pages](https://dumps.wikimedia.org/arwiki/).
  # Arabic Wiki Dataset
 
+ ## Dataset Summary
+ This dataset is extracted using the [`wikiextractor`](https://github.com/attardi/wikiextractor) tool from [Wikipedia Arabic pages](https://dumps.wikimedia.org/arwiki/).
+
+ ## Supported Tasks and Leaderboards
+ Intended for training **Arabic** language models on MSA (Modern Standard Arabic).
+
+ ## Dataset Structure
+ The dataset is organized into two folders:
+ - `arwiki_20211213_txt`: the dataset split into subfolders, each containing no more than 100 documents.
+ - `arwiki_20211213_txt_single`: all documents merged into a single txt file.
+
+ ## Dataset Statistics
+
+ ### Extracts from **December 13, 2021**
+
+ | Documents | Vocabulary | Words |
+ | --- | --- | --- |
+ | 1,136,455 | 5,446,560 | 175,566,016 |
+
+ ## Usage
+ Load the full dataset from the single txt file:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset('CALM/arwiki',
+     data_files='arwiki_2021_txt_single/arwiki_20211213.txt')
+
+ # or stream it instead of downloading everything up front
+ dataset = load_dataset('CALM/arwiki',
+     data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
+     streaming=True)
+ ```
+ Load a smaller subset from one of the individual txt files:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset('CALM/arwiki',
+     data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt')
+
+ # or stream it
+ dataset = load_dataset('CALM/arwiki',
+     data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt',
+     streaming=True)
+ ```