---
pretty_name: Wikipedia Arabic Dumps Dataset
language:
- ar
license:
- unknown
multilinguality:
- monolingual
---

# Arabic Wiki Dataset

## Dataset Summary
This dataset was extracted from the [Wikipedia Arabic dumps](https://dumps.wikimedia.org/arwiki/) using the [`wikiextractor`](https://github.com/attardi/wikiextractor) tool.
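
For reference, a minimal sketch of how such an extraction can be reproduced with `wikiextractor`; the dump file name and output directory below are assumptions, since the card does not state the exact command or options used:
```python
import subprocess

# Hedged sketch: run wikiextractor on an Arabic dump downloaded from dumps.wikimedia.org/arwiki/
# (the dump file name and output directory are assumptions, not the card's exact command).
subprocess.run(
    [
        "python", "-m", "wikiextractor.WikiExtractor",
        "arwiki-20211213-pages-articles.xml.bz2",  # assumed dump file
        "-o", "arwiki_20211213_txt",               # assumed output directory
    ],
    check=True,
)
```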

## Supported Tasks and Leaderboards
This dataset is intended for training **Arabic** language models on Modern Standard Arabic (MSA).

## Dataset Structure
The dataset is organized into two folders:
- `arwiki_20211213_txt`: the dataset split into subfolders, each containing no more than 100 documents (a sketch for loading a whole subfolder follows this list).
- `arwiki_20211213_txt_single`: all documents merged into a single txt file.
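
A minimal sketch for loading one subfolder at a time, assuming the glob pattern below matches the repository layout (the `AA` folder name is taken from the Usage section):
```python
from datasets import load_dataset

# Hedged sketch: load every txt shard under a single subfolder via a glob pattern
subset = load_dataset('CALM/arwiki', data_files='arwiki_2021_txt/AA/*.txt')
print(subset)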

## Dataset Statistics

#### Extraction of **December 13, 2021**:

| Documents | Vocabulary (unique words) | Total words |
| --- | --- | --- |
| 1,136,455 | 5,446,560 | 175,566,016 |
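
A minimal sketch of how such counts can be recomputed; it assumes naive whitespace tokenization and the default `text` column produced by the text builder, neither of which the card confirms as the method behind the table above:
```python
from datasets import load_dataset

# Hedged sketch: recompute word and vocabulary counts over the merged file
# (whitespace tokenization is an assumption, not the card's stated method).
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
                       streaming=True)

vocabulary = set()
total_words = 0
for example in dataset['train']:
    tokens = example['text'].split()
    total_words += len(tokens)
    vocabulary.update(tokens)

print(f"words: {total_words:,}  vocabulary: {len(vocabulary):,}")
```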

## Usage
Load the full dataset from the single txt file:
```python
from datasets import load_dataset

# Load the merged file as one dataset
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt_single/arwiki_20211213.txt')

# OR stream it instead of downloading the file up front
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
                       streaming=True)
```
Load a smaller subset from one of the individual txt files:
```python
from datasets import load_dataset

# Load a single shard from the per-folder split
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt')

# OR stream it
dataset = load_dataset('CALM/arwiki',
                       data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt',
                       streaming=True)
```
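
With `streaming=True`, the result is an iterable dataset; a minimal sketch of consuming it, assuming the default `text` column produced by the text builder:
```python
from datasets import load_dataset
from itertools import islice

# Hedged sketch: peek at the first few records of the streamed dataset
stream = load_dataset('CALM/arwiki',
                      data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
                      streaming=True)

for example in islice(stream['train'], 3):
    print(example['text'][:100])
```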