---
language:
- en
license: cc-by-sa-3.0
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
dataset_info:
- config_name: '100'
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: partial
    num_bytes: 98031
    num_examples: 100
  - name: full
    num_bytes: 315241.0851032817
    num_examples: 100
  download_size: 839250
  dataset_size: 413272.0851032817
- config_name: 100k
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: partial
    num_bytes: 171032873
    num_examples: 100000
  - name: full
    num_bytes: 315241085.10328174
    num_examples: 100000
  download_size: 580606890
  dataset_size: 486273958.10328174
- config_name: 10k
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: partial
    num_bytes: 17059273
    num_examples: 10000
  - name: full
    num_bytes: 31524108.51032817
    num_examples: 10000
  download_size: 58371936
  dataset_size: 48583381.51032817
- config_name: 1k
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: partial
    num_bytes: 1007863
    num_examples: 1000
  - name: full
    num_bytes: 3152410.8510328173
    num_examples: 1000
  download_size: 8616768
  dataset_size: 4160273.8510328177
- config_name: 50k
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: partial
    num_bytes: 85492041
    num_examples: 50000
  - name: full
    num_bytes: 157620542.55164087
    num_examples: 50000
  download_size: 289381080
  dataset_size: 243112583.55164087
- config_name: 5k
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: full
    num_bytes: 15762054.255164085
    num_examples: 5000
  - name: partial
    num_bytes: 5082253
    num_examples: 5000
  download_size: 32285884
  dataset_size: 20844307.255164087
configs:
- config_name: '100'
  data_files:
  - split: full
    path: 100/full-*
  - split: partial
    path: 100/partial-*
- config_name: 100k
  data_files:
  - split: full
    path: 100k/full-*
  - split: partial
    path: 100k/partial-*
- config_name: 10k
  data_files:
  - split: full
    path: 10k/full-*
  - split: partial
    path: 10k/partial-*
- config_name: 1k
  data_files:
  - split: full
    path: 1k/full-*
  - split: partial
    path: 1k/partial-*
- config_name: 50k
  data_files:
  - split: full
    path: 50k/full-*
  - split: partial
    path: 50k/partial-*
- config_name: 5k
  data_files:
  - split: full
    path: 5k/full-*
  - split: partial
    path: 5k/partial-*
---
# `mini_wiki`

This is a sampled version of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. This repository provides 100, 1k, 5k, 10k, 50k, and 100k-sample subsets of the dataset, along with the scripts used to generate them, based on the `"20231101.en"` version.

## Usage

There are two possible splits: `full`, which contains the entire text of each article, and `partial`, which contains only the first 500 words of each article. Use `partial` if you are performing retrieval and only need the opening paragraphs of each article.
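
For illustration, `partial` can be thought of as the `full` text truncated to its first 500 whitespace-delimited words. A minimal sketch (the exact tokenization used to build the dataset is an assumption):

```python
def truncate_to_words(text: str, max_words: int = 500) -> str:
    """Keep only the first `max_words` whitespace-delimited words."""
    return " ".join(text.split()[:max_words])
```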

To load a configuration:

```python
from datasets import load_dataset

# Load the 100-sample, full version of the dataset:
data = load_dataset('xhluca/mini_wiki', name="100", split="full")

print(data)

# Load the partial version with 1k, 5k, 10k, 50k, or 100k samples
data = load_dataset('xhluca/mini_wiki', name="1k", split="partial")
data = load_dataset('xhluca/mini_wiki', name="5k", split="partial")
data = load_dataset('xhluca/mini_wiki', name="10k", split="partial")
data = load_dataset('xhluca/mini_wiki', name="50k", split="partial")
data = load_dataset('xhluca/mini_wiki', name="100k", split="partial")
```
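
Each record has four string fields, `id`, `url`, `title`, and `text`, matching the features declared in the metadata above. For example:

```python
example = data[0]
print(example["id"], example["title"], example["url"])
print(example["text"][:200])  # preview the first 200 characters
```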