xhluca committed
Commit c13dd91
1 Parent(s): 9876071

Create README.md

Files changed (1)
  1. README.md +32 -152

README.md CHANGED
@@ -1,154 +1,34 @@
  ---
- dataset_info:
- - config_name: '100'
-   features:
-   - name: id
-     dtype: string
-   - name: url
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: full
-     num_bytes: 168960
-     num_examples: 100
-   - name: partial
-     num_bytes: 315241.0851032817
-     num_examples: 100
-   download_size: 295827
-   dataset_size: 484201.0851032817
- - config_name: 100k
-   features:
-   - name: id
-     dtype: string
-   - name: url
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: full
-     num_bytes: 171032873
-     num_examples: 100000
-   - name: partial
-     num_bytes: 315241085.10328174
-     num_examples: 100000
-   download_size: 290303445
-   dataset_size: 486273958.10328174
- - config_name: 10k
-   features:
-   - name: id
-     dtype: string
-   - name: url
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: full
-     num_bytes: 17059273
-     num_examples: 10000
-   - name: partial
-     num_bytes: 31524108.51032817
-     num_examples: 10000
-   download_size: 29185968
-   dataset_size: 48583381.51032817
- - config_name: 1k
-   features:
-   - name: id
-     dtype: string
-   - name: url
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: full
-     num_bytes: 1701568
-     num_examples: 1000
-   - name: partial
-     num_bytes: 3152410.8510328173
-     num_examples: 1000
-   download_size: 3011775
-   dataset_size: 4853978.851032818
- - config_name: 50k
-   features:
-   - name: id
-     dtype: string
-   - name: url
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: full
-     num_bytes: 85492041
-     num_examples: 50000
-   - name: partial
-     num_bytes: 157620542.55164087
-     num_examples: 50000
-   download_size: 144690540
-   dataset_size: 243112583.55164087
- - config_name: 5k
-   features:
-   - name: id
-     dtype: string
-   - name: url
-     dtype: string
-   - name: title
-     dtype: string
-   - name: text
-     dtype: string
-   splits:
-   - name: full
-     num_bytes: 8453838
-     num_examples: 5000
-   - name: partial
-     num_bytes: 15762054.255164085
-     num_examples: 5000
-   download_size: 14542194
-   dataset_size: 24215892.255164087
- configs:
- - config_name: '100'
-   data_files:
-   - split: full
-     path: 100/full-*
-   - split: partial
-     path: 100/partial-*
- - config_name: 100k
-   data_files:
-   - split: full
-     path: 100k/full-*
-   - split: partial
-     path: 100k/partial-*
- - config_name: 10k
-   data_files:
-   - split: full
-     path: 10k/full-*
-   - split: partial
-     path: 10k/partial-*
- - config_name: 1k
-   data_files:
-   - split: full
-     path: 1k/full-*
-   - split: partial
-     path: 1k/partial-*
- - config_name: 50k
-   data_files:
-   - split: full
-     path: 50k/full-*
-   - split: partial
-     path: 50k/partial-*
- - config_name: 5k
-   data_files:
-   - split: full
-     path: 5k/full-*
-   - split: partial
-     path: 5k/partial-*
+ license: cc-by-sa-3.0
+ language:
+ - en
+ size_categories:
+ - n<1K
+ - 1K<n<10K
+ - 10K<n<100K
  ---
+ # `mini_wiki`
+
+ This is a sampled version of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. This repository provides scripts to generate 100, 1k, 5k, 10k, 50k, and 100k-article samples of the dataset, based on the `20231101.en` version.
+
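+ For reference, a subset of this kind can be drawn from the source dump roughly as follows. This is only a sketch of the idea, not the exact generation script used for this repository; the seed and sampling calls are assumptions:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the English Wikipedia dump that this dataset is based on.
+ wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
+
+ # Draw a fixed-size random subset, e.g. 1,000 articles (seed chosen arbitrarily).
+ sample_1k = wiki.shuffle(seed=42).select(range(1_000))
+ ```
+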
+ ## Usage
+
+ Each configuration has two splits: `full`, which contains the entire text of each article, and `partial`, which contains only the first 500 words of each article. The `partial` split is recommended if you are performing retrieval and only need the opening paragraphs of each article.
+
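+ As a rough illustration of how the two splits relate, a `partial`-style record can be derived from a `full` record by keeping its first 500 whitespace-separated words. This is a sketch of the idea, not necessarily the exact preprocessing used to build the `partial` split:
+
+ ```python
+ from datasets import load_dataset
+
+ def truncate_text(example, max_words=500):
+     # Keep only the first `max_words` whitespace-separated words of the article.
+     example["text"] = " ".join(example["text"].split()[:max_words])
+     return example
+
+ full = load_dataset("xhluca/mini_wiki", name="100", split="full")
+ partial_like = full.map(truncate_text)
+ ```
+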
+ To load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the 100-sample configuration, full-article split:
+ data = load_dataset("xhluca/mini_wiki", name="100", split="full")
+
+ print(data)
+
+ # Load the partial (first 500 words) split of the 1k, 5k, 10k, 50k, and 100k configurations:
+ data = load_dataset("xhluca/mini_wiki", name="1k", split="partial")
+ data = load_dataset("xhluca/mini_wiki", name="5k", split="partial")
+ data = load_dataset("xhluca/mini_wiki", name="10k", split="partial")
+ data = load_dataset("xhluca/mini_wiki", name="50k", split="partial")
+ data = load_dataset("xhluca/mini_wiki", name="100k", split="partial")
+ ```
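+
+ Each record keeps the `id`, `url`, `title`, and `text` columns of the source dataset, so a loaded split can be indexed directly for retrieval. For example, building a simple in-memory corpus keyed by article id:
+
+ ```python
+ from datasets import load_dataset
+
+ data = load_dataset("xhluca/mini_wiki", name="100", split="partial")
+
+ # Map each article id to its title plus (truncated) text.
+ corpus = {row["id"]: row["title"] + "\n" + row["text"] for row in data}
+ print(f"Indexed {len(corpus)} documents")
+ ```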