mindchain committed
Commit 2871496
1 Parent(s): cf1f0d9

Upload 3 files

Files changed (3)
  1. README (4).md +311 -0
  2. dataset_infos.json +1 -0
  3. wikitext.py +192 -0
README (4).md ADDED
@@ -0,0 +1,311 @@
---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: WikiText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
- config_name: wikitext-103-v1
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 1295579
    num_examples: 4358
  - name: train
    num_bytes: 545142639
    num_examples: 1801350
  - name: validation
    num_bytes: 1154755
    num_examples: 3760
  download_size: 190229076
  dataset_size: 547592973
- config_name: wikitext-2-v1
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 1270951
    num_examples: 4358
  - name: train
    num_bytes: 10918134
    num_examples: 36718
  - name: validation
    num_bytes: 1134127
    num_examples: 3760
  download_size: 4475746
  dataset_size: 13323212
- config_name: wikitext-103-raw-v1
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 1305092
    num_examples: 4358
  - name: train
    num_bytes: 546501673
    num_examples: 1801350
  - name: validation
    num_bytes: 1159292
    num_examples: 3760
  download_size: 191984949
  dataset_size: 548966057
- config_name: wikitext-2-raw-v1
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 1305092
    num_examples: 4358
  - name: train
    num_bytes: 11061733
    num_examples: 36718
  - name: validation
    num_bytes: 1159292
    num_examples: 3760
  download_size: 4721645
  dataset_size: 13526117
---

# Dataset Card for "wikitext"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843)
- **Point of Contact:** [Stephen Merity](mailto:smerity@salesforce.com)
- **Size of downloaded dataset files:** 391.41 MB
- **Size of the generated dataset:** 1.12 GB
- **Total amount of disk used:** 1.52 GB

### Dataset Summary

The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.

Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation
and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models
that can take advantage of long-term dependencies.

Each subset comes in two variants (see the loading sketch below):
- Raw (for character-level work): contains the raw tokens, before the addition of the `<unk>` (unknown) tokens.
- Non-raw (for word-level work): contains only the tokens in its vocabulary (`wiki.train.tokens`, `wiki.valid.tokens`, and `wiki.test.tokens`); out-of-vocabulary tokens have been replaced with the `<unk>` token.
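
The following is a minimal loading sketch with the Hugging Face `datasets` library; it assumes the dataset resolves under the name `wikitext` (substitute this repository's id if you are loading it from here) and uses the config names defined in `wikitext.py`:

```python
from datasets import load_dataset

# Word-level variant: out-of-vocabulary words are already replaced by <unk>.
wikitext2 = load_dataset("wikitext", "wikitext-2-v1")

# Raw variant: original tokens, intended for character-level work.
wikitext2_raw = load_dataset("wikitext", "wikitext-2-raw-v1")

print(wikitext2["train"][10]["text"])      # one line of the tokenized text
print(wikitext2_raw["train"][10]["text"])  # one line of the raw text
```
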
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### wikitext-103-raw-v1

- **Size of downloaded dataset files:** 191.98 MB
- **Size of the generated dataset:** 549.42 MB
- **Total amount of disk used:** 741.41 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "text": "\" The gold dollar or gold one @-@ dollar piece was a coin struck as a regular issue by the United States Bureau of the Mint from..."
}
```

#### wikitext-103-v1

- **Size of downloaded dataset files:** 190.23 MB
- **Size of the generated dataset:** 548.05 MB
- **Total amount of disk used:** 738.27 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```

#### wikitext-2-raw-v1

- **Size of downloaded dataset files:** 4.72 MB
- **Size of the generated dataset:** 13.54 MB
- **Total amount of disk used:** 18.26 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "text": "\" The Sinclair Scientific Programmable was introduced in 1975 , with the same case as the Sinclair Oxford . It was larger than t..."
}
```

#### wikitext-2-v1

- **Size of downloaded dataset files:** 4.48 MB
- **Size of the generated dataset:** 13.34 MB
- **Total amount of disk used:** 17.82 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
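
Each record holds a single line of an article, and blank lines are kept as empty strings (see `_generate_examples` in `wikitext.py`), so they are often filtered out before training. A small sketch, again assuming the dataset loads under the name `wikitext`:

```python
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1")

# Drop the empty-string records that mark blank lines between paragraphs.
non_empty = ds["train"].filter(lambda example: example["text"].strip() != "")
print(ds["train"].num_rows, "->", non_empty.num_rows)
```
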
### Data Fields

The data fields are the same among all splits.

#### wikitext-103-raw-v1
- `text`: a `string` feature.

#### wikitext-103-v1
- `text`: a `string` feature.

#### wikitext-2-raw-v1
- `text`: a `string` feature.

#### wikitext-2-v1
- `text`: a `string` feature.

### Data Splits

| name                |   train | validation | test |
|---------------------|--------:|-----------:|-----:|
| wikitext-103-raw-v1 | 1801350 |       3760 | 4358 |
| wikitext-103-v1     | 1801350 |       3760 | 4358 |
| wikitext-2-raw-v1   |   36718 |       3760 | 4358 |
| wikitext-2-v1       |   36718 |       3760 | 4358 |
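
The example counts above can be checked programmatically. This sketch iterates only the two small configs to avoid the roughly 190 MB downloads of the WikiText-103 variants, but the same loop works for all four config names:

```python
from datasets import load_dataset

for config in ["wikitext-2-raw-v1", "wikitext-2-v1"]:
    ds = load_dataset("wikitext", config)
    print(config, {split: ds[split].num_rows for split in ds})
```
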
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).

### Citation Information

```
@misc{merity2016pointer,
      title={Pointer Sentinel Mixture Models},
      author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
      year={2016},
      eprint={1609.07843},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"wikitext-103-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-103-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1295579, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 545142639, "num_examples": 1801350, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1154755, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip": {"num_bytes": 190229076, "checksum": "242ba0f20b329cfdf1ccc61e9e9e5b59becf189db7f7a81cd2a0e2fc31539590"}}, "download_size": 190229076, "post_processing_size": null, "dataset_size": 547592973, "size_in_bytes": 737822049}, "wikitext-2-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-2-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1270951, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 10918134, "num_examples": 36718, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1134127, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip": {"num_bytes": 4475746, "checksum": "92675f1d63015c1c8b51f1656a52d5bdbc33aafa60cc47a218a66e7ee817488c"}}, "download_size": 4475746, "post_processing_size": null, "dataset_size": 13323212, "size_in_bytes": 17798958}, "wikitext-103-raw-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-103-raw-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1305092, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 546501673, "num_examples": 1801350, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1159292, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip": {"num_bytes": 191984949, "checksum": "91c00ae287f0d699e18605c84afc9e45c192bc6b7797ff8837e5474655a33794"}}, "download_size": 191984949, "post_processing_size": null, "dataset_size": 548966057, "size_in_bytes": 740951006}, "wikitext-2-raw-v1": {"description": " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified\n Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike\n License.\n", "citation": "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},\n year={2016},\n eprint={1609.07843},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/", "license": "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wikitext", "config_name": "wikitext-2-raw-v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1305092, "num_examples": 4358, "dataset_name": "wikitext"}, "train": {"name": "train", "num_bytes": 11061733, "num_examples": 36718, "dataset_name": "wikitext"}, "validation": {"name": "validation", "num_bytes": 1159292, "num_examples": 3760, "dataset_name": "wikitext"}}, "download_checksums": {"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip": {"num_bytes": 4721645, "checksum": "ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11"}}, "download_size": 4721645, "post_processing_size": null, "dataset_size": 13526117, "size_in_bytes": 18247762}}
wikitext.py ADDED
@@ -0,0 +1,192 @@
"""WikiText language modeling dataset."""


import os

import datasets


_CITATION = """\
@misc{merity2016pointer,
      title={Pointer Sentinel Mixture Models},
      author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
      year={2016},
      eprint={1609.07843},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
"""

_DESCRIPTION = """\
 The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
 Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
 License.
"""
_HOMEPAGE = "https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/"
_LICENSE = "Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)"
_DATA_URL = "https://s3.amazonaws.com/research.metamind.io/wikitext"


class WikitextConfig(datasets.BuilderConfig):
    """BuilderConfig for WikiText."""

    def __init__(self, data_url, **kwargs):
        """BuilderConfig for Wikitext

        Args:
          data_url: `string`, url to the dataset (word or raw level)
          **kwargs: keyword arguments forwarded to super.
        """
        super(WikitextConfig, self).__init__(
            version=datasets.Version(
                "1.0.0",
            ),
            **kwargs,
        )
        self.data_url = data_url


class Wikitext(datasets.GeneratorBasedBuilder):
    """WikiText language modeling dataset (word-level and raw variants)."""

    VERSION = datasets.Version("0.1.0")
    BUILDER_CONFIGS = [
        WikitextConfig(
            name="wikitext-103-v1",
            data_url=_DATA_URL + "/" + "wikitext-103-v1.zip",
            description="Word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
        ),
        WikitextConfig(
            name="wikitext-2-v1",
            data_url=_DATA_URL + "/" + "wikitext-2-v1.zip",
            description="Word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
        ),
        WikitextConfig(
            name="wikitext-103-raw-v1",
            data_url=_DATA_URL + "/" + "wikitext-103-raw-v1.zip",
            description="Raw level dataset: the raw tokens before the addition of <unk> tokens. "
            "They should only be used for character level work or for creating newly derived datasets.",
        ),
        WikitextConfig(
            name="wikitext-2-raw-v1",
            data_url=_DATA_URL + "/" + "wikitext-2-raw-v1.zip",
            description="Raw level dataset: the raw tokens before the addition of <unk> tokens. "
            "They should only be used for character level work or for creating newly derived datasets.",
        ),
    ]

    def _info(self):
        # TODO(wikitext): Specifies the datasets.DatasetInfo object
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # datasets.features.FeatureConnectors
            features=datasets.Features(
                {
                    "text": datasets.Value("string")
                    # These are the features of your dataset like images, labels ...
                }
            ),
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # TODO(wikitext): Downloads the data and defines the splits
        # dl_manager is a datasets.download.DownloadManager that can be used to
        # download and extract URLs
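        # Each archive unpacks to a directory named after the corpus (e.g. "wikitext-103" or
        # "wikitext-103-raw"); word-level configs ship "*.tokens" files, raw configs ship "*.raw" files.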
        if self.config.name == "wikitext-103-v1":
            data_file = dl_manager.download_and_extract(self.config.data_url)
            data_dir = os.path.join(data_file, "wikitext-103")
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.test.tokens"), "split": "test"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.train.tokens"), "split": "train"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.valid.tokens"), "split": "valid"},
                ),
            ]
        elif self.config.name == "wikitext-103-raw-v1":
            data_file = dl_manager.download_and_extract(self.config.data_url)
            data_dir = os.path.join(data_file, "wikitext-103-raw")
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.test.raw"), "split": "test"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.train.raw"), "split": "train"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.valid.raw"), "split": "valid"},
                ),
            ]
        elif self.config.name == "wikitext-2-raw-v1":
            data_file = dl_manager.download_and_extract(self.config.data_url)
            data_dir = os.path.join(data_file, "wikitext-2-raw")
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.test.raw"), "split": "test"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.train.raw"), "split": "train"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.valid.raw"), "split": "valid"},
                ),
            ]
        elif self.config.name == "wikitext-2-v1":
            data_file = dl_manager.download_and_extract(self.config.data_url)
            data_dir = os.path.join(data_file, "wikitext-2")
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.test.tokens"), "split": "test"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.TRAIN,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.train.tokens"), "split": "train"},
                ),
                datasets.SplitGenerator(
                    name=datasets.Split.VALIDATION,
                    gen_kwargs={"data_file": os.path.join(data_dir, "wiki.valid.tokens"), "split": "valid"},
                ),
            ]

    def _generate_examples(self, data_file, split):
        """Yields examples."""
        # TODO(wikitext): Yields (key, example) tuples from the dataset
        with open(data_file, encoding="utf-8") as f:
            for idx, row in enumerate(f):
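                # Blank lines are emitted as empty strings (rather than skipped) so that
                # article and paragraph boundaries are preserved in the generated splits.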
                if row.strip():
                    yield idx, {"text": row}
                else:
                    yield idx, {"text": ""}