Patrick Haller committed on
Commit
e34d040
1 Parent(s): acfa9b8

Adding dataset enwik8 (#4321)


* Adding dataset enwik8

* Add missing sections to README, using only one source and split in data
loader

* Formatting

* Update datasets/enwik8/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/enwik8/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/enwik8/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/enwik8/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/enwik8/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/enwik8/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Adding dummy data for enwik8 dataset

* Updating zip files to pass tests

* Apply suggestions from code review

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/d68a7479b4cb906b4b8b79804d61e3315974511a

README.md ADDED
@@ -0,0 +1,163 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ pretty_name: enwik8
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - original
+ task_categories:
+ - fill-mask
+ - text-generation
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
+ ---
+
+ # Dataset Card for enwik8
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://cs.fit.edu/~mmahoney/compression/textdata.html
+ - **Repository:** [Needs More Information]
+ - **Paper:** [Needs More Information]
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ The enwik8 dataset is the first 100,000,000 (10^8) bytes of an English Wikipedia XML dump from 2006 and is typically used to measure a model's ability to compress data.
+
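+ For illustration, a minimal way to load the dataset with the `datasets` library (the default `enwik8` config yields one example per line of the dump):
+
+ ```python
+ from datasets import load_dataset
+
+ # Default "enwik8" config: one example per line of the dump.
+ dataset = load_dataset("enwik8", split="train")
+ print(dataset[0]["text"])
+ ```
+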
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ The text is in English (`en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ - **Size of downloaded dataset files:** 33.39 MB
+ - **Size of generated dataset files:** 99.47 MB
+ - **Total size:** 132.86 MB
+
+ An example instance from the default config:
+
+ ```
+ {
+     "text": "In [[Denmark]], the [[Freetown Christiania]] was created in downtown [[Copenhagen]]...."
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same for both configurations.
+
+ #### enwik8
+
+ - `text`: a `string` feature.
+
+ #### enwik8-raw
+
+ - `text`: a `string` feature.
+
+ ### Data Splits
+
+ | dataset | train |
+ | --- | --- |
+ | enwik8 | 1128024 |
+ | enwik8-raw | 1 |
+
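+ The two configurations expose the same text at different granularities. As a small sketch using the standard `datasets` API:
+
+ ```python
+ from datasets import load_dataset
+
+ # Line-split config: 1,128,024 examples, one per line.
+ lines = load_dataset("enwik8", "enwik8", split="train")
+
+ # Raw config: a single example holding the entire ~100 MB string.
+ raw = load_dataset("enwik8", "enwik8-raw", split="train")
+
+ print(len(lines), len(raw))  # 1128024 1
+ ```
+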
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ [Needs More Information]
+
+ ### Citation Information
+
+ The dataset is not associated with a publication and therefore cannot be cited.
+
+ ### Contributions
+
+ Thanks to [@HallerPatrick](https://github.com/HallerPatrick) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"enwik8": {"description": "The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of Wikipedia\n", "citation": "", "homepage": "https://cs.fit.edu/~mmahoney/compression/textdata.html", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "enwik8", "config_name": "enwik8", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 104299244, "num_examples": 1128024, "dataset_name": "enwik8"}}, "download_checksums": {"http://cs.fit.edu/~mmahoney/compression/enwik8.zip": {"num_bytes": 35012219, "checksum": "9591b88a79ef28eeef58b6213ffbbc1b793db83d67b7d451061829b38e0dcc69"}}, "download_size": 35012219, "post_processing_size": null, "dataset_size": 104299244, "size_in_bytes": 139311463}, "enwik8-raw": {"description": "The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of Wikipedia\n", "citation": "", "homepage": "https://cs.fit.edu/~mmahoney/compression/textdata.html", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "enwik8", "config_name": "enwik8-raw", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 100000004, "num_examples": 1, "dataset_name": "enwik8"}}, "download_checksums": {"http://cs.fit.edu/~mmahoney/compression/enwik8.zip": {"num_bytes": 35012219, "checksum": "9591b88a79ef28eeef58b6213ffbbc1b793db83d67b7d451061829b38e0dcc69"}}, "download_size": 35012219, "post_processing_size": null, "dataset_size": 100000004, "size_in_bytes": 135012223}}
dummy/enwik8-raw/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa249d93a4b1ee5690a1849dd4c702a4b7e13f6bd1cf0f04cfdad072677de150
+ size 1102
dummy/enwik8/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fcd2e7b6af7a450b2765e95ef896dcd1f53c66af1bea3577183bf04c012a650
+ size 1102
enwik8.py ADDED
@@ -0,0 +1,91 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import os
+
+ import datasets
+
+
+ _CITATION = ""
+
+ _DESCRIPTION = """\
+ The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of Wikipedia
+ """
+
+ _HOMEPAGE = "https://cs.fit.edu/~mmahoney/compression/textdata.html"
+
+ _LICENSE = ""
+
+ # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
+ _URLS = {"source": "http://cs.fit.edu/~mmahoney/compression/enwik8.zip"}
+
+
+ class Enwik8(datasets.GeneratorBasedBuilder):
+     """enwik8: the first 10^8 bytes of the English Wikipedia XML dump, as used for the Hutter Prize."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="enwik8",
+             version=VERSION,
+             description="This config splits the full text by line and keeps all content",
+         ),
+         datasets.BuilderConfig(
+             name="enwik8-raw",
+             version=VERSION,
+             description="This config yields the full text as a single raw string",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "enwik8"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Download and extract the single source zip; the archive contains one file named "enwik8".
+         urls = _URLS["source"]
+         data_dir = dl_manager.download_and_extract(urls)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "enwik8"),
+                     "split": "train",
+                 },
+             )
+         ]
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, filepath, split):
+         with open(filepath, encoding="utf-8") as f:
+             if self.config.name.endswith("raw"):
+                 # "enwik8-raw": emit the whole file as one example.
+                 yield 0, {"text": f.read()}
+             else:
+                 # "enwik8": emit one example per line, stripping surrounding whitespace.
+                 for key, line in enumerate(f):
+                     yield key, {"text": line.strip()}
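
As a quick sanity check of the loader script, one can point `load_dataset` at the local dataset folder instead of the Hub; the `./datasets/enwik8` path below is an assumption based on the repository layout referenced in the commit messages:

```python
from datasets import load_dataset

# Path assumes the huggingface/datasets repository root as the working directory.
lines = load_dataset("./datasets/enwik8", "enwik8", split="train")
raw = load_dataset("./datasets/enwik8", "enwik8-raw", split="train")

assert len(raw) == 1          # the raw config holds a single example
print(lines[0]["text"][:80])  # first line of the dump
```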