mtanghu committed on
Commit
382b6d2
1 Parent(s): f3a6d99

Update Enwik8 broken link and information (#4950)


* Update enwik8 fixing the broken link

* Update enwik8 README file sizes

Commit from https://github.com/huggingface/datasets/commit/c7e540d2791a6eff7bdd5d5ecf029ed0da35802e

Files changed (3)
  1. README.md +9 -9
  2. dataset_infos.json +1 -1
  3. enwik8.py +3 -3
README.md CHANGED
@@ -49,19 +49,19 @@ task_ids:

  ## Dataset Description

- - **Homepage:** https://cs.fit.edu/~mmahoney/compression/textdata.html
+ - **Homepage:** http://mattmahoney.net/dc/textdata.html
  - **Repository:** [Needs More Information]
  - **Paper:** [Needs More Information]
- - **Leaderboard:** [Needs More Information]
+ - **Leaderboard:** https://paperswithcode.com/sota/language-modelling-on-enwiki8
  - **Point of Contact:** [Needs More Information]

  ### Dataset Summary

- The enwik8 datasset is based on Wikipedia and is typically used to measure a model's ability to compress data. The data come from a Wikipedia dump from 2006.
+ The enwik8 dataset is the first 100,000,000 (100M) bytes of the English Wikipedia XML dump on Mar. 3, 2006 and is typically used to measure a model's ability to compress data.

  ### Supported Tasks and Leaderboards

- [Needs More Information]
+ A leaderboard for byte-level causal language modelling can be found on [paperswithcode](https://paperswithcode.com/sota/language-modelling-on-enwiki8)

  ### Languages

@@ -71,9 +71,9 @@ en

  ### Data Instances

- - **Size of downloaded dataset files:** 33.39 MB
- - **Size of generated dataset files:** 99.47 MB
- - **Total size:** 132.86 MB
+ - **Size of downloaded dataset files:** 34.76 MB
+ - **Size of generated dataset files:** 97.64 MB
+ - **Total size:** 132.40 MB

  ```
  {
@@ -110,7 +110,7 @@ The data fields are the same among all sets.

  #### Initial Data Collection and Normalization

- [Needs More Information]
+ The data is just English Wikipedia XML dump on Mar. 3, 2006 split by line for enwik8 and not split by line for enwik8-raw.

  #### Who are the source language producers?

@@ -160,4 +160,4 @@ Dataset is not part of a publication, and can therefore not be cited.

  ### Contributions

- Thanks to [@HallerPatrick](https://github.com/HallerPatrick) for adding this dataset.
+ Thanks to [@HallerPatrick](https://github.com/HallerPatrick) for adding this dataset and [@mtanghu](https://github.com/mtanghu) for updating it.
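
A minimal sketch of loading both configurations mentioned above and checking the split-by-line behaviour noted under "Initial Data Collection and Normalization". It assumes the dataset is addressed on the Hub simply as `enwik8`; the config names and example counts come from `enwik8.py` and `dataset_infos.json` below.

```python
# Minimal sketch: load both configs of the updated dataset and compare their shapes.
# Assumes the Hub identifier is simply "enwik8"; config names come from enwik8.py.
from datasets import load_dataset

enwik8 = load_dataset("enwik8", "enwik8", split="train")          # one example per line
enwik8_raw = load_dataset("enwik8", "enwik8-raw", split="train")  # a single ~100 MB example

print(len(enwik8))      # expected 1,128,024 examples per dataset_infos.json
print(len(enwik8_raw))  # expected 1 example
```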
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"enwik8": {"description": "The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 byte of Wikipedia\n", "citation": "", "homepage": "https://cs.fit.edu/~mmahoney/compression/textdata.html", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "enwik8", "config_name": "enwik8", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 104299244, "num_examples": 1128024, "dataset_name": "enwik8"}}, "download_checksums": {"http://cs.fit.edu/~mmahoney/compression/enwik8.zip": {"num_bytes": 35012219, "checksum": "9591b88a79ef28eeef58b6213ffbbc1b793db83d67b7d451061829b38e0dcc69"}}, "download_size": 35012219, "post_processing_size": null, "dataset_size": 104299244, "size_in_bytes": 139311463}, "enwik8-raw": {"description": "The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 byte of Wikipedia\n", "citation": "", "homepage": "https://cs.fit.edu/~mmahoney/compression/textdata.html", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "enwik8", "config_name": "enwik8-raw", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 100000004, "num_examples": 1, "dataset_name": "enwik8"}}, "download_checksums": {"http://cs.fit.edu/~mmahoney/compression/enwik8.zip": {"num_bytes": 35012219, "checksum": "9591b88a79ef28eeef58b6213ffbbc1b793db83d67b7d451061829b38e0dcc69"}}, "download_size": 35012219, "post_processing_size": null, "dataset_size": 100000004, "size_in_bytes": 135012223}}
+ {"enwik8": {"description": "The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of English Wikipedia in 2006 in XML\n", "citation": "", "homepage": "http://mattmahoney.net/dc/textdata.html", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "enwik8", "config_name": "enwik8", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 104299244, "num_examples": 1128024, "dataset_name": "enwik8"}}, "download_checksums": {"http://mattmahoney.net/dc/enwik8.zip": {"num_bytes": 36445475, "checksum": "547994d9980ebed1288380d652999f38a14fe291a6247c157c3d33d4932534bc"}}, "download_size": 36445475, "post_processing_size": null, "dataset_size": 102383126, "size_in_bytes": 138828601}, "enwik8-raw": {"description": "The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of English Wikipedia in 2006 in XML\n", "citation": "", "homepage": "http://mattmahoney.net/dc/textdata.html", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "enwik8", "config_name": "enwik8-raw", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 100000008, "num_examples": 1, "dataset_name": "enwik8"}}, "download_checksums": {"http://mattmahoney.net/dc/enwik8.zip": {"num_bytes": 36445475, "checksum": "547994d9980ebed1288380d652999f38a14fe291a6247c157c3d33d4932534bc"}}, "download_size": 36445475, "post_processing_size": null, "dataset_size": 100000008, "size_in_bytes": 136445483}}
enwik8.py CHANGED
@@ -21,16 +21,16 @@ _CITATION = ""

  # You can copy an official description
  _DESCRIPTION = """\
- The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 byte of Wikipedia
+ The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of English Wikipedia in 2006 in XML
  """

- _HOMEPAGE = "https://cs.fit.edu/~mmahoney/compression/textdata.html"
+ _HOMEPAGE = "http://mattmahoney.net/dc/textdata.html"

  _LICENSE = ""

  # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
  # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _URLS = {"source": "http://cs.fit.edu/~mmahoney/compression/enwik8.zip"}
+ _URLS = {"source": "http://mattmahoney.net/dc/enwik8.zip"}


  class Enwik8(datasets.GeneratorBasedBuilder):
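
Since the README now points at the byte-level language-modelling leaderboard, a minimal sketch of the conventional 90M/5M/5M byte split used by most enwik8 results may be useful. The split is a community convention, not something defined by `enwik8.py`, and the archive member name `enwik8` is assumed from the script's download URL.

```python
# Minimal sketch: the conventional 90M/5M/5M byte split used by most enwik8
# language-modelling benchmarks. The split is a community convention, not part
# of this dataset script; the member name "enwik8" inside the zip is assumed.
import io
import urllib.request
import zipfile

buf = urllib.request.urlopen("http://mattmahoney.net/dc/enwik8.zip").read()
data = zipfile.ZipFile(io.BytesIO(buf)).read("enwik8")  # first 10^8 bytes of the 2006 XML dump

train = data[:90_000_000]
valid = data[90_000_000:95_000_000]
test = data[95_000_000:]
print(len(train), len(valid), len(test))  # 90000000 5000000 5000000
```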