parquet-converter committed on
Commit
a025b01
1 Parent(s): 1e0cb50

Update parquet files

README.md DELETED
@@ -1,175 +0,0 @@
- ---
- language:
- - en
- paperswithcode_id: wiki-40b
- pretty_name: Wiki-40B
- dataset_info:
-   features:
-   - name: wikidata_id
-     dtype: string
-   - name: text
-     dtype: string
-   - name: version_id
-     dtype: string
-   config_name: en
-   splits:
-   - name: train
-     num_bytes: 9423623904
-     num_examples: 2926536
-   - name: validation
-     num_bytes: 527383016
-     num_examples: 163597
-   - name: test
-     num_bytes: 522219464
-     num_examples: 162274
-   download_size: 0
-   dataset_size: 10473226384
- ---
-
- # Dataset Card for "wiki40b"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://research.google/pubs/pub49029/](https://research.google/pubs/pub49029/)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 0.00 MB
- - **Size of the generated dataset:** 9988.05 MB
- - **Total amount of disk used:** 9988.05 MB
-
- ### Dataset Summary
-
- Cleaned-up text from 40+ Wikipedia language editions, restricted to pages that
- correspond to entities. The dataset has train/dev/test splits per language.
- It is cleaned up by page filtering to remove disambiguation pages,
- redirect pages, deleted pages, and non-entity pages. Each example contains the
- Wikidata ID of the entity and the full Wikipedia article after page processing
- that removes non-content sections and structured objects.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### en
-
- - **Size of downloaded dataset files:** 0.00 MB
- - **Size of the generated dataset:** 9988.05 MB
- - **Total amount of disk used:** 9988.05 MB
-
- An example of 'train' looks as follows.
- ```
-
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### en
- - `wikidata_id`: a `string` feature.
- - `text`: a `string` feature.
- - `version_id`: a `string` feature.
-
- ### Data Splits
-
- |name| train |validation| test |
- |----|------:|---------:|-----:|
- |en  |2926536|    163597|162274|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
-
- ```
-
-
- ### Contributions
-
- Thanks to [@jplu](https://github.com/jplu), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
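
The deleted card documents a single `en` config with three string fields (`wikidata_id`, `text`, `version_id`). A minimal sketch of inspecting those fields, assuming a recent `datasets` release that can read the converted parquet shards directly from the Hub:

```python
# Sketch: load the English config described in the deleted card and look at
# its three string fields. Assumes a recent `datasets` release that serves
# the parquet-converted data for this repository.
from datasets import load_dataset

ds = load_dataset("wiki40b", "en", split="validation")

print(ds.features)                 # wikidata_id, text, version_id (all strings)
example = ds[0]
print(example["wikidata_id"], example["version_id"])
print(example["text"][:200])       # article text after page processing
```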
ar/test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f37c6fd7630195215931f9064b6638242f44fe02341d89c75bd59c75423d17b
+ size 21041564
ar/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4c3f98f6e4558bad28d000f8e8f4d6c2f13dd28a6a9a25006ececa60cc5e5ac
+ size 183524669
ar/train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ca1e07f8698d39b75cf48ff3bb2d1077a36728d85230bc6821cd7cbaa169a0c
+ size 188004754
ar/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79e6e1403b99e73c930e6061094836960e3a7135e6506fff7df3f3b5ffe1af97
+ size 21112541
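
The four files above are git-lfs pointer files; the actual parquet payloads are only present in a local clone after LFS has fetched them. A small sketch of checking one shard with `pyarrow`, assuming `git lfs pull` has already been run:

```python
# Sketch: inspect one of the added Arabic shards in a local clone.
# Assumes `git lfs pull` has replaced the pointer files with real parquet data.
import pyarrow.parquet as pq

table = pq.read_table("ar/validation/0000.parquet")
print(table.schema)    # expected columns: wikidata_id, text, version_id
print(table.num_rows)
```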
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"en": {"description": "\nClean-up text for 40+ Wikipedia languages editions of pages\ncorrespond to entities. The datasets have train/dev/test splits per language.\nThe dataset is cleaned up by page filtering to remove disambiguation pages,\nredirect pages, deleted pages, and non-entity pages. Each example contains the\nwikidata id of the entity, and the full Wikipedia article after page processing\nthat removes non-content sections and structured objects.\n", "citation": "\n", "homepage": "https://research.google/pubs/pub49029/", "license": "", "features": {"wikidata_id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "version_id": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wiki40b", "config_name": "en", "version": {"version_str": "1.1.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9423623904, "num_examples": 2926536, "dataset_name": "wiki40b"}, "validation": {"name": "validation", "num_bytes": 527383016, "num_examples": 163597, "dataset_name": "wiki40b"}, "test": {"name": "test", "num_bytes": 522219464, "num_examples": 162274, "dataset_name": "wiki40b"}}, "download_checksums": {}, "download_size": 0, "dataset_size": 10473226384, "size_in_bytes": 10473226384}}
wiki40b.py DELETED
@@ -1,182 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- """Wiki40B: A clean Wikipedia dataset for 40+ languages."""
-
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """
- """
-
- _DESCRIPTION = """
- Clean-up text for 40+ Wikipedia languages editions of pages
- correspond to entities. The datasets have train/dev/test splits per language.
- The dataset is cleaned up by page filtering to remove disambiguation pages,
- redirect pages, deleted pages, and non-entity pages. Each example contains the
- wikidata id of the entity, and the full Wikipedia article after page processing
- that removes non-content sections and structured objects.
- """
-
- _LICENSE = """
- This work is licensed under the Creative Commons Attribution-ShareAlike
- 3.0 Unported License. To view a copy of this license, visit
- http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to
- Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
- """
-
- _URL = "https://research.google/pubs/pub49029/"
-
- _DATA_DIRECTORY = "gs://tfds-data/downloads/wiki40b/tfrecord_prod"
-
- WIKIPEDIA_LANGUAGES = [
-     "en",
-     "ar",
-     "zh-cn",
-     "zh-tw",
-     "nl",
-     "fr",
-     "de",
-     "it",
-     "ja",
-     "ko",
-     "pl",
-     "pt",
-     "ru",
-     "es",
-     "th",
-     "tr",
-     "bg",
-     "ca",
-     "cs",
-     "da",
-     "el",
-     "et",
-     "fa",
-     "fi",
-     "he",
-     "hi",
-     "hr",
-     "hu",
-     "id",
-     "lt",
-     "lv",
-     "ms",
-     "no",
-     "ro",
-     "sk",
-     "sl",
-     "sr",
-     "sv",
-     "tl",
-     "uk",
-     "vi",
- ]
-
-
- class Wiki40bConfig(datasets.BuilderConfig):
-     """BuilderConfig for Wiki40B."""
-
-     def __init__(self, language=None, **kwargs):
-         """BuilderConfig for Wiki40B.
-
-         Args:
-           language: string, the language code for the Wiki40B dataset to use.
-           **kwargs: keyword arguments forwarded to super.
-         """
-         super(Wiki40bConfig, self).__init__(
-             name=str(language), description=f"Wiki40B dataset for {language}.", **kwargs
-         )
-         self.language = language
-
-
- _VERSION = datasets.Version("1.1.0")
-
-
- class Wiki40b(datasets.BeamBasedBuilder):
-     """Wiki40B: A Clean Wikipedia Dataset for Multilingual Language Modeling."""
-
-     BUILDER_CONFIGS = [
-         Wiki40bConfig(
-             version=_VERSION,
-             language=lang,
-         )  # pylint:disable=g-complex-comprehension
-         for lang in WIKIPEDIA_LANGUAGES
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "wikidata_id": datasets.Value("string"),
-                     "text": datasets.Value("string"),
-                     "version_id": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_URL,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-
-         lang = self.config.language
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"filepaths": f"{_DATA_DIRECTORY}/train/{lang}_examples-*"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"filepaths": f"{_DATA_DIRECTORY}/dev/{lang}_examples-*"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"filepaths": f"{_DATA_DIRECTORY}/test/{lang}_examples-*"},
-             ),
-         ]
-
-     def _build_pcollection(self, pipeline, filepaths):
-         """Build PCollection of examples."""
-         import apache_beam as beam
-         import tensorflow as tf
-
-         logger.info("generating examples from = %s", filepaths)
-
-         def _extract_content(example):
-             """Extracts content from a TFExample."""
-             wikidata_id = example.features.feature["wikidata_id"].bytes_list.value[0].decode("utf-8")
-             text = example.features.feature["text"].bytes_list.value[0].decode("utf-8")
-             version_id = example.features.feature["version_id"].bytes_list.value[0].decode("utf-8")
-
-             # wikidata_id could be duplicated with different texts.
-             yield wikidata_id + text, {
-                 "wikidata_id": wikidata_id,
-                 "text": text,
-                 "version_id": version_id,
-             }
-
-         return (
-             pipeline
-             | beam.io.ReadFromTFRecord(filepaths, coder=beam.coders.ProtoCoder(tf.train.Example))
-             | beam.FlatMap(_extract_content)
-         )
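
The deleted script is an Apache Beam based builder that reads the original TFRecord shards from `gs://tfds-data/downloads/wiki40b/tfrecord_prod`, which is why generating a config used to require a Beam runner. A hedged sketch of how it was typically invoked with an older `datasets` release that still supported Beam builders (Beam support was removed in later releases):

```python
# Sketch only: running the deleted Beam-based builder before the parquet
# conversion. Requires an older `datasets` release with Beam support, plus
# `apache_beam` and `tensorflow` installed.
from datasets import load_dataset

ds = load_dataset("wiki40b", "ar", beam_runner="DirectRunner")
print(ds["train"][0]["wikidata_id"])
```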