albertvillanova (HF staff) committed on
Commit 81aded6
1 parent: c5263e3

Remove old wikipedia leftovers (#3989)


* Remove old wikipedia leftovers

* Fix some typos in wiki_snippets script

* Update metadata JSON of wiki_snippets dataset

* Update dataset card of wiki_snippets

* Fix tag in dataset card

* Add task tags to dataset card

* Make consistent the conversion to MB

* Add comment warning not to use load_dataset in another script

Commit from https://github.com/huggingface/datasets/commit/58ee1bbd36b87b8a791cc663b90553bcf903b71f
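Regarding "Make consistent the conversion to MB": the size figures quoted in the dataset card below appear to be the byte counts from `dataset_infos.json` divided by 2**20 and rounded to two decimals. A minimal sketch of that assumed conversion:

```python
def bytes_to_mb(num_bytes: int) -> float:
    # Assumed convention: the card's "MB" figures are num_bytes / 2**20 (i.e. MiB),
    # rounded to two decimals; the commit itself does not spell the formula out.
    return round(num_bytes / 2**20, 2)

# Byte counts taken from dataset_infos.json in this commit
print(bytes_to_mb(12_938_641_686))  # 12339.25 -> "12339.25 MB" for wiki40b_en_100_0
print(bytes_to_mb(26_407_884_393))  # 25184.52 -> "25184.52 MB" for wikipedia_en_100_0
```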

Files changed (3)
  1. README.md +70 -34
  2. dataset_infos.json +1 -1
  3. wiki_snippets.py +11 -8
README.md CHANGED
@@ -1,8 +1,27 @@
 ---
+annotations_creators:
+- no-annotation
+language_creators:
+- crowdsourced
 languages:
 - en
-paperswithcode_id: null
+licenses:
+- unknown
+multilinguality:
+- multilingual
 pretty_name: WikiSnippets
+size_categories:
+- "10M<n<100M"
+source_datasets:
+- extended|wiki40b
+- extended|wikipedia
+task_categories:
+- sequence-modeling
+- other
+task_ids:
+- language-modeling
+- other-text-search
+paperswithcode_id: null
 ---

 # Dataset Card for "wiki_snippets"
@@ -37,9 +56,6 @@ pretty_name: WikiSnippets
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 0.00 MB
-- **Size of the generated dataset:** 35001.08 MB
-- **Total amount of disk used:** 35001.08 MB

 ### Dataset Summary

@@ -55,37 +71,56 @@ Wikipedia version split into plain text snippets for dense semantic indexing.

 ## Dataset Structure

-We show detailed information for up to 5 configurations of the dataset.
+We show detailed information for 2 configurations of the dataset (with 100 snippet passage length and 0 overlap) in
+English:
+- wiki40b_en_100_0: Wiki-40B
+- wikipedia_en_100_0: Wikipedia

 ### Data Instances

 #### wiki40b_en_100_0

 - **Size of downloaded dataset files:** 0.00 MB
-- **Size of the generated dataset:** 12268.10 MB
-- **Total amount of disk used:** 12268.10 MB
+- **Size of the generated dataset:** 12339.25 MB
+- **Total amount of disk used:** 12339.25 MB

-An example of 'train' looks as follows.
+An example of 'train' looks as follows:
 ```
-
+{'_id': '{"datasets_id": 0, "wiki_id": "Q1294448", "sp": 2, "sc": 0, "ep": 6, "ec": 610}',
+ 'datasets_id': 0,
+ 'wiki_id': 'Q1294448',
+ 'start_paragraph': 2,
+ 'start_character': 0,
+ 'end_paragraph': 6,
+ 'end_character': 610,
+ 'article_title': 'Ági Szalóki',
+ 'section_title': 'Life',
+ 'passage_text': "Ági Szalóki Life She started singing as a toddler, considering Márta Sebestyén a role model. Her musical background is traditional folk music; she first won recognition for singing with Ökrös in a traditional folk style, and Besh o droM, a Balkan gypsy brass band. With these ensembles she toured around the world from the Montreal Jazz Festival, through Glastonbury Festival to the Théatre de la Ville in Paris, from New York to Beijing.\nSince 2005, she began to pursue her solo career and explore various genres, such as jazz, thirties ballads, or children's songs.\nUntil now, three of her six released albums"}
 ```

 #### wikipedia_en_100_0

 - **Size of downloaded dataset files:** 0.00 MB
-- **Size of the generated dataset:** 22732.97 MB
-- **Total amount of disk used:** 22732.97 MB
+- **Size of the generated dataset:** 25184.52 MB
+- **Total amount of disk used:** 25184.52 MB

-An example of 'train' looks as follows.
+An example of 'train' looks as follows:
 ```
-
+{'_id': '{"datasets_id": 0, "wiki_id": "Anarchism", "sp": 0, "sc": 0, "ep": 2, "ec": 129}',
+ 'datasets_id': 0,
+ 'wiki_id': 'Anarchism',
+ 'start_paragraph': 0,
+ 'start_character': 0,
+ 'end_paragraph': 2,
+ 'end_character': 129,
+ 'article_title': 'Anarchism',
+ 'section_title': 'Start',
+ 'passage_text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and has a strong historical association with anti-capitalism and socialism. Humans lived in societies without formal hierarchies long before the establishment of formal states, realms, or empires. With the'}
 ```

 ### Data Fields

-The data fields are the same among all splits.
-
-#### wiki40b_en_100_0
+The data fields are the same for all configurations:
 - `_id`: a `string` feature.
 - `datasets_id`: a `int32` feature.
 - `wiki_id`: a `string` feature.
@@ -97,24 +132,13 @@ The data fields are the same among all splits.
 - `section_title`: a `string` feature.
 - `passage_text`: a `string` feature.

-#### wikipedia_en_100_0
-- `_id`: a `string` feature.
-- `datasets_id`: a `int32` feature.
-- `wiki_id`: a `string` feature.
-- `start_paragraph`: a `int32` feature.
-- `start_character`: a `int32` feature.
-- `end_paragraph`: a `int32` feature.
-- `end_character`: a `int32` feature.
-- `article_title`: a `string` feature.
-- `section_title`: a `string` feature.
-- `passage_text`: a `string` feature.

 ### Data Splits

-| name | train |
-|------------------|-------:|
-|wiki40b_en_100_0 |17553713|
-|wikipedia_en_100_0|30820408|
+| name               |    train |
+|:-------------------|---------:|
+| wiki40b_en_100_0   | 17553713 |
+| wikipedia_en_100_0 | 33849898 |

 ## Dataset Creation

@@ -168,17 +192,29 @@ The data fields are the same among all splits.

 ### Licensing Information

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+See licensing information of source datasets.

 ### Citation Information

+Cite source datasets:
+
+- Wiki-40B:
+```
+@inproceedings{49029,
+title = {Wiki-40B: Multilingual Language Model Dataset},
+author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou},
+year = {2020},
+booktitle = {LREC 2020}
+}
 ```
-@ONLINE {wikidump,
+
+- Wikipedia:
+```
+@ONLINE{wikidump,
 author = "Wikimedia Foundation",
 title = "Wikimedia Downloads",
 url = "https://dumps.wikimedia.org"
 }
-
 ```

 
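The two configurations documented in the card above can be loaded with the standard `datasets` API. A minimal usage sketch (building either configuration downloads and processes the full source Wiki-40B/Wikipedia data, so expect it to be slow and disk-heavy):

```python
from datasets import load_dataset

# Configuration names follow the card: <source>_<language>_<passage length>_<overlap>
snippets = load_dataset("wiki_snippets", name="wiki40b_en_100_0", split="train")

example = snippets[0]
print(example["article_title"], example["section_title"])
print(example["passage_text"][:100])
```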
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"wiki40b_en_100_0": {"description": "Wikipedia version split into plain text snippets for dense semantic indexing.\n", "citation": "@ONLINE {wikidump,\n author = \"Wikimedia Foundation\",\n title = \"Wikimedia Downloads\",\n url = \"https://dumps.wikimedia.org\"\n}\n", "homepage": "https://dumps.wikimedia.org", "license": "", "features": {"_id": {"dtype": "string", "id": null, "_type": "Value"}, "datasets_id": {"dtype": "int32", "id": null, "_type": "Value"}, "wiki_id": {"dtype": "string", "id": null, "_type": "Value"}, "start_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}, "article_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wiki_snippets", "config_name": "wiki40b_en_100_0", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12864038411, "num_examples": 17553713, "dataset_name": "wiki_snippets"}}, "download_checksums": {}, "download_size": 0, "dataset_size": 12864038411, "size_in_bytes": 12864038411}, "wikipedia_en_100_0": {"description": "Wikipedia version split into plain text snippets for dense semantic indexing.\n", "citation": "@ONLINE {wikidump,\n author = \"Wikimedia Foundation\",\n title = \"Wikimedia Downloads\",\n url = \"https://dumps.wikimedia.org\"\n}\n", "homepage": "https://dumps.wikimedia.org", "license": "", "features": {"_id": {"dtype": "string", "id": null, "_type": "Value"}, "datasets_id": {"dtype": "int32", "id": null, "_type": "Value"}, "wiki_id": {"dtype": "string", "id": null, "_type": "Value"}, "start_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}, "article_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "wiki_snippets", "config_name": "wikipedia_en_100_0", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 23837250095, "num_examples": 30820408, "dataset_name": "wiki_snippets"}}, "download_checksums": {}, "download_size": 0, "dataset_size": 23837250095, "size_in_bytes": 23837250095}}
+ {"wiki40b_en_100_0": {"description": "Wikipedia version split into plain text snippets for dense semantic indexing.\n", "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n", "homepage": "https://dumps.wikimedia.org", "license": "", "features": {"_id": {"dtype": "string", "id": null, "_type": "Value"}, "datasets_id": {"dtype": "int32", "id": null, "_type": "Value"}, "wiki_id": {"dtype": "string", "id": null, "_type": "Value"}, "start_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}, "article_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_snippets", "config_name": "wiki40b_en_100_0", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12938641686, "num_examples": 17553713, "dataset_name": "wiki_snippets"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 12938641686, "size_in_bytes": 12938641686}, "wikipedia_en_100_0": {"description": "Wikipedia version split into plain text snippets for dense semantic indexing.\n", "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n", "homepage": "https://dumps.wikimedia.org", "license": "", "features": {"_id": {"dtype": "string", "id": null, "_type": "Value"}, "datasets_id": {"dtype": "int32", "id": null, "_type": "Value"}, "wiki_id": {"dtype": "string", "id": null, "_type": "Value"}, "start_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}, "article_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_snippets", "config_name": "wikipedia_en_100_0", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 26407884393, "num_examples": 33849898, "dataset_name": "wiki_snippets"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 26407884393, "size_in_bytes": 26407884393}}
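The updated split counts and sizes can be cross-checked against the card with a short script. A sketch that assumes the `dataset_infos.json` above sits in the current directory:

```python
import json

with open("dataset_infos.json") as f:
    infos = json.load(f)

# Print config name, version, number of train examples, and size in MB (bytes / 2**20)
for config_name, info in infos.items():
    train = info["splits"]["train"]
    size_mb = info["dataset_size"] / 2**20
    print(config_name, info["version"]["version_str"], train["num_examples"], f"{size_mb:.2f} MB")

# Expected with this commit:
#   wiki40b_en_100_0 1.0.0 17553713 12339.25 MB
#   wikipedia_en_100_0 2.0.0 33849898 25184.52 MB
```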
wiki_snippets.py CHANGED
@@ -1,3 +1,6 @@
+# WARNING: Please, do not use the code in this script as a template to create another script:
+# - It is a bad practice to use `datasets.load_dataset` inside a loading script. Please, avoid doing it.
+
 import json
 import math

@@ -80,7 +83,7 @@ def wikipedia_article_snippets(article, passage_len=100, overlap=0):
     for i in range(math.ceil(len(word_map) / step_size)):
         pre_toks = word_map[i * step_size : i * step_size + passage_len]
         start_section_id = max([0] + [j for j in section_indices if j <= pre_toks[0][0]])
-        section_ids = [j for j in section_indices if j >= start_section_id and j <= pre_toks[-1][0]]
+        section_ids = [j for j in section_indices if start_section_id <= j <= pre_toks[-1][0]]
         section_ids = section_ids if len(section_ids) > 0 else [-1]
         passage_text = " ".join([w for p_id, s_id, w in pre_toks])
         passages += [
@@ -98,15 +101,15 @@ def wikipedia_article_snippets(article, passage_len=100, overlap=0):
     return passages


-_SPLIT_FUCNTION_MAP = {
+_SPLIT_FUNCTION_MAP = {
     "wikipedia": wikipedia_article_snippets,
     "wiki40b": wiki40b_article_snippets,
 }


-def generate_snippets(wikipedia, split_funtion, passage_len=100, overlap=0):
+def generate_snippets(wikipedia, split_function, passage_len=100, overlap=0):
     for i, article in enumerate(wikipedia):
-        for doc in split_funtion(article, passage_len, overlap):
+        for doc in split_function(article, passage_len, overlap):
             part_id = json.dumps(
                 {
                     "datasets_id": i,
@@ -152,9 +155,9 @@ class WikiSnippets(datasets.GeneratorBasedBuilder):
         ),
         WikiSnippetsConfig(
             name="wikipedia_en_100_0",
-            version=datasets.Version("1.0.0"),
+            version=datasets.Version("2.0.0"),
             wikipedia_name="wikipedia",
-            wikipedia_version_name="20200501.en",
+            wikipedia_version_name="20220301.en",
             snippets_length=100,
             snippets_overlap=0,
         ),
@@ -185,7 +188,7 @@ class WikiSnippets(datasets.GeneratorBasedBuilder):
         )

     def _split_generators(self, dl_manager):
-
+        # WARNING: It is a bad practice to use `datasets.load_dataset` inside a loading script. Please, avoid doing it.
         wikipedia = datasets.load_dataset(
             path=self.config.wikipedia_name,
             name=self.config.wikipedia_version_name,
@@ -199,7 +202,7 @@ class WikiSnippets(datasets.GeneratorBasedBuilder):
         logger.info(f"generating examples from = {self.config.wikipedia_name} {self.config.wikipedia_version_name}")
         for split in wikipedia:
             dset = wikipedia[split]
-            split_function = _SPLIT_FUCNTION_MAP[self.config.wikipedia_name]
+            split_function = _SPLIT_FUNCTION_MAP[self.config.wikipedia_name]
             for doc in generate_snippets(
                 dset, split_function, passage_len=self.config.snippets_length, overlap=self.config.snippets_overlap
             ):
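
The windowing performed by `wikipedia_article_snippets` (which the `section_ids` fix above touches) can be illustrated with a simplified, self-contained sketch; `step_size = passage_len - overlap` is an assumption here, since the hunks above do not show how `step_size` is defined:

```python
import math

def split_into_passages(words, passage_len=100, overlap=0):
    # Windows of `passage_len` words, advancing by `step_size` words, so that
    # consecutive passages share `overlap` words (assumed relationship).
    step_size = passage_len - overlap
    passages = []
    for i in range(math.ceil(len(words) / step_size)):
        passages.append(words[i * step_size : i * step_size + passage_len])
    return passages

words = [f"w{i}" for i in range(250)]
print([len(p) for p in split_into_passages(words, passage_len=100, overlap=20)])
# [100, 100, 90, 10] -- the ceil-based loop also emits a short trailing window
```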