Dataset: wmt/wmt17

parquet-converter committed
Commit 9f68d4b
1 Parent(s): e9c1892

Update parquet files
README.md DELETED
@@ -1,360 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - found
- language:
- - cs
- - de
- - en
- - fi
- - lv
- - ru
- - tr
- - zh
- license:
- - unknown
- multilinguality:
- - translation
- size_categories:
- - 10M<n<100M
- source_datasets:
- - extended|europarl_bilingual
- - extended|news_commentary
- - extended|setimes
- - extended|un_multi
- task_categories:
- - translation
- task_ids: []
- pretty_name: WMT17
- paperswithcode_id: null
- dataset_info:
- - config_name: cs-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - cs
-         - en
-   splits:
-   - name: train
-     num_bytes: 300698431
-     num_examples: 1018291
-   - name: validation
-     num_bytes: 707870
-     num_examples: 2999
-   - name: test
-     num_bytes: 674430
-     num_examples: 3005
-   download_size: 1784240523
-   dataset_size: 302080731
- - config_name: de-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - de
-         - en
-   splits:
-   - name: train
-     num_bytes: 1715537443
-     num_examples: 5906184
-   - name: validation
-     num_bytes: 735516
-     num_examples: 2999
-   - name: test
-     num_bytes: 729519
-     num_examples: 3004
-   download_size: 1945382236
-   dataset_size: 1717002478
- - config_name: fi-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - fi
-         - en
-   splits:
-   - name: train
-     num_bytes: 743856525
-     num_examples: 2656542
-   - name: validation
-     num_bytes: 1410515
-     num_examples: 6000
-   - name: test
-     num_bytes: 1388828
-     num_examples: 6004
-   download_size: 434531933
-   dataset_size: 746655868
- - config_name: lv-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - lv
-         - en
-   splits:
-   - name: train
-     num_bytes: 517419100
-     num_examples: 3567528
-   - name: validation
-     num_bytes: 544604
-     num_examples: 2003
-   - name: test
-     num_bytes: 530474
-     num_examples: 2001
-   download_size: 169634544
-   dataset_size: 518494178
- - config_name: ru-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - ru
-         - en
-   splits:
-   - name: train
-     num_bytes: 11000075522
-     num_examples: 24782720
-   - name: validation
-     num_bytes: 1050677
-     num_examples: 2998
-   - name: test
-     num_bytes: 1040195
-     num_examples: 3001
-   download_size: 3582640660
-   dataset_size: 11002166394
- - config_name: tr-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - tr
-         - en
-   splits:
-   - name: train
-     num_bytes: 60416617
-     num_examples: 205756
-   - name: validation
-     num_bytes: 732436
-     num_examples: 3000
-   - name: test
-     num_bytes: 752773
-     num_examples: 3007
-   download_size: 62263061
-   dataset_size: 61901826
- - config_name: zh-en
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - zh
-         - en
-   splits:
-   - name: train
-     num_bytes: 5529286149
-     num_examples: 25134743
-   - name: validation
-     num_bytes: 589591
-     num_examples: 2002
-   - name: test
-     num_bytes: 540347
-     num_examples: 2001
-   download_size: 2314906945
-   dataset_size: 5530416087
- ---
-
- # Dataset Card for "wmt17"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [http://www.statmt.org/wmt17/translation-task.html](http://www.statmt.org/wmt17/translation-task.html)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 1700.58 MB
- - **Size of the generated dataset:** 288.10 MB
- - **Total amount of disk used:** 1988.68 MB
-
- ### Dataset Summary
-
- <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
- <p><b>Warning:</b> There are issues with the Common Crawl corpus data (<a href="https://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz">training-parallel-commoncrawl.tgz</a>):</p>
- <ul>
- <li>Non-English files contain many English sentences.</li>
- <li>The "parallel" English sentences are not aligned: they are uncorrelated with their counterparts.</li>
- </ul>
- <p>We have contacted the WMT organizers.</p>
- </div>
-
- Translation dataset based on the data from statmt.org.
-
- Versions exist for different years using a combination of data
- sources. The base `wmt` allows you to create a custom dataset by choosing
- your own data/language pair. This can be done as follows:
-
- ```python
- from datasets import Split, inspect_dataset, load_dataset_builder
-
- inspect_dataset("wmt17", "path/to/scripts")
- builder = load_dataset_builder(
-     "path/to/scripts/wmt_utils.py",
-     language_pair=("fr", "de"),
-     subsets={
-         Split.TRAIN: ["commoncrawl_frde"],
-         Split.VALIDATION: ["euelections_dev2019"],
-     },
- )
-
- # Standard version
- builder.download_and_prepare()
- ds = builder.as_dataset()
-
- # Streamable version
- ds = builder.as_streaming_dataset()
- ```
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### cs-en
-
- - **Size of downloaded dataset files:** 1700.58 MB
- - **Size of the generated dataset:** 288.10 MB
- - **Total amount of disk used:** 1988.68 MB
-
- An example of 'train' looks as follows.
- ```
-
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### cs-en
- - `translation`: a multilingual `string` variable, with possible languages including `cs`, `en`.
-
- ### Data Splits
-
- |name | train |validation|test|
- |-----|------:|---------:|---:|
- |cs-en|1018291|      2999|3005|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @InProceedings{bojar-EtAl:2017:WMT1,
-   author    = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},
-   title     = {Findings of the 2017 Conference on Machine Translation (WMT17)},
-   booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},
-   month     = {September},
-   year      = {2017},
-   address   = {Copenhagen, Denmark},
-   publisher = {Association for Computational Linguistics},
-   pages     = {169--214},
-   url       = {http://www.aclweb.org/anthology/W17-4717}
- }
- ```
-
- ### Contributions
-
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
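The removed card's "Data Instances" example block was left empty. As a rough illustration only, an example in the `cs-en` config is a nested dict whose `translation` feature maps each language code to one sentence (the sentence text below is a placeholder, not actual WMT17 data):

```python
# Shape of one record in the "cs-en" config: the "translation" feature
# is a dict keyed by language code. Sentences here are placeholders.
example = {
    "translation": {
        "cs": "Ahoj svete",   # Czech side (placeholder text)
        "en": "Hello world",  # English side (placeholder text)
    }
}

# Typical access pattern when iterating over the dataset:
src = example["translation"]["cs"]
tgt = example["translation"]["en"]
print(src, "->", tgt)
```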
cs-en/wmt17-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77bec00914be24ac54cb42fb293fb16809b72c6e35da52f1bcdcc83da89e6296
+ size 453071
cs-en/wmt17-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92f3bf72cb2b1e597ac84ec9dffdf6588d6779efe3fbdd507de56935d65dcacd
+ size 180768549
cs-en/wmt17-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07ede26f1b77cea8f7403c1f059b78e10ab5efc914c84cc72cfd74125a73d419
+ size 468787
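The three `+` blocks above are Git LFS pointer files, not the parquet data itself: each records the pointer-spec version, a sha256 object id, and the size in bytes of the actual file stored in LFS. A minimal stdlib-only sketch of parsing that "key value per line" format (using the validation pointer above as input):

```python
# Parse a Git LFS pointer file into its fields.
# Pointer format: one "key value" pair per line.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:07ede26f1b77cea8f7403c1f059b78e10ab5efc914c84cc72cfd74125a73d419
size 468787
"""

info = parse_lfs_pointer(pointer)
algo, _, digest = info["oid"].partition(":")
print(algo, len(digest), info["size"])  # sha256, 64 hex chars, 468787
```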
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"cs-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["cs", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "cs", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "cs-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 300698431, "num_examples": 1018291, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 707870, "num_examples": 
2999, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 674430, "num_examples": 3005, "dataset_name": "wmt17"}}, "download_checksums": {"https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip": {"num_bytes": 658092427, "checksum": "5b2d8b32c2396da739b4e731871c597fcc6e75729becd74619d0712eecf7770e"}, "https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-commoncrawl.zip": {"num_bytes": 918734483, "checksum": "5ffe980072ea29adfd84568d099bea366d9f72772b988e670794ae851b4e5627"}, "https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip": {"num_bytes": 168699339, "checksum": "a3e922fd19485a25870e628fdecb81b7d621f545e16df21a38fae15127413122"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 1784240523, "post_processing_size": null, "dataset_size": 302080731, "size_in_bytes": 2086321254}, "de-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. 
The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["de", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "de", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "de-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1715537443, "num_examples": 5906184, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 735516, "num_examples": 2999, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 729519, "num_examples": 3004, "dataset_name": "wmt17"}}, "download_checksums": 
{"https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip": {"num_bytes": 658092427, "checksum": "5b2d8b32c2396da739b4e731871c597fcc6e75729becd74619d0712eecf7770e"}, "https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-commoncrawl.zip": {"num_bytes": 918734483, "checksum": "5ffe980072ea29adfd84568d099bea366d9f72772b988e670794ae851b4e5627"}, "https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip": {"num_bytes": 168699339, "checksum": "a3e922fd19485a25870e628fdecb81b7d621f545e16df21a38fae15127413122"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip": {"num_bytes": 161141713, "checksum": "93217093c624d9e16023fee98afb089208cca5937c2c08ee7edc707196d09a28"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 1945382236, "post_processing_size": null, "dataset_size": 1717002478, "size_in_bytes": 3662384714}, "fi-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. 
The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["fi", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "fi", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "fi-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 743856525, "num_examples": 2656542, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 1410515, "num_examples": 6000, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 1388828, "num_examples": 6004, "dataset_name": "wmt17"}}, "download_checksums": 
{"https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-ep-v8.zip": {"num_bytes": 225190342, "checksum": "387e570a6812948e30c64885e64a1d3735a66b7c0bc424fcff1208ef11110149"}, "https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/wiki-titles.zip": {"num_bytes": 9485604, "checksum": "b3134566261b39d830eed345df1be1864039339cfeccf24b1bf86398c9e4a87c"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip": {"num_bytes": 161141713, "checksum": "93217093c624d9e16023fee98afb089208cca5937c2c08ee7edc707196d09a28"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 434531933, "post_processing_size": null, "dataset_size": 746655868, "size_in_bytes": 1181187801}, "lv-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. 
The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["lv", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "lv", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "lv-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 517419100, "num_examples": 3567528, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 544604, "num_examples": 2003, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 530474, "num_examples": 2001, "dataset_name": "wmt17"}}, "download_checksums": 
{"https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/leta.v1.zip": {"num_bytes": 2027044, "checksum": "b30b9a729a41dc1bc6cb6867a1bf8367c5a573fbc321e5de6d545280328f7da8"}, "https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/dcep.lv-en.v1.zip": {"num_bytes": 128577127, "checksum": "a387a8bfbc367d4b6a0db6d1f4ea6499ceba4731f17f41d3dcec28c94925b503"}, "https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/books.lv-en.v1.zip": {"num_bytes": 316099, "checksum": "d1092e19cbc10682859360eb777cc0f9cf32698bcb7181b8b22ca6ca570e7fdf"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 169634544, "post_processing_size": null, "dataset_size": 518494178, "size_in_bytes": 688128722}, "ru-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. 
The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["ru", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "ru", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "ru-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11000075522, "num_examples": 24782720, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 1050677, "num_examples": 2998, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 1040195, "num_examples": 3001, "dataset_name": "wmt17"}}, "download_checksums": 
{"https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-commoncrawl.zip": {"num_bytes": 918734483, "checksum": "5ffe980072ea29adfd84568d099bea366d9f72772b988e670794ae851b4e5627"}, "https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip": {"num_bytes": 168699339, "checksum": "a3e922fd19485a25870e628fdecb81b7d621f545e16df21a38fae15127413122"}, "https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/wiki-titles.zip": {"num_bytes": 9485604, "checksum": "b3134566261b39d830eed345df1be1864039339cfeccf24b1bf86398c9e4a87c"}, "https://huggingface.co/datasets/wmt/uncorpus/resolve/main-zip/UNv1.0.en-ru.zip": {"num_bytes": 2447006960, "checksum": "72c2670fa6aadb36d541cba91cd26b9da291a976bf1a2748177a57baf8261f4c"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 3582640660, "post_processing_size": null, "dataset_size": 11002166394, "size_in_bytes": 14584807054}, "tr-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. 
The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["tr", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "tr", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "tr-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 60416617, "num_examples": 205756, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 732436, "num_examples": 3000, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 752773, "num_examples": 3007, "dataset_name": "wmt17"}}, "download_checksums": 
{"https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-tr.tmx.gz": {"num_bytes": 23548787, "checksum": "23581212dc3267383198a92636219fceb3f23207bfc1d1e78ab60a2cb465eff8"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 62263061, "post_processing_size": null, "dataset_size": 61901826, "size_in_bytes": 124164887}, "zh-en": {"description": "Translate dataset based on the data from statmt.org.\n\nVersions exists for the different years using a combination of multiple data\nsources. The base `wmt_translate` allows you to create your own config to choose\nyour own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\n\n```\nconfig = datasets.wmt.WmtConfig(\n version=\"0.0.1\",\n language_pair=(\"fr\", \"de\"),\n subsets={\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\n },\n)\nbuilder = datasets.builder(\"wmt_translate\", config=config)\n```\n\n", "citation": "\n@InProceedings{bojar-EtAl:2017:WMT1,\n author = {Bojar, Ond\u000b{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},\n title = {Findings of the 2017 Conference on Machine Translation (WMT17)},\n booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},\n month = {September},\n year = {2017},\n address = {Copenhagen, Denmark},\n publisher = {Association for Computational Linguistics},\n pages = {169--214},\n url = {http://www.aclweb.org/anthology/W17-4717}\n}\n", "homepage": "http://www.statmt.org/wmt17/translation-task.html", "license": "", "features": {"translation": {"languages": ["zh", 
"en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "zh", "output": "en"}, "task_templates": null, "builder_name": "wmt17", "config_name": "zh-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5529286149, "num_examples": 25134743, "dataset_name": "wmt17"}, "validation": {"name": "validation", "num_bytes": 589591, "num_examples": 2002, "dataset_name": "wmt17"}, "test": {"name": "test", "num_bytes": 540347, "num_examples": 2001, "dataset_name": "wmt17"}}, "download_checksums": {"https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip": {"num_bytes": 168699339, "checksum": "a3e922fd19485a25870e628fdecb81b7d621f545e16df21a38fae15127413122"}, "https://huggingface.co/datasets/wmt/uncorpus/resolve/main-zip/UNv1.0.en-zh.zip": {"num_bytes": 1385832125, "checksum": "97f5ce0892084cdbb2332b52ffcc0299a649ba0a43712d921575fe2b7edfb4b4"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/casia2015.zip": {"num_bytes": 98159063, "checksum": "c939f1528f96c419e9bbffb9caad869616a969e7704ffac896e245a02aff59a9"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/casict2011.zip": {"num_bytes": 166957775, "checksum": "606adc0ccc5d8fc7c47f8589991286616342a1a379a571ce3038918731ae0182"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/casict2015.zip": {"num_bytes": 106836569, "checksum": "eef8e25b297c1aff12ab24719247d3588e756d7a4e2c30d4d34fcb4d05ab1050"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/datum2015.zip": {"num_bytes": 100118018, "checksum": "654afce6731485c40ce856514ab80cd2bfd836126bcaf48cdb911ebc32b021a4"}, "https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/datum2017.zip": {"num_bytes": 99278067, "checksum": "737455c139596f4abf3b1da73bc38932b3ef9534549328eff47d867e29950ed2"}, 
"https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/neu2017.zip": {"num_bytes": 150311715, "checksum": "5c5ea9ac5cbc43c974bd53796a3a29829800865b6398b52cda0a3854cb0d2e03"}, "https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip": {"num_bytes": 38714274, "checksum": "d796e363740fdc4261aa6f5a3d2f8223e3adaee7d737b7724863325b8956dfd1"}}, "download_size": 2314906945, "post_processing_size": null, "dataset_size": 5530416087, "size_in_bytes": 7845323032}}
 
 
wmt17.py DELETED
@@ -1,83 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """WMT17: Translate dataset."""
-
- import datasets
-
- from .wmt_utils import CWMT_SUBSET_NAMES, Wmt, WmtConfig
-
-
- _URL = "http://www.statmt.org/wmt17/translation-task.html"
- _CITATION = """
- @InProceedings{bojar-EtAl:2017:WMT1,
-   author    = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huang, Shujian and Huck, Matthias and Koehn, Philipp and Liu, Qun and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Rubino, Raphael and Specia, Lucia and Turchi, Marco},
-   title     = {Findings of the 2017 Conference on Machine Translation (WMT17)},
-   booktitle = {Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers},
-   month     = {September},
-   year      = {2017},
-   address   = {Copenhagen, Denmark},
-   publisher = {Association for Computational Linguistics},
-   pages     = {169--214},
-   url       = {http://www.aclweb.org/anthology/W17-4717}
- }
- """
-
- _LANGUAGE_PAIRS = [(lang, "en") for lang in ["cs", "de", "fi", "lv", "ru", "tr", "zh"]]
-
-
- class Wmt17(Wmt):
-     """WMT 17 translation datasets for all {xx, "en"} language pairs."""
-
-     BUILDER_CONFIGS = [
-         WmtConfig(  # pylint:disable=g-complex-comprehension
-             description="WMT 2017 %s-%s translation task dataset." % (l1, l2),
-             url=_URL,
-             citation=_CITATION,
-             language_pair=(l1, l2),
-             version=datasets.Version("1.0.0"),
-         )
-         for l1, l2 in _LANGUAGE_PAIRS
-     ]
-
-     @property
-     def manual_download_instructions(self):
-         if self.config.language_pair[1] in ["cs", "hi", "ru"]:
-             return "Please download the data manually as explained. TODO(PVP)"
-
-     @property
-     def _subsets(self):
-         return {
-             datasets.Split.TRAIN: [
-                 "europarl_v7",
-                 "europarl_v8_16",
-                 "commoncrawl",
-                 "newscommentary_v12",
-                 "czeng_16",
-                 "yandexcorpus",
-                 "wikiheadlines_fi",
-                 "wikiheadlines_ru",
-                 "setimes_2",
-                 "uncorpus_v1",
-                 "rapid_2016",
-                 "leta_v1",
-                 "dcep_v1",
-                 "onlinebooks_v1",
-             ]
-             + CWMT_SUBSET_NAMES,
-             datasets.Split.VALIDATION: ["newsdev2017", "newstest2016", "newstestB2016"],
-             datasets.Split.TEST: ["newstest2017", "newstestB2017"],
-         }
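As a quick sanity check on the deleted script above, the config names that `Wmt17.BUILDER_CONFIGS` exposes follow directly from `_LANGUAGE_PAIRS`. A minimal standalone sketch, assuming the same `"%s-%s"` naming that `wmt_utils.WmtConfig.__init__` applies to each language pair:

```python
# Mirror of _LANGUAGE_PAIRS from the deleted wmt17.py above.
_LANGUAGE_PAIRS = [(lang, "en") for lang in ["cs", "de", "fi", "lv", "ru", "tr", "zh"]]

# WmtConfig names each config "<source>-<target>".
config_names = ["%s-%s" % (l1, l2) for l1, l2 in _LANGUAGE_PAIRS]
print(config_names)
# → ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en']
```

These are exactly the config names ("cs-en", "de-en", …, "zh-en") that appear in the dataset card and in `dataset_infos.json`.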
 
wmt_utils.py DELETED
@@ -1,1025 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """WMT: Translate dataset."""
-
-
- import codecs
- import functools
- import glob
- import gzip
- import itertools
- import os
- import re
- import xml.etree.cElementTree as ElementTree
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _DESCRIPTION = """\
- Translation dataset based on the data from statmt.org.
-
- Versions exist for different years using a combination of data
- sources. The base `wmt` allows you to create a custom dataset by choosing
- your own data/language pair. This can be done as follows:
-
- ```python
- from datasets import inspect_dataset, load_dataset_builder
-
- inspect_dataset("wmt17", "path/to/scripts")
- builder = load_dataset_builder(
-     "path/to/scripts/wmt_utils.py",
-     language_pair=("fr", "de"),
-     subsets={
-         datasets.Split.TRAIN: ["commoncrawl_frde"],
-         datasets.Split.VALIDATION: ["euelections_dev2019"],
-     },
- )
-
- # Standard version
- builder.download_and_prepare()
- ds = builder.as_dataset()
-
- # Streamable version
- ds = builder.as_streaming_dataset()
- ```
-
- """
-
-
- CWMT_SUBSET_NAMES = ["casia2015", "casict2011", "casict2015", "datum2015", "datum2017", "neu2017"]
-
-
- class SubDataset:
-     """Class to keep track of information on a sub-dataset of WMT."""
-
-     def __init__(self, name, target, sources, url, path, manual_dl_files=None):
-         """Sub-dataset of WMT.
-
-         Args:
-           name: `string`, a unique dataset identifier.
-           target: `string`, the target language code.
-           sources: `set<string>`, the set of source language codes.
-           url: `string` or `(string, string)`, URL(s) or URL template(s) specifying
-             where to download the raw data from. If two strings are provided, the
-             first is used for the source language and the second for the target.
-             Template strings can either contain '{src}' placeholders that will be
-             filled in with the source language code, '{0}' and '{1}' placeholders
-             that will be filled in with the source and target language codes in
-             alphabetical order, or all 3.
-           path: `string` or `(string, string)`, path(s) or path template(s)
-             specifying the path to the raw data relative to the root of the
-             downloaded archive. If two strings are provided, the dataset is assumed
-             to be made up of parallel text files, the first being the source and the
-             second the target. If one string is provided, both languages are assumed
-             to be stored within the same file and the extension is used to determine
-             how to parse it. Template strings should be formatted the same as in
-             `url`.
-           manual_dl_files: `<list>(string)` (optional), the list of files that must
-             be manually downloaded to the data directory.
-         """
-         self._paths = (path,) if isinstance(path, str) else path
-         self._urls = (url,) if isinstance(url, str) else url
-         self._manual_dl_files = manual_dl_files if manual_dl_files else []
-         self.name = name
-         self.target = target
-         self.sources = set(sources)
-
-     def _inject_language(self, src, strings):
-         """Injects languages into (potentially) template strings."""
-         if src not in self.sources:
-             raise ValueError(f"Invalid source for '{self.name}': {src}")
-
-         def _format_string(s):
-             if "{0}" in s and "{1}" in s and "{src}" in s:
-                 return s.format(*sorted([src, self.target]), src=src)
-             elif "{0}" in s and "{1}" in s:
-                 return s.format(*sorted([src, self.target]))
-             elif "{src}" in s:
-                 return s.format(src=src)
-             else:
-                 return s
-
-         return [_format_string(s) for s in strings]
-
-     def get_url(self, src):
-         return self._inject_language(src, self._urls)
-
-     def get_manual_dl_files(self, src):
-         return self._inject_language(src, self._manual_dl_files)
-
-     def get_path(self, src):
-         return self._inject_language(src, self._paths)
-
-
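The URL/path template logic in `SubDataset._inject_language` can be exercised on its own. Below is a minimal standalone sketch of the same branching (with the `"{1}" in s` membership test written out explicitly); `format_template` is a hypothetical free-function name, not part of the deleted module:

```python
def format_template(s, src, target):
    """Fill a SubDataset-style template: '{0}'/'{1}' take the language pair in
    alphabetical order, '{src}' takes the source language code."""
    if "{0}" in s and "{1}" in s and "{src}" in s:
        return s.format(*sorted([src, target]), src=src)
    elif "{0}" in s and "{1}" in s:
        return s.format(*sorted([src, target]))
    elif "{src}" in s:
        return s.format(src=src)
    return s

# e.g. the rapid_2016 path template from _TRAIN_SUBSETS below:
print(format_template("rapid2016.{0}-{1}.{src}", src="de", target="en"))
# → rapid2016.de-en.de
```

The alphabetical ordering matters: for de-en the pair sorts to ("de", "en"), so `{0}` is always the alphabetically first code regardless of translation direction.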
- # Subsets used in the training sets for various years of WMT.
- _TRAIN_SUBSETS = [
-     # pylint:disable=line-too-long
-     SubDataset(
-         name="commoncrawl",
-         target="en",  # fr-de pair in commoncrawl_frde
-         sources={"cs", "de", "es", "fr", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-commoncrawl.zip",
-         path=("commoncrawl.{src}-en.{src}", "commoncrawl.{src}-en.en"),
-     ),
-     SubDataset(
-         name="commoncrawl_frde",
-         target="de",
-         sources={"fr"},
-         url=(
-             "https://huggingface.co/datasets/wmt/wmt19/resolve/main/translation-task/fr-de/bitexts/commoncrawl.fr.gz",
-             "https://huggingface.co/datasets/wmt/wmt19/resolve/main/translation-task/fr-de/bitexts/commoncrawl.de.gz",
-         ),
-         path=("", ""),
-     ),
-     SubDataset(
-         name="czeng_10",
-         target="en",
-         sources={"cs"},
-         url="http://ufal.mff.cuni.cz/czeng/czeng10",
-         manual_dl_files=["data-plaintext-format.%d.tar" % i for i in range(10)],
-         # Each tar contains multiple files, which we process specially in
-         # _parse_czeng.
-         path=("data.plaintext-format/??train.gz",) * 10,
-     ),
-     SubDataset(
-         name="czeng_16pre",
-         target="en",
-         sources={"cs"},
-         url="http://ufal.mff.cuni.cz/czeng/czeng16pre",
-         manual_dl_files=["czeng16pre.deduped-ignoring-sections.txt.gz"],
-         path="",
-     ),
-     SubDataset(
-         name="czeng_16",
-         target="en",
-         sources={"cs"},
-         url="http://ufal.mff.cuni.cz/czeng",
-         manual_dl_files=["data-plaintext-format.%d.tar" % i for i in range(10)],
-         # Each tar contains multiple files, which we process specially in
-         # _parse_czeng.
-         path=("data.plaintext-format/??train.gz",) * 10,
-     ),
-     SubDataset(
-         # This dataset differs from the above in the filtering that is applied
-         # during parsing.
-         name="czeng_17",
-         target="en",
-         sources={"cs"},
-         url="http://ufal.mff.cuni.cz/czeng",
-         manual_dl_files=["data-plaintext-format.%d.tar" % i for i in range(10)],
-         # Each tar contains multiple files, which we process specially in
-         # _parse_czeng.
-         path=("data.plaintext-format/??train.gz",) * 10,
-     ),
-     SubDataset(
-         name="dcep_v1",
-         target="en",
-         sources={"lv"},
-         url="https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/dcep.lv-en.v1.zip",
-         path=("dcep.en-lv/dcep.lv", "dcep.en-lv/dcep.en"),
-     ),
-     SubDataset(
-         name="europarl_v7",
-         target="en",
-         sources={"cs", "de", "es", "fr"},
-         url="https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip",
-         path=("training/europarl-v7.{src}-en.{src}", "training/europarl-v7.{src}-en.en"),
-     ),
-     SubDataset(
-         name="europarl_v7_frde",
-         target="de",
-         sources={"fr"},
-         url=(
-             "https://huggingface.co/datasets/wmt/wmt19/resolve/main/translation-task/fr-de/bitexts/europarl-v7.fr.gz",
-             "https://huggingface.co/datasets/wmt/wmt19/resolve/main/translation-task/fr-de/bitexts/europarl-v7.de.gz",
-         ),
-         path=("", ""),
-     ),
-     SubDataset(
-         name="europarl_v8_18",
-         target="en",
-         sources={"et", "fi"},
-         url="https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-ep-v8.zip",
-         path=("training/europarl-v8.{src}-en.{src}", "training/europarl-v8.{src}-en.en"),
-     ),
-     SubDataset(
-         name="europarl_v8_16",
-         target="en",
-         sources={"fi", "ro"},
-         url="https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-ep-v8.zip",
-         path=("training-parallel-ep-v8/europarl-v8.{src}-en.{src}", "training-parallel-ep-v8/europarl-v8.{src}-en.en"),
-     ),
-     SubDataset(
-         name="europarl_v9",
-         target="en",
-         sources={"cs", "de", "fi", "lt"},
-         url="https://huggingface.co/datasets/wmt/europarl/resolve/main/v9/training/europarl-v9.{src}-en.tsv.gz",
-         path="",
-     ),
-     SubDataset(
-         name="gigafren",
-         target="en",
-         sources={"fr"},
-         url="https://huggingface.co/datasets/wmt/wmt10/resolve/main-zip/training-giga-fren.zip",
-         path=("giga-fren.release2.fixed.fr.gz", "giga-fren.release2.fixed.en.gz"),
-     ),
-     SubDataset(
-         name="hindencorp_01",
-         target="en",
-         sources={"hi"},
-         url="http://ufallab.ms.mff.cuni.cz/~bojar/hindencorp",
-         manual_dl_files=["hindencorp0.1.gz"],
-         path="",
-     ),
-     SubDataset(
-         name="leta_v1",
-         target="en",
-         sources={"lv"},
-         url="https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/leta.v1.zip",
-         path=("LETA-lv-en/leta.lv", "LETA-lv-en/leta.en"),
-     ),
-     SubDataset(
-         name="multiun",
-         target="en",
-         sources={"es", "fr"},
-         url="https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-un.zip",
-         path=("un/undoc.2000.{src}-en.{src}", "un/undoc.2000.{src}-en.en"),
-     ),
-     SubDataset(
-         name="newscommentary_v9",
-         target="en",
-         sources={"cs", "de", "fr", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip",
-         path=("training/news-commentary-v9.{src}-en.{src}", "training/news-commentary-v9.{src}-en.en"),
-     ),
-     SubDataset(
-         name="newscommentary_v10",
-         target="en",
-         sources={"cs", "de", "fr", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip",
-         path=("news-commentary-v10.{src}-en.{src}", "news-commentary-v10.{src}-en.en"),
-     ),
-     SubDataset(
-         name="newscommentary_v11",
-         target="en",
-         sources={"cs", "de", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip",
-         path=(
-             "training-parallel-nc-v11/news-commentary-v11.{src}-en.{src}",
-             "training-parallel-nc-v11/news-commentary-v11.{src}-en.en",
-         ),
-     ),
-     SubDataset(
-         name="newscommentary_v12",
-         target="en",
-         sources={"cs", "de", "ru", "zh"},
-         url="https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip",
-         path=("training/news-commentary-v12.{src}-en.{src}", "training/news-commentary-v12.{src}-en.en"),
-     ),
-     SubDataset(
-         name="newscommentary_v13",
-         target="en",
-         sources={"cs", "de", "ru", "zh"},
-         url="https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip",
-         path=(
-             "training-parallel-nc-v13/news-commentary-v13.{src}-en.{src}",
-             "training-parallel-nc-v13/news-commentary-v13.{src}-en.en",
-         ),
-     ),
-     SubDataset(
-         name="newscommentary_v14",
-         target="en",  # fr-de pair in newscommentary_v14_frde
-         sources={"cs", "de", "kk", "ru", "zh"},
-         url="http://data.statmt.org/news-commentary/v14/training/news-commentary-v14.{0}-{1}.tsv.gz",
-         path="",
-     ),
-     SubDataset(
-         name="newscommentary_v14_frde",
-         target="de",
-         sources={"fr"},
-         url="http://data.statmt.org/news-commentary/v14/training/news-commentary-v14.de-fr.tsv.gz",
-         path="",
-     ),
-     SubDataset(
-         name="onlinebooks_v1",
-         target="en",
-         sources={"lv"},
-         url="https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/books.lv-en.v1.zip",
-         path=("farewell/farewell.lv", "farewell/farewell.en"),
-     ),
-     SubDataset(
-         name="paracrawl_v1",
-         target="en",
-         sources={"cs", "de", "et", "fi", "ru"},
-         url="https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-{src}.zipporah0-dedup-clean.tgz",  # TODO(QL): use gzip for streaming
-         path=(
-             "paracrawl-release1.en-{src}.zipporah0-dedup-clean.{src}",
-             "paracrawl-release1.en-{src}.zipporah0-dedup-clean.en",
-         ),
-     ),
-     SubDataset(
-         name="paracrawl_v1_ru",
-         target="en",
-         sources={"ru"},
-         url="https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz",  # TODO(QL): use gzip for streaming
-         path=(
-             "paracrawl-release1.en-ru.zipporah0-dedup-clean.ru",
-             "paracrawl-release1.en-ru.zipporah0-dedup-clean.en",
-         ),
-     ),
-     SubDataset(
-         name="paracrawl_v3",
-         target="en",  # fr-de pair in paracrawl_v3_frde
-         sources={"cs", "de", "fi", "lt"},
-         url="https://s3.amazonaws.com/web-language-models/paracrawl/release3/en-{src}.bicleaner07.tmx.gz",
-         path="",
-     ),
-     SubDataset(
-         name="paracrawl_v3_frde",
-         target="de",
-         sources={"fr"},
-         url=(
-             "https://huggingface.co/datasets/wmt/wmt19/resolve/main/translation-task/fr-de/bitexts/de-fr.bicleaner07.de.gz",
-             "https://huggingface.co/datasets/wmt/wmt19/resolve/main/translation-task/fr-de/bitexts/de-fr.bicleaner07.fr.gz",
-         ),
-         path=("", ""),
-     ),
-     SubDataset(
-         name="rapid_2016",
-         target="en",
-         sources={"de", "et", "fi"},
-         url="https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip",
-         path=("rapid2016.{0}-{1}.{src}", "rapid2016.{0}-{1}.en"),
-     ),
-     SubDataset(
-         name="rapid_2016_ltfi",
-         target="en",
-         sources={"fi", "lt"},
-         url="https://tilde-model.s3-eu-west-1.amazonaws.com/rapid2016.en-{src}.tmx.zip",
-         path="rapid2016.en-{src}.tmx",
-     ),
-     SubDataset(
-         name="rapid_2019",
-         target="en",
-         sources={"de"},
-         url="https://s3-eu-west-1.amazonaws.com/tilde-model/rapid2019.de-en.zip",
-         path=("rapid2019.de-en.de", "rapid2019.de-en.en"),
-     ),
-     SubDataset(
-         name="setimes_2",
-         target="en",
-         sources={"ro", "tr"},
-         url="https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-{src}.tmx.gz",
-         path="",
-     ),
-     SubDataset(
-         name="uncorpus_v1",
-         target="en",
-         sources={"ru", "zh"},
-         url="https://huggingface.co/datasets/wmt/uncorpus/resolve/main-zip/UNv1.0.en-{src}.zip",
-         path=("en-{src}/UNv1.0.en-{src}.{src}", "en-{src}/UNv1.0.en-{src}.en"),
-     ),
-     SubDataset(
-         name="wikiheadlines_fi",
-         target="en",
-         sources={"fi"},
-         url="https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/wiki-titles.zip",
-         path="wiki/fi-en/titles.fi-en",
-     ),
-     SubDataset(
-         name="wikiheadlines_hi",
-         target="en",
-         sources={"hi"},
-         url="https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/wiki-titles.zip",
-         path="wiki/hi-en/wiki-titles.hi-en",
-     ),
-     SubDataset(
-         # Verified that wmt14 and wmt15 files are identical.
-         name="wikiheadlines_ru",
-         target="en",
-         sources={"ru"},
-         url="https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/wiki-titles.zip",
-         path="wiki/ru-en/wiki.ru-en",
-     ),
-     SubDataset(
-         name="wikititles_v1",
-         target="en",
-         sources={"cs", "de", "fi", "gu", "kk", "lt", "ru", "zh"},
-         url="https://huggingface.co/datasets/wmt/wikititles/resolve/main/v1/wikititles-v1.{src}-en.tsv.gz",
-         path="",
-     ),
-     SubDataset(
-         name="yandexcorpus",
-         target="en",
-         sources={"ru"},
-         url="https://translate.yandex.ru/corpus?lang=en",
-         manual_dl_files=["1mcorpus.zip"],
-         path=("corpus.en_ru.1m.ru", "corpus.en_ru.1m.en"),
-     ),
-     # pylint:enable=line-too-long
- ] + [
-     SubDataset(  # pylint:disable=g-complex-comprehension
-         name=ss,
-         target="en",
-         sources={"zh"},
-         url="https://huggingface.co/datasets/wmt/wmt18/resolve/main/cwmt-wmt/%s.zip" % ss,
-         path=("%s/*_c[hn].txt" % ss, "%s/*_en.txt" % ss),
-     )
-     for ss in CWMT_SUBSET_NAMES
- ]
-
- _DEV_SUBSETS = [
-     SubDataset(
-         name="euelections_dev2019",
-         target="de",
-         sources={"fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/euelections_dev2019.fr-de.src.fr", "dev/euelections_dev2019.fr-de.tgt.de"),
-     ),
-     SubDataset(
-         name="newsdev2014",
-         target="en",
-         sources={"hi"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdev2014.hi", "dev/newsdev2014.en"),
-     ),
-     SubDataset(
-         name="newsdev2015",
-         target="en",
-         sources={"fi"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdev2015-fien-src.{src}.sgm", "dev/newsdev2015-fien-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdiscussdev2015",
-         target="en",
-         sources={"ro", "tr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdiscussdev2015-{src}en-src.{src}.sgm", "dev/newsdiscussdev2015-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdev2016",
-         target="en",
-         sources={"ro", "tr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdev2016-{src}en-src.{src}.sgm", "dev/newsdev2016-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdev2017",
-         target="en",
-         sources={"lv", "zh"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdev2017-{src}en-src.{src}.sgm", "dev/newsdev2017-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdev2018",
-         target="en",
-         sources={"et"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdev2018-{src}en-src.{src}.sgm", "dev/newsdev2018-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdev2019",
-         target="en",
-         sources={"gu", "kk", "lt"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdev2019-{src}en-src.{src}.sgm", "dev/newsdev2019-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdiscussdev2015",
-         target="en",
-         sources={"fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdiscussdev2015-{src}en-src.{src}.sgm", "dev/newsdiscussdev2015-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdiscusstest2015",
-         target="en",
-         sources={"fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdiscusstest2015-{src}en-src.{src}.sgm", "dev/newsdiscusstest2015-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newssyscomb2009",
-         target="en",
-         sources={"cs", "de", "es", "fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newssyscomb2009.{src}", "dev/newssyscomb2009.en"),
-     ),
-     SubDataset(
-         name="newstest2008",
-         target="en",
-         sources={"cs", "de", "es", "fr", "hu"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/news-test2008.{src}", "dev/news-test2008.en"),
-     ),
-     SubDataset(
-         name="newstest2009",
-         target="en",
-         sources={"cs", "de", "es", "fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2009.{src}", "dev/newstest2009.en"),
-     ),
-     SubDataset(
-         name="newstest2010",
-         target="en",
-         sources={"cs", "de", "es", "fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2010.{src}", "dev/newstest2010.en"),
-     ),
-     SubDataset(
-         name="newstest2011",
-         target="en",
-         sources={"cs", "de", "es", "fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2011.{src}", "dev/newstest2011.en"),
-     ),
-     SubDataset(
-         name="newstest2012",
-         target="en",
-         sources={"cs", "de", "es", "fr", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2012.{src}", "dev/newstest2012.en"),
-     ),
-     SubDataset(
-         name="newstest2013",
-         target="en",
-         sources={"cs", "de", "es", "fr", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2013.{src}", "dev/newstest2013.en"),
-     ),
-     SubDataset(
-         name="newstest2014",
-         target="en",
-         sources={"cs", "de", "es", "fr", "hi", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2014-{src}en-src.{src}.sgm", "dev/newstest2014-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newstest2015",
-         target="en",
-         sources={"cs", "de", "fi", "ru"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2015-{src}en-src.{src}.sgm", "dev/newstest2015-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newsdiscusstest2015",
-         target="en",
-         sources={"fr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newsdiscusstest2015-{src}en-src.{src}.sgm", "dev/newsdiscusstest2015-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newstest2016",
-         target="en",
-         sources={"cs", "de", "fi", "ro", "ru", "tr"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2016-{src}en-src.{src}.sgm", "dev/newstest2016-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newstestB2016",
-         target="en",
-         sources={"fi"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstestB2016-enfi-ref.{src}.sgm", "dev/newstestB2016-enfi-src.en.sgm"),
-     ),
-     SubDataset(
-         name="newstest2017",
-         target="en",
-         sources={"cs", "de", "fi", "lv", "ru", "tr", "zh"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2017-{src}en-src.{src}.sgm", "dev/newstest2017-{src}en-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newstestB2017",
-         target="en",
-         sources={"fi"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstestB2017-fien-src.fi.sgm", "dev/newstestB2017-fien-ref.en.sgm"),
-     ),
-     SubDataset(
-         name="newstest2018",
-         target="en",
-         sources={"cs", "de", "et", "fi", "ru", "tr", "zh"},
-         url="https://huggingface.co/datasets/wmt/wmt19/resolve/main-zip/translation-task/dev.zip",
-         path=("dev/newstest2018-{src}en-src.{src}.sgm", "dev/newstest2018-{src}en-ref.en.sgm"),
-     ),
- ]
-
- DATASET_MAP = {dataset.name: dataset for dataset in _TRAIN_SUBSETS + _DEV_SUBSETS}
-
- _CZENG17_FILTER = SubDataset(
-     name="czeng17_filter",
-     target="en",
-     sources={"cs"},
-     url="http://ufal.mff.cuni.cz/czeng/download.php?f=convert_czeng16_to_17.pl.zip",
-     path="convert_czeng16_to_17.pl",
- )
-
- class WmtConfig(datasets.BuilderConfig):
-     """BuilderConfig for WMT."""
-
-     def __init__(self, url=None, citation=None, description=None, language_pair=(None, None), subsets=None, **kwargs):
-         """BuilderConfig for WMT.
-
-         Args:
-           url: The reference URL for the dataset.
-           citation: The paper citation for the dataset.
-           description: The description of the dataset.
-           language_pair: pair of languages that will be used for translation. Should
-             contain 2-letter coded strings. For example: ("en", "de").
-           subsets: Dict[split, list[str]]. List of the subsets to use for each of
-             the splits. Note that WMT subclasses overwrite this parameter.
-           **kwargs: keyword arguments forwarded to super.
-         """
-         name = "%s-%s" % (language_pair[0], language_pair[1])
-         if "name" in kwargs:  # Add name suffix for custom configs
-             name += "." + kwargs.pop("name")
-
-         super(WmtConfig, self).__init__(name=name, description=description, **kwargs)
-
-         self.url = url or "http://www.statmt.org"
-         self.citation = citation
-         self.language_pair = language_pair
-         self.subsets = subsets
-
-         # TODO(PVP): remove when manual dir works
-         # +++++++++++++++++++++
-         if language_pair[1] in ["cs", "hi", "ru"]:
-             assert NotImplementedError(f"The dataset for {language_pair[1]}-en is currently not fully supported.")
-         # +++++++++++++++++++++
-
-
673
class Wmt(datasets.GeneratorBasedBuilder):
    """WMT translation dataset."""

    BUILDER_CONFIG_CLASS = WmtConfig

    def __init__(self, *args, **kwargs):
        super(Wmt, self).__init__(*args, **kwargs)

    @property
    def _subsets(self):
        """Subsets that make up each split of the dataset."""
        raise NotImplementedError("This is an abstract method")

    @property
    def subsets(self):
        """Subsets that make up each split of the dataset for the language pair."""
        source, target = self.config.language_pair
        filtered_subsets = {}
        subsets = self._subsets if self.config.subsets is None else self.config.subsets
        for split, ss_names in subsets.items():
            filtered_subsets[split] = []
            for ss_name in ss_names:
                dataset = DATASET_MAP[ss_name]
                if dataset.target != target or source not in dataset.sources:
                    logger.info("Skipping sub-dataset that does not include language pair: %s", ss_name)
                else:
                    filtered_subsets[split].append(ss_name)
        logger.info("Using sub-datasets: %s", filtered_subsets)
        return filtered_subsets

    def _info(self):
        src, target = self.config.language_pair
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {"translation": datasets.features.Translation(languages=self.config.language_pair)}
            ),
            supervised_keys=(src, target),
            homepage=self.config.url,
            citation=self.config.citation,
        )

    def _vocab_text_gen(self, split_subsets, extraction_map, language):
        for _, ex in self._generate_examples(split_subsets, extraction_map, with_translation=False):
            yield ex[language]

    def _split_generators(self, dl_manager):
        source, _ = self.config.language_pair
        manual_paths_dict = {}
        urls_to_download = {}
        for ss_name in itertools.chain.from_iterable(self.subsets.values()):
            if ss_name == "czeng_17":
                # CzEng1.7 is CzEng1.6 with some blocks filtered out. We must download
                # the filtering script so we can parse out which blocks need to be
                # removed.
                urls_to_download[_CZENG17_FILTER.name] = _CZENG17_FILTER.get_url(source)

            # get dataset
            dataset = DATASET_MAP[ss_name]
            if dataset.get_manual_dl_files(source):
                # TODO(PVP): following two lines skip configs that are incomplete for now
                # +++++++++++++++++++++
                logger.info(f"Skipping {dataset.name} for now. Incomplete dataset for {self.config.name}")
                continue
                # +++++++++++++++++++++

                manual_dl_files = dataset.get_manual_dl_files(source)
                manual_paths = [
                    os.path.join(os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), fname)
                    for fname in manual_dl_files
                ]
                assert all(
                    os.path.exists(path) for path in manual_paths
                ), f"For {dataset.name}, you must manually download the following file(s) from {dataset.get_url(source)} and place them in {dl_manager.manual_dir}: {', '.join(manual_dl_files)}"

                # set manual path for correct subset
                manual_paths_dict[ss_name] = manual_paths
            else:
                urls_to_download[ss_name] = dataset.get_url(source)

        # Download and extract files from URLs.
        downloaded_files = dl_manager.download_and_extract(urls_to_download)
        # Extract manually downloaded files.
        manual_files = dl_manager.extract(manual_paths_dict)
        extraction_map = dict(downloaded_files, **manual_files)

        for language in self.config.language_pair:
            self._vocab_text_gen(self.subsets[datasets.Split.TRAIN], extraction_map, language)

        return [
            datasets.SplitGenerator(  # pylint:disable=g-complex-comprehension
                name=split, gen_kwargs={"split_subsets": split_subsets, "extraction_map": extraction_map}
            )
            for split, split_subsets in self.subsets.items()
        ]

    def _generate_examples(self, split_subsets, extraction_map, with_translation=True):
        """Returns the examples in the raw (text) form."""
        source, _ = self.config.language_pair

        def _get_local_paths(dataset, extract_dirs):
            rel_paths = dataset.get_path(source)
            if len(extract_dirs) == 1:
                extract_dirs = extract_dirs * len(rel_paths)
            return [
                os.path.join(ex_dir, rel_path) if rel_path else ex_dir
                for ex_dir, rel_path in zip(extract_dirs, rel_paths)
            ]

        def _get_filenames(dataset):
            rel_paths = dataset.get_path(source)
            urls = dataset.get_url(source)
            if len(urls) == 1:
                urls = urls * len(rel_paths)
            return [rel_path if rel_path else os.path.basename(url) for url, rel_path in zip(urls, rel_paths)]

        for ss_name in split_subsets:
            # TODO(PVP) remove following five lines when manual data works
            # +++++++++++++++++++++
            dataset = DATASET_MAP[ss_name]
            source, _ = self.config.language_pair
            if dataset.get_manual_dl_files(source):
                logger.info(f"Skipping {dataset.name} for now. Incomplete dataset for {self.config.name}")
                continue
            # +++++++++++++++++++++

            logger.info("Generating examples from: %s", ss_name)
            dataset = DATASET_MAP[ss_name]
            extract_dirs = extraction_map[ss_name]
            files = _get_local_paths(dataset, extract_dirs)
            filenames = _get_filenames(dataset)

            sub_generator_args = tuple(files)

            if ss_name.startswith("czeng"):
                if ss_name.endswith("16pre"):
                    sub_generator = functools.partial(_parse_tsv, language_pair=("en", "cs"))
                    sub_generator_args += tuple(filenames)
                elif ss_name.endswith("17"):
                    filter_path = _get_local_paths(_CZENG17_FILTER, extraction_map[_CZENG17_FILTER.name])[0]
                    sub_generator = functools.partial(_parse_czeng, filter_path=filter_path)
                else:
                    sub_generator = _parse_czeng
            elif ss_name == "hindencorp_01":
                sub_generator = _parse_hindencorp
            elif len(files) == 2:
                if ss_name.endswith("_frde"):
                    sub_generator = _parse_frde_bitext
                else:
                    sub_generator = _parse_parallel_sentences
                    sub_generator_args += tuple(filenames)
            elif len(files) == 1:
                fname = filenames[0]
                # Note: Due to formatting used by `download_manager`, the file
                # extension may not be at the end of the file path.
                if ".tsv" in fname:
                    sub_generator = _parse_tsv
                    sub_generator_args += tuple(filenames)
                elif (
                    ss_name.startswith("newscommentary_v14")
                    or ss_name.startswith("europarl_v9")
                    or ss_name.startswith("wikititles_v1")
                ):
                    sub_generator = functools.partial(_parse_tsv, language_pair=self.config.language_pair)
                    sub_generator_args += tuple(filenames)
                elif "tmx" in fname or ss_name.startswith("paracrawl_v3"):
                    sub_generator = _parse_tmx
                elif ss_name.startswith("wikiheadlines"):
                    sub_generator = _parse_wikiheadlines
                else:
                    raise ValueError("Unsupported file format: %s" % fname)
            else:
                raise ValueError("Invalid number of files: %d" % len(files))

            for sub_key, ex in sub_generator(*sub_generator_args):
                if not all(ex.values()):
                    continue
                # TODO(adarob): Add subset feature.
                # ex["subset"] = subset
                key = f"{ss_name}/{sub_key}"
                if with_translation is True:
                    ex = {"translation": ex}
                yield key, ex

def _parse_parallel_sentences(f1, f2, filename1, filename2):
    """Returns examples from parallel SGML or text files, which may be gzipped."""

    def _parse_text(path, original_filename):
        """Returns the sentences from a single text file, which may be gzipped."""
        split_path = original_filename.split(".")

        if split_path[-1] == "gz":
            lang = split_path[-2]

            def gen():
                with open(path, "rb") as f, gzip.GzipFile(fileobj=f) as g:
                    for line in g:
                        yield line.decode("utf-8").rstrip()

            return gen(), lang

        if split_path[-1] == "txt":
            # CWMT
            lang = split_path[-2].split("_")[-1]
            lang = "zh" if lang in ("ch", "cn", "c[hn]") else lang
        else:
            lang = split_path[-1]

        def gen():
            with open(path, "rb") as f:
                for line in f:
                    yield line.decode("utf-8").rstrip()

        return gen(), lang

    def _parse_sgm(path, original_filename):
        """Returns sentences from a single SGML file."""
        lang = original_filename.split(".")[-2]
        # Note: We can't use the XML parser since some of the files are badly
        # formatted.
        seg_re = re.compile(r"<seg id=\"\d+\">(.*)</seg>")

        def gen():
            with open(path, encoding="utf-8") as f:
                for line in f:
                    seg_match = re.match(seg_re, line)
                    if seg_match:
                        assert len(seg_match.groups()) == 1
                        yield seg_match.groups()[0]

        return gen(), lang

    parse_file = _parse_sgm if os.path.basename(f1).endswith(".sgm") else _parse_text

    # Some datasets (e.g., CWMT) contain multiple parallel files specified with
    # a wildcard. We sort both sets to align them and parse them one by one.
    f1_files = sorted(glob.glob(f1))
    f2_files = sorted(glob.glob(f2))

    assert f1_files and f2_files, "No matching files found: %s, %s." % (f1, f2)
    assert len(f1_files) == len(f2_files), "Number of files do not match: %d vs %d for %s vs %s." % (
        len(f1_files),
        len(f2_files),
        f1,
        f2,
    )

    for f_id, (f1_i, f2_i) in enumerate(zip(sorted(f1_files), sorted(f2_files))):
        l1_sentences, l1 = parse_file(f1_i, filename1)
        l2_sentences, l2 = parse_file(f2_i, filename2)

        for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):
            key = f"{f_id}/{line_id}"
            yield key, {l1: s1, l2: s2}

def _parse_frde_bitext(fr_path, de_path):
    with open(fr_path, encoding="utf-8") as fr_f:
        with open(de_path, encoding="utf-8") as de_f:
            for line_id, (s1, s2) in enumerate(zip(fr_f, de_f)):
                yield line_id, {"fr": s1.rstrip(), "de": s2.rstrip()}

def _parse_tmx(path):
    """Generates examples from TMX file."""

    def _get_tuv_lang(tuv):
        for k, v in tuv.items():
            if k.endswith("}lang"):
                return v
        raise AssertionError("Language not found in `tuv` attributes.")

    def _get_tuv_seg(tuv):
        segs = tuv.findall("seg")
        assert len(segs) == 1, "Invalid number of segments: %d" % len(segs)
        return segs[0].text

    with open(path, "rb") as f:
        # Workaround due to: https://github.com/tensorflow/tensorflow/issues/33563
        utf_f = codecs.getreader("utf-8")(f)
        for line_id, (_, elem) in enumerate(ElementTree.iterparse(utf_f)):
            if elem.tag == "tu":
                yield line_id, {_get_tuv_lang(tuv): _get_tuv_seg(tuv) for tuv in elem.iterfind("tuv")}
                elem.clear()

def _parse_tsv(path, filename, language_pair=None):
    """Generates examples from TSV file."""
    if language_pair is None:
        lang_match = re.match(r".*\.([a-z][a-z])-([a-z][a-z])\.tsv", filename)
        assert lang_match is not None, "Invalid TSV filename: %s" % filename
        l1, l2 = lang_match.groups()
    else:
        l1, l2 = language_pair
    with open(path, encoding="utf-8") as f:
        for j, line in enumerate(f):
            cols = line.split("\t")
            if len(cols) != 2:
                logger.warning("Skipping line %d in TSV (%s) with %d != 2 columns.", j, path, len(cols))
                continue
            s1, s2 = cols
            yield j, {l1: s1.strip(), l2: s2.strip()}

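The filename-based language detection in `_parse_tsv` can be sketched in isolation. The filename below is only illustrative of the `*.<l1>-<l2>.tsv` naming convention:

```python
import re

# Same filename pattern as in _parse_tsv: infer the language pair
# from names like "news-commentary-v14.de-en.tsv".
lang_re = re.compile(r".*\.([a-z][a-z])-([a-z][a-z])\.tsv")

match = lang_re.match("news-commentary-v14.de-en.tsv")
print(match.groups())  # ('de', 'en')
```

This is why `_parse_tsv` receives both `path` and `filename`: the download manager may mangle the on-disk path, so the language codes are recovered from the original filename instead.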
def _parse_wikiheadlines(path):
    """Generates examples from Wikiheadlines dataset file."""
    lang_match = re.match(r".*\.([a-z][a-z])-([a-z][a-z])$", path)
    assert lang_match is not None, "Invalid Wikiheadlines filename: %s" % path
    l1, l2 = lang_match.groups()
    with open(path, encoding="utf-8") as f:
        for line_id, line in enumerate(f):
            s1, s2 = line.split("|||")
            yield line_id, {l1: s1.strip(), l2: s2.strip()}

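The Wikiheadlines format puts both sides of a pair on one line, separated by `|||`. A standalone sketch with a made-up example line:

```python
# Hypothetical line from a Wikiheadlines "*.ru-en" file.
line = "Заголовок ||| Headline\n"
s1, s2 = line.split("|||")
ex = {"ru": s1.strip(), "en": s2.strip()}
print(ex)  # {'ru': 'Заголовок', 'en': 'Headline'}
```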
def _parse_czeng(*paths, **kwargs):
    """Generates examples from CzEng v1.6, with optional filtering for v1.7."""
    filter_path = kwargs.get("filter_path", None)
    if filter_path:
        re_block = re.compile(r"^[^-]+-b(\d+)-\d\d[tde]")
        with open(filter_path, encoding="utf-8") as f:
            bad_blocks = {blk for blk in re.search(r"qw{([\s\d]*)}", f.read()).groups()[0].split()}
        logger.info("Loaded %d bad blocks to filter from CzEng v1.6 to make v1.7.", len(bad_blocks))

    for path in paths:
        for gz_path in sorted(glob.glob(path)):
            with open(gz_path, "rb") as g, gzip.GzipFile(fileobj=g) as f:
                filename = os.path.basename(gz_path)
                for line_id, line in enumerate(f):
                    line = line.decode("utf-8")  # required for py3
                    if not line.strip():
                        continue
                    id_, unused_score, cs, en = line.split("\t")
                    if filter_path:
                        block_match = re.match(re_block, id_)
                        if block_match and block_match.groups()[0] in bad_blocks:
                            continue
                    sub_key = f"{filename}/{line_id}"
                    yield sub_key, {
                        "cs": cs.strip(),
                        "en": en.strip(),
                    }

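The block-id regex used by `_parse_czeng` for the v1.7 filtering can be checked in isolation. The sentence id below is made up to match the `<section>-b<block>-<nn><t|d|e>` shape the pattern expects:

```python
import re

# Same pattern as in _parse_czeng: extract the block number from a sentence id.
re_block = re.compile(r"^[^-]+-b(\d+)-\d\d[tde]")

m = re_block.match("subtitles-b1234-00t")
print(m.group(1))  # '1234'
```

Ids whose block number appears in the `qw{...}` list of the downloaded filter script are dropped, turning CzEng v1.6 into v1.7.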
def _parse_hindencorp(path):
    with open(path, encoding="utf-8") as f:
        for line_id, line in enumerate(f):
            split_line = line.split("\t")
            if len(split_line) != 5:
                logger.warning("Skipping invalid HindEnCorp line: %s", line)
                continue
            yield line_id, {"translation": {"en": split_line[3].strip(), "hi": split_line[4].strip()}}
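A standalone sketch of the five-column HindEnCorp format consumed above, with a hypothetical line; the English and Hindi sentences sit in the fourth and fifth tab-separated fields:

```python
# Hypothetical HindEnCorp line: source, alignment, score, English, Hindi.
line = "source\talignment\t1\tHello\tनमस्ते\n"
cols = line.split("\t")
assert len(cols) == 5  # lines with any other column count are skipped
ex = {"en": cols[3].strip(), "hi": cols[4].strip()}
print(ex)  # {'en': 'Hello', 'hi': 'नमस्ते'}
```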