Modalities: Text
Formats: parquet
Libraries: Datasets, pandas

parquet-converter committed
Commit 71f7463 · Parent: 6f57725

Update parquet files

README.md DELETED
@@ -1,325 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - ar
- - bg
- - cs
- - de
- - el
- - en
- - es
- - fa
- - fr
- - he
- - hu
- - it
- - nl
- - pl
- - pt
- - ro
- - ru
- - sl
- - tr
- - vi
- license:
- - unknown
- multilinguality:
- - multilingual
- size_categories:
- - 100K<n<1M
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - translation
- task_ids: []
- paperswithcode_id: null
- pretty_name: OpusWikipedia
- configs:
- - ar-en
- - ar-pl
- - en-ru
- - en-sl
- - en-vi
- dataset_info:
- - config_name: ar-en
-   features:
-   - name: id
-     dtype: string
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - ar
-         - en
-   splits:
-   - name: train
-     num_bytes: 45207715
-     num_examples: 151136
-   download_size: 16097997
-   dataset_size: 45207715
- - config_name: ar-pl
-   features:
-   - name: id
-     dtype: string
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - ar
-         - pl
-   splits:
-   - name: train
-     num_bytes: 304851676
-     num_examples: 823715
-   download_size: 104585718
-   dataset_size: 304851676
- - config_name: en-sl
-   features:
-   - name: id
-     dtype: string
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - sl
-   splits:
-   - name: train
-     num_bytes: 30479739
-     num_examples: 140124
-   download_size: 11727538
-   dataset_size: 30479739
- - config_name: en-ru
-   features:
-   - name: id
-     dtype: string
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - ru
-   splits:
-   - name: train
-     num_bytes: 167649057
-     num_examples: 572717
-   download_size: 57356138
-   dataset_size: 167649057
- - config_name: en-vi
-   features:
-   - name: id
-     dtype: string
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - vi
-   splits:
-   - name: train
-     num_bytes: 7571598
-     num_examples: 58116
-   download_size: 2422413
-   dataset_size: 7571598
- ---
-
- # Dataset Card for OpusWikipedia
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** http://opus.nlpl.eu/Wikipedia.php
- - **Repository:** None
- - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- - **Leaderboard:** [More Information Needed]
- - **Point of Contact:** [More Information Needed]
-
- ### Dataset Summary
-
- This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek.
-
- The dataset contains 20 languages and 36 bitexts.
-
- To load a language pair that isn't one of the predefined configs, specify the language codes as a pair,
- e.g.
-
- ```python
- dataset = load_dataset("opus_wikipedia", lang1="it", lang2="pl")
- ```
-
- You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Wikipedia.php
-
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- The languages in the dataset are:
- - ar
- - bg
- - cs
- - de
- - el
- - en
- - es
- - fa
- - fr
- - he
- - hu
- - it
- - nl
- - pl
- - pt
- - ro
- - ru
- - sl
- - tr
- - vi
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-   'id': '0',
-   'translation': {
-     "ar": "* Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics.",
-     "en": "*Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics."
-   }
- }
- ```
-
- ### Data Fields
-
- - `id` (`str`): Unique identifier of the parallel sentence for the pair of languages.
- - `translation` (`dict`): Parallel sentences for the pair of languages.
-
- ### Data Splits
-
- The dataset contains a single `train` split.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- [More Information Needed]
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- [More Information Needed]
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- ```bibtex
- @article{WOLK2014126,
-   title = {Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs},
-   journal = {Procedia Technology},
-   volume = {18},
-   pages = {126-132},
-   year = {2014},
-   note = {International workshop on Innovations in Information and Communication Science and Technology, IICST 2014, 3-5 September 2014, Warsaw, Poland},
-   issn = {2212-0173},
-   doi = {https://doi.org/10.1016/j.protcy.2014.11.024},
-   url = {https://www.sciencedirect.com/science/article/pii/S2212017314005453},
-   author = {Krzysztof Wołk and Krzysztof Marasek},
-   keywords = {Comparable corpora, machine translation, NLP},
- }
- ```
-
- ```bibtex
- @InProceedings{TIEDEMANN12.463,
-   author = {J{\"o}rg Tiedemann},
-   title = {Parallel Data, Tools and Interfaces in OPUS},
-   booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
-   year = {2012},
-   month = {may},
-   date = {23-25},
-   address = {Istanbul, Turkey},
-   editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
-   publisher = {European Language Resources Association (ELRA)},
-   isbn = {978-2-9517408-7-7},
-   language = {english}
- }
- ```
-
- ### Contributions
-
- Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
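
For quick reference outside the deleted card, here is a minimal usage sketch for one of the predefined configurations described above (ar-en). It assumes the dataset id `opus_wikipedia`, taken from the deleted loading script further down this diff; substitute the actual repository id if it differs.

```python
from datasets import load_dataset

# Load one of the predefined configurations listed in the deleted card above.
# The "opus_wikipedia" dataset id is an assumption based on the deleted script below.
dataset = load_dataset("opus_wikipedia", "ar-en", split="train")

example = dataset[0]
print(example["id"])                 # e.g. "0"
print(example["translation"]["ar"])  # Arabic side of the sentence pair
print(example["translation"]["en"])  # English side of the sentence pair
```
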
ar-en/opus_wikipedia-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d439e4b187eea8c1fa685a1fbdbfa66a417de22ccf57c28ca932d015c59b1a9c
+ size 26617750

ar-pl/opus_wikipedia-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d667032678ae09f05c34657c75bea34f87fe1faa1f605ed4e4452c25067ce46e
+ size 175806050

dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"ar-en": {"description": "This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wo\u0142k and Krzysztof Marasek. Please cite the following publication if you use the data: Krzysztof Wo\u0142k and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014\n20 languages, 36 bitexts\ntotal number of files: 114\ntotal number of tokens: 610.13M\ntotal number of sentence fragments: 25.90M\n", "citation": "@InProceedings{TIEDEMANN12.463,\n author = {J\ufffdrg Tiedemann},\n title = {Parallel Data, Tools and Interfaces in OPUS},\n booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},\n year = {2012},\n month = {may},\n date = {23-25},\n address = {Istanbul, Turkey},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-7-7},\n language = {english}\n }\n", "homepage": "http://opus.nlpl.eu/Wikipedia.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["ar", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "opus_wikipedia", "config_name": "ar-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 45207715, "num_examples": 151136, "dataset_name": "opus_wikipedia"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/ar-en.txt.zip": {"num_bytes": 16097997, "checksum": "6cae36918d61b77db5015db59fbd7d0425f148a620e44d1a6d54e821dfe41a08"}}, "download_size": 16097997, "post_processing_size": null, "dataset_size": 45207715, "size_in_bytes": 61305712}, "ar-pl": {"description": "This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wo\u0142k and Krzysztof Marasek. 
Please cite the following publication if you use the data: Krzysztof Wo\u0142k and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014\n20 languages, 36 bitexts\ntotal number of files: 114\ntotal number of tokens: 610.13M\ntotal number of sentence fragments: 25.90M\n", "citation": "@InProceedings{TIEDEMANN12.463,\n author = {J\ufffdrg Tiedemann},\n title = {Parallel Data, Tools and Interfaces in OPUS},\n booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},\n year = {2012},\n month = {may},\n date = {23-25},\n address = {Istanbul, Turkey},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-7-7},\n language = {english}\n }\n", "homepage": "http://opus.nlpl.eu/Wikipedia.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["ar", "pl"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "opus_wikipedia", "config_name": "ar-pl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 304851676, "num_examples": 823715, "dataset_name": "opus_wikipedia"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/ar-pl.txt.zip": {"num_bytes": 104585718, "checksum": "4e52cfe2fe1bc4561249091afadd2079b65d739b0ac254a9db0f1e3a3ce9d396"}}, "download_size": 104585718, "post_processing_size": null, "dataset_size": 304851676, "size_in_bytes": 409437394}, "en-sl": {"description": "This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wo\u0142k and Krzysztof Marasek. 
Please cite the following publication if you use the data: Krzysztof Wo\u0142k and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014\n20 languages, 36 bitexts\ntotal number of files: 114\ntotal number of tokens: 610.13M\ntotal number of sentence fragments: 25.90M\n", "citation": "@InProceedings{TIEDEMANN12.463,\n author = {J\ufffdrg Tiedemann},\n title = {Parallel Data, Tools and Interfaces in OPUS},\n booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},\n year = {2012},\n month = {may},\n date = {23-25},\n address = {Istanbul, Turkey},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-7-7},\n language = {english}\n }\n", "homepage": "http://opus.nlpl.eu/Wikipedia.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["en", "sl"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "opus_wikipedia", "config_name": "en-sl", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 30479739, "num_examples": 140124, "dataset_name": "opus_wikipedia"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/en-sl.txt.zip": {"num_bytes": 11727538, "checksum": "050c79504fb8e49f32e0ab2625dfbe777ed5f7d430e0f154aaafa230dc71350e"}}, "download_size": 11727538, "post_processing_size": null, "dataset_size": 30479739, "size_in_bytes": 42207277}, "en-ru": {"description": "This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wo\u0142k and Krzysztof Marasek. 
Please cite the following publication if you use the data: Krzysztof Wo\u0142k and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014\n20 languages, 36 bitexts\ntotal number of files: 114\ntotal number of tokens: 610.13M\ntotal number of sentence fragments: 25.90M\n", "citation": "@InProceedings{TIEDEMANN12.463,\n author = {J\ufffdrg Tiedemann},\n title = {Parallel Data, Tools and Interfaces in OPUS},\n booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},\n year = {2012},\n month = {may},\n date = {23-25},\n address = {Istanbul, Turkey},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-7-7},\n language = {english}\n }\n", "homepage": "http://opus.nlpl.eu/Wikipedia.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["en", "ru"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "opus_wikipedia", "config_name": "en-ru", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 167649057, "num_examples": 572717, "dataset_name": "opus_wikipedia"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/en-ru.txt.zip": {"num_bytes": 57356138, "checksum": "7c863f5038ff572738ecab2bc5a4ecef08eb5d27cba6cc54a671a596098fc689"}}, "download_size": 57356138, "post_processing_size": null, "dataset_size": 167649057, "size_in_bytes": 225005195}, "en-vi": {"description": "This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wo\u0142k and Krzysztof Marasek. 
Please cite the following publication if you use the data: Krzysztof Wo\u0142k and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014\n20 languages, 36 bitexts\ntotal number of files: 114\ntotal number of tokens: 610.13M\ntotal number of sentence fragments: 25.90M\n", "citation": "@InProceedings{TIEDEMANN12.463,\n author = {J\ufffdrg Tiedemann},\n title = {Parallel Data, Tools and Interfaces in OPUS},\n booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},\n year = {2012},\n month = {may},\n date = {23-25},\n address = {Istanbul, Turkey},\n editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},\n publisher = {European Language Resources Association (ELRA)},\n isbn = {978-2-9517408-7-7},\n language = {english}\n }\n", "homepage": "http://opus.nlpl.eu/Wikipedia.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["en", "vi"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "opus_wikipedia", "config_name": "en-vi", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7571598, "num_examples": 58116, "dataset_name": "opus_wikipedia"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/en-vi.txt.zip": {"num_bytes": 2422413, "checksum": "4214b9b17a3b6780ebacfdcd9d7ad256d03e61bbe43c786228f740c7c7a4a1e1"}}, "download_size": 2422413, "post_processing_size": null, "dataset_size": 7571598, "size_in_bytes": 9994011}}
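
The deleted `dataset_infos.json` above records, for each configuration, the source OPUS download URL together with its byte size and SHA-256 checksum. Below is a minimal sketch of how such a checksum could be re-verified after downloading one of the listed archives; the URL and expected digest are copied from the `ar-en` entry above, and the local filename is only an illustration.

```python
import hashlib
import urllib.request

# URL and expected checksum copied from the "ar-en" entry in dataset_infos.json above.
URL = "https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/ar-en.txt.zip"
EXPECTED_SHA256 = "6cae36918d61b77db5015db59fbd7d0425f148a620e44d1a6d54e821dfe41a08"

# Hypothetical local path for the downloaded archive.
local_path, _ = urllib.request.urlretrieve(URL, "ar-en.txt.zip")

# Hash the file in chunks and compare against the recorded digest.
sha256 = hashlib.sha256()
with open(local_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("checksum matches:", sha256.hexdigest() == EXPECTED_SHA256)
```
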
 
 
en-ru/opus_wikipedia-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46ba23484d27deae019dae627b4c0697983b3cff921d3e8eecc9f87caf229f43
+ size 97008375

en-sl/opus_wikipedia-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:306c91d1e2e4dbebaab826219908a56871102af59de7fc7426d9c873afec35b3
+ size 18557818

en-vi/opus_wikipedia-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c2c91fc42571ac439fe6fd557e13cc39cb986ea88d37a2a70fee77195bb6b65
+ size 3969558
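
With this commit each configuration is stored as a plain Parquet file, so the data can also be read directly with pandas. A minimal sketch, assuming the repository id is `opus_wikipedia` (replace it with the actual namespace/repo id) and that the file layout matches the paths added above:

```python
# Sketch: download one of the Parquet shards added in this commit and read it with pandas.
# The repo_id below is an assumption; substitute the actual dataset repository id.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="opus_wikipedia",                       # assumed repository id
    filename="ar-en/opus_wikipedia-train.parquet",  # path added in this commit
    repo_type="dataset",
)

df = pd.read_parquet(path)
print(len(df))                   # 151136 rows according to the deleted card above
print(df.loc[0, "translation"])  # mapping with "ar" and "en" keys
```
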
opus_wikipedia.py DELETED
@@ -1,127 +0,0 @@
- # coding=utf-8
- # Copyright 2020 HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- import os
-
- import datasets
-
-
- _DESCRIPTION = """\
- This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek. Please cite the following publication if you use the data: Krzysztof Wołk and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18, Elsevier, p.126-132, 2014
- 20 languages, 36 bitexts
- total number of files: 114
- total number of tokens: 610.13M
- total number of sentence fragments: 25.90M
- """
- _HOMEPAGE_URL = "http://opus.nlpl.eu/Wikipedia.php"
- _CITATION = """\
- @InProceedings{TIEDEMANN12.463,
-   author = {J{\"o}rg Tiedemann},
-   title = {Parallel Data, Tools and Interfaces in OPUS},
-   booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
-   year = {2012},
-   month = {may},
-   date = {23-25},
-   address = {Istanbul, Turkey},
-   editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
-   publisher = {European Language Resources Association (ELRA)},
-   isbn = {978-2-9517408-7-7},
-   language = {english}
- }
- """
-
- _VERSION = "1.0.0"
- _BASE_NAME = "Wikipedia.{}.{}"
- _BASE_URL = "https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/{}-{}.txt.zip"
- # Please note that only few pairs are shown here. You can use config to generate data for all language pairs
- _LANGUAGE_PAIRS = [
-     ("ar", "en"),
-     ("ar", "pl"),
-     ("en", "sl"),
-     ("en", "ru"),
-     ("en", "vi"),
- ]
-
-
- class WikipediaConfig(datasets.BuilderConfig):
-     def __init__(self, *args, lang1=None, lang2=None, **kwargs):
-         super().__init__(
-             *args,
-             name=f"{lang1}-{lang2}",
-             **kwargs,
-         )
-         self.lang1 = lang1
-         self.lang2 = lang2
-
-
- class OpusWikipedia(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIGS = [
-         WikipediaConfig(
-             lang1=lang1,
-             lang2=lang2,
-             description=f"Translating {lang1} to {lang2} or vice versa",
-             version=datasets.Version(_VERSION),
-         )
-         for lang1, lang2 in _LANGUAGE_PAIRS
-     ]
-     BUILDER_CONFIG_CLASS = WikipediaConfig
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "translation": datasets.Translation(languages=(self.config.lang1, self.config.lang2)),
-                 },
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE_URL,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         def _base_url(lang1, lang2):
-             return _BASE_URL.format(lang1, lang2)
-
-         download_url = _base_url(self.config.lang1, self.config.lang2)
-         path = dl_manager.download_and_extract(download_url)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"datapath": path},
-             )
-         ]
-
-     def _generate_examples(self, datapath):
-         l1, l2 = self.config.lang1, self.config.lang2
-         folder = l1 + "-" + l2
-         l1_file = _BASE_NAME.format(folder, l1)
-         l2_file = _BASE_NAME.format(folder, l2)
-         l1_path = os.path.join(datapath, l1_file)
-         l2_path = os.path.join(datapath, l2_file)
-         with open(l1_path, encoding="utf-8") as f1, open(l2_path, encoding="utf-8") as f2:
-             for sentence_counter, (x, y) in enumerate(zip(f1, f2)):
-                 x = x.strip()
-                 y = y.strip()
-                 result = (
-                     sentence_counter,
-                     {
-                         "id": str(sentence_counter),
-                         "translation": {l1: x, l2: y},
-                     },
-                 )
-                 yield result
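
As a small illustration of what the deleted builder did, the snippet below reproduces its download-URL and member-file-name construction (plain string formatting using `_BASE_URL` and `_BASE_NAME` from the script above), for the it-pl pair used as the example in the old card.

```python
# Reproduces the URL / file-name construction from the deleted opus_wikipedia.py,
# here for the ("it", "pl") pair used as the example in the old dataset card.
_BASE_NAME = "Wikipedia.{}.{}"
_BASE_URL = "https://object.pouta.csc.fi/OPUS-Wikipedia/v1.0/moses/{}-{}.txt.zip"

lang1, lang2 = "it", "pl"
download_url = _BASE_URL.format(lang1, lang2)
folder = f"{lang1}-{lang2}"
l1_file = _BASE_NAME.format(folder, lang1)  # "Wikipedia.it-pl.it"
l2_file = _BASE_NAME.format(folder, lang2)  # "Wikipedia.it-pl.pl"

print(download_url)      # .../OPUS-Wikipedia/v1.0/moses/it-pl.txt.zip
print(l1_file, l2_file)
```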