Commit 1c813af (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,189 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - am
+ - ar
+ - az
+ - bg
+ - bn
+ - bs
+ - cs
+ - de
+ - dv
+ - en
+ - es
+ - fa
+ - fr
+ - ha
+ - hi
+ - id
+ - it
+ - ja
+ - ko
+ - ku
+ - ml
+ - ms
+ - nl
+ - no
+ - pl
+ - pt
+ - ro
+ - ru
+ - sd
+ - so
+ - sq
+ - sv
+ - sw
+ - ta
+ - tg
+ - th
+ - tr
+ - tt
+ - ug
+ - ur
+ - uz
+ - zh
+ licenses:
+ - unknown
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - machine-translation
+ ---
+
+ # Dataset Card for Tanzil
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** http://opus.nlpl.eu/Tanzil.php
+ - **Repository:** None
+ - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
+ - **Leaderboard:** [More Information Needed]
+ - **Point of Contact:** [More Information Needed]
+
+ ### Dataset Summary
+
+ This is a collection of Quran translations compiled by the Tanzil project: 42 languages and 878 bitexts, with 22.33M tokens and 1.01M sentence fragments in total.
+
+ To load a language pair that isn't among the preconfigured pairs, specify the two language codes with the `lang1` and `lang2` arguments.
+ You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tanzil.php
+ For example:
+
+ `dataset = load_dataset("tanzil", lang1="en", lang2="ru")`
+
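The builder simply zips the two aligned Moses files of a pair line by line, one example per aligned line. A self-contained sketch of the resulting row format (the sentences below are placeholders for an assumed en-ru pair, not actual corpus text):

```python
# Toy stand-ins for the two aligned files of a pair; the real builder
# reads Tanzil.<pair>.<lang> files extracted from the OPUS zip.
en_lines = ["sentence one\n", "sentence two\n"]
ru_lines = ["предложение один\n", "предложение два\n"]

# One example per aligned line pair, mirroring the loader's output schema.
examples = [
    {"id": str(i), "translation": {"en": x.strip(), "ru": y.strip()}}
    for i, (x, y) in enumerate(zip(en_lines, ru_lines))
]
print(examples[0])
```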
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance pairs a Quran sentence fragment in one language of the configuration with its aligned translation in the other.
+
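For the `bg-en` configuration, an instance has the following shape (field names follow the declared features; the fragment values here are schematic placeholders, not a real corpus row):

```python
# Schematic bg-en instance: "id" is a string, "translation" maps each
# of the pair's language codes to its sentence fragment.
instance = {
    "id": "0",
    "translation": {
        "bg": "...",  # Bulgarian sentence fragment (placeholder)
        "en": "...",  # aligned English sentence fragment (placeholder)
    },
}
print(sorted(instance))
```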
+ ### Data Fields
+
+ - `id`: a string identifier for the aligned sentence pair
+ - `translation`: a dictionary mapping each of the configuration's two language codes to its sentence fragment
+
+ ### Data Splits
+
+ Each configuration provides a single `train` split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The translations are provided for non-commercial purposes only; any other use requires permission from the translator or the publisher.
+
+ ### Citation Information
+
+ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"bg-en": {"description": "This is a collection of Quran translations compiled by the Tanzil project\nThe translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.\n\nIf you are using more than three of the following translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates.\n\n42 languages, 878 bitexts\ntotal number of files: 105\ntotal number of tokens: 22.33M\ntotal number of sentence fragments: 1.01M\n", "citation": "J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)\n", "homepage": "http://opus.nlpl.eu/Tanzil.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["bg", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tanzil", "config_name": "bg-en", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 34473016, "num_examples": 135477, "dataset_name": "tanzil"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/bg-en.txt.zip": {"num_bytes": 9305292, "checksum": "dc33a92325363e171d1232761696d9f295a75e38768d91af84be5a76c9f7374b"}}, "download_size": 9305292, "post_processing_size": null, "dataset_size": 34473016, "size_in_bytes": 43778308}, "bn-hi": {"description": "This is a collection of Quran translations compiled by the Tanzil project\nThe translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.\n\nIf you are using more than three of the following translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates.\n\n42 languages, 878 bitexts\ntotal number of files: 105\ntotal number of tokens: 22.33M\ntotal number of sentence fragments: 1.01M\n", "citation": "J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)\n", "homepage": "http://opus.nlpl.eu/Tanzil.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["bn", "hi"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tanzil", "config_name": "bn-hi", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 18869103, "num_examples": 24942, "dataset_name": "tanzil"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/bn-hi.txt.zip": {"num_bytes": 3542740, "checksum": "456186653dc1aa7a7777d6687b32ee9e455ca851bffb525292741a8807f64118"}}, "download_size": 3542740, "post_processing_size": null, "dataset_size": 18869103, "size_in_bytes": 22411843}, "fa-sv": {"description": "This is a collection of Quran translations compiled by the Tanzil project\nThe translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.\n\nIf you are using more than three of the following translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates.\n\n42 languages, 878 bitexts\ntotal number of files: 105\ntotal number of tokens: 22.33M\ntotal number of sentence fragments: 1.01M\n", "citation": "J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)\n", "homepage": "http://opus.nlpl.eu/Tanzil.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["fa", "sv"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tanzil", "config_name": "fa-sv", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 29281634, "num_examples": 68601, "dataset_name": "tanzil"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/fa-sv.txt.zip": {"num_bytes": 8550826, "checksum": "2734bd6c6d6328510d018b9cb6acfcd20c301d6a3c91707a80a0a8aab3499c17"}}, "download_size": 8550826, "post_processing_size": null, "dataset_size": 29281634, "size_in_bytes": 37832460}, "ru-zh": {"description": "This is a collection of Quran translations compiled by the Tanzil project\nThe translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.\n\nIf you are using more than three of the following translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates.\n\n42 languages, 878 bitexts\ntotal number of files: 105\ntotal number of tokens: 22.33M\ntotal number of sentence fragments: 1.01M\n", "citation": "J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)\n", "homepage": "http://opus.nlpl.eu/Tanzil.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["ru", "zh"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tanzil", "config_name": "ru-zh", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 59736143, "num_examples": 99779, "dataset_name": "tanzil"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/ru-zh.txt.zip": {"num_bytes": 16214659, "checksum": "bed214567a53ab49bee0f4d9662f55ba4419e9a01ed8d64d24610a489e576d62"}}, "download_size": 16214659, "post_processing_size": null, "dataset_size": 59736143, "size_in_bytes": 75950802}, "en-tr": {"description": "This is a collection of Quran translations compiled by the Tanzil project\nThe translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.\n\nIf you are using more than three of the following translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates.\n\n42 languages, 878 bitexts\ntotal number of files: 105\ntotal number of tokens: 22.33M\ntotal number of sentence fragments: 1.01M\n", "citation": "J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)\n", "homepage": "http://opus.nlpl.eu/Tanzil.php", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["en", "tr"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "tanzil", "config_name": "en-tr", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 255891913, "num_examples": 1189967, "dataset_name": "tanzil"}}, "download_checksums": {"https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/en-tr.txt.zip": {"num_bytes": 82954694, "checksum": "dc5002bdbc053f99c099660c5166a9d3ae8cf14188d47e797b201890f883f060"}}, "download_size": 82954694, "post_processing_size": null, "dataset_size": 255891913, "size_in_bytes": 338846607}}
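Since this metadata is plain JSON, summary statistics fall out directly. A sketch totalling the `train`-split example counts quoted above for the five preconfigured pairs:

```python
# num_examples values copied from the dataset_infos.json entries above.
train_examples = {
    "bg-en": 135477,
    "bn-hi": 24942,
    "fa-sv": 68601,
    "ru-zh": 99779,
    "en-tr": 1189967,
}

# Total aligned pairs across the preconfigured configurations.
total = sum(train_examples.values())
print(total)  # → 1518766
```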
dummy/bg-en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffb9c967ce4bdc7d48e4b4c98296f4773af7a17a494643475d2ccbfab231b617
+ size 1094
dummy/bn-hi/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c4a1be47b8c11246a26f49d9f9a1e5295c98358149abb7c775b5124b5f4378c
+ size 1266
dummy/en-tr/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da54d6f7ae7181983a162c5713cddc245dc8d20162ecac8e2c0c063befd82ee9
+ size 1094
dummy/fa-sv/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a22a93818287d36937e53d79babafd4e5edcd5bfbe4b1790712712e78a8c512
+ size 1169
dummy/ru-zh/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38c0cd0cbcd593752f7a27e69e850f7cbdb98bc95e15eb5b3d827caf689d70b7
+ size 1251
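Each `dummy_data.zip` above is stored as a Git LFS pointer in exactly the three-line `key value` format shown. A sketch parsing one such pointer (text copied from the bg-en entry):

```python
# LFS pointer text copied verbatim from the bg-en dummy_data.zip entry.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ffb9c967ce4bdc7d48e4b4c98296f4773af7a17a494643475d2ccbfab231b617
size 1094
"""

# Each line is "key value"; split on the first space only so the URL
# and oid values stay intact.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
print(fields["size"])  # size in bytes of the real zip → 1094
```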
tanzil.py ADDED
@@ -0,0 +1,120 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ import os
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ This is a collection of Quran translations compiled by the Tanzil project
+ The translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.
+
+ If you are using more than three of the following translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates.
+
+ 42 languages, 878 bitexts
+ total number of files: 105
+ total number of tokens: 22.33M
+ total number of sentence fragments: 1.01M
+ """
+ _HOMEPAGE_URL = "http://opus.nlpl.eu/Tanzil.php"
+ _CITATION = """\
+ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
+ """
+
+ _VERSION = "1.0.0"
+ _BASE_NAME = "Tanzil.{}.{}"
+ _BASE_URL = "https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/{}-{}.txt.zip"
+
+ # Only a few pairs are preconfigured here; pass lang1/lang2 to load any other available pair.
+ _LANGUAGE_PAIRS = [
+     ("bg", "en"),
+     ("bn", "hi"),
+     ("fa", "sv"),
+     ("ru", "zh"),
+     ("en", "tr"),
+ ]
+
+
+ class TanzilConfig(datasets.BuilderConfig):
+     def __init__(self, *args, lang1=None, lang2=None, **kwargs):
+         super().__init__(
+             *args,
+             name=f"{lang1}-{lang2}",
+             **kwargs,
+         )
+         self.lang1 = lang1
+         self.lang2 = lang2
+
+
+ class Tanzil(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         TanzilConfig(
+             lang1=lang1,
+             lang2=lang2,
+             description=f"Translating {lang1} to {lang2} or vice versa",
+             version=datasets.Version(_VERSION),
+         )
+         for lang1, lang2 in _LANGUAGE_PAIRS
+     ]
+     BUILDER_CONFIG_CLASS = TanzilConfig
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "translation": datasets.Translation(languages=(self.config.lang1, self.config.lang2)),
+                 },
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         def _base_url(lang1, lang2):
+             return _BASE_URL.format(lang1, lang2)
+
+         download_url = _base_url(self.config.lang1, self.config.lang2)
+         path = dl_manager.download_and_extract(download_url)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"datapath": path},
+             )
+         ]
+
+     def _generate_examples(self, datapath):
+         l1, l2 = self.config.lang1, self.config.lang2
+         folder = l1 + "-" + l2
+         l1_file = _BASE_NAME.format(folder, l1)
+         l2_file = _BASE_NAME.format(folder, l2)
+         l1_path = os.path.join(datapath, l1_file)
+         l2_path = os.path.join(datapath, l2_file)
+         with open(l1_path, encoding="utf-8") as f1, open(l2_path, encoding="utf-8") as f2:
+             for sentence_counter, (x, y) in enumerate(zip(f1, f2)):
+                 x = x.strip()
+                 y = y.strip()
+                 result = (
+                     sentence_counter,
+                     {
+                         "id": str(sentence_counter),
+                         "translation": {l1: x, l2: y},
+                     },
+                 )
+                 yield result
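The `_BASE_URL` and `_BASE_NAME` templates in the script resolve mechanically; for the preconfigured `bg-en` pair they yield the download URL and the two per-language file names:

```python
# Templates copied from tanzil.py above.
_BASE_NAME = "Tanzil.{}.{}"
_BASE_URL = "https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/{}-{}.txt.zip"

lang1, lang2 = "bg", "en"
url = _BASE_URL.format(lang1, lang2)        # zip to download from OPUS
folder = lang1 + "-" + lang2                # pair name used in file names
l1_file = _BASE_NAME.format(folder, lang1)  # file holding the bg side
l2_file = _BASE_NAME.format(folder, lang2)  # file holding the en side
print(url)
print(l1_file, l2_file)
```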