parquet-converter committed on
Commit
40018a6
1 Parent(s): 1c1fe0d

Update parquet files

README.md DELETED
@@ -1,247 +0,0 @@
- ---
- pretty_name: TEDMulti
- paperswithcode_id: null
- dataset_info:
-   features:
-   - name: translations
-     dtype:
-       translation_variable_languages:
-         languages:
-         - ar
-         - az
-         - be
-         - bg
-         - bn
-         - bs
-         - calv
-         - cs
-         - da
-         - de
-         - el
-         - en
-         - eo
-         - es
-         - et
-         - eu
-         - fa
-         - fi
-         - fr
-         - fr-ca
-         - gl
-         - he
-         - hi
-         - hr
-         - hu
-         - hy
-         - id
-         - it
-         - ja
-         - ka
-         - kk
-         - ko
-         - ku
-         - lt
-         - mk
-         - mn
-         - mr
-         - ms
-         - my
-         - nb
-         - nl
-         - pl
-         - pt
-         - pt-br
-         - ro
-         - ru
-         - sk
-         - sl
-         - sq
-         - sr
-         - sv
-         - ta
-         - th
-         - tr
-         - uk
-         - ur
-         - vi
-         - zh
-         - zh-cn
-         - zh-tw
-         num_languages: 60
-   - name: talk_name
-     dtype: string
-   config_name: plain_text
-   splits:
-   - name: test
-     num_bytes: 23364983
-     num_examples: 7213
-   - name: train
-     num_bytes: 748209995
-     num_examples: 258098
-   - name: validation
-     num_bytes: 19435383
-     num_examples: 6049
-   download_size: 352222045
-   dataset_size: 791010361
- ---
-
- # Dataset Card for "ted_multi"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/neulab/word-embeddings-for-nmt](https://github.com/neulab/word-embeddings-for-nmt)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 335.91 MB
- - **Size of the generated dataset:** 754.37 MB
- - **Total amount of disk used:** 1090.27 MB
-
- ### Dataset Summary
-
- Massively multilingual (60 language) data set derived from TED Talk transcripts.
- Each record consists of parallel arrays of language and text. Missing and
- incomplete translations will be filtered out.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### plain_text
-
- - **Size of downloaded dataset files:** 335.91 MB
- - **Size of the generated dataset:** 754.37 MB
- - **Total amount of disk used:** 1090.27 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "talk_name": "shabana_basij_rasikh_dare_to_educate_afghan_girls",
-     "translations": "{\"language\": [\"ar\", \"az\", \"bg\", \"bn\", \"cs\", \"da\", \"de\", \"el\", \"en\", \"es\", \"fa\", \"fr\", \"he\", \"hi\", \"hr\", \"hu\", \"hy\", \"id\", \"it\", ..."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### plain_text
- - `translations`: a multilingual `string` variable, with possible languages including `ar`, `az`, `be`, `bg`, `bn`.
- - `talk_name`: a `string` feature.
-
- ### Data Splits
-
- | name |train |validation|test|
- |----------|-----:|---------:|---:|
- |plain_text|258098| 6049|7213|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @InProceedings{qi-EtAl:2018:N18-2,
-   author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
-   title = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
-   booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
-   month = {June},
-   year = {2018},
-   address = {New Orleans, Louisiana},
-   publisher = {Association for Computational Linguistics},
-   pages = {529--535},
-   abstract = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.},
-   url = {http://www.aclweb.org/anthology/N18-2084}
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
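For orientation, here is a minimal usage sketch matching the fields described in the deleted card. It is illustrative only and not part of the commit; it assumes the Hugging Face `datasets` library and that this repository loads under the `ted_multi` id with the `plain_text` config.

```python
# Illustrative sketch (not from the repository): load the dataset and inspect one record.
from datasets import load_dataset

ds = load_dataset("ted_multi", "plain_text", split="validation")
example = ds[0]
print(example["talk_name"])

# TranslationVariableLanguages decodes to two parallel lists: language codes and texts.
pairs = dict(zip(example["translations"]["language"], example["translations"]["translation"]))
print(pairs.get("en", ""))
```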
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"plain_text": {"description": "Massively multilingual (60 language) data set derived from TED Talk transcripts.\nEach record consists of parallel arrays of language and text. Missing and\nincomplete translations will be filtered out.\n", "citation": "@InProceedings{qi-EtAl:2018:N18-2,\n author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},\n title = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},\n booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},\n month = {June},\n year = {2018},\n address = {New Orleans, Louisiana},\n publisher = {Association for Computational Linguistics},\n pages = {529--535},\n abstract = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.},\n url = {http://www.aclweb.org/anthology/N18-2084}\n}\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translations": {"languages": ["ar", "az", "be", "bg", "bn", "bs", "calv", "cs", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fr-ca", "gl", "he", "hi", "hr", "hu", "hy", "id", "it", "ja", "ka", "kk", "ko", "ku", "lt", "mk", "mn", "mr", "ms", "my", "nb", "nl", "pl", "pt", "pt-br", "ro", "ru", "sk", "sl", "sq", "sr", "sv", "ta", "th", "tr", "uk", "ur", "vi", "zh", "zh-cn", "zh-tw"], "num_languages": 60, "id": null, "_type": "TranslationVariableLanguages"}, "talk_name": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "ted_multi_translate", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 23364983, "num_examples": 7213, "dataset_name": "ted_multi_translate"}, "train": {"name": "train", "num_bytes": 748209995, "num_examples": 258098, "dataset_name": "ted_multi_translate"}, "validation": {"name": "validation", "num_bytes": 19435383, "num_examples": 6049, "dataset_name": "ted_multi_translate"}}, "download_checksums": {"http://phontron.com/data/ted_talks.tar.gz": {"num_bytes": 352222045, "checksum": "03457b9ebc6d60839f1a48c5a03c940266aff78b81fcda4c6d9e2a5a7fb670ae"}}, "download_size": 352222045, "dataset_size": 791010361, "size_in_bytes": 1143232406}}
plain_text/ted_multi-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d69f8bf5494304cf9186da09e8a1e3a0ee3634171f82f2af05e1693136f53223
+ size 15400437
plain_text/ted_multi-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f9b75e40cc5d888d11f37956e072eacf4878d978eb91ddcd66fdce905f50231
+ size 334377927
plain_text/ted_multi-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94cbd7a2533ef3d25b54f8028f4e4dae8d8eb605e64fed196f88c554cafb57ee
+ size 164032406
plain_text/ted_multi-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:882dac5ac3a03d56881dd17764e165581c68907320f841400c274cced118ee89
+ size 12877695
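The shards added above are plain Parquet files, so they can also be inspected without the `datasets` library. A hedged sketch with `pyarrow` follows; the paths are the files added in this commit, while the reading code itself is an illustration, not part of the repository.

```python
# Illustrative sketch: read the converted Parquet shards directly with pyarrow
# (assumes the LFS files above have been pulled locally).
import pyarrow.parquet as pq

test = pq.read_table("plain_text/ted_multi-test.parquet")
print(test.num_rows)  # 7,213 test examples per the deleted dataset card
print(test.schema)    # translations (parallel language/translation lists) and talk_name

# The train split is stored as two shards; ParquetDataset reads them as one table.
train = pq.ParquetDataset([
    "plain_text/ted_multi-train-00000-of-00002.parquet",
    "plain_text/ted_multi-train-00001-of-00002.parquet",
]).read()
print(train.num_rows)  # 258,098 train examples per the deleted dataset card
```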
ted_multi.py DELETED
@@ -1,184 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """TED talk multilingual data set."""
-
- import csv
-
- import datasets
-
-
- _DESCRIPTION = """\
- Massively multilingual (60 language) data set derived from TED Talk transcripts.
- Each record consists of parallel arrays of language and text. Missing and
- incomplete translations will be filtered out.
- """
-
- _CITATION = """\
- @InProceedings{qi-EtAl:2018:N18-2,
-   author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
-   title = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
-   booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
-   month = {June},
-   year = {2018},
-   address = {New Orleans, Louisiana},
-   publisher = {Association for Computational Linguistics},
-   pages = {529--535},
-   abstract = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.},
-   url = {http://www.aclweb.org/anthology/N18-2084}
- }
- """
-
- _DATA_URL = "http://phontron.com/data/ted_talks.tar.gz"
-
- _LANGUAGES = (
-     "en",
-     "es",
-     "pt-br",
-     "fr",
-     "ru",
-     "he",
-     "ar",
-     "ko",
-     "zh-cn",
-     "it",
-     "ja",
-     "zh-tw",
-     "nl",
-     "ro",
-     "tr",
-     "de",
-     "vi",
-     "pl",
-     "pt",
-     "bg",
-     "el",
-     "fa",
-     "sr",
-     "hu",
-     "hr",
-     "uk",
-     "cs",
-     "id",
-     "th",
-     "sv",
-     "sk",
-     "sq",
-     "lt",
-     "da",
-     "calv",
-     "my",
-     "sl",
-     "mk",
-     "fr-ca",
-     "fi",
-     "hy",
-     "hi",
-     "nb",
-     "ka",
-     "mn",
-     "et",
-     "ku",
-     "gl",
-     "mr",
-     "zh",
-     "ur",
-     "eo",
-     "ms",
-     "az",
-     "ta",
-     "bn",
-     "kk",
-     "be",
-     "eu",
-     "bs",
- )
-
-
- class TedMultiTranslate(datasets.GeneratorBasedBuilder):
-     """TED talk multilingual data set."""
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="plain_text",
-             version=datasets.Version("1.0.0", ""),
-             description="Plain text import of multilingual TED talk translations",
-         )
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "translations": datasets.features.TranslationVariableLanguages(languages=_LANGUAGES),
-                     "talk_name": datasets.Value("string"),
-                 }
-             ),
-             homepage="https://github.com/neulab/word-embeddings-for-nmt",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         archive = dl_manager.download(_DATA_URL)
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "data_file": "all_talks_train.tsv",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "data_file": "all_talks_dev.tsv",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "data_file": "all_talks_test.tsv",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, data_file, files):
-         """This function returns the examples in the raw (text) form."""
-         for path, f in files:
-             if path == data_file:
-                 lines = (line.decode("utf-8") for line in f)
-                 reader = csv.DictReader(lines, delimiter="\t", quoting=csv.QUOTE_NONE)
-                 for idx, row in enumerate(reader):
-                     # Everything in the row except for 'talk_name' will be a translation.
-                     # Missing/incomplete translations will contain the string "__NULL__" or
-                     # "_ _ NULL _ _".
-                     yield idx, {
-                         "translations": {
-                             lang: text
-                             for lang, text in row.items()
-                             if lang != "talk_name" and _is_translation_complete(text)
-                         },
-                         "talk_name": row["talk_name"],
-                     }
-                 break
-
-
- def _is_translation_complete(text):
-     return text and "__NULL__" not in text and "_ _ NULL _ _" not in text
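For reference, the per-row transformation that the deleted script performed can be summarized in a standalone snippet. This is an illustration grounded in the code above; the example row is made up.

```python
# Standalone illustration of the deleted script's filtering logic (example row is made up).
def _is_translation_complete(text):
    return bool(text) and "__NULL__" not in text and "_ _ NULL _ _" not in text

row = {"talk_name": "example_talk", "en": "Hello.", "fr": "Bonjour.", "de": "__NULL__"}
example = {
    "translations": {
        lang: text for lang, text in row.items()
        if lang != "talk_name" and _is_translation_complete(text)
    },
    "talk_name": row["talk_name"],
}
print(example)  # {'translations': {'en': 'Hello.', 'fr': 'Bonjour.'}, 'talk_name': 'example_talk'}
```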