Commit bdd7c95, committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,164 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ multilinguality:
+ - multilingual
+ languages:
+ - en
+ - vi
+ licenses:
+ - unknown
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - machine-translation
+ ---
+
+ # Dataset Card for mt_eng_vietnamese
+
+ ## Table of Contents
+ - [Dataset Card for mt_eng_vietnamese](#dataset-card-for-mt_eng_vietnamese)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
+ - **Repository:** [Needs More Information]
+ - **Paper:** [Needs More Information]
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task.
+
+ ### Supported Tasks and Leaderboards
+
+ Machine Translation
+
+ ### Languages
+
+ English, Vietnamese
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the dataset:
+ ```
+ {
+     'translation': {
+         'en': 'In 4 minutes , atmospheric chemist Rachel Pike provides a glimpse of the massive scientific effort behind the bold headlines on climate change , with her team -- one of thousands who contributed -- taking a risky flight over the rainforest in pursuit of data on a key molecule .',
+         'vi': 'Trong 4 phút , chuyên gia hoá học khí quyển Rachel Pike giới thiệu sơ lược về những nỗ lực khoa học miệt mài đằng sau những tiêu đề táo bạo về biến đổi khí hậu , cùng với đoàn nghiên cứu của mình -- hàng ngàn người đã cống hiến cho dự án này -- một chuyến bay mạo hiểm qua rừng già để tìm kiếm thông tin về một phân tử then chốt .'
+     }
+ }
+ ```
+
+ ### Data Fields
+
+ - translation:
+   - en: text in English
+   - vi: text in Vietnamese
+
+ ### Data Splits
+
+ - train: 133,318 examples
+ - validation: 1,269 examples
+ - test: 1,269 examples
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{Luong-Manning:iwslt15,
+         Address = {Da Nang, Vietnam},
+         Author = {Luong, Minh-Thang and Manning, Christopher D.},
+         Booktitle = {International Workshop on Spoken Language Translation},
+         Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},
+         Year = {2015}}
+ ```
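Each instance in this dataset is a plain nested mapping under a single `translation` key, as shown in the card above. A minimal sketch of how such an example is accessed once loaded (the sentence text here is abbreviated from the card's example; in practice the example would come from `datasets.load_dataset("mt_eng_vietnamese", "iwslt2015-en-vi")["train"][0]`):

```python
# Stand-in for one example produced by this dataset's "translation" feature.
# The strings are truncated copies of the example shown in the card above.
example = {
    "translation": {
        "en": "In 4 minutes , atmospheric chemist Rachel Pike provides a glimpse ...",
        "vi": "Trong 4 phút , chuyên gia hoá học khí quyển Rachel Pike giới thiệu ...",
    }
}

# Each example carries exactly one source/target sentence pair,
# keyed by the two-letter language codes.
en_text = example["translation"]["en"]
vi_text = example["translation"]["vi"]
print(sorted(example["translation"]))  # → ['en', 'vi']
```

The same access pattern applies to both configs; only the order of `(source, target)` in `supervised_keys` differs.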
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"iwslt2015-vi-en": {"description": "Preprocessed Dataset from IWSLT'15 English-Vietnamese machine translation: English-Vietnamese.\n", "citation": "@inproceedings{Luong-Manning:iwslt15,\n        Address = {Da Nang, Vietnam}\n        Author = {Luong, Minh-Thang and Manning, Christopher D.},\n        Booktitle = {International Workshop on Spoken Language Translation},\n        Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},\n        Year = {2015}}\n", "homepage": "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/", "license": "", "features": {"translation": {"languages": ["vi", "en"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "vi", "output": "en"}, "builder_name": "mt_eng_vietnamese", "config_name": "iwslt2015-vi-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 32478282, "num_examples": 133318, "dataset_name": "mt_eng_vietnamese"}, "validation": {"name": "validation", "num_bytes": 323743, "num_examples": 1269, "dataset_name": "mt_eng_vietnamese"}, "test": {"name": "test", "num_bytes": 323743, "num_examples": 1269, "dataset_name": "mt_eng_vietnamese"}}, "download_checksums": {"https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/train.vi": {"num_bytes": 18074646, "checksum": "707206edf2dc0280273952c7b70544ea8a1363aa69aaeb9d70514b888dc3067d"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/train.en": {"num_bytes": 13603614, "checksum": "c26dfeed74b6bf3752f5ca552f2412456f0de153f7c804df8717931fb3a5c78a"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2012.vi": {"num_bytes": 188396, "checksum": "01004078b3f36b0c46e81b65fc6851230b3c823405c76d1d82ef3b786b512ff0"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2012.en": {"num_bytes": 140250, "checksum": "b8fbcd0d4199d41276421d1f7228298df67dd9e2c5c134c2ee8cdf3ec27801e2"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2013.vi": {"num_bytes": 183855, "checksum": "29af16842808da2812b8fd4d47fc05d082ba870e60ab7755090ad04b380994cd"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2013.en": {"num_bytes": 132264, "checksum": "3860ce60bc5b85d96cc2d3d580cf1bc9d62440e939c1b964676d3607c4e8f1df"}}, "download_size": 32323025, "post_processing_size": null, "dataset_size": 33125768, "size_in_bytes": 65448793}, "iwslt2015-en-vi": {"description": "Preprocessed Dataset from IWSLT'15 English-Vietnamese machine translation: English-Vietnamese.\n", "citation": "@inproceedings{Luong-Manning:iwslt15,\n        Address = {Da Nang, Vietnam}\n        Author = {Luong, Minh-Thang and Manning, Christopher D.},\n        Booktitle = {International Workshop on Spoken Language Translation},\n        Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},\n        Year = {2015}}\n", "homepage": "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/", "license": "", "features": {"translation": {"languages": ["en", "vi"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": {"input": "en", "output": "vi"}, "builder_name": "mt_eng_vietnamese", "config_name": "iwslt2015-en-vi", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 32478282, "num_examples": 133318, "dataset_name": "mt_eng_vietnamese"}, "validation": {"name": "validation", "num_bytes": 323743, "num_examples": 1269, "dataset_name": "mt_eng_vietnamese"}, "test": {"name": "test", "num_bytes": 323743, "num_examples": 1269, "dataset_name": "mt_eng_vietnamese"}}, "download_checksums": {"https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/train.en": {"num_bytes": 13603614, "checksum": "c26dfeed74b6bf3752f5ca552f2412456f0de153f7c804df8717931fb3a5c78a"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/train.vi": {"num_bytes": 18074646, "checksum": "707206edf2dc0280273952c7b70544ea8a1363aa69aaeb9d70514b888dc3067d"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2012.en": {"num_bytes": 140250, "checksum": "b8fbcd0d4199d41276421d1f7228298df67dd9e2c5c134c2ee8cdf3ec27801e2"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2012.vi": {"num_bytes": 188396, "checksum": "01004078b3f36b0c46e81b65fc6851230b3c823405c76d1d82ef3b786b512ff0"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2013.en": {"num_bytes": 132264, "checksum": "3860ce60bc5b85d96cc2d3d580cf1bc9d62440e939c1b964676d3607c4e8f1df"}, "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/tst2013.vi": {"num_bytes": 183855, "checksum": "29af16842808da2812b8fd4d47fc05d082ba870e60ab7755090ad04b380994cd"}}, "download_size": 32323025, "post_processing_size": null, "dataset_size": 33125768, "size_in_bytes": 65448793}}
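The metadata above is a plain JSON object keyed by config name. A trimmed sketch of reading the config names and split sizes from it (the literal below copies only the `splits`/`num_examples` values from the record above, everything else is omitted):

```python
import json

# Trimmed copy of dataset_infos.json: config names and split example counts only.
infos = json.loads("""
{"iwslt2015-vi-en": {"splits": {"train": {"num_examples": 133318},
                                "validation": {"num_examples": 1269},
                                "test": {"num_examples": 1269}}},
 "iwslt2015-en-vi": {"splits": {"train": {"num_examples": 133318},
                                "validation": {"num_examples": 1269},
                                "test": {"num_examples": 1269}}}}
""")

config_names = sorted(infos)  # → ['iwslt2015-en-vi', 'iwslt2015-vi-en']
train_size = infos["iwslt2015-en-vi"]["splits"]["train"]["num_examples"]
```

Both configs record identical split sizes; they differ only in which language is the supervised source.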
dummy/iwslt2015-en-vi/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b1bace4f0281a8838fd6e91c7975ebd446060f2c484f391bd09aabb7e35c6fb
+ size 3462
dummy/iwslt2015-vi-en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:877ddccb95c9e07e16ddd05c45369fde7a56168e8e1957a7b4d87ff9b1770eb0
+ size 3462
mt_eng_vietnamese.py ADDED
@@ -0,0 +1,127 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ from __future__ import absolute_import, division, print_function
+
+ import collections
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ Preprocessed Dataset from IWSLT'15 English-Vietnamese machine translation: English-Vietnamese.
+ """
+
+ _CITATION = """\
+ @inproceedings{Luong-Manning:iwslt15,
+         Address = {Da Nang, Vietnam},
+         Author = {Luong, Minh-Thang and Manning, Christopher D.},
+         Booktitle = {International Workshop on Spoken Language Translation},
+         Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},
+         Year = {2015}}
+ """
+
+ _DATA_URL = "https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/{}.{}"
+
+ # Tuple that describes a single pair of files with matching translations.
+ # language_to_file is the map from language (2-letter string, e.g. 'en')
+ # to the file path in the extracted directory.
+ TranslateData = collections.namedtuple("TranslateData", ["url", "language_to_file"])
+
+
+ class MT_Eng_ViConfig(datasets.BuilderConfig):
+     """BuilderConfig for MT_Eng_Vietnamese."""
+
+     def __init__(self, language_pair=(None, None), **kwargs):
+         """BuilderConfig for MT_Eng_Vi.
+         Args:
+             language_pair: pair of languages that will be used for translation. Should
+                 contain 2-letter coded strings. The first will be used as source and the
+                 second as target in supervised mode. For example: ("vi", "en").
+             **kwargs: keyword arguments forwarded to super.
+         """
+
+         description = "Translation dataset from %s to %s" % (language_pair[0], language_pair[1])
+         super(MT_Eng_ViConfig, self).__init__(
+             description=description,
+             version=datasets.Version("1.0.0"),
+             **kwargs,
+         )
+         self.language_pair = language_pair
+
+
+ class MTEngVietnamese(datasets.GeneratorBasedBuilder):
+     """English-Vietnamese machine translation dataset from IWSLT2015."""
+
+     BUILDER_CONFIGS = [
+         MT_Eng_ViConfig(
+             name="iwslt2015-vi-en",
+             language_pair=("vi", "en"),
+         ),
+         MT_Eng_ViConfig(
+             name="iwslt2015-en-vi",
+             language_pair=("en", "vi"),
+         ),
+     ]
+     BUILDER_CONFIG_CLASS = MT_Eng_ViConfig
+
+     def _info(self):
+         source, target = self.config.language_pair
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {"translation": datasets.features.Translation(languages=self.config.language_pair)}
+             ),
+             supervised_keys=(source, target),
+             homepage="https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         source, target = self.config.language_pair
+
+         files = {}
+         for split in ("train", "dev", "test"):
+             if split == "dev":
+                 dl_dir_src = dl_manager.download_and_extract(_DATA_URL.format("tst2012", source))
+                 dl_dir_tar = dl_manager.download_and_extract(_DATA_URL.format("tst2012", target))
+             if split == "test":
+                 dl_dir_src = dl_manager.download_and_extract(_DATA_URL.format("tst2013", source))
+                 dl_dir_tar = dl_manager.download_and_extract(_DATA_URL.format("tst2013", target))
+             if split == "train":
+                 dl_dir_src = dl_manager.download_and_extract(_DATA_URL.format(split, source))
+                 dl_dir_tar = dl_manager.download_and_extract(_DATA_URL.format(split, target))
+
+             files[split] = {"source_file": dl_dir_src, "target_file": dl_dir_tar}
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs=files["train"]),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs=files["dev"]),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs=files["test"]),
+         ]
+
+     def _generate_examples(self, source_file, target_file):
+         """This function returns the examples in the raw (text) form."""
+         with open(source_file, encoding="utf-8") as f:
+             source_sentences = f.read().split("\n")
+         with open(target_file, encoding="utf-8") as f:
+             target_sentences = f.read().split("\n")
+
+         source, target = self.config.language_pair
+         for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
+             result = {"translation": {source: l1, target: l2}}
+             # Make sure that both translations are non-empty.
+             if l1 and l2:
+                 yield idx, result
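The core of `_generate_examples` is line-wise pairing of the two parallel files. A runnable sketch of that logic on in-memory strings instead of the downloaded files (the sentences below are made up for illustration):

```python
# Sketch of the pairing logic in _generate_examples, using in-memory strings
# in place of the downloaded source/target files.
source_text = "Hello .\nHow are you ?\n"
target_text = "Xin chào .\nBạn khỏe không ?\n"

source_sentences = source_text.split("\n")
target_sentences = target_text.split("\n")

examples = []
for idx, (en, vi) in enumerate(zip(source_sentences, target_sentences)):
    # split("\n") leaves a trailing empty string; skip empty pairs,
    # mirroring the non-empty check in the builder above.
    if en and vi:
        examples.append((idx, {"translation": {"en": en, "vi": vi}}))
```

Because `zip` stops at the shorter file, a misaligned pair of files would silently truncate rather than error; the parallel corpora here are expected to have equal line counts.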