Multilinguality: translation
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: expert-generated, found
Source Datasets: original
License: cc-by-nc-4.0
albertvillanova (HF staff) committed

Commit 41b6f38 (1 parent: eb09ed7)

Add validation and test splits (#2)


- added details for downloading the dev and test sets (58e2235ff5d3b3eb35349affefa46ffdedcda2f1)
- Fix bug in script (883841d3a6af249c43ab0d93a0d601c8d12b68cb)
- Fix dataset_info metadata (cde229ddd27c8d336b6d70e409bfa3bf3bfb7647)
- Revert update of JSON metadata (db96be98c0d2bc8c4ded26262c50820ec2ea7654)
- Update dataset card (7c71f7f42df2ba2b81ed226d20c9778985090b73)

Files changed (2):
  1. README.md +46 -33
  2. menyo20k_mt.py +12 -5
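
With this change the loader exposes `train`, `validation`, and `test` splits instead of only `train`. A minimal usage sketch (assuming the dataset is loaded from the Hugging Face Hub under the id `menyo20k_mt` with a recent release of the `datasets` library):

```python
# Minimal sketch: load MENYO-20k after this commit and inspect the splits.
# Assumes the Hub id "menyo20k_mt" and a recent `datasets` release.
from datasets import load_dataset

dataset = load_dataset("menyo20k_mt")

# Row counts expected from the updated dataset_info metadata:
# train: 10070, validation: 3397, test: 6633
for split_name, split in dataset.items():
    print(split_name, split.num_rows)

# Each example is a translation pair keyed by language code.
example = dataset["train"][0]
print(example["translation"]["en"])
print(example["translation"]["yo"])
```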
README.md CHANGED
@@ -8,7 +8,7 @@ language:
 - en
 - yo
 license:
-- cc-by-4.0
+- cc-by-nc-4.0
 multilinguality:
 - translation
 size_categories:
@@ -18,7 +18,7 @@ source_datasets:
 task_categories:
 - translation
 task_ids: []
-paperswithcode_id: null
+paperswithcode_id: menyo-20k
 pretty_name: MENYO-20k
 dataset_info:
   features:
@@ -31,10 +31,16 @@ dataset_info:
   config_name: menyo20k_mt
   splits:
   - name: train
-    num_bytes: 2551273
+    num_bytes: 2551345
     num_examples: 10070
-  download_size: 2490852
-  dataset_size: 2551273
+  - name: validation
+    num_bytes: 870011
+    num_examples: 3397
+  - name: test
+    num_bytes: 1905432
+    num_examples: 6633
+  download_size: 5206234
+  dataset_size: 5326788
 ---
 
 # Dataset Card for MENYO-20k
@@ -65,15 +71,15 @@ dataset_info:
 
 ## Dataset Description
 
-- **Homepage:** [Homepage for Menyo-20k](https://zenodo.org/record/4297448#.X81G7s0zZPY)
-- **Repository:** [Github Repo](https://github.com/dadelani/menyo-20k_MT)
-- **Paper:**
+- **Homepage:**
+- **Repository:** https://github.com/uds-lsv/menyo-20k_MT/
+- **Paper:** [The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation](https://arxiv.org/abs/2103.08647)
 - **Leaderboard:**
 - **Point of Contact:**
 
 ### Dataset Summary
 
-MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain)
+MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain).
 
 ### Supported Tasks and Leaderboards
 
@@ -81,32 +87,32 @@ MENYO-20k is a multi-domain parallel dataset with texts obtained from news artic
 
 ### Languages
 
-Languages are English and YOruba
+Languages are English and Yoruba.
 
 ## Dataset Structure
 
 ### Data Instances
 
-The data consists of tab seperated entries
+An instance example:
 
 ```
-{'translation':
+{'translation':
  {'en': 'Unit 1: What is Creative Commons?',
  'yo': 'Ìdá 1: Kín ni Creative Commons?'
  }
 }
-
 ```
 
 ### Data Fields
 
-- `en`: English sentence
-- `yo`: Yoruba sentence
+- `translation`:
+  - `en`: English sentence.
+  - `yo`: Yoruba sentence.
 
 
 ### Data Splits
 
-Only training dataset available
+Training, validation and test splits are available.
 
 ## Dataset Creation
 
@@ -160,27 +166,34 @@ Only training dataset available
 
 ### Licensing Information
 
-The dataset is open but for non-commercial use because some of the data sources like Ted talks and JW news requires permission for commercial use.
+The dataset is open but for non-commercial use because some data sources like Ted talks and JW news require permission for commercial use.
+
+The dataset is licensed under Creative Commons [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) License: https://github.com/uds-lsv/menyo-20k_MT/blob/master/LICENSE
 
 ### Citation Information
+
+If you use this dataset, please cite this paper:
 ```
-@dataset{david_ifeoluwa_adelani_2020_4297448,
-  author = {David Ifeoluwa Adelani and
-            Jesujoba O. Alabi and
-            Damilola Adebonojo and
-            Adesina Ayeni and
-            Mofe Adeyemi and
-            Ayodele Awokoya},
-  title = {{MENYO-20k: A Multi-domain English - Yorùbá Corpus
-          for Machine Translation}},
-  month = nov,
-  year = 2020,
-  publisher = {Zenodo},
-  version = {1.0},
-  doi = {10.5281/zenodo.4297448},
-  url = {https://doi.org/10.5281/zenodo.4297448}
+@inproceedings{adelani-etal-2021-effect,
+    title = "The Effect of Domain and Diacritics in {Y}oruba{--}{E}nglish Neural Machine Translation",
+    author = "Adelani, David  and
+      Ruiter, Dana  and
+      Alabi, Jesujoba  and
+      Adebonojo, Damilola  and
+      Ayeni, Adesina  and
+      Adeyemi, Mofe  and
+      Awokoya, Ayodele Esther  and
+      Espa{\~n}a-Bonet, Cristina",
+    booktitle = "Proceedings of the 18th Biennial Machine Translation Summit (Volume 1: Research Track)",
+    month = aug,
+    year = "2021",
+    address = "Virtual",
+    publisher = "Association for Machine Translation in the Americas",
+    url = "https://aclanthology.org/2021.mtsummit-research.6",
+    pages = "61--75",
+    abstract = "Massively multilingual machine translation (MT) has shown impressive capabilities, including zero and few-shot translation between low-resource language pairs. However, these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to lack of standardized evaluation datasets. In this paper, we present MENYO-20k, the first multi-domain parallel corpus with an especially curated orthography for Yoruba{--}English with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models both for the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data with uncertain quality, we also analyze the effect of diacritics, a major characteristic of Yoruba, in the training data. We investigate how and when this training condition affects the final quality of a translation and its understandability. Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$) when translating to Yoruba, setting a high quality benchmark for future research.",
 }
 ```
 ### Contributions
 
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
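
The card's new Data Splits statement is backed by three TSV files in the upstream GitHub repository, referenced by the loader script changes below. As a quick sanity check, a split can also be read straight from its raw URL; a sketch assuming `pandas` is installed and that the TSVs are two-column files with a header row:

```python
import pandas as pd

# URL taken from the _URLS dict added to menyo20k_mt.py in this commit.
DEV_URL = "https://raw.githubusercontent.com/uds-lsv/menyo-20k_MT/master/data/dev.tsv"

# Assumption: tab-separated, one header row naming the English and Yoruba columns.
dev = pd.read_csv(DEV_URL, sep="\t")
print(len(dev))               # expected 3397 rows per the updated dataset_info
print(dev.columns.tolist())   # the two language columns
```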
menyo20k_mt.py CHANGED
@@ -55,13 +55,19 @@ _LICENSE = "For non-commercial use because some of the data sources like Ted tal
 
 # The HuggingFace dataset library don't host the datasets but only point to the original files
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-_URL = "https://raw.githubusercontent.com/uds-lsv/menyo-20k_MT/master/data/train.tsv"
+_URLS = {
+    "train": "https://raw.githubusercontent.com/uds-lsv/menyo-20k_MT/master/data/train.tsv",
+    "dev": "https://raw.githubusercontent.com/uds-lsv/menyo-20k_MT/master/data/dev.tsv",
+    "test": "https://raw.githubusercontent.com/uds-lsv/menyo-20k_MT/master/data/test.tsv",
+}
+
+
 
 
 class Menyo20kMt(datasets.GeneratorBasedBuilder):
     """MENYO-20k: A Multi-domain English - Yorùbá Corpus for Machine Translations"""
 
-    VERSION = datasets.Version("1.0.0")
+    VERSION = datasets.Version("1.1.0")
 
     BUILDER_CONFIGS = [
         datasets.BuilderConfig(
@@ -89,10 +95,11 @@ class Menyo20kMt(datasets.GeneratorBasedBuilder):
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        train_path = dl_manager.download_and_extract(_URL)
-
+        data_files = dl_manager.download(_URLS)
         return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
+            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["dev"]}),
+            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
         ]
 
     def _generate_examples(self, filepath):
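
The hunk ends at `_generate_examples`, whose body is not part of this diff. For context, a typical implementation for these two-column TSV files looks roughly like the sketch below; it is illustrative only, and the header-row and column-order handling are assumptions rather than a copy of the actual code in `menyo20k_mt.py`:

```python
import csv


def _generate_examples(self, filepath):
    """Yields (key, example) pairs from a tab-separated English/Yoruba file."""
    with open(filepath, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        next(reader)  # assumption: each TSV starts with a header row
        for idx, row in enumerate(reader):
            # Assumption: column 0 is English, column 1 is Yoruba.
            yield idx, {"translation": {"en": row[0], "yo": row[1]}}
```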