system HF staff committed on
Commit
212418f
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,166 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ - yo
+ licenses:
+ - cc-by-nc-4.0
+ multilinguality:
+ - translation
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - machine-translation
+ ---
+
+ # Dataset Card for MENYO-20k
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Homepage for MENYO-20k](https://zenodo.org/record/4297448#.X81G7s0zZPY)
+ - **Repository:** [GitHub Repo](https://github.com/dadelani/menyo-20k_MT)
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and from professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 TED-talk transcript domain). The development and test sets are available upon request.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The languages are English (`en`) and Yorùbá (`yo`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The data consists of tab-separated entries, for example:
+
+ ```
+ {'translation':
+   {'en': 'Unit 1: What is Creative Commons?',
+    'yo': 'Ìdá 1: Kín ni Creative Commons?'
+   }
+ }
+ ```
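+
+ For a quick look at the data, the dataset can be loaded with the `datasets` library (a minimal sketch; assumes the library is installed and the dataset is available under the Hub name `menyo20k_mt`, the name used by the loading script):
+
+ ```
+ from datasets import load_dataset
+
+ # Load the training split (the only split shipped with this loader).
+ dataset = load_dataset("menyo20k_mt", split="train")
+
+ # Each example is a dict with a single `translation` field.
+ print(dataset[0]["translation"]["en"])  # English side of the first pair
+ print(dataset[0]["translation"]["yo"])  # Yorùbá side of the first pair
+ ```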
+
+ ### Data Fields
+
+ - `en`: English sentence
+ - `yo`: Yorùbá sentence
+
+ ### Data Splits
+
+ Only the training split is publicly available; the development and test sets are available upon request.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The dataset is open for non-commercial use only (CC BY-NC 4.0), because some of the data sources, such as TED talks and JW News, require permission for commercial use.
+
+ ### Citation Information
+ ```
+ @dataset{david_ifeoluwa_adelani_2020_4297448,
+   author    = {David Ifeoluwa Adelani and
+                Jesujoba O. Alabi and
+                Damilola Adebonojo and
+                Adesina Ayeni and
+                Mofe Adeyemi and
+                Ayodele Awokoya},
+   title     = {{MENYO-20k: A Multi-domain English - Yorùbá Corpus
+                for Machine Translation}},
+   month     = nov,
+   year      = 2020,
+   publisher = {Zenodo},
+   version   = {1.0},
+   doi       = {10.5281/zenodo.4297448},
+   url       = {https://doi.org/10.5281/zenodo.4297448}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"menyo20k_mt": {"description": "MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 TED talks speech transcript domain). The development and test sets are available upon request.\n", "citation": "@dataset{david_ifeoluwa_adelani_2020_4297448,\n  author = {David Ifeoluwa Adelani and\n  Jesujoba O. Alabi and\n  Damilola Adebonojo and\n  Adesina Ayeni and\n  Mofe Adeyemi and\n  Ayodele Awokoya},\n  title = {MENYO-20k: A Multi-domain English - Yor\u00f9b\u00e1 Corpus\n  for Machine Translation},\n  month = nov,\n  year = 2020,\n  publisher = {Zenodo},\n  version = {1.0},\n  doi = {10.5281/zenodo.4297448},\n  url = {https://doi.org/10.5281/zenodo.4297448}\n}\n", "homepage": "https://zenodo.org/record/4297448#.X81G7s0zZPY", "license": "For non-commercial use because some of the data sources like TED talks and JW News require permission for commercial use.", "features": {"translation": {"languages": ["en", "yo"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "menyo20k_mt", "config_name": "menyo20k_mt", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2551273, "num_examples": 10070, "dataset_name": "menyo20k_mt"}}, "download_checksums": {"https://github.com/dadelani/menyo-20k_MT/raw/main/data/train.tsv": {"num_bytes": 2490852, "checksum": "3c152119d4dc1fba12ee9424f1e7fd11648acfa8e2ea7f6464a37a18e69d9a06"}}, "download_size": 2490852, "post_processing_size": null, "dataset_size": 2551273, "size_in_bytes": 5042125}}
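The `download_checksums` entry above records the raw training TSV. For a quick sanity check outside the `datasets` library, the file can be read directly (a minimal sketch; assumes `pandas` is installed, and takes the `English`/`Yoruba` column names from the loading script below):

```
import csv

import pandas as pd

# Direct link to the training TSV, as recorded in dataset_infos.json.
URL = "https://github.com/dadelani/menyo-20k_MT/raw/main/data/train.tsv"

# The loading script disables quoting, so mirror that here to avoid
# mangling sentences that contain quote characters.
df = pd.read_csv(URL, sep="\t", quoting=csv.QUOTE_NONE)

print(df.shape)          # expected: (10070, 2)
print(list(df.columns))  # expected: ['English', 'Yoruba']
```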
dummy/menyo20k_mt/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9ef9bfa84a47a63b7581ea8594b5fbe58dbe49db50025beec1a3d6a78e02ecf
+ size 869
menyo20k_mt.py ADDED
@@ -0,0 +1,104 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """MENYO-20k: A Multi-domain English - Yorùbá Corpus for Machine Translation"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @dataset{david_ifeoluwa_adelani_2020_4297448,
+   author    = {David Ifeoluwa Adelani and
+                Jesujoba O. Alabi and
+                Damilola Adebonojo and
+                Adesina Ayeni and
+                Mofe Adeyemi and
+                Ayodele Awokoya},
+   title     = {MENYO-20k: A Multi-domain English - Yorùbá Corpus
+                for Machine Translation},
+   month     = nov,
+   year      = 2020,
+   publisher = {Zenodo},
+   version   = {1.0},
+   doi       = {10.5281/zenodo.4297448},
+   url       = {https://doi.org/10.5281/zenodo.4297448}
+ }
+ """
+
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 TED talks speech transcript domain). The development and test sets are available upon request.
+ """
+
+
+ _HOMEPAGE = "https://zenodo.org/record/4297448#.X81G7s0zZPY"
+
+
+ _LICENSE = "For non-commercial use because some of the data sources like TED talks and JW News require permission for commercial use."
+
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _URL = "https://github.com/dadelani/menyo-20k_MT/raw/main/data/train.tsv"
+
+
+ class Menyo20kMt(datasets.GeneratorBasedBuilder):
+     """MENYO-20k: A Multi-domain English - Yorùbá Corpus for Machine Translation"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="menyo20k_mt",
+             version=VERSION,
+             description="MENYO-20k: A Multi-domain English - Yorùbá Corpus for Machine Translation",
+         )
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=datasets.Features({"translation": datasets.features.Translation(languages=("en", "yo"))}),
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         train_path = dl_manager.download_and_extract(_URL)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields (id, example) pairs from the tab-separated training file."""
+         with open(filepath, encoding="utf-8") as f:
+             # The TSV has a header row with `English` and `Yoruba` columns;
+             # quoting is disabled because sentences may contain quote characters.
+             reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
+             for idx, row in enumerate(reader):
+                 result = {"translation": {"en": row["English"], "yo": row["Yoruba"]}}
+                 yield idx, result
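
To sanity-check the loading script locally before committing (a minimal sketch; assumes this file is saved as `menyo20k_mt.py` in the current directory and the `datasets` library is installed):

```
from datasets import load_dataset

# Pointing load_dataset at the local script exercises _split_generators
# and _generate_examples end to end, downloading the real train.tsv.
dataset = load_dataset("./menyo20k_mt.py", split="train")

assert dataset.num_rows == 10070  # matches the split size recorded in dataset_infos.json
print(dataset[0])                 # {'translation': {'en': ..., 'yo': ...}}
```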