yhavinga committed on
Commit 03ab1db
1 Parent(s): 65b3ea3

Add script and dataset card

Files changed (3)
  1. README.md +223 -0
  2. ccmatrix.py +146 -0
  3. test_ccmatrix.py +108 -0
README.md ADDED
@@ -0,0 +1,223 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- af
- am
- ar
- ast
- az
- be
- bg
- bn
- br
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- ha
- he
- hi
- hr
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- ko
- la
- lb
- lg
- lt
- lv
- mg
- mk
- ml
- mr
- ms
- my
- ne
- nl
- 'no'
- oc
- om
- or
- pl
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- tl
- tr
- tt
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
- se
licenses:
- unknown
multilinguality:
- multilingual
size_categories:
  en-nl:
  - n<110M
  en-af:
  - n<9M
  en-lt:
  - n<24M
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
paperswithcode_id: ccmatrix
pretty_name: CCMatrixV1
---

# Dataset Card for CCMatrix v1

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **Homepage:** https://opus.nlpl.eu/CCMatrix.php
- **Repository:** None
- **Paper:** https://arxiv.org/abs/1911.04944

### Dataset Summary

This corpus has been extracted from web crawls using the margin-based bitext mining techniques described at https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix.

* 90 languages, 1,197 bitexts
* total number of files: 90
* total number of tokens: 112.14G
* total number of sentence fragments: 7.37G

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

To load a language pair that is not part of a predefined config, specify the two language codes as the `lang1` and `lang2` arguments.
You can find the valid pairs on the homepage listed in the Dataset Description: https://opus.nlpl.eu/CCMatrix.php
For example:

```python
dataset = load_dataset("yhavinga/ccmatrix", lang1="en", lang2="nl")
```
+
170
+ ## Dataset Structure
171
+ ### Data Instances
172
+ For example:
173
+
174
+ ```json
175
+ {
176
+ "id": 1,
177
+ "score": 1.2498379,
178
+ "translation": {
179
+ "nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”",
180
+ "en": "And we should call every truth false which was not accompanied by at least one laugh.”"
181
+ }
182
+ }
183
+ ```
184
+
185
+ ### Data Fields
186
+ Each example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and
187
+ language 2 texts.
188
+
189
+ ### Data Splits
190
+ Only a `train` split is provided.
191
+
192
+ ## Dataset Creation
193
+ ### Curation Rationale
194
+ [More Information Needed]
195
+ ### Source Data
196
+ [More Information Needed]
197
+ #### Initial Data Collection and Normalization
198
+ [More Information Needed]
199
+ #### Who are the source language producers?
200
+ [More Information Needed]
201
+ ### Annotations
202
+ [More Information Needed]
203
+ #### Annotation process
204
+ [More Information Needed]
205
+ #### Who are the annotators?
206
+ [More Information Needed]
207
+ ### Personal and Sensitive Information
208
+ [More Information Needed]
209
+ ## Considerations for Using the Data
210
+ ### Social Impact of Dataset
211
+ [More Information Needed]
212
+ ### Discussion of Biases
213
+ [More Information Needed]
214
+ ### Other Known Limitations
215
+ [More Information Needed]
216
+ ## Additional Information
217
+ ### Dataset Curators
218
+ [More Information Needed]
219
+ ### Licensing Information
220
+ [More Information Needed]
221
+ ### Citation Information
222
+ [More Information Needed]
223
+ ### Contributions
ccmatrix.py ADDED
@@ -0,0 +1,146 @@
# coding=utf-8
# Copyright 2020 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
import os

import datasets

_DESCRIPTION = """\
CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB

We show that margin-based bitext mining in LASER's multilingual sentence space can be applied to
monolingual corpora of billions of sentences to produce high quality aligned translation data.
We use thirty-two snapshots of a curated common crawl corpus [1] totaling 69 billion unique sentences.
Using one unified approach for 80 languages, we were able to mine 10.8 billion parallel sentences,
out of which only 2.9 billion are aligned with English.

IMPORTANT: Please cite reference [2][3] if you use this data.

[1] Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin
and Edouard Grave, CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data

[2] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin,
CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB

[3] Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines,
Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky,
Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.
Beyond English-Centric Multilingual Machine Translation

90 languages, 1,197 bitexts
total number of files: 90
total number of tokens: 112.14G
total number of sentence fragments: 7.37G
"""
_HOMEPAGE_URL = "https://opus.nlpl.eu/CCMatrix.php"
_CITATION = """\
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin and Edouard Grave, CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data
"""

_VERSION = "1.0.0"
_FILE = "CCMatrix.{}.{}"  # e.g. CCMatrix.en-nl.nl
_DOWNLOAD_URL = "https://opus.nlpl.eu/download.php?f=CCMatrix/v1/moses/{}.txt.zip"

_LANGUAGES = ["nl", "en", "de", "fr", "es", "lt", "it"]

_LANGUAGE_PAIRS = [(l1, l2) for l1 in _LANGUAGES for l2 in _LANGUAGES if l1 != l2]
# Optional row caps for the size-suffixed configs; "" means the full pair.
_SIZES = ["", "1000_000", "25_000_000"]

_CONFIGS = [(l1, l2, size) for (l1, l2) in _LANGUAGE_PAIRS for size in _SIZES]


class CCMatrixConfig(datasets.BuilderConfig):
    def __init__(self, *args, lang1=None, lang2=None, size=None, **kwargs):
        super().__init__(
            *args,
            name=f"{lang1}-{lang2}{'-' + size if size else ''}",
            **kwargs,
        )
        self.lang1 = lang1
        self.lang2 = lang2
        self.size = size
        # OPUS archives are named with the pair in alphabetical order.
        x, y = (lang1, lang2) if lang1 < lang2 else (lang2, lang1)
        self.download_pair = f"{x}-{y}"


class CCMatrix(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        CCMatrixConfig(
            lang1=lang1,
            lang2=lang2,
            size=size,
            description=f"Translating {lang1} to {lang2} or vice versa{' ' + size + ' rows' if size else ''}",
            version=datasets.Version(_VERSION),
        )
        for lang1, lang2, size in _CONFIGS
    ]
    BUILDER_CONFIG_CLASS = CCMatrixConfig

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "score": datasets.Value("float"),
                    "translation": datasets.Translation(
                        languages=(self.config.lang1, self.config.lang2)
                    ),
                },
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        download_url = _DOWNLOAD_URL.format(self.config.download_pair)
        path = dl_manager.download_and_extract(download_url)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"datapath": path},
            )
        ]

    def _generate_examples(self, datapath):
        l1_path = os.path.join(
            datapath, _FILE.format(self.config.download_pair, self.config.lang1)
        )
        l2_path = os.path.join(
            datapath, _FILE.format(self.config.download_pair, self.config.lang2)
        )
        scores_path = os.path.join(
            datapath, _FILE.format(self.config.download_pair, "scores")
        )
        # The three files are line-aligned: line i holds sentence i in each
        # language plus its mining score.
        with open(l1_path, encoding="utf-8") as f1, open(
            l2_path, encoding="utf-8"
        ) as f2, open(scores_path, encoding="utf-8") as f3:
            for sentence_counter, (x, y, score) in enumerate(zip(f1, f2, f3)):
                # Size-suffixed configs stop after int(size) rows.
                if self.config.size and sentence_counter == int(self.config.size):
                    return
                result = (
                    sentence_counter,
                    {
                        "id": sentence_counter,
                        "score": score,
                        "translation": {
                            self.config.lang1: x.strip(),
                            self.config.lang2: y.strip(),
                        },
                    },
                )
                yield result
test_ccmatrix.py ADDED
from datasets import load_dataset


def test_streaming_dataset():
    datasets = load_dataset("./ccmatrix.py", lang1="nl", lang2="en", streaming=True)
    assert list(datasets.keys()) == ["train"]

    train_ds = datasets["train"]

    i = iter(train_ds)
    e = next(i)

    assert e == {
        "id": 0,
        "score": 1.2499677,
        "translation": {
            "nl": "Zij kwamen uit alle delen van Egypte, evenals zij op de dag van Zijn komst zullen doen.",
            "en": "They come from all parts of Egypt, just like they will at the day of His coming.",
        },
    }

    e = next(i)

    assert list(e.keys()) == ["id", "score", "translation"]

    assert e == {
        "id": 1,
        "score": 1.2498379,
        "translation": {
            "nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”",
            "en": 'And we should call every truth false which was not accompanied by at least one laugh."',
        },
    }


def test_streaming_dataset_2():
    datasets = load_dataset("./ccmatrix.py", "nl-en", streaming=True)
    assert list(datasets.keys()) == ["train"]

    train_ds = datasets["train"]

    i = iter(train_ds)
    e = next(i)

    assert e == {
        "id": 0,
        "score": 1.2499677,
        "translation": {
            "nl": "Zij kwamen uit alle delen van Egypte, evenals zij op de dag van Zijn komst zullen doen.",
            "en": "They come from all parts of Egypt, just like they will at the day of His coming.",
        },
    }

    e = next(i)

    assert list(e.keys()) == ["id", "score", "translation"]

    assert e == {
        "id": 1,
        "score": 1.2498379,
        "translation": {
            "nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”",
            "en": 'And we should call every truth false which was not accompanied by at least one laugh."',
        },
    }


def test_small_config():
    datasets = load_dataset("./ccmatrix.py", "nl-en-1000_000")
    assert list(datasets.keys()) == ["train"]

    train_ds = datasets["train"]
    assert len(train_ds) == 1000000

    i = iter(train_ds)
    e = next(i)

    assert e == {
        "id": 0,
        "score": 1.2499676942825317,
        "translation": {
            "nl": "Zij kwamen uit alle delen van Egypte, evenals zij op de dag van Zijn komst zullen doen.",
            "en": "They come from all parts of Egypt, just like they will at the day of His coming.",
        },
    }

    e = next(i)

    assert list(e.keys()) == ["id", "score", "translation"]

    assert e == {
        "id": 1,
        "score": 1.249837875366211,
        "translation": {
            "nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”",
            "en": 'And we should call every truth false which was not accompanied by at least one laugh."',
        },
    }


def test_medium_config():
    datasets = load_dataset("./ccmatrix.py", "nl-en-25_000_000", streaming=True)
    assert list(datasets.keys()) == ["train"]

    train_ds = datasets["train"]

    i = iter(train_ds)
    e = next(i)