parquet-converter committed
Commit ea7456b
Parent: 017c5c5

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,127 +0,0 @@
- ---
- language:
- - en
- - ru
- - ayr
- - bho
- - dyu
- - fur
- - wol
-
-
- annotations_creators:
- - found
- language_creators:
- - expert-generated
- license:
- - cc-by-sa-4.0
- multilinguality:
- - multilingual
- - translation
- pretty_name: nllb-multi-domain
- size_categories:
- - unknown
- source_datasets:
- - extended|flores
- task_categories:
- - conditional-text-generation
- task_ids:
- - machine-translation
- paperswithcode_id: flores
- ---
-
- # Dataset Card for NLLB Multi-Domain
-
- ## Table of Contents
-
- - [Dataset Card for NLLB Multi-Domain](#dataset-card-for-nllb-multi-domain)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Fields](#data-fields)
-     - [Data Splits](#data-splits)
-   - [Dataset Creation](#dataset-creation)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** [Flores](https://github.com/facebookresearch/flores/tree/main/nllb_md)
- - **Repository:** [GitHub](https://github.com/facebookresearch/flores/tree/main/nllb_md)
-
- ### Dataset Summary
-
- NLLB Multi Domain is a set of professionally translated sentences in the News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences.
-
- ### Supported Tasks and Leaderboards
- #### Multilingual Machine Translation
- Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). FLORES-200 is an extension of this benchmark.
-
- ### Languages
-
- Language | FLORES-200 code
- ---|---
- Central Aymara | ayr_Latn
- Bhojpuri | bho_Deva
- Dyula | dyu_Latn
- Friulian | fur_Latn
- Russian | rus_Cyrl
- Wolof | wol_Latn
-
- Use a hyphenated pairing to get two languages in one datapoint (e.g., "eng_Latn-rus_Cyrl" will provide sentences in the format below).
-
- ## Dataset Structure
- ### Data Instances
-
- See the Dataset Viewer.
-
- The text is provided as-is from the original dataset, without further preprocessing or tokenization.
-
- ### Data Fields
- - `id`: Row number for the data entry, starting at 1.
- - `sentence`: The full sentence in the specific language (suffixed as `sentence_lang`, e.g. `sentence_eng_Latn`, in paired configurations).
- - `domain`: The domain of the sentence.
-
- ### Dataset Creation
- Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation.
- ## Additional Information
- ### Dataset Curators
- See paper for details.
- ### Licensing Information
- Licensed under Creative Commons Attribution-ShareAlike 4.0 (CC-BY-SA-4.0). The license is available [here](https://creativecommons.org/licenses/by-sa/4.0/).
-
- ### Citation Information
- Please cite the authors if you use these corpora in your work:
-
- ```bibtex
- @article{nllb2022,
-     author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang},
-     title = {No Language Left Behind: Scaling Human-Centered Machine Translation},
-     year = {2022}
- }
- ```
-
- Please also cite prior work that this dataset builds on:
-
- ```bibtex
- @inproceedings{goyal2021flores,
-     title = {The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
-     author = {Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
-     year = {2021}
- }
- ```
-
- ```bibtex
- @article{guzman2019flores,
-     title = {Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English},
-     author = {Guzm\'{a}n, Francisco and Chen, Peng-Jen and Ott, Myle and Pino, Juan and Lample, Guillaume and Koehn, Philipp and Chaudhary, Vishrav and Ranzato, Marc'Aurelio},
-     journal = {arXiv preprint arXiv:1902.01382},
-     year = {2019}
- }
- ```
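Although the card above is deleted by this commit, its usage note carries over to the converted data: a hyphenated config name selects a language pair. A minimal loading sketch, with two stated assumptions: the repo id below is a placeholder, and the converted Parquet files are reachable by `datasets` (on the Hub, parquet-converter output typically lives on the `refs/convert/parquet` ref):

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual Hub path.
ds = load_dataset(
    "<namespace>/nllb-multi-domain",
    "eng_Latn-rus_Cyrl",  # hyphenated pairing, as described in the card
    split="valid",
)

# Fields per the card: id, domain, sentence_eng_Latn, sentence_rus_Cyrl.
print(ds[0])
```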
eng_Latn-ayr_Latn/nllb-multi-domain-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1140a457e7bada6f77c04edd97b21ddf69a63f427aa9526f955ad1d9a472f27
+ size 266980
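Each `ADDED` entry in this commit is a Git LFS pointer rather than the Parquet bytes themselves: `version` names the pointer spec, `oid` is the SHA-256 digest of the stored blob, and `size` is its byte count. A minimal sketch of parsing such a pointer (the helper name is illustrative, not part of any repo tooling):

```python
# Illustrative helper: parse the three "key value" lines of a Git LFS pointer.
def parse_lfs_pointer(text: str) -> dict:
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:b1140a457e7bada6f77c04edd97b21ddf69a63f427aa9526f955ad1d9a472f27\n"
    "size 266980\n"
)
assert pointer["size"] == "266980"
```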
eng_Latn-ayr_Latn/nllb-multi-domain-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aff28f84af0161adac242178fd15eae21d9d514c3fc9a3b04d2b309b94bb7a09
+ size 1062329
eng_Latn-ayr_Latn/nllb-multi-domain-valid.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c794d6d653601939e6b533b880f1be13d889d54640845b21f8f5a81be8290f20
+ size 232068
eng_Latn-bho_Deva/nllb-multi-domain-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f399279d5cd1290052aaed643af36f90d92b7dbba5f729f596350d8d8e8c2954
+ size 335589
eng_Latn-bho_Deva/nllb-multi-domain-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:082555fbb98e8703201cc2fe053178f9bc351de5caa617167ef4fb7fc5912dd2
+ size 1343156
eng_Latn-bho_Deva/nllb-multi-domain-valid.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfc5430045f7829d0ed7392233dd40449e77f459a646270a11d38043945fd1
+ size 288888
eng_Latn-dyu_Latn/nllb-multi-domain-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccd816e3b50a558b3d11c7a917582b707f326f853e9e31b511065f1b556f0df0
+ size 265096
eng_Latn-dyu_Latn/nllb-multi-domain-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e48b9452b802f6e7cc40692a901429b33398f0b1445cb2c8391491467e1e5b2b
+ size 1044708
eng_Latn-dyu_Latn/nllb-multi-domain-valid.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:301c1e20edcd0b94ece9807634cd09bdbafae74152662b73de1ac1454ebc8ba3
+ size 231094
eng_Latn-fur_Latn/nllb-multi-domain-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c103d439545fe61aaec4707318577e2a6b5422edc053fc9698a406a515939e09
+ size 279897
eng_Latn-fur_Latn/nllb-multi-domain-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03e9d4316e0aee83ac2d9a40ccea76530adf38145f108903d0061126e6616c5d
+ size 1093352
eng_Latn-fur_Latn/nllb-multi-domain-valid.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e14b15c5be144a2f41326a33d2b8b22e902f0a8582d33bccd98f4bccf8bfa8e
+ size 232986
eng_Latn-rus_Cyrl/nllb-multi-domain-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4443ad4e2914c64dbd1f1e383332991b0d56bec1f84c04bdd031984f73b3edc6
+ size 336270
eng_Latn-rus_Cyrl/nllb-multi-domain-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:667748f5e41d681a0806c5d4ede983b90be0a5e4dd1b3164a8b7d90e32d5118a
+ size 1332584
eng_Latn-rus_Cyrl/nllb-multi-domain-valid.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:148f406abda822c73703c631da38b4482e6fc836a42a5f914b12e7352a6dbc57
+ size 292923
eng_Latn-wol_Latn/nllb-multi-domain-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b820020e77399b7663a78fc5decfb5588bd6a6ac1188a7da85266ecb1737fae3
+ size 270852
eng_Latn-wol_Latn/nllb-multi-domain-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7ba394c99e2cdaacf0a64ffa59245ddba8f62ca49c3f9e84ffcd4a6a533d5d5
+ size 1071161
eng_Latn-wol_Latn/nllb-multi-domain-valid.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4abaa92532d507ff9f729da2024a3a268f9ffb448566e065dd7b9878b629fbdf
+ size 236358
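Once the LFS blobs are fetched, each shard is an ordinary Parquet file and can be read directly. A minimal sketch using pandas (assumes a local checkout with LFS files pulled and a Parquet engine such as pyarrow installed):

```python
import pandas as pd

# Path mirrors this repository's layout; adjust to your local checkout.
df = pd.read_parquet("eng_Latn-wol_Latn/nllb-multi-domain-valid.parquet")

# Expected columns, per the features of the dataset script below:
# id, domain, sentence_eng_Latn, sentence_wol_Latn.
print(df.columns.tolist())
print(df.head())
```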
nllb-multi-domain.py DELETED
@@ -1,163 +0,0 @@
- # coding=utf-8
- """ No Language Left Behind Multi-Domain Evaluation Dataset
- """
-
- import os
- import sys
- import datasets
- from collections import defaultdict
- from pathlib import Path
- from typing import Union, List, Optional
-
-
- _CITATION = """
- @article{nllb2022,
- author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang},
- title = {No Language Left Behind: Scaling Human-Centered Machine Translation},
- year = {2022}
- }
- """
-
- _DESCRIPTION = """\
- NLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences.
- """
-
- _HOMEPAGE = "https://github.com/facebookresearch/flores"
-
- _LICENSE = "CC-BY-SA-4.0"
-
- _LANGUAGES = [
-     "ayr_Latn", "bho_Deva", "dyu_Latn", "fur_Latn", "rus_Cyrl", "wol_Latn"
- ]
-
- _URLS = {
-     "chat": "https://tinyurl.com/NLLBMDchat",
-     "news": "https://tinyurl.com/NLLBMDnews",
-     "health": "https://tinyurl.com/NLLBMDhealth"
- }
-
- _SPLITS = ["train", "valid", "test"]
-
- _DOMAINS = ["chat", "news", "health"]
-
- _SENTENCES_PATHS = {
-     f"eng_Latn-{lang}": {
-         domain: {
-             split: {
-                 lang: os.path.join("NLLB-MD", domain, f"{split}.eng_Latn-{lang}.{lang}"),
-                 "eng_Latn": os.path.join("NLLB-MD", domain, f"{split}.eng_Latn-{lang}.eng_Latn")
-             }
-             for split in _SPLITS
-         } for domain in _DOMAINS
-     } for lang in _LANGUAGES
- }
-
-
- from itertools import permutations
-
- def _pairings(iterable, r=2):
-     previous = tuple()
-     for p in permutations(sorted(iterable), r):
-         if p > previous:
-             previous = p
-             yield p
-
-
- class NLLBMultiDomainConfig(datasets.BuilderConfig):
-     """BuilderConfig for the NLLB Multi-Domain dataset."""
-     def __init__(self, lang: str, lang2: str = None, **kwargs):
-         """
-         Args:
-           **kwargs: keyword arguments forwarded to super.
-         """
-         super().__init__(version=datasets.Version("1.0.0"), **kwargs)
-         self.lang = lang
-         self.lang2 = lang2
-
-
- class NLLBMultiDomain(datasets.GeneratorBasedBuilder):
-     """NLLB-MD dataset."""
-
-     BUILDER_CONFIGS = [
-         NLLBMultiDomainConfig(
-             name=f"eng_Latn-{lang}",
-             description=f"NLLB-MD: {lang} subset.",
-             lang="eng_Latn",
-             lang2=lang
-         )
-         for lang in _LANGUAGES
-     ]
-
-     def _info(self):
-         features = {
-             "id": datasets.Value("int32"),
-             "domain": datasets.Value("string")
-         }
-         if self.config.name != "all" and "-" not in self.config.name:
-             features["sentence"] = datasets.Value("string")
-         elif "-" in self.config.name:
-             for lang in [self.config.lang, self.config.lang2]:
-                 features[f"sentence_{lang}"] = datasets.Value("string")
-         else:
-             for lang in _LANGUAGES:
-                 features[f"sentence_{lang}"] = datasets.Value("string")
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(features),
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         dl_dir = dl_manager.download_and_extract(_URLS)
-
-         def _get_sentence_paths(split):
-             if isinstance(self.config.lang, str) and isinstance(self.config.lang2, str):
-                 sentence_paths = [os.path.join(dl_dir[domain], _SENTENCES_PATHS[self.config.lang + "-" + self.config.lang2][domain][split][lang]) for lang in (self.config.lang, self.config.lang2) for domain in _DOMAINS]
-             else:
-                 raise ValueError("Please specify two languages.")
-             return sentence_paths
-
-         return [
-             datasets.SplitGenerator(
-                 name=split,
-                 gen_kwargs={
-                     "sentence_paths": _get_sentence_paths(split),
-                 }
-             ) for split in _SPLITS
-         ]
-
-     def _generate_examples(self, sentence_paths: Union[str, List[str]], langs: Optional[List[str]] = None):
-         """Yields examples as (key, example) tuples."""
-         if isinstance(sentence_paths, str):
-             with open(sentence_paths, "r") as sentences_file:
-                 for id_, sentence in enumerate(sentences_file):
-                     sentence = sentence.strip()
-                     yield id_, {
-                         "id": id_ + 1,
-                         "sentence": sentence,
-                     }
-         else:
-             sentences = defaultdict(dict)
-
-             langs_domains = [(lang, domain) for lang in (self.config.lang, self.config.lang2) for domain in _DOMAINS]
-
-             _idx = 0
-             for path, (lang, domain) in zip(sentence_paths, langs_domains):
-                 with open(path, "r") as sent_file:
-                     sentences[domain][lang] = [l.strip() for l in sent_file.readlines()]
-             for domain in _DOMAINS:
-                 for s1, s2 in zip(sentences[domain][self.config.lang], sentences[domain][self.config.lang2]):
-                     _idx += 1
-                     yield _idx, {
-                         "id": _idx,
-                         "domain": domain,
-                         f"sentence_{self.config.lang}": s1,
-                         f"sentence_{self.config.lang2}": s2
-                     }
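For clarity on what the deleted script was doing: `_generate_examples` zips the two per-language sentence files line by line within each domain, so row i of the English file pairs with row i of the translation. A self-contained sketch of that core step, with toy data standing in for the downloaded files:

```python
# Toy stand-ins for the per-domain, per-language sentence lists the script reads.
sentences = {
    "news": {
        "eng_Latn": ["Hello.", "Goodbye."],
        "rus_Cyrl": ["Привет.", "До свидания."],
    },
}

idx = 0
for domain, by_lang in sentences.items():
    # zip() pairs line i of the English file with line i of the other language.
    for s1, s2 in zip(by_lang["eng_Latn"], by_lang["rus_Cyrl"]):
        idx += 1
        print(idx, {
            "id": idx,
            "domain": domain,
            "sentence_eng_Latn": s1,
            "sentence_rus_Cyrl": s2,
        })
```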