Commit c5911bc
1 Parent(s): 21252b5
akshitab committed

citation update, readme update

Files changed (2)
  1. README.md +152 -3
  2. nllb.py +10 -2
README.md CHANGED
@@ -1,3 +1,152 @@
- ---
- license: odc-by
- ---

# Dataset Card for No Language Left Behind (NLLB-200)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset was created based on the [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs, mined with the stopes library using the LASER3 encoders (Heffernan et al., 2022).

#### How to use the data
There are two ways to access the data (see also the note on language-pair configurations below):
* Via the Hugging Face Python datasets library
```
from datasets import load_dataset
dataset = load_dataset("allenai/nllb")
```
* Clone the git repo
```
git lfs install
git clone https://huggingface.co/datasets/allenai/nllb
```

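Note that each configuration of this dataset corresponds to a single language pair, so in practice a pair name is usually passed to `load_dataset`. A minimal sketch, assuming `eng_Latn-fra_Latn` is one of the configured pairs (the authoritative list is defined in `nllb_lang_pairs.py` in this repository):

```
from datasets import load_dataset

# "eng_Latn-fra_Latn" is an example pair name; substitute any pair
# defined in nllb_lang_pairs.py.
dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn")
print(dataset)
```
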
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

The dataset contains gzipped, tab-delimited text files for each direction. Each text file contains lines with parallel sentences.

### Data Instances

[More Information Needed]

### Data Fields

Every instance for a language pair contains the following fields: 'translation' (containing the sentence pair), 'laser_score', 'source_sentence_lid', 'target_sentence_lid' (where 'lid' is the language identification probability), 'source_sentence_source', 'source_sentence_url', 'target_sentence_source', and 'target_sentence_url'.

* Sentence in the first language
* Sentence in the second language
* LASER score
* Language ID score for the first sentence
* Language ID score for the second sentence
* First sentence source (https://github.com/facebookresearch/LASER/tree/main/data/nllb200)
* First sentence URL if the source is crawl-data/\*; `_` otherwise
* Second sentence source
* Second sentence URL if the source is crawl-data/\*; `_` otherwise

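For illustration, a single record can be inspected like this (a sketch; the pair name is an example, and the comments describe the expected fields rather than actual values):

```
from datasets import load_dataset

# Example pair name; any pair listed in nllb_lang_pairs.py can be used.
dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn", split="train")

example = dataset[0]
print(example["translation"])           # sentence pair, keyed by language code
print(example["laser_score"])           # LASER mining score
print(example["source_sentence_lid"])   # language ID probability, source side
print(example["target_sentence_lid"])   # language ID probability, target side
print(example["source_sentence_source"], example["source_sentence_url"])
print(example["target_sentence_source"], example["target_sentence_url"])
```
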
### Data Splits

The data is not split. Given the noisy nature of the overall process, we recommend using the data only for training and using other datasets like [Flores-200](https://github.com/facebookresearch/flores) for evaluation.

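If a small held-out set is still wanted for sanity checks during training, one option (a sketch, not part of the released data) is to carve it off with the datasets library, keeping Flores-200 as the actual evaluation benchmark:

```
from datasets import load_dataset

# Example pair name; substitute any configured pair.
dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn", split="train")

# Hold out a tiny fraction for sanity checks only; report results on Flores-200.
splits = dataset.train_test_split(test_size=0.001, seed=42)
train_data, dev_data = splits["train"], splits["test"]
```
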
## Dataset Creation

### Curation Rationale

Data was filtered based on language identification, emoji-based filtering, and, for some high-resource languages, language-model-based filtering. For more details on data filtering, please refer to Section 5.2 of NLLB Team et al. (2022).

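As a rough illustration of score-based filtering on top of the released data (the thresholds below are made up for the example and are not the values used in the NLLB pipeline):

```
from datasets import load_dataset

dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn", split="train")

# Illustrative thresholds only; see Section 5.2 of the paper for the
# filtering actually applied during mining.
MIN_LASER_SCORE = 1.06
MIN_LID_PROB = 0.9

filtered = dataset.filter(
    lambda ex: ex["laser_score"] >= MIN_LASER_SCORE
    and ex["source_sentence_lid"] >= MIN_LID_PROB
    and ex["target_sentence_lid"] >= MIN_LID_PROB
)
print(len(dataset), "->", len(filtered))
```
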
### Source Data

#### Initial Data Collection and Normalization

The monolingual data comes from Common Crawl and ParaCrawl.

#### Who are the source language producers?

The source text was produced by the writers of the websites crawled by Common Crawl and ParaCrawl.

### Annotations

#### Annotation process

Parallel sentences in the monolingual data were identified using LASER3 encoders (Heffernan et al., 2022).

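At a high level, mining embeds sentences from both languages into a shared space and pairs sentences whose embeddings are close. The sketch below is purely illustrative: the embeddings are assumed to come from a multilingual encoder such as LASER3, and plain cosine similarity with a made-up threshold stands in for the margin-based scoring used in the actual stopes pipeline.

```
import numpy as np

def mine_pairs(src_embeddings, tgt_embeddings, threshold=0.8):
    # Pair each source sentence with its most similar target sentence,
    # keeping the pair only if the cosine similarity clears a threshold.
    src = src_embeddings / np.linalg.norm(src_embeddings, axis=1, keepdims=True)
    tgt = tgt_embeddings / np.linalg.norm(tgt_embeddings, axis=1, keepdims=True)
    sims = src @ tgt.T  # cosine similarity for every source/target pair

    pairs = []
    for i, row in enumerate(sims):
        j = int(row.argmax())
        if row[j] >= threshold:
            pairs.append((i, j, float(row[j])))
    return pairs

# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
print(mine_pairs(rng.normal(size=(5, 16)), rng.normal(size=(7, 16)), threshold=0.0))
```
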
#### Who are the annotators?

The data was not human annotated.

### Personal and Sensitive Information

The data in Common Crawl and ParaCrawl may contain personally identifiable information as well as sensitive or toxic content that was publicly shared on the Internet.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset provides training data for machine learning systems in many languages that have few NLP resources available.

### Discussion of Biases

Biases in the data have not been specifically studied. However, as the original source of the data is the World Wide Web, the data is likely to carry biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques: language identification may be less accurate for lower-resource languages, and data filtering may remove certain less natural utterances.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The data was not curated.

### Licensing Information

The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by the Internet Archive [Terms of Use](https://archive.org/about/terms.php) in respect of the content contained in the dataset.

### Citation Information

NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation, arXiv, 2022.

### Contributions

Thanks to [@akshitab](https://github.com/akshitab) for adding this dataset.

nllb.py CHANGED
@@ -18,14 +18,22 @@ import datasets
 import csv
 import json
 
- _CITATION = ""  # TODO
+ _CITATION = (
+     "@article{team2022NoLL,"
+     "title={No Language Left Behind: Scaling Human-Centered Machine Translation},"
+     r"author={Nllb team and Marta Ruiz Costa-juss{\`a} and James Cross and Onur cCelebi and Maha Elbayad and Kenneth Heafield and Kevin Heffernan and Elahe Kalbassi and Janice Lam and Daniel Licht and Jean Maillard and Anna Sun and Skyler Wang and Guillaume Wenzek and Alison Youngblood and Bapi Akula and Lo{\"i}c Barrault and Gabriel Mejia Gonzalez and Prangthip Hansanti and John Hoffman and Semarley Jarrett and Kaushik Ram Sadagopan and Dirk Rowe and Shannon L. Spruit and C. Tran and Pierre Andrews and Necip Fazil Ayan and Shruti Bhosale and Sergey Edunov and Angela Fan and Cynthia Gao and Vedanuj Goswami and Francisco Guzm'an and Philipp Koehn and Alexandre Mourachko and Christophe Ropers and Safiyyah Saleem and Holger Schwenk and Jeff Wang},"
+     "journal={ArXiv},"
+     "year={2022},"
+     "volume={abs/2207.04672}"
+     "}"
+ )
 
 _DESCRIPTION = ""  # TODO
 
 _HOMEPAGE = ""  # TODO
 
- _LICENSE = ""  # TODO
+ _LICENSE = "https://opendatacommons.org/licenses/by/1-0/"
 
 from .nllb_lang_pairs import LANG_PAIRS as _LANGUAGE_PAIRS