# Dataset Card for No Language Left Behind (NLLB-200)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [facebookresearch/fairseq (nllb branch)](https://github.com/facebookresearch/fairseq/tree/nllb)
- **Paper:** [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset was created based on [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs, mined using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022).

#### How to use the data
There are two ways to access the data:
* Via the Hugging Face Python `datasets` library (see also the streaming sketch after this list)
```python
from datasets import load_dataset

# A language-pair config is typically required to select a direction;
# "eng_Latn-fra_Latn" is an illustrative config name, not a confirmed one.
dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn")
```
* Clone the git repo
```bash
git lfs install
git clone https://huggingface.co/datasets/allenai/nllb
```
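
For large directions it can be convenient to stream the data rather than download an entire pair up front. A minimal sketch, again assuming an illustrative language-pair config name:

```python
from datasets import load_dataset

# Streaming sketch; "eng_Latn-fra_Latn" is an illustrative config name.
dataset = load_dataset(
    "allenai/nllb", "eng_Latn-fra_Latn", split="train", streaming=True
)
for example in dataset:
    print(example)  # inspect the first mined pair
    break
```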

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

The dataset contains gzipped, tab-delimited text files for each direction. Each text file contains lines with parallel sentences.
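
If working from a clone of the repository, the files can be read directly. A minimal sketch, assuming the parallel sentences are the first two tab-separated columns (the file name below is illustrative, not an actual path in the repo):

```python
import gzip

# Illustrative file name; actual files are one gzipped TSV per direction.
with gzip.open("eng_Latn-fra_Latn.gz", "rt", encoding="utf-8") as f:
    for line in f:
        columns = line.rstrip("\n").split("\t")
        source, target = columns[0], columns[1]  # assumed column order
        print(source, "|||", target)
        break
```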


### Data Instances

[More Information Needed]

### Data Fields

Every instance for a language pair contains the following fields: `translation` (the sentence pair), `laser_score`, `source_sentence_lid` and `target_sentence_lid` (where `lid` is the language-identification probability), `source_sentence_source`, `source_sentence_url`, `target_sentence_source`, and `target_sentence_url`.

* Sentence in first language
* Sentence in second language
* LASER score
* Language ID score for first sentence
* Language ID score for second sentence
* First sentence source (https://github.com/facebookresearch/LASER/tree/main/data/nllb200) 
* First sentence URL if the source is crawl-data/\*; _ otherwise
* Second sentence source
* Second sentence URL if the source is crawl-data/\*; _ otherwise
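
A minimal sketch of inspecting these fields on a loaded pair (the config name is illustrative, and the layout of `translation` as a dict keyed by language code is an assumption):

```python
from datasets import load_dataset

# "eng_Latn-fra_Latn" is an illustrative config name.
dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn", split="train")
example = dataset[0]

print(example["translation"])          # assumed: dict keyed by language code
print(example["laser_score"])          # LASER mining score
print(example["source_sentence_lid"])  # language-ID probability, source side
print(example["source_sentence_url"])  # URL for crawl-data/* sources, "_" otherwise
```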

### Data Splits

The data is not split. Given the noisy nature of the overall process, we recommend using the data only for training and using other datasets, such as [Flores-200](https://github.com/facebookresearch/flores), for evaluation.


## Dataset Creation

### Curation Rationale

Data was filtered based on language identification, emoji-based filtering, and, for some high-resource languages, language-model-based filtering. For more details on data filtering, please refer to Section 5.2 of NLLB Team et al. (2022).
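
As a rough illustration of score-based filtering on the released fields (the thresholds below are assumptions for the sketch, not the values used by the NLLB pipeline):

```python
from datasets import load_dataset

MIN_LASER_SCORE = 1.06  # hypothetical mining-score cutoff
MIN_LID_PROB = 0.9      # hypothetical language-ID confidence cutoff

# "eng_Latn-fra_Latn" is an illustrative config name.
dataset = load_dataset("allenai/nllb", "eng_Latn-fra_Latn", split="train")

filtered = dataset.filter(
    lambda ex: ex["laser_score"] >= MIN_LASER_SCORE
    and ex["source_sentence_lid"] >= MIN_LID_PROB
    and ex["target_sentence_lid"] >= MIN_LID_PROB
)
print(f"kept {len(filtered)} of {len(dataset)} pairs")
```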


### Source Data


#### Initial Data Collection and Normalization

The monolingual data is from Common Crawl and ParaCrawl. 

#### Who are the source language producers?

The source text was produced by the authors of the websites crawled by Common Crawl and ParaCrawl.

### Annotations

#### Annotation process

Parallel sentences in the monolingual data were identified using LASER3 encoders (Heffernan et al., 2022).
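
As a conceptual sketch of embedding-based mining (random vectors stand in for LASER3 sentence embeddings here, and the real pipeline uses margin-based scoring in stopes rather than plain cosine similarity):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random placeholders standing in for 1024-dimensional LASER sentence embeddings.
src_emb = rng.normal(size=(5, 1024))  # source-language sentences
tgt_emb = rng.normal(size=(8, 1024))  # target-language candidates

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

sims = cosine_sim(src_emb, tgt_emb)
best = sims.argmax(axis=1)  # nearest target candidate for each source sentence
for i, j in enumerate(best):
    print(f"source {i} -> target {j} (cos={sims[i, j]:.3f})")
```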

#### Who are the annotators?

The data was not human annotated.

### Personal and Sensitive Information

The data in Common Crawl and ParaCrawl may contain personally identifiable information as well as sensitive or toxic content that was publicly shared on the Internet.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset provides data for training machine learning systems for many low-resource languages with few NLP resources available.

### Discussion of Biases

Biases in the data have not been specifically studied; however, as the original source of the data is the World Wide Web, it is likely that the data exhibits biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data-filtering techniques: lower-resource languages may have lower identification accuracy, and filtering may remove certain less natural utterances.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The data was not curated.

### Licensing Information

The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the Internet Archive [Terms of Use](https://archive.org/about/terms.php) in respect of the content contained in the dataset.


### Citation Information

NLLB Team et al., *No Language Left Behind: Scaling Human-Centered Machine Translation*, arXiv:2207.04672, 2022.

### Contributions

Thanks to [@akshitab](https://github.com/akshitab) for adding this dataset.