Update README.md
README.md
CHANGED
@@ -35,7 +35,7 @@
 
 ### Dataset Summary
 
-This dataset was created based on [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022).
+This dataset was created based on [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022). The complete dataset is ~450GB.
 
 
 #### How to use the data
@@ -67,9 +67,9 @@ Language pairs can be found [here](https://huggingface.co/datasets/allenai/nllb/
 The dataset contains gzipped tab delimited text files for each direction. Each text file contains lines with parallel sentences.
 
 
-
+<!---### Data Instances
 
-[More Information Needed]
+[More Information Needed]--->
 
 ### Data Fields
 
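As context for the "How to use the data" heading touched in the first hunk: given the ~450GB total size noted in the new summary, streaming a single language-pair configuration is the lighter-weight way to work with the corpus. Below is a minimal sketch using the Hugging Face `datasets` library; the pair name `eng_Latn-fra_Latn` is an illustrative assumption, and the actual list of available pairs is on the dataset card.

```python
# Illustrative sketch, not taken from the README diff above.
# Streams one language-pair configuration so the full ~450GB corpus
# is not downloaded up front. The config name "eng_Latn-fra_Latn" is
# an assumed example; see the dataset card for the real list of pairs.
from itertools import islice

from datasets import load_dataset

pairs = load_dataset(
    "allenai/nllb",
    "eng_Latn-fra_Latn",  # assumed example pair
    split="train",
    streaming=True,
)

# Peek at the first few aligned sentence pairs.
for example in islice(pairs, 3):
    print(example)
```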
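The second hunk's description of the distribution format (gzipped, tab-delimited files with one parallel sentence pair per line) can also be read directly. This is a sketch under stated assumptions: the file name `eng_Latn-fra_Latn.gz` is hypothetical, and no column layout is assumed beyond tab-separated fields.

```python
# Illustrative sketch, not taken from the README diff above.
# Reads one gzipped, tab-delimited bitext file as described in the card.
# The file name below is hypothetical, and the column layout is not
# assumed beyond "tab-separated fields on each line".
import gzip

with gzip.open("eng_Latn-fra_Latn.gz", mode="rt", encoding="utf-8") as handle:
    for i, line in enumerate(handle):
        fields = line.rstrip("\n").split("\t")
        print(fields)  # one aligned sentence pair (plus any metadata columns)
        if i == 2:     # just peek at the first three lines
            break
```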