---
license: odc-by
task_categories:
- translation
language:
- en
- si
size_categories:
- 10K<n<100K
---
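### Usage
The card metadata above declares a CSV-formatted English–Sinhala parallel corpus usable with pandas. As a minimal sketch of what loading such a file might look like, the snippet below reads an in-memory CSV; the `en`/`si` column names are an assumption for illustration and may differ in the released files.

```python
import io

import pandas as pd

# Hypothetical two-column sentence-pair layout; the actual column
# names in the released CSV may differ.
sample = io.StringIO(
    "en,si\n"
    '"Hello, world.","හෙලෝ වර්ල්ඩ්"\n'
)

# read_csv handles the quoted field containing a comma correctly.
df = pd.read_csv(sample)
print(df.shape)  # (1, 2)
```

The same file could equally be loaded with `datasets.load_dataset("csv", data_files=...)` if you prefer the 🤗 Datasets API over pandas.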
### Licensing Information
The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by the respective Terms of Use and License of the original source.
### Citation Information
```
@inproceedings{ranathunga-etal-2024-quality,
    title = "Quality Does Matter: A Detailed Look at the Quality and Utility of Web-Mined Parallel Corpora",
    author = "Ranathunga, Surangika and
      De Silva, Nisansa and
      Menan, Velayuthan and
      Fernando, Aloka and
      Rathnayake, Charitha",
    editor = "Graham, Yvette and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.52",
    pages = "860--880",
    abstract = "We conducted a detailed analysis on the quality of web-mined corpora for two low-resource languages (making three language pairs, English-Sinhala, English-Tamil and Sinhala-Tamil). We ranked each corpus according to a similarity measure and carried out an intrinsic and extrinsic evaluation on different portions of this ranked corpus. We show that there are significant quality differences between different portions of web-mined corpora and that the quality varies across languages and datasets. We also show that, for some web-mined datasets, Neural Machine Translation (NMT) models trained with their highest-ranked 25k portion can be on par with human-curated datasets.",
}
```
### Acknowledgement
This work was funded by the Google Award for Inclusion Research (AIR) 2022 received by Surangika Ranathunga and Nisansa de Silva.
We thank the NLLB Meta AI team for open-sourcing the dataset. We also thank the AllenNLP team at AI2 for hosting and releasing the original NLLB dataset.