Update README.md
README.md CHANGED
@@ -36,9 +36,27 @@ Testset 1 consists of parallel sentences in Ladin and Italian. The dataset conta
 If you use this dataset, please cite the following paper:
 
 ```bibtex
-@
-
-
-
-
+@inproceedings{frontull-moser-2024-rule,
+    title = "Rule-Based, Neural and {LLM} Back-Translation: Comparative Insights from a Variant of {L}adin",
+    author = "Frontull, Samuel and
+      Moser, Georg",
+    editor = "Ojha, Atul Kr. and
+      Liu, Chao-hong and
+      Vylomova, Ekaterina and
+      Pirinen, Flammie and
+      Abbott, Jade and
+      Washington, Jonathan and
+      Oco, Nathaniel and
+      Malykh, Valentin and
+      Logacheva, Varvara and
+      Zhao, Xiaobing",
+    booktitle = "Proceedings of the The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
+    month = aug,
+    year = "2024",
+    address = "Bangkok, Thailand",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.loresmt-1.13",
+    pages = "128--138",
+    abstract = "This paper explores the impact of different back-translation approaches on machine translation for Ladin, specifically the Val Badia variant. Given the limited amount of parallel data available for this language (only 18k Ladin-Italian sentence pairs), we investigate the performance of a multilingual neural machine translation model fine-tuned for Ladin-Italian. In addition to the available authentic data, we synthesise further translations by using three different models: a fine-tuned neural model, a rule-based system developed specifically for this language pair, and a large language model. Our experiments show that all approaches achieve comparable translation quality in this low-resource scenario, yet round-trip translations highlight differences in model performance.",
 }
+```
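
Since the README describes a parallel Ladin-Italian test set, a minimal loading sketch with the Hugging Face `datasets` library may be useful alongside the citation. The repository id, split name, and column names below are placeholders (they are not given in this diff excerpt) and must be replaced with the identifiers shown on the actual dataset page.

```python
# Minimal sketch of loading the Ladin-Italian parallel data with the
# Hugging Face `datasets` library. Repository id, split, and column
# names are assumptions for illustration only.
from datasets import load_dataset

# Hypothetical repository id -- replace with the real one from the dataset card.
ds = load_dataset("username/ladin-italian-parallel", split="test")

# Inspect a few sentence pairs (column names "lad" and "ita" are assumed).
for row in ds.select(range(3)):
    print(row["lad"], "->", row["ita"])
```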