bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.wmt-1.89.bib | https://aclanthology.org/2024.wmt-1.89/ | @inproceedings{bar-etal-2024-robustness,
title = "Robustness of Fine-Tuned {LLM}s for Machine Translation with Varying Noise Levels: Insights for {A}sturian, {A}ragonese and Aranese",
author = {B{\"a}r, Martin and
Forcada Rodr{\'\i}guez, Elisa and
Garcia-Abadillo, Maria},
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.89",
pages = "918--924",
abstract = "We present the LCT-LAP proposal for the shared task on Translation into Low-Resource Languages of Spain at WMT24 within the constrained submission category. Our work harnesses encoder-decoder models pretrained on higher-resource Iberian languages to facilitate MT model training for Asturian, Aranese and Aragonese. Furthermore, we explore the robustness of these models when fine-tuned on datasets with varying levels of alignment noise. We fine-tuned a Spanish-Galician model using Asturian data filtered by BLEU score thresholds of 5, 15, 30 and 60, identifying BLEU 15 as the most effective. This threshold was then applied to the Aranese and Aragonese datasets. Our findings indicate that filtering the corpora reduces computational costs and improves performance compared to using nearly raw data or data filtered with language identification. However, it still falls short of the performance achieved by the rule-based system Apertium in Aranese and Aragonese.",
}
| We present the LCT-LAP proposal for the shared task on Translation into Low-Resource Languages of Spain at WMT24 within the constrained submission category. Our work harnesses encoder-decoder models pretrained on higher-resource Iberian languages to facilitate MT model training for Asturian, Aranese and Aragonese. Furthermore, we explore the robustness of these models when fine-tuned on datasets with varying levels of alignment noise. We fine-tuned a Spanish-Galician model using Asturian data filtered by BLEU score thresholds of 5, 15, 30 and 60, identifying BLEU 15 as the most effective. This threshold was then applied to the Aranese and Aragonese datasets. Our findings indicate that filtering the corpora reduces computational costs and improves performance compared to using nearly raw data or data filtered with language identification. However, it still falls short of the performance achieved by the rule-based system Apertium in Aranese and Aragonese. | [
"B{\\\"a}r, Martin",
"Forcada Rodr{\\'\\i}guez, Elisa",
"Garcia-Abadillo, Maria"
] | Robustness of Fine-Tuned LLMs for Machine Translation with Varying Noise Levels: Insights for Asturian, Aragonese and Aranese | wmt-1.89 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.90.bib | https://aclanthology.org/2024.wmt-1.90/ | @inproceedings{sant-etal-2024-training,
title = "Training and Fine-Tuning {NMT} Models for Low-Resource Languages Using Apertium-Based Synthetic Corpora",
author = "Sant, Aleix and
Bardanca, Daniel and
Pichel Campos, Jos{\'e} Ramom and
De Luca Fornaciari, Francesca and
Escolano, Carlos and
Garcia Gilabert, Javier and
Gamallo, Pablo and
Mash, Audrey and
Liao, Xixian and
Melero, Maite",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.90",
pages = "925--933",
abstract = "In this paper, we present the two strategies employed for the WMT24 Shared Task on Translation into Low-Resource Languages of Spain. We participated in the language pairs of Spanish-to-Aragonese, Spanish-to-Aranese, and Spanish-to-Asturian, developing neural-based translation systems and moving away from rule-based approaches for these language directions. To create these models, two distinct strategies were employed. The first strategy involved a thorough cleaning process and curation of the limited provided data, followed by fine-tuning the multilingual NLLB-200-600M model (Constrained Submission). The other strategy involved training a transformer from scratch using a vast amount of synthetic data (Open Submission). Both approaches relied on generated synthetic data and resulted in high ChrF and BLEU scores. However, given the characteristics of the task, the strategy used in the Constrained Submission resulted in higher scores that surpassed the baselines across the three translation directions, whereas the strategy employed in the Open Submission yielded slightly lower scores than the highest baseline.",
}
| In this paper, we present the two strategies employed for the WMT24 Shared Task on Translation into Low-Resource Languages of Spain. We participated in the language pairs of Spanish-to-Aragonese, Spanish-to-Aranese, and Spanish-to-Asturian, developing neural-based translation systems and moving away from rule-based approaches for these language directions. To create these models, two distinct strategies were employed. The first strategy involved a thorough cleaning process and curation of the limited provided data, followed by fine-tuning the multilingual NLLB-200-600M model (Constrained Submission). The other strategy involved training a transformer from scratch using a vast amount of synthetic data (Open Submission). Both approaches relied on generated synthetic data and resulted in high ChrF and BLEU scores. However, given the characteristics of the task, the strategy used in the Constrained Submission resulted in higher scores that surpassed the baselines across the three translation directions, whereas the strategy employed in the Open Submission yielded slightly lower scores than the highest baseline. | [
"Sant, Aleix",
"Bardanca, Daniel",
"Pichel Campos, Jos{\\'e} Ramom",
"De Luca Fornaciari, Francesca",
"Escolano, Carlos",
"Garcia Gilabert, Javier",
"Gamallo, Pablo",
"Mash, Audrey",
"Liao, Xixian",
"Melero, Maite"
] | Training and Fine-Tuning NMT Models for Low-Resource Languages Using Apertium-Based Synthetic Corpora | wmt-1.90 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.91.bib | https://aclanthology.org/2024.wmt-1.91/ | @inproceedings{ponce-etal-2024-vicomtech,
title = "Vicomtech@{WMT} 2024: Shared Task on Translation into Low-Resource Languages of {S}pain",
author = "Ponce, David and
Gete, Harritxu and
Etchegoyhen, Thierry",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.91",
pages = "934--942",
abstract = "We describe Vicomtech{'}s participation in the WMT 2024 Shared Task on translation into low-resource languages of Spain. We addressed all three languages of the task, namely Aragonese, Aranese and Asturian, in both constrained and open settings. Our work mainly centred on exploiting different types of corpora via data filtering, selection and combination methods, along with synthetic data generated with translation models based on rules, neural sequence-to-sequence or large language models. We improved or matched the best baselines in all three language pairs and present complementary results on additional test sets.",
}
| We describe Vicomtech{'}s participation in the WMT 2024 Shared Task on translation into low-resource languages of Spain. We addressed all three languages of the task, namely Aragonese, Aranese and Asturian, in both constrained and open settings. Our work mainly centred on exploiting different types of corpora via data filtering, selection and combination methods, along with synthetic data generated with translation models based on rules, neural sequence-to-sequence or large language models. We improved or matched the best baselines in all three language pairs and present complementary results on additional test sets. | [
"Ponce, David",
"Gete, Harritxu",
"Etchegoyhen, Thierry"
] | Vicomtech@WMT 2024: Shared Task on Translation into Low-Resource Languages of Spain | wmt-1.91 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.92.bib | https://aclanthology.org/2024.wmt-1.92/ | @inproceedings{hu-etal-2024-sjtu,
title = "{SJTU} System Description for the {WMT}24 Low-Resource Languages of {S}pain Task",
author = "Hu, Tianxiang and
Sun, Haoxiang and
Gao, Ruize and
Tang, Jialong and
Zhang, Pei and
Yang, Baosong and
Wang, Rui",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.92",
pages = "943--948",
abstract = "We participate in the translation task on Spanish to Aragonese, Spanish to Aranese and Spanish to Asturian. Initially, we conduct preliminary experiments to assess the basic translation capabilities of various models and evaluate the impact of fine-tuning with different data types. We then choose to fine-tune the Qwen2-0.5B model using a forward synthesized pseudo-corpus from the Apertium translation system to replicate its fundamental performance. Building on this distillation model, we explore three optimization strategies across the three language directions: (1) Assembling the provided FLORES+ dev sets into a 5-shot format translation training dataset and performing few-shot fine-tuning to enhance model performance. (2) Utilizing the FLORES+ dev sets as training data and applying the Contrastive Preference Optimization (CPO) strategy for further refinement. (3) Retrieving the 20 most similar translation examples from the FLORES+ dev sets using the BM25 algorithm and performing 20-shot translations with the Claude 3.5-sonnet model. After evaluating these strategies, we select the best-performing approach for each language pair as our submission result.",
}
| We participate in the translation task on Spanish to Aragonese, Spanish to Aranese and Spanish to Asturian. Initially, we conduct preliminary experiments to assess the basic translation capabilities of various models and evaluate the impact of fine-tuning with different data types. We then choose to fine-tune the Qwen2-0.5B model using a forward synthesized pseudo-corpus from the Apertium translation system to replicate its fundamental performance. Building on this distillation model, we explore three optimization strategies across the three language directions: (1) Assembling the provided FLORES+ dev sets into a 5-shot format translation training dataset and performing few-shot fine-tuning to enhance model performance. (2) Utilizing the FLORES+ dev sets as training data and applying the Contrastive Preference Optimization (CPO) strategy for further refinement. (3) Retrieving the 20 most similar translation examples from the FLORES+ dev sets using the BM25 algorithm and performing 20-shot translations with the Claude 3.5-sonnet model. After evaluating these strategies, we select the best-performing approach for each language pair as our submission result. | [
"Hu, Tianxiang",
"Sun, Haoxiang",
"Gao, Ruize",
"Tang, Jialong",
"Zhang, Pei",
"Yang, Baosong",
"Wang, Rui"
] | SJTU System Description for the WMT24 Low-Resource Languages of Spain Task | wmt-1.92 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.93.bib | https://aclanthology.org/2024.wmt-1.93/ | @inproceedings{luo-etal-2024-multilingual,
title = "Multilingual Transfer and Domain Adaptation for Low-Resource Languages of {S}pain",
author = "Luo, Yuanchang and
Wu, Zhanglin and
Wei, Daimeng and
Shang, Hengchao and
Li, Zongyao and
Guo, Jiaxin and
Rao, Zhiqiang and
Li, Shaojun and
Yang, Jinlong and
Xie, Yuhao and
Jiawei, Zheng and
Wei, Bin and
Yang, Hao",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.93",
pages = "949--954",
    abstract = "This article introduces the submission status of the Translation into Low-Resource Languages of Spain task at WMT 2024 by Huawei Translation Service Center (HW-TSC). We participated in three translation tasks: Spanish to Aragonese (es2arg), Spanish to Aranese (es2arn), and Spanish to Asturian (es2ast). For these three translation tasks, we apply training strategies such as multilingual transfer, regularized dropout, forward translation and back translation, LaBSE denoising, transduction ensemble learning and other strategies to a neural machine translation (NMT) model based on the deep Transformer-big architecture. By using these enhancement strategies, our submission achieved a competitive result in the final evaluation.",
}
| This article introduces the submission status of the Translation into Low-Resource Languages of Spain task at WMT 2024 by Huawei Translation Service Center (HW-TSC). We participated in three translation tasks: Spanish to Aragonese (es2arg), Spanish to Aranese (es2arn), and Spanish to Asturian (es2ast). For these three translation tasks, we apply training strategies such as multilingual transfer, regularized dropout, forward translation and back translation, LaBSE denoising, transduction ensemble learning and other strategies to a neural machine translation (NMT) model based on the deep Transformer-big architecture. By using these enhancement strategies, our submission achieved a competitive result in the final evaluation. | [
"Luo, Yuanchang",
"Wu, Zhanglin",
"Wei, Daimeng",
"Shang, Hengchao",
"Li, Zongyao",
"Guo, Jiaxin",
"Rao, Zhiqiang",
"Li, Shaojun",
"Yang, Jinlong",
"Xie, Yuhao",
"Jiawei, Zheng",
"Wei, Bin",
"Yang, Hao"
] | Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain | wmt-1.93 | Poster | 2409.15924 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.94.bib | https://aclanthology.org/2024.wmt-1.94/ | @inproceedings{kuzmin-etal-2024-tribble,
title = "{TRIBBLE} - {TR}anslating {IB}erian languages Based on Limited {E}-resources",
author = "Kuzmin, Igor and
Przyby{\l}a, Piotr and
Mcgill, Euan and
Saggion, Horacio",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.94",
pages = "955--959",
    abstract = "In this short overview paper, we describe our system submission for the language pairs Spanish to Aragonese (spa-arg), Spanish to Aranese (spa-arn), and Spanish to Asturian (spa-ast). We train a unified model for all language pairs in the constrained scenario. In addition, we add two language control tokens for Aragonese and Aranese Occitan, as there is already one present for Asturian. We take the distilled NLLB-200 model with 600M parameters and extend its special tokens with two tokens that denote the target languages (arn{\_}Latn, arg{\_}Latn), because Asturian is already present in the NLLB-200 model. We adapt the model by training on a special regime of data augmentation with both monolingual and bilingual training data for the language pairs in this challenge.",
}
| In this short overview paper, we describe our system submission for the language pairs Spanish to Aragonese (spa-arg), Spanish to Aranese (spa-arn), and Spanish to Asturian (spa-ast). We train a unified model for all language pairs in the constrained scenario. In addition, we add two language control tokens for Aragonese and Aranese Occitan, as there is already one present for Asturian. We take the distilled NLLB-200 model with 600M parameters and extend its special tokens with two tokens that denote the target languages (arn{\_}Latn, arg{\_}Latn), because Asturian is already present in the NLLB-200 model. We adapt the model by training on a special regime of data augmentation with both monolingual and bilingual training data for the language pairs in this challenge. | [
"Kuzmin, Igor",
"Przyby{\\l}a, Piotr",
"Mcgill, Euan",
"Saggion, Horacio"
] | TRIBBLE - TRanslating IBerian languages Based on Limited E-resources | wmt-1.94 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.95.bib | https://aclanthology.org/2024.wmt-1.95/ | @inproceedings{liu-etal-2024-cloudsheep,
title = "{C}loud{S}heep System for {WMT}24 Discourse-Level Literary Translation",
author = "Liu, Lisa and
Liu, Ryan and
Tsai, Angela and
Shang, Jingbo",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.95",
pages = "960--966",
abstract = "This paper describes the CloudSheep translation system for WMT24 Discourse-Level Literary Translation shared task. We participated in the Chinese-English direction on the unconstrained track. Our approach to the task used a pipeline of different tools in order to maximize the translation accuracy and flow of the text by combining the strengths of each tool. In particular, our focus was to translate names consistently and idioms correctly. To achieve consistent names throughout a text, a custom name dictionary was generated for each text, containing person and place names, along with their translations. A common honorific dictionary was applied for consistency with titles, especially in historical or cultivation novels. The names were found and translated with GPT 3.5-turbo. To achieve accurate and concise translations of idioms, which are often translated literally and verbosely, we integrated the CC-CEDICT library to provide official definitions. Then, we used GPT-4 to pick the best dictionary definition that fit the context and rephrase it to fit grammatically within a sentence. For the translation of non-name and non-idiom terms, we used Google Translate. We compared our approach{'}s performance with Google Translate as a baseline using BLEU, chrF, and COMET, as well as A/B testing.",
}
| This paper describes the CloudSheep translation system for WMT24 Discourse-Level Literary Translation shared task. We participated in the Chinese-English direction on the unconstrained track. Our approach to the task used a pipeline of different tools in order to maximize the translation accuracy and flow of the text by combining the strengths of each tool. In particular, our focus was to translate names consistently and idioms correctly. To achieve consistent names throughout a text, a custom name dictionary was generated for each text, containing person and place names, along with their translations. A common honorific dictionary was applied for consistency with titles, especially in historical or cultivation novels. The names were found and translated with GPT 3.5-turbo. To achieve accurate and concise translations of idioms, which are often translated literally and verbosely, we integrated the CC-CEDICT library to provide official definitions. Then, we used GPT-4 to pick the best dictionary definition that fit the context and rephrase it to fit grammatically within a sentence. For the translation of non-name and non-idiom terms, we used Google Translate. We compared our approach{'}s performance with Google Translate as a baseline using BLEU, chrF, and COMET, as well as A/B testing. | [
"Liu, Lisa",
"Liu, Ryan",
"Tsai, Angela",
"Shang, Jingbo"
] | CloudSheep System for WMT24 Discourse-Level Literary Translation | wmt-1.95 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.96.bib | https://aclanthology.org/2024.wmt-1.96/ | @inproceedings{sun-etal-2024-final,
title = "Final Submission of {SJTUL}ove{F}iction to Literary Task",
author = "Sun, Haoxiang and
Hu, Tianxiang and
Gao, Ruize and
Tang, Jialong and
Zhang, Pei and
Yang, Baosong and
Wang, Rui",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.96",
pages = "967--972",
    abstract = "This paper describes Shanghai Jiao Tong University (SJTU LoveFiction) Discourse-Level Literary Translation systems for the WMT24 shared task. We participate in the literary translation task on Chinese → English, Chinese → German and Chinese → Russian on the unconstrained track. Check our paper for details.",
}
| This paper describes Shanghai Jiao Tong University (SJTU LoveFiction) Discourse-Level Literary Translation systems for the WMT24 shared task. We participate in the literary translation task on Chinese → English, Chinese → German and Chinese → Russian on the unconstrained track. Check our paper for details. | [
"Sun, Haoxiang",
"Hu, Tianxiang",
"Gao, Ruize",
"Tang, Jialong",
"Zhang, Pei",
"Yang, Baosong",
"Wang, Rui"
] | Final Submission of SJTULoveFiction to Literary Task | wmt-1.96 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.97.bib | https://aclanthology.org/2024.wmt-1.97/ | @inproceedings{luo-etal-2024-context,
title = "Context-aware and Style-related Incremental Decoding Framework for Discourse-Level Literary Translation",
author = "Luo, Yuanchang and
Guo, Jiaxin and
Wei, Daimeng and
Shang, Hengchao and
Li, Zongyao and
Wu, Zhanglin and
Rao, Zhiqiang and
Li, Shaojun and
Yang, Jinlong and
Yang, Hao",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.97",
pages = "973--979",
abstract = "This report outlines our approach for the WMT24 Discourse-Level Literary Translation Task, focusing on the Chinese-English language pair in the Constrained Track. Translating literary texts poses significant challenges due to the nuanced meanings, idiomatic expressions, and intricate narrative structures inherent in such works. To address these challenges, we leveraged the Chinese-Llama2 model, specifically enhanced for this task through a combination of Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT). Our methodology includes a novel Incremental Decoding framework, which ensures that each sentence is translated with consideration of its broader context, maintaining coherence and consistency throughout the text. This approach allows the model to capture long-range dependencies and stylistic elements, producing translations that faithfully preserve the original literary quality. Our experiments demonstrate significant improvements in both sentence-level and document-level BLEU scores, underscoring the effectiveness of our proposed framework in addressing the complexities of document-level literary translation.",
}
| This report outlines our approach for the WMT24 Discourse-Level Literary Translation Task, focusing on the Chinese-English language pair in the Constrained Track. Translating literary texts poses significant challenges due to the nuanced meanings, idiomatic expressions, and intricate narrative structures inherent in such works. To address these challenges, we leveraged the Chinese-Llama2 model, specifically enhanced for this task through a combination of Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT). Our methodology includes a novel Incremental Decoding framework, which ensures that each sentence is translated with consideration of its broader context, maintaining coherence and consistency throughout the text. This approach allows the model to capture long-range dependencies and stylistic elements, producing translations that faithfully preserve the original literary quality. Our experiments demonstrate significant improvements in both sentence-level and document-level BLEU scores, underscoring the effectiveness of our proposed framework in addressing the complexities of document-level literary translation. | [
"Luo, Yuanchang",
"Guo, Jiaxin",
"Wei, Daimeng",
"Shang, Hengchao",
"Li, Zongyao",
"Wu, Zhanglin",
"Rao, Zhiqiang",
"Li, Shaojun",
"Yang, Jinlong",
"Yang, Hao"
] | Context-aware and Style-related Incremental Decoding Framework for Discourse-Level Literary Translation | wmt-1.97 | Poster | 2409.16539 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.98.bib | https://aclanthology.org/2024.wmt-1.98/ | @inproceedings{liu-etal-2024-noveltrans,
title = "{N}ovel{T}rans: System for {WMT}24 Discourse-Level Literary Translation",
author = "Liu, Yuchen and
Yao, Yutong and
Zhan, Runzhe and
Lin, Yuchu and
Wong, Derek F.",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.98",
pages = "980--986",
abstract = "This paper describes our submission system, NovelTrans, from NLP{\mbox{$^2$}}CT and DeepTranx for the WMT24 Discourse-Level Literary Translation Task in Chinese-English, Chinese-German, and Chinese-Russian language pairs under unconstrained conditions. For our primary system, three translations are done by GPT4o using three different settings of additional information and a terminology table generated by online models. The final result is composed of sentences that have the highest xCOMET score compared with the corresponding sentences in other results. Our system achieved an xCOMET score of 79.14 which is higher than performing a direct chapter-level translation on our dataset.",
}
| This paper describes our submission system, NovelTrans, from NLP{\mbox{$^2$}}CT and DeepTranx for the WMT24 Discourse-Level Literary Translation Task in Chinese-English, Chinese-German, and Chinese-Russian language pairs under unconstrained conditions. For our primary system, three translations are done by GPT4o using three different settings of additional information and a terminology table generated by online models. The final result is composed of sentences that have the highest xCOMET score compared with the corresponding sentences in other results. Our system achieved an xCOMET score of 79.14 which is higher than performing a direct chapter-level translation on our dataset. | [
"Liu, Yuchen",
"Yao, Yutong",
"Zhan, Runzhe",
"Lin, Yuchu",
"Wong, Derek F."
] | NovelTrans: System for WMT24 Discourse-Level Literary Translation | wmt-1.98 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.99.bib | https://aclanthology.org/2024.wmt-1.99/ | @inproceedings{li-etal-2024-linchance,
title = "{L}in{C}hance-{NTU} for Unconstrained {WMT}2024 Literary Translation",
author = "Li, Kechen and
Tao, Yaotian and
Huang, Hongyi and
Ji, Tianbo",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.99",
pages = "987--992",
    abstract = "The rapid growth of deep learning has spurred significant advancements across industries, particularly in machine translation through large language models (LLMs). However, translating literary texts still presents challenges, including cross-cultural nuances, complex language structures, metaphorical expressions, and cultural differences. To address these issues, this study utilizes the Llama and Phi models using both LoRA and full-parameter techniques, alongside a prompt-based translation system. Full-parameter tuning of the Llama-3-Chinese-8B-Instruct model was unsuccessful due to memory constraints. In terms of the WMT task, the fully fine-tuned Phi 3 model was selected for submission due to its more natural and fluent translations. Nonetheless, results showed that LoRA and the prompt-based system significantly improved the Llama3 model{'}s performance, surpassing other models in BLEU and ROUGE evaluations.",
}
| The rapid growth of deep learning has spurred significant advancements across industries, particularly in machine translation through large language models (LLMs). However, translating literary texts still presents challenges, including cross-cultural nuances, complex language structures, metaphorical expressions, and cultural differences. To address these issues, this study utilizes the Llama and Phi models using both LoRA and full-parameter techniques, alongside a prompt-based translation system. Full-parameter tuning of the Llama-3-Chinese-8B-Instruct model was unsuccessful due to memory constraints. In terms of the WMT task, the fully fine-tuned Phi 3 model was selected for submission due to its more natural and fluent translations. Nonetheless, results showed that LoRA and the prompt-based system significantly improved the Llama3 model{'}s performance, surpassing other models in BLEU and ROUGE evaluations. | [
"Li, Kechen",
"Tao, Yaotian",
"Huang, Hongyi",
"Ji, Tianbo"
] | LinChance-NTU for Unconstrained WMT2024 Literary Translation | wmt-1.99 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.100.bib | https://aclanthology.org/2024.wmt-1.100/ | @inproceedings{pombal-etal-2024-improving,
title = "Improving Context Usage for Translating Bilingual Customer Support Chat with Large Language Models",
author = "Pombal, Jose and
Agrawal, Sweta and
Martins, Andr{\'e}",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.100",
pages = "993--1003",
abstract = "This paper describes Unbabel+IT{'}s submission to the Chat Shared Task held at the Workshop of Machine Translation 2024. The task focuses on translating customer support chats between agents and customers communicating in different languages. We present two strategies for adapting state-of-the-art language models to better utilize contextual information when translating such conversations. Our training strategy involves finetuning the model on chat datasets with context-augmented instructions, resulting in a specialized model, TOWERCHAT. For inference, we propose a novel quality-aware decoding approach that leverages a context-aware metric, CONTEXTCOMET, to select the optimal translation from a pool of candidates. We evaluate our proposed approach on the official shared task datasets for ten language pairs, showing that our submission consistently outperforms baselines on all and competing systems on 8 out of 10 language pairs across multiple automated metrics. Remarkably, TOWERCHAT outperforms our contrastive submission based on the much larger TOWER-V2-70B model while being 10{\mbox{$\times$}} smaller. According to human evaluation, our system outperforms all other systems and baselines across all language pairs. These results underscore the importance of context-aware training and inference in handling complex bilingual dialogues.",
}
| This paper describes Unbabel+IT{'}s submission to the Chat Shared Task held at the Workshop of Machine Translation 2024. The task focuses on translating customer support chats between agents and customers communicating in different languages. We present two strategies for adapting state-of-the-art language models to better utilize contextual information when translating such conversations. Our training strategy involves finetuning the model on chat datasets with context-augmented instructions, resulting in a specialized model, TOWERCHAT. For inference, we propose a novel quality-aware decoding approach that leverages a context-aware metric, CONTEXTCOMET, to select the optimal translation from a pool of candidates. We evaluate our proposed approach on the official shared task datasets for ten language pairs, showing that our submission consistently outperforms baselines on all and competing systems on 8 out of 10 language pairs across multiple automated metrics. Remarkably, TOWERCHAT outperforms our contrastive submission based on the much larger TOWER-V2-70B model while being 10{\mbox{$\times$}} smaller. According to human evaluation, our system outperforms all other systems and baselines across all language pairs. These results underscore the importance of context-aware training and inference in handling complex bilingual dialogues. | [
"Pombal, Jose",
"Agrawal, Sweta",
"Martins, Andr{\\'e}"
] | Improving Context Usage for Translating Bilingual Customer Support Chat with Large Language Models | wmt-1.100 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.101.bib | https://aclanthology.org/2024.wmt-1.101/ | @inproceedings{yang-etal-2024-optimising,
title = "Optimising {LLM}-Driven Machine Translation with Context-Aware Sliding Windows",
author = "Yang, Xinye and
Mu, Yida and
Bontcheva, Kalina and
Song, Xingyi",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.101",
pages = "1004--1010",
    abstract = "This paper describes SheffieldGATE{'}s submission to the WMT 2024 Chat Shared Translation Task. We participate in three language pairs: English-German, English-Dutch, and English-Portuguese (Brazil). In this work, we introduce a context-aware sliding window decoding method to track dependencies between chat messages. We fine-tune a large pre-trained language model based on the training data provided by the shared task. Our experiments (i) compare the model performance between multilingual and bilingual fine-tuning and (ii) assess the impact of different window sizes. Our experimental results demonstrate that utilising contextual information yields superior performance in document-level translation compared to translating documents as isolated text segments, and that models fine-tuned with multilingual data perform better than those fine-tuned with bilingual data.",
}
| This paper describes SheffieldGATE{'}s submission to the WMT 2024 Chat Shared Translation Task. We participate in three language pairs: English-German, English-Dutch, and English-Portuguese (Brazil). In this work, we introduce a context-aware sliding window decoding method to track dependencies between chat messages. We fine-tune a large pre-trained language model based on the training data provided by the shared task. Our experiments (i) compare the model performance between multilingual and bilingual fine-tuning and (ii) assess the impact of different window sizes. Our experimental results demonstrate that utilising contextual information yields superior performance in document-level translation compared to translating documents as isolated text segments, and that models fine-tuned with multilingual data perform better than those fine-tuned with bilingual data. | [
"Yang, Xinye",
"Mu, Yida",
"Bontcheva, Kalina",
"Song, Xingyi"
] | Optimising LLM-Driven Machine Translation with Context-Aware Sliding Windows | wmt-1.101 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.102.bib | https://aclanthology.org/2024.wmt-1.102/ | @inproceedings{sung-etal-2024-context,
title = "Context-Aware {LLM} Translation System Using Conversation Summarization and Dialogue History",
author = "Sung, Mingi and
Lee, Seungmin and
Kim, Jiwon and
Kim, Sejoon",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.102",
pages = "1011--1015",
abstract = "Translating conversational text, particularly in customer support contexts, presents unique challenges due to its informal and unstructured nature. We propose a context-aware LLM translation system that leverages conversation summarization and dialogue history to enhance translation quality for the English-Korean language pair. Our approach incorporates the two most recent dialogues as raw data and a summary of earlier conversations to manage context length effectively. We demonstrate that this method significantly improves translation accuracy, maintaining coherence and consistency across conversations. This system offers a practical solution for customer support translation tasks, addressing the complexities of conversational text.",
}
| Translating conversational text, particularly in customer support contexts, presents unique challenges due to its informal and unstructured nature. We propose a context-aware LLM translation system that leverages conversation summarization and dialogue history to enhance translation quality for the English-Korean language pair. Our approach incorporates the two most recent dialogues as raw data and a summary of earlier conversations to manage context length effectively. We demonstrate that this method significantly improves translation accuracy, maintaining coherence and consistency across conversations. This system offers a practical solution for customer support translation tasks, addressing the complexities of conversational text. | [
"Sung, Mingi",
"Lee, Seungmin",
"Kim, Jiwon",
"Kim, Sejoon"
] | Context-Aware LLM Translation System Using Conversation Summarization and Dialogue History | wmt-1.102 | Poster | 2410.16775 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.103.bib | https://aclanthology.org/2024.wmt-1.103/ | @inproceedings{zhu-etal-2024-enhancing,
title = "Enhancing Translation Quality: A Comparative Study of Fine-Tuning and Prompt Engineering in Dialog-Oriented Machine Translation Systems. Insights from the {MULTITAN}-{GML} Team",
author = "Zhu, Lichao and
Zimina, Maria and
Namdarzadeh, Behnoosh and
Ballier, Nicolas and
Yun{\`e}s, Jean-Baptiste",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.103",
pages = "1016--1022",
    abstract = "For this shared task, we have used several machine translation engines to produce translations (en → fr) by fine-tuning a dialog-oriented NMT engine and having NMT baseline translations post-edited with prompt engineering. Our objectives are to test the effectiveness of a fine-tuning strategy with the help of a robust NMT model, to draw out a from-translation-to-post-editing pipeline, and to evaluate the strong and weak points of NMT systems.",
}
| For this shared task, we have used several machine translation engines to produce translations (en → fr) by fine-tuning a dialog-oriented NMT engine and having NMT baseline translations post-edited with prompt engineering. Our objectives are to test the effectiveness of a fine-tuning strategy with the help of a robust NMT model, to draw out a from-translation-to-post-editing pipeline, and to evaluate the strong and weak points of NMT systems. | [
"Zhu, Lichao",
"Zimina, Maria",
"Namdarzadeh, Behnoosh",
"Ballier, Nicolas",
"Yun{\\`e}s, Jean-Baptiste"
] | Enhancing Translation Quality: A Comparative Study of Fine-Tuning and Prompt Engineering in Dialog-Oriented Machine Translation Systems. Insights from the MULTITAN-GML Team | wmt-1.103 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.104.bib | https://aclanthology.org/2024.wmt-1.104/ | @inproceedings{zafar-etal-2024-setu-adapt,
title = "The {SETU}-{ADAPT} Submissions to {WMT} 2024 Chat Translation Tasks",
author = "Zafar, Maria and
Castaldo, Antonio and
Nayak, Prashanth and
Haque, Rejwanul and
Way, Andy",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.104",
pages = "1023--1030",
    abstract = "This paper presents the SETU-ADAPT submissions to the WMT24 Chat Translation Task. Large language models (LLMs) currently provide state-of-the-art solutions in many natural language processing (NLP) problems, including machine translation (MT). For the WMT24 Chat Translation Task, we leveraged LLMs for their MT capabilities. In order to adapt the LLMs for a specific domain of interest, we explored different fine-tuning and prompting strategies. We also employed efficient data retrieval methods to curate the data used for fine-tuning. We carried out experiments for two language pairs: German-to-English and French-to-English. Our MT models were evaluated using three metrics: BLEU, chrF and COMET. In this paper we describe our experiments, including training setups, results and findings.",
}
| This paper presents the SETU-ADAPT submissions to the WMT24 Chat Translation Task. Large language models (LLMs) currently provide state-of-the-art solutions in many natural language processing (NLP) problems, including machine translation (MT). For the WMT24 Chat Translation Task, we leveraged LLMs for their MT capabilities. In order to adapt the LLMs for a specific domain of interest, we explored different fine-tuning and prompting strategies. We also employed efficient data retrieval methods to curate the data used for fine-tuning. We carried out experiments for two language pairs: German-to-English and French-to-English. Our MT models were evaluated using three metrics: BLEU, chrF and COMET. In this paper we describe our experiments, including training setups, results and findings. | [
"Zafar, Maria",
"Castaldo, Antonio",
"Nayak, Prashanth",
"Haque, Rejwanul",
"Way, Andy"
] | The SETU-ADAPT Submissions to WMT 2024 Chat Translation Tasks | wmt-1.104 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.105.bib | https://aclanthology.org/2024.wmt-1.105/ | @inproceedings{yang-etal-2024-exploring-traditional,
title = "Exploring the Traditional {NMT} Model and Large Language Model for Chat Translation",
author = "Yang, Jinlong and
Shang, Hengchao and
Wei, Daimeng and
Guo, Jiaxin and
Li, Zongyao and
Wu, Zhanglin and
Rao, Zhiqiang and
Li, Shaojun and
Xie, Yuhao and
Luo, Yuanchang and
Jiawei, Zheng and
Wei, Bin and
Yang, Hao",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.105",
pages = "1031--1037",
    abstract = "This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on English↔German (en-de) in both directions. The experiments involved fine-tuning models using chat data and exploring various strategies, including Minimum Bayesian Risk (MBR) decoding and self-training. The results show significant performance improvements in certain directions, with the MBR self-training method achieving the best results. The paper also discusses the challenges and potential avenues for further research on Large Language Models in the field of chat translation.",
}
| This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on English↔German (en-de) in both directions. The experiments involved fine-tuning models using chat data and exploring various strategies, including Minimum Bayesian Risk (MBR) decoding and self-training. The results show significant performance improvements in certain directions, with the MBR self-training method achieving the best results. The paper also discusses the challenges and potential avenues for further research on Large Language Models in the field of chat translation. | [
"Yang, Jinlong",
"Shang, Hengchao",
"Wei, Daimeng",
"Guo, Jiaxin",
"Li, Zongyao",
"Wu, Zhanglin",
"Rao, Zhiqiang",
"Li, Shaojun",
"Xie, Yuhao",
"Luo, Yuanchang",
"Jiawei, Zheng",
"Wei, Bin",
"Yang, Hao"
] | Exploring the Traditional NMT Model and Large Language Model for Chat Translation | wmt-1.105 | Poster | 2409.16331 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.106.bib | https://aclanthology.org/2024.wmt-1.106/ | @inproceedings{krause-etal-2024-graph,
title = "Graph Representations for Machine Translation in Dialogue Settings",
author = "Krause, Lea and
Baez Santamaria, Selene and
Kalo, Jan-Christoph",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.106",
pages = "1038--1046",
    abstract = "In this paper, we present our approach to the WMT24 Chat Task, addressing the challenge of translating chat conversations. Chat conversations are characterised by their informal, ungrammatical nature and strong reliance on context, posing significant challenges for machine translation systems. To address these challenges, we augment large language models with explicit memory mechanisms designed to enhance coherence and consistency across dialogues. Specifically, we employ graph representations to capture and utilise dialogue context, leveraging concept connectivity as a compressed memory. Our approach ranked second place for Dutch and French, and third place for Portuguese and German, based on COMET-22 scores and human evaluation.",
}
| In this paper, we present our approach to the WMT24 Chat Task, addressing the challenge of translating chat conversations. Chat conversations are characterised by their informal, ungrammatical nature and strong reliance on context, posing significant challenges for machine translation systems. To address these challenges, we augment large language models with explicit memory mechanisms designed to enhance coherence and consistency across dialogues. Specifically, we employ graph representations to capture and utilise dialogue context, leveraging concept connectivity as a compressed memory. Our approach ranked second place for Dutch and French, and third place for Portuguese and German, based on COMET-22 scores and human evaluation. | [
"Krause, Lea",
"Baez Santamaria, Selene",
"Kalo, Jan-Christoph"
] | Graph Representations for Machine Translation in Dialogue Settings | wmt-1.106 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.107.bib | https://aclanthology.org/2024.wmt-1.107/ | @inproceedings{wang-etal-2024-reducing,
title = "Reducing Redundancy in {J}apanese-to-{E}nglish Translation: A Multi-Pipeline Approach for Translating Repeated Elements in {J}apanese",
author = "Wang, Qiao and
Huang, Yixuan and
Yuan, Zheng",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.107",
pages = "1047--1055",
abstract = "This paper presents a multi-pipeline Japanese-to-English machine translation (MT) system designed to address the challenge of translating repeated elements from Japanese into fluent and lexically diverse English. The system is developed as part of the Non-Repetitive Translation Task at WMT24, which focuses on minimizing redundancy while maintaining high translation quality. Our approach utilizes MeCab, the de facto NLP tool for Japanese, for the identification of repeated elements, and Claude Sonnet 3.5, a large language model (LLM), for translation and proofreading. The system effectively accomplishes the shared task by identifying and translating in a diversified manner 89.79{\%} of the 470 repeated instances in the testing dataset, and achieving an average translation quality score of 4.60 out of 5, significantly surpassing the baseline score of 3.88. Analysis also revealed the challenges encountered, particularly in identifying standalone noun-suffix elements and occasional cases of consistent translations or mistranslations.",
}
| This paper presents a multi-pipeline Japanese-to-English machine translation (MT) system designed to address the challenge of translating repeated elements from Japanese into fluent and lexically diverse English. The system is developed as part of the Non-Repetitive Translation Task at WMT24, which focuses on minimizing redundancy while maintaining high translation quality. Our approach utilizes MeCab, the de facto NLP tool for Japanese, for the identification of repeated elements, and Claude Sonnet 3.5, a large language model (LLM), for translation and proofreading. The system effectively accomplishes the shared task by identifying and translating in a diversified manner 89.79{\%} of the 470 repeated instances in the testing dataset, and achieving an average translation quality score of 4.60 out of 5, significantly surpassing the baseline score of 3.88. Analysis also revealed the challenges encountered, particularly in identifying standalone noun-suffix elements and occasional cases of consistent translations or mistranslations. | [
"Wang, Qiao",
"Huang, Yixuan",
"Yuan, Zheng"
] | Reducing Redundancy in Japanese-to-English Translation: A Multi-Pipeline Approach for Translating Repeated Elements in Japanese | wmt-1.107 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.108.bib | https://aclanthology.org/2024.wmt-1.108/ | @inproceedings{avila-crego-2024-systran,
title = "{SYSTRAN} @ {WMT}24 Non-Repetitive Translation Task",
author = "Avila, Marko and
Crego, Josep",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.108",
pages = "1056--1062",
    abstract = "Many contemporary NLP systems rely on neural decoders for text generation, which demonstrate an impressive ability to generate text approaching human fluency levels. However, in the case of neural machine translation networks, they often grapple with the production of repetitive content, also known as repetitive diction or word repetition, an aspect they weren{'}t explicitly trained to address. While not inherently negative, this repetition can make writing seem monotonous or awkward if not used intentionally for emphasis or stylistic purposes. This paper presents our submission to the WMT 2024 Non-Repetitive Translation Task, for which we adopt a repetition penalty method applied during training, inspired by the principles of label smoothing. No additional work is needed at inference time. We modify the ground-truth distribution to steer the model towards discouraging repetitions. Experiments show the ability of the proposed method to reduce repetitions within neural machine translation engines, without compromising efficiency or translation quality.",
}
| Many contemporary NLP systems rely on neural decoders for text generation, which demonstrate an impressive ability to generate text approaching human fluency levels. However, in the case of neural machine translation networks, they often grapple with the production of repetitive content, also known as repetitive diction or word repetition, an aspect they weren{'}t explicitly trained to address. While not inherently negative, this repetition can make writing seem monotonous or awkward if not used intentionally for emphasis or stylistic purposes. This paper presents our submission to the WMT 2024 Non-Repetitive Translation Task, for which we adopt a repetition penalty method applied during training, inspired by the principles of label smoothing. No additional work is needed at inference time. We modify the ground-truth distribution to steer the model towards discouraging repetitions. Experiments show the ability of the proposed method to reduce repetitions within neural machine translation engines, without compromising efficiency or translation quality. | [
"Avila, Marko",
"Crego, Josep"
] | SYSTRAN @ WMT24 Non-Repetitive Translation Task | wmt-1.108 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.109.bib | https://aclanthology.org/2024.wmt-1.109/ | @inproceedings{kovacs-etal-2024-mitigating,
title = "Mitigating Metric Bias in Minimum {B}ayes Risk Decoding",
author = "Kovacs, Geza and
Deutsch, Daniel and
Freitag, Markus",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.109",
pages = "1063--1094",
abstract = "While Minimum Bayes Risk (MBR) decoding using metrics such as COMET or MetricX has outperformed traditional decoding methods such as greedy or beam search, it introduces a challenge we refer to as metric bias. As MBR decoding aims to produce translations that score highly according to a specific utility metric, this very process makes it impossible to use the same metric for both decoding and evaluation, as any improvement might simply be due to reward hacking rather than reflecting real quality improvements. In this work we demonstrate that compared to human ratings, neural metrics not only overestimate the quality of MBR decoding when the same metric is used as the utility metric, but they also overestimate the quality of MBR/QE decoding with other neural utility metrics as well. We also show that the metric bias issue can be mitigated by using an ensemble of utility metrics during MBR decoding: human evaluations show that MBR decoding using an ensemble of utility metrics outperforms a single utility metric.",
}
| While Minimum Bayes Risk (MBR) decoding using metrics such as COMET or MetricX has outperformed traditional decoding methods such as greedy or beam search, it introduces a challenge we refer to as metric bias. As MBR decoding aims to produce translations that score highly according to a specific utility metric, this very process makes it impossible to use the same metric for both decoding and evaluation, as any improvement might simply be due to reward hacking rather than reflecting real quality improvements. In this work we demonstrate that compared to human ratings, neural metrics not only overestimate the quality of MBR decoding when the same metric is used as the utility metric, but they also overestimate the quality of MBR/QE decoding with other neural utility metrics as well. We also show that the metric bias issue can be mitigated by using an ensemble of utility metrics during MBR decoding: human evaluations show that MBR decoding using an ensemble of utility metrics outperforms a single utility metric. | [
"Kovacs, Geza",
"Deutsch, Daniel",
"Freitag, Markus"
] | Mitigating Metric Bias in Minimum Bayes Risk Decoding | wmt-1.109 | Poster | 2411.03524 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.110.bib | https://aclanthology.org/2024.wmt-1.110/ | @inproceedings{liu-etal-2024-beyond-human,
title = "Beyond Human-Only: Evaluating Human-Machine Collaboration for Collecting High-Quality Translation Data",
author = "Liu, Zhongtao and
Riley, Parker and
Deutsch, Daniel and
Lui, Alison and
Niu, Mengmeng and
Shah, Apurva and
Freitag, Markus",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.110",
pages = "1095--1106",
abstract = "Collecting high-quality translations is crucial for the development and evaluation of machine translation systems. However, traditional human-only approaches are costly and slow. This study presents a comprehensive investigation of 11 approaches for acquiring translation data, including human-only, machine-only, and hybrid approaches. Our findings demonstrate that human-machine collaboration can match or even exceed the quality of human-only translations, while being more cost-efficient. Error analysis reveals the complementary strengths between human and machine contributions, highlighting the effectiveness of collaborative methods. Cost analysis further demonstrates the economic benefits of human-machine collaboration methods, with some approaches achieving top-tier quality at around 60{\%} of the cost of traditional methods. We release a publicly available dataset containing nearly 18,000 segments of varying translation quality with corresponding human ratings to facilitate future research.",
}
| Collecting high-quality translations is crucial for the development and evaluation of machine translation systems. However, traditional human-only approaches are costly and slow. This study presents a comprehensive investigation of 11 approaches for acquiring translation data, including human-only, machine-only, and hybrid approaches. Our findings demonstrate that human-machine collaboration can match or even exceed the quality of human-only translations, while being more cost-efficient. Error analysis reveals the complementary strengths between human and machine contributions, highlighting the effectiveness of collaborative methods. Cost analysis further demonstrates the economic benefits of human-machine collaboration methods, with some approaches achieving top-tier quality at around 60{\%} of the cost of traditional methods. We release a publicly available dataset containing nearly 18,000 segments of varying translation quality with corresponding human ratings to facilitate future research. | [
"Liu, Zhongtao",
"Riley, Parker",
"Deutsch, Daniel",
"Lui, Alison",
"Niu, Mengmeng",
"Shah, Apurva",
"Freitag, Markus"
] | Beyond Human-Only: Evaluating Human-Machine Collaboration for Collecting High-Quality Translation Data | wmt-1.110 | Poster | 2410.11056 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.111.bib | https://aclanthology.org/2024.wmt-1.111/ | @inproceedings{pitorro-etal-2024-effective,
title = "How Effective Are State Space Models for Machine Translation?",
author = "Pitorro, Hugo and
Vasylenko, Pavlo and
Treviso, Marcos and
Martins, Andr{\'e}",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.111",
pages = "1107--1124",
abstract = "Transformers are the current architecture of choice for NLP, but their attention layers do not scale well to long contexts. Recent works propose to replace attention with linear recurrent layers - this is the case for state space models, which enjoy efficient training and inference. However, it remains unclear whether these models are competitive with transformers in machine translation (MT). In this paper, we provide a rigorous and comprehensive experimental comparison between transformers and linear recurrent models for MT. Concretely, we experiment with RetNet, Mamba, and hybrid versions of Mamba which incorporate attention mechanisms. Our findings demonstrate that Mamba is highly competitive with transformers on sentence and paragraph-level datasets, where in the latter both models benefit from shifting the training distribution towards longer sequences. Further analysis show that integrating attention into Mamba improves translation quality, robustness to sequence length extrapolation, and the ability to recall named entities.",
}
| Transformers are the current architecture of choice for NLP, but their attention layers do not scale well to long contexts. Recent works propose to replace attention with linear recurrent layers - this is the case for state space models, which enjoy efficient training and inference. However, it remains unclear whether these models are competitive with transformers in machine translation (MT). In this paper, we provide a rigorous and comprehensive experimental comparison between transformers and linear recurrent models for MT. Concretely, we experiment with RetNet, Mamba, and hybrid versions of Mamba which incorporate attention mechanisms. Our findings demonstrate that Mamba is highly competitive with transformers on sentence and paragraph-level datasets, where in the latter both models benefit from shifting the training distribution towards longer sequences. Further analysis show that integrating attention into Mamba improves translation quality, robustness to sequence length extrapolation, and the ability to recall named entities. | [
"Pitorro, Hugo",
"Vasylenko, Pavlo",
"Treviso, Marcos",
"Martins, Andr{\\'e}"
] | How Effective Are State Space Models for Machine Translation? | wmt-1.111 | Poster | 2407.05489 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.112.bib | https://aclanthology.org/2024.wmt-1.112/ | @inproceedings{post-junczys-dowmunt-2024-evaluation,
title = "Evaluation and Large-scale Training for Contextual Machine Translation",
author = "Post, Matt and
Junczys-Dowmunt, Marcin",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.112",
pages = "1125--1139",
abstract = "Despite the fact that context is known to be vital for resolving a range of translation ambiguities, most traditional machine translation systems continue to be trained and to operate at the sentence level. A common explanation is the lack of document-level annotations for existing training data. This work investigates whether having such annotations would be helpful for training traditional MT systems at scale. We build large-scale, state-of-the-art contextual MT systems into German, French, and Russian, fixing the datasets while comparing the effect of sourcing contextual training samples from both parallel and back-translated data. We then evaluate these contextual models across a range of contextual test sets from the literature, where we find that (a) document annotations from both mined parallel and back-translated monolingual data are helpful, but that the best contextual MT systems do not draw contextual samples from the parallel data. We also make two points related to evaluation: (b) contrastive score-based metrics on challenge sets are not discriminative; instead, models must be tested directly on their ability to generate correct outputs, and (c) standard corpus-level metrics such as COMET work best in settings that are dense in contextual phenomena.",
}
| Despite the fact that context is known to be vital for resolving a range of translation ambiguities, most traditional machine translation systems continue to be trained and to operate at the sentence level. A common explanation is the lack of document-level annotations for existing training data. This work investigates whether having such annotations would be helpful for training traditional MT systems at scale. We build large-scale, state-of-the-art contextual MT systems into German, French, and Russian, fixing the datasets while comparing the effect of sourcing contextual training samples from both parallel and back-translated data. We then evaluate these contextual models across a range of contextual test sets from the literature, where we find that (a) document annotations from both mined parallel and back-translated monolingual data are helpful, but that the best contextual MT systems do not draw contextual samples from the parallel data. We also make two points related to evaluation: (b) contrastive score-based metrics on challenge sets are not discriminative; instead, models must be tested directly on their ability to generate correct outputs, and (c) standard corpus-level metrics such as COMET work best in settings that are dense in contextual phenomena. | [
"Post, Matt",
"Junczys-Dowmunt, Marcin"
] | Evaluation and Large-scale Training for Contextual Machine Translation | wmt-1.112 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.113.bib | https://aclanthology.org/2024.wmt-1.113/ | @inproceedings{qian-etal-2024-multi,
title = "A Multi-task Learning Framework for Evaluating Machine Translation of Emotion-loaded User-generated Content",
author = "Qian, Shenbin and
Orasan, Constantin and
Kanojia, Diptesh and
Do Carmo, F{\'e}lix",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.113",
pages = "1140--1154",
abstract = "Machine translation (MT) of user-generated content (UGC) poses unique challenges, including handling slang, emotion, and literary devices like irony and sarcasm. Evaluating the quality of these translations is challenging as current metrics do not focus on these ubiquitous features of UGC. To address this issue, we utilize an existing emotion-related dataset that includes emotion labels and human-annotated translation errors based on Multi-dimensional Quality Metrics. We extend it with sentence-level evaluation scores and word-level labels, leading to a dataset suitable for sentence- and word-level translation evaluation and emotion classification, in a multi-task setting. We propose a new architecture to perform these tasks concurrently, with a novel combined loss function, which integrates different loss heuristics, like the Nash and Aligned losses. Our evaluation compares existing fine-tuning and multi-task learning approaches, assessing generalization with ablative experiments over multiple datasets. Our approach achieves state-of-the-art performance and we present a comprehensive analysis for MT evaluation of UGC.",
}
| Machine translation (MT) of user-generated content (UGC) poses unique challenges, including handling slang, emotion, and literary devices like irony and sarcasm. Evaluating the quality of these translations is challenging as current metrics do not focus on these ubiquitous features of UGC. To address this issue, we utilize an existing emotion-related dataset that includes emotion labels and human-annotated translation errors based on Multi-dimensional Quality Metrics. We extend it with sentence-level evaluation scores and word-level labels, leading to a dataset suitable for sentence- and word-level translation evaluation and emotion classification, in a multi-task setting. We propose a new architecture to perform these tasks concurrently, with a novel combined loss function, which integrates different loss heuristics, like the Nash and Aligned losses. Our evaluation compares existing fine-tuning and multi-task learning approaches, assessing generalization with ablative experiments over multiple datasets. Our approach achieves state-of-the-art performance and we present a comprehensive analysis for MT evaluation of UGC. | [
"Qian, Shenbin",
"Orasan, Constantin",
"Kanojia, Diptesh",
"Do Carmo, F{\\'e}lix"
] | A Multi-task Learning Framework for Evaluating Machine Translation of Emotion-loaded User-generated Content | wmt-1.113 | Poster | 2410.03277 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.114.bib | https://aclanthology.org/2024.wmt-1.114/ | @inproceedings{raunak-etal-2024-instruction,
title = "On Instruction-Finetuning Neural Machine Translation Models",
author = "Raunak, Vikas and
Grundkiewicz, Roman and
Junczys-Dowmunt, Marcin",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.114",
pages = "1155--1166",
abstract = "In this work, we introduce instruction finetuning for Neural Machine Translation (NMT) models, which distills instruction following capabilities from Large Language Models (LLMs) into orders-of-magnitude smaller NMT models. Our instruction-finetuning recipe for NMT models enables customization of translations for a limited but disparate set of translation-specific tasks.We show that NMT models are capable of following multiple instructions simultaneously and demonstrate capabilities of zero-shot composition of instructions.We also show that through instruction finetuning, traditionally disparate tasks such as formality-controlled machine translation, multi-domain adaptation as well as multi-modal translations can be tackled jointly by a single instruction finetuned NMT model, at a performance level comparable to LLMs such as GPT-3.5-Turbo.To the best of our knowledge, our work is among the first to demonstrate the instruction-following capabilities of traditional NMT models, which allows for faster, cheaper and more efficient serving of customized translations.",
}
| In this work, we introduce instruction finetuning for Neural Machine Translation (NMT) models, which distills instruction following capabilities from Large Language Models (LLMs) into orders-of-magnitude smaller NMT models. Our instruction-finetuning recipe for NMT models enables customization of translations for a limited but disparate set of translation-specific tasks. We show that NMT models are capable of following multiple instructions simultaneously and demonstrate capabilities of zero-shot composition of instructions. We also show that through instruction finetuning, traditionally disparate tasks such as formality-controlled machine translation, multi-domain adaptation as well as multi-modal translations can be tackled jointly by a single instruction finetuned NMT model, at a performance level comparable to LLMs such as GPT-3.5-Turbo. To the best of our knowledge, our work is among the first to demonstrate the instruction-following capabilities of traditional NMT models, which allows for faster, cheaper and more efficient serving of customized translations. | [
"Raunak, Vikas",
"Grundkiewicz, Roman",
"Junczys-Dowmunt, Marcin"
] | On Instruction-Finetuning Neural Machine Translation Models | wmt-1.114 | Poster | 2410.05553 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.115.bib | https://aclanthology.org/2024.wmt-1.115/ | @inproceedings{salesky-etal-2024-benchmarking,
title = "Benchmarking Visually-Situated Translation of Text in Natural Images",
author = "Salesky, Elizabeth and
Koehn, Philipp and
Post, Matt",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.115",
pages = "1167--1182",
abstract = "We introduce a benchmark, Vistra, for visually-situated translation of English text in natural images to four target languages. We describe the dataset construction and composition. We benchmark open-source and commercial OCR and MT models on Vistra, and present both quantitative results and a taxonomy of common OCR error classes with their effect on downstream MT. Finally, we assess direct image-to-text translation with a multimodal LLM, and show that it is able in some cases but not yet consistently to disambiguate possible translations with visual context. We show that this is an unsolved and challenging task even for strong commercial models. We hope that the creation and release of this benchmark which is the first of its kind for these language pairs will encourage further research in this direction.",
}
| We introduce a benchmark, Vistra, for visually-situated translation of English text in natural images to four target languages. We describe the dataset construction and composition. We benchmark open-source and commercial OCR and MT models on Vistra, and present both quantitative results and a taxonomy of common OCR error classes with their effect on downstream MT. Finally, we assess direct image-to-text translation with a multimodal LLM, and show that it is able in some cases but not yet consistently to disambiguate possible translations with visual context. We show that this is an unsolved and challenging task even for strong commercial models. We hope that the creation and release of this benchmark which is the first of its kind for these language pairs will encourage further research in this direction. | [
"Salesky, Elizabeth",
"Koehn, Philipp",
"Post, Matt"
] | Benchmarking Visually-Situated Translation of Text in Natural Images | wmt-1.115 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.116.bib | https://aclanthology.org/2024.wmt-1.116/ | @inproceedings{sizov-etal-2024-analysing,
title = "Analysing Translation Artifacts: A Comparative Study of {LLM}s, {NMT}s, and Human Translations",
author = "Sizov, Fedor and
Espa{\~n}a-Bonet, Cristina and
Van Genabith, Josef and
Xie, Roy and
Dutta Chowdhury, Koel",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.116",
pages = "1183--1199",
abstract = "Translated texts exhibit a range of characteristics that make them appear distinct from texts originally written in the same target language. With the rise of Large Language Models (LLMs), which are designed for a wide range of language generation and understanding tasks, there has been significant interest in their application to Machine Translation. While several studies have focused on improving translation quality through fine-tuning or few-shot prompting techniques, there has been limited exploration of how LLM-generated translations qualitatively differ from those produced by Neural Machine Translation (NMT) models, and human translations. Our study employs explainability methods such as Leave-One-Out (LOO) and Integrated Gradients (IG) to analyze the lexical features distinguishing human translations from those produced by LLMs and NMT systems. Specifically, we apply a two-stage approach: first, classifying texts based on their origin {--} whether they are original or translations {--} and second, extracting significant lexical features (highly attributed input words) using post-hoc interpretability methods. Our analysis shows that different methods of feature extraction vary in their effectiveness, with LOO being generally better at pinpointing critical input words and IG capturing a broader range of important words. Finally, our results show that while LLMs and NMT systems can produce translations of a good quality, they still differ from texts originally written by native speakers. Specifically, we find that while some LLMs often align closely with human translations, traditional NMT systems exhibit distinct characteristics, particularly in their use of certain linguistic features.",
}
| Translated texts exhibit a range of characteristics that make them appear distinct from texts originally written in the same target language. With the rise of Large Language Models (LLMs), which are designed for a wide range of language generation and understanding tasks, there has been significant interest in their application to Machine Translation. While several studies have focused on improving translation quality through fine-tuning or few-shot prompting techniques, there has been limited exploration of how LLM-generated translations qualitatively differ from those produced by Neural Machine Translation (NMT) models, and human translations. Our study employs explainability methods such as Leave-One-Out (LOO) and Integrated Gradients (IG) to analyze the lexical features distinguishing human translations from those produced by LLMs and NMT systems. Specifically, we apply a two-stage approach: first, classifying texts based on their origin {--} whether they are original or translations {--} and second, extracting significant lexical features (highly attributed input words) using post-hoc interpretability methods. Our analysis shows that different methods of feature extraction vary in their effectiveness, with LOO being generally better at pinpointing critical input words and IG capturing a broader range of important words. Finally, our results show that while LLMs and NMT systems can produce translations of a good quality, they still differ from texts originally written by native speakers. Specifically, we find that while some LLMs often align closely with human translations, traditional NMT systems exhibit distinct characteristics, particularly in their use of certain linguistic features. | [
"Sizov, Fedor",
"Espa{\\~n}a-Bonet, Cristina",
"Van Genabith, Josef",
"Xie, Roy",
"Dutta Chowdhury, Koel"
] | Analysing Translation Artifacts: A Comparative Study of LLMs, NMTs, and Human Translations | wmt-1.116 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.117.bib | https://aclanthology.org/2024.wmt-1.117/ | @inproceedings{song-etal-2024-grammatical,
title = "How Grammatical Features Impact Machine Translation: A New Test Suite for {C}hinese-{E}nglish {MT} Evaluation",
author = "Song, Huacheng and
Li, Yi and
Wu, Yiwen and
Liu, Yu and
Lin, Jingxia and
Xu, Hongzhi",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.117",
pages = "1200--1221",
abstract = "Machine translation (MT) evaluation has evolved toward a trend of fine-grained granularity, enabling a more precise diagnosis of hidden flaws and weaknesses of MT systems from various perspectives. This paper examines how MT systems are potentially affected by certain grammatical features, offering insights into the challenges these features pose and suggesting possible directions for improvement. We develop a new test suite by extracting 7,848 sentences from a multi-domain Chinese-English parallel corpus. All the Chinese text was further annotated with 43 grammatical features using a semi-automatic method. This test suite was subsequently used to evaluate eight state-of-the-art MT systems according to six different automatic evaluation metrics. The results reveal intriguing patterns of MT performance associated with different domains and various grammatical features, highlighting the test suite{'}s effectiveness. The test suite was made publicly available and it will serve as an important benchmark for evaluating and diagnosing Chinese-English MT systems.",
}
| Machine translation (MT) evaluation has evolved toward a trend of fine-grained granularity, enabling a more precise diagnosis of hidden flaws and weaknesses of MT systems from various perspectives. This paper examines how MT systems are potentially affected by certain grammatical features, offering insights into the challenges these features pose and suggesting possible directions for improvement. We develop a new test suite by extracting 7,848 sentences from a multi-domain Chinese-English parallel corpus. All the Chinese text was further annotated with 43 grammatical features using a semi-automatic method. This test suite was subsequently used to evaluate eight state-of-the-art MT systems according to six different automatic evaluation metrics. The results reveal intriguing patterns of MT performance associated with different domains and various grammatical features, highlighting the test suite{'}s effectiveness. The test suite was made publicly available and it will serve as an important benchmark for evaluating and diagnosing Chinese-English MT systems. | [
"Song, Huacheng",
"Li, Yi",
"Wu, Yiwen",
"Liu, Yu",
"Lin, Jingxia",
"Xu, Hongzhi"
] | How Grammatical Features Impact Machine Translation: A New Test Suite for Chinese-English MT Evaluation | wmt-1.117 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.118.bib | https://aclanthology.org/2024.wmt-1.118/ | @inproceedings{thompson-etal-2024-improving,
title = "Improving Statistical Significance in Human Evaluation of Automatic Metrics via Soft Pairwise Accuracy",
author = "Thompson, Brian and
Mathur, Nitika and
Deutsch, Daniel and
Khayrallah, Huda",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.118",
pages = "1222--1234",
abstract = "Selecting an automatic metric that best emulates human annotators is often non-trivial, because there is no clear definition of {``}best emulates.{''} A meta-metric is required to compare the human judgments to the automatic metric scores, and metric rankings depend on the choice of meta-metric. We propose Soft Pairwise Accuracy (SPA), a new meta-metric that builds on Pairwise Accuracy (PA) but incorporates the statistical significance of both the human judgments and the metric scores. We show that SPA is more stable than PA with respect to changes in the number of systems/segments used for evaluation. We also show that PA can only assign a small set of distinct output values to metrics, and this results in many metrics being artificially assigned the exact same PA score. We demonstrate that SPA fixes this issue. Finally, we show that SPA is more discriminative than PA, producing more statistically significant comparisons between metrics. SPA was selected as the official system-level metric for the 2024 WMT Metrics Shared Task.",
}
| Selecting an automatic metric that best emulates human annotators is often non-trivial, because there is no clear definition of {``}best emulates.{''} A meta-metric is required to compare the human judgments to the automatic metric scores, and metric rankings depend on the choice of meta-metric. We propose Soft Pairwise Accuracy (SPA), a new meta-metric that builds on Pairwise Accuracy (PA) but incorporates the statistical significance of both the human judgments and the metric scores. We show that SPA is more stable than PA with respect to changes in the number of systems/segments used for evaluation. We also show that PA can only assign a small set of distinct output values to metrics, and this results in many metrics being artificially assigned the exact same PA score. We demonstrate that SPA fixes this issue. Finally, we show that SPA is more discriminative than PA, producing more statistically significant comparisons between metrics. SPA was selected as the official system-level metric for the 2024 WMT Metrics Shared Task. | [
"Thompson, Brian",
"Mathur, Nitika",
"Deutsch, Daniel",
"Khayrallah, Huda"
] | Improving Statistical Significance in Human Evaluation of Automatic Metrics via Soft Pairwise Accuracy | wmt-1.118 | Poster | 2409.09598 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.119.bib | https://aclanthology.org/2024.wmt-1.119/ | @inproceedings{tsiamas-etal-2024-speech,
title = "Speech Is More than Words: Do Speech-to-Text Translation Systems Leverage Prosody?",
author = "Tsiamas, Ioannis and
Sperber, Matthias and
Finch, Andrew and
Garg, Sarthak",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.119",
pages = "1235--1257",
abstract = "The prosody of a spoken utterance, including features like stress, intonation and rhythm, can significantly affect the underlying semantics, and as a consequence can also affect its textual translation. Nevertheless, prosody is rarely studied within the context of speech-to-text translation (S2TT) systems. In particular, end-to-end (E2E) systems have been proposed as well-suited for prosody-aware translation because they have direct access to the speech signal when making translation decisions, but the understanding of whether this is successful in practice is still limited. A main challenge is the difficulty of evaluating prosody awareness in translation. To address this challenge, we introduce an evaluation methodology and a focused benchmark (named ContraProSt) aimed at capturing a wide range of prosodic phenomena. Our methodology uses large language models and controllable text-to-speech (TTS) to generate contrastive examples. Through experiments in translating English speech into German, Spanish, and Japanese, we find that (a) S2TT models possess some internal representation of prosody, but the prosody signal is often not strong enough to affect the translations, (b) E2E systems outperform cascades of speech recognition and text translation systems, confirming their theoretical advantage in this regard, and (c) certain cascaded systems also capture prosodic information in the translation, but only to a lesser extent that depends on the particulars of the transcript{'}s surface form.",
}
| The prosody of a spoken utterance, including features like stress, intonation and rhythm, can significantly affect the underlying semantics, and as a consequence can also affect its textual translation. Nevertheless, prosody is rarely studied within the context of speech-to-text translation (S2TT) systems. In particular, end-to-end (E2E) systems have been proposed as well-suited for prosody-aware translation because they have direct access to the speech signal when making translation decisions, but the understanding of whether this is successful in practice is still limited. A main challenge is the difficulty of evaluating prosody awareness in translation. To address this challenge, we introduce an evaluation methodology and a focused benchmark (named ContraProSt) aimed at capturing a wide range of prosodic phenomena. Our methodology uses large language models and controllable text-to-speech (TTS) to generate contrastive examples. Through experiments in translating English speech into German, Spanish, and Japanese, we find that (a) S2TT models possess some internal representation of prosody, but the prosody signal is often not strong enough to affect the translations, (b) E2E systems outperform cascades of speech recognition and text translation systems, confirming their theoretical advantage in this regard, and (c) certain cascaded systems also capture prosodic information in the translation, but only to a lesser extent that depends on the particulars of the transcript{'}s surface form. | [
"Tsiamas, Ioannis",
"Sperber, Matthias",
"Finch, Andrew",
"Garg, Sarthak"
] | Speech Is More than Words: Do Speech-to-Text Translation Systems Leverage Prosody? | wmt-1.119 | Poster | 2410.24019 | [
""
] | https://huggingface.co/papers/2410.24019 | 2 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.wmt-1.120.bib | https://aclanthology.org/2024.wmt-1.120/ | @inproceedings{zhang-etal-2024-cultural,
title = "Cultural Adaptation of Menus: A Fine-Grained Approach",
author = "Zhang, Zhonghe and
He, Xiaoyu and
Iyer, Vivek and
Birch, Alexandra",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.120",
pages = "1258--1271",
abstract = "Machine Translation of Culture-Specific Items (CSIs) poses significant challenges. Recent work on CSI translation has shown some success using Large Language Models (LLMs) to adapt to different languages and cultures; however, a deeper analysis is needed to examine the benefits and pitfalls of each method. In this paper, we introduce the ChineseMenuCSI dataset, the largest for Chinese-English menu corpora, annotated with CSI vs Non-CSI labels and a fine-grained test set. We define three levels of CSI figurativeness for a more nuanced analysis and develop a novel methodology for automatic CSI identification, which outperforms GPT-based prompts in most categories. Importantly, we are the first to integrate human translation theories into LLM-driven translation processes, significantly improving translation accuracy, with COMET scores increasing by up to 7 points. The code and dataset are available at https://github.com/Henry8772/ChineseMenuCSI.",
}
| Machine Translation of Culture-Specific Items (CSIs) poses significant challenges. Recent work on CSI translation has shown some success using Large Language Models (LLMs) to adapt to different languages and cultures; however, a deeper analysis is needed to examine the benefits and pitfalls of each method. In this paper, we introduce the ChineseMenuCSI dataset, the largest for Chinese-English menu corpora, annotated with CSI vs Non-CSI labels and a fine-grained test set. We define three levels of CSI figurativeness for a more nuanced analysis and develop a novel methodology for automatic CSI identification, which outperforms GPT-based prompts in most categories. Importantly, we are the first to integrate human translation theories into LLM-driven translation processes, significantly improving translation accuracy, with COMET scores increasing by up to 7 points. The code and dataset are available at https://github.com/Henry8772/ChineseMenuCSI. | [
"Zhang, Zhonghe",
"He, Xiaoyu",
"Iyer, Vivek",
"Birch, Alex",
"ra"
] | Cultural Adaptation of Menus: A Fine-Grained Approach | wmt-1.120 | Poster | 2408.13534 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.121.bib | https://aclanthology.org/2024.wmt-1.121/ | @inproceedings{zouhar-etal-2024-pitfalls,
title = "Pitfalls and Outlooks in Using {COMET}",
author = "Zouhar, Vil{\'e}m and
Chen, Pinzhen and
Lam, Tsz Kin and
Moghe, Nikita and
Haddow, Barry",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.121",
pages = "1272--1288",
abstract = "The COMET metric has blazed a trail in the machine translation community, given its strong correlation with human judgements of translation quality.Its success stems from being a modified pre-trained multilingual model finetuned for quality assessment.However, it being a machine learning model also gives rise to a new set of pitfalls that may not be widely known. We investigate these unexpected behaviours from three aspects:1) technical: obsolete software versions and compute precision; 2) data: empty content, language mismatch, and translationese at test time as well as distribution and domain biases in training; 3) usage and reporting: multi-reference support and model referencing in the literature. All of these problems imply that COMET scores are not comparable between papers or even technical setups and we put forward our perspective on fixing each issue.Furthermore, we release the sacreCOMET package that can generate a signature for the software and model configuration as well as an appropriate citation.The goal of this work is to help the community make more sound use of the COMET metric.",
}
| The COMET metric has blazed a trail in the machine translation community, given its strong correlation with human judgements of translation quality. Its success stems from being a modified pre-trained multilingual model finetuned for quality assessment. However, it being a machine learning model also gives rise to a new set of pitfalls that may not be widely known. We investigate these unexpected behaviours from three aspects: 1) technical: obsolete software versions and compute precision; 2) data: empty content, language mismatch, and translationese at test time as well as distribution and domain biases in training; 3) usage and reporting: multi-reference support and model referencing in the literature. All of these problems imply that COMET scores are not comparable between papers or even technical setups and we put forward our perspective on fixing each issue. Furthermore, we release the sacreCOMET package that can generate a signature for the software and model configuration as well as an appropriate citation. The goal of this work is to help the community make more sound use of the COMET metric. | [
"Zouhar, Vil{\\'e}m",
"Chen, Pinzhen",
"Lam, Tsz Kin",
"Moghe, Nikita",
"Haddow, Barry"
] | Pitfalls and Outlooks in Using COMET | wmt-1.121 | Poster | 2408.15366 | [
"https://github.com/PinzhenChen/sacreCOMET"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.122.bib | https://aclanthology.org/2024.wmt-1.122/ | @inproceedings{berger-etal-2024-post,
title = "Post-edits Are Preferences Too",
author = "Berger, Nathaniel and
Riezler, Stefan and
Exel, Miriam and
Huck, Matthias",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.122",
pages = "1289--1300",
abstract = "Preference Optimization (PO) techniques are currently one of the state of the art techniques for fine-tuning large language models (LLMs) on pairwise preference feedback from human annotators. However, in machine translation, this sort of feedback can be difficult to solicit. Additionally, Kreuzer et al. (2018) have shown that, for machine translation, pairwise preferences are less reliable than other forms of human feedback, such as 5-point ratings.We examine post-edits to see if they can be a source of reliable human preferences by construction. In PO, a human annotator is shown sequences {\$}s{\_}1{\$} and {\$}s{\_}2{\$} and asked for a preference judgment, while for post-editing, editors create {\$}s{\_}1{\$} and know that it should be better than {\$}s{\_}2{\$}. We attempt to use these implicit preferences for PO and show that it helps the model move towards post-edit like hypotheses and away from machine translation-like hypotheses. Furthermore, we show that best results are obtained by pre-training the model with supervised fine-tuning (SFT) on post-edits in order to promote post-edit like hypotheses to the top output ranks.",
}
| Preference Optimization (PO) techniques are currently one of the state of the art techniques for fine-tuning large language models (LLMs) on pairwise preference feedback from human annotators. However, in machine translation, this sort of feedback can be difficult to solicit. Additionally, Kreuzer et al. (2018) have shown that, for machine translation, pairwise preferences are less reliable than other forms of human feedback, such as 5-point ratings. We examine post-edits to see if they can be a source of reliable human preferences by construction. In PO, a human annotator is shown sequences {\$}s{\_}1{\$} and {\$}s{\_}2{\$} and asked for a preference judgment, while for post-editing, editors create {\$}s{\_}1{\$} and know that it should be better than {\$}s{\_}2{\$}. We attempt to use these implicit preferences for PO and show that it helps the model move towards post-edit like hypotheses and away from machine translation-like hypotheses. Furthermore, we show that best results are obtained by pre-training the model with supervised fine-tuning (SFT) on post-edits in order to promote post-edit like hypotheses to the top output ranks. | [
"Berger, Nathaniel",
"Riezler, Stefan",
"Exel, Miriam",
"Huck, Matthias"
] | Post-edits Are Preferences Too | wmt-1.122 | Poster | 2410.02320 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.123.bib | https://aclanthology.org/2024.wmt-1.123/ | @inproceedings{briakou-etal-2024-translating,
title = "Translating Step-by-Step: Decomposing the Translation Process for Improved Translation Quality of Long-Form Texts",
author = "Briakou, Eleftheria and
Luo, Jiaming and
Cherry, Colin and
Freitag, Markus",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.123",
pages = "1301--1317",
abstract = "In this paper we present a step-by-step approach to long-form text translation, drawing on established processes in translation studies. Instead of viewing machine translation as a single, monolithic task, we propose a framework that engages language models in a multi-turn interaction, encompassing pre-translation research, drafting, refining, and proofreading, resulting in progressively improved translations.Extensive automatic evaluations using Gemini 1.5 Pro across ten language pairs show that translating step-by-step yields large translation quality improvements over conventional zero-shot prompting approaches and earlier human-like baseline strategies, resulting in state-of-the-art results on WMT 2024.",
}
| In this paper we present a step-by-step approach to long-form text translation, drawing on established processes in translation studies. Instead of viewing machine translation as a single, monolithic task, we propose a framework that engages language models in a multi-turn interaction, encompassing pre-translation research, drafting, refining, and proofreading, resulting in progressively improved translations. Extensive automatic evaluations using Gemini 1.5 Pro across ten language pairs show that translating step-by-step yields large translation quality improvements over conventional zero-shot prompting approaches and earlier human-like baseline strategies, resulting in state-of-the-art results on WMT 2024. | [
"Briakou, Eleftheria",
"Luo, Jiaming",
"Cherry, Colin",
"Freitag, Markus"
] | Translating Step-by-Step: Decomposing the Translation Process for Improved Translation Quality of Long-Form Texts | wmt-1.123 | Poster | 2409.06790 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.124.bib | https://aclanthology.org/2024.wmt-1.124/ | @inproceedings{caillaut-etal-2024-scaling,
title = "Scaling Laws of Decoder-Only Models on the Multilingual Machine Translation Task",
author = {Caillaut, Ga{\"e}tan and
Nakhl{\'e}, Mariam and
Qader, Raheel and
Liu, Jingshu and
Barth{\'e}lemy, Jean-Gabriel},
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.124",
pages = "1318--1331",
abstract = "Recent studies have showcased remarkable capabilities of decoder-only models in many NLP tasks, including translation. Yet, the machine translation field has been largely dominated by encoder-decoder models based on the Transformer architecture. As a consequence, scaling laws of encoder-decoder models for neural machine translation have already been well studied, but decoder-only models have received less attention.This work explores the scaling laws of decoder-only models on the multilingual and multidomain translation task. We trained a collection of six decoder-only models, ranging from 70M to 7B parameters, on a sentence-level, multilingual (8 languages) and multidomain (9 domains) dataset. We conducted a series of experiments showing that the loss of decoder-only models can be estimated using a scaling law similar to the one discovered for large language models, but we also show that this scaling law has difficulties to generalize to too large models or to a different data distribution. We also study different scaling methods and show that scaling the depth and the width of a model lead to similar test loss improvements, but with different impact on the model{'}s efficiency.",
}
| Recent studies have showcased remarkable capabilities of decoder-only models in many NLP tasks, including translation. Yet, the machine translation field has been largely dominated by encoder-decoder models based on the Transformer architecture. As a consequence, scaling laws of encoder-decoder models for neural machine translation have already been well studied, but decoder-only models have received less attention. This work explores the scaling laws of decoder-only models on the multilingual and multidomain translation task. We trained a collection of six decoder-only models, ranging from 70M to 7B parameters, on a sentence-level, multilingual (8 languages) and multidomain (9 domains) dataset. We conducted a series of experiments showing that the loss of decoder-only models can be estimated using a scaling law similar to the one discovered for large language models, but we also show that this scaling law has difficulties to generalize to too large models or to a different data distribution. We also study different scaling methods and show that scaling the depth and the width of a model lead to similar test loss improvements, but with different impact on the model{'}s efficiency. | [
"Caillaut, Ga{\\\"e}tan",
"Nakhl{\\'e}, Mariam",
"Qader, Raheel",
"Liu, Jingshu",
"Barth{\\'e}lemy, Jean-Gabriel"
] | Scaling Laws of Decoder-Only Models on the Multilingual Machine Translation Task | wmt-1.124 | Poster | 2409.15051 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.125.bib | https://aclanthology.org/2024.wmt-1.125/ | @inproceedings{court-elsner-2024-shortcomings,
title = "Shortcomings of {LLM}s for Low-Resource Translation: Retrieval and Understanding Are Both the Problem",
author = "Court, Sara and
Elsner, Micha",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.125",
pages = "1332--1354",
abstract = "This work investigates the in-context learning abilities of pretrained large language models (LLMs) when instructed to translate text from a low-resource language into a high-resource language as part of an automated machine translation pipeline. We conduct a set of experiments translating Southern Quechua to Spanish and examine the informativity of various types of information retrieved from a constrained database of digitized pedagogical materials (dictionaries and grammar lessons) and parallel corpora. Using both automatic and human evaluation of model output, we conduct ablation studies that manipulate (1) context type (morpheme translations, grammar descriptions, and corpus examples), (2) retrieval methods (automated vs. manual), and (3) model type. Our results suggest that even relatively small LLMs are capable of utilizing prompt context for zero-shot low-resource translation when provided a minimally sufficient amount of relevant linguistic information. However, the variable effects of prompt type, retrieval method, model type, and language community-specific factors highlight the limitations of using even the best LLMs as translation systems for the majority of the world{'}s 7,000+ languages and their speakers.",
}
| This work investigates the in-context learning abilities of pretrained large language models (LLMs) when instructed to translate text from a low-resource language into a high-resource language as part of an automated machine translation pipeline. We conduct a set of experiments translating Southern Quechua to Spanish and examine the informativity of various types of information retrieved from a constrained database of digitized pedagogical materials (dictionaries and grammar lessons) and parallel corpora. Using both automatic and human evaluation of model output, we conduct ablation studies that manipulate (1) context type (morpheme translations, grammar descriptions, and corpus examples), (2) retrieval methods (automated vs. manual), and (3) model type. Our results suggest that even relatively small LLMs are capable of utilizing prompt context for zero-shot low-resource translation when provided a minimally sufficient amount of relevant linguistic information. However, the variable effects of prompt type, retrieval method, model type, and language community-specific factors highlight the limitations of using even the best LLMs as translation systems for the majority of the world{'}s 7,000+ languages and their speakers. | [
"Court, Sara",
"Elsner, Micha"
] | Shortcomings of LLMs for Low-Resource Translation: Retrieval and Understanding Are Both the Problem | wmt-1.125 | Poster | 2406.15625 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.126.bib | https://aclanthology.org/2024.wmt-1.126/ | @inproceedings{finkelstein-etal-2024-introducing,
title = "Introducing the {N}ews{P}a{LM} {MBR} and {QE} Dataset: {LLM}-Generated High-Quality Parallel Data Outperforms Traditional Web-Crawled Data",
author = "Finkelstein, Mara and
Vilar, David and
Freitag, Markus",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.126",
pages = "1355--1372",
abstract = "Recent research in neural machine translation (NMT) has shown that training on high-quality machine-generated data can outperform training on human-generated data. This work accompanies the first-ever release of a LLM-generated, MBR-decoded and QE-reranked dataset with both sentence-level and multi-sentence examples. We perform extensive experiments to demonstrate the quality of our dataset in terms of its downstream impact on NMT model performance. We find that training from scratch on our (machine-generated) dataset outperforms training on the (web-crawled) WMT{'}23 training dataset (which is 300 times larger), and also outperforms training on the top-quality subset of the WMT{'}23 training dataset. We also find that performing self-distillation by finetuning the LLM which generated this dataset outperforms the LLM{'}s strong few-shot baseline. These findings corroborate the quality of our dataset, and demonstrate the value of high-quality machine-generated data in improving performance of NMT models.",
}
| Recent research in neural machine translation (NMT) has shown that training on high-quality machine-generated data can outperform training on human-generated data. This work accompanies the first-ever release of a LLM-generated, MBR-decoded and QE-reranked dataset with both sentence-level and multi-sentence examples. We perform extensive experiments to demonstrate the quality of our dataset in terms of its downstream impact on NMT model performance. We find that training from scratch on our (machine-generated) dataset outperforms training on the (web-crawled) WMT{'}23 training dataset (which is 300 times larger), and also outperforms training on the top-quality subset of the WMT{'}23 training dataset. We also find that performing self-distillation by finetuning the LLM which generated this dataset outperforms the LLM{'}s strong few-shot baseline. These findings corroborate the quality of our dataset, and demonstrate the value of high-quality machine-generated data in improving performance of NMT models. | [
"Finkelstein, Mara",
"Vilar, David",
"Freitag, Markus"
] | Introducing the NewsPaLM MBR and QE Dataset: LLM-Generated High-Quality Parallel Data Outperforms Traditional Web-Crawled Data | wmt-1.126 | Poster | 2408.06537 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.127.bib | https://aclanthology.org/2024.wmt-1.127/ | @inproceedings{gisserot-boukhlef-etal-2024-preference,
title = "Is Preference Alignment Always the Best Option to Enhance {LLM}-Based Translation? An Empirical Analysis",
author = "Gisserot-Boukhlef, Hippolyte and
Rei, Ricardo and
Malherbe, Emmanuel and
Hudelot, C{\'e}line and
Colombo, Pierre and
Guerreiro, Nuno M.",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.127",
pages = "1373--1392",
abstract = "Neural metrics for machine translation (MT) evaluation have become increasingly prominent due to their superior correlation with human judgments compared to traditional lexical metrics. Researchers have therefore utilized neural metrics through quality-informed decoding strategies, achieving better results than likelihood-based methods. With the rise of Large Language Models (LLMs), preference-based alignment techniques have gained attention for their potential to enhance translation quality by optimizing model weights directly on preferences induced by quality estimators. This study focuses on Contrastive Preference Optimization (CPO) and conducts extensive experiments to evaluate the impact of preference-based alignment on translation quality. Our findings indicate that while CPO consistently outperforms Supervised Fine-Tuning (SFT) on high-quality data with regard to the alignment metric, it may lead to instability across downstream evaluation metrics, particularly between neural and lexical ones. Additionally, we demonstrate that relying solely on the base model for generating candidate translations achieves performance comparable to using multiple external systems, while ensuring better consistency across downstream metrics.",
}
| Neural metrics for machine translation (MT) evaluation have become increasingly prominent due to their superior correlation with human judgments compared to traditional lexical metrics. Researchers have therefore utilized neural metrics through quality-informed decoding strategies, achieving better results than likelihood-based methods. With the rise of Large Language Models (LLMs), preference-based alignment techniques have gained attention for their potential to enhance translation quality by optimizing model weights directly on preferences induced by quality estimators. This study focuses on Contrastive Preference Optimization (CPO) and conducts extensive experiments to evaluate the impact of preference-based alignment on translation quality. Our findings indicate that while CPO consistently outperforms Supervised Fine-Tuning (SFT) on high-quality data with regard to the alignment metric, it may lead to instability across downstream evaluation metrics, particularly between neural and lexical ones. Additionally, we demonstrate that relying solely on the base model for generating candidate translations achieves performance comparable to using multiple external systems, while ensuring better consistency across downstream metrics. | [
"Gisserot-Boukhlef, Hippolyte",
"Rei, Ricardo",
"Malherbe, Emmanuel",
"Hudelot, C{\\'e}line",
"Colombo, Pierre",
"Guerreiro, Nuno M."
] | Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis | wmt-1.127 | Poster | 2409.20059 | [
""
] | https://huggingface.co/papers/2409.20059 | 5 | 15 | 2 | 6 | [] | [
"hgissbkh/WMT22-23-Test-Metrics"
] | [] | [] | [
"hgissbkh/WMT22-23-Test-Metrics"
] | [] | 1 |
https://aclanthology.org/2024.wmt-1.128.bib | https://aclanthology.org/2024.wmt-1.128/ | @inproceedings{iyer-etal-2024-quality,
title = "Quality or Quantity? On Data Scale and Diversity in Adapting Large Language Models for Low-Resource Translation",
author = "Iyer, Vivek and
Malik, Bhavitvya and
Stepachev, Pavel and
Chen, Pinzhen and
Haddow, Barry and
Birch, Alexandra",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.128",
pages = "1393--1409",
abstract = "Despite the recent popularity of Large Language Models (LLMs) in Machine Translation (MT), their performance in low-resource languages (LRLs) still lags significantly behind Neural Machine Translation (NMT) models. In this work, we explore what it would take to adapt LLMs for the low-resource setting. Particularly, we re-examine the role of two factors: a) the importance and application of parallel data, and b) diversity in Supervised Fine-Tuning (SFT). Recently, parallel data has seen reduced use in adapting LLMs for MT, while data diversity has been embraced to promote transfer across languages and tasks. However, for low-resource LLM-MT, we show that the opposite is true for both considerations: a) \textit{parallel data} is critical during both pre-training and SFT; b) diversity tends to cause \textit{interference} instead of transfer. Our experiments with three LLMs across two low-resourced language groups{---}Indigenous American and North-East Indian{---}reveal consistent trends, underscoring the generalizability of our findings. We believe these insights will be valuable for scaling to massively multilingual LLM-MT models that can effectively serve LRLs.",
}
| Despite the recent popularity of Large Language Models (LLMs) in Machine Translation (MT), their performance in low-resource languages (LRLs) still lags significantly behind Neural Machine Translation (NMT) models. In this work, we explore what it would take to adapt LLMs for the low-resource setting. Particularly, we re-examine the role of two factors: a) the importance and application of parallel data, and b) diversity in Supervised Fine-Tuning (SFT). Recently, parallel data has seen reduced use in adapting LLMs for MT, while data diversity has been embraced to promote transfer across languages and tasks. However, for low-resource LLM-MT, we show that the opposite is true for both considerations: a) \textit{parallel data} is critical during both pre-training and SFT; b) diversity tends to cause \textit{interference} instead of transfer. Our experiments with three LLMs across two low-resourced language groups{---}Indigenous American and North-East Indian{---}reveal consistent trends, underscoring the generalizability of our findings. We believe these insights will be valuable for scaling to massively multilingual LLM-MT models that can effectively serve LRLs. | [
"Iyer, Vivek",
"Malik, Bhavitvya",
"Stepachev, Pavel",
"Chen, Pinzhen",
"Haddow, Barry",
"Birch, Alex",
"ra"
] | Quality or Quantity? On Data Scale and Diversity in Adapting Large Language Models for Low-Resource Translation | wmt-1.128 | Poster | 2408.12780 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.129.bib | https://aclanthology.org/2024.wmt-1.129/ | @inproceedings{jiyoon-etal-2024-efficient,
title = "Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation",
author = "Jiyoon, Myung and
Park, Jihyeon and
Son, Jungki and
Lee, Kyungro and
Han, Joohyung",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.129",
pages = "1410--1427",
abstract = "This paper addresses the challenge of accurately translating technical terms, which are crucial for clear communication in specialized fields. We introduce the Parenthetical Terminology Translation (PTT) task, designed to mitigate potential inaccuracies by displaying the original term in parentheses alongside its translation. To implement this approach, we generated a representative PTT dataset using a collaborative approach with large language models and applied knowledge distillation to fine-tune traditional Neural Machine Translation (NMT) models and small-sized Large Language Models (sLMs). Additionally, we developed a novel evaluation metric to assess both overall translation accuracy and the correct parenthetical presentation of terms. Our findings indicate that sLMs did not consistently outperform NMT models, with fine-tuning proving more effective than few-shot prompting, particularly in models with continued pre-training in the target language. These insights contribute to the advancement of more reliable terminology translation methodologies.",
}
| This paper addresses the challenge of accurately translating technical terms, which are crucial for clear communication in specialized fields. We introduce the Parenthetical Terminology Translation (PTT) task, designed to mitigate potential inaccuracies by displaying the original term in parentheses alongside its translation. To implement this approach, we generated a representative PTT dataset using a collaborative approach with large language models and applied knowledge distillation to fine-tune traditional Neural Machine Translation (NMT) models and small-sized Large Language Models (sLMs). Additionally, we developed a novel evaluation metric to assess both overall translation accuracy and the correct parenthetical presentation of terms. Our findings indicate that sLMs did not consistently outperform NMT models, with fine-tuning proving more effective than few-shot prompting, particularly in models with continued pre-training in the target language. These insights contribute to the advancement of more reliable terminology translation methodologies. | [
"Jiyoon, Myung",
"Park, Jihyeon",
"Son, Jungki",
"Lee, Kyungro",
"Han, Joohyung"
] | Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation | wmt-1.129 | Poster | 2410.00683 | [
""
] | https://huggingface.co/papers/2410.00683 | 0 | 0 | 0 | 5 | [
"PrompTart/m2m100_418M_PTT_en_ko"
] | [
"PrompTart/PTT_en_ko",
"PrompTart/PTT_advanced_en_ko"
] | [] | [
"PrompTart/m2m100_418M_PTT_en_ko"
] | [
"PrompTart/PTT_en_ko",
"PrompTart/PTT_advanced_en_ko"
] | [] | 1 |
https://aclanthology.org/2024.wmt-1.130.bib | https://aclanthology.org/2024.wmt-1.130/ | @inproceedings{kashani-motlagh-etal-2024-assessing,
title = "Assessing the Role of Imagery in Multimodal Machine Translation",
author = "Kashani Motlagh, Nicholas and
Davis, Jim and
Gwinnup, Jeremy and
Erdmann, Grant and
Anderson, Tim",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.130",
pages = "1428--1439",
abstract = "In Multimodal Machine Translation (MMT), the use of visual data has shown only marginal improvements compared to text-only models. Previously, the CoMMuTE dataset and associated metric were proposed to score models on tasks where the imagery is necessary to disambiguate between two possible translations for each ambiguous source sentence. In this work, we introduce new metrics within the CoMMuTE domain to provide deeper insights into image-aware translation models. Our proposed metrics differ from the previous CoMMuTE scoring method by 1) assessing the impact of multiple images on individual translations and 2) evaluating a model{'}s ability to jointly select each translation for each image context. Our results challenge the conventional views of poor visual comprehension capabilities of MMT models and show that models can indeed meaningfully interpret visual information, though they may not leverage it sufficiently in the final decision.",
}
| In Multimodal Machine Translation (MMT), the use of visual data has shown only marginal improvements compared to text-only models. Previously, the CoMMuTE dataset and associated metric were proposed to score models on tasks where the imagery is necessary to disambiguate between two possible translations for each ambiguous source sentence. In this work, we introduce new metrics within the CoMMuTE domain to provide deeper insights into image-aware translation models. Our proposed metrics differ from the previous CoMMuTE scoring method by 1) assessing the impact of multiple images on individual translations and 2) evaluating a model{'}s ability to jointly select each translation for each image context. Our results challenge the conventional views of poor visual comprehension capabilities of MMT models and show that models can indeed meaningfully interpret visual information, though they may not leverage it sufficiently in the final decision. | [
"Kashani Motlagh, Nicholas",
"Davis, Jim",
"Gwinnup, Jeremy",
"Erdmann, Grant",
"Anderson, Tim"
] | Assessing the Role of Imagery in Multimodal Machine Translation | wmt-1.130 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.131.bib | https://aclanthology.org/2024.wmt-1.131/ | @inproceedings{kocmi-etal-2024-error,
title = "Error Span Annotation: A Balanced Approach for Human Evaluation of Machine Translation",
author = "Kocmi, Tom and
Zouhar, Vil{\'e}m and
Avramidis, Eleftherios and
Grundkiewicz, Roman and
Karpinska, Marzena and
Popovi{\'c}, Maja and
Sachan, Mrinmaya and
Shmatova, Mariya",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.131",
pages = "1440--1453",
abstract = "High-quality Machine Translation (MT) evaluation relies heavily on human judgments.Comprehensive error classification methods, such as Multidimensional Quality Metrics (MQM), are expensive as they are time-consuming and can only be done by experts, whose availability may be limited especially for low-resource languages.On the other hand, just assigning overall scores, like Direct Assessment (DA), is simpler and faster and can be done by translators of any level, but is less reliable.In this paper, we introduce Error Span Annotation (ESA), a human evaluation protocol which combines the continuous rating of DA with the high-level error severity span marking of MQM.We validate ESA by comparing it to MQM and DA for 12 MT systems and one human reference translation (English to German) from WMT23. The results show that ESA offers faster and cheaper annotations than MQM at the same quality level, without the requirement of expensive MQM experts.",
}
| High-quality Machine Translation (MT) evaluation relies heavily on human judgments. Comprehensive error classification methods, such as Multidimensional Quality Metrics (MQM), are expensive as they are time-consuming and can only be done by experts, whose availability may be limited especially for low-resource languages. On the other hand, just assigning overall scores, like Direct Assessment (DA), is simpler and faster and can be done by translators of any level, but is less reliable. In this paper, we introduce Error Span Annotation (ESA), a human evaluation protocol which combines the continuous rating of DA with the high-level error severity span marking of MQM. We validate ESA by comparing it to MQM and DA for 12 MT systems and one human reference translation (English to German) from WMT23. The results show that ESA offers faster and cheaper annotations than MQM at the same quality level, without the requirement of expensive MQM experts. | [
"Kocmi, Tom",
"Zouhar, Vil{\\'e}m",
"Avramidis, Eleftherios",
"Grundkiewicz, Roman",
"Karpinska, Marzena",
"Popovi{\\'c}, Maja",
"Sachan, Mrinmaya",
"Shmatova, Mariya"
] | Error Span Annotation: A Balanced Approach for Human Evaluation of Machine Translation | wmt-1.131 | Poster | 2406.11580 | [
"https://github.com/wmt-conference/ErrorSpanAnnotation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wmt-1.132.bib | https://aclanthology.org/2024.wmt-1.132/ | @inproceedings{koehn-2024-neural,
title = "Neural Methods for Aligning Large-Scale Parallel Corpora from the Web for South and {E}ast {A}sian Languages",
author = "Koehn, Philipp",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.132",
pages = "1454--1466",
abstract = "We introduce neural methods and a toxicity filtering step to the hierarchical web mining approach of Paracrawl (Ba{\~n}{\'o}n et al., 2020), showing large improvements. We apply these methods to web-scale parallel corpus mining for 9 South and East Asian national languages, creating training resources for machine translation that yield better translation quality for most of these languages than existing publicly available datasets in OPUS. Our methods also generally lead to better results than the global mining approach of Schwenk et al. (2021).",
}
| We introduce neural methods and a toxicity filtering step to the hierarchical web mining approach of Paracrawl (Ba{\~n}{\'o}n et al., 2020), showing large improvements. We apply these methods to web-scale parallel corpus mining for 9 South and East Asian national languages, creating training resources for machine translation that yield better translation quality for most of these languages than existing publicly available datasets in OPUS. Our methods also generally lead to better results than the global mining approach of Schwenk et al. (2021). | [
"Koehn, Philipp"
] | Neural Methods for Aligning Large-Scale Parallel Corpora from the Web for South and East Asian Languages | wmt-1.132 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wmt-1.133.bib | https://aclanthology.org/2024.wmt-1.133/ | @inproceedings{koneru-etal-2024-plug,
title = "Plug, Play, and Fuse: Zero-Shot Joint Decoding via Word-Level Re-ranking across Diverse Vocabularies",
author = "Koneru, Sai and
Huck, Matthias and
Exel, Miriam and
Niehues, Jan",
editor = "Haddow, Barry and
Kocmi, Tom and
Koehn, Philipp and
Monz, Christof",
booktitle = "Proceedings of the Ninth Conference on Machine Translation",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wmt-1.133",
pages = "1467--1481",
abstract = "Recent advancements in NLP have resulted in models with specialized strengths, such as processing multimodal inputs or excelling in specific domains. However, real-world tasks, like multimodal translation, often require a combination of these strengths, such as handling both translation and image processing. While individual translation and vision models are powerful, they typically lack the ability to perform both tasks in a single system. Combining these models poses challenges, particularly due to differences in their vocabularies, which limit the effectiveness of traditional ensemble methods to post-generation techniques like N-best list re-ranking. In this work, we propose a novel zero-shot ensembling strategy that allows for the integration of different models during the decoding phase without the need for additional training. Our approach re-ranks beams during decoding by combining scores at the word level, using heuristics to predict when a word is completed. We demonstrate the effectiveness of this method in machine translation scenarios, showing that it enables the generation of translations that are both speech- and image-aware while also improving overall translation quality.",
}
| Recent advancements in NLP have resulted in models with specialized strengths, such as processing multimodal inputs or excelling in specific domains. However, real-world tasks, like multimodal translation, often require a combination of these strengths, such as handling both translation and image processing. While individual translation and vision models are powerful, they typically lack the ability to perform both tasks in a single system. Combining these models poses challenges, particularly due to differences in their vocabularies, which limit the effectiveness of traditional ensemble methods to post-generation techniques like N-best list re-ranking. In this work, we propose a novel zero-shot ensembling strategy that allows for the integration of different models during the decoding phase without the need for additional training. Our approach re-ranks beams during decoding by combining scores at the word level, using heuristics to predict when a word is completed. We demonstrate the effectiveness of this method in machine translation scenarios, showing that it enables the generation of translations that are both speech- and image-aware while also improving overall translation quality. | [
"Koneru, Sai",
"Huck, Matthias",
"Exel, Miriam",
"Niehues, Jan"
] | Plug, Play, and Fuse: Zero-Shot Joint Decoding via Word-Level Re-ranking across Diverse Vocabularies | wmt-1.133 | Poster | 2408.11327 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wnu-1.1.bib | https://aclanthology.org/2024.wnu-1.1/ | @inproceedings{huang-usbeck-2024-narration,
title = "Narration as Functions: from Events to Narratives",
author = "Huang, Junbo and
Usbeck, Ricardo",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.1",
pages = "1--7",
abstract = "Identifying events from text has a long past in narrative analysis, but a short history in Natural Language Processing (NLP). In this position paper, a question is asked: given the telling of a sequence of real-world events by a news narrator, what do NLP event extraction models capture, and what do they miss? Insights from critical discourse analysis (CDA) and from a series of movements in literary criticism motivate us to model the narrated logic in news narratives.As a result, a computational framework is proposed to model the function of news narration, which shapes the narrated world, consumed by news narratees. As a simplification, we represent the causal logic between events depicted in the narrated world.",
}
| Identifying events from text has a long past in narrative analysis, but a short history in Natural Language Processing (NLP). In this position paper, a question is asked: given the telling of a sequence of real-world events by a news narrator, what do NLP event extraction models capture, and what do they miss? Insights from critical discourse analysis (CDA) and from a series of movements in literary criticism motivate us to model the narrated logic in news narratives. As a result, a computational framework is proposed to model the function of news narration, which shapes the narrated world, consumed by news narratees. As a simplification, we represent the causal logic between events depicted in the narrated world. | [
"Huang, Junbo",
"Usbeck, Ricardo"
] | Narration as Functions: from Events to Narratives | wnu-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wnu-1.2.bib | https://aclanthology.org/2024.wnu-1.2/ | @inproceedings{ermolaeva-etal-2024-tame,
title = "How to tame your plotline: A framework for goal-driven interactive fairy tale generation",
author = "Ermolaeva, Marina and
Shakhmatova, Anastasia and
Nepomnyashchikh, Alina and
Fenogenova, Alena",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.2",
pages = "8--31",
abstract = "Automatic storytelling is a difficult NLP task that poses a challenge even for state-of-the-art large language models. This paper proposes a pipeline for interactive fairy tale generation in a mixed-initiative setting. Our approach introduces a story goal as a stopping condition, imposes minimal structure on the narrative in the form of a simple emotional arc, and controls the transition between the stages of the story via system prompt engineering. The resulting framework reconciles creating a structured and complete short-form narrative with retaining player agency and allowing users to influence the storyline through their input. We evaluate our approach with several proprietary and open-source language models and examine its transferability to different languages, specifically English and Russian.",
}
| Automatic storytelling is a difficult NLP task that poses a challenge even for state-of-the-art large language models. This paper proposes a pipeline for interactive fairy tale generation in a mixed-initiative setting. Our approach introduces a story goal as a stopping condition, imposes minimal structure on the narrative in the form of a simple emotional arc, and controls the transition between the stages of the story via system prompt engineering. The resulting framework reconciles creating a structured and complete short-form narrative with retaining player agency and allowing users to influence the storyline through their input. We evaluate our approach with several proprietary and open-source language models and examine its transferability to different languages, specifically English and Russian. | [
"Ermolaeva, Marina",
"Shakhmatova, Anastasia",
"Nepomnyashchikh, Alina",
"Fenogenova, Alena"
] | How to tame your plotline: A framework for goal-driven interactive fairy tale generation | wnu-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wnu-1.3.bib | https://aclanthology.org/2024.wnu-1.3/ | @inproceedings{lagrange-2024-understanding,
title = "Understanding Transmedia Storytelling: Reception and Narrative Comprehension in Bill Willingham{'}s Fables Franchise",
author = "Lagrange, Victoria",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.3",
pages = "32--36",
abstract = "This study explores the reception and understanding of the transmedia ensemble surrounding Bill Willingham{'}s Fables (2002-2015), a comic series reimagining fairytale characters in a modern setting. Fables expands its narrative across multiple media, including spin-off comics, a novel, and the video game The Wolf Among Us. This research investigates key questions: Can we identify a distinct group of transmedia consumers? What elements of the narrative sustain interest across media? A survey of 58 participants reveals that while most enter the franchise through the comic series, a significant number are introduced via the video game. The findings indicate that Fables fans are highly engaged transmedia consumers, with a majority exploring several parts of the franchise wanting to pursue narrative exploration. This study offers insights into how transmedia narratives are consumed, emphasizing the role of familiar story elements in encouraging cross-media engagement.",
}
| This study explores the reception and understanding of the transmedia ensemble surrounding Bill Willingham{'}s Fables (2002-2015), a comic series reimagining fairytale characters in a modern setting. Fables expands its narrative across multiple media, including spin-off comics, a novel, and the video game The Wolf Among Us. This research investigates key questions: Can we identify a distinct group of transmedia consumers? What elements of the narrative sustain interest across media? A survey of 58 participants reveals that while most enter the franchise through the comic series, a significant number are introduced via the video game. The findings indicate that Fables fans are highly engaged transmedia consumers, with a majority exploring several parts of the franchise wanting to pursue narrative exploration. This study offers insights into how transmedia narratives are consumed, emphasizing the role of familiar story elements in encouraging cross-media engagement. | [
"Lagrange, Victoria"
] | Understanding Transmedia Storytelling: Reception and Narrative Comprehension in Bill Willingham's Fables Franchise | wnu-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wnu-1.4.bib | https://aclanthology.org/2024.wnu-1.4/ | @inproceedings{piper-bagga-2024-using,
title = "Using Large Language Models for Understanding Narrative Discourse",
author = "Piper, Andrew and
Bagga, Sunyam",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.4",
pages = "37--46",
abstract = "In this study, we explore the application of large language models (LLMs) to analyze narrative discourse within the framework established by the field of narratology. We develop a set of elementary narrative features derived from prior theoretical work that focus on core dimensions of narrative, including time, setting, and perspective. Through experiments with GPT-4 and fine-tuned open-source models like Llama3, we demonstrate the models{'} ability to annotate narrative passages with reasonable levels of agreement with human annotators. Leveraging a dataset of human-annotated passages spanning 18 distinct narrative and non-narrative genres, our work provides empirical support for the deictic theory of narrative communication. This theory posits that a fundamental function of storytelling is the focalization of attention on distant human experiences to facilitate social coordination. We conclude with a discussion of the possibilities for LLM-driven narrative discourse understanding.",
}
| In this study, we explore the application of large language models (LLMs) to analyze narrative discourse within the framework established by the field of narratology. We develop a set of elementary narrative features derived from prior theoretical work that focus on core dimensions of narrative, including time, setting, and perspective. Through experiments with GPT-4 and fine-tuned open-source models like Llama3, we demonstrate the models{'} ability to annotate narrative passages with reasonable levels of agreement with human annotators. Leveraging a dataset of human-annotated passages spanning 18 distinct narrative and non-narrative genres, our work provides empirical support for the deictic theory of narrative communication. This theory posits that a fundamental function of storytelling is the focalization of attention on distant human experiences to facilitate social coordination. We conclude with a discussion of the possibilities for LLM-driven narrative discourse understanding. | [
"Piper, Andrew",
"Bagga, Sunyam"
] | Using Large Language Models for Understanding Narrative Discourse | wnu-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wnu-1.7.bib | https://aclanthology.org/2024.wnu-1.7/ | @inproceedings{shokri-etal-2024-safe,
title = "Is It Safe to Tell Your Story? Towards Achieving Privacy for Sensitive Narratives",
author = "Shokri, Mohammad and
Bishop, Allison and
Levitan, Sarah Ita",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.7",
pages = "47--54",
abstract = "Evolving tools for narrative analysis present an opportunity to identify common structure in stories that are socially important to tell, such as stories of survival from domestic abuse. A greater structural understanding of such stories could lead to stronger protections against de-anonymization, as well as future tools to help survivors navigate the complex trade-offs inherent in trying to tell their stories safely. In this work we explore narrative patterns within a small set of domestic violence stories, identifying many similarities. We then propose a method to assess the safety of sharing a story based on a distance feature vector.",
}
| Evolving tools for narrative analysis present an opportunity to identify common structure in stories that are socially important to tell, such as stories of survival from domestic abuse. A greater structural understanding of such stories could lead to stronger protections against de-anonymization, as well as future tools to help survivors navigate the complex trade-offs inherent in trying to tell their stories safely. In this work we explore narrative patterns within a small set of domestic violence stories, identifying many similarities. We then propose a method to assess the safety of sharing a story based on a distance feature vector. | [
"Shokri, Mohammad",
"Bishop, Allison",
"Levitan, Sarah Ita"
] | Is It Safe to Tell Your Story? Towards Achieving Privacy for Sensitive Narratives | wnu-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wnu-1.9.bib | https://aclanthology.org/2024.wnu-1.9/ | @inproceedings{heyns-van-zaanen-2024-annotating,
title = "Annotating Mystery Novels: Guidelines and Adaptations",
author = "Heyns, Nuette and
Van Zaanen, Menno",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.9",
pages = "55--66",
abstract = "To understand how stories are structured, we would like to be able to analyze the architecture of narratives. This article reviews and compares existing annotation guidelines for scene and narrative level annotation. We propose new guidelines, based on existing ones, and show how these can be effectively extended from general-purpose to specialized contexts, such as mystery novels which feature unique narrative elements like red herrings and plot twists. This provides a controlled environment for examining genre-specific event structuring. Additionally, we present a newly annotated genre-specific dataset of mystery novels, offering valuable resources for training and evaluating models in narrative understanding. This study aims to enhance annotation practices and advance the development of computational models for narrative analysis.",
}
| To understand how stories are structured, we would like to be able to analyze the architecture of narratives. This article reviews and compares existing annotation guidelines for scene and narrative level annotation. We propose new guidelines, based on existing ones, and show how these can be effectively extended from general-purpose to specialized contexts, such as mystery novels which feature unique narrative elements like red herrings and plot twists. This provides a controlled environment for examining genre-specific event structuring. Additionally, we present a newly annotated genre-specific dataset of mystery novels, offering valuable resources for training and evaluating models in narrative understanding. This study aims to enhance annotation practices and advance the development of computational models for narrative analysis. | [
"Heyns, Nuette",
"Van Zaanen, Menno"
] | Annotating Mystery Novels: Guidelines and Adaptations | wnu-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wnu-1.12.bib | https://aclanthology.org/2024.wnu-1.12/ | @inproceedings{heddaya-etal-2024-causal,
title = "Causal Micro-Narratives",
author = "Heddaya, Mourad and
Zeng, Qingcheng and
Zentefis, Alexander and
Voigt, Rob and
Tan, Chenhao",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.12",
pages = "67--84",
abstract = "We present a novel approach to classify causal micro-narratives from text. These narratives are sentence-level explanations of the cause(s) and/or effect(s) of a target subject. The approach requires only a subject-specific ontology of causes and effects, and we demonstrate it with an application to inflation narratives. Using a human-annotated dataset spanning historical and contemporary US news articles for training, we evaluate several large language models (LLMs) on this multi-label classification task. The best-performing model{---}a fine-tuned Llama 3.1 8B{---}achieves F1 scores of 0.87 on narrative detection and 0.71 on narrative classification. Comprehensive error analysis reveals challenges arising from linguistic ambiguity and highlights how model errors often mirror human annotator disagreements. This research establishes a framework for extracting causal micro-narratives from real-world data, with wide-ranging applications to social science research.",
}
| We present a novel approach to classify causal micro-narratives from text. These narratives are sentence-level explanations of the cause(s) and/or effect(s) of a target subject. The approach requires only a subject-specific ontology of causes and effects, and we demonstrate it with an application to inflation narratives. Using a human-annotated dataset spanning historical and contemporary US news articles for training, we evaluate several large language models (LLMs) on this multi-label classification task. The best-performing model{---}a fine-tuned Llama 3.1 8B{---}achieves F1 scores of 0.87 on narrative detection and 0.71 on narrative classification. Comprehensive error analysis reveals challenges arising from linguistic ambiguity and highlights how model errors often mirror human annotator disagreements. This research establishes a framework for extracting causal micro-narratives from real-world data, with wide-ranging applications to social science research. | [
"Heddaya, Mourad",
"Zeng, Qingcheng",
"Zentefis, Alex",
"er",
"Voigt, Rob",
"Tan, Chenhao"
] | Causal Micro-Narratives | wnu-1.12 | Poster | 2410.05252 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wnu-1.15.bib | https://aclanthology.org/2024.wnu-1.15/ | @inproceedings{das-etal-2024-media,
title = "Media Framing through the Lens of Event-Centric Narratives",
author = "Das, Rohan and
Chandra, Aditya and
Lee, I-Ta and
Pacheco, Maria Leonor",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.15",
pages = "85--98",
abstract = "From a communications perspective, a frame defines the packaging of the language used in such a way as to encourage certain interpretations and to discourage others. For example, a news article can frame immigration as either a boost or a drain on the economy, and thus communicate very different interpretations of the same phenomenon. In this work, we argue that to explain framing devices we have to look at the way narratives are constructed. As a first step in this direction, we propose a framework that extracts events and their relations to other events, and groups them into high-level narratives that help explain frames in news articles. We show that our framework can be used to analyze framing in U.S. news for two different domains: immigration and gun control.",
}
| From a communications perspective, a frame defines the packaging of the language used in such a way as to encourage certain interpretations and to discourage others. For example, a news article can frame immigration as either a boost or a drain on the economy, and thus communicate very different interpretations of the same phenomenon. In this work, we argue that to explain framing devices we have to look at the way narratives are constructed. As a first step in this direction, we propose a framework that extracts events and their relations to other events, and groups them into high-level narratives that help explain frames in news articles. We show that our framework can be used to analyze framing in U.S. news for two different domains: immigration and gun control. | [
"Das, Rohan",
"Ch",
"ra, Aditya",
"Lee, I-Ta",
"Pacheco, Maria Leonor"
] | Media Framing through the Lens of Event-Centric Narratives | wnu-1.15 | Poster | 2410.03151 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wnu-1.16.bib | https://aclanthology.org/2024.wnu-1.16/ | @inproceedings{baumann-etal-2024-bert,
title = "{BERT}-based Annotation of Oral Texts Elicited via Multilingual Assessment Instrument for Narratives",
author = "Baumann, Timo and
Eller, Korbinian and
Gagarina, Natalia",
editor = "Lal, Yash Kumar and
Clark, Elizabeth and
Iyyer, Mohit and
Chaturvedi, Snigdha and
Brei, Anneliese and
Brahman, Faeze and
Chandu, Khyathi Raghavi",
booktitle = "Proceedings of the The 6th Workshop on Narrative Understanding",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wnu-1.16",
pages = "99--104",
abstract = "We investigate how NLP can help annotate the structure and complexity of oral narrative texts elicited via the Multilingual Assessment Instrument for Narratives (MAIN). MAIN is a theory-based tool designed to evaluate the narrative abilities of children who are learning one or more languages from birth or early in their development. It provides a standardized way to measure how well children can comprehend and produce stories across different languages and referential norms for children between 3 and 12 years old. MAIN has been adapted to over ninety languages and is used in over 65 countries. The MAIN analysis focuses on story structure and story complexity which are typically evaluated manually based on scoring sheets. We here investigate the automation of this process using BERT-based classification which already yields promising results.",
}
| We investigate how NLP can help annotate the structure and complexity of oral narrative texts elicited via the Multilingual Assessment Instrument for Narratives (MAIN). MAIN is a theory-based tool designed to evaluate the narrative abilities of children who are learning one or more languages from birth or early in their development. It provides a standardized way to measure how well children can comprehend and produce stories across different languages and referential norms for children between 3 and 12 years old. MAIN has been adapted to over ninety languages and is used in over 65 countries. The MAIN analysis focuses on story structure and story complexity which are typically evaluated manually based on scoring sheets. We here investigate the automation of this process using BERT-based classification which already yields promising results. | [
"Baumann, Timo",
"Eller, Korbinian",
"Gagarina, Natalia"
] | BERT-based Annotation of Oral Texts Elicited via Multilingual Assessment Instrument for Narratives | wnu-1.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |