bibtex_url (stringlengths 41 to 53) | acl_proceedings (stringlengths 38 to 50) | bibtext (stringlengths 528 to 3.02k) | abstract (stringlengths 17 to 2.35k) | authors (sequencelengths 1 to 44) | title (stringlengths 18 to 190) | id (stringlengths 7 to 19) | arxiv_id (stringlengths 10 to 10, nullable ⌀) | GitHub (sequencelengths 1 to 1) | paper_page (stringclasses, 528 values) | n_linked_authors (int64, -1 to 15) | upvotes (int64, -1 to 77) | num_comments (int64, -1 to 10) | n_authors (int64, -1 to 52) | Models (sequencelengths 0 to 100) | Datasets (sequencelengths 0 to 15) | Spaces (sequencelengths 0 to 46) | paper_page_exists_pre_conf (int64, 0 to 1) | type (stringclasses, 2 values)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.wmt-1.94.bib | https://aclanthology.org/2023.wmt-1.94/ | @inproceedings{zhang-2023-iol-research,
title = "{IOL} Research Machine Translation Systems for {WMT}23 Low-Resource {I}ndic Language Translation Shared Task",
author = "Zhang, Wenbo",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.94",
doi = "10.18653/v1/2023.wmt-1.94",
pages = "978--982",
abstract = "This paper describes the IOL Research team{'}s submission systems for the WMT23 low-resource Indic language translation shared task. We participated in 4 language pairs, including en-as, en-mz, en-kha, en-mn. We use transformer based neural network architecture to train our machine translation models. Overall, the core of our system is to improve the quality of low resource translation by utilizing monolingual data through pre-training and data augmentation. We first trained two denoising language models similar to T5 and BART using monolingual data, and then used parallel data to fine-tune the pretrained language models to obtain two multilingual machine translation models. The multilingual machine translation models can be used to translate English monolingual data into other multilingual data, forming multilingual parallel data as augmented data. We trained multiple translation models from scratch using augmented data and real parallel data to build the final submission systems by model ensemble. Experimental results show that our method greatly improves the BLEU scores for translation of these four language pairs.",
}
| This paper describes the IOL Research team's submission systems for the WMT23 low-resource Indic language translation shared task. We participated in 4 language pairs, including en-as, en-mz, en-kha, en-mn. We use transformer based neural network architecture to train our machine translation models. Overall, the core of our system is to improve the quality of low resource translation by utilizing monolingual data through pre-training and data augmentation. We first trained two denoising language models similar to T5 and BART using monolingual data, and then used parallel data to fine-tune the pretrained language models to obtain two multilingual machine translation models. The multilingual machine translation models can be used to translate English monolingual data into other multilingual data, forming multilingual parallel data as augmented data. We trained multiple translation models from scratch using augmented data and real parallel data to build the final submission systems by model ensemble. Experimental results show that our method greatly improves the BLEU scores for translation of these four language pairs. | [
"Zhang, Wenbo"
] | IOL Research Machine Translation Systems for WMT23 Low-Resource Indic Language Translation Shared Task | wmt-1.94 | null | [
""
] | | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
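The wmt-1.94 abstract above describes a multi-step pipeline: pre-train denoising LMs on monolingual data, fine-tune them into multilingual MT models, translate English monolingual text to synthesize parallel data, then train final models on the real + synthetic mixture and ensemble them. Below is a minimal Python sketch of the synthetic-data step only; `translate` and all names are hypothetical stand-ins, not the team's released code.

```python
from typing import Callable, List, Sequence, Tuple

def augment_with_synthetic_pairs(
    english_mono: List[str],
    real_parallel: List[Tuple[str, str]],
    translate: Callable[[str, str], str],   # (english_sentence, lang) -> translation
    target_langs: Sequence[str] = ("as", "mz", "kha", "mn"),
) -> List[Tuple[str, str]]:
    """Mix real parallel data with synthetic pairs produced by an
    intermediate multilingual MT model (hypothetical `translate`)."""
    synthetic = [
        (src, translate(src, lang))
        for src in english_mono
        for lang in target_langs
    ]
    # Per the abstract, the final systems are ensembles of models trained
    # from scratch on this real + synthetic mixture.
    return real_parallel + synthetic
```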
https://aclanthology.org/2023.wmt-1.95.bib | https://aclanthology.org/2023.wmt-1.95/ | @inproceedings{vamvas-etal-2023-trained,
title = "Trained {MT} Metrics Learn to Cope with Machine-translated References",
author = "Vamvas, Jannis and
Domhan, Tobias and
Trenous, Sony and
Sennrich, Rico and
Hasler, Eva",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.95",
doi = "10.18653/v1/2023.wmt-1.95",
pages = "983--995",
abstract = "Neural metrics trained on human evaluations of MT tend to correlate well with human judgments, but their behavior is not fully understood. In this paper, we perform a controlled experiment and compare a baseline metric that has not been trained on human evaluations (Prism) to a trained version of the same metric (Prism+FT). Surprisingly, we find that Prism+FT becomes more robust to machine-translated references, which are a notorious problem in MT evaluation. This suggests that the effects of metric training go beyond the intended effect of improving overall correlation with human judgments.",
}
| Neural metrics trained on human evaluations of MT tend to correlate well with human judgments, but their behavior is not fully understood. In this paper, we perform a controlled experiment and compare a baseline metric that has not been trained on human evaluations (Prism) to a trained version of the same metric (Prism+FT). Surprisingly, we find that Prism+FT becomes more robust to machine-translated references, which are a notorious problem in MT evaluation. This suggests that the effects of metric training go beyond the intended effect of improving overall correlation with human judgments. | [
"Vamvas, Jannis",
"Domhan, Tobias",
"Trenous, Sony",
"Sennrich, Rico",
"Hasler, Eva"
] | Trained MT Metrics Learn to Cope with Machine-translated References | wmt-1.95 | 2312.00536 | [
"https://github.com/amazon-science/prism-finetuned"
] | | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
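The Prism/Prism+FT row above rests on one measurable notion: how much a metric's score moves when a human reference is swapped for a machine-translated one. A hedged sketch of that probe, with an illustrative `score` callable standing in for either metric:

```python
from statistics import mean

def reference_sensitivity(score, hypotheses, human_refs, mt_refs):
    """Average absolute score shift when human references are replaced
    by machine-translated references for the same hypotheses; a metric
    that is robust to MT references shows a small shift."""
    return mean(
        abs(score(hyp, h_ref) - score(hyp, m_ref))
        for hyp, h_ref, m_ref in zip(hypotheses, human_refs, mt_refs)
    )
```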
https://aclanthology.org/2023.wmt-1.96.bib | https://aclanthology.org/2023.wmt-1.96/ | @inproceedings{deutsch-etal-2023-training,
title = "Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph Level",
author = "Deutsch, Daniel and
Juraska, Juraj and
Finkelstein, Mara and
Freitag, Markus",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.96",
doi = "10.18653/v1/2023.wmt-1.96",
pages = "996--1013",
abstract = "As research on machine translation moves to translating text beyond the sentence level, it remains unclear how effective automatic evaluation metrics are at scoring longer translations. In this work, we first propose a method for creating paragraph-level data for training and meta-evaluating metrics from existing sentence-level data. Then, we use these new datasets to benchmark existing sentence-level metrics as well as train learned metrics at the paragraph level. Interestingly, our experimental results demonstrate that using sentence-level metrics to score entire paragraphs is equally as effective as using a metric designed to work at the paragraph level. We speculate this result can be attributed to properties of the task of reference-based evaluation as well as limitations of our datasets with respect to capturing all types of phenomena that occur in paragraph-level translations.",
}
| As research on machine translation moves to translating text beyond the sentence level, it remains unclear how effective automatic evaluation metrics are at scoring longer translations. In this work, we first propose a method for creating paragraph-level data for training and meta-evaluating metrics from existing sentence-level data. Then, we use these new datasets to benchmark existing sentence-level metrics as well as train learned metrics at the paragraph level. Interestingly, our experimental results demonstrate that using sentence-level metrics to score entire paragraphs is equally as effective as using a metric designed to work at the paragraph level. We speculate this result can be attributed to properties of the task of reference-based evaluation as well as limitations of our datasets with respect to capturing all types of phenomena that occur in paragraph-level translations. | [
"Deutsch, Daniel",
"Juraska, Juraj",
"Finkelstein, Mara",
"Freitag, Markus"
] | Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph Level | wmt-1.96 | 2308.13506 | [
""
] | | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
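The paragraph-level row above hinges on building paragraph examples out of sentence-level ones. One plausible reading, sketched below under the assumption that consecutive in-document sentences are concatenated and their quality scores averaged (the paper's exact construction may differ):

```python
from statistics import mean

def to_paragraph_level(sentence_examples, group_size=4):
    """sentence_examples: list of (source, translation, score) tuples
    drawn from the same document, in order; any tail shorter than
    group_size is dropped."""
    paragraphs = []
    for i in range(0, len(sentence_examples) - group_size + 1, group_size):
        chunk = sentence_examples[i : i + group_size]
        src = " ".join(s for s, _, _ in chunk)
        hyp = " ".join(t for _, t, _ in chunk)
        paragraphs.append((src, hyp, mean(score for _, _, score in chunk)))
    return paragraphs
```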
https://aclanthology.org/2023.wmt-1.97.bib | https://aclanthology.org/2023.wmt-1.97/ | @inproceedings{ferrando-etal-2023-automating,
title = "Automating Behavioral Testing in Machine Translation",
author = "Ferrando, Javier and
Sperber, Matthias and
Setiawan, Hendra and
Telaar, Dominic and
Hasan, Sa{\v{s}}a",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.97",
doi = "10.18653/v1/2023.wmt-1.97",
pages = "1014--1030",
abstract = "Behavioral testing in NLP allows fine-grained evaluation of systems by examining their linguistic capabilities through the analysis of input-output behavior. Unfortunately, existing work on behavioral testing in Machine Translation (MT) is currently restricted to largely handcrafted tests covering a limited range of capabilities and languages. To address this limitation, we propose to use Large Language Models (LLMs) to generate a diverse set of source sentences tailored to test the behavior of MT models in a range of situations. We can then verify whether the MT model exhibits the expected behavior through matching candidate sets that are also generated using LLMs. Our approach aims to make behavioral testing of MT systems practical while requiring only minimal human effort. In our experiments, we apply our proposed evaluation framework to assess multiple available MT systems, revealing that while in general pass-rates follow the trends observable from traditional accuracy-based metrics, our method was able to uncover several important differences and potential bugs that go unnoticed when relying only on accuracy.",
}
| Behavioral testing in NLP allows fine-grained evaluation of systems by examining their linguistic capabilities through the analysis of input-output behavior. Unfortunately, existing work on behavioral testing in Machine Translation (MT) is currently restricted to largely handcrafted tests covering a limited range of capabilities and languages. To address this limitation, we propose to use Large Language Models (LLMs) to generate a diverse set of source sentences tailored to test the behavior of MT models in a range of situations. We can then verify whether the MT model exhibits the expected behavior through matching candidate sets that are also generated using LLMs. Our approach aims to make behavioral testing of MT systems practical while requiring only minimal human effort. In our experiments, we apply our proposed evaluation framework to assess multiple available MT systems, revealing that while in general pass-rates follow the trends observable from traditional accuracy-based metrics, our method was able to uncover several important differences and potential bugs that go unnoticed when relying only on accuracy. | [
"Ferr",
"o, Javier",
"Sperber, Matthias",
"Setiawan, Hendra",
"Telaar, Dominic",
"Hasan, Sa{\\v{s}}a"
] | Automating Behavioral Testing in Machine Translation | wmt-1.97 | null | [
"https://github.com/apple/ml-behavioral-testing-for-mt"
] | | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
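The behavioral-testing row above reduces each test to a membership check: the MT output passes if it matches one of the LLM-generated acceptable candidates. A minimal sketch, where `normalize`, `mt_system`, and the test data are hypothetical placeholders:

```python
def normalize(text: str) -> str:
    # Hypothetical matching rule; the paper's candidate matching is itself
    # LLM-assisted rather than a simple string comparison.
    return " ".join(text.lower().split())

def pass_rate(mt_system, tests) -> float:
    """tests: list of (source_sentence, acceptable_translations) pairs."""
    passed = sum(
        any(normalize(mt_system(src)) == normalize(c) for c in candidates)
        for src, candidates in tests
    )
    return passed / len(tests)
```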
https://aclanthology.org/2023.wmt-1.98.bib | https://aclanthology.org/2023.wmt-1.98/ | @inproceedings{pires-etal-2023-one,
title = "One Wide Feedforward Is All You Need",
author = "Pires, Telmo and
Vilarinho Lopes, Ant{\'o}nio and
Assogba, Yannick and
Setiawan, Hendra",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.98",
doi = "10.18653/v1/2023.wmt-1.98",
pages = "1031--1044",
abstract = "The Transformer architecture has two main non-embedding components: Attention and the Feed Forward Network (FFN). Attention captures interdependencies between words regardless of their position, while the FFN non-linearly transforms each input token independently. In this work we explore the role of the FFN, and find that despite taking up a significant fraction of the model{'}s parameters, it is highly redundant. Concretely, we are able to substantially reduce the number of parameters with only a modest drop in accuracy by removing the FFN on the decoder layers and sharing a single FFN across the encoder. Finally we scale this architecture back to its original size by increasing the hidden dimension of the shared FFN, achieving substantial gains in both accuracy and latency with respect to the original Transformer Big.",
}
| The Transformer architecture has two main non-embedding components: Attention and the Feed Forward Network (FFN). Attention captures interdependencies between words regardless of their position, while the FFN non-linearly transforms each input token independently. In this work we explore the role of the FFN, and find that despite taking up a significant fraction of the model's parameters, it is highly redundant. Concretely, we are able to substantially reduce the number of parameters with only a modest drop in accuracy by removing the FFN on the decoder layers and sharing a single FFN across the encoder. Finally we scale this architecture back to its original size by increasing the hidden dimension of the shared FFN, achieving substantial gains in both accuracy and latency with respect to the original Transformer Big. | [
"Pires, Telmo",
"Vilarinho Lopes, Ant{\\'o}nio",
"Assogba, Yannick",
"Setiawan, Hendra"
] | One Wide Feedforward Is All You Need | wmt-1.98 | 2309.01826 | [
""
] | https://huggingface.co/papers/2309.01826 | 3 | 31 | 1 | 4 | [] | [] | [] | 1 | Poster |
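The One Wide Feedforward row above makes a concrete architectural claim: decoder FFNs can be dropped and a single widened FFN shared across all encoder layers. A minimal PyTorch sketch of the shared-FFN encoder side, with illustrative hyperparameters rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class SharedFFNEncoderLayer(nn.Module):
    """Post-norm Transformer encoder layer whose FFN is a shared module."""
    def __init__(self, d_model: int, n_heads: int, shared_ffn: nn.Module):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = shared_ffn  # the same module object in every layer
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm1(x + self.attn(x, x, x, need_weights=False)[0])
        return self.norm2(x + self.ffn(x))

d_model, d_ff_wide = 512, 8192  # widening the shared FFN recovers capacity
shared_ffn = nn.Sequential(
    nn.Linear(d_model, d_ff_wide), nn.ReLU(), nn.Linear(d_ff_wide, d_model)
)
layers = nn.ModuleList(
    [SharedFFNEncoderLayer(d_model, 8, shared_ffn) for _ in range(6)]
)
```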
https://aclanthology.org/2023.wmt-1.99.bib | https://aclanthology.org/2023.wmt-1.99/ | @inproceedings{aepli-etal-2023-benchmark,
title = "A Benchmark for Evaluating Machine Translation Metrics on Dialects without Standard Orthography",
author = {Aepli, No{\"e}mi and
Amrhein, Chantal and
Schottmann, Florian and
Sennrich, Rico},
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.99",
doi = "10.18653/v1/2023.wmt-1.99",
pages = "1045--1065",
abstract = "For sensible progress in natural language processing, it is important that we are aware of the limitations of the evaluation metrics we use. In this work, we evaluate how robust metrics are to non-standardized dialects, i.e. spelling differences in language varieties that do not have a standard orthography. To investigate this, we collect a dataset of human translations and human judgments for automatic machine translations from English to two Swiss German dialects. We further create a challenge set for dialect variation and benchmark existing metrics{'} performances. Our results show that existing metrics cannot reliably evaluate Swiss German text generation outputs, especially on segment level. We propose initial design adaptations that increase robustness in the face of non-standardized dialects, although there remains much room for further improvement. The dataset, code, and models are available here: https://github.com/textshuttle/dialect{\_}eval",
}
| For sensible progress in natural language processing, it is important that we are aware of the limitations of the evaluation metrics we use. In this work, we evaluate how robust metrics are to non-standardized dialects, i.e. spelling differences in language varieties that do not have a standard orthography. To investigate this, we collect a dataset of human translations and human judgments for automatic machine translations from English to two Swiss German dialects. We further create a challenge set for dialect variation and benchmark existing metrics' performances. Our results show that existing metrics cannot reliably evaluate Swiss German text generation outputs, especially on segment level. We propose initial design adaptations that increase robustness in the face of non-standardized dialects, although there remains much room for further improvement. The dataset, code, and models are available here: https://github.com/textshuttle/dialect_eval | [
"Aepli, No{\\\"e}mi",
"Amrhein, Chantal",
"Schottmann, Florian",
"Sennrich, Rico"
] | A Benchmark for Evaluating Machine Translation Metrics on Dialects without Standard Orthography | wmt-1.99 | 2311.16865 | [
"https://github.com/microsofttranslator/ntrex"
] | | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
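The dialect-benchmark row above implies a simple robustness probe: a metric fit for non-standardized orthography should score a hypothesis similarly against spelling variants of the same reference. A hedged sketch with a hypothetical `metric` callable:

```python
def orthography_gap(metric, hypothesis, reference, spelling_variants):
    """Largest score deviation across dialect-spelling variants of the
    same reference; a robust metric keeps this gap small."""
    base = metric(hypothesis, reference)
    return max(abs(metric(hypothesis, v) - base) for v in spelling_variants)
```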
https://aclanthology.org/2023.wmt-1.100.bib | https://aclanthology.org/2023.wmt-1.100/ | @inproceedings{fernandes-etal-2023-devil,
title = "The Devil Is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation",
author = "Fernandes, Patrick and
Deutsch, Daniel and
Finkelstein, Mara and
Riley, Parker and
Martins, Andr{\'e} and
Neubig, Graham and
Garg, Ankush and
Clark, Jonathan and
Freitag, Markus and
Firat, Orhan",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.100",
doi = "10.18653/v1/2023.wmt-1.100",
pages = "1066--1083",
abstract = "Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.",
}
| Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations. | [
"Fern",
"es, Patrick",
"Deutsch, Daniel",
"Finkelstein, Mara",
"Riley, Parker",
"Martins, Andr{\\'e}",
"Neubig, Graham",
"Garg, Ankush",
"Clark, Jonathan",
"Freitag, Markus",
"Firat, Orhan"
] | The Devil Is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation | wmt-1.100 | 2308.07286 | [
""
] | https://huggingface.co/papers/2308.07286 | 5 | 5 | 0 | 10 | [] | [] | [] | 1 | Poster |
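The AutoMQM row above centers on prompting an LLM for MQM-style error spans rather than a single scalar score. A hedged sketch of such a prompt and penalty aggregation; the prompt wording, the `llm` callable, and the severity weights are illustrative assumptions, not the paper's exact setup:

```python
import json

PROMPT = """Identify translation errors in the candidate below.
Return a JSON list of {{"span": ..., "category": ..., "severity": ...}}.
Source: {src}
Candidate: {hyp}"""

SEVERITY_WEIGHTS = {"minor": 1, "major": 5}  # MQM-style penalty weights

def automqm_score(llm, src: str, hyp: str) -> float:
    """Score a translation by summing penalties over LLM-annotated error
    spans; higher (closer to 0) is better."""
    errors = json.loads(llm(PROMPT.format(src=src, hyp=hyp)))
    return -sum(SEVERITY_WEIGHTS.get(e["severity"], 1) for e in errors)
```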