Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis
Abstract
Neural metrics for machine translation (MT) evaluation have become increasingly prominent due to their superior correlation with human judgments compared to traditional lexical metrics. Researchers have therefore utilized neural metrics through quality-informed decoding strategies, achieving better results than likelihood-based methods. With the rise of Large Language Models (LLMs), preference-based alignment techniques have gained attention for their potential to enhance translation quality by optimizing model weights directly on preferences induced by quality estimators. This study focuses on Contrastive Preference Optimization (CPO) and conducts extensive experiments to evaluate the impact of preference-based alignment on translation quality. Our findings indicate that while CPO consistently outperforms Supervised Fine-Tuning (SFT) on high-quality data with regard to the alignment metric, it may lead to instability across downstream evaluation metrics, particularly between neural and lexical ones. Additionally, we demonstrate that relying solely on the base model for generating candidate translations achieves performance comparable to using multiple external systems, while ensuring better consistency across downstream metrics.
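To make the alignment objective concrete, below is a minimal sketch of the per-pair Contrastive Preference Optimization loss as described in the CPO literature: a DPO-style sigmoid preference term computed without a reference model, plus a negative log-likelihood term on the preferred translation. The function name, the `beta` value, and the plain-float interface are illustrative assumptions, not the paper's implementation; in practice the log-probabilities would come from the LLM and the chosen/rejected pair would be ranked by a neural quality estimator such as COMET.

```python
import math

def cpo_loss(logp_chosen: float, logp_rejected: float, beta: float = 0.1) -> float:
    """Illustrative CPO loss for a single preference pair.

    logp_chosen / logp_rejected: the policy's sequence log-probabilities of
    the preferred and dispreferred translations. Unlike DPO, no reference
    model enters the margin. beta is a scaling hyperparameter (assumed value).
    """
    # Preference term: -log sigmoid(beta * (log p(y_w) - log p(y_l)))
    margin = beta * (logp_chosen - logp_rejected)
    prefer_term = math.log(1.0 + math.exp(-margin))
    # Behaviour-cloning (NLL) term on the preferred translation,
    # which keeps the policy close to high-quality outputs.
    nll_term = -logp_chosen
    return prefer_term + nll_term
```

As a sanity check, the loss is lower when the model already assigns more probability to the preferred translation: `cpo_loss(-10.0, -20.0)` is smaller than `cpo_loss(-20.0, -10.0)`.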
Community
Link to models and datasets: https://huggingface.co/collections/artefactory/translation-alignment-analysis-66f3e56669bed67108c309ea