diff --git "a/20240921/2408.11926v2.json" "b/20240921/2408.11926v2.json" new file mode 100644--- /dev/null +++ "b/20240921/2408.11926v2.json" @@ -0,0 +1,435 @@ +{ + "title": "Defining Boundaries: The Impact of Domain Specification on Cross-Language and Cross-Domain Transfer in Machine Translation", + "abstract": "Recent advancements in neural machine translation (NMT) have revolutionized the field, yet the dependency on extensive parallel corpora limits progress for low-resource languages and domains. Cross-lingual transfer learning offers a promising solution by utilizing data from high-resource languages but often struggles with in-domain NMT. This paper investigates zero-shot cross-lingual domain adaptation for NMT, focusing on the impact of domain specification and linguistic factors on transfer effectiveness. Using English as the source language and Spanish for fine-tuning, we evaluate multiple target languages, including Portuguese, Italian, French, Czech, Polish, and Greek. We demonstrate that both language-specific and domain-specific factors influence transfer effectiveness, with domain characteristics playing a crucial role in determining cross-domain transfer potential. We also explore the feasibility of zero-shot cross-lingual cross-domain transfer, providing insights into which domains are more responsive to transfer and why. Our results show the importance of well-defined domain boundaries and transparency in experimental setups for in-domain transfer learning.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Advancements in neural machine translation (NMT) have transformed the field, but these systems often require large parallel corpora, which are scarce for low-resource languages. Cross-lingual transfer learning has emerged as a solution, leveraging high-resource language data to improve translation quality for low-resource languages. However, a critical limitation is the pre-training on heterogeneous data, which hampers the translation of specialized texts due to a mismatch between training data and the target domain. Domain adaptation mitigates this by adjusting NMT models to specific domains, enhancing translation performance for specialized content such as legal or medical texts. Despite advancements, the intersection of cross-lingual transfer learning and domain adaptation\u2014specifically zero-shot cross-lingual domain adaptation\u2014remains under-explored.\nIn this paper, we investigate zero-shot cross-lingual domain adaptation for NMT, integrating transfer learning across languages with domain adaptation. The objective is to fine-tune multilingual pre-trained NMT models with domain-specific data from a resource-rich language pair, capturing domain-specific knowledge and transferring it to low-resource languages within the same domain. We focus our study on the following questions:\nEnhancement of Domain-Specific Quality: Evaluating whether the domain-specific quality of machine translation (MT) output for one language pair can be improved by fine-tuning the model on domain-relevant data from another language pair.\nTransferability of Domains: Identifying the transferable and non-transferable domains within the scope of zero-shot cross-lingual domain adaptation for NMT.\nInfluence of Language-Specific vs. 
Domain-Specific Factors: Analyzing the relative influence of language-specific and domain-specific factors on the effectiveness of zero-shot cross-lingual domain adaptation.\n###figure_1### Languages explored include English as the source, Spanish for fine-tuning, and Portuguese, Italian, French, Czech, Polish, and Greek as evaluation targets, representing varying linguistic similarities. Results demonstrate that domain-specific translation quality improves through zero-shot cross-lingual domain adaptation, with specialized domains (e.g., medical, legal, IT) benefiting more than mixed domains (e.g., movie subtitles, TED talks). The study underscores the critical role of well-defined domain data in effectively transferring domain-specific knowledge across languages and domains." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background and Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Transfer Learning Across Languages", + "text": "Building on the advancements in NMT, particularly the Transformer architecture (Vaswani et al., 2017 ###reference_b30###), transfer learning techniques have gained significant interest in recent years. The Transformer\u2019s self-attention mechanism allows for effective modeling of long-range dependencies, leading to state-of-the-art performance in machine translation (MT) tasks. However, the success of these models is heavily reliant on the availability of large-scale parallel corpora, which are often scarce for low-resource languages. Cross-lingual transfer learning provides a promising solution to this data scarcity challenge by leveraging knowledge acquired from high-resource languages. The common approach uses multilingual pre-trained models like mT5 (Xue et al., 2021 ###reference_b33###), mBERT (Devlin et al., 2019 ###reference_b7###) and XLM-R (Conneau et al., 2020 ###reference_b3###) that are initially trained on large multilingual corpora to capture cross-lingual representations. These models can then be fine-tuned on limited parallel data for low-resource languages, transferring knowledge from the high-resource languages present in the pre-training data (Costa-juss\u00e0 et al., 2022 ###reference_b5###; Fan et al., 2021 ###reference_b8###). Importantly, the effectiveness of this transfer depends on the linguistic proximity between the languages involved (Dabre et al., 2017 ###reference_b6###). Extending further, recent work explores zero-shot translation capabilities for unseen language pairs when no parallel data exists, relying solely on multilingual pre-training (Ji et al., 2020 ###reference_b13###; Firat et al., 2016 ###reference_b10###). Approaches include pivoting through high-resource languages (Johnson et al., 2017 ###reference_b14###), modifying architectures to build universal encoders that map diverse languages into shared representations (Gu and Feng, 2022 ###reference_b12###), and auxiliary training objectives encouraging cross-lingual similarity (Al-Shedivat and Parikh, 2019 ###reference_b1###). While promising, zero-shot translation remains challenging due to linguistic dissimilarities between languages and the model\u2019s limited ability to generalize across diverse language pairs (Philippy et al., 2023 ###reference_b21###; Lin et al., 2019 ###reference_b18###)."
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Domain Adaptation", + "text": "A separate key challenge in NMT is domain adaptation, as general-purpose NMT systems struggle to effectively translate specialized domains like legal or medical texts due to vocabulary and stylistic mismatches from their training data (Koehn and Knowles, 2017 ###reference_b16###). A crucial aspect is how domains are defined. The conventional view defines a domain as \u201ca corpus from a specific source, which may differ from other domains in terms of topic, genre, style, level of formality, etc.\u201d (Koehn and Knowles, 2017 ###reference_b16###). van der Wees et al. (2017 ###reference_b29###) provide a more comprehensive view, defining a domain as a combination of provenance, topic, and genre, where provenance refers to the source of a given text, the topic pertains to the subject matter, and genre encompasses the function, register, syntax, and style of the text, as defined by Santini (2004 ###reference_b26###). Of these, topic and genre are regarded as the most critical complementary features for characterizing a domain effectively (Saunders, 2022 ###reference_b27###). Plank (2016 ###reference_b22###), on the other hand, argues that topic and genre may not fully capture all domain factors, suggesting other aspects like sentence type, language, etc. Despite this, much of the research on domain adaptation in MT has mainly focused on genre as the primary domain differentiator, constructing experiments around datasets like OpenSubtitles111http://www.opensubtitles.org/ ###reference_ww.opensubtitles.org/### or TED (Cooper Stickland et al., 2021 ###reference_b4###; Lai et al., 2022 ###reference_b17###; Verma et al., 2022 ###reference_b31###),222https://www.ted.com ###reference_www.ted.com### which offer data within the same genre while overlooking even topic specifics. However, a more comprehensive approach should account for both topic and genre, as well as other domain-specific language patterns that may impact translation quality." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Zero-Shot Cross-Lingual Domain Adaptation", + "text": "Building on transfer learning across languages and domain adaptation, zero-shot cross-lingual domain adaptation tackles adapting multilingual NMT to specialized domains for languages with limited parallel in-domain data. The approach leverages a multilingual pre-trained NMT model fine-tuned on domain-specific data from a high-resource language pair, enabling it to capture domain knowledge. This adapted model can then translate target languages within the same domain, transferring domain knowledge in a zero-shot manner. The effectiveness of this approach depends on several influencing factors, including the linguistic proximity between the pivot and target languages, the nature and complexity of the domain itself, as well as the composition of the initial general-purpose pre-training data. Moreover, as the model transitions from general pre-training to specialized fine-tuning, there is also a risk of catastrophic forgetting where previously learned general knowledge is overwritten (Saunders, 2022 ###reference_b27###). Approaches like embedding freezing (Grosso et al., 2022 ###reference_b11###) have proved effective in mitigating this issue. While Grosso et al. 
(2022 ###reference_b11###) demonstrated the feasibility of zero-shot cross-lingual domain adaptation for the medical domain, comprehensive analysis across diverse languages and domains is still needed to understand these influencing factors and their relative impacts fully." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experimental Setup", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Domains, Datasets, and Languages", + "text": "We curate data from six different domains, encompassing both specialized areas with well-defined topics and genres, as well as more mixed domains with distinct genres but diverse topics. The three main specialized domains under focus are medical (documents related to medicinal products and their use), legal (European Union laws), and information technology (IT) (localization documents and technical user manuals). Additionally, we include two domains (movie subtitles and TED talks) that do not strictly adhere to the conventional definition of a domain, as they exhibit distinct genres but lack a specific topical focus. These two are primarily included for experimental purposes to understand the importance of domain specification in domain adaptation for NMT. The sixth domain we include is a general-purpose domain (sample of Wikipedia articles and online newspapers) to assess whether improvements in translation quality are consistent across all domains or specific to those that diverge significantly from the pre-training data (see Table 1 ###reference_###).\nFor fine-tuning the model, we use English and Spanish as the source and target languages, respectively, across all six domains. When evaluating the fine-tuned models, English remains the source language, while the target languages are chosen based on their linguistic relatedness to Spanish, the pivot language used for fine-tuning. The target languages for evaluation are Portuguese, Italian, French, Czech, Polish, and Greek. By selecting English as the source language for both fine-tuning and evaluation, we ensure consistency across results while potentially benefiting from shared linguistic properties between English and the target languages. The test data for each domain is parallel across all languages used in the experiments." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Data Preprocessing", + "text": "To prepare the datasets for fine-tuning and evaluation, we apply a standardized preprocessing pipeline. For the OPUS-sourced datasets333https://opus.nlpl.eu/ ###reference_opus.nlpl.eu/### in the medical, IT, movie subtitles, and TED talks domains, we download the English-to-Spanish parallel data as well as the English-to-target-language data for each of the six target languages. The Wikipedia dataset for English-to-Spanish is also obtained from OPUS to represent the general domain training and validation sets. For the multilingual legal domain data from MultiEURLEX,444https://github.com/nlpaueb/multi-eurlex ###reference_### as well as the NTREX-128 test set555https://github.com/MicrosoftTranslator/NTREX ###reference_EX### representing the general domain, we retrieve the corpora directly from their respective repositories. As the OPUS datasets are originally aligned at the sentence level, we first clean and filter the data using a consistent methodology across all domains. 
This involves removing sentences with token lengths outside the range of 3 to 100, irregular punctuation, duplicates, and sentences exhibiting a similarity of 60% or higher. Any source or target sentences that do not meet these criteria are discarded from the parallel data in each domain to mitigate potential noise. Next, we preprocess the data to extract a set of 1,000 parallel sentences as the test data for all eight languages (English and the seven target languages) in each domain.\nThese parallel test sentences are then removed from the English-Spanish parallel dataset, with the remaining data split into a validation set of 1,000 sentences and a training set of 150,000 sentences for each domain. For the NTREX-128 test set, we do not apply any data cleaning and simply extract the first 1,000 sentences for each of the eight languages to create the general domain test sets. The MultiEURLEX corpus, being document-aligned, requires aligning the data at the sentence level for the training, validation, and test splits provided in the corpus before the cleaning steps applied to the OPUS datasets." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Model", + "text": "The baseline model employed in our experiments is the M2M-100 (many-to-many for 100 languages) multilingual NMT model (Fan et al., 2021 ###reference_b8###). M2M-100 is a sequence-to-sequence Transformer model capable of translating directly between any pair of its supported 100 languages without relying on English as an intermediary. The original M2M-100 model was trained on a diverse parallel corpus spanning 100 languages, curated through a novel data mining approach called the \u201cbridge language family mining strategy\u201d (Fan et al., 2021 ###reference_b8###) and mined from the Common Crawl corpus.666https://commoncrawl.org/ ###reference_commoncrawl.org/### While all languages used in our experiments, except for Italian, are considered bridge languages in the M2M-100 model, the lack of information on the exact data sizes mined for each language limits our ability to comprehensively analyze how the amount of pre-training data affects the language-specific performance of the model. For our experiments, we utilize the m2m100\\_418M variant777https://huggingface.co/facebook/m2m100_418M/tree/main ###reference_M/tree/main### with 418 million parameters to meet the hardware limitations of our project." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Implementation", + "text": "The implementation of the M2M-100 model is based on the Transformers library from Hugging Face.888https://huggingface.co/docs/transformers/en/index ###reference_n/index### We fine-tune the model on the English-Spanish parallel dataset for each of the six domains separately, using the same set of configurations, including a learning rate of 1e-7, a batch size of 10, dropout of 0.1, weight decay of 0.0, label smoothing of 0.2, AdamW optimizer with betas of 0.9 and 0.98, a maximum input/output length of 128 tokens, mixed precision (FP16) training, and a maximum of 60,000 training steps, with epoch-level validation. Due to resource limitations, we do not train the models until convergence. We freeze the embedding layers of the encoder to prevent catastrophic forgetting of the pre-trained representations. The models are trained on a single NVIDIA-T4 GPU (each for around seven hours), and the best checkpoint is saved for inference. 
For inference, we load each fine-tuned M2M-100 model and its corresponding tokenizer and generate the translated output using beam search decoding with a beam size of 4. The baselines involve evaluating the initial m2m100\\_418M checkpoint on all target languages in each domain separately, using the same inference configurations as for the fine-tuned models. All experiments are conducted in the Google Colab environment.999https://colab.research.google.com/ ###reference_###101010All models and data used in the experiments will be made public in the final version.\nWe include BLEU (Papineni et al., 2002 ###reference_b20###) as one of our evaluation metrics due to its widespread adoption within the MT research. We rely on sacreBLEU (Post, 2018 ###reference_b23###) (13a tokenizer) and its implementation of paired bootstrap resampling (Koehn, 2004 ###reference_b15###) with 300 resampling trials and a p-value threshold of 0.05. We also use COMET (Rei et al., 2020 ###reference_b24###), for which we employ comet-compare (bootstrap resampling and a paired t-test, 300 resamples and a p-value of 0.05)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "First, we examine the effectiveness of fine-tuning a massively multilingual pre-trained model, M2M-100, on domain-specific data from an English-Spanish language pair and evaluate its performance across various target languages within the same domain.\nTable 2 ###reference_### presents the main results, comparing the pre-trained baseline model\u2019s performance against the fine-tuned models across six domains. Across the specialized medical, legal, and IT domains, the fine-tuned models consistently outperform the baseline model, achieving higher BLEU and COMET scores for all language pairs (see Appendix A ###reference_### for CometKiwi scores), with few exceptions depending on the evaluation metric. This improvement demonstrates the effectiveness of domain adaptation through fine-tuning, enabling the model to capture domain-specific knowledge and vocabulary more effectively within well-defined, specialized domains.\nHowever, the degree of improvement varies across domains and target languages. For the medical domain, while fine-tuning significantly improves the results for the English-Spanish language pair used for fine-tuning, the improvement becomes less pronounced as the target language diverges linguistically from the pivot language (Spanish). This trend is less evident in the legal and IT domains, where substantial improvements are observed across almost all language pairs, even for those linguistically distant from Spanish. In contrast, for the mixed domains of movie subtitles and TED talks, which lack a specific topical focus but exhibit distinct genres, the improvements from fine-tuning are more modest, although still consistent across most language pairs. Notably, the improvements are more consistent for the TED talks domain than the movie subtitles domain. Moreover, even the performance for the English-Spanish language pair used for fine-tuning shows only marginal improvements compared to the specialized domains. In the general domain, the fine-tuned model exhibits mixed performance, with statistically insignificant improvements in some language pairs and slight decreases in others compared to the baseline model. 
This could be attributed to the composition of the pre-training data, which contains substantial general-domain content, potentially limiting the benefits of fine-tuning on a specific language pair, as the baseline model is already well-trained on such data.\nTurning to the subject of domain transferability, Table 3 ###reference_### illustrates the results of zero-shot cross-lingual cross-domain transfer, where the model is fine-tuned on movie subtitles or TED talks mixed domains and evaluated on the specialized medical, legal, and IT domains across all language pairs. The results exhibit mixed performance, depending on the domain and language pair, with improvements generally more pronounced, while not statistically significant, when evaluated using COMET than BLEU. Notably, fine-tuning on TED talks leads to more effective cross-domain transfer compared to movie subtitles. Regarding the target specialized domains, cross-domain transfer appears more effective for the IT domain than legal or medical domains. This observation suggests certain domains may be more responsive to cross-domain transfer due to their inherent characteristics." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Enhancing Domain-Specific Translation Quality", + "text": "The main results (see Table 2 ###reference_###) imply that the domain-specific quality of MT output for one language pair can indeed be enhanced by fine-tuning the model on domain-relevant data from another language pair. The fine-tuned models consistently outperform the baseline across the specialized medical, legal, and IT domains, achieving higher BLEU and COMET scores for almost all language pairs. This improvement demonstrates the effectiveness of domain adaptation through fine-tuning, enabling the model to capture domain-specific knowledge and vocabulary more effectively within well-defined domains. While the degree of improvement varies across domains and target languages, the consistent gains observed underscore the potential of zero-shot cross-lingual domain adaptation. By utilizing domain-specific data from a single language pair, the model\u2019s performance can be enhanced for multiple target languages within the same domain, even for those linguistically distant from the fine-tuning language pair." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Factors Influencing Domain Transferability", + "text": "The results presented in Tables 2 ###reference_### and 3 ###reference_### reveal varying levels of improvement in zero-shot cross-lingual domain adaptation across specialized domains, as well as differences in how responsive certain domains are to cross-domain transfer. To investigate the factors behind these observations, we take a deeper look into the characteristics of each domain. One key factor that emerges is the linguistic complexity of the domains, as evidenced by the average sentence length and vocabulary size shown in Table 4 ###reference_###. 
The movie subtitles domain has roughly half the average sentence length and a 20% smaller vocabulary size in the training set compared to the TED talks domain, which could explain why fine-tuning on the more complex TED talks domain results in better generalization to specialized domains.\nAmong the specialized domains, the IT domain stands out with the smallest average sentence length and vocabulary size, potentially making it easier for models to generalize and adapt to this domain. In contrast, the legal domain appears to be the most linguistically complex in terms of sentence length and vocabulary size. However, certain unique aspects of legal language may paradoxically facilitate transfer to this domain. The distinctive language style prevalent in legal texts, featuring archaic vocabulary, repetitive syntax patterns, and convoluted sentences, can, while increasing overall complexity, also aid models in adapting to the legal domain\u2019s language use. Additionally, the substantially longer average sentence length in the legal domain (around three times longer than the IT domain and two times longer than the medical domain) means that when training for a fixed number of epochs, the model was exposed to more training data in terms of the total number of words. This increased exposure to legal language likely contributed to better adaptation to this domain.\nThese findings demonstrate that domain transferability is not solely determined by a domain\u2019s topical specificity or genre but is also influenced by its linguistic properties and inherent characteristics. While domains exhibiting greater linguistic complexity in terms of sentence length and vocabulary size tend to be more challenging for effective cross-lingual adaptation and cross-domain transfer, certain linguistic features can actually facilitate transfer. Consequently, in addition to the domain\u2019s topical and genre specialization, a nuanced understanding of a domain\u2019s inherent linguistic characteristics, as well as the amount of exposure to a domain\u2019s language during training, is required for optimizing cross-lingual adaptation and cross-domain transfer. Fundamental properties like vocabulary size, sentence complexity, and consistent language styles emerge as key factors influencing transferability potential across domains." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Influence of Language-Specific and Domain-Specific Factors", + "text": "The results also show that both language-specific and domain-specific factors influence the task, with language influences being more prominent in the main results, while domain-specific factors play a crucial role in determining cross-domain transfer effectiveness. The main results (see Table 2 ###reference_###) highlight language influences, with larger gains observed for the fine-tuning language pair (English-Spanish) and less pronounced improvements as the linguistic distance from the pivot language increases. This trend shows the role of language-specific factors and linguistic proximity in the effectiveness of cross-lingual domain adaptation. 
However, domain-specific factors also play an essential role, as evident from the varied performance across specialized domains discussed in the previous section.\nThe findings suggest that while fine-tuning the model on closely related language pairs is advantageous, inherent domain characteristics ultimately determine the limits of both zero-shot cross-lingual domain adaptation and cross-domain transfer. Additionally, as shown in Table 5 ###reference_###, the specificity of domain vocabularies in specialized domains, where terminologies, though highly specialized, are consistent across languages, facilitates language transfer even for linguistically distant languages. This characteristic can be attributed to the presence of loanwords, which are fully or partly assimilated from one language into another, or terms that often remain untranslated. Furthermore, the influence of the pre-training data cannot be overlooked. Seven out of the eight languages involved in the experiments were considered bridge languages in the M2M-100 model, meaning the model was trained on comparatively larger amounts of data in these languages, capturing general language knowledge (the only exception is Italian, which, while not a bridge language in M2M-100, is still considered a relatively high-resource language). This pre-existing language knowledge likely contributes to the observed language transfer capabilities.\nTherefore, while language-specific factors play a more prominent role in the main task of zero-shot cross-lingual domain adaptation, domain-specific factors are crucial determinants of cross-domain transfer effectiveness. These findings highlight the importance of considering both language and domain aspects when adapting NMT systems for specialized domains. The results also demonstrate that domains can exhibit distinct linguistic properties outside the notions of topic and genre, which significantly impact the effectiveness of cross-lingual adaptation and cross-domain transfer.\nConsequently, a more nuanced understanding of a domain\u2019s inherent linguistic characteristics is crucial for optimizing these processes. By emphasizing the influence of domain-specific factors on transfer performance, this study highlights the importance of revisiting the traditional definitions of \u201cdomain\u201d in MT. Current research often overlooks this distinction, relying primarily on topical or genre-based domain classifications. However, the findings highlight the need for a more comprehensive characterization that accounts for linguistic complexities and domain-specific language patterns to develop effective strategies for tailoring NMT systems to diverse specialized domains." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "The study explored zero-shot cross-lingual domain adaptation for NMT to bridge the gap between the broad capabilities of large multilingual models and their limited performance on specialized domains. Experiments across six domains showed consistent translation quality improvements for most target languages compared to the pre-trained baseline when fine-tuning on in-domain data from a pivot language pair. However, the degree of improvement varied based on the linguistic proximity between pivot and target languages, as well as the domain\u2019s linguistic complexity and data variety. The feasibility of zero-shot cross-lingual cross-domain transfer, using models fine-tuned on mixed domains for specialized domains, was also investigated. 
While achievable, effectiveness depended on the properties of pivot and target domains, with domains exhibiting more consistent language being more responsive to cross-domain transfer. Future work can explore a broader range of specialized domains and languages and focus specifically on cross-domain transfer techniques. Including more diverse language families will also enable a better understanding of how language characteristics interact with domain transferability potential." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Limitations", + "text": "The findings of this research are subject to several limitations, the first and primary one being the use of an existing pre-trained model (M2M-100) rather than pre-training a model specifically for the languages and domains included in the experiments due to resource constraints. Pre-training a model from scratch would have allowed for better control over the pre-training data, ensuring minimal overlap with the domain-specific data used for fine-tuning. Furthermore, the models are not fine-tuned until convergence, which potentially impacts the full realization of their capabilities. Additionally, the experiments focus on languages primarily from the Indo-European family, limiting the insights into the influence of linguistic relatedness and transferability potential across more diverse language pairs. Addressing these limitations is crucial for future research to provide a more comprehensive assessment of the proposed approach." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Ethical Considerations", + "text": "In this paper, we used open-source datasets and models that were already published. Therefore, no additional ethical concerns arise from our models or results." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Complete Evaluation Results", + "text": "In this section, we report full results, illustrating the translation performance on all language pairs for all models, evaluated using BLEU, COMET, and CometKiwi." + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DomainDatasets
MedicalEMEA-V3 (Tiedemann, 2012)\n
LegalMultiEURLEX (Chalkidis et\u00a0al., 2021)\n
IT\n\n\n\n\n\n\n\n
Ubuntu, KDE4, GNOME,
PHP, OpenOffice (Tiedemann, 2012)\n
\n
Movie SubtitlesOpenSubtitles (Lison et\u00a0al., 2018)\n
TED TalksTED2020 (Reimers and Gurevych, 2020)\n
General\n\n\n\n\n\n\n\n
Wikipedia (Wo\u0142k and Marasek, 2014)\n
NTREX-128 (Federmann et\u00a0al., 2022)\n
\n
\n
\n
Table 1: Datasets used for each domain.
\n
", + "capture": "Table 1: Datasets used for each domain." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192esen\u2192pten\u2192iten\u2192fren\u2192csen\u2192plen\u2192el
BLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMET
med_base360.845131.70.852729.70.848128.60.817922.60.8582180.823429.10.8595
med_ft43.50.866334.40.864530.60.856329.20.829322.50.865918.10.834228.80.8644
leg_base44.10.849442.90.865336.90.872141.80.853330.60.890533.20.879640.70.8972
leg_ft49.60.871443.30.877438.30.882743.70.869433.40.903834.20.890643.90.9059
it_base34.20.795426.60.7929.20.796425.20.738619.10.787221.20.781827.10.7972
it_ft44.40.834429.50.810433.40.815129.50.761919.70.814722.50.807828.50.816
sub_base22.80.7548200.767817.80.744616.10.6988150.759414.60.747215.70.7768
sub_ft24.70.762721.20.774118.80.753217.50.707315.30.76615.10.756615.60.7725
ted_base35.90.818931.10.823829.30.81433.70.792621.80.808216.20.7857290.8439
ted_ft37.80.826231.90.830329.60.8199350.795521.90.819416.60.793429.20.8475
gen_base32.70.78630.50.803231.50.800426.10.753425.10.783118.40.76326.20.8138
gen_ft32.50.786830.80.804331.30.800225.1\n0.75324.70.787418.10.760425.4\n0.8065\n
\n
\n
Table 2: Main results, comparing the performance of the baseline (base) and fine-tuned (ft) models using BLEU and COMET scores across six domains: medical (med), legal (leg), IT (it), movie subtitles (sub), TED talks (ted), and a general domain (gen). Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05).
\n
", + "capture": "Table 2: Main results, comparing the performance of the baseline (base) and fine-tuned (ft) models using BLEU and COMET scores across six domains: medical (med), legal (leg), IT (it), movie subtitles (sub), TED talks (ted), and a general domain (gen). Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05)." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192esen\u2192pten\u2192iten\u2192fren\u2192csen\u2192plen\u2192el
BLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMETBLEUCOMET
med_base360.845131.70.852729.70.848128.60.817922.60.8582180.823429.10.8595
sub2med35.3\n0.8416\n30.4\n0.8473\n28.6\n0.8444\n27.9\n0.8133\n21.6\n0.8506\n16.9\n0.815\n27.4\n0.8491\n
ted2med35.70.844431.40.852829.40.847728\n0.815522.10.859517.4\n0.821728.4\n0.8585
med_ft 43.50.866334.40.864530.60.856329.20.829322.50.865918.10.834228.80.8644
leg_base44.10.849442.90.865336.90.872141.80.853330.60.890533.20.879640.70.8972
sub2leg41.6\n0.8424\n40.3\n0.8614\n35.1\n0.8645\n40.2\n0.851429.6\n0.8866\n30.7\n0.8729\n38.4\n0.8919\n
ted2leg43.4\n0.850342.1\n0.867136.60.8717420.858931.30.892332.80.882341.30.899
leg_ft49.60.871443.30.877438.30.882743.70.869433.40.903834.20.890643.90.9059
it_base34.20.795426.60.7929.20.796425.20.738619.10.787221.20.781827.10.7972
sub2it34.20.792926.60.786629.90.796224.70.7335\n18\n0.782920.3\n0.779325.7\n0.7826\n
ted2it35.90.803228.20.796830.80.802326.50.745119.70.794221.70.789828.30.8009
it_ft44.40.834429.5 0.810433.40.815129.50.761919.70.814722.50.807828.50.816
\n
\n
Table 3: Results for zero-shot cross-lingual cross-domain transfer, where models fine-tuned on movie subtitles (sub) or TED talks (ted) are evaluated on medical (med), legal (leg), and IT (it) domains using BLEU and COMET scores. Scores compare cross-domain transfer models (sub2med, ted2med, etc.) against baseline (base) and models fine-tuned on the target domain (med_ft, leg_ft, it_ft). Bold values indicate higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Gray rows show expected higher scores (also in bold) for direct fine-tuning. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05).
\n
", + "capture": "Table 3: Results for zero-shot cross-lingual cross-domain transfer, where models fine-tuned on movie subtitles (sub) or TED talks (ted) are evaluated on medical (med), legal (leg), and IT (it) domains using BLEU and COMET scores. Scores compare cross-domain transfer models (sub2med, ted2med, etc.) against baseline (base) and models fine-tuned on the target domain (med_ft, leg_ft, it_ft). Bold values indicate higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Gray rows show expected higher scores (also in bold) for direct fine-tuning. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05)." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DomainTrainValidTest
Avg Sent LenVocab SizeAvg Sent LenVocab SizeAvg Sent LenVocab Size
Medical19.2223,82419.203,46216.932,966
Legal36.3333,65733.452,97128.572,975
IT12.0725,11112.542,4289.411,830
Movie subtitles9.7634,31010.352,19310.342,117
TED talks18.1042,51318.753,18517.433,421
\n
\n
Table 4: Average sentence length (in words) and vocabulary size (number of unique words) for the training, validation, and test sets across specialized and mixed domains.
\n
", + "capture": "Table 4: Average sentence length (in words) and vocabulary size (number of unique words) for the training, validation, and test sets across specialized and mixed domains." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nMedical Domain\n\n
\n\nSource Text: Like all medicines, KOGENATE Bayer 1000 IU can cause side effects, although not everybody gets them.\n\n
\n\nItalian Reference: Come tutti i medicinali, KOGENATE Bayer 1000 UI pu\u00f2 causare effetti indesiderati sebbene non tutte le persone li manifestino.\n\n
\n\nPolish Reference: Jak ka\u017cdy lek, preparat KOGENATE Bayer 1000 j. m. mo\u017ce powodowa\u0107 dzia\u0142ania niepo\u017c\u0105dane, chocia\u017c nie u ka\u017cdego one wyst\u0105pi\u0105.\n\n
\n\nLegal Domain\n\n
\n\nSource Text: 5. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.\n\n
\n\nItalian Reference: 5. Non appena adotta un atto delegato, la Commissione ne d\u00e0 contestualmente notifica al Parlamento europeo e al Consiglio.\n\n
\n\nPolish Reference: 5. Niezw\u0142ocznie po przyj\u0119ciu aktu delegowanego Komisja przekazuje go r\u00f3wnocze\u015bnie Parlamentowi Europejskiemu i Radzie.\n\n
\n\nIT Domain\n\n
\n\nSource Text: The ISTIME() function returns True if the parameter is a time value. Otherwise, it returns False.\n\n
\n\nItalian Reference: La funzione ISTIME() restituisce True se il parametro \u00e8 un\u2019 espressione di tempo. Altrimenti restituisce False.\n\n
\n\nPolish Reference: Funkcja ISTIME() zwraca True je\u015bli parametr ma warto\u015b\u0107 czasu, w przeciwnym wypadku False.\n\n
\n
\n
Table 5: Examples of reference translations in the medical, legal, and IT domain test sets across English-to-Italian and English-to-Polish language pairs, highlighting the presence of loanwords and untranslated terms, marked in red in the source text and in blue in the references.
\n
", + "capture": "Table 5: Examples of reference translations in the medical, legal, and IT domain test sets across English-to-Italian and English-to-Polish language pairs, highlighting the presence of loanwords and untranslated terms, marked in red in the source text and in blue in the references." + }, + "6": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192es
BLEUCOMETCometKiwi
med_base360.84510.8355
sub2med35.3\n0.8416\n0.8307\n
ted2med35.70.84440.8358
med_ft43.50.86630.8481
leg_base44.10.84940.8514
sub2leg41.6\n0.8424\n0.8448\n
ted2leg43.4\n0.85030.8617\n
leg_ft49.60.87140.8682
it_base34.20.79540.7681
sub2it34.20.79290.7656
ted2it35.9\n0.8032\n0.782\n
it_ft44.40.83440.7979
sub_base22.80.75480.767
sub_ft24.70.76270.771
ted_base35.90.81890.8042
ted_ft37.80.82620.8076
gen_base32.70.7860.7723
gen_ft32.50.78680.7736
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192pt
BLEUCOMETCometKiwi
med_base31.70.85270.8285
sub2med30.4\n0.8473\n0.8228\n
ted2med31.40.85280.8319\n
med_ft34.40.86450.8383
leg_base42.90.86530.8407
sub2leg40.3\n0.8614\n0.8377
ted2leg42.1\n0.86710.851\n
leg_ft43.30.87740.8559
it_base26.60.790.7685
sub2it26.60.78660.7637\n
ted2it28.2\n0.7968\n0.7788\n
it_ft29.50.81040.7886
sub_base200.76780.7694
sub_ft21.20.77410.7722
ted_base31.10.82380.7979
ted_ft31.90.83030.8032
gen_base30.50.80320.7705
gen_ft30.80.80430.7705
\n
\n
\n
Table 6: Translation performance on en\u2192es and en\u2192pt language pairs for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05).
\n
", + "capture": "Table 6: Translation performance on en\u2192es and en\u2192pt language pairs for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05)." + }, + "7": { + "table_html": "
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192it
BLEUCOMETCometKiwi
med_base29.70.84810.8392
sub2med28.6\n0.8444\n0.8351\n
ted2med29.40.84770.8416
med_ft30.60.85630.8456
leg_base36.90.87210.8567
sub2leg35.1\n0.8645\n0.8492\n
ted2leg36.60.87170.8657\n
leg_ft38.30.88270.8714
it_base29.20.79640.7753
sub2it29.90.79620.7798
ted2it30.8\n0.8023\n0.7909\n
it_ft33.40.81510.7974
sub_base17.80.74460.7783
sub_ft18.80.75320.7845
ted_base29.30.8140.8141
ted_ft29.60.81990.8176
gen_base31.50.80040.7837
gen_ft31.30.80020.7821
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192fr
BLEUCOMETCometKiwi
med_base28.60.81790.8397
sub2med27.9\n0.8133\n0.8346\n
ted2med28\n0.81550.8394
med_ft29.20.82930.8462
leg_base41.80.85330.8513
sub2leg40.2\n0.85140.8483\n
ted2leg420.8589\n0.8597\n
leg_ft43.70.86940.8668
it_base25.20.73860.7734
sub2it24.70.7335\n0.7739
ted2it26.5\n0.7451\n0.7871\n
it_ft29.50.76190.7939
sub_base16.10.69880.7759
sub_ft17.50.70730.7809
ted_base33.70.79260.81
ted_ft350.79550.8124
gen_base26.10.75340.7829
gen_ft25.1\n0.7530.7835
\n
\n
\n
\n
Table 7: Translation performance on en\u2192it and en\u2192fr language pairs for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05).
\n
", + "capture": "Table 7: Translation performance on en\u2192it and en\u2192fr language pairs for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05)." + }, + "8": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192cs
BLEUCOMETCometKiwi
med_base22.60.85820.8313
sub2med21.6\n0.8506\n0.8249\n
ted2med22.10.85950.8335
med_ft22.50.86590.8402
leg_base30.60.89050.8409
sub2leg29.6\n0.8866\n0.8394
ted2leg31.3\n0.89230.8582\n
leg_ft33.40.90380.8614
it_base19.10.78720.7558
sub2it18\n0.78290.7561
ted2it19.70.7942\n0.7709\n
it_ft19.70.81470.7855
sub_base150.75940.7622
sub_ft15.30.7660.7644
ted_base21.80.80820.79
ted_ft21.90.81940.7997
gen_base25.10.78310.7471
gen_ft24.70.78740.7502
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192pl
BLEUCOMETCometKiwi
med_base180.82340.8063
sub2med16.9\n0.815\n0.7987\n
ted2med17.4\n0.82170.807
med_ft18.10.83420.817
leg_base33.20.87960.8251
sub2leg30.7\n0.8729\n0.8201\n
ted2leg32.80.88230.8358\n
leg_ft34.20.89060.8427
it_base21.20.78180.7518
sub2it20.3\n0.77930.7488
ted2it21.70.7898\n0.7629\n
it_ft22.50.80780.776
sub_base14.60.74720.755
sub_ft15.10.75660.7576
ted_base16.20.78570.775
ted_ft16.60.79340.7819
gen_base18.40.7630.7445
gen_ft18.10.76040.7452
\n
\n
\n
Table 8: Translation performance on en\u2192cs and en\u2192pl language pairs for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05).
\n
", + "capture": "Table 8: Translation performance on en\u2192cs and en\u2192pl language pairs for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric, domain, and language pair. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric, domain, and language pair (p-value < 0.05)." + }, + "9": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
en\u2192el
BLEUCOMETCometKiwi
med_base29.10.85950.8152
sub2med27.4\n0.8491\n0.8039\n
ted2med28.4\n0.85850.8152
med_ft28.80.86440.8214
leg_base40.70.89720.8384
sub2leg38.4\n0.8919\n0.835
ted2leg41.3\n0.8990.8499\n
leg_ft43.90.90590.8552
it_base27.10.79720.7563
sub2it25.7\n0.7826\n0.7469\n
ted2it28.3\n0.80090.7662\n
it_ft28.50.8160.7754
sub_base15.70.77680.7695
sub_ft15.60.77250.7651
ted_base290.84390.7934
ted_ft29.20.84750.7991
gen_base26.20.81380.7754
gen_ft25.4\n0.8065\n0.7724
\n
Table 9: Translation performance on en\u2192el language pair for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric and domain. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric and domain (p-value < 0.05).
\n
", + "capture": "Table 9: Translation performance on en\u2192el language pair for all models, evaluated using BLEU, COMET, and CometKiwi metrics. Bold values indicate the higher score between the baseline and fine-tuned models for each metric and domain. Arrows indicate significantly worse () and better () performance compared to the baseline according to each metric and domain (p-value < 0.05)." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.11926v2_figure_1.png", + "caption": "Figure 1: Illustration of the zero-shot cross-lingual domain adaptation and zero-shot cross-lingual cross-domain transfer setups.", + "url": "http://arxiv.org/html/2408.11926v2/extracted/5869349/acl_latex.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Consistency by agreement in zero-shot neural machine translation.", + "author": "Maruan Al-Shedivat and Ankur Parikh. 2019.", + "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1184\u20131197, Minneapolis, Minnesota. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/N19-1121" + } + }, + { + "2": { + "title": "MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer.", + "author": "Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021.", + "venue": "In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974\u20136996, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.emnlp-main.559" + } + }, + { + "3": { + "title": "Unsupervised cross-lingual representation learning at scale.", + "author": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020.", + "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440\u20138451, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.acl-main.747" + } + }, + { + "4": { + "title": "Multilingual domain adaptation for NMT: Decoupling language and domain information with adapters.", + "author": "Asa Cooper Stickland, Alexandre Berard, and Vassilina Nikoulina. 2021.", + "venue": "In Proceedings of the Sixth Conference on Machine Translation, pages 578\u2013598, Online. Association for Computational Linguistics.", + "url": "https://aclanthology.org/2021.wmt-1.64" + } + }, + { + "5": { + "title": "No language left behind: Scaling human-centered machine translation.", + "author": "Marta R Costa-juss\u00e0, James Cross, Onur \u00c7elebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022.", + "venue": "arXiv preprint arXiv:2207.04672.", + "url": null + } + }, + { + "6": { + "title": "An empirical study of language relatedness for transfer learning in neural machine translation.", + "author": "Raj Dabre, Tetsuji Nakagawa, and Hideto Kazawa. 2017.", + "venue": "In Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation, pages 282\u2013286. 
The National University (Phillippines).", + "url": "https://aclanthology.org/Y17-1038" + } + }, + { + "7": { + "title": "BERT: Pre-training of deep bidirectional transformers for language understanding.", + "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.", + "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171\u20134186, Minneapolis, Minnesota. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/N19-1423" + } + }, + { + "8": { + "title": "Beyond english-centric multilingual machine translation.", + "author": "Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021.", + "venue": "Journal of Machine Learning Research, 22(107):1\u201348.", + "url": null + } + }, + { + "9": { + "title": "NTREX-128 \u2013 news test references for MT evaluation of 128 languages.", + "author": "Christian Federmann, Tom Kocmi, and Ying Xin. 2022.", + "venue": "In Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, pages 21\u201324, Online. Association for Computational Linguistics.", + "url": "https://aclanthology.org/2022.sumeval-1.4" + } + }, + { + "10": { + "title": "Zero-resource translation with multi-lingual neural machine translation.", + "author": "Orhan Firat, Baskaran Sankaran, Yaser Al-onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016.", + "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268\u2013277, Austin, Texas. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/D16-1026" + } + }, + { + "11": { + "title": "Robust domain adaptation for pre-trained multilingual neural machine translation models.", + "author": "Mathieu Grosso, Alexis Mathey, Pirashanth Ratnamogan, William Vanhuffel, and Michael Fotso. 2022.", + "venue": "In Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22), pages 1\u201311, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2022.mmnlu-1.1" + } + }, + { + "12": { + "title": "Improving zero-shot multilingual translation with universal representations and cross-mapping.", + "author": "Shuhao Gu and Yang Feng. 2022.", + "venue": "In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6492\u20136504, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2022.findings-emnlp.485" + } + }, + { + "13": { + "title": "Cross-lingual pre-training based transfer for zero-shot neural machine translation.", + "author": "Baijun Ji, Zhirui Zhang, Xiangyu Duan, Min Zhang, Boxing Chen, and Weihua Luo. 2020.", + "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 34(01):115\u2013122.", + "url": "https://doi.org/10.1609/aaai.v34i01.5341" + } + }, + { + "14": { + "title": "Google\u2019s multilingual neural machine translation system: Enabling zero-shot translation.", + "author": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 
2017.", + "venue": "Transactions of the Association for Computational Linguistics, 5:339\u2013351.", + "url": "https://doi.org/10.1162/tacl_a_00065" + } + }, + { + "15": { + "title": "Statistical significance tests for machine translation evaluation.", + "author": "Philipp Koehn. 2004.", + "venue": "In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388\u2013395, Barcelona, Spain. Association for Computational Linguistics.", + "url": "https://aclanthology.org/W04-3250" + } + }, + { + "16": { + "title": "Six challenges for neural machine translation.", + "author": "Philipp Koehn and Rebecca Knowles. 2017.", + "venue": "In Proceedings of the First Workshop on Neural Machine Translation, pages 28\u201339, Vancouver. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/W17-3204" + } + }, + { + "17": { + "title": "m^4 adapter: Multilingual multi-domain adaptation for machine translation with a meta-adapter.", + "author": "Wen Lai, Alexandra Chronopoulou, and Alexander Fraser. 2022.", + "venue": "In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4282\u20134296, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2022.findings-emnlp.315" + } + }, + { + "18": { + "title": "Choosing transfer languages for cross-lingual learning.", + "author": "Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019.", + "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125\u20133135, Florence, Italy. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/P19-1301" + } + }, + { + "19": { + "title": "OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora.", + "author": "Pierre Lison, J\u00f6rg Tiedemann, and Milen Kouylekov. 2018.", + "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", + "url": "https://aclanthology.org/L18-1275" + } + }, + { + "20": { + "title": "Bleu: a method for automatic evaluation of machine translation.", + "author": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002.", + "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311\u2013318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", + "url": "https://doi.org/10.3115/1073083.1073135" + } + }, + { + "21": { + "title": "Towards a common understanding of contributing factors for cross-lingual transfer in multilingual language models: A review.", + "author": "Fred Philippy, Siwen Guo, and Shohreh Haddadan. 2023.", + "venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5877\u20135891, Toronto, Canada. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2023.acl-long.323" + } + }, + { + "22": { + "title": "What to do about non-standard (or non-canonical) language in nlp.", + "author": "Barbara Plank. 2016.", + "venue": "arXiv preprint arXiv:1608.07836.", + "url": null + } + }, + { + "23": { + "title": "A call for clarity in reporting BLEU scores.", + "author": "Matt Post. 
2018.", + "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186\u2013191, Brussels, Belgium. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/W18-6319" + } + }, + { + "24": { + "title": "COMET: A neural framework for MT evaluation.", + "author": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020.", + "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685\u20132702, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.emnlp-main.213" + } + }, + { + "25": { + "title": "Making monolingual sentence embeddings multilingual using knowledge distillation.", + "author": "Nils Reimers and Iryna Gurevych. 2020.", + "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512\u20134525, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.emnlp-main.365" + } + }, + { + "26": { + "title": "State-of-the-art on automatic genre identification.", + "author": "Marina Santini. 2004.", + "venue": "Information Technology Research Institute Technical Report Series, ITRI, University of Brighton.", + "url": null + } + }, + { + "27": { + "title": "Domain adaptation and multi-domain adaptation for neural machine translation: A survey.", + "author": "Danielle Saunders. 2022.", + "venue": "Journal of Artificial Intelligence Research, 75:351\u2013424.", + "url": null + } + }, + { + "28": { + "title": "Parallel data, tools and interfaces in OPUS.", + "author": "J\u00f6rg Tiedemann. 2012.", + "venue": "In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC\u201912), pages 2214\u20132218, Istanbul, Turkey. European Language Resources Association (ELRA).", + "url": "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf" + } + }, + { + "29": { + "title": "What\u2019s in a domain?: Towards fine-grained adaptation for machine translation.", + "author": "ME van der Wees et al. 2017.", + "venue": null, + "url": null + } + }, + { + "30": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017.", + "venue": "In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.", + "url": "https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" + } + }, + { + "31": { + "title": "Strategies for adapting multilingual pre-training for domain-specific machine translation.", + "author": "Neha Verma, Kenton Murray, and Kevin Duh. 2022.", + "venue": "In Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 31\u201344, Orlando, USA. Association for Machine Translation in the Americas.", + "url": "https://aclanthology.org/2022.amta-research.3" + } + }, + { + "32": { + "title": "Building subject-aligned comparable corpora and mining it for truly parallel sentence pairs.", + "author": "Krzysztof Wo\u0142k and Krzysztof Marasek. 
2014.", + "venue": "Procedia Technology, 18:126\u2013132.", + "url": "https://doi.org/10.1016/j.protcy.2014.11.024" + } + }, + { + "33": { + "title": "mT5: A massively multilingual pre-trained text-to-text transformer.", + "author": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021.", + "venue": "In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483\u2013498, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.naacl-main.41" + } + } + ], + "url": "http://arxiv.org/html/2408.11926v2" +} \ No newline at end of file