{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:37.794434Z" }, "title": "Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation", "authors": [ { "first": "Chanjun", "middle": [], "last": "Park", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea University", "location": {} }, "email": "" }, { "first": "Sungjin", "middle": [], "last": "Park", "suffix": "", "affiliation": { "laboratory": "", "institution": "NAVER Corp", "location": {} }, "email": "sungjin.park@navercorp.com" }, { "first": "Seolhwa", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Copenhagen", "location": {} }, "email": "" }, { "first": "Taesun", "middle": [], "last": "Whang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Wisenut Inc", "location": {} }, "email": "taesunwhang@wisenut.co.kr" }, { "first": "Heuiseok", "middle": [], "last": "Lim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the field of natural language processing, ensembles are broadly known to be effective in improving performance. This paper analyzes how ensemble of neural machine translation (NMT) models affect performance improvement by designing various experimental setups (i.e., intra-, inter-ensemble, and nonconvergence ensemble). To an in-depth examination, we analyze each ensemble method with respect to several aspects such as different attention models and vocab strategies. Experimental results show that ensembling is not always resulting in performance increases and give noteworthy negative findings.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In the field of natural language processing, ensembles are broadly known to be effective in improving performance. This paper analyzes how ensemble of neural machine translation (NMT) models affect performance improvement by designing various experimental setups (i.e., intra-, inter-ensemble, and nonconvergence ensemble). To an in-depth examination, we analyze each ensemble method with respect to several aspects such as different attention models and vocab strategies. Experimental results show that ensembling is not always resulting in performance increases and give noteworthy negative findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Ensemble is a technique for obtaining accurate predictions by combining the predictions of several models. In neural machine translation (NMT), ensembles are most closely related to vocabulary (vocab). In particular, by aggregating the prediction results of multiple models, the ensemble averages the probability values over the vocab of the softmax layer (Garmash and Monz, 2016; Tan et al., 2020) .", "cite_spans": [ { "start": 356, "end": 380, "text": "(Garmash and Monz, 2016;", "ref_id": "BIBREF7" }, { "start": 381, "end": 398, "text": "Tan et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most existing studies on ensembling for NMT focus on improving the performance of shared tasks. 
For example, in the WMT shared tasks, almost every participating team applied the ensemble technique to improve performance (Fonseca et al., 2019; Chatterjee et al., 2019; Specia et al., 2020) . However, in most cases, only experimental results that improved performance by applying the ensemble technique are reported; in-depth comparative analysis is rarely conducted (Wei et al., 2020; Park et al., 2020a; Lee et al., 2020) . In this study, we investigate three main aspects of ensembles for machine translation.", "cite_spans": [ { "start": 217, "end": 239, "text": "(Fonseca et al., 2019;", "ref_id": "BIBREF5" }, { "start": 240, "end": 264, "text": "Chatterjee et al., 2019;", "ref_id": "BIBREF3" }, { "start": 265, "end": 285, "text": "Specia et al., 2020)", "ref_id": "BIBREF17" }, { "start": 465, "end": 483, "text": "(Wei et al., 2020;", "ref_id": "BIBREF20" }, { "start": 484, "end": 503, "text": "Park et al., 2020a;", "ref_id": "BIBREF13" }, { "start": 504, "end": 521, "text": "Lee et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, we investigate the ensemble effect when using various vocab strategies and different attention models. For the vocab, which plays the most important role in the machine translation ensemble, three different experimental conditions (independent vocab, share vocab, and share embedding) are applied to two different attention networks (Bahdanau et al., 2014; Vaswani et al., 2017) .", "cite_spans": [ { "start": 337, "end": 358, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, we investigate which of intra-ensemble and inter-ensemble is more effective for performance improvement. Notably, intra-ensemble is an ensemble of identical models, while inter-ensemble represents an ensemble of models that follow different network structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Third, we analyze the effect of the non-converging model on ensemble performance. Most existing studies create an ensemble using only well-fitted models. However, we perform in-depth comparative analysis experiments, raising the question of whether the non-converging model has only negative effects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Ensemble prediction is a representative method for improving the translation performance of NMT systems. A commonly reported method involves aggregating predictions by training different models of the same architecture in parallel. Then, during decoding, we average the probabilities over the output layers of the target vocab at each time step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ensemble in NMT", "sec_num": "2.1" }, { "text": "In this study, we follow the above method for ensembles using the same model architecture (i.e., intra-ensemble). Because the target vocabs are the same, ensembles of components with different model structures (i.e., inter-ensemble) also follow the same method. We conduct experiments on intra- and inter-ensemble effects on LSTM-Attention and Transformer (Vaswani et al., 2017) networks, combined with various vocab strategies. 
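To make the averaging step of Section 2.1 concrete, the sketch below is a minimal, illustrative implementation (not the authors' code); the `predict_proba` interface on the model objects is a hypothetical placeholder rather than a specific toolkit API. Because every ensemble member must emit a distribution over the same target vocab, this is also why inter-ensembles require a common target vocab.

```python
import numpy as np

def ensemble_greedy_decode(models, src_ids, bos_id, eos_id, max_len=128):
    """Greedy ensemble decoding: at each time step, average the per-model
    probability distributions over the shared target vocab and pick the
    most probable token (beam search would be used in practice)."""
    tgt = [bos_id]
    for _ in range(max_len):
        # One distribution per ensemble member, all over the same vocab.
        step_probs = np.stack([m.predict_proba(src_ids, tgt) for m in models])
        avg_probs = step_probs.mean(axis=0)   # arithmetic mean of probabilities
        next_id = int(avg_probs.argmax())
        tgt.append(next_id)
        if next_id == eos_id:
            break
    return tgt
```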
A detailed description of the vocab strategies is provided in the next section.", "cite_spans": [ { "start": 355, "end": 377, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Ensemble in NMT", "sec_num": "2.1" }, { "text": "Independent vocab means learning separate weights for the encoder and decoder without any connection or communication between the source and target languages. Most NMT research follows this methodology (Vaswani et al., 2017; Park et al., 2021b) .", "cite_spans": [ { "start": 204, "end": 225, "text": "Vaswani et al., 2017;", "ref_id": "BIBREF19" }, { "start": 226, "end": 245, "text": "Park et al., 2021b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Vocab Strategies", "sec_num": "2.2" }, { "text": "Share vocab means that the model uses a common vocab for the combination of the source and target languages (Lakew et al., 2018) . That is, the encoder and decoder interact within the same vocab and can refer to each other's vocabs, thus making the model more robust.", "cite_spans": [ { "start": 106, "end": 126, "text": "(Lakew et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Vocab Strategies", "sec_num": "2.2" }, { "text": "Share embedding goes a step beyond sharing the source-target vocabs and shares the vocab embedding matrix of the encoder and decoder (Liu et al., 2019) . It enables the sharing of vocabs from various languages through one integrated embedding space. Consequently, it has been widely used in recent multilingual NMT (Aharoni et al., 2019) .", "cite_spans": [ { "start": 134, "end": 152, "text": "(Liu et al., 2019)", "ref_id": "BIBREF11" }, { "start": 315, "end": 337, "text": "(Aharoni et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Vocab Strategies", "sec_num": "2.2" }, { "text": "Intra-ensemble is an ensemble of identical models. We use the LSTM-Attention and Transformer networks, each trained with three different weights, and average the prediction probabilities over combinations of these models. Inter-ensemble represents an ensemble of models that follow different network structures. We experiment with different combinations of the two attention-based models and vocab strategies. In this experiment, we aim to suggest directions for creating a better ensemble technique by analyzing the effect of intra- and inter-ensemble combined with the vocab strategies and vocab sizes. Moreover, all experiments compare two vocab sizes (i.e., 32k and 64k) to account for performance differences with respect to vocab capacity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design of Intra- and Inter-ensemble", "sec_num": "2.3.1" }, { "text": "In general, ensembles comprise well-fitted models; however, we conduct experiments to examine how models with less convergence affect the ensemble. Non-converging models are trained using \u00bc of the iterations needed for convergent models. This allows us to determine whether non-converging models cause only negative effects on the ensemble. 3 Experimental Settings and Results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design of Non-convergence Ensemble", "sec_num": "2.3.2" }, { "text": "In this study, we use the Korean-English parallel corpus released on AI Hub 1 as the training data (Park and Lim, 2020) . Several studies (Park et al., 2020b, 2021a) have adopted this corpus for Korean-language NMT research. The total number of sentence pairs is 1.6M. 
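As an illustration of how the independent and share vocab strategies of Section 2.2 could be instantiated on such a corpus, the following sketch trains subword models with sentencepiece (which the paper uses for tokenization); the file names are hypothetical placeholders, and only the 32k size matches the settings reported here. Share embedding would additionally tie the encoder and decoder embedding matrices inside the model rather than changing the tokenizer.

```python
import sentencepiece as spm

# Independent vocab: one subword model per language, so the encoder and
# decoder each learn their own vocab (and embedding) weights.
spm.SentencePieceTrainer.train(
    input="train.ko", model_prefix="spm_ko_32k", vocab_size=32000)
spm.SentencePieceTrainer.train(
    input="train.en", model_prefix="spm_en_32k", vocab_size=32000)

# Share vocab: a single subword model trained on both sides, so the encoder
# and decoder index the same vocab; share embedding would further reuse one
# embedding matrix for this joint vocab on both sides of the model.
spm.SentencePieceTrainer.train(
    input="train.ko,train.en", model_prefix="spm_koen_32k", vocab_size=32000)

sp = spm.SentencePieceProcessor(model_file="spm_koen_32k.model")
print(sp.encode("Two heads are better than one?", out_type=str))
```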
We randomly extract 5k sentence pairs twice from the training data and use them as the validation and test sets. We employ sentencepiece (Kudo and Richardson, 2018) for subword tokenization. All translation results are evaluated with the BLEU score, using the multi-bleu.perl script provided by Moses.", "cite_spans": [ { "start": 99, "end": 119, "text": "(Park and Lim, 2020)", "ref_id": null }, { "start": 138, "end": 157, "text": "(Park et al., 2020b", "ref_id": "BIBREF16" }, { "start": 158, "end": 179, "text": "(Park et al., , 2021a", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.1" }, { "text": "Our negative findings and the corresponding insights are denoted by NF and Insight, respectively. The performance results of the baseline models (which serve as the components of the ensembles) are shown in Tables 1 to 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.2" }, { "text": "We show the results of applying the vocab strategies to the two models, namely LSTM-Attention and Transformer, each with three different weights (i.e., w 1 , w 2 , and w 3 ) for intra-ensemble, in Table 1 . Additionally, we compare the combinations of those weights to investigate the intra-ensemble effect. Table 1 shows significant variation in the ensemble effect according to the vocab strategies. The Transformer and LSTM-Attention models exhibit the highest performance in the order of independent vocab (ind), share embedding (se), and share vocab (sv) for both vocab sizes (32k and 64k). NF1: Although Lakew et al. (2018) and Park et al. (2021a) found that share vocab (sv) is effective when subword tokenization is applied as a pre-tokenization step, in our setting it has a negative effect on model training. We find that sharing the vocab can still improve performance, and sharing the embedding space is more helpful; nevertheless, training with the independent vocab strategy shows the highest performance, without interference between the two languages.", "cite_spans": [ { "start": 651, "end": 670, "text": "Park et al. (2021a)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 196, "end": 203, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 316, "end": 323, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of Intra-ensemble Effect", "sec_num": "3.2.1" }, { "text": "For an in-depth examination, we analyze the intra-ensemble performance with respect to four aspects: i) different attention models, ii) vocab strategy, iii) vocab size, and iv) the number of models in the ensemble.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Intra-ensemble Effect", "sec_num": "3.2.1" }, { "text": "i) Different attention models We investigate the influence of the different attention networks on an ensemble. Self-attention-based networks improve with ensembling under all vocab strategies, whereas the Bahdanau attention-based networks show more cases without performance improvement than with it. That is, NF2: with the Bahdanau attention network, there is a case in which ensembling produces a negative result. We interpret this result as a difference in the robustness (i.e., minimal performance degradation) and capacity (i.e., parallelism) of the models, as the following interpretations show. 
The Bahdanau attention network suffers from problems with long-term dependencies (Bengio et al., 1993) , resulting in weak processing of long sequences and requiring more data than self-attention. Furthermore, the Bahdanau attention network is well known for not being context-aware, leading to variance in model predictions (Gao et al., 2021) . Thus, Insight: the Bahdanau attention network lacks capacity and robustness, and it can be inferred that this network has a negative influence on the ensemble effect.", "cite_spans": [ { "start": 732, "end": 753, "text": "(Bengio et al., 1993)", "ref_id": "BIBREF2" }, { "start": 978, "end": 996, "text": "(Gao et al., 2021)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Intra-ensemble Effect", "sec_num": "3.2.1" }, { "text": "ii) Vocab strategy We observe performance variation among the vocab strategies. Our finding is in line with the aforementioned result, in that the ordering of the ensemble effect for LSTM-Attention is the same, i.e., ind, se, and sv. This is reasonable given the previous result; however, NF3: mixing the vocabs (i.e., sv) has a negative effect on the ensemble performance. Table 2 : Performance of inter-ensembles (combinations of vocab sizes and attention networks). Here, the column \"Intra\" records the highest score among the two different models, according to each vocabulary strategy in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 406, "text": "Table 2", "ref_id": null }, { "start": 618, "end": 625, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of Intra-ensemble Effect", "sec_num": "3.2.1" }, { "text": "iii) Vocab size As illustrated in Table 1 , the performance of intra-ensemble models differs considerably depending on the vocab size. We confirm that a vocab size of 64k is more effective than 32k; consequently, we theorize that vocab size is closely related to the ensemble effect. In the Transformer ensemble with independent vocab (i.e., Transformer ind ), the BLEU score improves over the baseline model by 0.73 at 32k; in contrast, it improves by 1.52 at 64k, a gain of more than two times. In other words, NF4: even an alteration of the vocab size significantly affects the ensemble performance, and a broader capacity leads to better performance when predicting over the vocab with softmax.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of Intra-ensemble Effect", "sec_num": "3.2.1" }, { "text": "We explore the number of models in the ensemble, and further validate the performance using the model combinations. NF5: Contrary to the expectation that the number of ensemble models and the performance would show a positive correlation, this was not the case. As shown in Table 1 , only six cases, i.e., 50% of the 12 cases, achieve the best score with the three-model ensemble ({w 1 , w 2 , w 3 }). The remaining six cases achieve the best score with two-model ensembles ({w 1 , w 3 } or {w 2 , w 3 }). This result supports NF5.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 268, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "iv) Number of ensemble models", "sec_num": null }, { "text": "Inter-ensemble is feasible if the same vocab is used across the two models. 
Therefore, an ensemble of the Transformer and LSTM-Attention models with the corresponding vocab strategy can be created; a comparison of the performance results with intra-ensembles is presented in Table 1 . The results for inter-ensembles are shown in Table 2 . These results show that the baseline (i.e., Intra) exhibits better performance than the inter-ensembles. Notably, inter-ensembles show a negative effect. Table 4 : Performance of combinations of inter-ensembles with non-convergence (NC) and convergence (C) conditions along with vocab sizes and attention networks. \u2206% represents the average relative rate (i.e., the difference), from the first to the third column, of inter-ensembles over \"Best Inter.\" Note that the bold numbers indicate the best score in each case.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 324, "end": 331, "text": "Table 2", "ref_id": null }, { "start": 483, "end": 490, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Intra-ensemble or Inter-ensemble?", "sec_num": "3.2.2" }, { "text": "That is, NF6: inter-ensemble has a negative effect on performance, resulting in performance degradation in all cases. It seems that the heterogeneous architectures of the two different models act as a hindrance to performance improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-ensemble or Inter-ensemble?", "sec_num": "3.2.2" }, { "text": "In this section, we investigate the effect of non-convergence on intra- and inter-ensembles. We choose the models with the best scores (intra- and inter-ensembles) from Table 1 and Table 2 , respectively, as the target models for comparison. The performance results of intra- and inter-ensembles with non-convergence models are illustrated in Table 3 and Table 4 , respectively.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 183, "text": "Table 1 and Table 2", "ref_id": "TABREF1" }, { "start": 331, "end": 350, "text": "Table 3 and Table 4", "ref_id": "TABREF4" }, { "start": 367, "end": 374, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Does Non-convergence Ensemble Cause Negative Results?", "sec_num": "3.2.3" }, { "text": "In Table 3 , intra-ensemble with a non-convergence model leads to negative results compared to the baseline model (i.e., Best Intra) for LSTM-Attention. Using the Transformer model as a baseline also generally leads to performance degradation; however, the decrease is relatively small. There are a few exceptions that show that non-converging models with the Transformer sometimes perform better when ensembled together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-ensemble", "sec_num": null }, { "text": "These results reveal that NF7: the Transformer model is more robust than the LSTM-Attention model and stronger under adverse conditions. Additionally, it is inferred that the underfitted model plays a role in noise injection, boosting performance. Insight: This result is meaningful in that even a non-convergence model, which many researchers neglect, can help improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-ensemble", "sec_num": null }, { "text": "Inter-ensemble As detailed in Table 4 , the performance decreases in all cases, and NF8: a non-converging model causes a more strongly negative result in inter-ensembles than in intra-ensembles. 
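To make the Δ% columns concrete: as defined in the captions of Tables 3 and 4, Δ% is the average of the ensemble scores relative to the corresponding baseline column. The minimal sketch below (a worked example using the 32k LSTMind + Transformerind row of Table 4) reproduces the reported value.

```python
def delta_percent(baseline: float, ensemble_scores: list[float]) -> float:
    """Average relative rate (in %) of the ensemble scores over the baseline,
    as used for the Δ% columns of Tables 3 and 4."""
    mean_score = sum(ensemble_scores) / len(ensemble_scores)
    return 100.0 * (mean_score - baseline) / baseline

# 32k LSTMind + Transformerind row of Table 4:
# Best Inter = 31.70; the three NC/C ensemble columns score 30.40, 29.74, 28.09.
print(round(delta_percent(31.70, [30.40, 29.74, 28.09]), 2))  # -7.22
```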
In conclusion, inter-ensembles provide negative results in all cases in the experiments conducted in this study.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Intra-ensemble", "sec_num": null }, { "text": "Most researchers consider it common sense that ensembles are better; however, few studies have conducted any type of close verification. In this study, we perform various tests based on three experimental designs related to the ensemble technique, and demonstrate its negative aspects. Thus, we provide insights into the positives and negatives of ensembling for machine translation. In the future, we plan to conduct expanded experiments based on different language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "https://aihub.or.kr/aidata/87", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Massively multilingual neural machine translation", "authors": [ { "first": "Roee", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.00089" ] }, "num": null, "urls": [], "raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The problem of learning long-term dependencies in recurrent networks", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Frasconi", "suffix": "" }, { "first": "Patrice", "middle": [], "last": "Simard", "suffix": "" } ], "year": 1993, "venue": "IEEE international conference on neural networks", "volume": "", "issue": "", "pages": "1183--1188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Paolo Frasconi, and Patrice Simard. 1993. The problem of learning long-term dependencies in recurrent networks. In IEEE international conference on neural networks, pages 1183-1188. 
IEEE.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Findings of the wmt 2019 shared task on automatic post-editing", "authors": [ { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "11--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajen Chatterjee, Christian Federmann, Matteo Negri, and Marco Turchi. 2019. Findings of the wmt 2019 shared task on automatic post-editing. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 3: Shared Task Papers, Day 2), pages 11-28.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1078" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Findings of the wmt 2019 shared tasks on quality estimation", "authors": [ { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Yankovskaya", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "", "middle": [], "last": "Federmann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 FT Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the wmt 2019 shared tasks on quality esti- mation. 
In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Scalable transformers for neural machine translation", "authors": [ { "first": "Peng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Shijie", "middle": [], "last": "Geng", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jifeng", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Hongsheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.02242" ] }, "num": null, "urls": [], "raw_text": "Peng Gao, Shijie Geng, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. 2021. Scalable transformers for neural machine translation. arXiv preprint arXiv:2106.02242.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Ensemble learning for multi-source neural machine translation", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Garmash", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1409--1418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Garmash and Christof Monz. 2016. Ensem- ble learning for multi-source neural machine trans- lation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguis- tics: Technical Papers, pages 1409-1418.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.06226" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Transfer learning in multilingual neural machine translation with dynamic vocabulary", "authors": [ { "first": "M", "middle": [], "last": "Surafel", "suffix": "" }, { "first": "Aliia", "middle": [], "last": "Lakew", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Erofeeva", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Federico", "suffix": "" }, { "first": "", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01137" ] }, "num": null, "urls": [], "raw_text": "Surafel M Lakew, Aliia Erofeeva, Matteo Negri, Mar- cello Federico, and Marco Turchi. 2018. Trans- fer learning in multilingual neural machine trans- lation with dynamic vocabulary. 
arXiv preprint arXiv:1811.01137.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Postech-etri's submission to the wmt2020 ape shared task: Automatic post-editing with crosslingual language model", "authors": [ { "first": "Jihyung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonkee", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Jaehun", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Baikjin", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Young-Gil", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jong-Hyeok", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "777--782", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jihyung Lee, WonKee Lee, Jaehun Shin, Baikjin Jung, Young-Gil Kim, and Jong-Hyeok Lee. 2020. Postech-etri's submission to the wmt2020 ape shared task: Automatic post-editing with cross- lingual language model. In Proceedings of the Fifth Conference on Machine Translation, pages 777- 782.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Shared-private bilingual word embeddings for neural machine translation", "authors": [ { "first": "Xuebo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Derek", "middle": [ "F" ], "last": "Wong", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lidia", "middle": [ "S" ], "last": "Chao", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.03100" ] }, "num": null, "urls": [], "raw_text": "Xuebo Liu, Derek F Wong, Yang Liu, Lidia S Chao, Tong Xiao, and Jingbo Zhu. 2019. Shared-private bilingual word embeddings for neural machine trans- lation. arXiv preprint arXiv:1906.03100.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Should we find another model?: Improving neural machine translation performance with one-piece tokenization method without model modification", "authors": [ { "first": "Chanjun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Sugyeong", "middle": [], "last": "Eo", "suffix": "" }, { "first": "Hyeonseok", "middle": [], "last": "Moon", "suffix": "" }, { "first": "Heui-Seok", "middle": [], "last": "Lim", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers", "volume": "", "issue": "", "pages": "97--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanjun Park, Sugyeong Eo, Hyeonseok Moon, and Heui-Seok Lim. 2021a. Should we find another model?: Improving neural machine translation per- formance with one-piece tokenization method with- out model modification. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, pages 97- 104.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ancient korean neural machine translation", "authors": [ { "first": "Chanjun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Chanhee", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yeongwook", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Heuiseok", "middle": [], "last": "Lim", "suffix": "" } ], "year": 2020, "venue": "IEEE Access", "volume": "8", "issue": "", "pages": "116617--116625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanjun Park, Chanhee Lee, Yeongwook Yang, and Heuiseok Lim. 2020a. Ancient korean neural ma- chine translation. IEEE Access, 8:116617-116625.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "2020. A study on the performance improvement of machine translation using public korean-english parallel corpus", "authors": [ { "first": "Chanjun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Heuiseok", "middle": [], "last": "Lim", "suffix": "" } ], "year": null, "venue": "Journal of Digital Convergence", "volume": "18", "issue": "6", "pages": "271--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanjun Park and Heuiseok Lim. 2020. A study on the performance improvement of machine translation us- ing public korean-english parallel corpus. Journal of Digital Convergence, 18(6):271-277.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sugyeong Eo, and Heuiseok Lim. 2021b. A study on performance improvement considering the balance between corpus in neural machine translation", "authors": [ { "first": "Chanjun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Kinam", "middle": [], "last": "Park", "suffix": "" }, { "first": "Hyeonseok", "middle": [], "last": "Moon", "suffix": "" } ], "year": null, "venue": "Journal of the Korea Convergence Society", "volume": "12", "issue": "5", "pages": "23--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanjun Park, Kinam Park, Hyeonseok Moon, Sug- yeong Eo, and Heuiseok Lim. 2021b. A study on performance improvement considering the balance between corpus in neural machine translation. Jour- nal of the Korea Convergence Society, 12(5):23-29.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Decoding strategies for improving low-resource machine translation", "authors": [ { "first": "Chanjun", "middle": [], "last": "Park", "suffix": "" }, { "first": "Yeongwook", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kinam", "middle": [], "last": "Park", "suffix": "" }, { "first": "Heuiseok", "middle": [], "last": "Lim", "suffix": "" } ], "year": 2020, "venue": "Electronics", "volume": "9", "issue": "10", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chanjun Park, Yeongwook Yang, Kinam Park, and Heuiseok Lim. 2020b. Decoding strategies for im- proving low-resource machine translation. 
Electron- ics, 9(10):1562.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Findings of the wmt 2020 shared task on machine translation robustness", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Zhenhao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "76--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Zhenhao Li, Juan Pino, Vishrav Chaud- hary, Francisco Guzm\u00e1n, Graham Neubig, Nadir Durrani, Yonatan Belinkov, Philipp Koehn, Hassan Sajjad, et al. 2020. Findings of the wmt 2020 shared task on machine translation robustness. In Proceed- ings of the Fifth Conference on Machine Translation, pages 76-91.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An empirical study on ensemble learning of multimodal machine translation", "authors": [ { "first": "Liang", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kaixi", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Peipei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM)", "volume": "", "issue": "", "pages": "63--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Tan, Lin Li, Yifeng Han, Dong Li, Kaixi Hu, Dong Zhou, and Peipei Wang. 2020. An empirical study on ensemble learning of multimodal machine translation. In 2020 IEEE Sixth International Con- ference on Multimedia Big Data (BigMM), pages 63-69. IEEE.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hw-tsc's participation in the wmt 2020 news translation shared task", "authors": [ { "first": "Daimeng", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hengchao", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Zhanglin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhengzhe", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Liangyou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jiaxin", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Minghan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Lizhi", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "293--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daimeng Wei, Hengchao Shang, Zhanglin Wu, Zhengzhe Yu, Liangyou Li, Jiaxin Guo, Minghan Wang, Hao Yang, Lizhi Lei, Ying Qin, et al. 2020. Hw-tsc's participation in the wmt 2020 news trans- lation shared task. In Proceedings of the Fifth Con- ference on Machine Translation, pages 293-299.", "links": null } }, "ref_entries": { "TABREF1": { "type_str": "table", "num": null, "html": null, "text": "Performance of intra-ensembles (combinations of vocab sizes and attention networks). The baseline score is the average of the three models that have different weights. Note that the bold numbers indicate the best score in each case.", "content": "" }, "TABREF3": { "type_str": "table", "num": null, "html": null, "text": "w1} {wnc, w2} {wnc, w3} {wnc, w1, w2} {wnc, w1, w3} {wnc, w2, w3} {wnc, w1, w2, w3} \u2206%", "content": "
Baseline	Intra-ensembles with non-convergence
Vocab size	Cases	Best Intra	wnc	{wnc, w1}	{wnc, w2}	{wnc, w3}	{wnc, w1, w2}	{wnc, w1, w3}	{wnc, w2, w3}	{wnc, w1, w2, w3}	\u2206%
32,000	LSTM ind	24.47	19.06	22.84	22.79	22.84	23.54	23.53	23.56	23.75	-4.93
LSTMsv	21.49	16.11	19.25	19.32	19.31	20.14	20.11	20.28	21.42	-7.05
LSTMse	21.50	17.20	19.94	20.13	20.12	20.71	20.65	20.77	20.92	-4.82
Transformer ind	34.13	31.87	33.37	33.87	33.70	33.83	34.03	34.12	34.04	-0.82
Transformersv29.8827.8128.9428.9429.3829.5329.4229.5129.59-1.84
Transformerse30.1927.7229.0129.2329.5629.6729.9629.8329.93-1.96
64,000	LSTM ind	25.03	19.54	23.57	23.74	23.64	24.55	24.53	24.37	24.56	-3.57
LSTMsv22.9218.8721.7621.7621.7722.3522.3622.3922.54-3.43
LSTMse22.9817.6421.0721.2121.2222.0922.1122.1522.43-5.33
Transformer ind	33.97	31.22	33.23	33.68	33.71	33.79	33.85	34.14	34.29	-0.46
Transformersv31.0228.7329.9030.5030.6430.5230.8031.0330.90-1.31
Transformerse31.2828.4130.1630.4830.6230.7931.0531.0531.18-1.66
" }, "TABREF4": { "type_str": "table", "num": null, "html": null, "text": "Performance of combinations of intra-ensembles using non-convergence models (w nc ) with vocab sizes and attention networks. \u2206% represents the average relative rate (i.e., the difference) {w nc , w 1 } to {w nc , w 1 , w 2 , w 3 } over \"Best Intra.\" Note that the bold numbers represent the best score in each case.", "content": "
BaselineInter-Ensembles
Vocab size	Cases	Best Inter	C(LSTM) & NC(Transformer)	NC(LSTM) & C(Transformer)	NC(LSTM) & NC(Transformer)	\u2206%
32,000	LSTMind + Transformerind	31.70	30.40	29.74	28.09	-7.22
LSTMsv + Transformersv	27.46	26.22	25.32	23.64	-8.74
LSTMse + Transformerse27.2526.1225.6624.30-6.94
64,000	LSTMind + Transformerind	31.95	30.87	30.26	28.96	-6.01
LSTMsv + Transformersv	28.98	27.41	27.56	26.15	-6.70
LSTMse + Transformerse28.9727.2127.1624.99-8.69
" } } } }