{ "paper_id": "I17-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:39:22.924101Z" }, "title": "NMT or SMT: Case Study of a Narrow-domain English-Latvian Post-editing Project", "authors": [ { "first": "Inguna", "middle": [], "last": "Skadi\u0146", "suffix": "", "affiliation": {}, "email": "inguna.skadina@tilde.lv" }, { "first": "M\u0101rcis", "middle": [], "last": "Pinnis", "suffix": "", "affiliation": {}, "email": "marcis.pinnis@tilde.lv" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs.", "pdf_parse": { "paper_id": "I17-1038", "_pdf_hash": "", "abstract": [ { "text": "The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For many years, the central problem in machine translation (MT) has been the quality. MT quality has been recognized as a complicated research question when translation is performed into a morphologically rich (and also under-resourced) language with a relatively free word order, e.g., Bulgarian, Croatian, Estonian, Finnish, Greek or Latvian. 
Possible solutions for widely used statistical machine translation have been studied for many years (e.g., Tamchyna and Bojar 2013; Burlot and Yvon 2015) .", "cite_spans": [ { "start": 59, "end": 63, "text": "(MT)", "ref_id": null }, { "start": 452, "end": 476, "text": "Tamchyna and Bojar 2013;", "ref_id": "BIBREF28" }, { "start": 477, "end": 498, "text": "Burlot and Yvon 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Today machine translation is experiencing a paradigm shift from (phrase-based) statistical machine translation (SMT) to neural machine translation (NMT). The first results obtained in recent years are promising, as can be seen from the results of WMT 2016 (Bojar et al., 2016) and WMT 2017 (Bojar et al., 2017) .", "cite_spans": [ { "start": 259, "end": 279, "text": "(Bojar et al., 2016)", "ref_id": null }, { "start": 293, "end": 313, "text": "(Bojar et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As NMT becomes more and more popular, the question of what we can expect from NMT in terms of quality becomes very important. A recent analysis of English-German SMT and NMT outputs on manual transcripts of short speeches showed that NMT can decrease the post-editing effort (Bentivogli et al., 2016) . A comparison of NMT and SMT systems for nine language directions (English to and from Czech, German, Romanian, Russian, and English to Finnish) on news stories made by Toral and S\u00e1nchez-Cartagena (2017) showed that translations produced by NMT systems are more fluent and more accurate in terms of word order than translations produced by SMT systems. By analyzing manually error-annotated outputs of generic English-Croatian MT systems, Klubi\u010dka et al. (2017) found that NMT handles all types of agreement better than SMT (including factored models).", "cite_spans": [ { "start": 276, "end": 301, "text": "(Bentivogli et al., 2016)", "ref_id": "BIBREF1" }, { "start": 717, "end": 739, "text": "Klubi\u010dka et al. (2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we delve further into analyzing the strengths and weaknesses of NMT from the perspective of translation quality and the needs of the localization industry. We analyze translations of good quality domain-specific (medicine-related) English-Latvian SMT and NMT systems that were trained on a rather small (ca. 325K sentences) data set. The target language -Latvian -is a morphologically rich under-resourced language (about 1.5 million speakers). As it is a synthetically inflected language, words change their form according to their grammatical function. In Latvian only half of the word endings are unambiguous, while for the rest, multiple base forms may be derived from the inflected form (Skadi\u0146a et al., 2012) .", "cite_spans": [ { "start": 707, "end": 729, "text": "(Skadi\u0146a et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We analyze outputs of NMT and SMT systems in a post-editing (PE) scenario. Data on PE time, keystrokes, and typical operations were collected during the PE process. Analysis of these data allowed us to conclude that both approaches (SMT and NMT) are comparably successful, allowing for an increase in translator productivity, with the NMT system showing slightly worse results.
We believe that the reason translations from the SMT system are better in our case is that, from the small amount of data, SMT learns the terminology and phrases specific to the particular narrow domain better. The situation could be different for broad domain MT systems, as can be seen from the recent WMT 2017 English-Latvian news domain results, where NMT and hybrid approaches were better (Bojar et al., 2017; Pinnis et al., 2017) .", "cite_spans": [ { "start": 770, "end": 790, "text": "(Bojar et al., 2017;", "ref_id": "BIBREF4" }, { "start": 791, "end": 811, "text": "Pinnis et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition, for a small sub-set of the MT system translations, manual error annotation was performed. This allowed us to identify the main error categories for each MT system. Through analysis of annotated errors, we found that NMT translations are more fluent than SMT translations, and that NMT produces significantly fewer typography errors than SMT. At the same time, errors related to accuracy, especially mistranslation and omission errors, occur more often in NMT outputs. Word form errors, which reflect the morphological richness of Latvian, are slightly less frequent in NMT outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Questions on how to evaluate the quality and usefulness of machine translation have been studied for several decades. For localization industry needs, MT quality and PE productivity have been analyzed by Flournoy and Duran (2009) ; Groves and Schmidtke (2009) ; Plitt and Masselot (2010) ; Skadi\u0146\u0161 et al. (2011) ; Pinnis et al. (2016) and others. These studies report a significant productivity increase when good quality SMT systems are used. Recently, for English-Spanish, Sanchez-Torron and Koehn (2016) reported that \"for 1-point increase in BLEU, there is a PE time decrease of 0.16 seconds per word, about 3-4%\".", "cite_spans": [ { "start": 204, "end": 229, "text": "Flournoy and Duran (2009)", "ref_id": "BIBREF9" }, { "start": 232, "end": 259, "text": "Groves and Schmidtke (2009)", "ref_id": "BIBREF10" }, { "start": 262, "end": 287, "text": "Plitt and Masselot (2010)", "ref_id": "BIBREF19" }, { "start": 290, "end": 311, "text": "Skadi\u0146\u0161 et al. (2011)", "ref_id": "BIBREF26" }, { "start": 314, "end": 334, "text": "Pinnis et al. (2016)", "ref_id": "BIBREF17" }, { "start": 491, "end": 503, "text": "Koehn (2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Several studies have recently compared SMT and NMT systems. Bentivogli et al. (2016) conducted a detailed analysis of SMT and NMT output for the English-German language pair on translations of manual transcripts of TED talks 1 . They found that NMT decreases post-editing effort, but degrades faster than SMT for longer sentences. They also found that NMT output contains fewer morphology and lexical errors, and substantially fewer word order errors. Toral and S\u00e1nchez-Cartagena (2017) compared NMT and SMT systems submitted to the WMT16 news translation task for nine translation directions (English to and from Czech, German, Romanian, Russian, and English to Finnish). The authors found that the translations produced by NMT systems were more fluent and more accurate in terms of word order than translations produced by SMT systems.
They observed that NMT systems are also more accurate at producing inflected forms, but they perform poorly when translating very long sentences.", "cite_spans": [ { "start": 60, "end": 84, "text": "Bentivogli et al. (2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "However, when Farajian et al. (2017) compared the performance of generic English-French NMT and SMT systems that were trained on a generic parallel corpus composed of data from different domains, they found that on such multi-domain data SMT outperforms its neural counterpart. Moreover, Castilho et al. (2017) , in their study in which human evaluators compared NMT and SMT output for a range of language pairs, reported mixed results from the human evaluation. Similarly to the previous authors, they reported an increase in fluency, but inconsistent results for adequacy (the neural model showed a greater number of errors of omission, addition, and mistranslation) for NMT when compared to SMT. They argue that, although \"NMT shows significant improvements for some language pairs and specific domains, there is still much room for research and improvement before broad generalizations can be made.\"", "cite_spans": [ { "start": 285, "end": 307, "text": "Castilho et al. (2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Analysis of NMT and SMT errors was recently made by Klubi\u010dka et al. (2017) for English-Croatian MT systems. The authors analyzed manual error annotations of SMT and NMT system translations in the news domain and concluded that the NMT system reduces the errors produced by the SMT system by 54%.", "cite_spans": [ { "start": 52, "end": 74, "text": "Klubi\u010dka et al. (2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The SMT and NMT systems were trained on the parallel corpus from the European Medicines Agency (EMEA), which is a part of the OPUS corpus (Tiedemann, 2009) , and the latest documents from the EMEA website (years 2009-2014) 2 . Prior to the training of the MT systems, we preprocessed the training data using tools for corpora cleaning, filtering, non-translatable token (e.g., URL, e-mail address, different code, etc.) identification, tokenization, and true-casing. The statistics of the training corpora before and after preprocessing are given in Table 1 .", "cite_spans": [ { "start": 136, "end": 153, "text": "(Tiedemann, 2009)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 548, "end": 555, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data and MT Systems", "sec_num": "3" }, { "text": "The SMT system is a standard phrase-based system that was trained on the Tilde MT platform (Vasi\u013cjevs et al., 2012) with Moses (Koehn et al., 2007) . The system features a 7-gram translation model and a 5-gram language model.
The language model was trained with KenLM (Heafield, 2011) .", "cite_spans": [ { "start": 91, "end": 115, "text": "(Vasi\u013cjevs et al., 2012)", "ref_id": "BIBREF31" }, { "start": 247, "end": 263, "text": "(Heafield, 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Machine Translation System", "sec_num": "3.1" }, { "text": "The system was tuned with MERT (Bertoldi et al., 2009) using a held-out set of 2,000 sentence pairs.", "cite_spans": [ { "start": 31, "end": 54, "text": "(Bertoldi et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Machine Translation System", "sec_num": "3.1" }, { "text": "We used the sub-word neural machine translation toolkit Nematus (Sennrich et al., 2017) for training the NMT system. The toolkit allows training attention-based encoder-decoder models with gated recurrent units in the recurrent layers. For splitting words into sub-word units, we used the byte pair encoding tools from the subword-nmt toolkit (Sennrich et al., 2015) . The NMT system was trained using a vocabulary of 40,000 word parts (39,500 for byte pair encoding), a projection (embedding) layer of 500 dimensions, recurrent units of 1024 dimensions, a batch size of 20, and dropout enabled. All other parameters were set to the default parameters as used by the developers of Nematus for their WMT 2016 submissions (Sennrich et al., 2016) .", "cite_spans": [ { "start": 64, "end": 87, "text": "(Sennrich et al., 2017)", "ref_id": "BIBREF21" }, { "start": 339, "end": 361, "text": "(Sennrich et al., 2015", "ref_id": "BIBREF22" }, { "start": 715, "end": 738, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation System", "sec_num": "3.2" }, { "text": "The SMT and NMT systems were evaluated on a held-out set of 1,000 randomly selected sentence pairs. The automatic evaluation results are given in Table 2. The results show that the SMT system achieves better results than the NMT system. This could be explained by the relatively small size of the parallel corpus and the very narrow domain, i.e., from the small amount of data, SMT learns the terminology and phrases specific to the particular narrow domain better. When translation is performed into a morphologically rich language, such as Latvian, automatic metrics (e.g., the BLEU score) are not always good indicators of translation quality. Table 3 illustrates a case where both translations are of the same quality but, because of a different word order, the SMT translation received 41.38 BLEU points, while the NMT translation received only 24.42 points. To validate the automatic evaluation results, we performed a small blind comparative evaluation task. The task was performed by five professional translators who evaluated 198 segments in total. The results of the comparative evaluation show that the translations of the SMT system are preferred more often by evaluators than the translations of the NMT system (see Figure 1 ). However, the difference is not statistically significant according to the methodology by Skadi\u0146\u0161 et al. (2010) . Therefore, both systems were further used in the post-editing and error annotation experiments.", "cite_spans": [ { "start": 1310, "end": 1331, "text": "Skadi\u0146\u0161 et al.
(2010)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 641, "end": 648, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1209, "end": 1217, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "MT System Evaluation", "sec_num": "3.3" }, { "text": "4 What Can Be Learned from Post-edits?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT System Evaluation", "sec_num": "3.3" }, { "text": "For post-editing, we compiled a list of 22,500 segments (360,000 words) from EMEA documents. Then, we split the list into documents consisting of 100 segments so that the original sequence of sentences is preserved, and translated the documents At first, translators were asked to post-edit SMT translations. Then, three months later, they were asked to post-edit NMT translations. For the NMT post-editing task, the documents were redistributed to translators, to ensure that each translator has different set of documents in SMT and NMT post-editing tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing process", "sec_num": "4.1" }, { "text": "We asked translators to post-edit translated segments with the post-editing tool PET (Aziz et al. 2012) . It allowed us to track the time spent on each segment and to log all keystrokes that the translator performed while post-editing each segment. Translators were asked not to spend excessive amounts of time on each segment because the quality expectations were not \"human translation quality\", but rather \"post-editing quality\".", "cite_spans": [ { "start": 85, "end": 103, "text": "(Aziz et al. 2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Post-editing process", "sec_num": "4.1" }, { "text": "To assist post-editing, translators were provided with an automatically extracted in-domain term collection that was integrated into PET and provided translation suggestions for known terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing process", "sec_num": "4.1" }, { "text": "After post-editing each segment, translators were asked to evaluate the quality of the MT translation, marking it as one of the following: \"near perfect\", \"very good\", \"poor\", and \"very poor\". If the translator did not apply any changes, the system automatically assigned the highest quality rating -\"Unchanged\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing process", "sec_num": "4.1" }, { "text": "Five professional translators were involved in the SMT post-editing task and seven in the NMT post-editing task. Finally, we asked the translators who participated in both tasks (4 in total) to translate two documents without pre-translated segments in order to measure each translator's pure translation productivity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editing process", "sec_num": "4.1" }, { "text": "Most of the translators involved in this experiment post-edited 20 documents (in each post-editing Figure 2 : Distribution of rankings for MT segments task). To perform a fair comparison between SMT and NMT post-editing tasks, we limit our analysis to the first 20 documents post-edited by each translator participating in both post-editing tasks. We perform the analysis only on segments that were not found in the MT system training data (approximately 36% of segments were discarded). The statistics of the post-edited data that are used for the further analysis is given in Table 4 . 
We start the analysis by examining the MT quality assessments produced by translators during post-editing. Figure 2 summarizes the distribution of rankings for MT segments, showing that the SMT system produced a larger proportion of near perfect and perfect translations than the NMT system -50.2% compared to just 39.3%.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 2", "ref_id": null }, { "start": 578, "end": 585, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 699, "end": 707, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Post-editing Results", "sec_num": "4.2" }, { "text": "The detailed logs of each translator's work allowed us to measure the time spent on post-editing in three distinct intervals: the amount of time that elapsed between the appearance of an MT segment and the first click, or \"reading time\"; the amount of time between the first edit and approval of the segment, or \"editing time\"; and the amount of time spent between approval of the segment and completion of the quality assessment, referred to as \"assessment time\". The results of the log data analysis in Figure 3 show that on average it takes 30% more time for translators to start editing SMT translations. It is also obvious that editing of good, very good and near perfect SMT translations requires 16-62% more time than editing of NMT translations. However, the situation is opposite for poor and very poor translations -it requires 3-25% more time to post-edit NMT translations. This difference is more noticeable in Figure 4 (segment count and editing time distribution for different quality MT segments), which shows that post-editing poor and very poor NMT translations (24% of all post-edited NMT translations) required more than half of the editing time (55.1%). In comparison, poor SMT translations made up a smaller share of the data (16.8% of all post-edited SMT translations). In terms of productivity (see Figure 5 ), it is evident that both tasks (SMT and NMT post-editing) yield higher productivity than pure translation. However, the productivity is higher for post-editing SMT translations (104% compared to 94%).", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 4", "ref_id": null }, { "start": 590, "end": 598, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1001, "end": 1009, "text": "Figure 4", "ref_id": null }, { "start": 1299, "end": 1307, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Post-editing Results", "sec_num": "4.2" }, { "text": "When analyzing the effect of the length of segments on productivity (tokens translated/post-edited per hour), the results in Figure 6 show that there is an obvious decrease in post-editing productivity for longer segments, with the NMT post-editing productivity decreasing faster than the SMT post-editing productivity. It is interesting that there is almost no change in productivity when translating without MT support.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Post-editing Results", "sec_num": "4.2" }, { "text": "The information on the time spent on each segment allows us to analyze the relationship between the post-editing productivity and the post-editing effort, which is expressed with the help of the Human-targeted Translation Edit Rate (HTER; Snover et al. 2006) .
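HTER compares the raw MT output against its own post-edited version, so it directly measures how much the translator had to change. A minimal sketch of the computation (our simplification in Python: plain word-level edit distance normalized by post-edit length, ignoring the block shifts that full TER also allows; the example segments are invented):

```python
# Word-level Levenshtein distance between MT output and post-edit.
def edit_distance(hyp, ref):
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(hyp)][len(ref)]

def hter(mt_output: str, post_edit: str) -> float:
    """Edits needed to turn the MT output into its post-edited version,
    normalized by the length of the post-edited segment."""
    hyp, ref = mt_output.split(), post_edit.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

# One substitution in a five-token post-edit -> HTER = 0.2
print(hter("z\u0101les j\u0101lieto divas reizes dien\u0101",
           "z\u0101les j\u0101lieto tr\u012bs reizes dien\u0101"))
```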
Figure 7 depicts the average productivity (tokens translated/post-edited per hour; y axis) at different MT suggestion quality thresholds (HTER; x axis), i.e., the average productivity for different MT translation quality intervals; the figure also marks the regions where MT has a minimal effect on productivity and where MT decreases productivity. It shows that we can identify average MT system quality thresholds at which post-editing becomes productive (HTER of 0.4 or less) and at which it stops being productive (HTER of 0.7 or higher). The average HTER scores of the SMT and NMT systems are 0.22 and 0.31, respectively. The figure also shows that there is little difference between SMT and NMT post-editing, with the NMT post-editing being faster at individual quality levels. Still, because the NMT system produced more poor translations, the overall post-editing productivity is higher for the SMT post-editing task.", "cite_spans": [ { "start": 236, "end": 255, "text": "Snover et al. 2006)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 258, "end": 266, "text": "Figure 7", "ref_id": null }, { "start": 351, "end": 359, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Post-editing Results", "sec_num": "4.2" }, { "text": "To validate whether the post-edits are of good quality, we performed a quality assessment of the post-edits according to the LISA Quality Assurance model (http://web.archive.org/web/20080124014404/http://www.lisa.org/products/qamodel/) . The quality assessment was performed by professional editors from our localization department. The results in Figure 8 show that even though the task for translators was to perform light post-editing, the quality of the post-edited translations is rated as excellent (i.e., the average error score for both SMT and NMT post-edits is below 10 per 1000 words).", "cite_spans": [], "ref_spans": [ { "start": 267, "end": 275, "text": "Figure 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Post-editing Results", "sec_num": "4.2" }, { "text": "The aim of the error annotation task was to identify common and specific errors for both MT architectures and their influence on the overall quality of MT output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT Error Annotation", "sec_num": "5" }, { "text": "For error annotation (EA), 1800 English segments and their translations into Latvian by the SMT and NMT systems were selected. Only translations that were marked as \"very good\" during post-editing for both MT systems were included. The main reason for including only segments that have good translations was to avoid wrong annotations caused by very bad input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Annotation Task", "sec_num": "5.1" }, { "text": "The error classification used in this task is based on the Multidimensional Quality Metrics (MQM; Lommel et al., 2014) . More specifically, the subset defined by Burchardt and Lommel (2014) was used. In this classification, errors are divided into three top categories: accuracy, fluency, and terminology. These top level categories then include more detailed categories from the MQM issue type hierarchy.", "cite_spans": [ { "start": 90, "end": 95, "text": "(MQM;", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Error Annotation Task", "sec_num": "5.1" }, { "text": "The EA was performed four months after finishing both post-editing tasks.
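The classification just described can, for illustration, be written down as a small data structure (a sketch only; the real MQM subset contains further issue types, and the grammar-related issues form a deeper hierarchy than this flat listing suggests):

```python
# Illustrative sketch of the error taxonomy used during annotation.
# Only the (sub)categories explicitly discussed in this paper are
# listed; "word form" and "agreement" actually sit below "grammar"
# in the MQM issue type hierarchy.
MQM_SUBSET = {
    "accuracy": ["mistranslation", "omission", "addition"],
    "fluency": ["grammar", "word form", "agreement",
                "word order", "typography", "spelling"],
    "terminology": [],
}

def top_category(issue_type: str) -> str:
    """Map a fine-grained issue type back to its top-level category."""
    for top, subs in MQM_SUBSET.items():
        if issue_type == top or issue_type in subs:
            return top
    raise KeyError(issue_type)

assert top_category("omission") == "accuracy"
```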
Two translators, who participated in both post-editing tasks, were involved to ensure consistency between the post-editing and error annotation tasks and to avoid a situation where translators annotate errors that they were not asked to correct during post-editing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Annotation Task", "sec_num": "5.1" }, { "text": "The error annotation was performed in the Translate5 platform (http://translate5-metashare.dfki.de). Before translators started the error annotation, they were given a video tutorial and written guidelines, and they were introduced to the decision process. During annotation, translators saw the source segment, MT output, and post-edited MT output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Annotation Task", "sec_num": "5.1" }, { "text": "Each translator annotated 1000 segments translated by the SMT system and the same 1000 segments translated by the NMT system. Although inter-annotator agreement was not our main interest, 200 translations from each system were annotated by both translators. Table 5 : Summary of the error annotation task (count -number of errors for a particular category; total -sum of errors, including subcategories)", "cite_spans": [], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Error Annotation Task", "sec_num": "5.1" }, { "text": "The overall results of the error annotation task are summarized in Table 5 . The results show that although the segments were ranked as good, most of them contain more than one error. The total number of errors is higher for SMT. There are almost three times as many errors related to fluency (77%) as to accuracy (28%) for SMT, while for NMT the fluency errors comprise 55% of errors, but accuracy errors -44%. The complexity of Latvian morphology is a reason why more than 1/4 of all errors are grammar errors (35% for SMT and 27% for NMT); within these, word form errors alone account for almost 1/5 of all errors (21% for SMT and 19% for NMT). For instance, both MT systems generate the wrong form for the word \"aerosols (spray)\" when translating the sentence \"How to use the nasal spray\": the SMT system generates the singular nominative form aerosols (spray), while the NMT system generates the singular genitive form aerosola (spray).", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Observations from the Error Annotation Task", "sec_num": "5.2" }, { "text": "A significant difference between SMT and NMT outputs has been observed for three error subcategories -typography (a subcategory of fluency), mistranslation (a subcategory of accuracy) and omission (a subcategory of accuracy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations from the Error Annotation Task", "sec_num": "5.2" }, { "text": "Typography errors are much more widespread in SMT (21.70%) than in NMT (11%). Usually these are cases where spaces are used incorrectly (e.g., \"beta -2 -agonisti\" instead of \"beta-2-agonisti\" (beta-2-agonists)) or wrong separators appear in numbers (e.g., \"3,644\" instead of \"3644\", or \"0.5\" instead of \"0,5\").
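Both separator patterns are easy to flag mechanically; a minimal heuristic sketch follows (our illustration, not part of the annotation pipeline; the regular expressions oversimplify and would also flag some legitimate strings, such as ranges written with spaced hyphens):

```python
import re

# Heuristic checks for the two typography patterns discussed above.
# Latvian uses a decimal comma, so a period between digits in the
# target text is suspicious; likewise whitespace next to a hyphen.
HYPHEN_SPACE = re.compile(r"\w\s+-|-\s+\w")   # e.g. "beta -2 -agonisti"
DECIMAL_PERIOD = re.compile(r"\d\.\d")        # e.g. "0.5" instead of "0,5"

def typography_flags(segment: str) -> list:
    flags = []
    if HYPHEN_SPACE.search(segment):
        flags.append("space next to a hyphen")
    if DECIMAL_PERIOD.search(segment):
        flags.append("period used as a decimal separator")
    return flags

print(typography_flags("Deva ir 0.5 ml"))  # ['period used as a decimal separator']
```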
Such typography errors, especially wrong separators, are infrequent in NMT translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations from the Error Annotation Task", "sec_num": "5.2" }, { "text": "The Latvian language has a very rich, morphology-based word-building potential (words are usually built by adding affixes to the stem). This feature resulted in a high number (19%) of mistranslations from the NMT system. Typical cases of mistranslation from the NMT system include the incorrect translation of numbers (e.g., 30 July 2012 is translated as 2008. gada 30. j\u016blijs), terms (e.g., drop (piliens) is translated as injekcija (injection)) and named entities (e.g., Naglazyme (Naglazyme) is translated as MabCampath).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations from the Error Annotation Task", "sec_num": "5.2" }, { "text": "Latvian also has a relatively free word order. In the case of a formal, narrow domain, where the word order is usually strict, free word order has a rather small influence even for the SMT system (9% of errors), while for more general systems it could have a much greater impact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations from the Error Annotation Task", "sec_num": "5.2" }, { "text": "Errors of omission are much more frequent for NMT (15%) than for SMT outputs (10%). NMT also produces fewer (4%) word order errors than SMT (9%), while SMT has fewer (8%) spelling errors than NMT (11%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations from the Error Annotation Task", "sec_num": "5.2" }, { "text": "Although the aim of this research was not to study consistency between annotations, but to identify and analyze the main error categories, 200 segments translated by the SMT and NMT systems were annotated by two translators. The decision to have only two annotators was thoroughly debated by a number of leading MT researchers in the consortium of the QT21 project 5 . It was agreed that two annotators are enough to reveal inconsistencies and issues and to check for a common understanding of the annotation task. The inter-annotator agreement is more like a sanity check for the fine-grained annotation levels (whether annotators have a common understanding or not). Table 6 presents a summary of the errors annotated in these segments.", "cite_spans": [], "ref_spans": [ { "start": 650, "end": 657, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Inter-annotator Agreement", "sec_num": "5.3" }, { "text": "Similarly to the whole error annotation task, slightly more errors are found in the SMT system's output. Table 6 also confirms the finding from the overall error annotation task that NMT produces fewer typography and word order errors than SMT, but more mistranslation and omission errors.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Inter-annotator Agreement", "sec_num": "5.3" }, { "text": "There are several error categories where the translators had different opinions about their applicability. The table clearly demonstrates that the most complicated case was the identification of a correct subcategory for wrong word form errors.
Annotator A1 mostly assigned the top category \"word form\" to such errors, while annotator A2 marked them as agreement errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-annotator Agreement", "sec_num": "5.3" }, { "text": "Another case of significant disagreement between annotators can be observed for fluency errors in the NMT post-editing task. As there was no consistent correspondence in the error categories assigned by annotator A2 for cases where annotator A1 marked fluency errors, we asked annotator A1 to explain her reasoning. She told us that she marked fluency errors where a post-editor had applied just stylistic corrections during post-editing. After inspecting these cases, we agreed with her explanation. For inter-annotator agreement, we calculated free-marginal kappa under three different conditions (see Table 7 , which reports inter-annotator agreement as free-marginal kappa on the 200 segment data sets): perfect match analysis (i.e., by taking the precise positions and (sub)categories of errors into account), error count analysis (i.e., by ignoring error positions), and error presence analysis (i.e., by just looking at whether both annotators identified that a segment contains a certain (sub)category of errors) 6 . The results show that when taking positions into account, there is just slight agreement between the annotators. This is explained by the different understanding of where errors need to be marked: one translator annotated errors at the character level, while the other -at the token level. For instance, in the case of wrong separators in numbers (e.g., 7.5), one annotator marked only the punctuation mark, while the other -the whole number. If we analyze the agreement on just the error count and error presence levels, we see that the annotators reached moderate agreement for the annotation of errors for the SMT system's translations, but only fair agreement for the NMT system's translations. This is mainly due to the disagreement on how to annotate fluency errors. The inter-annotator agreement scores highlight the necessity for improvements in the general guidelines to mitigate the potential for disagreement. That being said, the inter-annotator agreement at the higher error levels (i.e., if we do not split errors up into 4 levels of sub-categories, but analyze only the top 2 levels) is good (over 0.6) for SMT and moderate (over 0.4) for NMT.", "cite_spans": [], "ref_spans": [ { "start": 604, "end": 611, "text": "Table 7", "ref_id": null }, { "start": 1257, "end": 1264, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Inter-annotator Agreement", "sec_num": "5.3" }, { "text": "In this paper, we presented an analysis of narrow domain English-Latvian SMT and NMT systems that were trained on a rather small in-domain corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Translations of both systems were post-edited by professional translators and ranked depending on the complexity of editing. 83% of SMT translations and 73% of NMT translations were ranked as perfect, near perfect or very good, thus confirming the fact that in-domain MT systems can produce good quality translations even when the amount of training data is limited. The analysis of post-edited data allowed us to conclude that both approaches allow for an increase in translator productivity, with the NMT system showing slightly worse results in general, but better results for good quality MT output.
We believe that the lower results for the NMT system are linked to the relatively small size of the parallel corpus and the narrow domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Through analysis of the manually annotated errors, we found that the SMT system produced almost three times as many errors related to fluency (77%) as related to accuracy (28%), while for the NMT system the fluency errors comprise 55% of all errors, but accuracy errors -44%. In terms of error subcategories, widespread errors for both systems are grammar errors (35% for SMT and 27% for NMT), especially wrong word form errors (21% for SMT and 19% for NMT), indicating that morphologically rich languages, e.g., Latvian, are problematic for both MT approaches, although the situation improves slightly with NMT. A significant difference between SMT and NMT outputs has been observed for three error subcategories -typography (22% for SMT and 11% for NMT), mistranslation (7% for SMT and 19% for NMT) and omission (10% for SMT and 15% for NMT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The obtained results show that in the case of a narrow domain, if MT systems are trained on a small amount of data, the SMT system performs better than the NMT system. The reason why the SMT system is better in our case is that, from the small amount of data, SMT learns the terminology and phrases specific to the particular narrow domain better. The situation differs for broad domain MT systems, as has been demonstrated by the recent WMT 2017 English-Latvian news domain results, where NMT and hybrid approaches were better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://www.ted.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Free-marginal kappa is interpreted as: 0.01-0.20 = slight agreement, 0.21-0.40 = fair agreement, 0.41-0.60 = moderate agreement, 0.61-0.80 = substantial agreement, 0.81-1.00 = almost perfect agreement (Landis and Koch, 1977)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Tilde's Localization Department for the hard work they did to prepare material for the analysis presented in this paper. The work within the QT21 project has received funding from the European Union under grant agreement n \u2022 645452. The research has been supported by the ICT Competence Centre (www.itkc.lv) within the project \"2.2. Prototype of a Software and Hardware Platform for Integration of Machine Translation in Corporate Infrastructure\" of EU Structural funds, ID n \u2022 1.2.1.1/16/A/007.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "PET: a tool for post-editing and assessing machine translation", "authors": [ { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Sheila Cm De", "middle": [], "last": "Sousa", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC12)", "volume": "", "issue": "", "pages": "3982--3987", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilker Aziz, Sheila CM De Sousa, and Lucia Specia. 2012.
PET: a tool for post-editing and assessing machine translation. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC12), pages 3982-3987.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural versus phrase-based machine translation quality: a case study", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "257--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrase-based machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 257-267.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improved Minimum Error Rate Training in Moses", "authors": [ { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Jean-Baptiste", "middle": [], "last": "Fouet", "suffix": "" } ], "year": 2009, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "91", "issue": "1", "pages": "7--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola Bertoldi, Barry Haddow, and Jean-Baptiste Fouet. 2009. Improved Minimum Error Rate Training in Moses. The Prague Bulletin of Mathematical Linguistics, 91(1):7-16.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Findings of the 2016 conference on machine translation (wmt16)", "authors": [ { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation (wmt16).
Proceedings of WMT.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Findings of the 2017 conference on machine translation (wmt17)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "2", "issue": "", "pages": "169--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (wmt17). In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 169-214, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Practical guidelines for the use of mqm in scientific research on translation quality. Preparation and Launch of a Large-scale Action for Quality Translation Technology, report", "authors": [ { "first": "Aljoscha", "middle": [], "last": "Burchardt", "suffix": "" }, { "first": "Arle", "middle": [], "last": "Lommel", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aljoscha Burchardt and Arle Lommel. 2014. Practical guidelines for the use of mqm in scientific research on translation quality. Preparation and Launch of a Large-scale Action for Quality Translation Technol- ogy, report, page 19.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Morphologyaware alignments for translation to and from a synthetic language", "authors": [ { "first": "Franck", "middle": [], "last": "Burlot", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2015, "venue": "Proc. IWSLT", "volume": "", "issue": "", "pages": "188--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franck Burlot and Fran\u00e7ois Yvon. 2015. Morphology- aware alignments for translation to and from a syn- thetic language. In Proc. IWSLT, pages 188-195.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Is neural machine translation the new state of the art? 
The Prague Bulletin of Mathematical Linguistics", "authors": [ { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" }, { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Gaspari", "suffix": "" }, { "first": "Iacer", "middle": [], "last": "Calixto", "suffix": "" }, { "first": "John", "middle": [], "last": "Tinsley", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2017, "venue": "", "volume": "108", "issue": "", "pages": "109--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheila Castilho, Joss Moorkens, Federico Gaspari, Iacer Calixto, John Tinsley, and Andy Way. 2017. Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguis- tics, 108(1):109-120.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Neural vs. phrase-based machine translation in a multi-domain scenario", "authors": [ { "first": "Marco", "middle": [], "last": "M Amin Farajian", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Turchi", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M Amin Farajian, Marco Turchi, Matteo Negri, Nicola Bertoldi, and Marcello Federico. 2017. Neural vs. phrase-based machine translation in a multi-domain scenario. EACL 2017, page 280.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Machine translation and document localization at adobe: From pilot to production. MT Summit XII: proceedings of the twelfth Machine Translation Summit", "authors": [ { "first": "Raymond", "middle": [], "last": "Flournoy", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Duran", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "425--428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond Flournoy and Christine Duran. 2009. Ma- chine translation and document localization at adobe: From pilot to production. MT Summit XII: proceedings of the twelfth Machine Translation Summit, pages 425-428.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Identification and analysis of post-editing patterns for mt", "authors": [ { "first": "Declan", "middle": [], "last": "Groves", "suffix": "" }, { "first": "Dag", "middle": [], "last": "Schmidtke", "suffix": "" } ], "year": 2009, "venue": "Proceedings of MT Summit", "volume": "12", "issue": "", "pages": "429--436", "other_ids": {}, "num": null, "urls": [], "raw_text": "Declan Groves and Dag Schmidtke. 2009. Identifica- tion and analysis of post-editing patterns for mt. In Proceedings of MT Summit, volume 12, pages 429- 436.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "KenLM : Faster and Smaller Language Model Queries", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "187--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. KenLM : Faster and Smaller Language Model Queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, 2009, pages 187-197. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Fine-grained human evaluation of neural versus phrase-based machine translation", "authors": [ { "first": "Filip", "middle": [], "last": "Klubi\u010dka", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" }, { "first": "M", "middle": [], "last": "V\u00edctor", "suffix": "" }, { "first": "", "middle": [], "last": "S\u00e1nchez-Cartagena", "suffix": "" } ], "year": 2017, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "108", "issue": "1", "pages": "121--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Klubi\u010dka, Antonio Toral, and V\u00edctor M S\u00e1nchez- Cartagena. 2017. Fine-grained human evaluation of neural versus phrase-based machine translation. The Prague Bulletin of Mathematical Linguistics, 108(1):121-132.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Factored translation models", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "868--876", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Hieu Hoang. 2007. Factored trans- lation models. In EMNLP-CoNLL, pages 868-876.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Moses: Open Source Toolkit for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\\vrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\\vrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Trans- lation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstra- tion Sessions, ACL '07, pages 177-180, Strouds- burg, PA, USA. 
Association for Computational Linguistics.

J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, pages 159–174.

Arle Richard Lommel, Aljoscha Burchardt, and Hans Uszkoreit. 2014. Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica: tecnologies de la traducció, 0(12):455–463.

Mārcis Pinnis, Rihards Kalniņš, Raivis Skadiņš, and Inguna Skadiņa. 2016. What Can We Really Learn from Post-editing? In Proceedings of the 12th Conference of the Association for Machine Translation in the Americas (AMTA 2016), vol. 2: MT Users, pages 86–91, Austin, USA. Association for Machine Translation in the Americas.

Mārcis Pinnis, Rihards Krišlauks, Toms Miks, Daiga Deksne, and Valters Šics. 2017. Tilde's Machine Translation Systems for WMT 2017. In Proceedings of the Second Conference on Machine Translation, pages 374–381.

Mirko Plitt and François Masselot. 2010. A productivity test of statistical machine translation post-editing in a typical localisation context. The Prague Bulletin of Mathematical Linguistics, 93:7–16.

Marina Sanchez-Torron and Philipp Koehn. 2016. Machine translation quality and post-editor productivity. In AMTA 2016, page 16.

Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, et al. 2017. Nematus: a toolkit for neural machine translation. arXiv preprint arXiv:1703.04357.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2015), Berlin, Germany. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh Neural Machine Translation Systems for WMT 16. In Proceedings of the First Conference on Machine Translation (WMT 2016), Volume 2: Shared Task Papers.

Raivis Skadiņš, Kārlis Goba, and Valters Šics. 2010. Improving SMT for Baltic Languages with Factored Models. In Human Language Technologies: The Baltic Perspective: Proceedings of the Fourth International Conference, Baltic HLT 2010, volume 219, pages 125–132. IOS Press.

Inguna Skadiņa, Andrejs Veisbergs, Andrejs Vasiļjevs, Tatjana Gornostaja, Iveta Keiša, and Alda Rudzīte. 2012. The Latvian Language in the Digital Age. Springer.

Raivis Skadiņš, Māris Puriņš, Inguna Skadiņa, and Andrejs Vasiļjevs. 2011. Evaluation of SMT in Localization to Under-Resourced Inflected Language. In Proceedings of the 15th International Conference of the European Association for Machine Translation (EAMT 2011), pages 35–40, Leuven, Belgium. European Association for Machine Translation.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas, pages 223–231, Cambridge, MA, USA.

Aleš Tamchyna and Ondřej Bojar. 2013. No Free Lunch in Factored Phrase-Based Machine Translation. In Proceedings of CICLing 2013, volume 7817 of LNCS, pages 210–223, Samos, Greece. Springer-Verlag.

Jörg Tiedemann. 2009. News from OPUS – A Collection of Multilingual Parallel Corpora with Tools and Interfaces. In Recent Advances in Natural Language Processing, volume 5, pages 237–248.

Antonio Toral and Víctor M. Sánchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1063–1073, Valencia, Spain. Association for Computational Linguistics.

Andrejs Vasiļjevs, Raivis Skadiņš, and Jörg Tiedemann. 2012. LetsMT!: a cloud-based platform for do-it-yourself machine translation. In Proceedings of the ACL 2012 System Demonstrations, pages 43–48. Association for Computational Linguistics.

[Figure: Human comparative evaluation results for SMT and NMT systems]
[Figure: Average time in seconds spent on a segment]
[Figure legend: SMT post-editing / NMT post-editing]
[Figure: Individual translator productivity (tokens translated/post-edited per hour)]
[Figure: Translation and post-editing productivity (tokens translated/post-edited per hour) for segments of different length, with linear trendlines]
[Figure: Average error score (per 1000 words)]

[Table: Statistics of the training corpora]
Table: Automatic evaluation results

System   BLEU          NIST         ChrF2
SMT      46.57±1.46    9.45±0.18    0.7586
NMT      38.44±1.62    8.63±0.15    0.7065
Table: Influence of word order on BLEU score for similar translations by the SMT and NMT systems

         Sentence BLEU   Text
Source   -               Seek medical advice straight away if you develop a severe rash, itching or shortness of breath or difficulty breathing.
Human    100.00          Nekavējoties meklējiet medicīnisku palīdzību, ja Jums parādās izsitumi, rodas nieze vai elpas trūkums, vai apgrūtināta elpošana.
SMT      41.38           Nekavējoties meklējiet medicīnisko palīdzību, ja Jums rodas smagi izsitumi, nieze vai elpas trūkums vai apgrūtināta elpošana.
NMT      24.42           Ja Jums rodas smagi izsitumi, nieze vai elpas trūkums vai apgrūtināta elpošana, nekavējoties meklējiet medicīnisko palīdzību.
[Chart: Statistics of post-edited data: proportion of segments rated Unchanged, Near perfect, Very good, Poor, and Very poor for SMT and NMT post-editing]
", "num": null, "html": null, "type_str": "table" }, "TABREF11": { "text": "Error annotation summary for 200 segments annotated by 2 translators (A1 and A2)", "content": "", "num": null, "html": null, "type_str": "table" } } } }