{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:45.103763Z" }, "title": "Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "", "affiliation": { "laboratory": "ADAPT Research Centre", "institution": "", "location": {} }, "email": "lifeng.han@adaptcentre.ie" }, { "first": "Gareth", "middle": [ "J F" ], "last": "Jones", "suffix": "", "affiliation": { "laboratory": "ADAPT Research Centre", "institution": "", "location": {} }, "email": "" }, { "first": "Alan", "middle": [ "F" ], "last": "Smeaton", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": { "settlement": "Dublin", "country": "Ireland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "To facilitate effective translation modeling and translation studies, one of the crucial questions to address is how to assess translation quality. From the perspectives of accuracy, reliability, repeatability and cost, translation quality assessment (TQA) itself is a rich and challenging task. In this work, we present a high-level and concise survey of TQA methods, including both manual judgement criteria and automated evaluation metrics, which we classify into further detailed sub-categories. We hope that this work will be an asset for both translation model researchers and quality assessment researchers. In addition, we hope that it will enable practitioners to quickly develop a better understanding of the conventional TQA field, and to find corresponding closely relevant evaluation solutions for their own needs. This work may also serve inspire further development of quality assessment and evaluation methodologies for other natural language processing (NLP) tasks in addition to machine translation (MT), such as automatic text summarization (ATS), natural language understanding (NLU) and natural language generation (NLG). 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "To facilitate effective translation modeling and translation studies, one of the crucial questions to address is how to assess translation quality. From the perspectives of accuracy, reliability, repeatability and cost, translation quality assessment (TQA) itself is a rich and challenging task. In this work, we present a high-level and concise survey of TQA methods, including both manual judgement criteria and automated evaluation metrics, which we classify into further detailed sub-categories. We hope that this work will be an asset for both translation model researchers and quality assessment researchers. In addition, we hope that it will enable practitioners to quickly develop a better understanding of the conventional TQA field, and to find corresponding closely relevant evaluation solutions for their own needs. This work may also serve inspire further development of quality assessment and evaluation methodologies for other natural language processing (NLP) tasks in addition to machine translation (MT), such as automatic text summarization (ATS), natural language understanding (NLU) and natural language generation (NLG). 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation (MT) research, starting from the 1950s (Weaver, 1955) , has been one of the main research topics in computational linguistics (CL) and natural language processing (NLP), and has influenced and been influenced by several other language processing tasks such as parsing and language modeling. Starting from rulebased methods to example-based, and then statis-tical methods (Brown et al., 1993; Och and Ney, 2003; Chiang, 2005; Koehn, 2010) , to the current paradigm of neural network structures (Cho et al., 2014; Johnson et al., 2016; Vaswani et al., 2017; Lample and Conneau, 2019) , MT quality continue to improve. However, as MT and translation quality assessment (TQA) researchers report, MT outputs are still far from reaching human parity (L\u00e4ubli et al., 2018; L\u00e4ubli et al., 2020; Han et al., 2020a) . MT quality assessment is thus still an important task to facilitate MT research itself, and also for downstream applications. TQA remains a challenging and difficult task because of the richness, variety, and ambiguity phenomena of natural language itself, e.g. the same concept can be expressed in different word structures and patterns in different languages, even inside one language (Arnold, 2003) .", "cite_spans": [ { "start": 59, "end": 73, "text": "(Weaver, 1955)", "ref_id": "BIBREF136" }, { "start": 391, "end": 411, "text": "(Brown et al., 1993;", "ref_id": "BIBREF16" }, { "start": 412, "end": 430, "text": "Och and Ney, 2003;", "ref_id": "BIBREF109" }, { "start": 431, "end": 444, "text": "Chiang, 2005;", "ref_id": "BIBREF29" }, { "start": 445, "end": 457, "text": "Koehn, 2010)", "ref_id": "BIBREF77" }, { "start": 513, "end": 531, "text": "(Cho et al., 2014;", "ref_id": "BIBREF30" }, { "start": 532, "end": 553, "text": "Johnson et al., 2016;", "ref_id": "BIBREF72" }, { "start": 554, "end": 575, "text": "Vaswani et al., 2017;", "ref_id": "BIBREF134" }, { "start": 576, "end": 601, "text": "Lample and Conneau, 2019)", "ref_id": "BIBREF81" }, { "start": 764, "end": 785, "text": "(L\u00e4ubli et al., 2018;", "ref_id": "BIBREF85" }, { "start": 786, "end": 806, "text": "L\u00e4ubli et al., 2020;", "ref_id": "BIBREF98" }, { "start": 807, "end": 825, "text": "Han et al., 2020a)", "ref_id": "BIBREF64" }, { "start": 1215, "end": 1229, "text": "(Arnold, 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we introduce human judgement and evaluation (HJE) criteria that have been used in standard international shared tasks and more broadly, such as NIST (LI, 2005) , WMT (Koehn and Monz, 2006a; Callison-Burch et al., 2007a , 2009 , 2010 , 2012 Bojar et al., 2013 Bojar et al., , 2014 Bojar et al., , 2015 Bojar et al., , 2016 Bojar et al., , 2017 Bojar et al., , 2018 Barrault et al., 2019 Barrault et al., , 2020 , and IWSLT (Eck and Hori, 2005; Paul, 2009; Paul et al., 2010; Federico et al., 2011) . We then introduce automated TQA methods, including the automatic evaluation metrics that were proposed inside these shared tasks and beyond. 
Regarding Human Assessment (HA) methods, we categorise them into traditional and advanced sets, with the first set including intelligibility, fidelity, fluency, adequacy, and comprehension, and the second set including task-oriented, extended criteria, utilizing post-editing, segment ranking, crowd source intelligence (direct assessment), and revisiting traditional criteria.", "cite_spans": [ { "start": 163, "end": 173, "text": "(LI, 2005)", "ref_id": "BIBREF89" }, { "start": 180, "end": 203, "text": "(Koehn and Monz, 2006a;", "ref_id": "BIBREF79" }, { "start": 204, "end": 232, "text": "Callison-Burch et al., 2007a", "ref_id": "BIBREF18" }, { "start": 233, "end": 239, "text": ", 2009", "ref_id": "BIBREF111" }, { "start": 240, "end": 246, "text": ", 2010", "ref_id": "BIBREF77" }, { "start": 247, "end": 253, "text": ", 2012", "ref_id": "BIBREF17" }, { "start": 254, "end": 272, "text": "Bojar et al., 2013", "ref_id": "BIBREF10" }, { "start": 273, "end": 293, "text": "Bojar et al., , 2014", "ref_id": "BIBREF11" }, { "start": 294, "end": 314, "text": "Bojar et al., , 2015", "ref_id": "BIBREF15" }, { "start": 315, "end": 335, "text": "Bojar et al., , 2016", "ref_id": "BIBREF14" }, { "start": 336, "end": 356, "text": "Bojar et al., , 2017", "ref_id": "BIBREF12" }, { "start": 357, "end": 377, "text": "Bojar et al., , 2018", "ref_id": "BIBREF13" }, { "start": 378, "end": 399, "text": "Barrault et al., 2019", "ref_id": "BIBREF7" }, { "start": 400, "end": 423, "text": "Barrault et al., , 2020", "ref_id": null }, { "start": 436, "end": 456, "text": "(Eck and Hori, 2005;", "ref_id": "BIBREF41" }, { "start": 457, "end": 468, "text": "Paul, 2009;", "ref_id": "BIBREF111" }, { "start": 469, "end": 487, "text": "Paul et al., 2010;", "ref_id": "BIBREF112" }, { "start": 488, "end": 510, "text": "Federico et al., 2011)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Regarding automated TQA methods, we classify these into three categories including simple n-gram based word surface matching, deeper linguistic feature integration such as syntax and semantics, and deep learning (DL) models, with the first two regarded as traditional and the last one regarded as advanced due to the recent appearance of DL models for NLP. We further divide each of these three categories into sub-branches, each with a different focus. Of course, this classification does not have clear boundaries. For instance, some automated metrics involve both n-gram word surface similarity and linguistic features. This paper differs from existing works (Dorr et al., 2009; EuroMatrix, 2007) by introducing recent developments in MT evaluation measures, different classifications from manual to automatic evaluation methodologies, an introduction to the more recently developed quality estimation (QE) tasks, and a concise presentation of these concepts.", "cite_spans": [ { "start": 673, "end": 692, "text": "(Dorr et al., 2009;", "ref_id": "BIBREF38" }, { "start": 693, "end": 710, "text": "EuroMatrix, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We hope that our work will shed light on, and offer a useful guide to, quality assessment for both MT researchers and researchers in other relevant NLP disciplines, from the similarity and evaluation point of view, helping them to find suitable quality assessment methods, whether manual or automated. 
This might include, for instance, natural language generation (Gehrmann et al., 2021), natural language understanding (Ruder et al., 2021) and automatic summarization (Mani, 2001; Bhandari et al., 2020).", "cite_spans": [ { "start": 367, "end": 390, "text": "(Gehrmann et al., 2021)", "ref_id": "BIBREF47" }, { "start": 424, "end": 444, "text": "(Ruder et al., 2021)", "ref_id": "BIBREF121" }, { "start": 473, "end": 485, "text": "(Mani, 2001;", "ref_id": "BIBREF101" }, { "start": 486, "end": 508, "text": "Bhandari et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows: Sections 2 and 3 present human assessment and automated assessment methods respectively; Section 4 presents some discussion and perspectives; Section 5 summarizes our conclusions and future work. We also list some further relevant readings in the appendices, such as methods for evaluating TQA itself, MT QE, and mathematical formulas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section we introduce human judgement methods, as reflected in Fig. 1, which categorises these human methods as Traditional and Advanced.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 76, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Human Assessment Methods", "sec_num": "2" }, { "text": "The earliest human assessment methods for MT can be traced back to around 1966. They include the intelligibility and fidelity used by the authors of (Carroll, 1966). (This survey is based on an earlier preprint edition (Han, 2016).) The requirement that a translation is intelligible means that, as far as possible, the translation should read like normal, well-edited prose and be readily understandable in the same way that such a sentence would be understandable if originally composed in the translation language. The requirement that a translation is of high fidelity or accuracy includes the requirement that the translation should, as little as possible, twist, distort, or controvert the meaning intended by the original.", "cite_spans": [ { "start": 193, "end": 204, "text": "(Han, 2016)", "ref_id": "BIBREF63" }, { "start": 205, "end": 220, "text": "(Carroll, 1966)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Intelligibility and Fidelity", "sec_num": "2.1.1" }, { "text": "In the 1990s, the Advanced Research Projects Agency (ARPA) created a methodology to evaluate machine translation systems using the adequacy, fluency and comprehension of the MT output (Church and Hovy, 1991), which was adopted in MT evaluation campaigns including (White et al., 1994). To set up this methodology, the human assessor is asked to look at each fragment, delimited by syntactic constituents and containing sufficient information, and judge its adequacy on a scale of 1 to 5. Results are computed by averaging the judgments over all of the decisions in the translation set.", "cite_spans": [ { "start": 180, "end": 203, "text": "(Church and Hovy, 1991)", "ref_id": "BIBREF31" }, { "start": 255, "end": 275, "text": "(White et al., 1994)", "ref_id": "BIBREF137" } ], "ref_spans": [], "eq_spans": [], "section": "Fluency, Adequacy and Comprehension", "sec_num": "2.1.2" }, { "text": "Fluency evaluation is compiled in the same manner as for adequacy, except that the assessor is to make intuitive judgments on a sentence-by-sentence basis for each translation. 
Human assessors are asked to determine whether the translation is good English without reference to the correct translation. Fluency evaluation determines whether a sentence is well-formed and fluent in context. Comprehension relates to \"Informativeness\", whose objective is to measure a system's ability to produce a translation that conveys sufficient information, such that people can gain necessary information from it. The reference set of expert translations is used to create six questions with six possible answers respectively, including \"none of the above\" and \"cannot be determined\". Bangalore et al. (2000) classified accuracy into several categories including simple string accuracy, generation string accuracy, and two corresponding tree-based accuracy measures. Reeder (2004) found a correlation between fluency and the number of words it takes to distinguish between human translation and MT output.", "cite_spans": [ { "start": 770, "end": 793, "text": "Bangalore et al. (2000)", "ref_id": "BIBREF5" }, { "start": 943, "end": 956, "text": "Reeder (2004)", "ref_id": "BIBREF120" } ], "ref_spans": [], "eq_spans": [], "section": "Fluency, Adequacy and Comprehension", "sec_num": "2.1.2" }, { "text": "The Linguistic Data Consortium (LDC) designed two five-point scales representing fluency and adequacy for the annual NIST MT evaluation workshop. The developed scales became a widely used methodology when manually evaluating MT by assigning values. The five-point scale for adequacy indicates how much of the meaning expressed in the reference translation is also expressed in a translation hypothesis; the second five-point scale indicates how fluent the translation is, involving both grammatical correctness and idiomatic word choices. Specia et al. (2011) conducted a study of MT adequacy and broke it into four levels, from score 4 to 1: highly adequate, where the translation faithfully conveys the content of the input sentence; fairly adequate, where the translation generally conveys the meaning of the input sentence but there are some problems with word order or tense/voice/number, or there are repeated, added or non-translated words; poorly adequate, where the content of the input sentence is not adequately conveyed by the translation; and completely inadequate, where the content of the input sentence is not conveyed at all by the translation.", "cite_spans": [ { "start": 544, "end": 564, "text": "Specia et al. (2011)", "ref_id": "BIBREF128" } ], "ref_spans": [], "eq_spans": [], "section": "Further Development", "sec_num": "2.1.3" }, { "text": "White and Taylor (1998) developed a task-oriented evaluation methodology for Japanese-to-English translation to measure MT systems in light of the tasks for which their output might be used. They seek to associate the diagnostic scores assigned to the output used in the DARPA (Defense Advanced Research Projects Agency) evaluation with a scale of language-dependent tasks, such as scanning, sorting, and topic identification. They develop an MT proficiency metric with a corpus of multiple variants which are usable as a set of controlled samples for user judgments. The principal steps include identifying the user-performed text-handling tasks, discovering the order of text-handling task tolerance, analyzing the linguistic and non-linguistic translation problems in the corpus used in determining task tolerance, and developing a set of source language patterns which correspond to diagnostic target phenomena. 
A brief introduction to task-based MT evaluation work was given in their later work (Doyon et al., 1999). Subsequent work introduced task-based MT output evaluation based on the extraction of three types of elements: who, when, and where; this was later extended to event understanding (Laoudi et al., 2006). King et al. (2003) describe a large range of manual evaluation methods for MT systems which, in addition to the earlier mentioned accuracy, include: suitability, whether even accurate results are suitable in the particular context in which the system is to be used; interoperability, whether the system works with other software or hardware platforms; reliability, i.e., the system does not break down frequently or take a long time to get running again after breaking down; usability, having interfaces that are easy to access, learn and operate, and that look attractive; efficiency, keeping up with the flow of documents to be processed when needed; maintainability, being able to modify the system in order to adapt it to particular users; and portability, whether one version of a system can be replaced by a new version, because MT systems are rarely static and tend to improve over time as resources grow and bugs are fixed.", "cite_spans": [ { "start": 1001, "end": 1021, "text": "(Doyon et al., 1999)", "ref_id": "BIBREF39" }, { "start": 1192, "end": 1213, "text": "(Laoudi et al., 2006)", "ref_id": "BIBREF83" }, { "start": 1216, "end": 1234, "text": "King et al. (2003)", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "Task-oriented", "sec_num": "2.2.1" }, { "text": "One alternative method to assess MT quality is to compare the post-edited correct translation to the original MT output. This type of evaluation is, however, time-consuming and depends on the skills of the human assessor and post-editing performer. One example of a metric designed in this manner is the human translation error rate (HTER) (Snover et al., 2006), which computes the number of editing steps between an automatic translation and a reference translation. Here, a human assessor has to find the minimum number of insertions, deletions, substitutions, and shifts to convert the system output into an acceptable translation. HTER is then defined as the number of editing steps divided by the number of words in the acceptable translation.", "cite_spans": [ { "start": 350, "end": 371, "text": "(Snover et al., 2006)", "ref_id": "BIBREF124" } ], "ref_spans": [], "eq_spans": [], "section": "Utilizing Post-editing", "sec_num": "2.2.3" }, { "text": "In the WMT metrics task, human assessment based on segment ranking was often used. Human assessors were frequently asked to provide a complete ranking over all the candidate translations of the same source segment (Callison-Burch et al., 2011, 2012). In the WMT13 shared tasks (Bojar et al., 2013), five systems were randomised for the assessor to rank. Each time, the source segment and the reference translation were presented together with the candidate translations from the five systems. The assessors ranked the systems from 1 to 5, allowing tied scores. For each ranking, there was the potential to provide as many as 10 pairwise results if there were no ties. The collected pairwise rankings were then used to assign a corresponding score to each participating system to reflect the quality of the automatic translations. 
The assigned scores could also be used to reflect how frequently a system was judged to be better or worse than other systems when they were compared on the same source segment, according to the following formula (a code sketch of this computation is given below):", "cite_spans": [ { "start": 214, "end": 242, "text": "(Callison-Burch et al., 2011", "ref_id": "BIBREF24" }, { "start": 243, "end": 249, "text": ", 2012", "ref_id": "BIBREF17" }, { "start": 278, "end": 298, "text": "(Bojar et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Segment Ranking", "sec_num": "2.2.4" }, { "text": "\\text{score} = \\frac{\\#\\text{better pairwise rankings}}{\\#\\text{total pairwise comparisons} \u2212 \\#\\text{tie comparisons}} (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segment Ranking", "sec_num": "2.2.4" }, { "text": "Given the very low inter-annotator agreement scores reported for the WMT segment ranking task, researchers started to address this issue by exploring new human assessment methods, as well as seeking reliable automatic metrics for segment-level ranking (Graham et al., 2015). Graham et al. (2013) noted that the lower agreement in WMT human assessment might be caused partially by the interval-level scales from which the human assessor must choose when judging the quality of each segment. For instance, the human assessor may face a situation where neither of the two categories they are forced to choose between is preferred. In light of this rationale, they proposed continuous measurement scales (CMS) for human TQA using fluency criteria. This was implemented by introducing the crowdsourcing platform Amazon MTurk, with quality control methods such as the insertion of bad references and repeated items (ask-again), and statistical significance testing. This methodology was reported to improve both intra-annotator and inter-annotator consistency. Detailed quality control methodologies, including statistical significance testing, were documented for direct assessment (DA).", "cite_spans": [ { "start": 248, "end": 269, "text": "(Graham et al., 2015)", "ref_id": "BIBREF50" }, { "start": 272, "end": 292, "text": "Graham et al. (2013)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Crowd Source Intelligence", "sec_num": "2.2.5" }, { "text": "Popovi\u0107 (2020a) criticized the traditional human TQA methods because they fail to reflect real problems in translation, merely assigning scores to and ranking several candidates from the same source. Instead, Popovi\u0107 (2020a) designed a new methodology that asks human assessors to mark all problematic parts of candidate translations, whether words, phrases, or sentences. Two questions typically asked of the assessors related to comprehensibility and adequacy. The first criterion considered whether the translation is understandable, or understandable but with errors; the second criterion measured whether the candidate translation has a different meaning to the original text, or maintains the meaning but with errors. Both criteria take into account whether parts of the original text are missing in the translation. Under a similar experimental setup, Popovi\u0107 (2020b) also summarized the most frequent error types that the annotators recognized as misleading translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revisiting Traditional Criteria", "sec_num": "2.2.6" }, { "text": "Manual evaluation suffers from several disadvantages: it is time-consuming, expensive, not tunable, and not reproducible. 
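Before moving on to automated metrics, the following is a minimal, illustrative Python sketch of the segment-ranking score in Eq. (1) above. The data layout (`rankings` as per-segment dictionaries mapping system name to rank, lower being better) is our own assumption, not the official WMT implementation.

```python
from itertools import combinations
from collections import Counter

def wmt_rank_scores(rankings):
    """Pairwise-ranking score per system, as in Eq. (1):
    wins / (total pairwise comparisons - tie comparisons)."""
    wins, comparisons, ties = Counter(), Counter(), Counter()
    for ranking in rankings:                      # one dict per annotated segment
        for a, b in combinations(ranking, 2):     # every pair of systems
            comparisons[a] += 1
            comparisons[b] += 1
            if ranking[a] == ranking[b]:          # tied ranks drop out of the denominator
                ties[a] += 1
                ties[b] += 1
            elif ranking[a] < ranking[b]:         # lower rank number = judged better
                wins[a] += 1
            else:
                wins[b] += 1
    return {s: wins[s] / (comparisons[s] - ties[s]) for s in comparisons}

# Example: two segments, three systems ranked per segment (1 = best).
print(wmt_rank_scores([{"sysA": 1, "sysB": 2, "sysC": 2},
                       {"sysA": 2, "sysB": 1, "sysC": 3}]))
```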
Due to these aspects, automatic evaluation metrics have been widely used for MT. Typically, these compare the output of MT systems against human reference translations, but there are also some metrics that do not use reference translations. There are usually two ways to offer the human reference translations: either a single reference or multiple references for a single source sentence (Lin and Och, 2004; Han et al., 2012).", "cite_spans": [ { "start": 536, "end": 555, "text": "(Lin and Och, 2004;", "ref_id": "BIBREF93" }, { "start": 556, "end": 573, "text": "Han et al., 2012)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Automated Assessment Methods", "sec_num": "3" }, { "text": "Automated metrics often measure the overlap in words and word sequences, as well as word order and edit distance. We classify these kinds of metrics as \"simple n-gram word surface matching\". Further developed metrics also take linguistic features into account such as syntax and semantics, including POS, sentence structure, textual entailment, paraphrase, synonyms, named entities, multi-word expressions (MWEs), semantic roles and language models. We classify the metrics that utilize these linguistic features as \"Deeper Linguistic Features (aware)\". This classification is only for easier understanding and better organization of the content. It is not easy to separate the two categories clearly, since they sometimes merge with each other; for instance, some metrics from the first category might also use certain linguistic features. Furthermore, we will introduce some recent models that apply deep learning to the TQA framework, as in Fig. 2. Due to space limitations, we present the MT quality estimation (QE) task, which does not rely on reference translations during the automated computing procedure, in the appendices.", "cite_spans": [], "ref_spans": [ { "start": 947, "end": 953, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Automated Assessment Methods", "sec_num": "3" }, { "text": "By calculating the minimum number of editing steps needed to transform the MT output into the reference, Su et al. (1992) introduced the word error rate (WER) metric into MT evaluation. This metric, inspired by the Levenshtein distance (or edit distance), takes word order into account; the operations include insertion (adding a word), deletion (dropping a word) and replacement (or substitution, replacing one word with another), and the score is based on the minimum number of editing steps needed to match two sequences. One of the weak points of the WER metric is the fact that word ordering is not taken into account appropriately: WER penalizes heavily when the word order of the system output translation is \"wrong\" according to the reference, since in the Levenshtein distance mismatches in word order require the deletion and re-insertion of the misplaced words. However, due to the diversity of language expressions, some sentences with so-called \"wrong\" order according to WER also prove to be good translations. To address this problem, the position-independent word error rate (PER) introduced by Tillmann et al. (1997) is designed to ignore word order when matching output and reference. Without taking word order into account, PER counts the number of times that identical words appear in both sentences. Depending on whether the translated sentence is longer or shorter than the reference translation, the remaining words are counted as either insertions or deletions. 
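To make these two edit-distance-based measures concrete, here is a minimal Python sketch (our own illustration, not the original implementations). WER is the standard word-level Levenshtein computation; for PER we use one common formulation based on bag-of-words counts. The same dynamic-programming machinery, extended with block shifts, underlies TER and HTER discussed above and below.

```python
from collections import Counter

def wer(hyp: str, ref: str) -> float:
    """Word error rate: minimum insertions/deletions/substitutions
    (word-level Levenshtein distance) divided by reference length."""
    h, r = hyp.split(), ref.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j - 1] + (r[i - 1] != h[j - 1]),  # substitution/match
                          d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1)                           # insertion
    return d[len(r)][len(h)] / len(r)

def per(hyp: str, ref: str) -> float:
    """One common formulation of the position-independent error rate:
    word order is ignored, so only the bag-of-words difference is penalized."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    matched = sum((h & r).values())              # multiset intersection
    return (max(sum(h.values()), sum(r.values())) - matched) / sum(r.values())

print(wer("the cat sat on mat", "the cat sat on the mat"))     # 1 deletion -> ~0.167
print(per("on the mat the cat sat", "the cat sat on the mat")) # 0.0: same words, reordered
```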
Another way to overcome the excessive penalty on word order in the Levenshtein distance is adding a novel editing step that allows the movement of word sequences from one part of the output to another. This is something a human post-editor would do with the cut-and-paste function of a word processor. In this light, Snover et al. (2006) designed the translation edit rate (TER) metric, which adds block movement (a jumping action) as an editing step. The shift option is performed on a contiguous sequence of words within the output sentence. For the edits, the cost of a block movement, over any number of contiguous words and any distance, is equal to that of a single-word operation, such as insertion, deletion and substitution.", "cite_spans": [ { "start": 88, "end": 104, "text": "Su et al. (1992)", "ref_id": "BIBREF131" }, { "start": 1047, "end": 1069, "text": "Tillmann et al. (1997)", "ref_id": "BIBREF132" }, { "start": 1744, "end": 1764, "text": "Snover et al. (2006)", "ref_id": "BIBREF124" } ], "ref_spans": [], "eq_spans": [], "section": "Levenshtein Distance", "sec_num": "3.1.1" }, { "text": "The widely used BLEU evaluation metric (Papineni et al., 2002) is based on the degree of n-gram overlap between the strings of words produced by the MT output and the human translation references at the corpus level. BLEU calculates precision scores for n-grams of size 1 to 4, multiplied together with a brevity penalty (BP) coefficient. If there are multiple references for each candidate sentence, then the reference whose length is nearest to that of the candidate sentence is selected as the effective one. In the BLEU metric, the n-gram precision weight \u03bb_n is usually selected as a uniform weight. However, the 4-gram precision value can be very low or even zero when the test corpus is small. To weight more heavily those n-grams that are more informative, Doddington (2002) proposes the NIST metric with an information weight added. Furthermore, Doddington (2002) replaces the geometric mean of co-occurrences with the arithmetic average of n-gram counts, extends the n-grams up to 5-grams (N = 5), and selects the average length of the reference translations instead of the nearest length.", "cite_spans": [ { "start": 39, "end": 62, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF110" }, { "start": 753, "end": 770, "text": "Doddington (2002)", "ref_id": "BIBREF37" }, { "start": 845, "end": 862, "text": "Doddington (2002)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Precision and Recall", "sec_num": "3.1.2" }, { "text": "ROUGE (Lin and Hovy, 2003) is a recall-oriented evaluation metric, which was initially developed for summaries, and inspired by BLEU and NIST. ROUGE has also been applied in automated TQA in later work (Lin and Och, 2004).", "cite_spans": [ { "start": 6, "end": 25, "text": "(Lin and Hovy, 2003", "ref_id": "BIBREF92" }, { "start": 202, "end": 221, "text": "(Lin and Och, 2004)", "ref_id": "BIBREF93" } ], "ref_spans": [], "eq_spans": [], "section": "Precision and Recall", "sec_num": "3.1.2" }, { "text": "The F-measure is the combination of precision (P) and recall (R); it was first employed in information retrieval (IR) and later adopted by the information extraction (IE) community, MT evaluation, and others. Turian et al. (2006) carried out experiments to examine how standard measures such as precision, recall and F-measure can be applied to TQA, and showed comparisons of these standard measures with some alternative evaluation methodologies. 
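As a concrete companion to the BLEU description above, here is a compact, illustrative Python sketch of single-reference modified n-gram precision with a brevity penalty. It is a simplification for exposition only: production implementations (e.g. sacreBLEU) add smoothing, tokenization and multi-reference handling that we omit.

```python
import math
from collections import Counter

def ngrams(words, n):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    """Illustrative BLEU: geometric mean of clipped (modified) n-gram
    precisions for n = 1..4, times the brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum((hyp_ng & ref_ng).values())   # counts clipped by the reference
        total = max(sum(hyp_ng.values()), 1)
        if clipped == 0:
            return 0.0      # any zero precision zeroes the geometric mean (unsmoothed)
        log_prec_sum += math.log(clipped / total) / max_n   # uniform weights 1/4
    # Brevity penalty: only punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec_sum)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))  # 1.0
```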
Banerjee and Lavie (2005) designed METEOR as a novel evaluation metric. METEOR is based on the general concept of flexible unigram matching, precision and recall, including the matching of words that are simple morphological variants of each other with identical word stems, and of words that are synonyms of each other. To measure how well-ordered the matched words in the candidate translation are in relation to the human reference, METEOR introduces a penalty coefficient, different to what is done in BLEU, by employing the number of matched chunks.", "cite_spans": [ { "start": 458, "end": 483, "text": "Banerjee and Lavie (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Precision and Recall", "sec_num": "3.1.2" }, { "text": "Correct word order plays an important role in ensuring a high-quality translation output. However, language diversity also allows different appearances or structures of a sentence. How to penalize genuinely wrong word order, i.e. wrongly structured sentences, without penalizing \"correct\" alternative orders, i.e. candidate sentences that have a different word order to the reference but are well structured, has attracted a lot of interest from researchers. In fact, the Levenshtein distance (Section 3.1.1) and n-gram based measures also contain word order information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revisiting Word Order", "sec_num": "3.1.3" }, { "text": "Featuring the explicit assessment of word order and word choice, Wong and Kit (2009) developed the evaluation metric ATEC (assessment of text essential characteristics). This is also based on precision and recall criteria, but with a position-difference penalty coefficient attached. The word choice is assessed by matching word forms at various linguistic levels, including surface form, stem, sound and sense, and further by weighting the informativeness of each word.", "cite_spans": [ { "start": 65, "end": 87, "text": "Wong and yu Kit (2009)", "ref_id": "BIBREF139" } ], "ref_spans": [], "eq_spans": [], "section": "Revisiting Word Order", "sec_num": "3.1.3" }, { "text": "Partially inspired by this, our work LEPOR (Han et al., 2012) is designed as a combination of augmented evaluation factors, including an n-gram based word order penalty in addition to precision, recall, and an enhanced sentence-length penalty. The LEPOR metric (including hLEPOR) achieved top performance on the English-to-other (Spanish, German, French, Czech and Russian) language pairs in the ACL-WMT13 metrics shared task for system-level evaluation (Han et al., 2013d). The n-gram based variant nLEPOR was also analysed by MT researchers as one of the three best-performing segment-level automated metrics (together with METEOR and sentBLEU-MOSES) that correlated with human judgement at a level not significantly outperformed by any other metric, on Spanish-to-English as well as on an aggregated set of all tested language pairs (Graham et al., 2015).", "cite_spans": [ { "start": 43, "end": 61, "text": "(Han et al., 2012)", "ref_id": "BIBREF68" }, { "start": 452, "end": 471, "text": "(Han et al., 2013d)", "ref_id": "BIBREF71" }, { "start": 851, "end": 872, "text": "(Graham et al., 2015)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Revisiting Word Order", "sec_num": "3.1.3" }, { "text": "Although some of the previously outlined metrics incorporate linguistic information, e.g. 
synonyms and stemming in METEOR and part of speech (POS) in LEPOR, the simple n-gram word surface matching methods mainly focus on exact matches of the surface words in the output translation. The advantages of metrics in the first category (simple n-gram word matching) are that they perform well in capturing translation fluency (Lo et al., 2012), are very fast to compute, and have low cost. On the other hand, there are also some weaknesses: for instance, syntactic information is rarely considered, and the underlying assumption that a good translation is one that shares the same surface lexical choices as the reference translations is not justified semantically. Surface lexical similarity does not adequately reflect similarity in meaning. Translation evaluation metrics that reflect meaning similarity need to be based on similarity of semantic structure, and not merely on flat lexical similarity.", "cite_spans": [ { "start": 435, "end": 452, "text": "(Lo et al., 2012)", "ref_id": "BIBREF95" } ], "ref_spans": [], "eq_spans": [], "section": "Deeper Linguistic Features", "sec_num": "3.2" }, { "text": "Syntactic similarity methods usually employ the features of morphological POS information, phrase categories, phrase decompositionality or sentence structure generated by linguistic tools such as a language parser or chunker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Similarity", "sec_num": "3.2.1" }, { "text": "In grammar, a POS is a linguistic category of words or lexical items, which is generally defined by the syntactic or morphological behaviour of the lexical item. Common linguistic categories of lexical items include noun, verb, adjective, adverb, and preposition. To reflect the syntactic quality of automatically translated sentences, researchers employ POS information in their evaluations. Using IBM model 1, Popovi\u0107 et al. (2011) evaluate translation quality by calculating the similarity scores of source and target (translated) sentences without using a reference translation, based on morphemes, 4-gram POS and lexicon probabilities. Dahlmeier et al. (2011) developed the TESLA evaluation metrics, combining the synonyms of bilingual phrase tables and POS information in the matching task. Other similar work using POS information includes (Gim\u00e9nez and M\u00e1rquez, 2007) and (Popovic and Ney, 2007).", "cite_spans": [ { "start": 420, "end": 441, "text": "Popovi\u0107 et al. (2011)", "ref_id": "BIBREF114" }, { "start": 653, "end": 676, "text": "Dahlmeier et al. (2011)", "ref_id": "BIBREF36" }, { "start": 858, "end": 885, "text": "(Gim\u00e9nez and M\u00e1rquez, 2007;", "ref_id": "BIBREF49" }, { "start": 886, "end": 908, "text": "Popovic and Ney, 2007;", "ref_id": "BIBREF115" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Similarity", "sec_num": "3.2.1" }, { "text": "In linguistics, a phrase may refer to any group of words that forms a constituent, and so functions as a single unit in the syntax of a sentence. To measure an MT system's performance in translating new text types, and the ways in which the system itself could be extended to deal with them, Povlsen et al. (1998) carried out a study of an English-to-Danish MT system. The syntactic constructions are explored with more complex linguistic knowledge, such as the identification of fronted adverbial subordinate clauses and prepositional phrases. 
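As a toy illustration of how the POS information just described can enter a metric, the following sketch compares POS n-gram overlap between a hypothesis and a reference using NLTK's off-the-shelf tagger. This is our own simplification for exposition, not TESLA or any other published metric.

```python
# pip install nltk; then run once: nltk.download("averaged_perceptron_tagger")
from collections import Counter
import nltk

def pos_ngram_f1(hypothesis: str, reference: str, n: int = 2) -> float:
    """F1 over POS n-grams: a crude proxy for structural similarity that
    rewards matching grammatical patterns rather than exact words."""
    hyp_tags = [tag for _, tag in nltk.pos_tag(hypothesis.split())]
    ref_tags = [tag for _, tag in nltk.pos_tag(reference.split())]
    hyp_ng = Counter(tuple(hyp_tags[i:i + n]) for i in range(len(hyp_tags) - n + 1))
    ref_ng = Counter(tuple(ref_tags[i:i + n]) for i in range(len(ref_tags) - n + 1))
    overlap = sum((hyp_ng & ref_ng).values())
    if not overlap:
        return 0.0
    p = overlap / sum(hyp_ng.values())
    r = overlap / sum(ref_ng.values())
    return 2 * p * r / (p + r)

# Different words, identical DT-NN-VBD-DT-NN pattern -> F1 = 1.0
print(pos_ngram_f1("the dog chased a ball", "a cat chased the mouse"))
```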
Assuming that similar grammatical structures should occur in both source and translation, Avramidis et al. (2011) perform evaluation on source (German) and target (English) sentences employing features such as sentence length ratio, unknown words, and phrase counts, including noun phrases, verb phrases and prepositional phrases. Other similar work using phrase similarity includes (Li et al., 2012), which uses noun phrases and verb phrases from chunking; (Echizen-ya and Araki, 2010), which uses only noun phrase chunking in automatic evaluation; and (Han et al., 2013c), which designs a universal phrase tagset for French-to-English MT evaluation.", "cite_spans": [ { "start": 300, "end": 321, "text": "Povlsen et al. (1998)", "ref_id": "BIBREF118" }, { "start": 1119, "end": 1137, "text": "(Han et al., 2013c", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Similarity", "sec_num": "3.2.1" }, { "text": "Syntax is the study of the principles and processes by which sentences are constructed in particular languages. To address the overall goodness of a translated sentence's structure, Liu and Gildea (2005) employ constituent labels and head-modifier dependencies from a language parser as syntactic features for MT evaluation. They compute the similarity of dependency trees. Their experiments show that adding syntactic information can improve evaluation performance, especially for predicting the fluency of translation hypotheses. Other works that use syntactic information in evaluation include (Lo and Wu, 2011a) and (Lo et al., 2012), which use an automatic shallow parser, and the RED metric (Yu et al., 2014), which applies dependency trees.", "cite_spans": [ { "start": 183, "end": 204, "text": "Liu and Gildea (2005)", "ref_id": "BIBREF94" }, { "start": 597, "end": 615, "text": "(Lo and Wu, 2011a)", "ref_id": "BIBREF96" }, { "start": 620, "end": 637, "text": "(Lo et al., 2012)", "ref_id": "BIBREF95" }, { "start": 694, "end": 710, "text": "(Yu et al., 2014", "ref_id": "BIBREF140" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Similarity", "sec_num": "3.2.1" }, { "text": "In contrast to syntactic information, which captures overall grammaticality or sentence structure similarity, the semantic similarity of the automatic translations and the source sentences (or references) can be measured by employing semantic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "To capture the semantic equivalence of sentences or text fragments, named entity knowledge is taken from the literature on named-entity recognition, which aims to identify and classify atomic elements in a text into different entity categories (Marsh and Perzanowski, 1998; Guo et al., 2009). The most commonly used entity categories include the names of persons, locations, organizations and times (Han et al., 2013a). In the MEDAR 2011 evaluation campaign, one baseline system based on Moses utilized the OpenNLP toolkit to perform named entity detection, in addition to other packages. Low performance on named entities caused a drop in fluency and adequacy. In the quality estimation task of WMT 2012, Buck (2012) introduced features including named entities, in addition to a discriminative word lexicon, neural networks, back-off behavior (Raybaud et al., 2011) and edit distance. 
Experiments on individual features showed that, from the perspective of increasing the correlation score with human judgments, the named entity feature contributed the most to the overall performance, in comparison to the other features.", "cite_spans": [ { "start": 244, "end": 273, "text": "(Marsh and Perzanowski, 1998;", "ref_id": "BIBREF102" }, { "start": 274, "end": 291, "text": "Guo et al., 2009)", "ref_id": "BIBREF55" }, { "start": 399, "end": 418, "text": "(Han et al., 2013a)", "ref_id": "BIBREF61" }, { "start": 742, "end": 754, "text": "(Buck, 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "Multi-word expressions (MWEs) pose obstacles for MT models due to their complexity of presentation as well as their idiomaticity (Sag et al., 2002; Han et al., 2020b,a; Han et al., 2021). To investigate the effect of MWEs in MT evaluation (MTE), Salehi et al. (2015) focused on the compositionality of noun compounds. They first identify the noun compounds in the system outputs and references with the Stanford parser. The matching scores of the system outputs and reference sentences are then recalculated and added to the TESLA metric, taking into account the predicted compositionality of the identified noun compound phrases. Our own recent work in this area (Han et al., 2020a) provides an extensive investigation into various MT errors caused by MWEs.", "cite_spans": [ { "start": 122, "end": 140, "text": "(Sag et al., 2002;", "ref_id": "BIBREF122" }, { "start": 141, "end": 161, "text": "Han et al., 2020b,a;", "ref_id": null }, { "start": 162, "end": 179, "text": "Han et al., 2021)", "ref_id": "BIBREF66" }, { "start": 240, "end": 260, "text": "Salehi et al. (2015)", "ref_id": "BIBREF123" }, { "start": 648, "end": 667, "text": "(Han et al., 2020a)", "ref_id": "BIBREF64" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "Synonyms are words with the same or close meanings. One of the most widely used synonym databases in the NLP literature is WordNet (Miller et al., 1990), an English lexical database grouping English words into sets of synonyms. WordNet classifies words mainly into four kinds of POS categories: noun, verb, adjective, and adverb, excluding prepositions, determiners, etc. Synonymous words or phrases are organized into units called synsets, which form a hierarchical structure with words at different levels according to their semantic relations.", "cite_spans": [ { "start": 131, "end": 152, "text": "(Miller et al., 1990)", "ref_id": "BIBREF105" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "Textual entailment is usually used as a directional relation between text fragments. If the truth of one text fragment TA follows from another text fragment TB, then there is a directional relation between TA and TB (TB \u21d2 TA). Instead of pure logical or mathematical entailment, textual entailment in natural language processing (NLP) is usually performed with a relaxed or loose definition (Dagan et al., 2006). For instance, if, according to text fragment TB, it can be inferred that text fragment TA is most likely to be true, then the relationship TB \u21d2 TA is established. Since the relation is directional, the inverse inference (TA \u21d2 TB) is not guaranteed to be true (Dagan and Glickman, 2004). 
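To ground the WordNet-based synonym matching described earlier in this section, here is a small, illustrative sketch using NLTK's WordNet interface (our own example, not METEOR's or TESLA's actual matcher): two words count as a synonym match if they share any synset.

```python
# pip install nltk; then run once: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def are_synonyms(word_a: str, word_b: str) -> bool:
    """Treat two words as a synonym match if they share any WordNet synset."""
    return bool(set(wn.synsets(word_a)) & set(wn.synsets(word_b)))

def synonym_matches(hypothesis: str, reference: str) -> int:
    """Count hypothesis words matching a reference word exactly or as a synonym."""
    ref_words = reference.split()
    matched = 0
    for hyp_word in hypothesis.split():
        if hyp_word in ref_words or any(are_synonyms(hyp_word, r) for r in ref_words):
            matched += 1
    return matched

print(are_synonyms("buy", "purchase"))  # True: both belong to the synset buy.v.01
print(synonym_matches("he will purchase the car", "he will buy the car"))  # 5
```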
Castillo and Estrella (2012) present a new approach for MT evaluation based on the task of \"Semantic Textual Similarity\". This problem is addressed using a textual entailment engine based on WordNet semantic features.", "cite_spans": [ { "start": 388, "end": 408, "text": "(Dagan et al., 2006)", "ref_id": "BIBREF35" }, { "start": 686, "end": 712, "text": "(Dagan and Glickman, 2004)", "ref_id": "BIBREF34" }, { "start": 715, "end": 743, "text": "Castillo and Estrella (2012)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "Paraphrasing restates the meaning of a passage of text using other words; it can be seen as bidirectional textual entailment (Androutsopoulos and Malakasiotis, 2010). In contrast to metaphrase, i.e. literal, word-by-word and line-by-line translation, a paraphrase represents a dynamic equivalent. Further linguistic knowledge of paraphrases is introduced in the works of (McKeown, 1979; Meteer and Shaked, 1988; Barzilay and Lee, 2003). Snover et al. (2006) describe the evaluation metric TER-Plus (TERp), in which sequences of words in the reference are considered to be paraphrases of a sequence of words in the hypothesis if that phrase pair occurs in the TERp phrase table.", "cite_spans": [ { "start": 409, "end": 424, "text": "(McKeown, 1979;", "ref_id": "BIBREF103" }, { "start": 425, "end": 449, "text": "Meteer and Shaked, 1988;", "ref_id": "BIBREF104" }, { "start": 450, "end": 473, "text": "Barzilay and Lee, 2003)", "ref_id": "BIBREF8" }, { "start": 476, "end": 496, "text": "Snover et al. (2006)", "ref_id": "BIBREF124" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "Semantic roles are employed by researchers as linguistic features in MT evaluation. To utilize semantic roles, sentences are usually first shallow parsed and entity tagged. Then the semantic roles are used to specify the arguments and adjuncts that occur in both the candidate translation and the reference translation. For instance, the semantic roles introduced by Gim\u00e9nez and M\u00e1rquez (2007); Gim\u00e9nez and M\u00e1rquez (2008) include causative agent, adverbial adjunct, directional adjunct, negation marker, predication adjunct, etc. In a further development, Lo and Wu (2011a,b) presented the MEANT metric, designed to capture predicate-argument relations as structural relations in semantic frames, which are not reflected in the flat semantic role label features in the work of Gim\u00e9nez and M\u00e1rquez (2007). Furthermore, instead of using uniform weights, Lo et al. (2012) weight the different types of semantic roles as empirically determined by their relative importance to the adequate preservation of meaning. Generally, semantic roles account for the semantic structure of a segment and have proved effective in assessing the adequacy of translation.", "cite_spans": [ { "start": 363, "end": 389, "text": "Gim\u00e9nez and M\u00e1rquez (2007)", "ref_id": "BIBREF49" }, { "start": 392, "end": 417, "text": "Gim\u00e9ne and M\u00e1rquez (2008)", "ref_id": "BIBREF48" }, { "start": 556, "end": 575, "text": "Lo and Wu (2011a,b)", "ref_id": null }, { "start": 780, "end": 806, "text": "Gim\u00e9nez and M\u00e1rquez (2007)", "ref_id": "BIBREF49" }, { "start": 856, "end": 872, "text": "Lo et al. 
(2012)", "ref_id": "BIBREF95" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "Language models are also utilized by MT evaluation researchers. A statistical language model usually assigns a probability to a sequence of words by means of a probability distribution. Gamon et al. (2005) propose the LM-SVM, language model, and support vector machine methods investigating the possibility of evaluating MT quality and fluency in the absence of reference translations. They evaluate the performance of the system when used as a classifier for identifying highly dis-fluent and ill-formed sentences. Generally, the linguistic features mentioned above, including both syntactic and semantic features, are combined in two ways, either by following a machine learning approach (Albrecht and Hwa, 2007; Leusch and Ney, 2009) , or trying to combine a wide variety of metrics in a more simple and straightforward way, such as (Gim\u00e9ne and M\u00e1rquez, 2008; Specia and Gim\u00e9nez, 2010; Comelles et al., 2012) .", "cite_spans": [ { "start": 186, "end": 205, "text": "Gamon et al. (2005)", "ref_id": "BIBREF46" }, { "start": 690, "end": 714, "text": "(Albrecht and Hwa, 2007;", "ref_id": "BIBREF0" }, { "start": 715, "end": 736, "text": "Leusch and Ney, 2009)", "ref_id": "BIBREF88" }, { "start": 836, "end": 862, "text": "(Gim\u00e9ne and M\u00e1rquez, 2008;", "ref_id": "BIBREF48" }, { "start": 863, "end": 888, "text": "Specia and Gim\u00e9nez, 2010;", "ref_id": "BIBREF125" }, { "start": 889, "end": 911, "text": "Comelles et al., 2012)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Similarity", "sec_num": "3.2.2" }, { "text": "We briefly list some works that have applied deep learning and neural networks for TQA which are promising for further exploration. For instance, Guzm\u00e1n et al. (2015) ; Guzmn et al. (2017) use neural networks (NNs) for TQA for pair wise modeling to choose the best hypothetical translation by comparing candidate translations with a reference, integrating syntactic and semantic information into NNs. Gupta et al. (2015b) proposed LSTM networks based on dense vectors to conduct TQA, while Ma et al. (2016) designed a new metric based on bi-directional LSTMs, which is similar to the work of Guzm\u00e1n et al. (2015) but with less complexity by allowing the evaluation of a single hypothesis with a reference, instead of a pairwise situation.", "cite_spans": [ { "start": 146, "end": 166, "text": "Guzm\u00e1n et al. (2015)", "ref_id": "BIBREF58" }, { "start": 169, "end": 188, "text": "Guzmn et al. (2017)", "ref_id": "BIBREF59" }, { "start": 401, "end": 421, "text": "Gupta et al. (2015b)", "ref_id": "BIBREF57" }, { "start": 490, "end": 506, "text": "Ma et al. (2016)", "ref_id": "BIBREF99" }, { "start": 592, "end": 612, "text": "Guzm\u00e1n et al. (2015)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Networks for TQA", "sec_num": "3.3" }, { "text": "In this section, we examine several topics that can be considered for further development of MT evaluation fields.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Perspectives", "sec_num": "4" }, { "text": "The first aspect is that development should involve both n-gram word surface matching and the deeper linguistic features. 
Because natural languages are expressive and ambiguous at different levels (Gim\u00e9nez and M\u00e1rquez, 2007), simple n-gram word surface similarity based metrics limit their scope to the lexical dimension and are not sufficient to determine whether two sentences convey the same meaning. For instance, (Callison-Burch et al., 2006a) and (Koehn and Monz, 2006b) report that simple n-gram matching metrics tend to favor automatic statistical MT systems. If the evaluated systems belong to different types, including rule-based, human-aided, and statistical systems, then simple n-gram matching metrics, such as BLEU, produce rankings that strongly disagree with those of the human assessors. So deeper linguistic features are very important in the MT evaluation procedure.", "cite_spans": [ { "start": 197, "end": 224, "text": "(Gim\u00e9nez and M\u00e1rquez, 2007)", "ref_id": "BIBREF49" }, { "start": 419, "end": 449, "text": "(Callison-Burch et al., 2006a)", "ref_id": "BIBREF25" }, { "start": 454, "end": 477, "text": "(Koehn and Monz, 2006b)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Perspectives", "sec_num": "4" }, { "text": "However, inappropriate or excessive utilization of linguistic features will result in limited popularity of the measures incorporating them. In the future, how to utilize linguistic features in a more accurate, flexible and simplified way will be one challenge in MT evaluation. Furthermore, MT evaluation based on semantic similarity is more reasonable and comes closer to human judgments, so it should receive more attention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Perspectives", "sec_num": "4" }, { "text": "The second major aspect is that MT quality estimation (QE) tasks differ from traditional MT evaluation in several ways, such as extracting reference-independent features from input sentences and their translations, obtaining quality scores based on models produced from training data, predicting the quality of an unseen translated text at system run-time, filtering out sentences which are not good enough for post-processing, and selecting the best translation among multiple systems. With so many challenges, the topic will continue to attract many researchers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Perspectives", "sec_num": "4" }, { "text": "Thirdly, some advanced or challenging technologies that can be further applied to MT evaluation include deep learning models (Gupta et al., 2015a; Zhang and Zong, 2015), semantic logic forms, and decipherment models.", "cite_spans": [ { "start": 129, "end": 150, "text": "(Gupta et al., 2015a;", "ref_id": "BIBREF56" }, { "start": 151, "end": 172, "text": "Zhang and Zong, 2015)", "ref_id": "BIBREF141" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Perspectives", "sec_num": "4" }, { "text": "In this paper we have presented a survey of the state of the art in translation quality assessment methodologies from the viewpoints of both manual judgements and automated methods. This work differs from conventional MT evaluation review work through its concise structure and its inclusion of some recently published work and references. Due to space limitations, in the main content we focused on conventional human assessment methods and automated evaluation metrics that rely on reference translations. 
We also list some interesting related work in the appendices, such as quality estimation in MT, where the reference translation is not presented during the estimation, and the methodology for evaluating TQA methods themselves. This arrangement does not affect the overall understanding of this paper as a self-contained overview. We believe this work can help both MT and NLP researchers and practitioners in identifying appropriate quality assessment methods for their work. We also expect this work might shed some light on evaluation methodologies in other NLP tasks, due to the similarities they share, such as text summarization (Mani, 2001; Bhandari et al., 2020), natural language understanding (Ruder et al., 2021), natural language generation (Gehrmann et al., 2021), as well as programming language (code) generation (Liguori et al., 2021). To explore this problem, Koehn (2004) presents an investigation of statistical significance testing for MT evaluation. The bootstrap re-sampling method is used to compute the statistical significance intervals for evaluation metrics on small test sets. Statistical significance usually refers to two separate notions: one is the p-value, the probability that the observed data will occur by chance under a given single null hypothesis; the other is the \"Type I\" error rate of a statistical hypothesis test, which is also called a \"false positive\" and is measured by the probability of incorrectly rejecting a given null hypothesis in favour of a second alternative hypothesis (Hald, 1998).", "cite_spans": [ { "start": 1162, "end": 1174, "text": "(Mani, 2001;", "ref_id": "BIBREF101" }, { "start": 1175, "end": 1197, "text": "Bhandari et al., 2020)", "ref_id": "BIBREF9" }, { "start": 1231, "end": 1251, "text": "(Ruder et al., 2021)", "ref_id": "BIBREF121" }, { "start": 1282, "end": 1305, "text": "(Gehrmann et al., 2021)", "ref_id": "BIBREF47" }, { "start": 1358, "end": 1380, "text": "(Liguori et al., 2021)", "ref_id": null }, { "start": 1405, "end": 1417, "text": "Koehn (2004)", "ref_id": "BIBREF76" }, { "start": 2058, "end": 2070, "text": "(Hald, 1998)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Since human judgments are usually trusted as the gold standard that automatic MT evaluation metrics should try to approach, the reliability and coherence of human judgments are very important. Cohen's kappa agreement coefficient is one of the most commonly used evaluation methods (Cohen, 1960). For the problem of nominal scale agreement between two judges, there are two relevant quantities, p_0 and p_c: p_0 is the proportion of units on which the judges agreed, and p_c is the proportion of units for which agreement is expected by chance. 
{ "text": "Since human judgments are usually trusted as the gold standard that automatic MT evaluation metrics should try to approach, the reliability and coherence of human judgments are very important. Cohen's kappa agreement coefficient is one of the most commonly used evaluation methods (Cohen, 1960) . For the problem of nominal scale agreement between two judges, there are two relevant quantities, p 0 and p c : p 0 is the proportion of units on which the judges agreed, and p c is the proportion of units for which agreement is expected by chance. The coefficient k is simply the proportion of chance-expected disagreements which do not occur, or alternatively, it is the proportion of agreement remaining after chance agreement is removed from consideration:", "cite_spans": [ { "start": 281, "end": 294, "text": "(Cohen, 1960)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "A.2: Evaluating Human Judgment", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k = \frac{p_0 - p_c}{1 - p_c}", "eq_num": "(2)" } ], "section": "A.2: Evaluating Human Judgment", "sec_num": null }, { "text": "where p_0 - p_c represents the proportion of the cases in which beyond-chance agreement occurs and is the numerator of the coefficient (Landis and Koch, 1977) .", "cite_spans": [ { "start": 147, "end": 158, "text": "Koch, 1977)", "ref_id": "BIBREF82" } ], "ref_spans": [], "eq_spans": [], "section": "A.2: Evaluating Human Judgment", "sec_num": null },
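As a worked illustration of Equation (2), the sketch below computes Cohen's kappa from two judges' parallel label sequences; the five adequacy labels are invented for the example and do not come from any cited study.

```python
from collections import Counter

def cohens_kappa(judge1, judge2):
    """k = (p0 - pc) / (1 - pc) for two judges' nominal labels."""
    assert len(judge1) == len(judge2)
    n = len(judge1)
    # p0: observed proportion of units on which the judges agree.
    p0 = sum(a == b for a, b in zip(judge1, judge2)) / n
    # pc: agreement expected by chance, from each judge's label distribution.
    c1, c2 = Counter(judge1), Counter(judge2)
    pc = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p0 - pc) / (1 - pc)

# Two judges rating the same five segments on a 1-5 adequacy scale.
print(cohens_kappa([5, 4, 4, 2, 1], [5, 4, 3, 2, 1]))  # p0=0.8, pc=0.2, k=0.75
```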
{ "text": "In this section, we introduce three correlation coefficient algorithms that have been widely used at recent WMT workshops to measure the closeness of automatic evaluations and manual judgments. The choice of correlation algorithm depends on whether scores or ranks are used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3: Correlating Manual and Automatic Score", "sec_num": null }, { "text": "Pearson's correlation coefficient (Pearson, 1900) is commonly represented by the Greek letter \u03c1. The correlation between random variables X and Y, denoted \u03c1 XY, is measured as follows (Montgomery and Runger, 2003) .", "cite_spans": [ { "start": 34, "end": 49, "text": "(Pearson, 1900)", "ref_id": "BIBREF113" }, { "start": 185, "end": 214, "text": "(Montgomery and Runger, 2003)", "ref_id": "BIBREF106" } ], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "\rho_{XY} = \frac{\mathrm{cov}(X, Y)}{\sqrt{V(X)V(Y)}} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "Since the standard deviations of X and Y are positive (\u03c3 X > 0 and \u03c3 Y > 0), the correlation between X and Y is positive, negative or zero exactly when the covariance \u03c3 XY is positive, negative or zero, respectively. Given a sample of paired data (x i , y i ), i = 1, ..., n, the Pearson correlation coefficient is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "\rho_{XY} = \frac{\sum_{i=1}^{n} (x_i - \mu_x)(y_i - \mu_y)}{\sqrt{\sum_{i=1}^{n} (x_i - \mu_x)^2} \sqrt{\sum_{i=1}^{n} (y_i - \mu_y)^2}} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "where \u00b5 x and \u00b5 y specify the means of the discrete random variables X and Y respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pearson Correlation", "sec_num": null }, { "text": "The Spearman rank correlation coefficient, a simplified version of the Pearson correlation coefficient, is another algorithm used to measure the correlation between automatic evaluations and manual judgments, e.g. in the WMT metrics tasks (Callison-Burch et al., 2008, 2009, 2010). When there are no ties, the Spearman rank correlation coefficient, sometimes denoted r s , is calculated as:", "cite_spans": [ { "start": 213, "end": 241, "text": "(Callison-Burch et al., 2008", "ref_id": "BIBREF20" }, { "start": 242, "end": 248, "text": ", 2009", "ref_id": "BIBREF111" }, { "start": 249, "end": 255, "text": ", 2010", "ref_id": "BIBREF77" } ], "ref_spans": [], "eq_spans": [], "section": "Spearman rank Correlation", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_{s_{\varphi(XY)}} = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}", "eq_num": "(5)" } ], "section": "Spearman rank Correlation", "sec_num": null }, { "text": "where d i = x i - y i is the difference between the two corresponding rank variables in X = {x 1 , x 2 , ..., x n } and Y = {y 1 , y 2 , ..., y n } describing the system \u03d5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spearman rank Correlation", "sec_num": null }, { "text": "Kendall's \u03c4 (Kendall, 1938) has been used in recent years to measure the correlation between an automatic order and the reference order (Callison-Burch et al., 2010, 2012). It is defined as: \tau = \frac{\text{num concordant pairs} - \text{num discordant pairs}}{\text{total pairs}} (6) The latest version of Kendall's \u03c4 is introduced in (Kendall and Gibbons, 1990) . Lebanon and Lafferty (2002) give an overview of Kendall's \u03c4, showing its application in calculating how much system orders differ from the reference order. More concretely, Lapata (2003) proposed the use of Kendall's \u03c4 , a measure of rank correlation, to estimate the distance between a system-generated and a human-generated gold-standard order.", "cite_spans": [ { "start": 12, "end": 27, "text": "(Kendall, 1938)", "ref_id": "BIBREF73" }, { "start": 122, "end": 150, "text": "(Callison-Burch et al., 2010", "ref_id": "BIBREF21" }, { "start": 151, "end": 157, "text": ", 2012", "ref_id": "BIBREF17" }, { "start": 293, "end": 320, "text": "(Kendall and Gibbons, 1990)", "ref_id": "BIBREF74" }, { "start": 323, "end": 350, "text": "Lebanon and Lafferty (2002)", "ref_id": "BIBREF87" }, { "start": 505, "end": 518, "text": "Lapata (2003)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Kendall's \u03c4", "sec_num": null },
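For readers who want to reproduce these three coefficients, the sketch below computes them with SciPy on a pair of invented segment-level score vectors (an assumed example; any real metric scores and human judgments can be substituted):

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

# Hypothetical segment-level scores: one automatic metric, one human.
metric_scores = [0.62, 0.41, 0.75, 0.33, 0.58]
human_scores = [0.70, 0.45, 0.80, 0.30, 0.50]

r, _ = pearsonr(metric_scores, human_scores)      # linear correlation of raw scores
rho, _ = spearmanr(metric_scores, human_scores)   # Pearson applied to the ranks
tau, _ = kendalltau(metric_scores, human_scores)  # concordant vs. discordant pairs
print(f"Pearson {r:.3f}, Spearman {rho:.3f}, Kendall {tau:.3f}")
```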
{ "text": "Several researchers have compared different types of metrics. For example, Callison-Burch et al. (2006b , 2007b) and Lavie (2013) showed, through qualitative analysis on standard data sets, that BLEU cannot reflect MT system performance well in many situations, i.e. a higher BLEU score does not guarantee better translation output. Some recently developed metrics, such as nLEPOR and SentBLEU-Moses, can perform much better than the traditional ones, especially on challenging sentence-level evaluation, though they are not yet popular (Graham et al., 2015) . Such comparisons will help MT researchers to select the appropriate metrics for specialist tasks.", "cite_spans": [ { "start": 106, "end": 134, "text": "Callison-Burch et al. (2006b", "ref_id": "BIBREF26" }, { "start": 135, "end": 142, "text": ", 2007b", "ref_id": "BIBREF19" }, { "start": 583, "end": 604, "text": "(Graham et al., 2015;", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "A.4: Metrics Comparison", "sec_num": null }, { "text": "In past years, some MT evaluation methods that do not use manually created gold reference translations have been proposed. These are referred to as \"Quality Estimation (QE)\". Some of the related works have already been introduced in previous sections. The most recent quality estimation tasks can be found at WMT12 to WMT20 (Callison-Burch et al., 2012; Bojar et al., 2013 , 2014 , 2015 ; Specia et al., 2018; Fonseca et al., 2019) . These tasks defined a novel evaluation metric, DeltaAvg, that provides some advantages over the traditional ranking metrics. The DeltaAvg metric assumes that the reference test set has a number associated with each entry that represents its extrinsic value. Given these values, the metric does not need an explicit reference ranking, the way that Spearman ranking correlation does. The goal of the DeltaAvg metric is to measure how valuable a proposed ranking is according to the extrinsic values associated with the test entries.", "cite_spans": [ { "start": 319, "end": 348, "text": "(Callison-Burch et al., 2012;", "ref_id": "BIBREF22" }, { "start": 349, "end": 367, "text": "Bojar et al., 2013", "ref_id": "BIBREF10" }, { "start": 368, "end": 388, "text": "Bojar et al., , 2014", "ref_id": "BIBREF11" }, { "start": 389, "end": 409, "text": "Bojar et al., , 2015", "ref_id": "BIBREF15" }, { "start": 410, "end": 430, "text": "Specia et al., 2018;", "ref_id": "BIBREF127" }, { "start": 431, "end": 452, "text": "Fonseca et al., 2019;", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Appendix B: MT QE", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{DeltaAvg}_{v}[n] = \frac{\sum_{k=1}^{n-1} V(S_{1,k})}{n - 1} - V(S)", "eq_num": "(7)" } ], "section": "Appendix B: MT QE", "sec_num": null }, { "text": "For scoring, two evaluation metrics were used that have traditionally been used for measuring performance in regression tasks: Mean Absolute Error (MAE) as a primary metric, and Root Mean Squared Error (RMSE) as a secondary metric. For a given test set S with entries s i , 1 \u2264 i \u2264 |S|, H(s i ) is the proposed score for entry s i (hypothesis), and V (s i ) is the reference value for entry s i (gold-standard value).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B: MT QE", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{MAE} = \frac{\sum_{i=1}^{N} |H(s_i) - V(s_i)|}{N} \quad (8) \qquad \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} (H(s_i) - V(s_i))^2}{N}}", "eq_num": "(9)" } ], "section": "MAE =", "sec_num": null }, { "text": "where N = |S|. Both these metrics are non-parametric, automatic and deterministic (and therefore consistent), and extrinsically interpretable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAE =", "sec_num": null },
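The following is a minimal sketch of Equations (7)-(9), assuming hyp holds the proposed scores H(s_i) and gold the reference values V(s_i); the DeltaAvg quantile handling is deliberately simplified (entries beyond a whole quantile are ignored), so it approximates rather than reproduces the official shared-task implementation.

```python
import math

def mae(hyp, gold):
    return sum(abs(h - v) for h, v in zip(hyp, gold)) / len(gold)

def rmse(hyp, gold):
    return math.sqrt(sum((h - v) ** 2 for h, v in zip(hyp, gold)) / len(gold))

def delta_avg(hyp, gold, n=2):
    """DeltaAvg[n]: sort entries by the proposed score, split into n quantiles,
    and compare the average gold value of the top k quantiles (k = 1..n-1)
    against the overall average V(S)."""
    order = sorted(range(len(hyp)), key=lambda i: hyp[i], reverse=True)
    values = [gold[i] for i in order]
    avg_all = sum(values) / len(values)
    q = len(values) // n  # simplified quantile size
    tops = [sum(values[:k * q]) / (k * q) for k in range(1, n)]
    return sum(tops) / (n - 1) - avg_all
```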
{ "text": "Further reading on MT QE includes the comparison between MT evaluation and QE by Specia et al. (2010) and the QE framework QuEst; the weakly supervised approaches for quality estimation and the analysis of the limitations of supervised QE systems (Vogel, 2013, 2014); unsupervised QE models; and the recent shared tasks on QE (Fonseca et al., 2019) .", "cite_spans": [ { "start": 76, "end": 99, "text": "QE Specia et al. (2010)", "ref_id": null }, { "start": 245, "end": 263, "text": "Vogel, 2013, 2014)", "ref_id": null }, { "start": 325, "end": 347, "text": "(Fonseca et al., 2019;", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "MAE =", "sec_num": null }, { "text": "In very recent years, the two shared tasks, i.e. MT quality estimation and traditional MT evaluation metrics, have begun to integrate with each other and benefit from each other's knowledge. For instance, in the WMT2019 shared task, 10 reference-less evaluation metrics were also used for the QE task \"QE as a Metric\" (Ma et al., 2019) .", "cite_spans": [ { "start": 327, "end": 344, "text": "(Ma et al., 2019)", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "MAE =", "sec_num": null }, { "text": "where c is the total length of the candidate translation, and r refers to the sum of the effective reference sentence lengths in the corpus. Below is from the NIST metric, followed by F-measure, METEOR and LEPOR: where, in our own metric LEPOR and its variations, nLEPOR (n-gram precision and recall LEPOR) and hLEPOR (harmonic LEPOR), P and R stand for precision and recall, LP for length penalty, NPosPenal for n-gram position difference penalty, and HPR for the harmonic mean of precision and recall, respectively (Han et al., 2012 , 2013b ; Han, 2014) .", "cite_spans": [ { "start": 492, "end": 509, "text": "(Han et al., 2012", "ref_id": "BIBREF68" }, { "start": 510, "end": 530, "text": "(Han et al., , 2013b", "ref_id": "BIBREF67" }, { "start": 531, "end": 541, "text": "Han, 2014;", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "MAE =", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{Info} = \log_2 \left( \frac{\#\,\text{occurrences of } w_1, \cdots, w_{n-1}}{\#\,\text{occurrences of } w_1, \cdots, w_n} \right)", "eq_num": "(18)" } ], "section": "MAE =", "sec_num": null }, { "text": "authors GJ and AS in alphabetic order", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.ldc.upenn.edu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.darpa.mil", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We appreciate the comments from Derek F. Wong, editing help from Ying Shi (Angela), and the anonymous reviewers for their valuable reviews and feedback. The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. 
The input of Alan Smeaton is part-funded by Science Foundation Ireland under grant number SFI/12/RC/2289 (Insight Centre).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Appendix A: Evaluating TQA", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendices", "sec_num": null }, { "text": "If different MT systems produce translations of different quality on a dataset, how can we ensure that the systems genuinely differ in quality?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1: Statistical Significance", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A re-examination of machine learning approaches for sentence-level mt evaluation", "authors": [ { "first": "J", "middle": [], "last": "Albrecht", "suffix": "" }, { "first": "R", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Albrecht and R. Hwa. 2007. A re-examination of machine learning approaches for sentence-level mt evaluation. In Proceedings of the 45th Annual Meet- ing of the ACL, Prague, Czech Republic.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A survey of paraphrasing and textual entailment methods", "authors": [ { "first": "Jon", "middle": [], "last": "Androutsopoulos", "suffix": "" }, { "first": "Prodromos", "middle": [], "last": "Malakasiotis", "suffix": "" } ], "year": 2010, "venue": "Journal of Artificial Intelligence Research", "volume": "38", "issue": "", "pages": "135--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon Androutsopoulos and Prodromos Malakasiotis. 2010. A survey of paraphrasing and textual entail- ment methods. Journal of Artificial Intelligence Re- search, 38:135-187.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Computers and Translation: A translator's guide-Chap8 Why translation is difficult for computers", "authors": [ { "first": "D", "middle": [], "last": "Arnold", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Arnold. 2003. Computers and Translation: A trans- lator's guide-Chap8 Why translation is difficult for computers. Benjamins Translation Library.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Evaluate with confidence estimation: Machine ranking of translation outputs using grammatical features", "authors": [ { "first": "Eleftherios", "middle": [], "last": "Avramidis", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Popovic", "suffix": "" }, { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Aljoscha", "middle": [], "last": "Burchardt", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eleftherios Avramidis, Maja Popovic, David Vilar, and Aljoscha Burchardt. 2011. Evaluate with confidence estimation: Machine ranking of translation outputs using grammatical features. 
In Proceedings of WMT 2011.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL 2005.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Evaluation metrics for generation", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Whittaker", "suffix": "" } ], "year": 2000, "venue": "Proceedings of INLG", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore, Owen Rambow, and Steven Whit- taker. 2000. Evaluation metrics for generation. In Proceedings of INLG 2000.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Proceedings of the Fifth Conference on Machine Translation", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Biesialska", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Joanis", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ljube\u0161i\u0107", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Morishita", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "Toshiaki", "middle": [], "last": "Nakazawa", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Findings of the 2019 conference on machine translation (WMT19)", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "1--61", "other_ids": { "DOI": [ "10.18653/v1/W19-5301" ] }, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine trans- lation (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning to paraphrase: an unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proceedings of NAACL 2003.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Reevaluating evaluation in text summarization", "authors": [ { "first": "Manik", "middle": [], "last": "Bhandari", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Narayan Gour", "suffix": "" }, { "first": "Atabak", "middle": [], "last": "Ashfaq", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "9347--9359", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.751" ] }, "num": null, "urls": [], "raw_text": "Manik Bhandari, Pranav Narayan Gour, Atabak Ash- faq, Pengfei Liu, and Graham Neubig. 2020. Re- evaluating evaluation in text summarization. In Pro- ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 9347-9359, Online. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Findings of the 2013 workshop on statistical machine translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2013, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 workshop on statistical machine translation. In Proceedings of WMT 2013.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Findings of the 2014 workshop on statistical machine translation", "authors": [ { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Leveling", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Herve", "middle": [], "last": "Saint-Amand", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Ale\u0161", "middle": [], "last": "Tamchyna", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "12--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ale\u0161 Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, Baltimore, Maryland, USA. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Findings of the 2017 conference on machine translation (WMT17)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "169--214", "other_ids": { "DOI": [ "10.18653/v1/W17-4717" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Findings of the 2018 conference on machine translation (wmt18)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation", "volume": "2", "issue": "", "pages": "272--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Find- ings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Pa- pers, pages 272-307, Belgium, Brussels. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Association for Computational Linguistics", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Kamran", "suffix": "" }, { "first": "Milo\u0161", "middle": [], "last": "Stanojevi\u0107", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "2", "issue": "", "pages": "199--231", "other_ids": { "DOI": [ "10.18653/v1/W16-2302" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Yvette Graham, Amir Kamran, and Milo\u0161 Stanojevi\u0107. 2016. Results of the WMT16 metrics shared task. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers, pages 199-231, Berlin, Germany. As- sociation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Findings of the 2015 workshop on statistical machine translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hokamp", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "1--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1-46, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. 
The math- ematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263- 311.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Black box features for the wmt 2012 quality estimation shared task", "authors": [ { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" } ], "year": 2012, "venue": "Proceedings of WMT 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Buck. 2012. Black box features for the wmt 2012 quality estimation shared task. In Proceedings of WMT 2012.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "meta-) evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007a. (meta-) evaluation of machine translation. In Pro- ceedings of WMT 2007.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "(meta-) evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "64--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007b. (meta-) evaluation of machine translation. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 64-71. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Further meta-evaluation of machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Fordyce", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2008, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further meta-evaluation of machine translation. 
In Proceedings of WMT 2008.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Kay", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Przybocki", "suffix": "" }, { "first": "Omar", "middle": [ "F" ], "last": "Zaridan", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zari- dan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for ma- chine translation. In Proceedings of the WMT 2010.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Findings of the 2012 workshop on statistical machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2012, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical ma- chine translation. In Proceedings of WMT 2012.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Findings of the 2009 workshop on statistical machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Schroeder", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 4th WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 workshop on statistical machine translation. In Pro- ceedings of the 4th WMT 2009.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Findings of the 2011 workshop on statistical machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Omar", "middle": [ "F" ], "last": "Zaridan", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar F. Zaridan. 2011. Findings of the 2011 workshop on statistical machine translation. 
In Pro- ceedings of WMT 2011.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improved statistical machine translation using paraphrases", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, and Miles Os- borne. 2006a. Improved statistical machine trans- lation using paraphrases. In Proceedings of HLT- NAACL 2006.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Re-evaluating the role of bleu in machine translation research", "authors": [ { "first": "Chris", "middle": [], "last": "Callison", "suffix": "" }, { "first": "-", "middle": [], "last": "Burch", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL 2006", "volume": "", "issue": "", "pages": "249--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006b. Re-evaluating the role of bleu in ma- chine translation research. In Proceedings of EACL 2006, volume 2006, pages 249-256.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An experiment in evaluating the quality of translation", "authors": [ { "first": "John", "middle": [ "B" ], "last": "Carroll", "suffix": "" } ], "year": 1966, "venue": "Mechanical Translation and Computational Linguistics", "volume": "9", "issue": "3-4", "pages": "67--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "John B. Carroll. 1966. An experiment in evaluating the quality of translation. Mechanical Translation and Computational Linguistics, 9(3-4):67-75.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Semantic textual similarity for MT evaluation", "authors": [ { "first": "Julio", "middle": [], "last": "Castillo", "suffix": "" }, { "first": "Paula", "middle": [], "last": "Estrella", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "52--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julio Castillo and Paula Estrella. 2012. Semantic tex- tual similarity for MT evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Trans- lation, pages 52-58, Montr\u00e9al, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "263--270", "other_ids": { "DOI": [ "10.3115/1219840.1219873" ] }, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Pro- ceedings of the 43rd Annual Meeting of the As- sociation for Computational Linguistics (ACL'05), pages 263-270, Ann Arbor, Michigan. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "KyungHyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. CoRR, abs/1409.1259.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Good applications for crummy machine translation", "authors": [ { "first": "Kenneth", "middle": [], "last": "Church", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Natural Language Processing Systems Evaluation Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Church and Eduard Hovy. 1991. Good ap- plications for crummy machine translation. In Pro- ceedings of the Natural Language Processing Sys- tems Evaluation Workshop.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "Jasob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and Psychological Measurement", "volume": "20", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jasob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):3746.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Verta: Linguistic features in mt evaluation", "authors": [ { "first": "Elisabet", "middle": [], "last": "Comelles", "suffix": "" }, { "first": "Jordi", "middle": [], "last": "Atserias", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Arranz", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Castell\u00f3n", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "3944--3950", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elisabet Comelles, Jordi Atserias, Victoria Arranz, and Irene Castell\u00f3n. 2012. Verta: Linguistic features in mt evaluation. In LREC, pages 3944-3950.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Probabilistic textual entailment: Generic applied modeling of language variability", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Glickman", "suffix": "" } ], "year": 2004, "venue": "Learning Methods for Text Understanding and Mining workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan and Oren Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of lan- guage variability. 
In Learning Methods for Text Un- derstanding and Mining workshop.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Machine Learning Challenges", "volume": "3944", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. Machine Learning Challenges:LNCS, 3944:177-190.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Tesla at wmt2011: Translation evaluation and tunable metric", "authors": [ { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Dahlmeier, Chang Liu, and Hwee Tou Ng. 2011. Tesla at wmt2011: Translation evaluation and tunable metric. In Proceedings of WMT 2011.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "HLT Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In HLT Proceedings.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Part 5: Machine translation evaluation", "authors": [ { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Etc", "middle": [ "Nitin" ], "last": "Madnani", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Dorr, Matt Snover, and etc. Nitin Madnani. 2009. Part 5: Machine translation evaluation. In Bonnie Dorr edited DARPA GALE program report.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Task-based evaluation for machine translation", "authors": [ { "first": "Jennifer", "middle": [ "B" ], "last": "Doyon", "suffix": "" }, { "first": "John", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "Kathryn", "middle": [ "B" ], "last": "Taylor", "suffix": "" } ], "year": 1999, "venue": "Proceedings of MT Summit 7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer B. Doyon, John S. White, and Kathryn B. Tay- lor. 1999. Task-based evaluation for machine trans- lation. In Proceedings of MT Summit 7.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Automatic evaluation method for machine translation using nounphrase chunking", "authors": [ { "first": "H", "middle": [], "last": "Echizen-Ya", "suffix": "" }, { "first": "K", "middle": [], "last": "Araki", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Echizen-ya and K. Araki. 2010. 
Automatic eval- uation method for machine translation using noun- phrase chunking. In Proceedings of the ACL 2010.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Overview of the iwslt 2005 evaluation campaign", "authors": [ { "first": "Matthias", "middle": [], "last": "Eck", "suffix": "" }, { "first": "Chiori", "middle": [], "last": "Hori", "suffix": "" } ], "year": 2005, "venue": "proceeding of International Workshop on Spoken Language Translation (IWSLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Eck and Chiori Hori. 2005. Overview of the iwslt 2005 evaluation campaign. In In proceeding of International Workshop on Spoken Language Trans- lation (IWSLT).", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Survey of machine translation evaluation", "authors": [], "year": null, "venue": "EuroMatrix Project Report, Statistical and Hybrid MT between All European Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Project EuroMatrix. 2007. 1.3: Survey of machine translation evaluation. In EuroMatrix Project Re- port, Statistical and Hybrid MT between All Euro- pean Languages, co-ordinator: Prof. Hans Uszkor- eit.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Overview of the iwslt 2011 evaluation campaign", "authors": [ { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "St\u00fcker", "suffix": "" } ], "year": 2011, "venue": "In proceeding of International Workshop on Spoken Language Translation (IWSLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcello Federico, Luisa Bentivogli, Michael Paul, and Sebastian St\u00fcker. 2011. Overview of the iwslt 2011 evaluation campaign. In In proceeding of In- ternational Workshop on Spoken Language Transla- tion (IWSLT).", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Unsupervised quality estimation for neural machine translation", "authors": [ { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Yankovskaya", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Nikolaos", "middle": [], "last": "Aletras", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "539--555", "other_ids": { "DOI": [ "10.1162/tacl_a_00330" ] }, "num": null, "urls": [], "raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. 
Transactions of the As- sociation for Computational Linguistics, 8:539-555.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Findings of the WMT 2019 shared tasks on quality estimation", "authors": [ { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Yankovskaya", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "", "middle": [], "last": "Federmann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W19-5401" ] }, "num": null, "urls": [], "raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Find- ings of the WMT 2019 shared tasks on quality es- timation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Pa- pers, Day 2), pages 1-10, Florence, Italy. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Sentence-level mt evaluation without reference translations beyond language modelling", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Aue", "suffix": "" }, { "first": "Martine", "middle": [], "last": "Smets", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EAMT", "volume": "", "issue": "", "pages": "103--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon, Anthony Aue, and Martine Smets. 2005. Sentence-level mt evaluation without refer- ence translations beyond language modelling. 
In Proceedings of EAMT, pages 103-112.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Tosin", "middle": [], "last": "Adewumi", "suffix": "" }, { "first": "Karmanya", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "Pawan", "middle": [], "last": "Sasanka Ammanamanchi", "suffix": "" }, { "first": "Aremu", "middle": [], "last": "Anuoluwapo", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Miruna", "middle": [], "last": "Khyathi Raghavi Chandu", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Clinciu", "suffix": "" }, { "first": "Kaustubh", "middle": [ "D" ], "last": "Das", "suffix": "" }, { "first": "Wanyu", "middle": [], "last": "Dhole", "suffix": "" }, { "first": "Esin", "middle": [], "last": "Du", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Durmus", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Emezue", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Tatsunori", "middle": [], "last": "Garbacea", "suffix": "" }, { "first": "Yufang", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Harsh", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Yangfeng", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "Shailza", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Mihir", "middle": [], "last": "Jolly", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Faisal", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Aman", "middle": [], "last": "Ladhak", "suffix": "" }, { "first": "Mounica", "middle": [], "last": "Madaan", "suffix": "" }, { "first": "Khyati", "middle": [], "last": "Maddela", "suffix": "" }, { "first": "Saad", "middle": [], "last": "Mahajan", "suffix": "" }, { "first": "", "middle": [], "last": "Mahamood", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.01672" ] }, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Tosin Adewumi, Karmanya Ag- garwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaus- tubh D. Dhole, Wanyu Du, Esin Durmus, Ond\u0159ej Du\u0161ek, Chris Emezue, Varun Gangal, Cristina Gar- bacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pe- dro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Rubungo An- dre Niyongabo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, Jo\u00e3o Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio So- brevilla Cabezudo, Hendrik Strobelt, Nishant Sub- ramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM Benchmark: Nat- ural Language Generation, its Evaluation and Met- rics. 
arXiv e-prints, page arXiv:2102.01672.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "A smorgasbord of features for automatic mt evaluation", "authors": [ { "first": "Jes\u00fas", "middle": [], "last": "Gim\u00e9nez", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e1rquez", "suffix": "" } ], "year": 2008, "venue": "Proceedings of WMT 2008", "volume": "", "issue": "", "pages": "195--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jes\u00fas Gim\u00e9nez and Llu\u00eds M\u00e1rquez. 2008. A smorgasbord of features for automatic mt evaluation. In Proceedings of WMT 2008, pages 195-198.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Linguistic features for automatic evaluation of heterogenous mt systems", "authors": [ { "first": "Jes\u00fas", "middle": [], "last": "Gim\u00e9nez", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e1rquez", "suffix": "" } ], "year": 2007, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jes\u00fas Gim\u00e9nez and Llu\u00eds M\u00e1rquez. 2007. Linguistic features for automatic evaluation of heterogenous mt systems. In Proceedings of WMT 2007.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Accurate evaluation of segment-level machine translation metrics", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" } ], "year": 2015, "venue": "The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1183--1191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1183-1191.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Continuous measurement scales in human evaluation of machine translation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", "volume": "", "issue": "", "pages": "33--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria.
Association for Computational Linguistics.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Can machine translation systems be evaluated by the crowd alone", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zobel", "suffix": "" } ], "year": 2016, "venue": "Natural Language Engineering", "volume": "", "issue": "", "pages": "1--28", "other_ids": { "DOI": [ "10.1017/S1351324915000339" ] }, "num": null, "urls": [], "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, FirstView:1-28.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Statistical power and translationese in machine translation evaluation", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "72--81", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.6" ] }, "num": null, "urls": [], "raw_text": "Yvette Graham, Barry Haddow, and Philipp Koehn. 2020. Statistical power and translationese in machine translation evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 72-81, Online. Association for Computational Linguistics.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Achieving accurate conclusions in evaluation of automatic machine translation metrics", "authors": [ { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvette Graham and Qun Liu. 2016. Achieving accurate conclusions in evaluation of automatic machine translation metrics. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1-10.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Named entity recognition in query", "authors": [ { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Gu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiafeng Guo, Gu Xu, Xueqi Cheng, and Hang Li. 2009. Named entity recognition in query.
In Proceedings of SIGIR.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Machine translation evaluation using recurrent neural networks", "authors": [ { "first": "Rohit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "380--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohit Gupta, Constantin Orasan, and Josef van Genabith. 2015a. Machine translation evaluation using recurrent neural networks. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 380-384, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Reval: A simple and effective machine translation evaluation metric based on recurrent neural networks", "authors": [ { "first": "Rohit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1066--1072", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohit Gupta, Constantin Orasan, and Josef van Genabith. 2015b. Reval: A simple and effective machine translation evaluation metric based on recurrent neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1066-1072. Association for Computational Linguistics.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Pairwise neural machine translation evaluation", "authors": [ { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and The 7th International Joint Conference of the Asian Federation of Natural Language Processing (ACL'15)", "volume": "", "issue": "", "pages": "805--814", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francisco Guzm\u00e1n, Shafiq Joty, Llu\u00eds M\u00e0rquez, and Preslav Nakov. 2015. Pairwise neural machine translation evaluation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and The 7th International Joint Conference of the Asian Federation of Natural Language Processing (ACL'15), pages 805-814, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Machine translation evaluation with neural networks", "authors": [ { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2017, "venue": "Comput.
Speech Lang", "volume": "45", "issue": "C", "pages": "180--200", "other_ids": { "DOI": [ "10.1016/j.csl.2016.12.005" ] }, "num": null, "urls": [], "raw_text": "Francisco Guzm\u00e1n, Shafiq Joty, Llu\u00eds M\u00e0rquez, and Preslav Nakov. 2017. Machine translation evaluation with neural networks. Comput. Speech Lang., 45(C):180-200.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "A History of Mathematical Statistics from 1750 to 1930", "authors": [ { "first": "Anders", "middle": [], "last": "Hald", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Hald. 1998. A History of Mathematical Statistics from 1750 to 1930. ISBN-10: 0471179124. Wiley-Interscience; 1 edition.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Chinese named entity recognition with conditional random fields in the light of chinese characteristics", "authors": [ { "first": "Aaron", "middle": [ "L-F" ], "last": "Han", "suffix": "" }, { "first": "Derek", "middle": [ "F" ], "last": "Wong", "suffix": "" }, { "first": "Lidia", "middle": [ "S" ], "last": "Chao", "suffix": "" } ], "year": 2013, "venue": "Language Processing and Intelligent Information Systems", "volume": "", "issue": "", "pages": "57--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron L-F Han, Derek F Wong, and Lidia S Chao. 2013a. Chinese named entity recognition with conditional random fields in the light of chinese characteristics. In Language Processing and Intelligent Information Systems, pages 57-68. Springer.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "LEPOR: An Augmented Machine Translation Evaluation Metric", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han. 2014. LEPOR: An Augmented Machine Translation Evaluation Metric. University of Macau, Macao.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Machine Translation Evaluation Resources and Methods: A Survey", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.04515" ] }, "num": null, "urls": [], "raw_text": "Lifeng Han. 2016. Machine Translation Evaluation Resources and Methods: A Survey. arXiv e-prints, page arXiv:1605.04515.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "AlphaMWE: Construction of multilingual parallel corpora with MWE annotations", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Gareth", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Smeaton", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Joint Workshop on Multiword Expressions and Electronic Lexicons", "volume": "", "issue": "", "pages": "44--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Gareth Jones, and Alan Smeaton. 2020a. AlphaMWE: Construction of multilingual parallel corpora with MWE annotations. In Proceedings of the Joint Workshop on Multiword Expressions and Electronic Lexicons, pages 44-57, online.
Association for Computational Linguistics.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "MultiMWE: Building a multi-lingual multi-word expression (MWE) parallel corpora", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Gareth", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Smeaton", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2970--2979", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Gareth Jones, and Alan Smeaton. 2020b. MultiMWE: Building a multi-lingual multi-word expression (MWE) parallel corpora. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2970-2979, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Chinese Character Decomposition for Neural MT with Multi-Word Expressions", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Gareth", "middle": [ "J", "F" ], "last": "Jones", "suffix": "" }, { "first": "Alan", "middle": [ "F" ], "last": "Smeaton", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Bolzoni", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.04497" ] }, "num": null, "urls": [], "raw_text": "Lifeng Han, Gareth J. F. Jones, Alan F. Smeaton, and Paolo Bolzoni. 2021. Chinese Character Decomposition for Neural MT with Multi-Word Expressions. arXiv e-prints, page arXiv:2104.04497.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Language-independent model for machine translation evaluation with reinforced factors", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Derek", "middle": [ "F" ], "last": "Wong", "suffix": "" }, { "first": "Lidia", "middle": [ "S" ], "last": "Chao", "suffix": "" }, { "first": "Liangye", "middle": [], "last": "He", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Junwen", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2013, "venue": "Machine Translation Summit XIV", "volume": "", "issue": "", "pages": "215--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Derek F. Wong, Lidia S. Chao, Liangye He, Yi Lu, Junwen Xing, and Xiaodong Zeng. 2013b. Language-independent model for machine translation evaluation with reinforced factors. In Machine Translation Summit XIV, pages 215-222. International Association for Machine Translation.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "A robust evaluation metric for machine translation with augmented factors", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Derek", "middle": [ "Fai" ], "last": "Wong", "suffix": "" }, { "first": "Lidia", "middle": [ "Sam" ], "last": "Chao", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Derek Fai Wong, and Lidia Sam Chao. 2012. A robust evaluation metric for machine translation with augmented factors.
In Proceedings of COLING.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Phrase tagset mapping for french and english treebanks and its application in machine translation evaluation", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Derek", "middle": [ "Fai" ], "last": "Wong", "suffix": "" }, { "first": "Lidia", "middle": [ "Sam" ], "last": "Chao", "suffix": "" }, { "first": "Liangeye", "middle": [], "last": "He", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ling", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "International Conference of the German Society for Computational Linguistics and Language Technology, LNAI", "volume": "8105", "issue": "", "pages": "119--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Derek Fai Wong, Lidia Sam Chao, Liangeye He, Shuo Li, and Ling Zhu. 2013c. Phrase tagset mapping for french and english treebanks and its application in machine translation evaluation. In International Conference of the German Society for Computational Linguistics and Language Technology, LNAI Vol. 8105, pages 119-131.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Unsupervised quality estimation model for english to german translation and its application in extensive supervised evaluation", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Derek", "middle": [ "Fai" ], "last": "Wong", "suffix": "" }, { "first": "Lidia", "middle": [ "Sam" ], "last": "Chao", "suffix": "" }, { "first": "Liangeye", "middle": [], "last": "He", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2014, "venue": "The Scientific World Journal. Issue: Recent Advances in Information Technology", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Derek Fai Wong, Lidia Sam Chao, Liangeye He, and Yi Lu. 2014. Unsupervised quality estimation model for english to german translation and its application in extensive supervised evaluation. In The Scientific World Journal. Issue: Recent Advances in Information Technology, pages 1-12.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "A description of tunable machine translation evaluation systems in wmt13 metrics task", "authors": [ { "first": "Lifeng", "middle": [], "last": "Han", "suffix": "" }, { "first": "Derek", "middle": [ "Fai" ], "last": "Wong", "suffix": "" }, { "first": "Lidia", "middle": [ "Sam" ], "last": "Chao", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Liangye", "middle": [], "last": "He", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiaji", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2013, "venue": "Proceedings of WMT 2013", "volume": "", "issue": "", "pages": "414--421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Han, Derek Fai Wong, Lidia Sam Chao, Yi Lu, Liangye He, Yiming Wang, and Jiaji Zhou. 2013d. A description of tunable machine translation evaluation systems in wmt13 metrics task.
In Proceedings of WMT 2013, pages 414-421.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [ "B" ], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "A new measure of rank correlation", "authors": [ { "first": "Maurice", "middle": [ "G" ], "last": "Kendall", "suffix": "" } ], "year": 1938, "venue": "Biometrika", "volume": "30", "issue": "", "pages": "81--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maurice G. Kendall. 1938. A new measure of rank correlation. Biometrika, 30:81-93.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Rank Correlation Methods", "authors": [ { "first": "Maurice", "middle": [ "G" ], "last": "Kendall", "suffix": "" }, { "first": "Jean", "middle": [ "Dickinson" ], "last": "Gibbons", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maurice G. Kendall and Jean Dickinson Gibbons. 1990. Rank Correlation Methods. Oxford University Press, New York.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Femti: Creating and using a framework for mt evaluation", "authors": [ { "first": "Margaret", "middle": [], "last": "King", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Popescu-Belis", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Machine Translation Summit IX", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret King, Andrei Popescu-Belis, and Eduard Hovy. 2003. Femti: Creating and using a framework for mt evaluation. In Proceedings of the Machine Translation Summit IX.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation.
In Proceedings of EMNLP.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Conference on Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of Conference on Association of Computational Linguistics.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Manual and automatic evaluation of machine translation between european languages", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings on the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "102--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Christof Monz. 2006a. Manual and automatic evaluation of machine translation between european languages. In Proceedings on the Workshop on Statistical Machine Translation, pages 102-121, New York City. Association for Computational Linguistics.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Manual and automatic evaluation of machine translation between european languages", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Christof Monz. 2006b. Manual and automatic evaluation of machine translation between european languages.
In Proceedings of WMT 2006.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Cross-lingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. CoRR, abs/1901.07291.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "The measurement of observer agreement for categorical data", "authors": [ { "first": "J", "middle": [ "Richard" ], "last": "Landis", "suffix": "" }, { "first": "Gary", "middle": [ "G" ], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159-174.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Task-based mt evaluation: From who/when/where extraction to event understanding", "authors": [ { "first": "Jamal", "middle": [], "last": "Laoudi", "suffix": "" }, { "first": "Ra", "middle": [ "R" ], "last": "Tate", "suffix": "" }, { "first": "Clare", "middle": [ "R" ], "last": "Voss", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC-06", "volume": "", "issue": "", "pages": "2048--2053", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jamal Laoudi, Ra R. Tate, and Clare R. Voss. 2006. Task-based mt evaluation: From who/when/where extraction to event understanding. In Proceedings of LREC-06, pages 2048-2053.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Probabilistic text structuring: Experiments with sentence ordering", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of ACL 2003.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Has machine translation achieved human parity? a case for document-level evaluation", "authors": [ { "first": "Samuel", "middle": [], "last": "L\u00e4ubli", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Volk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4791--4796", "other_ids": { "DOI": [ "10.18653/v1/D18-1512" ] }, "num": null, "urls": [], "raw_text": "Samuel L\u00e4ubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791-4796, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Automated metrics for mt evaluation",
"authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2013, "venue": "Machine Translation", "volume": "11", "issue": "", "pages": "731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie. 2013. Automated metrics for mt evaluation. Machine Translation, 11:731.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Combining rankings using conditional probability models on permutations", "authors": [ { "first": "Guy", "middle": [], "last": "Lebanon", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guy Lebanon and John Lafferty. 2002. Combining rankings using conditional probability models on permutations. In Proceedings of the ICML.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Edit distances with block movements and error rate confidence estimates", "authors": [ { "first": "Gregor", "middle": [], "last": "Leusch", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2009, "venue": "Machine Translation", "volume": "23", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregor Leusch and Hermann Ney. 2009. Edit distances with block movements and error rate confidence estimates. Machine Translation, 23(2-3).", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Results of the 2005 nist machine translation evaluation", "authors": [ { "first": "A", "middle": [], "last": "Li", "suffix": "" } ], "year": 2005, "venue": "Proceedings of WMT 2005", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. LI. 2005. Results of the 2005 nist machine translation evaluation. In Proceedings of WMT 2005.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Phrase-based evaluation for machine translation", "authors": [ { "first": "Liang You", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zheng Xian", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Guo Dong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "663--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang You Li, Zheng Xian Gong, and Guo Dong Zhou. 2012. Phrase-based evaluation for machine translation. In Proceedings of COLING, pages 663-672.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Shellcode_IA32: A Dataset for Automatic Shellcode Generation", "authors": [ { "first": "Pietro", "middle": [], "last": "Liguori", "suffix": "" }, { "first": "Erfan", "middle": [], "last": "Al-Hossami", "suffix": "" }, { "first": "Domenico", "middle": [], "last": "Cotroneo", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Natella", "suffix": "" }, { "first": "Bojan", "middle": [], "last": "Cukic", "suffix": "" }, { "first": "Samira", "middle": [], "last": "Shaikh", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.13100" ] }, "num": null, "urls": [], "raw_text": "Pietro Liguori, Erfan Al-Hossami, Domenico Cotroneo, Roberto Natella, Bojan Cukic, and Samira Shaikh. 2021. Shellcode_IA32: A Dataset for Automatic Shellcode Generation.
arXiv e-prints, page arXiv:2104.13100.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Automatic evaluation of summaries using n-gram co-occurrence statistics", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "E", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and E. H. Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of NAACL 2003.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of ACL 2004.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Syntactic features for evaluation of machine translation", "authors": [ { "first": "Ding", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ding Liu and Daniel Gildea. 2005. Syntactic features for evaluation of machine translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Fully automatic semantic mt evaluation", "authors": [ { "first": "Chi", "middle": [ "Kiu" ], "last": "Lo", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Karthik Turmuluru", "suffix": "" }, { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of WMT 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Kiu Lo, Anand Karthik Turmuluru, and Dekai Wu. 2012. Fully automatic semantic mt evaluation. In Proceedings of WMT 2012.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Meant: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles", "authors": [ { "first": "Chi", "middle": [ "Kiu" ], "last": "Lo", "suffix": "" }, { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Kiu Lo and Dekai Wu. 2011a. Meant: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles. In Proceedings of ACL 2011.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Structured vs.
flat semantic role representations for machine translation evaluation", "authors": [ { "first": "Chi", "middle": [ "Kiu" ], "last": "Lo", "suffix": "" }, { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 5th Workshop on Syntax and Structure in Statistical Translation (SSST-5)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi Kiu Lo and Dekai Wu. 2011b. Structured vs. flat semantic role representations for machine translation evaluation. In Proceedings of the 5th Workshop on Syntax and Structure in Statistical Translation (SSST-5).", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "A set of recommendations for assessing human-machine parity in language translation", "authors": [ { "first": "Samuel", "middle": [], "last": "L\u00e4ubli", "suffix": "" }, { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Qinlan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" } ], "year": 2020, "venue": "Journal of Artificial Intelligence Research", "volume": "67", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1613/jair.1.11371" ] }, "num": null, "urls": [], "raw_text": "Samuel L\u00e4ubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, and Antonio Toral. 2020. A set of recommendations for assessing human-machine parity in language translation. Journal of Artificial Intelligence Research, 67.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Maxsd: A neural machine translation evaluation metric optimized by maximizing similarity distance", "authors": [ { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Fandong", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Daqi", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Mingxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Natural Language Understanding and Intelligent Applications - 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages", "volume": "", "issue": "", "pages": "153--161", "other_ids": { "DOI": [ "10.1007/978-3-319-50496-4_13" ] }, "num": null, "urls": [], "raw_text": "Qingsong Ma, Fandong Meng, Daqi Zheng, Mingxuan Wang, Yvette Graham, Wenbin Jiang, and Qun Liu. 2016. Maxsd: A neural machine translation evaluation metric optimized by maximizing similarity distance.
In Natural Language Understanding and Intelligent Applications - 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2-6, 2016, Proceedings, pages 153-161.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges", "authors": [ { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Johnny", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "62--90", "other_ids": { "DOI": [ "10.18653/v1/W19-5302" ] }, "num": null, "urls": [], "raw_text": "Qingsong Ma, Johnny Wei, Ond\u0159ej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Summarization evaluation: An overview", "authors": [ { "first": "I", "middle": [], "last": "Mani", "suffix": "" } ], "year": 2001, "venue": "NTCIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Mani. 2001. Summarization evaluation: An overview. In NTCIR.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "Muc-7 evaluation of ie technology: Overview of results", "authors": [ { "first": "Elaine", "middle": [], "last": "Marsh", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Perzanowski", "suffix": "" } ], "year": 1998, "venue": "Proceedings of Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elaine Marsh and Dennis Perzanowski. 1998. Muc-7 evaluation of ie technology: Overview of results. In Proceedings of Message Understanding Conference (MUC-7).", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "Paraphrasing using given and new information in a question-answer system", "authors": [ { "first": "Kathleen", "middle": [ "R" ], "last": "McKeown", "suffix": "" } ], "year": 1979, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathleen R. McKeown. 1979. Paraphrasing using given and new information in a question-answer system. In Proceedings of ACL 1979.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Microsoft research treelet translation system: Naacl 2006 europarl evaluation", "authors": [ { "first": "Marie", "middle": [], "last": "Meteer", "suffix": "" }, { "first": "Varda", "middle": [], "last": "Shaked", "suffix": "" } ], "year": 1988, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie Meteer and Varda Shaked. 1988. Microsoft research treelet translation system: Naacl 2006 europarl evaluation.
In Proceedings of COLING.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Wordnet: an on-line lexical database", "authors": [ { "first": "G", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller. 1990. Wordnet: an on-line lexical database. International Journal of Lexicography, 3(4):235-244.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Applied statistics and probability for engineers", "authors": [ { "first": "Douglas", "middle": [ "C" ], "last": "Montgomery", "suffix": "" }, { "first": "George", "middle": [ "C" ], "last": "Runger", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas C. Montgomery and George C. Runger. 2003. Applied statistics and probability for engineers, third edition. John Wiley and Sons, New York.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Weakly supervised approaches for quality estimation", "authors": [ { "first": "Erwan", "middle": [], "last": "Moreau", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2013, "venue": "Machine Translation", "volume": "27", "issue": "3-4", "pages": "257--280", "other_ids": { "DOI": [ "10.1007/s10590-013-9142-8" ] }, "num": null, "urls": [], "raw_text": "Erwan Moreau and Carl Vogel. 2013. Weakly supervised approaches for quality estimation. Machine Translation, 27(3-4):257-280.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Limitations of MT quality estimation supervised systems: The tails prediction problem", "authors": [ { "first": "Erwan", "middle": [], "last": "Moreau", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2205--2216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erwan Moreau and Carl Vogel. 2014. Limitations of MT quality estimation supervised systems: The tails prediction problem. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2205-2216, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": { "DOI": [ "10.1162/089120103321337421" ] }, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models.
Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Overview of the iwslt 2009 evaluation campaign", "authors": [ { "first": "M", "middle": [], "last": "Paul", "suffix": "" } ], "year": 2009, "venue": "Proceedings of IWSLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Paul. 2009. Overview of the iwslt 2009 evaluation campaign. In Proceedings of IWSLT.", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "Overview of the iwslt 2010 evaluation campaign", "authors": [ { "first": "Michael", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "St\u00fcker", "suffix": "" } ], "year": 2010, "venue": "Proceedings of IWSLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Paul, Marcello Federico, and Sebastian St\u00fcker. 2010. Overview of the iwslt 2010 evaluation campaign. In Proceedings of IWSLT.", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling", "authors": [ { "first": "Karl", "middle": [], "last": "Pearson", "suffix": "" } ], "year": 1900, "venue": "Philosophical Magazine", "volume": "50", "issue": "5", "pages": "157--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Pearson. 1900. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical Magazine, 50(5):157-175.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "Evaluation without references: Ibm1 scores as evaluation metrics", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" }, { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Eleftherios", "middle": [], "last": "Avramidis", "suffix": "" }, { "first": "Aljoscha", "middle": [], "last": "Burchardt", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107, David Vilar, Eleftherios Avramidis, and Aljoscha Burchardt. 2011. Evaluation without references: Ibm1 scores as evaluation metrics.
In Proceedings of WMT 2011.", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Word error rates: Decomposition over pos classes and applications for error analysis", "authors": [ { "first": "M", "middle": [], "last": "Popovic", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2007, "venue": "Proceedings of WMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Popovic and Hermann Ney. 2007. Word error rates: Decomposition over pos classes and applications for error analysis. In Proceedings of WMT 2007.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "Informative manual evaluation of machine translation output", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5059--5069", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.444" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2020a. Informative manual evaluation of machine translation output. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5059-5069, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "Relations between comprehensibility and adequacy errors in machine translation output", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 24th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "256--264", "other_ids": { "DOI": [ "10.18653/v1/2020.conll-1.19" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2020b. Relations between comprehensibility and adequacy errors in machine translation output. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 256-264, Online. Association for Computational Linguistics.", "links": null }, "BIBREF118": { "ref_id": "b118", "title": "Evaluating text-type suitability for machine translation a case study on an english-danish system", "authors": [ { "first": "Claus", "middle": [], "last": "Povlsen", "suffix": "" }, { "first": "Nancy", "middle": [], "last": "Underwood", "suffix": "" }, { "first": "Bradley", "middle": [], "last": "Music", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Neville", "suffix": "" } ], "year": 1998, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claus Povlsen, Nancy Underwood, Bradley Music, and Anne Neville. 1998. Evaluating text-type suitability for machine translation a case study on an english-danish system. In Proceedings of LREC.", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "\"This sentence is wrong.\" Detecting errors in machine-translated sentences", "authors": [ { "first": "Sylvain", "middle": [], "last": "Raybaud", "suffix": "" }, { "first": "David", "middle": [], "last": "Langlois", "suffix": "" }, { "first": "Kamel", "middle": [], "last": "Sma\u00efli", "suffix": "" } ], "year": 2011, "venue": "Machine Translation", "volume": "25", "issue": "", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sylvain Raybaud, David Langlois, and Kamel Sma\u00efli. 2011. \"this sentence is wrong.\" detecting errors in machine-translated sentences.
Machine Translation, 25(1):1-34.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Investigation of intelligibility judgments", "authors": [ { "first": "Florence", "middle": [], "last": "Reeder", "suffix": "" } ], "year": 2004, "venue": "Machine Translation: From Real Users to Research", "volume": "", "issue": "", "pages": "227--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florence Reeder. 2004. Investigation of intelligibility judgments. In Machine Translation: From Real Users to Research, pages 227-235, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Botha", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Jinlan", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.07412" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Graham Neubig, and Melvin Johnson. 2021. XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation. arXiv e-prints, page arXiv:2104.07412.", "links": null }, "BIBREF122": { "ref_id": "b122", "title": "Multiword expressions: A pain in the neck for nlp", "authors": [ { "first": "Ivan", "middle": [ "A" ], "last": "Sag", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bond", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for nlp. In Computational Linguistics and Intelligent Text Processing, pages 1-15, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "The impact of multiword expression compositionality on machine translation evaluation", "authors": [ { "first": "Bahar", "middle": [], "last": "Salehi", "suffix": "" }, { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 11th Workshop on Multiword Expressions", "volume": "", "issue": "", "pages": "54--59", "other_ids": { "DOI": [ "10.3115/v1/W15-0909" ] }, "num": null, "urls": [], "raw_text": "Bahar Salehi, Nitika Mathur, Paul Cook, and Timothy Baldwin. 2015.
The impact of multiword expression compositionality on machine translation evaluation. In Proceedings of the 11th Workshop on Multiword Expressions, pages 54-59, Denver, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie J. Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA.", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "Combining confidence estimation and reference-based metrics for segment-level mt evaluation", "authors": [ { "first": "L", "middle": [], "last": "Specia", "suffix": "" }, { "first": "J", "middle": [], "last": "Gim\u00e9nez", "suffix": "" } ], "year": 2010, "venue": "The Ninth Conference of the Association for Machine Translation in the Americas (AMTA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Specia and J. Gim\u00e9nez. 2010. Combining confidence estimation and reference-based metrics for segment-level mt evaluation. In The Ninth Conference of the Association for Machine Translation in the Americas (AMTA).", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "Findings of the WMT 2020 shared task on quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "743--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "Findings of the WMT 2018 shared task on quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Ram\u00f3n", "middle": [ "F" ], "last": "Astudillo", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "689--709", "other_ids": { "DOI": [ "10.18653/v1/W18-6451" ] }, "num": null, "urls": [], "raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n F. Astudillo, and Andr\u00e9 F. T. Martins. 2018. Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF128": { "ref_id": "b128", "title": "Predicting machine translation adequacy", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Najeh", "middle": [], "last": "Hajlaoui", "suffix": "" }, { "first": "Catalina", "middle": [], "last": "Hallett", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" } ], "year": 2011, "venue": "Machine Translation Summit XIII", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Najeh Hajlaoui, Catalina Hallett, and Wilker Aziz. 2011. Predicting machine translation adequacy. In Machine Translation Summit XIII.", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "Machine translation evaluation versus quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Dhwaj", "middle": [], "last": "Raj", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine translation evaluation versus quality estimation. Machine Translation.", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "QuEst - a translation quality estimation framework", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Jose", "middle": [ "G C" ], "last": "De Souza", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Kashif Shah, Jose G.C. de Souza, and Trevor Cohn. 2013. QuEst - a translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79-84, Sofia, Bulgaria.
Association for Computational Linguistics.", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "A new quantitative quality measure for machine translation systems", "authors": [ { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" }, { "first": "Ming-Wen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jing-Shin", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1992, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Yih Su, Ming-Wen Wu, and Jing-Shin Chang. 1992. A new quantitative quality measure for machine translation systems. In Proceedings of COLING.", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "Accelerated DP based search for statistical translation", "authors": [ { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sawaf", "suffix": "" } ], "year": 1997, "venue": "Proceedings of EUROSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Tillmann, Stephan Vogel, Hermann Ney, Arkaitz Zubiaga, and Hassan Sawaf. 1997. Accelerated DP based search for statistical translation. In Proceedings of EUROSPEECH.", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "Evaluation of machine translation and its evaluation", "authors": [ { "first": "Joseph", "middle": [ "P" ], "last": "Turian", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Shen", "suffix": "" }, { "first": "I", "middle": [ "Dan" ], "last": "Melamed", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph P. Turian, Luke Shen, and I. Dan Melamed. 2006. Evaluation of machine translation and its evaluation. Technical report, DTIC Document.", "links": null }, "BIBREF134": { "ref_id": "b134", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Conference on Neural Information Processing Systems, pages 6000-6010.", "links": null }, "BIBREF135": { "ref_id": "b135", "title": "Task-based evaluation of machine translation (MT) engines: Measuring how well people extract who, when, where-type elements in MT output", "authors": [ { "first": "Clare", "middle": [ "R" ], "last": "Voss", "suffix": "" }, { "first": "Ra", "middle": [ "R" ], "last": "Tate", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 11th Annual Conference of the European Association for Machine Translation (EAMT-2006)", "volume": "", "issue": "", "pages": "203--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clare R. Voss and Ra R. Tate. 2006. Task-based evaluation of machine translation (MT) engines: Measuring how well people extract who, when, where-type elements in MT output. In Proceedings of the 11th Annual Conference of the European Association for Machine Translation (EAMT-2006), pages 203-212.", "links": null }, "BIBREF136": { "ref_id": "b136", "title": "Translation. Machine Translation of Languages: Fourteen Essays", "authors": [ { "first": "Warren", "middle": [], "last": "Weaver", "suffix": "" } ], "year": 1955, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Warren Weaver. 1955. Translation. Machine Translation of Languages: Fourteen Essays.", "links": null }, "BIBREF137": { "ref_id": "b137", "title": "The ARPA MT evaluation methodologies: Evolution, lessons, and future approaches", "authors": [ { "first": "John", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "O'Connell", "suffix": "" }, { "first": "Francis", "middle": [], "last": "O'Mara", "suffix": "" } ], "year": 1994, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John S. White, Theresa O'Connell, and Francis O'Mara. 1994. The ARPA MT evaluation methodologies: Evolution, lessons, and future approaches. In Proceedings of AMTA.", "links": null }, "BIBREF138": { "ref_id": "b138", "title": "A task-oriented evaluation metric for machine translation", "authors": [ { "first": "John", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "Kathryn", "middle": [ "B" ], "last": "Taylor", "suffix": "" } ], "year": 1998, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John S. White and Kathryn B. Taylor. 1998. A task-oriented evaluation metric for machine translation. In Proceedings of LREC.", "links": null }, "BIBREF139": { "ref_id": "b139", "title": "ATEC: automatic evaluation of machine translation via word choice and word order", "authors": [ { "first": "Billy", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2009, "venue": "Machine Translation", "volume": "23", "issue": "2-3", "pages": "141--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Billy Wong and Chunyu Kit. 2009. ATEC: automatic evaluation of machine translation via word choice and word order.
Machine Translation, 23(2-3):141-155.", "links": null }, "BIBREF140": { "ref_id": "b140", "title": "RED: A reference dependency based MT evaluation metric", "authors": [ { "first": "Hui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2014, "venue": "COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", "volume": "", "issue": "", "pages": "2042--2051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Yu, Xiaofeng Wu, Jun Xie, Wenbin Jiang, Qun Liu, and Shouxun Lin. 2014. RED: A reference dependency based MT evaluation metric. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 2042-2051.", "links": null }, "BIBREF141": { "ref_id": "b141", "title": "Deep neural networks in machine translation: An overview", "authors": [ { "first": "Jiajun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2015, "venue": "IEEE Intelligent Systems", "volume": "", "issue": "5", "pages": "16--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiajun Zhang and Chengqing Zong. 2015. Deep neural networks in machine translation: An overview. IEEE Intelligent Systems, (5):16-25.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Figure 1: Human Assessment Methods" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Automatic Quality Assessment Methods" }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "LEPOR = LP \\times NPosPenal \\times Harmonic(\\alpha R, \\beta P) \\quad (22), \\quad hLEPOR = Harmonic(w_{LP} LP, w_{NPosPenal} NPosPenal, w_{HPR} HPR), \\quad nLEPOR = LP \\times NPosPenal \\times \\exp\\left( \\sum_{n=1}^{N} w_n \\log HPR \\right)" } } } }