{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:23:31.136092Z" }, "title": "Machine Translation Evaluation Inside QARLA", "authors": [ { "first": "Jes\u00fas", "middle": [], "last": "Gim\u00e9nez", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad Nacional de Educaci\u00f3n a Distancia \u00a2 InterACT Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "jgimenez@lsi.upc.edu" }, { "first": "Enrique", "middle": [], "last": "Amig\u00f3", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad Nacional de Educaci\u00f3n a Distancia \u00a2 InterACT Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "enrique@lsi.uned.es" }, { "first": "Chiori", "middle": [], "last": "Hori", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad Nacional de Educaci\u00f3n a Distancia \u00a2 InterACT Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "chiori@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work we present the fundamentals of the IQMT framework for MT evaluation. IQMT offers a common workbench on which existing evaluation metrics can be utilized. We suggest the IQ measure and test it on the Chinese-to-English data from the IWSLT 2004 Evaluation Campaign. We show how the correlation with human assessments at the system level improves substantially for most individual metrics. Moreover, IQMT allows to robustly combine several metrics avoiding scaling problems and metric weightings. Several metric combinations were tried, but correlations did not further improve significantly.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "In this work we present the fundamentals of the IQMT framework for MT evaluation. IQMT offers a common workbench on which existing evaluation metrics can be utilized. We suggest the IQ measure and test it on the Chinese-to-English data from the IWSLT 2004 Evaluation Campaign. We show how the correlation with human assessments at the system level improves substantially for most individual metrics. Moreover, IQMT allows to robustly combine several metrics avoiding scaling problems and metric weightings. Several metric combinations were tried, but correlations did not further improve significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "At the current level of improvement in a couple of years there will probably exist Machine Translation (MT) systems that perform better than humans according to existing MT evaluation metrics. By then, these metrics, as they are currently applied, will become useless and more sophisticated metrics will be needed (Franz Och, talk at the ACL 2005 Workshop on \"Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond\"). We refer to this problem as the '2008 MT Evaluation Challenge'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this work we present the fundamentals of IQMT 1 (Inside QARLA MT evaluation), a framework for MT evaluation which intends to overcome the MT evaluation challenge by offering a common workbench on which existing evaluation metrics can be used and combined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Inside QARLA [1] , automatic evaluation of translations is interpreted as the application of similarity metrics between a set of candidate translations and a set of reference translations. In this context, one of the main issues is to determine how similar a machine-produced translation must be to a set of human references to certify that it is a good translation.", "cite_spans": [ { "start": 13, "end": 16, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "That is, how the scale properties of the similarity metrics must be interpreted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Another important issue is how to combine information from different metrics into a single measure of quality. In the last years, it has been repeatedly argued that current MT evaluation metrics do not capture well possible improvements attained by means of incorporating linguistic knowledge [2] . One of the possible reasons for that is that most of the current metrics are based on rewarding lexical similarity, thus not taking into account any additional syntactic or semantic information. We believe that new metrics should be investigated and combined with current ones. The question then would be how to ponderate new similarity metrics with respect to existing ones.", "cite_spans": [ { "start": 293, "end": 296, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The QARLA framework has been successfully applied to the automatic evaluation of summaries [3] . Its probabilistic model is not affected by the scale properties of individual metrics. It allows also to combine evaluation metrics in a single measure, QUEEN, such that it is non-dependent on individual metric scales. Our goal is to adapt the QUEEN measure to MT evaluation. For that purpose, we have defined the IQ (Innovated QUEEN) measure.", "cite_spans": [ { "start": 91, "end": 94, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We have applied IQMT to the task of evaluating the IWSLT 2004 results [4] . We show how existing metrics can be used inside QARLA exhibiting higher levels of correlation with human assessments at the system level. We also worked on combinations of metrics but did not achieve any further improvement.", "cite_spans": [ { "start": 70, "end": 73, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows. In Section 2 the QARLA framework is described. In Section 3 we discuss current trends in MT evaluation. Our approach to MT evaluation inside QARLA is described in Section 4. Experimental work is deployed in Section 5. In Section 6 we present a preliminary evaluation of the results of the IWSLT 2005. Finally, some conclusions and further work are drawn in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The QARLA framework was originally defined for automatic evaluation of summaries. QARLA uses similarity to models (human references) as a building block for the evaluation of automatic summarisation systems. 
The input for QARLA, in a summarisation task, is a set of test cases, a set of similarity metrics X, and a set of models (human references) R for each test case. With such a testbed, QARLA provides a measure, QUEEN, which combines assorted similarity metrics to estimate the quality of automatic summarisers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The QARLA Framework", "sec_num": "2." }, { "text": "QUEEN operates under the assumption that a good summary must be similar to all model summaries according to all metrics. QUEEN is defined as the probability, measured over R x R x R, that for every metric in X the automatic summary a is closer to a model than two other models are to each other:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The QARLA Framework", "sec_num": "2." }, { "text": "QUEEN_{X,R}(a) = Prob((r, r', r'') in R x R x R : for every x in X, x(a, r) >= x(r', r''))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The QARLA Framework", "sec_num": "2."
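}, { "text": "As an illustration only, the following minimal sketch (our own reading of the definition above, not the actual IQMT implementation) estimates QUEEN for a candidate a, a set of models R and a set of similarity metrics X; each metric is assumed to be a function x(candidate, reference) returning higher values for more similar pairs:

import itertools

def queen(a, models, metrics):
    # Fraction of triples (r, r1, r2) from R x R x R for which, under every
    # metric x, the candidate a is at least as close to r as r1 is to r2.
    triples = list(itertools.product(models, repeat=3))
    hits = sum(1 for (r, r1, r2) in triples
               if all(x(a, r) >= x(r1, r2) for x in metrics))
    return hits / len(triples) if triples else 0.0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The QARLA Framework", "sec_num": "2."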
}, { "text": "In recent years, many efforts have been devoted to including linguistic information beyond lexical units in the parameter estimation of translation models in Statistical Machine Translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The '2008 MT Evaluation Challenge'", "sec_num": "3." }, { "text": "However, to our knowledge, no significant improvement has been reported so far. An exception is the case of [5] , who presented a syntax-based language model based upon that described by [6] which, combined with the syntax-based translation model described by [7] , achieved a notable improvement in grammaticality. However, they measured this improvement by means of human evaluation.", "cite_spans": [ { "start": 108, "end": 111, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 185, "end": 188, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 259, "end": 262, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The '2008 MT Evaluation Challenge'", "sec_num": "3." }, { "text": "At this point, one may argue that evaluation metrics are not well suited to capture the improvements attained. Most of the existing metrics work only at the lexical level. This is the case of metrics such as BLEU [8] , NIST [9] , WER and PER [10] , and GTM [11] . We may find some notable exceptions such as METEOR [12] , ROUGE [13] , and WNM [14] , which consider additional information. For instance, ROUGE and METEOR consider stemming, and allow for WordNet [15] lookup. METEOR performs a synonym search in WordNet. As to WNM, this metric is a variant of BLEU which weights n-grams according to their statistical salience, estimated from a monolingual corpus.", "cite_spans": [ { "start": 209, "end": 212, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 220, "end": 223, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 238, "end": 242, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 253, "end": 257, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 311, "end": 315, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 324, "end": 328, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 339, "end": 343, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 457, "end": 461, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "The '2008 MT Evaluation Challenge'", "sec_num": "3." }, { "text": "Beyond that, we may find the approach of [16] , who introduce a series of syntactic features such as constituent/dependency precision and recall and head-word chain matching. They show how adding syntactic information to the evaluation metric improves both sentence-level and system-level correlation with human judgements.", "cite_spans": [ { "start": 47, "end": 51, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "The '2008 MT Evaluation Challenge'", "sec_num": "3." }, { "text": "In a recent work, [17] tried to combine some aspects of different metrics. They applied machine learning techniques to build a classifier that distinguished between human-generated (good) and machine-generated (bad) translations. They used features inspired by metrics such as BLEU, NIST, WER and PER, obtaining higher levels of correlation with human judgements. Similarly, the IQMT framework permits metric combinations, with the distinctive feature that there is no need to perform any training or adjustment of parameters.", "cite_spans": [ { "start": 18, "end": 22, "text": "[17]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "The '2008 MT Evaluation Challenge'", "sec_num": "3." }, { "text": "QUEEN operates under the assumption that there exists a set of similarity metrics which are capable of grouping models apart from low-quality elements. That is, QUEEN assumes that a good element must be close to all models. The question is whether it is possible to find such a set of metrics in the context of translations. In a first experiment we tried to apply the original QUEEN to MT, but we did not obtain significant improvements in correlation with human judgements for most of the metrics. See details in Subsection 5.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QARLA for MT Evaluation", "sec_num": "4." }, { "text": "A possible reason is that translations are shorter than summaries. While a summary typically contains around 100 words, sentences are much shorter. For instance, we are working on translations with an average length of 8 words. Less information makes it more difficult to find metrics which characterise the properties of models. In order to estimate the similarity to one model, IQ considers the distribution of distances between pairs of models (r' and r'' in the formula). However, we work under the assumption that the metrics are not capable of grouping all models. Moreover, we do not know which model pairs should be chosen. Therefore, we define the following criterion: \"a good translation must be at least as similar to one of the models as the rest of model pairs are to each other\". In order to introduce this idea into the IQ definition we universally quantify the variables r' and r'':", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QARLA for MT Evaluation", "sec_num": "4." }, { "text": "IQ_{X,R}(a) = 1 if for every (r', r'') in R x R there exists a model r in R such that, for every x in X, x(a, r) >= x(r', r''); otherwise IQ_{X,R}(a) = 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QARLA for MT Evaluation", "sec_num": "4." }, { "text": "This IQ definition satisfies the QUEEN properties described in Section 2. The main disadvantage of IQ with respect to QUEEN is that IQ considers only the similarity to the nearest model. Furthermore, it does not consider the distribution of distances between models. Therefore, IQ becomes a binary value (zero or one). That is, IQ assumes that there exist just 'correct' or 'incorrect' translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QARLA for MT Evaluation", "sec_num": "4."
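}, { "text": "Under the same assumptions as before, a minimal sketch of the IQ criterion (again our own illustration, not the released IQMT code) could look as follows:

import itertools

def iq(a, models, metrics):
    # IQ is binary: the candidate a is 'correct' if, for every pair of models
    # (r1, r2), there is at least one model r that a matches at least as well,
    # under every metric, as r1 matches r2.
    for r1, r2 in itertools.product(models, repeat=2):
        if not any(all(x(a, r) >= x(r1, r2) for x in metrics) for r in models):
            return 0
    return 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QARLA for MT Evaluation", "sec_num": "4."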
}, { "text": "In order to test our approach, we utilized the data and results from the IWSLT04 evaluation campaign. We focused on the evaluation of the Chinese-to-English (CE) translation task, in which a set of 500 short sentences from the Basic Travel Expressions Corpus (BTEC) [4] were translated.", "cite_spans": [ { "start": 265, "end": 268, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1." }, { "text": "For purposes of automatic evaluation, 16 reference translations and outputs by 20 different MT systems were available for each sentence. Moreover, each of these outputs was evaluated by three judges on the basis of adequacy and fluency [18] .", "cite_spans": [ { "start": 236, "end": 240, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1." }, { "text": "We considered a set of 26 different metrics from 7 metric families:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "BLEU 2 accumulated BLEU scores for several n-gram levels (n = 1..4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "NIST 3 accumulated NIST scores for several n-gram levels (n = 1..5).", "cite_spans": [ { "start": 5, "end": 6, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "mWER (default).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "mPER (default).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "GTM 4 for several values of the exponent parameter (e = 1, 2, 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "METEOR 5 We used 4 variants.", "cite_spans": [ { "start": 7, "end": 8, "text": "5", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "METEOR.exact running \"exact\" module only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "METEOR.porter (default) running \"exact\" and \"porter stem\" modules, in that order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2."
}, { "text": "METEOR.wn1 running \"exact\", \"porter stem\" and \"wn stem\" modules, in that order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "METEOR.wn2 running \"exact\", \"porter stem\", \"wn stem\" and \"wn synonymy\" modules, in that order. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Set", "sec_num": "5.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ROUGE 6 for several 2 -grams (2 B T 5 4 X 7 6 S", "eq_num": "%" } ], "section": "Metric Set", "sec_num": "5.2." }, { "text": "First, we studied the performance of individual metrics outside the QARLA framework. System-level scores for 5 different metrics (i.e. BLEU, NIST, mWER, mPER, and GTM) were available. Additionaly, we computed the rest of metrics described in Subsection 5.2. Table 1 shows Pearson Correlation between individual metrics and human assessments. The first two columns, 'Adequacy' and 'Fluency', respectively refer to the correlation with adequacy and fluency outside the QARLA framework. ROUGE variants outperform the rest of metrics both in adequacy and fluency. The highest correlation in adequacy is obtained by ROUGE-S*, whereas for fluency ROUGE.n3 obtains the highest correlation. BLEU and METEOR variants achieve also high levels of correlation. ' refer to correlation inside QARLA, using the IQ measure.", "cite_spans": [], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Automatic Evaluation Metrics outside QARLA", "sec_num": "5.3." }, { "text": "First, we computed the QUEEN measure based on each metric individually. See correlation results in Table 1 , columns 3 and 4, 'Adequacy ' and, 'Fluency ', respectively. For most of the metrics there is no significant improvement. Only in the case of the NIST family of metrics , there is a consistent and very substantial improvement with respect both to adequacy and fluency. The highest levels of correlation are again achieved by ROUGE.n3 and ROUGE-S* metrics, but at the same degree than outside the QARLA framework. The combination of these two metrics, \u00a4 ROUGE.n3, ROUGE-S*\u00a5 , does not report any significant improvement.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Automatic Evaluation Metrics inside QARLA", "sec_num": "5.4." }, { "text": "Next, we computed IQ measure based on each metric individually. See correlation results in Table 1 , columns 5 and 6, 'Adequacy\u00a1 \u00a2 ' and, 'Fluency\u00a1 \u00a2 ', respectively. All metrics but BLEU-based, WER and PER, obtain higher levels of correlation both with respect to adequacy and fluency when applied inside QARLA. Again, ROUGE variants attain the highest levels of correlation in adequacy and fluency. ME-TEOR variants obtain also high levels of correlation. The highest correlation in adequacy is obtained by ROUGE-L, whereas for fluency ROUGE.n3 achieves the highest correlation. The combination of these two metrics, \u00a4 ROUGE.n3, ROUGE-L\u00a5 , does not report any significant improvement.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Automatic Evaluation Metrics inside QARLA", "sec_num": "5.4." 
}, { "text": "The extremely low levels of correlation attained by BLEU, WER and PER deserve further analysis. By inspecting results, we observe that these metrics generate very low IQ values. A possible explanation is that while most of the current metrics are able to exploit multiple references simultaneously, QARLA works with similarities on a singlereference basis. Each translation is contrasted with each reference independently, so there is a decrease in the reliability of automatic metric scores. The QUEEN measure is not affected because it considers the similarity to all references whereas the IQ measure considers only the similarity to the closest reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation Metrics inside QARLA", "sec_num": "5.4." }, { "text": "BLEU, WER and PER seem to be specially sensitive to this problem. BLEU looks for high precision over any of the models. We conjecture that BLEU is specially useful when it works over a set of models (multiple references), which is not the case in QARLA. Regarding WER and PER, we think that these metrics are possibly capturing non-relevant differ-ences between translations. Thus, they are placing models too close to each other. Recall the IQ definition in Section 4. Good translations must be at least as similar to one of the models as the rest of model pairs are to each other. WER and PER are therefore obliging candidate translations to be extremely similar to one of the references in order to be considered correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation Metrics inside QARLA", "sec_num": "5.4." }, { "text": "One of the main features of QARLA is that it allows to robustly combine several evaluation metrics. We study several combinations. Due to the computational complexity of exhaustively trying all metric combinations 7 we performed a clustering as described in [3] so as to detect metrics that behave similarly. This clustering process is based on the behaviour of metrics over samples", "cite_spans": [ { "start": 214, "end": 215, "text": "7", "ref_id": "BIBREF6" }, { "start": 258, "end": 261, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "\u00a4 \u00a6 ) F 6 H F 6 C F 6 C C\u00a5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": ". We consider that two sets of metrics behave similarly if the automatic translation \u00a6 is as close to the model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "6 as 6 C , 6 C C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "are to each other for both sets of metrics. We applied the k-means algorithm [19] .", "cite_spans": [ { "start": 77, "end": 81, "text": "[19]", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "Clustering results are shown in Table 2 . Very interestingly, clusters 1 to 4 group some metric variants at the same level of granularity (from 1-gram to 4-gram). WER and PER remain together in cluster 5. 
}, { "text": "Clustering results are shown in Table 2 . Very interestingly, clusters 1 to 4 group some metric variants at the same level of granularity (from 1-gram to 4-gram). WER and PER remain together in cluster 5. Clusters 6 to 9 put together several variants of METEOR, NIST, GTM, and ROUGE, respectively.", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "From each cluster we selected a representative based on the level of correlation between the IQ measure and human assessments, as reported in Table 1 (columns 5 and 6). Specifically, a representative for adequacy and a representative for fluency were chosen. We did not use cluster 5. Therefore, we limited our exploration to 510 metric combinations, 255 for fluency and 255 for adequacy. Table 3 and Table 4 show correlation with adequacy and fluency, respectively, for some combinations of metrics. In the case of adequacy we did not find a combination exhibiting a higher correlation than ROUGE-L alone. In the case of fluency, 4 combinations outperformed ROUGE.n3, although not very significantly. The best combination is {ROUGE.n3, ROUGE-SU*}.", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 149, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 385, "end": 392, "text": "Table 3", "ref_id": null }, { "start": 397, "end": 404, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "We suspect that the benefits of combining metrics are hidden by the very high levels of correlation already achieved by single metrics. We further discuss this problem in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Combinations", "sec_num": "5.5." }, { "text": "We present preliminary results on the evaluation of the Chinese-to-English Supplied Data track of the IWSLT 2005 Evaluation Campaign [20] . The test set consists of 506 very short sentences (average length of 6 words). 16 reference translations and 11 system outputs were available for each sentence. Human assessments, based on adequacy, fluency and meaning maintenance at the system level, were available. 7 There are 2^26 - 1 possible combinations if we take into account all metrics.", "cite_spans": [ { "start": 133, "end": 137, "text": "[20]", "ref_id": "BIBREF20" }, { "start": 407, "end": 408, "text": "7", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "IQMT for IWSLT 2005", "sec_num": "6." }, { "text": "We studied the behaviour of individual metrics outside QARLA. Very high levels of correlation (over 0.95) are achieved. METEOR variants and ROUGE.n1 are the metrics that obtain the highest levels of correlation with respect to adequacy (0.98) and meaning maintenance (0.99). For fluency, BLEU.n4 and GTM.e3 obtain the highest correlation (0.95).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IQMT for IWSLT 2005", "sec_num": "6." }, { "text": "In spite of the very high levels of correlation already achieved outside QARLA, we tested the behaviour of these metrics inside QARLA. Levels of correlation attained for adequacy and meaning maintenance are also very high inside QARLA. NIST.n1 is the highest scoring metric for adequacy (0.98) and meaning maintenance (0.97).
All metrics exhibit very high levels of correlation for adequacy (over 0.82) and meaning maintenance (over 0.85). As in the case of the IWSLT 2004, ROUGE variants obtain very competitive results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IQMT for IWSLT 2005", "sec_num": "6." }, { "text": "However, for fluency, a significant drop is observed. The levels of correlation range from 0.56 to 0.85, the highest value being achieved by BLEU.n4. Although most metrics correlate better with fluency inside QARLA, metrics such as BLEU.n4, GTM.e2, GTM.e3 or ROUGE.n4, which reward longer matches, exhibit a substantial decrease. We suspect that our framework is not well suited to measuring fluency over translations that are so short (6 words on average). In fact, we question whether it makes sense to do so. By working on very short translations we are practically forcing candidate translations to match exactly one of the references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IQMT for IWSLT 2005", "sec_num": "6." }, { "text": "Finally, we tried some metric combinations. Again, due to time constraints, we performed a clustering, obtaining similar clusters to those derived from the IWSLT 2004 data. We arbitrarily explored some combinations by selecting the six most promising metrics. For adequacy and meaning maintenance we explored the 63 combinations determined by the set {BLEU.n1, GTM.e1, METEOR.wn2, NIST.n1, ROUGE.n1, 1-PER}. For fluency we explored the 63 combinations in the set {BLEU.n4, GTM.e2, METEOR.exact, NIST.n5, ROUGE.n4, 1-WER}. Table 5 shows Pearson correlation values with respect to adequacy, fluency and meaning maintenance, for the best combinations. Consistent with the results on the IWSLT 2004 data, no significant improvements are observed when combining different metrics.", "cite_spans": [], "ref_spans": [ { "start": 522, "end": 529, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "IQMT for IWSLT 2005", "sec_num": "6." }, { "text": "The most important conclusion of this work is that most individual metrics improve when they are applied inside the QARLA framework. The reason for that improvement is that IQ takes as reference the similarities between models, normalising the scale of the metric with respect to the distribution of the model set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "We observed that the improvements obtained in the case of the IWSLT 2004 are more significant than in the case of the IWSLT 2005. We believe that the average sentence length is a key factor to explain this fact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "Moreover, one of the motivations for our work was to study how to improve MT evaluation by combining different metrics. {..., NIST.n1, 1-PER} 0.9766 Table 5 : Adequacy, Fluency and Meaning Maintenance correlation coefficients for best combinations of automatic evaluation metrics inside the QARLA Framework, for the IWSLT'05 CE Supplied Data track.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "7."
}, { "text": "The IQMT package is publically available, released under the GNU Lesser General Public License (LGPL) of the Free Software Foundation, and may be freely downloaded at http://www.lsi.upc.edu/\u02dcnlp/IQMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used mteval-kit-v10/mteval-v11b.pl for BLEU calculation.3 We used mteval-kit-v10/mteval-v11b.pl for NIST calculation.4 We used GTM version 1.2.5 We used METEOR version 0.4.3.6 We used ROUGE version 1.5.5. Options are \"-z SPL -2 -1 -U -m -r 1000 -n 4 -w 1.2 -c 95 -d\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research has been funded by the Spanish Ministry of Science and Technology, projects R2D2 (TIC-2003-7180) and ALIADO (TIC-2002-04447-C02). The TALP Research Center is recognized as a Quality Research Group (2001 SGR 00254) by DURSI, the Research Department of the Catalan Government. Authors are thankful to Michael Gamon, Julio Gonzalo and Llu\u00eds M\u00e0rquez for their valuable comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "8." }, { "text": " Table 3 : Adequacy correlation coefficients for some combinations of automatic evaluation metrics inside the QARLA Framework, using the IQ measure, for the IWSLT'04 CE Supplied Data track. study how to improve MT evaluation by combining different metrics. However, our results show that the correlation with human judgements does not improve when metric combinations are considered. We point some possible reasons. First, we are calculating Pearson correlations with human assessments over only 20 systems, and the levels of correlation achieved by individual metrics are already very high. With so very few samples and these high levels of correlation, one could perhaps argue that improvements are not very significant. This problem could be solved by testing correlation at the sentence level. We would then have thousands of samples. Correlations at this level would also tend to be lower.A second reason for the lack of success in the combination of metrics is that we have used metrics that capture similar features. In future works, new metrics centered in partial features that capture linguistic aspects of translation further than lexical will be included.Furthermore, a main drawback of the IQ measure is that it requires several reference translations, when actually in most cases a single reference is available. Others, like [21] , avoid the use of references by building classifiers that learn to distinguish between human-produced and machine-produced translations. In the short term, we plan to apply IQMT to other working sets so as to study its behaviour when fewer reference translations are available. That would allow us to test also our approach over longer sentences.A final remark, IQMT is not yet properly a framework because it does not allow for meta-evaluation yet. 
Further work involves dealing with the two other QARLA components, namely KING and JACK, which measure the quality of a set of metrics, and the quality of a test set with respect to a set of metrics, respectively.", "cite_spans": [ { "start": 1340, "end": 1344, "text": "[21]", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "QARLA: a Framework for the Evaluation of Automatic Sumarization", "authors": [ { "first": "Enrique", "middle": [], "last": "Amig\u00f3", "suffix": "" }, { "first": "Julio", "middle": [], "last": "Gonzalo", "suffix": "" }, { "first": "Anselmo", "middle": [], "last": "Pe\u00f1as", "suffix": "" }, { "first": "Felisa", "middle": [], "last": "Verdejo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrique Amig\u00f3, Julio Gonzalo, Anselmo Pe\u00f1as, and Felisa Verdejo, \"QARLA: a Framework for the Eval- uation of Automatic Sumarization\", Proceedings of the 43th Annual Meeting of the Association for Computa- tional Linguistics, 2005.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Final Report of the Summer Workshop on Syntax for Statistical Machine Translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Eng", "suffix": "" }, { "first": "Viren", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin and Dragomir Radev, \"Final Report of the Summer Workshop on Syntax for Statistical Ma- chine Translation\", Johns Hopkins University, 2003.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating DUC 2004 with QARLA Framework", "authors": [ { "first": "Enrique", "middle": [], "last": "Amig\u00f3", "suffix": "" }, { "first": "Julio", "middle": [], "last": "Gonzalo", "suffix": "" }, { "first": "Anselmo", "middle": [], "last": "Pe\u00f1as", "suffix": "" }, { "first": "Felisa", "middle": [], "last": "Verdejo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL'05 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrique Amig\u00f3, Julio Gonzalo, Anselmo Pe\u00f1as, and Felisa Verdejo, \"Evaluating DUC 2004 with QARLA Framework\", 
Proceedings of the ACL'05 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, 2005.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Overview of the IWSLT04 Evaluation Campaign", "authors": [ { "first": "Yasuhiro", "middle": [], "last": "Akiba", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" }, { "first": "Hiromi", "middle": [], "last": "Nakaiwa", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yasuhiro Akiba, Marcello Federico, Noriko Kando, Hiromi Nakaiwa, Michael Paul and Jun'ichi Tsujii, \"Overview of the IWSLT04 Evaluation Campaign\", Proceedings of the International Workshop on Spoken Language Translation, 2004.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Syntax-based Language Models for Machine Translation", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2003, "venue": "Proceedings of MT SUMMIT IX", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak, Kevin Knight and Kenji Yamada, \"Syntax-based Language Models for Machine Transla- tion\", Proceedings of MT SUMMIT IX, 2003.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Immediate-Head Parsing for Language Models", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak, \"Immediate-Head Parsing for Lan- guage Models\", Proceedings of ACL, 2001.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Syntax-based Statistical Translation Model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight, \"A Syntax-based Sta- tistical Translation Model\", Proceedings of ACL, 2001.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bleu: a method for automatic evaluation of machine translation, IBM Research Report, RC22176", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" }, { "first": ";", "middle": [ "J" ], "last": "Ibm T", "suffix": "" }, { "first": "", "middle": [], "last": "Watson Research", "suffix": "" }, { "first": "", "middle": [], "last": "Center", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward and Wei- Jing Zhu, \"Bleu: a method for automatic evaluation of machine translation, IBM Research Report, 
RC22176\", IBM T.J. Watson Research Center, 2001.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 2nd Internation Conference on Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington, \"Automatic Evaluation of Machine Translation Quality Using N-gram Co- Occurrence Statistics\", Proc. of the 2nd Internation Conference on Human Language Technology, 2002.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Accelerated DP based Search for Statistical Translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "A", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "H", "middle": [], "last": "Sawaf", "suffix": "" } ], "year": 1997, "venue": "Proceedings of European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann, S. Vogel, H. Ney, A. Zubiaga and H. Sawaf, \"Accelerated DP based Search for Statistical Translation\", Proceedings of European Conference on Speech Communication and Technology, 1997.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Precision and Recall of Machine Translation", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Green", "suffix": "" }, { "first": "Joseph", "middle": [ "P" ], "last": "Turian", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Dan Melamed, Ryan Green and Joseph P. 
Turian, \"Precision and Recall of Machine Translation\", Pro- ceedings of HLT/NAACL, 2003.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie, \"METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments\", Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, 2005.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statics", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and Franz Josef Och, \"Automatic Eval- uation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statics\", Pro- ceedings of ACL, 2004.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Extending the BLEU MT Evaluation Method with Frequency Weightings", "authors": [ { "first": "Bogdan", "middle": [], "last": "Babych", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Hartley", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bogdan Babych and Tony Hartley, \"Extending the BLEU MT Evaluation Method with Frequency Weight- ings\", Proceedings of ACL, 2004.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "WordNet. An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fellbaum, \"WordNet. 
An Electronic Lexical Database\", The MIT Press, 1998.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Syntactic Features for Evaluation of Machine Translation", "authors": [ { "first": "Ding", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ding Liu and Daniel Gildea, \"Syntactic Features for Evaluation of Machine Translation\", Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, 2005.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A learning approach to improving sentence-level MT evaluation", "authors": [ { "first": "Alex", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Kulesza and Stuart M. Shieber, \"A learning ap- proach to improving sentence-level MT evaluation\", Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation, 2004.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Linguistic Data Annotation Specification: Assessment of Fluency and Adequacy in Chinese-English Translations Revision 1.0", "authors": [], "year": null, "venue": "Linguistic Data Consortium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LDC, \"Linguistic Data Annotation Specification: Assessment of Fluency and Adequacy in Chinese- English Translations Revision 1.0\", Linguistic Data Consortium. http://www.ldc.upenn.edu/Projects/-", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "TIDES/Translation/TransAssess02.pdf", "authors": [], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "TIDES/Translation/TransAssess02.pdf, 2002.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Some Methods for classification and Analysis of Multivariate Observations", "authors": [ { "first": "J", "middle": [ "B" ], "last": "Macqueen", "suffix": "" } ], "year": 1967, "venue": "Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability", "volume": "1", "issue": "", "pages": "281--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. B. 
MacQueen, \"Some Methods for classification and Analysis of Multivariate Observations\", Proceedings of 5th Berkeley Symposium on Mathematical Statis- tics and Probability, Berkeley, University of California Press, 1:281-297, 1967.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Overview of the IWSLT 2005 Evaluation Campaign", "authors": [ { "first": "Matthias", "middle": [], "last": "Eck", "suffix": "" }, { "first": "Chiori", "middle": [], "last": "Hori", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Eck and Chiori Hori, \"Overview of the IWSLT 2005 Evaluation Campaign\", Proceedings of the International Workshop on Spoken Language Translation, 2005.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentence-Level MT evaluation without reference translations: beyond language modeling", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Aue", "suffix": "" }, { "first": "Martine", "middle": [], "last": "Smets", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EAMT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon, Anthony Aue and Martine Smets, \"Sentence-Level MT evaluation without reference translations: beyond language modeling\", Proceedings of EAMT, 2005.", "links": null } }, "ref_entries": { "FIGREF1": { "text": ", and 4 other variants at the 4-gram level, always with stemming: ROUGE-L longest common subsequence (LCS). ROUGE-S* skip bigrams with no max-gap-length. ROUGE-SU* skip bigrams with no max-gap-length, including unigrams. ROUGE-W weighted longest common subsequence (WLCS) with weighting factor '", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "content": "
the similarity from a to the nearest model in R is maximum. For this, we consider the distance:
sim_x(a, R) = max_{r in R} x(a, r)
Prob((r', r'') in R x R : sim_x(a, R) >= x(r', r''))
", "num": null, "html": null, "text": "That is, models are not grouped separate from incorrect translations. This means that current MT evaluation metrics do not satisfy the QUEEN conjectures. Therefore, we defined a new metric IQ (Innovated QUEEN) derived from QUEEN. The first change is to assume that a good translation should be similar to just one of the models, and not necessarily to all models. Formally, if an automatic translation \u00a6 i s equal to one of the models, then", "type_str": "table" }, "TABREF2": { "content": "
Metric | Adequacy | Fluency | Adequacy_QUEEN | Fluency_QUEEN | Adequacy_IQ | Fluency_IQ
BLEU.n1 | 0.7623 | 0.6380 | 0.6781 | 0.5933 | 0.0529 | 0.0802
BLEU.n2 | 0.8442 | 0.8002 | 0.8770 | 0.8215 | 0.2567 | 0.2788
BLEU.n3 | 0.8449 | 0.8326 | 0.8499 | 0.8212 | 0.3923 | 0.4064
BLEU.n4 | 0.7407 | 0.8600 | 0.8569 | 0.8063 | 0.3156 | 0.3434
GTM.e1 | 0.5136 | 0.5214 | 0.6204 | 0.5452 | 0.8293 | 0.7715
GTM.e2 | 0.6784 | 0.6566 | 0.6687 | 0.6140 | 0.8015 | 0.8126
GTM.e3 | 0.7022 | 0.6906 | 0.6590 | 0.6094 | 0.7775 | 0.8213
METEOR.exact | 0.8899 | 0.7463 | 0.7836 | 0.6888 | 0.9358 | 0.8593
METEOR.porter | 0.8837 | 0.7265 | 0.7800 | 0.6706 | 0.9494 | 0.8599
METEOR.wn1 | 0.8784 | 0.7147 | 0.7886 | 0.6709 | 0.9420 | 0.8554
METEOR.wn2 | 0.8725 | 0.6923 | 0.7784 | 0.6513 | 0.8942 | 0.8094
NIST.n1 | 0.4077 | 0.2323 | 0.7837 | 0.6150 | 0.5124 | 0.4845
NIST.n2 | 0.5245 | 0.3629 | 0.8385 | 0.6934 | 0.7945 | 0.7063
NIST.n3 | 0.5745 | 0.4222 | 0.8421 | 0.7000 | 0.7277 | 0.6952
NIST.n4 | 0.5965 | 0.4497 | 0.8438 | 0.7030 | 0.8466 | 0.8136
NIST.n5 | 0.6820 | 0.5950 | 0.8440 | 0.7036 | 0.8768 | 0.8650
ROUGE.n1 | 0.8582 | 0.6590 | 0.9028 | 0.7303 | 0.9695 | 0.8876
ROUGE.n2 | 0.9287 | 0.8435 | 0.9238 | 0.8421 | 0.9673 | 0.9142
ROUGE.n3 | 0.9190 | 0.8646 | 0.9076 | 0.8630 | 0.9588 | 0.9180
ROUGE.n4 | 0.9010 | 0.8527 | 0.8756 | 0.8156 | 0.9492 | 0.9008
ROUGE-L | 0.9153 | 0.7644 | 0.9325 | 0.8112 | 0.9713 | 0.8979
ROUGE-S* | 0.9376 | 0.8164 | 0.9357 | 0.8119 | 0.9663 | 0.9062
ROUGE-SU* | 0.9328 | 0.8114 | 0.9317 | 0.8096 | 0.9656 | 0.9064
ROUGE-W | 0.9219 | 0.7737 | 0.8918 | 0.7899 | 0.9234 | 0.8503
mPER/1-PER | -0.5779 | -0.6010 | 0.4212 | 0.3662 | 0.0242 | 0.0421
mWER/1-WER | -0.6427 | -0.7214 | 0.4507 | 0.4209 | 0.0880 | 0.0770
", "num": null, "html": null, "text": "", "type_str": "table" }, "TABREF4": { "content": "
Metric Combination | Correlation
Best {..., NIST.n1} | 0.9826
Best {..., BLEU.n4} | 0.8549
", "num": null, "html": null, "text": "Fluency correlation coefficients for some combinations of automatic evaluation metrics inside the QARLA Framework, using the IQ measure, for the IWSLT'04 CE Supplied Data track.", "type_str": "table" } } } }