{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:20:32.778692Z" }, "title": "Towards Fairer Evaluations of Commercial MT Systems on Basic Travel Expressions Corpora", "authors": [ { "first": "Herv\u00e9", "middle": [], "last": "Blanchon", "suffix": "", "affiliation": { "laboratory": "Laboratoire CLIPS BP 53", "institution": "", "location": { "postCode": "38041", "settlement": "Grenoble Cedex 9", "country": "France" } }, "email": "" }, { "first": "Christian", "middle": [], "last": "Boitet", "suffix": "", "affiliation": { "laboratory": "Laboratoire CLIPS BP 53", "institution": "", "location": { "postCode": "38041", "settlement": "Grenoble Cedex 9", "country": "France" } }, "email": "" }, { "first": "Francis", "middle": [], "last": "Brunet-Manquat", "suffix": "", "affiliation": { "laboratory": "Laboratoire CLIPS BP 53", "institution": "", "location": { "postCode": "38041", "settlement": "Grenoble Cedex 9", "country": "France" } }, "email": "" }, { "first": "Mutsuko", "middle": [], "last": "Tomokiyo", "suffix": "", "affiliation": { "laboratory": "Laboratoire CLIPS BP 53", "institution": "", "location": { "postCode": "38041", "settlement": "Grenoble Cedex 9", "country": "France" } }, "email": "" }, { "first": "Agn\u00e8s", "middle": [], "last": "Hamon", "suffix": "", "affiliation": { "laboratory": "Laboratoire de Statistique (Universit\u00e9 Haute Bretagne) Place du recteur H. Le Moal", "institution": "", "location": { "postCode": "24307 35043", "settlement": "Rennes Cedex", "region": "CS", "country": "France" } }, "email": "agn\u00e8s.hamon@uhb.fr" }, { "first": "Vo", "middle": [], "last": "Trung", "suffix": "", "affiliation": { "laboratory": "Laboratoire CLIPS BP 53", "institution": "", "location": { "postCode": "38041", "settlement": "Grenoble Cedex 9", "country": "France" } }, "email": "" }, { "first": "Youcef", "middle": [], "last": "Bey", "suffix": "", "affiliation": { "laboratory": "Laboratoire CLIPS BP 53", "institution": "", "location": { "postCode": "38041", "settlement": "Grenoble Cedex 9", "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We compare the performance of several SYSTRAN systems on the BTEC corpus. Two language pairs: Chinese to English and Japanese to English are used. Whenever it is possible the system will be used \"off the shelf\" and then tuned. The first system we use is freely available on the web. The second system, SYSTRAN Premium, is commercial. It is used in two ways: (1) choosing and ordering available original dictionaries and setting parameters, (2) same + user dictionaries. As far as the evaluation is concerned, we competed in the unlimited data track.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "We compare the performance of several SYSTRAN systems on the BTEC corpus. Two language pairs: Chinese to English and Japanese to English are used. Whenever it is possible the system will be used \"off the shelf\" and then tuned. The first system we use is freely available on the web. The second system, SYSTRAN Premium, is commercial. It is used in two ways: (1) choosing and ordering available original dictionaries and setting parameters, (2) same + user dictionaries. 
As far as the evaluation is concerned, we competed in the unlimited data track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We first give our motivations for participating in this campaign with a commercial system. Then we briefly describe the system tested (SYSTRAN \u00ae 5.0) and the ways it can be parameterized. The bulk of this paper describes the evaluation procedure. We finish by presenting and analyzing the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "MT evaluation has been a hot topic since 1960 or so. The literature on evaluation may even be larger than that on MT techniques proper. MT evaluation may serve several goals:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rationale", "sec_num": "2." }, { "text": "- help buyers choose the MT (or CAT) system best suited to their needs; - help funders decide which technology to support; - help developers measure various aspects of their systems, and measure progress. The MT evaluation campaign organized by the C-STAR III consortium falls in the latter category. Its aim is to measure the \"quality\" of various MT systems developed for speech-to-speech translation when applied to the BTEC corpus [7]. Another goal is to compare the MT systems developed by the C-STAR partners not only among themselves, but also with other systems, notably commercial systems. In the past, we often got the impression that, in similar campaigns, the commercial systems used as \"baselines\" were tested in quite biased ways. From what was reported, the experimenters simply submitted the input texts to free MT web servers and evaluated the results. But that method is quite unfair, which makes all the conclusions scientifically invalid. For example, long ago, the CANDIDE system, trained intensively on the Hansard corpus, was compared with an off-the-shelf version of SYSTRAN without any tuning. SYSTRAN clearly won, but the margin might have been far bigger (or perhaps not; this should have been studied!) if SYSTRAN had been tuned to this totally unseen corpus, at the level of dictionaries, of course, but perhaps also of grammars. Another example is given by MSR [5], who compared their French-English system, highly tuned to their documents (actually, the transfer component was 100% induced from 150,000 pairs of sentences and their associated \"logical forms\", or deep syntactic trees), with SYSTRAN, this time slightly tuned by giving priority to the SYSTRAN dictionaries containing computer-related terms 1 . However, they apparently did not invest the time to produce a user dictionary containing the Microsoft computer-science terminology. Technical terminology varies a lot from firm to firm, and even from product to product. What, then, is the value of the conclusion that their system was (slightly) better than SYSTRAN? And when they tried to do the same comparison on the Hansard, SYSTRAN (\"general\") won. As members of C-STAR III not engaged in developing J-E or C-E systems (although we worked on prototypes in the 80's), we felt it was interesting to take part in this evaluation campaign to establish a \"most faithful baseline\" for a commercial system. Which commercial system(s) to use? 
We chose SYSTRAN because:", "cite_spans": [ { "start": 428, "end": 431, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 1373, "end": 1376, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 1735, "end": 1736, "text": "1", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Rationale", "sec_num": "2." }, { "text": "- it offers the two pairs C-E and J-E; - these pairs have recently been slightly improved (although more work has been done on E-C and E-J); - SYSTRAN agreed to give us free access to the latest version (v5) in its Premium packaging (Windows interface, with many tunable parameters, and the possibility to create a user dictionary); - it can be considered a kind of \"medium\" baseline when compared with the other commercial MT systems for C-E and J-E (some are far worse, and some far better, e.g. ATLAS-II for E-J and ALT/JE for J-E). There is another worry with the current evaluation campaigns: objective evaluation is performed using \"reference\" human translations, but the measures, based on n-gram co-occurrence counts, correlate only (very) weakly with human judgment [9], and human judgment is too often not task-oriented. As a result:", "cite_spans": [ { "start": 775, "end": 778, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Rationale", "sec_num": "2." }, { "text": "- the evaluations produce tables of figures with no decisive, clear interpretation of intrinsic quality (relative to some precise goal); - real translation results are never shown side by side for subjective evaluation by the readers themselves; - no task-oriented measure is computed. To compute such a measure, one should measure the time it takes a human to produce a \"reference translation\" from an MT output, as done by [8]. As a byproduct, one also gets new reference translations more cheaply than with usual human translation: on 510 sentences of the BTEC, the second author spent about 12 min per page (of 250 words) using the SYSTRAN English-French output, while 3 colleagues each spent 59 min per page using no machine help. If the goal is to produce good translations, the intended use of the MT system is to help human translators, and that measure is perfect. If the goal is to help readers understand text in a foreign language, it is also a very good indicator, provided the human \"judge\" is asked not to look at the source text before having really tried (hard) to understand the MT output. In the framework of this campaign, we worked only on the first aspect (how to test a commercial system as faithfully as possible) and not on the second (how to improve the evaluation itself), although we produced parallel presentations of the source (J/C), reference (E), and MT outputs for direct human inspection and subjective, global evaluation. Let us now describe in more detail the SYSTRAN systems used, before moving on to the evaluation protocol and its results.", "cite_spans": [ { "start": 424, "end": 427, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Rationale", "sec_num": "2." }, { "text": "The SYSTRAN systems involved are: \u2022 SYSTRAN 5.0, freely available on the web; \u2022 SYSTRAN 5.0, tuned by some parameter settings and with a user dictionary. These systems use their own linguistic resources, so we took part in the unlimited data track of the evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Version and usage", "sec_num": "3.1." 
}, { "text": "\u2022 Japanese to English \u2022 Chinese to English Those are by far not the best SYSTRAN pairs. They have been slightly improved from earlier ones in the context of a side project with CISCO, the main project concerning English to Chinese, Japanese, and Kor\u00e9an. However, using Systran output to help human translation still gives a clear productivity increase, as a sizable pat of the translations need 1 or 0 changes only. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language pairs considered:", "sec_num": "3.2." }, { "text": "The morphosyntactic analysis module examines each sentence in the text input, noting all uncertainties and errors. This examination allows for reanalysis and decision-making on alternate translations in later processing. The program flow and basic algorithms for the syntactic analysis module is essentially the same for all systems sharing the same source language, and the system design and architecture are the same for all language pairs. However, in the case of lexical and syntactic ambiguities, decisions are often taken with respect to the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source language analysis", "sec_num": "3.3.1." }, { "text": "It is the only module that is unique to each language pair. It restructures the syntactic structure (a kind of chart) as necessary, and selects the correct target lexical equivalents of identified words and expressions. Regardless of the fact that restructuring and selection are different, the basic architecture and strategy are similar for all language pairs. That architecture is a kind of \"descending transfer\" [1] , because the only independent phase in generation is the morphological generation (there are actually very few real \"horizontal\" transfer systems).", "cite_spans": [ { "start": 416, "end": 419, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Language pair transfer", "sec_num": "3.3.2." }, { "text": "The module makes all necessary assignments of case, tense, number, etc. according to the rules of the target language in order to generate the target language output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target language synthesis", "sec_num": "3.3.3." }, { "text": "Two dictionaries are used: a stem dictionary and an expression dictionary. The stem dictionary contains terminology and base forms. An expression dictionary contains phrases and conditional expressions. A dictionary manager tool provides a mean for improving translation results through the formation of multilingual dictionaries. Multilingual dictionaries are user-created collections of subject-specific terms that are analyzed prior to being integrated directly into the translation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionaries", "sec_num": "3.3.4." }, { "text": "The dictionaries of the SYSTRAN Japanese to English and Chinese to English were updated in the following way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation protocol 4.1. Tuning SYSTRAN", "sec_num": "4." }, { "text": "We choose two original dictionaries of the SYSTRAN premium 5.0: the business and colloquial language dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Choosing the SYSTRAN dictionaries available", "sec_num": "4.1.1." 
}, { "text": "Dictionary update for the Chinese to English system SYSTRAN system with original dictionaries found 178 unknown words in the Chinese training corpus. So, we created a Chinese user dictionary containing these words and their English translation. Furthermore, the SYSTRAN system associated with this user dictionary found 4 unknown words in the test corpus. These words were added to the user dictionary. This final user dictionary is used to improve the SYSTRAN premium 5.0 system afterward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.1.2.", "sec_num": null }, { "text": "We also create a Japanese user dictionary with the same method. We found 304 unknown words in the Japanese training corpus and 13 unknown words in the Japanese test corpus. The new Japanese user dictionary is also used to improve the SYSTRAN premium 5.0 system afterward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary update for the Japanese to English system", "sec_num": "4.1.3." }, { "text": "Subjective evaluation was conducted using the NIST protocol 2 . Both fluency and adequacy were evaluated with a set of three judges. For the translation of each sentence, judges make the fluency judgment before the adequacy judgment. Fluency refers to the degree to which the target is well formed according to the Where English translations retain source language characters or words, judges are instructed to give a score between \"1: Incomprehensible\" and \"3: Non-native English\" depending upon the degree to which the untranslated characters, among the other factors, affect the fluency of the translation.", "cite_spans": [ { "start": 60, "end": 61, "text": "2", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Subjective evaluation", "sec_num": "4.2.1." }, { "text": "Having made the fluency judgment for a translation of a segment, the judge is presented with one of four reference translations. Comparing the target translation against the reference translation, judges determine whether the translation is adequate. Adequacy refers to the degree to which information present in the original is also communicated in the translation. Thus for a d e q u a c y judgments, the reference translation will serve as a proxy for the original source language text. An adequacy judgment is one of the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective evaluation", "sec_num": "4.2.1." }, { "text": "How much of the meaning expressed in the goldstandard translation is also expressed in the target translation? 5 All 4 Most 3 Much 2 Little 1 Non", "cite_spans": [], "ref_spans": [ { "start": 111, "end": 149, "text": "5 All 4 Most 3 Much 2 Little 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Subjective evaluation", "sec_num": "4.2.1." }, { "text": "Where English translations retain Chinese and or Japanese characters from the original sentences, judges are instructed to give a score between \"1: None\" and \"4: Most\" depending upon the degree to which the un-translated characters, among the other factors, affect the adequacy of the translation. A simple statistical approach to quantify inter-judge agreement and concordance for the three pairs of judges is to compute the Cohen Kappa and Gamma coefficients [6] . The overall agreement between the three judges is assessed with an extension of the Kappa. 
}, { "text": "Five automatic scoring techniques have been used: BLEU, NIST, WER, PER, and GTM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective evaluation", "sec_num": "4.2.2." }, { "text": "One first computes a modified n-gram precision $p_n$ ($1 \le n \le N$): $p_n = \frac{\sum_{ngram \in c} Count_{clip}(ngram)}{\sum_{ngram \in c} Count(ngram)}$, where $Count_{clip}(ngram)$ is the maximum number of n-grams co-occurring in a candidate translation and a reference translation, and $Count(ngram)$ is the number of n-grams in the candidate translation. To prevent very short translations from maximizing their precision scores, a brevity penalty, BP, is used: $BP = 1$ if $|c| > |r|$, and $BP = e^{1 - |r|/|c|}$ otherwise. Here $|c|$ is the length of the candidate translation and $|r|$ is the length of the reference translation. Then: $BLEU = BP \cdot \exp(\sum_{n=1}^{N} w_n \log p_n)$. The weighting factor, $w_n$, is set at $1/N$. [2] noted several limits of BLEU. First, the geometric mean of co-occurrences over n induces a counterproductive variance, due to low co-occurrence counts for the larger values of n; in other words, the lack of a long n-gram match has a strong impact on the score. Second, it may be better to weight more heavily the more informative n-grams, i.e. those that occur less frequently. Third, small variations in translation length have a sizable impact on the score; this impact should be minimized with a different brevity penalty.", "cite_spans": [ { "start": 43, "end": 46, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "BLEU", "sec_num": null }, { "text": "The NIST score implements these proposals. The information weights are computed as $Info(w_1 \ldots w_n) = \log_2 \left( \frac{count(w_1 \ldots w_{n-1})}{count(w_1 \ldots w_n)} \right)$, so that rarer (more informative) n-grams weigh more. A new brevity penalty is introduced to minimize the impact on the score of small variations in the length of the translation. The NIST score is then computed as $Score = \sum_{n=1}^{N} \frac{\sum_{co\text{-}occurring\, w_1 \ldots w_n} Info(w_1 \ldots w_n)}{\sum_{w_1 \ldots w_n \in candidate} 1} \cdot \exp\{\beta \log^2[\min(L_{sys}/\bar{L}_{ref}, 1)]\}$, where $\beta$ is chosen so that the brevity-penalty factor is 0.5 when the number of words in the system translation is 2/3 of the average number of words in the reference translations, $L_{sys}$ is the number of words in the translation being scored, and $\bar{L}_{ref}$ is the average number of words in a reference translation. A minimal sketch of the BLEU computation is given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NIST", "sec_num": null
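}, { "text": "Here is a minimal Python sketch of the BLEU computation just described, for a single candidate-reference pair; the official metric pools the counts over the whole corpus and handles multiple references, and the NIST variant replaces the uniform weights $w_n$ by the information weights given above.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    '''Sentence-level BLEU with uniform weights w_n = 1/max_n.'''
    log_p = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clipped counts: each candidate n-gram is credited at most
        # as many times as it occurs in the reference.
        clipped = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        if clipped == 0:
            return 0.0   # the geometric mean collapses to zero
        log_p += math.log(clipped / total) / max_n
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)   # brevity penalty
    return bp * math.exp(log_p)

For example, bleu('it is a small room'.split(), 'it is a small room'.split()) returns 1.0, while a candidate with no matching 4-gram scores exactly 0, which illustrates the variance problem quoted above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NIST", "sec_num": null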
}, { "text": "The WER is the standard technique used to evaluate speech recognition modules, based on the Levenshtein edit distance [4]. [3] proposed a Levenshtein-like measure that is independent from word positions (PER, Position-independent Error Rate): a sentence is seen as a bag of words, and the distance between a sentence and any of its permutations is null 3 .", "cite_spans": [ { "start": 116, "end": 119, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 122, "end": 125, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "WER/PER", "sec_num": null }, { "text": "At the sentence level, the GTM [9] score is the harmonic mean (F-measure) of newly proposed precision and recall measures based on a maximum match size.", "cite_spans": [ { "start": 31, "end": 34, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "GTM", "sec_num": null }, { "text": "- case insensitive (lower-case only); - no punctuation marks (remove '.' ',' '?' '!' '\"'); periods that are part of a word should not be removed, e.g. abbreviations like \"mr.\" and \"a.m.\" remain as they occur in the corpus data; - no word compounds (substitute hyphens '-' with a blank space); - spelling-out of numerals. A sketch of this normalization, together with the WER and PER measures, is given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Parameters:", "sec_num": "4.3."
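}, { "text": "The normalization above and the WER/PER measures can be sketched as follows in Python; the abbreviation list and the PER formulation are assumptions (PER has several close variants in the literature), and the spelling-out of numerals is omitted.

import re
from collections import Counter

ABBREVIATIONS = {'mr.', 'mrs.', 'a.m.', 'p.m.'}   # illustrative list only

def normalize(sentence):
    '''Lower-case, drop , ? ! and double quotes, turn hyphens into
    spaces, and drop periods except inside known abbreviations.'''
    s = sentence.lower().replace('-', ' ')
    s = re.sub(r'[,?!\x22]', '', s)   # \x22 is the double-quote character
    tokens = []
    for tok in s.split():
        if tok.endswith('.') and tok not in ABBREVIATIONS:
            tok = tok.rstrip('.')
        if tok:
            tokens.append(tok)
    return tokens

def wer(candidate, reference):
    '''Word Error Rate: Levenshtein distance over words / |reference|.'''
    d = list(range(len(reference) + 1))
    for i, c in enumerate(candidate, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(reference, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (c != r))  # substitution
    return d[-1] / len(reference)

def per(candidate, reference):
    '''Position-independent Error Rate (one common formulation):
    bag-of-words comparison, so any permutation of the reference
    has an error rate of zero.'''
    matches = sum((Counter(candidate) & Counter(reference)).values())
    return (max(len(candidate), len(reference)) - matches) / len(reference)

For instance, wer('is the opera house where'.split(), 'where is the opera house'.split()) is positive, while per() on the same pair is 0, which is why PER is not a distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Parameters:", "sec_num": null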
}, { "text": "We submitted three runs for the Chinese-to-English language pair. These runs were produced by SYSTRAN:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese to English", "sec_num": "5.1.1." }, { "text": "C_1: SYSTRAN web 5.0; C_2: SYSTRAN Premium 5.0 with original dictionaries; C_3: SYSTRAN Premium 5.0 with original and user dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese to English", "sec_num": "5.1.1." }, { "text": "We submitted four runs for the Japanese-to-English language pair. Three runs (J_1, J_2, and J_3, defined as for C-E) were produced by SYSTRAN. The last run (J_4) was made of the translations produced for the J_3 run, revised by a human translator instructed to produce an adequate translation out of the SYSTRAN English translation while minimizing the changes. Out of the 500 utterances, 50 (10%) were left unchanged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese to English", "sec_num": "5.1.2." }, { "text": "J_1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese to English", "sec_num": "5.1.2." }, { "text": "We were expecting far better results with the revised translations. The results confirmed our intuition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective evaluation results for J_4", "sec_num": "5.4." }, { "text": "When comparing with the previous results, the J_4 run is ranked only third (Table 3). That may seem rather low for human-revised translations!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective evaluation results for J_4", "sec_num": null }, { "text": "9 systems took part in the competitive evaluation. SYSTRAN is the C_1 system, with original and user dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective and objective competitive evaluation results for C-E", "sec_num": "5.5." }, { "text": "4 systems took part in the competitive evaluation scheme. SYSTRAN is the J_3 system, with original and user dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective and objective competitive evaluation results for J-E", "sec_num": "5.6." }, { "text": "Surprisingly, the web C-E version of SYSTRAN [C_1] performed better than SYSTRAN Premium 5.0 [C_2] (Table 1). This is because the Premium version was frozen in May-June, while the web version was updated later and keeps changing. With this evaluation, its rank is 8th/9 (Table 4 and Table 5).", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Table 1)", "ref_id": "TABREF3" }, { "start": 225, "end": 233, "text": "(Table 4", "ref_id": "TABREF6" }, { "start": 238, "end": 245, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results on C-E", "sec_num": "6.1." }, { "text": "The ranks of runs J_3, J_2, and J_1 are in agreement with the intuition that the better the system is tuned, the better the results (Table 2). With this evaluation, the rank of J_3 is 4th/4 (Table 6 and Table 7). Surprisingly, the (perfect) human-revised translations (J_4, Table 8) are ranked only 3rd and do not reach the first position! This proves that the measure or the evaluation method is flawed.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 139, "text": "Table 2)", "ref_id": "TABREF4" }, { "start": 193, "end": 200, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 205, "end": 213, "text": "Table 7)", "ref_id": "TABREF9" }, { "start": 272, "end": 280, "text": "Table 8", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Results on J-E", "sec_num": "6.2." }, { "text": "All the Gamma coefficients (Table 9, third column) are greater than 0.6, indicating no disagreement in the ordering of ratings. The pairwise Kappa values (last column) are between 0.19 and 0.38, indicating a moderate agreement between the raters; the overall Kappa values are 0.309 for the evaluation of Chinese-English adequacy and 0.318 for that of Japanese-English, which also indicates a moderate agreement between the 3 raters. Before computing agreement, we informally checked the assumption of identical marginal ratings among the three raters (not all the results are reported here). For the two evaluations concerning fluency, it appears that rater number 2 used the score 2 (Disfluent English) in about 46% of the ratings, while the two others did not show such a behavior. These two findings on fluency imply that it is not possible to interpret the value of the overall Kappa. For the two evaluations about adequacy, there were no notable differences among the marginal ratings. This first evaluation of agreement points out the concordance between the judges on the ordering, but only a moderate agreement on the ratings.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 9", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Inter-judge agreement in the subjective evaluation for both C-E and J-E language pairs.", "sec_num": "6.3." }, { "text": "All the Japanese utterances can be considered as polished transcriptions of oral dialogues in the tourism domain. The language level is rather polite. When the utterance is euphemistic ( ), the particle is always translated by \"but\". Some of the utterances do not make sense without any context (e.g. \u279f \"it cuts\"?). When the first-person subject is omitted in Japanese, it is always translated as \"it\" ( \u279f \"It gets off here.\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of SYSTRAN-produced translations 4", "sec_num": "6.4." }, { "text": "The test set contains a lot of interrogative utterances. In the translations, the interrogative pronoun or adverb is always shifted to the end of the translation; the standard English word order is not respected (e.g. \u279f \"Is the opera house where?\"). A lot of idiomatic expressions of spoken, daily-life Japanese are not present in the SYSTRAN dictionaries (e.g. \u279f \"How doing.\", \u279f \"It does.\", \u279f \"Way if.\"). Requests or invitations are not always well translated (e.g. \u279f \"It is to like to order.\", \u279f \"It will go together.\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of SYSTRAN-produced translations 4", "sec_num": "6.4."
}, { "text": "Lexicalized Japanese politeness is correctly analyzed (e.g. \u279f \"Without cutting that way, please wait.\"). When the valencies of the corresponding Japanese and English verbs differ, the translation is almost always wrong (e.g. \u279f \"Chill does.\"). Finally, the aspect of the Japanese predicate is not correctly rendered in English (e.g. \u279f \"The air ticket was forgotten in the house.\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of SYSTRAN-produced translations 4", "sec_num": "6.4." }, { "text": "Adding entries for unknown words to the SYSTRAN dictionaries was not sufficient to raise the performance of the system to a score comparable with that of the other competing systems. However, the (perfect) translations obtained by post-editing SYSTRAN outputs were ranked only 3rd, which indicates that the evaluation method or the experimental methodology used in the campaign may be flawed. To investigate this, it will be absolutely necessary in future evaluation campaigns to produce side-by-side presentations of the various results, in order to let a human compare them. Finally, we have observed, as always, that the rank of a system is not consistent across the several objective evaluation techniques. As far as subjective evaluation is concerned, we have shown that agreement between the judges is not good. With hindsight, it would appear that the definitions of adequacy and fluency proposed by NIST are too much geared towards written style 4 . In the context of speech-to-speech translation, some of the severely evaluated utterances can be considered perfectly OK.", "cite_spans": [ { "start": 921, "end": 922, "text": "4", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "This measure is thus not a distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The appendix gives some examples of real data submitted to the evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Original: Is there a sightseeing tour? MT output: Is there a sightseeing tour? Comment: very good translation of an impersonal sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": "9." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Machine Translation", "authors": [ { "first": "C", "middle": [], "last": "Boitet", "suffix": "" } ], "year": 2003, "venue": "Encyclopedia of Computer Science", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boitet, C. (2003) Machine Translation. in Ralston, A., Reilly, E. and Hemmendinger, D. (ed.), Encyclopedia of Computer Science. John Wiley & Sons. 10 p.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. HLT", "volume": "1", "issue": "", "pages": "128--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doddington, G.
(2002) Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics. Proc. HLT 2002. San Diego, California. March 24-27, 2002. vol. 1/1: pp. 128-132 (notebook proceedings).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Novel String-to-String Distance Measure with Application to Machine Translation Evaluation", "authors": [ { "first": "G", "middle": [], "last": "Leusch", "suffix": "" }, { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" } ], "year": 2003, "venue": "Proc. MT Summit IX", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leusch, G., Ueffing, N., et al. (2003) A Novel String-to-String Distance Measure with Application to Machine Translation Evaluation. Proc. MT Summit IX. New Orleans, U.S.A. September 23-27, 2003. 8 p.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Binary codes capable of correcting deletions, insertions and reversals", "authors": [ { "first": "V", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "Soviet Physics Doklady. vol", "volume": "10", "issue": "8", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levenshtein, V. I. (1966) Binary codes capable of correcting deletions, insertions and reversals. in Soviet Physics Doklady. vol. 10(8): pp. 707-710.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Traduction automatique ancr\u00e9e dans l'analyse linguistique", "authors": [ { "first": "J", "middle": [], "last": "Pinkham", "suffix": "" }, { "first": "M", "middle": [], "last": "Smets", "suffix": "" } ], "year": 2002, "venue": "Proc. TALN'02", "volume": "1", "issue": "", "pages": "287--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pinkham, J. and Smets, M. (2002) Traduction automatique ancr\u00e9e dans l'analyse linguistique. Proc. TALN'02. Nancy, France. 24-27 juin 2002. vol. 1/2: pp. 287-296.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Nonparametric Statistics for the Behavioral Sciences", "authors": [ { "first": "S", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N", "middle": [ "J" ], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "400", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, S. and Castellan, N. J. (1988) Nonparametric Statistics for the Behavioral Sciences; 2nd ed. McGraw-Hill. New York. 400 p.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Towards a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversation in the Real World", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2002, "venue": "Proc. LREC-2002", "volume": "1", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takezawa, T., Sumita, E., et al. (2002) Towards a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversation in the Real World. Proc. LREC-2002. Las Palmas, Spain. May 29-31, 2002. vol. 1/3: pp. 147-152.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Quantitative Method for Machine Translation Evaluation", "authors": [ { "first": "J", "middle": [], "last": "Tom\u00e1s", "suffix": "" }, { "first": "J", "middle": [ "\u00c0" ], "last": "Mas", "suffix": "" } ], "year": 2003, "venue": "Proc.
EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1s, J., Mas, J. \u00c0., et al. (2003) A Quantitative Method for Machine Translation Evaluation. Proc. EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable? Budapest, Hungary. April 14, 2003. vol. 1/1: 8 p.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Evaluation of Machine Translation and its Evaluation", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Turian", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2003, "venue": "Proc. MT Summit IX", "volume": "", "issue": "", "pages": "386--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Turian, J. P., Shen, L., et al. (2003) Evaluation of Machine Translation and its Evaluation. Proc. MT Summit IX. New Orleans, U.S.A. September 23-27, 2003. pp. 386-393.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "This is not stated in the paper, but was the answer given to a question.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "SYSTRAN web 5.0 J_2 SYSTRAN Premium 5.0 with original dictionaries J_3 SYSTRAN Premium 5.0 with original and user dictionaries", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "html": null, "text": "(cf. TransAssess02.pdf) A fluent segment is one that is well formed grammatically, contains correct spelling, adheres to common use of terms, titles and names, is intuitively acceptable, and can be sensibly interpreted by a native speaker of English. A fluency judgment is one of the following:", "num": null, "content": "
How do you judge the fluency of this translation? It is:
5Flawless English
4Good English
3Non-native English
2Disfluent English
1Incomprehensible
", "type_str": "table" }, "TABREF2": { "html": null, "text": "", "num": null, "content": "", "type_str": "table" }, "TABREF3": { "html": null, "text": "Objective evaluation results for the CLIPS C-E runs 0.5687 1 5.6476 1 0.5978 1 0.7304 1 J_2 0.1311 2 0.5672 2 5.6096 2 0.6012 2 0.7349 2 J_1 0.0810 3 0.5116 3 4.1935 3 0.7179 3 0.8726 3", "num": null, "content": "
BLEUGMTNISTPERWER
J_3 0.1320 1
", "type_str": "table" }, "TABREF4": { "html": null, "text": "Objective evaluation results for the CLIPS J-E runs", "num": null, "content": "", "type_str": "table" }, "TABREF5": { "html": null, "text": "Objective evaluation results for the CLIPS J_4 run systems took part in the competitive evaluation. SYSTRAN is the C_1 system with original and user dictionaries.", "num": null, "content": "
FluencyAdequacy
CE_8 3.7760 1 3.6620 1
CE_3 3.0360 4 2.9960 6
CE_7 2.9340 6 3.2540 3
CE_5 3.7760 1 3.5260 2
CE_9 3.4000 3 2.8000 8
CE_2 2.6480 8 3.1880 4
CE_6 2.9540 5 2.7840 9
C_12.5700 9 2.9600 7
CE_4 2.7180 7 3.0820 5
", "type_str": "table" }, "TABREF6": { "html": null, "text": "CE_4 0.0798 9 0.3862 9 3.6443 9 0.7650 9 0.8466 9", "num": null, "content": "
Subjective evaluation for the C-E Unlimited runs
submitted by all participants
5.5.2.Objective evaluation
BLEUGMTNISTPERWER
CE_8 0.5249 1 0.7482 1 9.5603 1 0.3198 1 0.3795 1
CE_3 0.3505 3 0.6849 2 7.3691 3 0.4428 4 0.5255 3
CE_7 0.2753 5 0.6669 4 7.5002 2 0.4276 3 0.5313 4
CE_5 0.4409 2 0.6720 3 7.2413 4 0.3930 2 0.4570 2
CE_9 0.3113 4 0.5639 8 5.9217 7 0.5310 7 0.5788 6
CE_2 0.2438 6 0.6119 5 6.1354 5 0.4872 5 0.5941 7
CE_6 0.2430 7 0.6023 6 5.4250 8 0.4998 6 0.5735 5
C_1 0.
", "type_str": "table" }, "TABREF7": { "html": null, "text": "", "num": null, "content": "
Objective evaluation for the C-E Unlimited runs
submitted by all participants
5.6. Subjective and objective competitive evaluation
results for J-E
4 systems took part in the competitive evaluation scheme.
SYSTRAN is the J_2 system with original and user dictionaries.
5.6.1.Subjective evaluation
Fluency Adequacy
JE_1 4.
", "type_str": "table" }, "TABREF8": { "html": null, "text": "", "num": null, "content": "
Subjective evaluation for the J-E Unlimited runs
submitted by all participants
5.6.2.Objective evaluation
BLEUGMTNISTPERWER
JE_1 0.6306 1 0.7967 2 10.7201 2 0.2333 1 0.2631 1
JE_3 0.6190 2 0.8243 1 11.2541 1 0.2492 2 0.3056 2
JE_4 0.3970 3 0.6722 3 7.8893 3 0.4202 3 0.4857 3
J_3 0.
", "type_str": "table" }, "TABREF9": { "html": null, "text": "", "num": null, "content": "
Objective evaluation for the J-E Unlimited runs
submitted by all participants
BLEUGMTNISTPERWER
JE_1 0.6306 1 0.7967 2 10.7201 2 0.2333 1 0.2631 1
JE_3 0.6190 2 0.8243 1 11.2541 1 0.2492 2 0.3056 2
J_4 0.4691 3 0.7777 3 9.9189 3 0.3236 3 0.3711 3
JE_4 0.3970 4 0.6722 4 7.8893 4 0.4202 4 0.4857 4
J_3 0.
", "type_str": "table" }, "TABREF10": { "html": null, "text": "Objective evaluation for the J-E Unlimited runs submitted by all participants and J_4When comparing the previous results, the J_4 run is ranked as third. That may seems not that high for humane revised translations!", "num": null, "content": "
6. Discussion
6.1. Results on C-E
Surprisingly, the web C-E version of SYSTRAN [C_1]
performed better than SYSTRAN Premium 5.0 [C_2] (
", "type_str": "table" }, "TABREF12": { "html": null, "text": "Gamma and Kappa values for the inter-judge agreement evaluationThe pairwise Kappa values (last column) are between 0.19 and 0.38 indicating a moderate agreement between the raters. The overall Kappa values are 0.309 for the evaluation of Chinese-English adequacy and 0.318 for that of Japanese-English. These values indicate a moderate agreement between the 3 raters.", "num": null, "content": "", "type_str": "table" } } } }