{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:23:20.560615Z" }, "title": "Overview of the IWSLT04 Evaluation Campaign", "authors": [ { "first": "Yasuhiro", "middle": [], "last": "Akiba", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "yasuhiro.akiba@atr.jp" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "federico@itc.it" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "kando@nii.ac.jp" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "michael.paul@atr.jp" }, { "first": "", "middle": [], "last": "Nii", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper gives an overview of the evaluation campaign results of the IWSLT04 1 workshop, which is organized by the C-STAR 2 consortium to investigate novel speech translation technologies and their evaluation. The objectives of this workshop is to provide a framework for the applicability validation of existing machine translation evaluation methodologies to evaluate speech translation technologies. The workshop also strives to find new directions in how to improve current methods.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "This paper gives an overview of the evaluation campaign results of the IWSLT04 1 workshop, which is organized by the C-STAR 2 consortium to investigate novel speech translation technologies and their evaluation. The objectives of this workshop is to provide a framework for the applicability validation of existing machine translation evaluation methodologies to evaluate speech translation technologies. The workshop also strives to find new directions in how to improve current methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The drastic increase in demands for the capability to assist trans-lingual conversations, triggered by IT technologies such as the Internet and the expansion of borderless communities such as the increased number of EU countries, has accelerated research activities on speech-to-speech translation technology. Many research projects have been designed to advance this technology, such as VERBMOBIL, C-STAR, NESPOLE!, and BABYLON. These projects, except for C-STAR, have mainly focused on the construction of a prototype system for several language pairs. On the contrary, one of C-STAR's ongoing projects is the joint development of a speech corpus that can handle a common task in multiple languages. As a first result of this activity, a Japanese-English speech corpus comprising tourism-related sentences, originally compiled by ATR, has been translated into the native languages of the C-STAR members. The corpus serves as a primary source for developing and evaluating broadcoverage speech translation technologies [1] . 
This corpus is used in the research and development of multi-lingual speech-to-speech translation systems on a \"common use\" basis.", "cite_spans": [ { "start": 1020, "end": 1023, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For the effective and efficient research and development of speech-to-speech translation systems, the evaluation of current translation quality is very important. In particular, the system developments done by using a common corpus, like C-STAR project, require careful evaluation of the prominent translation techniques. Therefore, there is strong demand for the establishment of evaluation metrics for multilingual speech-to-speech translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For this purpose, the Evaluation Campaign 2004 was carried out using parts of the multilingual corpus (cf. Section 2.1). The task was to translate 500 Chinese or Japanese sentences into English. Depending on the amount of permitted training data, three different language resource conditions (Small Data Track, Additional Data Track, Unrestricted Data Track) were distinguished. The translation quality was measured using both human assessments (subjective evaluation) and automatic scoring techniques (automatic evaluation). The evaluation results of the submitted MT systems are summarized in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The corpus supplied for this year's conference, the reference translations, the output of the participating MT systems, and the evaluation results will be made publicly available after the workshop. These resources can be used as a benchmark for future research on MT systems and MT evaluation methodologies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We hope that IWSLT2004 will become the first step toward establishing standard metrics and a standard corpus for speech-to-speech multi-lingual translation technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The Evaluation Campaign 2004 was carried out using parts of the multilingual corpus jointly developed by the C-STAR partners (cf. Section 2.1). The task was to translate 500 Chinese or Japanese sentences into English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Campaign 2004", "sec_num": "2." }, { "text": "Depending on the amount of permitted training data, three different language resource conditions (Small Data Track, Additional Data Track, Unrestricted Data Track) were distinguished (cf. Section 2.2). Each participant was allowed to register only one MT system in each of the data tracks but could submit multiple translation results (runs) for the same track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Campaign 2004", "sec_num": "2." }, { "text": "In total, 14 institutions took part in this year's workshop, submitting 20 MT systems for the Chinese-to-English (CE) and 8 MT systems for the Japanese-to-English (JE) translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Campaign 2004", "sec_num": "2." 
}, { "text": "The translation quality was measured using both human assessments (subjective evaluation) and automatic scoring techniques (automatic evaluation). The subjective evaluation was carried out by English native speakers. The translation quality was judged based on the fluency and adequacy of the translation (cf. Section 2.4). For the automatic evaluation, five different automatic scoring metrics (BLEU, NIST, WER, PER, GTM) were applied (cf. Section 2.5). All run submissions were evaluated using the automatic evaluation schemes. However, due to high evaluation costs, the subjective evaluation was limited to one run submission per track for each participant, which could be selected by the participants themselves. The results of the submitted MT systems are summarized in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Campaign 2004", "sec_num": "2." }, { "text": "The Basic Travel Expressions Corpus (BTEC\u00a9 ) 3 is a collection of sentences that bilingual travel experts consider useful for people going to or coming from another country and cover utterances for every potential subject in travel situations [2] .", "cite_spans": [ { "start": 45, "end": 46, "text": "3", "ref_id": "BIBREF2" }, { "start": 243, "end": 246, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Spoken Language Corpus", "sec_num": "2.1." }, { "text": "For the Evaluation Campaign 2004, parts of the Chinese, Japanese, and English subsets of the BTEC\u00a9 corpus were used. Details of the supplied IWSLT04 corpus are given in Table 1 , where word token refers to the number of words in the corpus and word type refers to the vocabulary size. The participants were supplied with 20,000 sentence pairs for each translation direction randomly selected from the BTEC\u00a9 corpus, and the training sets for CE and JE were disjunct. A development set of additional 506 sentences, including up to 16 reference translations, was provided to the participants to use for the tuning of their MT systems.", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 176, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Multilingual Spoken Language Corpus", "sec_num": "2.1." }, { "text": "The test set consisted of 500 sentences randomly selected from parts of the BTEC\u00a9 corpus reserved for evaluation purposes. The Chinese data set was created from the original Japanese-English test set, and a consistency check was carried out to guarantee that the Japanese and Chinese source sentences had the same meaning as the English reference translations. Therefore, the test set of the CE and JE translation tasks were identical except for having different sentence 3 Up-to-date information on the BTEC corpus can be found at http://cstar.atr.jp/cstar-corpus orders. Up to 16 reference translations were used for the automatic evaluation of the translation results; the distributions of the number of unique reference translations for each source sentence are summarized in Table 2 . Word segmentations for the Chinese 4 and Japanese 5 subsets are provided when appropriate tools were not available for the participants. However, the participants were permitted to use their own language resources as long as they did not interfere with the language resource conditions of the respective data tracks (cf. Section 2.2). Table 3 gives some examples of the English IWSLT04 corpus. 
", "cite_spans": [ { "start": 472, "end": 473, "text": "3", "ref_id": "BIBREF2" }, { "start": 825, "end": 826, "text": "4", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 780, "end": 787, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1125, "end": 1132, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Multilingual Spoken Language Corpus", "sec_num": "2.1." }, { "text": "Three different language resource conditions were distinguished. The training data of the Small Data track was limited to the supplied corpus only. The Additional Data track was set-up for CE only and limited the use of bilingual resources to the ones listed in Table 4 . These resources are publicly available from the LDC 6 . No restrictions on linguistic resources were imposed for the Unrestricted Data track. Table 5 gives an overview of the kinds of linguistic resources permitted (3 ) or not-permitted ( 4 ) for each data set condition. ", "cite_spans": [ { "start": 324, "end": 325, "text": "6", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 414, "end": 421, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Data Track Conditions", "sec_num": "2.2." }, { "text": "In contrast to the TIDES 7 series of MT evaluations, the objective of the IWSLT workshops is the evaluation of speech translation technologies. In this framework, orthographic features are less relevant and therefore ignored in the evaluation of the MT output results. The evaluation parameters used for automatic and subjective evaluation are as follows: No text pre-processing was carried out, i.e., the participants were responsible for providing their translation output in agreement with the above mentioned evaluation specifications. In the case of a sentence count mismatch or the existence of non-ASCII characters (source language words that were not translated) in the English translation output, the run submission was rejected and no evaluation was carried out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Specifications", "sec_num": "2.3." }, { "text": "Previous competitive MT evaluations, like the series of DARPA MT evaluations in the mid 1990's [3] , evaluated machine translation output with human reference translations on the basis of fluency and adequacy [4] . Fluency refers to the degree to which the translation is well-formed according to the grammar of the target language. Adequacy refers to the degree to which the translation communicates the information present in the reference output. The fluency and adequacy judgments consist of one of the grades listed in Table 6 .", "cite_spans": [ { "start": 95, "end": 98, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 209, "end": 212, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 524, "end": 531, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Subjective Evaluation", "sec_num": "2.4." }, { "text": "This evaluation methodology was adopted for the IWSLT04 workshop. The grading assignments for each grader were split into two parts. First, the MT output was displayed and the grader had to judge the fluency of the translation. In the second step, a reference translation was given 7 http://www.ldc.upenn.edu/Projects/TIDES/tidesmt.html and the grader had to evaluate the adequacy of the translation. 
In order to minimize grading inconsistencies between graders due to contextual misinterpretations of the translations, the situation in which the sentence is uttered (corpus annotations like \"sightseeing\" or \"restaurant\") was provided for the adequacy judgment. Each translation of a single MT system was evaluated by three judges. However, in order to minimize the costs of subjective evaluation, all translation results were pooled, i.e., in the case of identical translations of the same source sentence by multiple MT engines, the translation was graded only once, and the respective rank was assigned to all MT engines with the same output. When the MT engines failed to output any translation for a given input, a score of 0 was assigned to the empty output.", "cite_spans": [ { "start": 282, "end": 283, "text": "7", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation", "sec_num": "2.4." }, { "text": "In total, ten English native speakers were involved in the evaluation task, where each grader had to evaluate the output of all MT systems for a certain number of source sentences as summarized in Table 7 .", "cite_spans": [], "ref_spans": [ { "start": 197, "end": 204, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Subjective Evaluation", "sec_num": "2.4." }, { "text": "In order to validate the reliability of each grader, two additional evaluation data sets were prepared. The first (common) data set was used to compare the grading differences between graders. It consisted of 100 sentences randomly selected from all MT outputs submitted for subjective evaluation. The common data set was evaluated by all human graders. The second (grader-specific) data set was used to validate the self-consistency of each grader, who had to evaluate 100 sentences randomly selected from the subset of MT outputs assigned to him or her a second time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation", "sec_num": "2.4." }, { "text": "Various automatic scoring metrics have been proposed within the MT evaluation community. For the IWSLT04 workshop, we utilized the five metrics summarized in Table 8 .", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "2.5." }, { "text": "Excluding NIST, the scores of all automatic evaluation metrics are in the range of [0,1]. NIST is always positive, and its scoring range does not have a theoretical upper limit. In contrast to mWER and mPER, higher BLEU, NIST, and GTM scores indicate better translations. For the BLEU/NIST Table 8 : Automatic evaluation metrics mWER: Multiple Word Error Rate: the edit distance between the system output and the closest reference translation [5] mPER: Position independent mWER: a variant of mWER that disregards word ordering [6] BLEU: the geometric mean of n-gram precision by the system output with respect to reference translations [7] NIST: a variant of BLEU using the arithmetic mean of weighted n-gram precision values [8] GTM: measures the similarity between texts by using a unigram-based F-measure [9] (v11a) 8 and GTM (v1.2) 9 scores, the software versions indicated in parentheses were used. 
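Of the five metrics in Table 8, mWER is the simplest to state explicitly: a word-level edit distance against the closest of the available reference translations. The sketch below illustrates that definition and is not the official scoring script (which may normalize slightly differently); BLEU, NIST, and GTM are computed by the cited tools.

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    row = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, row[0] = row[0], i
        for j, r in enumerate(ref, 1):
            prev, row[j] = row[j], min(row[j] + 1,        # deletion
                                       row[j - 1] + 1,    # insertion
                                       prev + (h != r))   # substitution or match
    return row[len(ref)]

def mwer(hypothesis, references):
    """Multiple-reference WER: edit distance to the closest reference,
    normalized here by that reference's length."""
    hyp = hypothesis.split()
    return min(edit_distance(hyp, ref.split()) / len(ref.split())
               for ref in references)

print(mwer("thank you for the meal",
           ["thank you for the nice meal", "thanks for the nice meal"]))
# one edit against the closest of the two references -> 1/6
```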
The translation output was compared with up to 16 reference translations that were pre-processed in order to conform with the format required by the evaluation specifications described in Section 2.3.", "cite_spans": [ { "start": 443, "end": 446, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 528, "end": 531, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 637, "end": 640, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 727, "end": 730, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 809, "end": 812, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 820, "end": 821, "text": "8", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "2.5." }, { "text": "Before comparing the data sets, the MT system output and the reference translations were tagged by using a publicly available part-of-speech tagger 10 .", "cite_spans": [ { "start": 148, "end": 150, "text": "10", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "2.5." }, { "text": "All scoring scripts were applied, and the results were sent back automatically to the participants via email using the IWSLT04 evaluation server 11 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "2.5." }, { "text": "In total, 14 institutions took part in the Evaluation Campaign 2004 using a large variety of translation methodologies (cf. Table 9 ). The most frequently entered types were statistical machine translation (SMT) engines (7), but example-based (EBMT) systems (3) and one rule-based (RBMT) translation system were also entered. Moreover, four institutions exploited hybrid MT approaches that combined corpus-based MT, translation memories, and interlingua approaches. For CE, 13 participants submitted 20 MT systems, and ten MT systems were submitted by six participants for the JE translation task (cf. Table 10 ).", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 9", "ref_id": "TABREF7" }, { "start": 602, "end": 610, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation Campaign Participants", "sec_num": "2.6." }, { "text": "In total, 11,134 translations (after pooling) from 28 MT systems had to be evaluated. To summarize the evaluation results, we assigned an ID to each MT system as listed in Table 11 . 8 http://www.nist.gov/speech/tests/mt/resources/scoring.htm 9 http://nlp.cs.nyu.edu/GTM/ 10 http://www.cs.jhu.edu/7 brill/RBT1 14.tar.Z 11 https://www.slt.atr.jp/EVAL/interface ", "cite_spans": [ { "start": 183, "end": 184, "text": "8", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 172, "end": 180, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation Campaign Participants", "sec_num": "2.6." }, { "text": "The schedule of the evaluation campaign is summarized in Table 12 . The training corpus of the supplied IWSLT04 cor- pus was released three months in advance of the official test runs. The participants were able to validate their system performance one week ahead by submitting translation results of the development data set by using the automatic evaluation server.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 65, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation Campaign Schedule", "sec_num": "2.7." 
}, { "text": "The official test run period was limited to three days, during which the automatic scoring result feedback from the evaluation server to the participant via email was switched off in order to avoid any system tuning with the test set data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Campaign Schedule", "sec_num": "2.7." }, { "text": "After the official test run period, the participants still had access to the evaluation server in order to try out new ideas and compare their effectiveness toward their own official test run results (automatic scoring only). In addition, the par- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Campaign Schedule", "sec_num": "2.7." }, { "text": "This section reports results on subjective evaluations with regards to the following points:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "How consistently did each grader evaluate translations?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "How consistently did a group of three graders evaluate them?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "How were MT systems ranked according to subjective evaluation?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "The variance among subjective evaluations was caused by the variance within graders (intra-grader variance) and the variance between graders (inter-grader variance). Thus, after analyzing these variances, the MT systems were ranked according to subjective evaluation with regard to the analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Evaluation Results", "sec_num": "3.1." }, { "text": "This section shows the results of checking the selfconsistency of subjective evaluation for each grader. For this purpose, the graders evaluated 100 randomly selected translations 13 twice. This checking was scheduled so that the second assessments did not follow the first assessments. For each grader, the average difference between the first and second grades was calculated, and the rate of the first and second grades being different was also calculated. Table 13 shows the expected difference between the two assessments for each grader. Table 14 shows the error rate for each grader. The expected differences of fluency and adequacy ranged from 0.21 to 0.77 and from 0.33 to 0.64, respectively, which were around 0.4 on average. This indicates that the quality of two MT systems whose difference in either fluency or adequacy is less than 0.8 cannot be distinguished. 
In other words, we cannot judge which MT system is better by this subjective evaluation results by regarding individual grader's error.", "cite_spans": [ { "start": 180, "end": 182, "text": "13", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 460, "end": 468, "text": "Table 13", "ref_id": "TABREF0" }, { "start": 544, "end": 552, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Self-consistency of Graders", "sec_num": "3.1.1." }, { "text": "As shown in Table 14 , the error rates were considerably larger than expected. Even in the smallest case, indicated by bold-face figures, they were around 20%.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Self-consistency of Graders", "sec_num": "3.1.1." }, { "text": "Subjective evaluation is a classification task. If we merge two classes that are difficult to distinguish, we can reduce the error rate in practice. To reduce the error rate, the authors considered four binary classifications as follows: 6 \"5\" versus \"less than 5\", 6 \"larger than or equal to 4\" versus \"less than 4\", 6 \"larger than or equal to 3\" versus \"less than 3\", and 6 \"larger than or equal to 2\" versus \"less than 2\". Tables 15 and 16 show the error rates of the above binary classifications in fluency and adequacy, respectively. The error rates of the binary classifications in fluency and adequacy (cf . Tables 15 and 16 ) were much smaller than those of the 5-grade classification (cf. Table 14) . The minimal error rates of the binary classifications ranged from 0.01 to 0.07.", "cite_spans": [], "ref_spans": [ { "start": 426, "end": 442, "text": "Tables 15 and 16", "ref_id": "TABREF0" }, { "start": 613, "end": 631, "text": ". Tables 15 and 16", "ref_id": "TABREF0" }, { "start": 698, "end": 707, "text": "Table 14)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Self-consistency of Graders", "sec_num": "3.1.1." }, { "text": "This section shows the results of checking consistency among the median grades of three graders. For this check, 100 translations were randomly selected from all MT outputs submitted for subjective evaluation and evaluated by all human graders. For each pair of teams for three graders as shown in Table 7 , the average of differences between median grades was calculated. Table 17 shows the expected difference between two median grades. The expected differences of fluency and adequacy ranged from 0.44 to 0.75 and from 0.34 to 0.61, respectively, which were around 0.55 on average.", "cite_spans": [], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 373, "end": 381, "text": "Table 17", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Consistency of Median Grades", "sec_num": "3.1.2." }, { "text": "This indicates that the performance of two MT systems whose difference in either fluency or adequacy is less than 1.1 cannot be distinguished. thus we cannot judge which MT system is better by these subjective evaluation results without giving consideration to individual grader's error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Consistency of Median Grades", "sec_num": "3.1.2." }, { "text": "From the discussions in Sections 3.1.1 and 3.1.2, the authors show two types of ranking lists according to the average grades and the ratios of 5-grade translations. 
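The four binary classifications of Section 3.1.1 collapse the 5-grade scale at a threshold and ask only whether the two passes fall on the same side. A minimal sketch, reusing invented grades rather than campaign data:

```python
def binary_error_rate(first_pass, second_pass, threshold):
    """Disagreement rate after collapsing 5-grade assessments into
    'grade >= threshold' versus 'grade < threshold'."""
    disagreements = sum((a >= threshold) != (b >= threshold)
                        for a, b in zip(first_pass, second_pass))
    return disagreements / len(first_pass)

first  = [5, 4, 3, 4, 2, 5, 3, 4, 1, 5]   # hypothetical first-pass grades
second = [5, 4, 4, 4, 2, 4, 3, 3, 1, 5]   # hypothetical second-pass grades
for threshold in (5, 4, 3, 2):            # the four binary classifications
    print(threshold, binary_error_rate(first, second, threshold))
```

With these toy grades the collapsed error rates fall below the 5-grade disagreement rate, mirroring the pattern reported in Tables 15 and 16.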
The error rate of the first binary classification described in Section 3.1.1, \"5\" versus \"less than 5\", is the smallest on average among the four binary classifications. For more reliable ranking, a ranking based on the first binary classification was additionally calculated. The first ranking lists are typically used in MT system comparisons, which are hereafter called the regular ranking lists. The scores in this ranking are in the range of [0,5]. Higher scores indicate that the corresponding MT systems are better, but they are not necessarily useful for our analysis because the self-consistency of each grader was low; here they are given for comparison purposes only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking MT Systems", "sec_num": "3.1.3." }, { "text": "Therefore, in addition to the regular ranking of MT systems, we conducted an alternative ranking according to ratios of 5-grade translations. In this ranking, we used assessments by the grader whose error rate was the smallest among three graders. Table 18 shows the error rate for this ranking. The values whose head is \"Total\" are the weighted average of the error rates for Teams 1 to 4, where the weights are the number of source sentences assigned to each team (cf. Table 18 ). The second ranking lists are more reliable, and these are hereafter called the alternative ranking lists. The scores in Note that the regular ranking lists are based on the medians, each of which is a median among the grades by three graders, while the alternative ranking lists are based on the grade assigned by the grader with the smallest error rate. Table 19 shows the regular ranking lists and the alternative ranking lists. In some tracks, a line is found between two MT system IDs. In the regular ranking, this indicates that the difference between scores above the line is within twice the value of the corresponding expected difference on average, 0.58 for fluency or 0.51 for adequacy as shown in Table 17 . In the alternative ranking, this indicates that the difference between scores above the line is within twice the value of the corresponding error rate in total, 0.03 for fluency or 0.07 for adequacy as shown in Table 18 . For the CE supplied track in the regular ranking in Table 19 , for example, a line is found between the MT's ID in the first place and the MT's ID in the last place in the second and third columns. With regards to fluency, we cannot judge which MT system is better than the others among the top five MT systems. On the other hand, no line is found between the MT's ID in the first place and the MT's ID in the last place in the forth and fifth columns. With regards to adequacy, we cannot judge which MT system is better than the others among all the MT systems on this track. Moreover, in some tracks, we can find a asterisk mark (9 ) : The score of a marked MT system is not significantly less than that of the MT system placed in the first position, which was calculated according to a t-test based on 5-fold cross validation. 
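The "line" drawn in Table 19 encodes a simple tie rule: two systems are separated only when their score difference exceeds twice the relevant expected difference (regular ranking) or twice the total error rate (alternative ranking). A sketch of that check, with invented system scores and the fluency figure of 0.58 quoted from Table 17:

```python
def distinguishable(score_a, score_b, expected_difference):
    """Tie rule used when reading the ranking tables: systems are separated
    only if their score gap exceeds twice the expected grading difference."""
    return abs(score_a - score_b) > 2 * expected_difference

# Regular ranking, fluency: expected inter-grader difference of 0.58 (Table 17)
print(distinguishable(3.9, 2.6, 0.58))   # True  -> a real gap
print(distinguishable(3.9, 3.5, 0.58))   # False -> cannot judge which is better
```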
Hereafter, all scores of either adequacy or fluency are compared with the highest scores of each track.", "cite_spans": [ { "start": 2056, "end": 2060, "text": "(9 )", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 248, "end": 256, "text": "Table 18", "ref_id": "TABREF0" }, { "start": 471, "end": 480, "text": "Table 18", "ref_id": "TABREF0" }, { "start": 839, "end": 847, "text": "Table 19", "ref_id": "TABREF0" }, { "start": 1192, "end": 1200, "text": "Table 17", "ref_id": "TABREF0" }, { "start": 1414, "end": 1422, "text": "Table 18", "ref_id": "TABREF0" }, { "start": 1477, "end": 1485, "text": "Table 19", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Ranking MT Systems", "sec_num": "3.1.3." }, { "text": "A summary of the alternative ranking for each track is as follows: in the CE supplied track, the second best MT system has a fluency score that is twice smaller than the expected difference of 0.03; and all MT systems except the last have adequacy scores within twice the value of the expected difference of 0.07. In the CE additional track, the second best MT system has a fluency score that is twice smaller than the expected difference; and all MT systems have adequacy scores within twice the value of the expected difference. In the CE unrestricted track, the two best MT systems have fluency and adequacy scores within twice the value of the expected difference. In the JE supplied track, the second best MT systems has a fluency score that is twice smaller than the expected difference; and all MT systems except the last have adequacy scores within twice the value of the expected difference. In the JE unrestricted track, the second best MT system has fluency score that is twice smaller than the expected difference; and the best two MT systems have adequacy scores within twice the value of the expected difference. A summary of the regular ranking for each track is omitted because it is easy for the readers to follow them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking MT Systems", "sec_num": "3.1.3." }, { "text": "Although the objective of this paper is not to discuss the superiority of MT systems, for convenience, the authors briefly summarize the common tendencies with regular and alternative rankings and some distinctive observations as follows: (1) SMT or Hybrid MT systems were ranked in the upper-half positions, that is, ATR-SMT, ISL-SMT, ISI, IRST, and RWTH for the CE supplied track; IRST for the CE additional track; IRST, and ISL-SMT for the CE unrestricted track; ATR-SMT, ISI, and RWTH for the JE supplied track; and ATR-Hybrid and RWTH for the JE unrestricted track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking MT Systems", "sec_num": "3.1.3." }, { "text": "(2) Most MT system were better than only one RBMT. 3For adequacy, ATR-SMT was ranked in a lower-half position in the regular ranking, but it was ranked in an upper-half position in the alternative ranking. Table 20 shows the ranking lists according to the automatic evaluation metrics. Asterisk marks (9 ) in this table denote the same status as in Table 19 (insignificant difference between the marked MT system and the best MT system).", "cite_spans": [ { "start": 301, "end": 305, "text": "(9 )", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 206, "end": 214, "text": "Table 20", "ref_id": "TABREF1" }, { "start": 349, "end": 357, "text": "Table 19", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Ranking MT Systems", "sec_num": "3.1.3." 
}, { "text": "A summary of ranking lists on each track according to mWER is as follows: In the CE supplied track, only the second and third best MT systems in mWER have scores that are not significantly different from that of the best MT system. In the CE additional track, the second best or worse MT systems have mWER-scores that are significantly inferior to that of the best MT system. In the remaining three tracks, the results are the same as in the CE additional track. The summaries of ranking lists on each track according to the remaining automatic evaluation metrics are omitted because it is easy for the readers to follow them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation Results", "sec_num": "3.2." }, { "text": "As with the subjective evaluation results, the authors briefly summarize the common tendencies in ranking according to the automatic evaluation metrics as follows: (1) SMT and Hybrid MT systems were ranked in the upper-half positions, that is, ATR-SMT, RWTH, IRST, ISI, and ISL-SMT for the CE supplied track; IRST for the CE additional track; IBM, IRST, ISL-SMT, and ISL-EDTRL for the CE unrestricted track; RWTH, and ISI for the JE supplied track; and ATR-Hybrid, RWTH for the CE unrestricted track. (2) Some SMT systems, including ISL-SMT and RWTH, were ranked in the best or second best position in the ranking lists corresponding to almost of all the automatic evaluation metrics, even if they might have been optimized with a particular automatic evaluation metric. Table 21 shows the correlation co-efficients between subjective and automatic evaluation results. (A) shows the correlation co-efficients between average grades for either fluency or adequacy and automatic evaluation scores; (B) and (C) show the correlation co-efficients between ratios of 5grade translations for either fluency or adequacy and automatic evaluation scores.", "cite_spans": [], "ref_spans": [ { "start": 771, "end": 779, "text": "Table 21", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Automatic Evaluation Results", "sec_num": "3.2." }, { "text": "(A) and (B) are the results based on either all CE MT systems or all JE MT systems; (C) is the results based on either partial CE MT systems or partial JE MT systems, which were selected such that the differences in the their scores were twice larger than the error rates. A partial version of (A) was not be calculated because the number of the remaining MT system was two or three.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between Subjective and Automatic Evaluation Results", "sec_num": "3.3." }, { "text": "As shown in the upper result of (A), BLEU is the automatic evaluation metric most closely correlated to average grades of fluency for CE or JE MT systems. Thus, BLEU is the most promising automatic evaluation metric according to average grades of fluency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between Subjective and Automatic Evaluation Results", "sec_num": "3.3." }, { "text": "As shown in the lower result of (A), NIST is the automatic evaluation metric most closely correlated to the average grades of adequacy for CE or JE MT systems. Thus, NIST is the most promising automatic evaluation metric according to average grades of adequacy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between Subjective and Automatic Evaluation Results", "sec_num": "3.3." 
}, { "text": "As shown in the upper result of (B) and (C), BLEU is the automatic evaluation metric most closely correlated to ratios of 5-grade translations in fluency for CE or JE MT systems. Thus, BLEU is the most promising automatic evaluation metric according to ratios of 5-grade translations in fluency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between Subjective and Automatic Evaluation Results", "sec_num": "3.3." }, { "text": "As shown in the lower result of (B), BLEU is the automatic evaluation metric most closely correlated to ratios of 5-grade translations in adequacy for CE MT systems but only the fourth most closely correlated to ratios of 5-grade translations in adequacy for JE MT systems. mWER is the automatic evaluation metric most closely correlated to ratios of 5-grade translations in adequacy for JE MT systems but as well as the third most closely correlated to ratios of 5-grade translations in adequacy for CE MT systems. On the other hand, as shown in the lower result of (C), mPER is the automatic evaluation metric most closely correlated to ratios of 5-grade translations in adequacy for CE or JE MT systems. Considering these observations, we could not say which is best, the lower result of (B) or (C). Therefore, from these results, we could not judge which automatic evaluation metric is the most promising in regard to ratios of 5-grade translations in adequacy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between Subjective and Automatic Evaluation Results", "sec_num": "3.3." }, { "text": "Various problems of subjective evaluation (fluency, adequacy) were found through this evaluation campaign. A summary of key findings is given as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4." }, { "text": "The grade description is ambiguous. As a result, the interpretation of grades for fluency or adequacy depended on each grader.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "Self-consistency of a grader's subjective evaluation results was poor due to the way the translations to be evaluated were displayed or to the way the graders were selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "The variance of median grades by multiple graders was large due to the way the graders were selected. The number of graders is required to be as small as possible. 
appended to MT ID indicates that the MT system is SMT, EBMT, RBMT, or Hybrid MT, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "International Workshop on Spoken Language Translation, http://www.slt.atr.jp/IWSLT2004 2 Consortium for Speech Translation Advanced Research, http://www.cstar.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Semi-automatic segmentation using a tool provided by NLPR, a member of the C-STAR consortium, http://nlpr-web.ia.ac.cn/english/index.html5 Automatic segmentation using the CHASEN tool, http://chasen.naist.jp6 Linguistic Data Consortium, http://www.ldc.upenn.edu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.slt.atr.jp/EVAL13 Note that the 100 translations were different for each grader.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some single reference translations shown to graders for adequacy evaluation were ambiguous, which resulted in the interpretations of some reference translations being dependent to graders.6(adequacy) Sometimes the translation has MORE information than the reference translation. If the information was of considerable importance or essential, the lowest rank was assigned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research reported here was supported in part by the National Institute of Information and Communications Technology project entitled \"A study of speech dialogue translation technology based on a large corpus\" (ATR), the project FU-PAT WebFaq funded by the Province of Trento (IRST), and the Nature Science Foundation of China under grant number 60175012 and 6037 5018 (NLPR).The IWSLT 2004 organizing committee would like to thank the organizing committees of ICSLP 2004 for their support and for kindly providing the template files. Moreover, we would like to thank the C-STAR partners for their accomplishments during the subjective evaluation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null }, { "text": "NIST GTM CE -1.0000 -1.0000 1.0000 1.0000 1.0000 JE -0.9894 -0.9984 0.9195 0.9907 0.9977", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards innovative evaluation methodologies for speech translation", "authors": [ { "first": "M", "middle": [], "last": "Paul", "suffix": "" }, { "first": "H", "middle": [], "last": "Nakaiwa", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2004, "venue": "Working Notes of the NTCIR-4 2004 Meeting", "volume": "2", "issue": "", "pages": "17--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Paul, H. Nakaiwa, and M. Federico, \"Towards in- novative evaluation methodologies for speech transla- tion,\" in Working Notes of the NTCIR-4 2004 Meeting, Supplement Volume 2, Tokyo, Japan, 2004, pp. 
17-21.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Creating corpora for speech-to-speech translation", "authors": [ { "first": "G", "middle": [], "last": "Kikui", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2003, "venue": "Proc. of the EUROSPEECH03", "volume": "", "issue": "", "pages": "381--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Kikui, E. Sumita, T. Takezawa, and S. Yamamoto, \"Creating corpora for speech-to-speech translation,\" in Proc. of the EUROSPEECH03, Geneve, Switzerland, 2003, pp. 381-384.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The ARPA MT evaluation methodologies: evolution, lessons, and future approaches", "authors": [ { "first": "J", "middle": [ "S" ], "last": "White", "suffix": "" }, { "first": "T", "middle": [], "last": "O'connell", "suffix": "" }, { "first": "F", "middle": [], "last": "O'mara", "suffix": "" } ], "year": 1994, "venue": "Proc of the AMTA", "volume": "", "issue": "", "pages": "193--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. S. White, T. O'Connell, and F. O'Mara, \"The ARPA MT evaluation methodologies: evolution, lessons, and future approaches,\" in Proc of the AMTA, 1994, pp. 193-205.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Linguistic Data Annotation Specification: Assessment of Fluency and Adequacy in Chinese-English Translations Revision 1.0, Linguistic Data Consortium", "authors": [], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Annotation Specification: Assess- ment of Fluency and Adequacy in Chinese-English Translations Revision 1.0, Linguistic Data Consor- tium, 2002, http://www.ldc.upenn.edu/Projects/TIDES/ Translation/TransAssess02.pdf.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An evaluation tool for machine translation: Fast evaluation for machine translation research", "authors": [ { "first": "S", "middle": [], "last": "Niessen", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "G", "middle": [], "last": "Leusch", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 2nd LREC", "volume": "", "issue": "", "pages": "39--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Niessen, F. J. Och, G. Leusch, and H. Ney, \"An evaluation tool for machine translation: Fast evaluation for machine translation research,\" in Proc. of the 2nd LREC, Athens, Greece, 2000, pp. 39-45.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 41st ACL", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, \"Minimum error rate training in statistical machine translation,\" in Proc. of the 41st ACL, Sapporo, Japan, 2003, pp. 
160-167.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th ACL", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu, \"BLEU: a method for automatic evaluation of machine trans- lation,\" in Proc. of the 40th ACL, Philadelphia, USA, 2002, pp. 311-318.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. of the HLT 2002", "volume": "", "issue": "", "pages": "257--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Doddington, \"Automatic evaluation of machine translation quality using n-gram co-occurrence statis- tics,\" in Proc. of the HLT 2002, San Diego, USA, 2002, pp. 257-258.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Evaluation of machine translation and its evaluation", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Turian", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "I", "middle": [ "D" ], "last": "Melamed", "suffix": "" } ], "year": 2003, "venue": "Proc. of the MT Summmit IX", "volume": "", "issue": "", "pages": "386--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. P. Turian, L. Shen, and I. D. Melamed, \"Evaluation of machine translation and its evaluation,\" in Proc. of the MT Summmit IX, New Orleans, USA, 2003, pp. 386- 393.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "EBMT, SMT, Hybrid and More: ATR spoken language translation system", "authors": [ { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Y", "middle": [], "last": "Akiba", "suffix": "" }, { "first": "T", "middle": [], "last": "Doi", "suffix": "" }, { "first": "A", "middle": [], "last": "Finch", "suffix": "" }, { "first": "K", "middle": [], "last": "Imamura", "suffix": "" }, { "first": "H", "middle": [], "last": "Okuma", "suffix": "" }, { "first": "M", "middle": [], "last": "Paul", "suffix": "" }, { "first": "M", "middle": [], "last": "Shimohata", "suffix": "" }, { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "13--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Sumita, Y. Akiba, T. Doi, A. Finch, K. Imamura, H. Okuma, M. Paul, M. Shimohata, and T. Watan- abe, \"EBMT, SMT, Hybrid and More: ATR spoken language translation system,\" in Proc. of the Interna- tional Workshop on Spoken Language Translation, Ky- oto, Japan, 2004, pp. 
13-20.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Auto word alignment based Chinese-English EBMT", "authors": [ { "first": "M", "middle": [], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "H", "middle": [], "last": "Liu", "suffix": "" }, { "first": "X", "middle": [], "last": "Shi", "suffix": "" }, { "first": "H", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "27--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Yang, T. Zhao, H. Liu, X. Shi, and H. Jiang, \"Auto word alignment based Chinese-English EBMT,\" in Proc. of the International Workshop on Spoken Lan- guage Translation, Kyoto, Japan, 2004, pp. 27-29.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Experimenting with phrase-based statistical translation within the IWSLT 2004 Chinese-to-English Shared Translation Task", "authors": [ { "first": "P", "middle": [], "last": "Langlais", "suffix": "" }, { "first": "M", "middle": [], "last": "Carl", "suffix": "" }, { "first": "O", "middle": [], "last": "Streiter", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "31--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Langlais, M. Carl, and O. Streiter, \"Experiment- ing with phrase-based statistical translation within the IWSLT 2004 Chinese-to-English Shared Translation Task,\" in Proc. of the International Workshop on Spo- ken Language Translation, Kyoto, Japan, 2004, pp. 31- 38.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "IBM spoken language translation system evaluation", "authors": [ { "first": "Y.-S", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "39--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y.-S. Lee and S. Roukos, \"IBM spoken language trans- lation system evaluation,\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 39-46.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The ITC-irst statistical machine translation system for IWSLT-2004", "authors": [ { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "R", "middle": [], "last": "Cattoni", "suffix": "" }, { "first": "M", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "51--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Bertoldi, R. Cattoni, M. Cettolo, and M. Federico, \"The ITC-irst statistical machine translation system for IWSLT-2004,\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 
51-58.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The ISI/USC MT system", "authors": [ { "first": "E", "middle": [], "last": "Ettelaie", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Munteanu", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "I", "middle": [], "last": "Thayer", "suffix": "" }, { "first": "Q", "middle": [], "last": "Tipu", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Ettelaie, K. Knight, D. Marcu, D. S. Munteanu, F. J. Och, I. Thayer, and Q. Tipu, \"The ISI/USC MT sys- tem,\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, p. 59.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The ISL statistical translation system for spoken language translation", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "S", "middle": [], "last": "Hewavitharana", "suffix": "" }, { "first": "M", "middle": [], "last": "Kolss", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel, S. Hewavitharana, M. Kolss, and A. Waibel, \"The ISL statistical translation system for spoken lan- guage translation,\" in Proc. of the International Work- shop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 65-72.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Alignment templates: the RWTH SMT system", "authors": [ { "first": "O", "middle": [], "last": "Bender", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Bender, R. Zens, E. Matusov, and H. Ney, \"Align- ment templates: the RWTH SMT system\",\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 79-84.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "TALP: Xgrambased spoken language translation system", "authors": [ { "first": "A", "middle": [ "D" ], "last": "Gispert", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Marino", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. D. GISPERT and J. B. MARINO, \"TALP: Xgram- based spoken language translation system,\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 
85-90.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Towards fairer evaluations of commercial MT systems on Basic Travel Expressions Corpora", "authors": [ { "first": "H", "middle": [], "last": "Blanchon", "suffix": "" }, { "first": "C", "middle": [], "last": "Boitet", "suffix": "" }, { "first": "F", "middle": [], "last": "Brunet-Manquat", "suffix": "" }, { "first": "M", "middle": [], "last": "Tomokiyo", "suffix": "" }, { "first": "A", "middle": [], "last": "Hamon", "suffix": "" }, { "first": "V", "middle": [ "T" ], "last": "Hung", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bey", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "21--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Blanchon, C. Boitet, F. Brunet-Manquat, M. Tomokiyo, A. Hamon, V. T. Hung, and Y. Bey, \"Towards fairer evaluations of commercial MT systems on Basic Travel Expressions Corpora,\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 21-26.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An EBMT system based on word alignment", "authors": [ { "first": "H", "middle": [], "last": "Hou", "suffix": "" }, { "first": "D", "middle": [], "last": "Deng", "suffix": "" }, { "first": "G", "middle": [], "last": "Zou", "suffix": "" }, { "first": "H", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "D", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "47--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Hou, D. Deng, G. Zou, H. Yu, Y. Liu, D. Xiong, and Q. Liu, \"An EBMT system based on word alignment,\" in Proc. of the International Workshop on Spoken Lan- guage Translation, Kyoto, Japan, 2004, pp. 47-49.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The ISL EDTRL system", "authors": [ { "first": "J", "middle": [], "last": "Reichert", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "61--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Reichert and A. Waibel, \"The ISL EDTRL system,\" in Proc. of the International Workshop on Spoken Lan- guage Translation, Kyoto, Japan, 2004, pp. 61-64.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multi-engine based Chinese-to-English translation system", "authors": [ { "first": "Y", "middle": [], "last": "Zuo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "73--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Zuo, Y. Zhou, and C. Zong, \"Multi-engine based Chinese-to-English translation system,\" in Proc. of the International Workshop on Spoken Language Transla- tion, Kyoto, Japan, 2004, pp. 
73-77.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Example-based machine translation using structual translation examples", "authors": [ { "first": "E", "middle": [], "last": "Aramaki", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "91--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Aramaki and S. Kurohashi, \"Example-based ma- chine translation using structual translation examples,\" in Proc. of the International Workshop on Spoken Lan- guage Translation, Kyoto, Japan, 2004, pp. 91-94.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "case-insensitive, i.e., lower-case only 6 no punctuation marks, i.e., remove '.' ',' '?' '!' '\"' 6 no word compounds, i.e., replace '-' with blank space 6 spelling-out of numerals", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "num": null, "content": "
type     | language  | sentences total | sentences unique | avg. length | word tokens | word types
training | Chinese   | 20,000          | 19,288           | 9.1         | 182,904     | 7,643
         | English   |                 | 19,949           | 9.4         | 188,935     | 8,191
         | Japanese  | 20,000          | 19,046           | 10.5        | 209,012     | 9,277
         | English   |                 | 19,923           | 9.4         | 188,712     | 8,074
develop  | Chinese   | 506             | 495              | 6.9         | 3,515       | 870
         | Japanese  | 506             | 502              | 8.6         | 4,374       | 954
         | English*  | 8,089           | 7,173            | 7.5         | 67,410      | 2,435
test     | Chinese   | 500             | 492              | 7.6         | 3,794       | 893
         | Japanese  | 500             | 491              | 8.7         | 4,370       | 979
         | English*  | 8,000           | 6,907            | 8.4         | 66,994      | 2,496
* English reference translations used for automatic evaluation
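The token, type, and average-length figures above are standard corpus statistics over the whitespace-tokenized sentences. As a minimal sketch (assuming one tokenized sentence per line; the file name is purely illustrative), they could be recomputed as follows:

```python
# Minimal sketch: recompute the corpus statistics reported above from a
# plain-text file containing one whitespace-tokenized sentence per line.
def corpus_stats(path):
    with open(path, encoding="utf-8") as f:
        sentences = [line.strip() for line in f if line.strip()]
    tokens = [tok for sent in sentences for tok in sent.split()]
    return {
        "sentences_total": len(sentences),          # e.g., 20,000 for training
        "sentences_unique": len(set(sentences)),
        "avg_length": round(len(tokens) / len(sentences), 1),
        "word_tokens": len(tokens),
        "word_types": len(set(tokens)),
    }

if __name__ == "__main__":
    print(corpus_stats("train.en"))  # hypothetical file name
```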
", "html": null, "text": "The IWSLT04 corpus", "type_str": "table" }, "TABREF1": { "num": null, "content": "
# of references | 16  | 15  | 14 | 13 | 12 | 11 | 10 | 9 | 8 | 7
test (500)      | 169 | 101 | 73 | 60 | 44 | 25 | 12 | 9 | 6 | 1
develop (506)   | 191 | 95  | 89 | 67 | 32 | 19 | 9  | 2 | 2 | 0
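The distribution above records, for each source sentence, how many of its 16 reference translations are distinct. A minimal sketch of that count, assuming the references are available as a list of strings per source sentence (this data layout is an assumption, not the distributed format):

```python
from collections import Counter

# Count, per source sentence, how many of its reference translations are
# unique, and tabulate the distribution as in the table above.
def unique_reference_distribution(references):
    # references: dict mapping a sentence ID to its list of reference strings
    counts = Counter(len(set(refs)) for refs in references.values())
    return dict(sorted(counts.items(), reverse=True))

# Toy example: one sentence with 2 unique references, one with a duplicate.
toy = {"s1": ["thank you .", "thanks ."], "s2": ["hello .", "hello ."]}
print(unique_reference_distribution(toy))  # {2: 1, 1: 1}
```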
", "html": null, "text": "Distributions of unique reference translations", "type_str": "table" }, "TABREF2": { "num": null, "content": "
My name is Paul Smith .
Are the tickets on sale yet ?
Thank you for the nice meal .
I 'd like one of those , please .
Let me have ten thirty-five cent stamps .
I have to transfer to another flight in Hong Kong at three .
Okay , here you are . If you need anything else , let me know .
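For automatic evaluation, sentences such as those above are normalized according to the conventions listed earlier (lower-case only; removal of '.', ',', '?', '!', '"'; replacement of '-' by a blank space; spelling-out of numerals). The following is a minimal sketch of such a normalization step, not the official evaluation script; the numeral handling is simplified to plain digit sequences:

```python
import re

# Illustrative normalization: lower-casing, punctuation removal, compound
# splitting, and (naive) spelling-out of numerals.
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def normalize(sentence):
    s = sentence.lower()                   # case-insensitive, i.e. lower-case only
    s = re.sub(r'[.,?!"]', " ", s)         # remove punctuation marks
    s = s.replace("-", " ")                # no word compounds
    tokens = []
    for tok in s.split():
        if tok.isdigit():                  # simplified numeral spelling-out
            tokens.extend(DIGITS[d] for d in tok)
        else:
            tokens.append(tok)
    return " ".join(tokens)

print(normalize("Let me have ten thirty-five cent stamps ."))
# -> 'let me have ten thirty five cent stamps'
print(normalize("My flight leaves at 3 ."))  # illustrative input with a digit
# -> 'my flight leaves at three'
```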
", "html": null, "text": "English sample sentences", "type_str": "table" }, "TABREF3": { "num": null, "content": "
LDC resources
LDC2000T46 Hong Kong News Parallel Text
LDC2000T47 Hong Kong Laws Parallel Text
LDC2000T50 Hong Kong Hansards Parallel Text
LDC2001T11 Chinese Treebank 2.0
LDC2001T57 TDT2 Multilanguage Text Version 4.0
LDC2001T58 TDT3 Multilanguage Text Version 2.0
LDC2002L27 Chinese English Translation Lexicon version 3.0
LDC2002T01 Multiple-Translation Chinese Corpus
LDC2003T16 SummBank 1.0
LDC2003T17 Multiple-Translation Chinese (MTC) Part 2
LDC2004T05 Chinese Treebank 4.0
LDC2004T09 ACE 2003 Multilingual Training Data
", "html": null, "text": "", "type_str": "table" }, "TABREF4": { "num": null, "content": "
Resources      | Data Track
               | Small | Additional | Unrestricted
IWSLT04 corpus | ✓     | ✓          | ✓
LDC resources  |       | ✓          | ✓
", "html": null, "text": "Permitted linguistic resources", "type_str": "table" }, "TABREF5": { "num": null, "content": "
Fluency              | Adequacy
5 Flawless English   | 5 All Information
4 Good English       | 4 Most Information
3 Non-native English | 3 Much Information
2 Disfluent English  | 2 Little Information
1 Incomprehensible   | 1 None
", "html": null, "text": "Human assessment", "type_str": "table" }, "TABREF6": { "num": null, "content": "
       | 1st grader | 2nd grader | 3rd grader | # input data
Team 1 | G0         | G2         | G9         | 200
Team 2 | G4         | G5         | G8         | 160
Team 3 | G1         | G3         | G6         | 80
Team 4 | G0         | G3         | G7         | 60
", "html": null, "text": "Workload of graders", "type_str": "table" }, "TABREF7": { "num": null, "content": "
SMT    | 7 | ATR-SMT, IBM, IRST, ISI, ISL-SMT, RWTH, TALP
EBMT   | 3 | HIT, ICT, UTokyo
RBMT   | 1 | CLIPS
Hybrid | 4 | ATR-HYBRID (SMT+EBMT), IAI (SMT+TM), ISL-EDTRL (SMT+IF), NLPR (RBMT+TM)
", "html": null, "text": "MT system types", "type_str": "table" }, "TABREF8": { "num": null, "content": "
Data Track   | CE | JE
Small        | 9  | 4
Additional   | 2  | -
Unrestricted | 9  | 4
Organization | 13 | 6
", "html": null, "text": "MT system submissions", "type_str": "table" }, "TABREF9": { "num": null, "content": "
CE
Small                 | Additional | Unrestricted
ATR-SMT (ATR-S): [10] | IRST: [14] | CLIPS: [19]
HIT: [11]             | ISI: [15]  | HIT: [11]
IAI: [12]             |            | IBM: [13]
IBM: [13]             |            | ICT: [20]
IRST: [14]            |            | IRST: [14]
ISI: [15]             |            | ISI: [15]
ISL-SMT (ISL-S): [16] |            | ISL-EDTRL (ISL-E): [21]
RWTH: [17]            |            | ISL-SMT (ISL-S): [16]
TALP: [18]            |            | NLPR: [22]
JE
Small                 | Unrestricted
ATR-SMT (ATR-S): [10] | ATR-HYBRID (ATR-H): [10]
IBM: [13]             | CLIPS: [19]
ISI: [15]             | RWTH: [17]
RWTH: [17]            | UTokyo: [23]
", "html": null, "text": "MT system ID", "type_str": "table" }, "TABREF10": { "num": null, "content": "
Event                           | Date
Evaluation Specifications       | February 15, 2004
Application Submission          | April 15, 2004
Notification of Acceptance      | April 30, 2004
Sample Corpus Release           | May 7, 2004
Training Corpus Release         | May 21, 2004
Development Corpus Release      | July 15, 2004
Evaluation Server Online        | August 1, 2004
Test Corpus Release             | August 9, 2004
Run Submission                  | August 12, 2004
Result Feedback to Participants | September 10, 2004
Camera-ready Paper Submission   | September 17, 2004
Workshop                        | September 30 - October 1, 2004
", "html": null, "text": "Evaluation campaign schedule", "type_str": "table" }, "TABREF11": { "num": null, "content": "
Grader ID | Fluency | Adequacy
G0        | 0.21    | 0.33
G1        | 0.37    | 0.39
G2        | 0.35    | 0.44
G3        | 0.49    | 0.38
G4        | 0.34    | 0.34
G5        | 0.22    | 0.44
G6        | 0.77    | 0.64
G7        | 0.29    | 0.44
G8        | 0.44    | 0.44
G9        | 0.46    | 0.55
Average   | 0.39    | 0.44
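The table reports, per grader, the expected difference between two assessments of the same translations; the following table reports the corresponding error rate, i.e., the fraction of items whose score changed between the two trials. A minimal sketch of both quantities, reading the expected difference as a mean absolute difference over two equally long lists of 1-5 scores (an interpretation, not the authors' scripts):

```python
# Intra-grader consistency over two rounds of assessments of the same
# translations (1-5 fluency or adequacy scores).
def mean_abs_diff(first, second):
    # expected difference between the two assessments
    assert len(first) == len(second)
    return sum(abs(a - b) for a, b in zip(first, second)) / len(first)

def error_rate(first, second):
    # fraction of items whose score changed between the two rounds
    assert len(first) == len(second)
    return sum(a != b for a, b in zip(first, second)) / len(first)

# Toy example: five translations graded twice by the same grader.
round1 = [5, 4, 3, 4, 2]
round2 = [5, 3, 3, 4, 4]
print(mean_abs_diff(round1, round2))  # 0.6
print(error_rate(round1, round2))     # 0.4
```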
Participants had access to an extended version of the evaluation server that allowed them to select specific evaluation parameters and validate the robustness of their MT systems for different evaluation specifications. Please refer to the participants' MT system descriptions in the IWSLT04 workshop proceedings for details on their findings.
", "html": null, "text": "Expected difference between two assessments when each translation was evaluated twice by the same grader.", "type_str": "table" }, "TABREF12": { "num": null, "content": "
Grader ID | Fluency | Adequacy
G0        | 0.19    | 0.23
G1        | 0.33    | 0.34
G2        | 0.32    | 0.34
G3        | 0.47    | 0.33
G4        | 0.26    | 0.32
G5        | 0.14    | 0.40
G6        | 0.53    | 0.55
G7        | 0.28    | 0.39
G8        | 0.37    | 0.37
G9        | 0.33    | 0.37
Average   | 0.322   | 0.364
", "html": null, "text": "Error rate of each grader in the same trial asTable 13", "type_str": "table" }, "TABREF13": { "num": null, "content": "
Grader's ID | 5 or less | 4 or not | 3 or not | 2 or not
G0          | 0.01      | 0.07     | 0.06     | 0.07
G1          | 0.05      | 0.10     | 0.15     | 0.07
G2          | 0.08      | 0.03     | 0.13     | 0.11
G3          | 0.12      | 0.06     | 0.23     | 0.08
G4          | 0.07      | 0.07     | 0.09     | 0.11
G5          | 0.05      | 0.09     | 0.06     | 0.02
G6          | 0.11      | 0.17     | 0.30     | 0.19
G7          | 0.04      | 0.09     | 0.13     | 0.03
G8          | 0.09      | 0.06     | 0.19     | 0.10
G9          | 0.11      | 0.06     | 0.18     | 0.11
Average     | 0.073     | 0.08     | 0.152    | 0.089
", "html": null, "text": "Error rate of binary fluency classifications", "type_str": "table" }, "TABREF14": { "num": null, "content": "
Grader's ID | 5 or less | 4 or not | 3 or not | 2 or not
G0          | 0.08      | 0.10     | 0.07     | 0.08
G1          | 0.17      | 0.10     | 0.08     | 0.04
G2          | 0.07      | 0.13     | 0.13     | 0.11
G3          | 0.05      | 0.13     | 0.16     | 0.04
G4          | 0.09      | 0.06     | 0.08     | 0.11
G5          | 0.11      | 0.11     | 0.10     | 0.12
G6          | 0.07      | 0.22     | 0.27     | 0.08
G7          | 0.08      | 0.10     | 0.16     | 0.10
G8          | 0.11      | 0.13     | 0.15     | 0.05
G9          | 0.13      | 0.13     | 0.15     | 0.14
Average     | 0.096     | 0.121    | 0.135    | 0.087
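The two tables above collapse the 5-point scores into binary classes before measuring intra-grader disagreement. One plausible reading of the column labels is a threshold split (scores at or above the threshold versus scores below it); under that assumption, a minimal sketch looks as follows:

```python
# Collapse 1-5 scores into binary classes at a threshold and measure how often
# the two assessments of the same item fall on different sides of it.
# The ">= threshold vs. below" reading of the column labels is an assumption.
def binary_error_rate(first, second, threshold):
    assert len(first) == len(second)
    disagree = sum((a >= threshold) != (b >= threshold)
                   for a, b in zip(first, second))
    return disagree / len(first)

round1 = [5, 4, 3, 4, 2]
round2 = [5, 3, 3, 4, 4]
for t in (5, 4, 3, 2):
    print(t, binary_error_rate(round1, round2, t))
# -> 5 0.0, 4 0.4, 3 0.2, 2 0.0
```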
", "html": null, "text": "Error rate of binary adequacy classifications", "type_str": "table" }, "TABREF15": { "num": null, "content": "
            | Fluency          | Adequacy
            | T2    T3    T4   | T2    T3    T4
Team 1 (T1) | 0.49  0.75  0.47 | 0.54  0.61  0.34
Team 2 (T2) | -     0.68  0.66 | -     0.59  0.48
Team 3 (T3) | -     -     0.44 | -     -     0.51
Ave.        | 0.58             | 0.51
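Since every input was judged by three graders (cf. the grader teams listed earlier), the table above compares the per-item medians produced by different teams. A minimal sketch under that reading, with the mean absolute difference standing in for the expected difference and an illustrative data layout:

```python
from statistics import median

# Per-item median over the three graders of a team, and the expected (mean
# absolute) difference between the medians of two different teams.
def team_medians(scores_by_grader):
    # scores_by_grader: three equally long lists of 1-5 scores, one per grader
    return [median(item) for item in zip(*scores_by_grader)]

def expected_median_difference(team_a, team_b):
    med_a, med_b = team_medians(team_a), team_medians(team_b)
    return sum(abs(a - b) for a, b in zip(med_a, med_b)) / len(med_a)

# Toy example: two teams of three graders judging the same four translations.
team1 = [[5, 4, 3, 2], [4, 4, 3, 3], [5, 3, 2, 2]]
team2 = [[4, 4, 4, 2], [4, 3, 3, 2], [3, 3, 4, 1]]
print(expected_median_difference(team1, team2))  # 0.75
```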
", "html": null, "text": "Expected difference between two medians of three graders by team", "type_str": "table" }, "TABREF16": { "num": null, "content": "
       | Fluency | Adequacy
Team 1 | 0.01    | 0.07
Team 2 | 0.05    | 0.09
Team 3 | 0.05    | 0.05
Team 4 | 0.01    | 0.05
Total  | 0.03    | 0.07
this ranking are in the range of [0, 1]. Higher scores indicate that the corresponding MT systems are better.
", "html": null, "text": "Error rate of assessments by grader with the smallest error rate among three graders", "type_str": "table" }, "TABREF17": { "num": null, "content": "
Chinese-to-English (CE)
Regular ranking
Track | Fluency       | Adequacy
      | Score  MT ID  | Score  MT ID
CE U  | 3.776  IRST   | 3.662  ISL-S
      | 3.776  ISL-S* | 3.526  IRST*
      | 3.400  NLPR   | 3.254  NLPR
", "html": null, "text": "Ranking lists according to subjective evaluation results", "type_str": "table" } } } }