{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:22:12.179976Z" }, "title": "IBM Spoken Language Translation System Evaluation", "authors": [ { "first": "Young-Suk", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM T. J. Watson Research Center Yorktown Heights", "location": { "postCode": "10598", "region": "NY" } }, "email": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM T. J. Watson Research Center Yorktown Heights", "location": { "postCode": "10598", "region": "NY" } }, "email": "roukos@us.ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We discuss phrase-based statistical machine translation performance enhancing techniques which have proven effective for Japanese-to-English and Chinese-to-English translation of BTEC corpus. We also address some issues that arise in conversational speech translation quality evaluations.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "We discuss phrase-based statistical machine translation performance enhancing techniques which have proven effective for Japanese-to-English and Chinese-to-English translation of BTEC corpus. We also address some issues that arise in conversational speech translation quality evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "IBM spoken language translation system is based on a statistical translation model introduced in [1] . 
We adopt a phrase translation model as the baseline, for which the unit of translation is a phrase consisting of one or more words, [2] , [3] , [4] , [5] , [6] .", "cite_spans": [ { "start": 97, "end": 100, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 235, "end": 238, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 241, "end": 244, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 247, "end": 250, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 253, "end": 256, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 259, "end": 262, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The baseline system is augmented by the morphological analysis detailed in [7] for an improved word alignment and phrase selection. System performance is significantly improved by phrase selection from recall-oriented word alignments (see Section 2 for the definition) and filtering. Re-ordering of the source language sentence into the target language word order, [21] , [22] , further improves phrase selection and word order accuracy. Non-monotone decoding and language model probability computation for every word in a target phrase enhance the translation quality over monotone decoding and language model probability computation only for words at phrase boundaries.", "cite_spans": [ { "start": 75, "end": 78, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 365, "end": 369, "text": "[21]", "ref_id": "BIBREF20" }, { "start": 372, "end": 376, "text": "[22]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In Section 2, we give an overview of the baseline system. In Section 3, we discuss translation quality enhancing techniques along with experimental results. In Section 4, we address some issues in conversational speech translation evaluation. 
We discuss future work in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We use the term block (b) to denote a phrase translation pair consisting of a source phrase ( f ) and a target phrase ( e ). We use the symbol Pr(\u2022) to denote a general probability distribution and p(\u2022) to denote a model-based probability distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our baseline phrase translation system described in [Tillmann 2003 ] consists of three major components: word alignment, block selection, and decoding.", "cite_spans": [ { "start": 52, "end": 66, "text": "[Tillmann 2003", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline System Overview", "sec_num": "2." }, { "text": "We obtain word alignment between the source and the target language sentences by successive application of IBM Model 1 Viterbi alignment for initialization and iterative HMM-based alignment, [8] , for refinement.", "cite_spans": [ { "start": 191, "end": 194, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.1." }, { "text": "We align a parallel corpus bi-directionally: one from the source language to the target language (A 1 : f \u2192 e) and the other from the target language to the source language (A 2 : e \u2192 f), where f denotes a source word position and e a target word position. We define precision (A P ) and recall (A R ) oriented alignments as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.1." }, { "text": "A P = A 1 \u2229 A 2 and A R = A 1 \u222a A 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.1." }, { "text": "A P is the intersection of A 1 and A 2 , a high precision alignment. A R is the union of A 1 and A 2 , a high recall alignment. 
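The set operations can be made concrete with a minimal illustrative sketch (an assumed representation, not the system's actual code): each directional alignment is a set of (source position, target position) links, and the hypothetical link sets below stand in for the two Viterbi runs.

```python
# Illustrative sketch: directional word alignments as sets of (f, e) links.
A1 = {(0, 0), (1, 2), (2, 1)}    # A1: f -> e Viterbi links (hypothetical)
A2 = {(0, 0), (1, 2), (3, 3)}    # A2: e -> f Viterbi links (hypothetical)

A_P = A1 & A2                    # intersection: high precision alignment
A_R = A1 | A2                    # union: high recall alignment
col_A_P = {f for (f, e) in A_P}  # source positions covered by A_P

assert A_P == {(0, 0), (1, 2)}
assert A_R == {(0, 0), (1, 2), (2, 1), (3, 3)}
assert col_A_P == {0, 1}
```

Links confirmed by both directions survive into A P , while A R keeps everything either direction proposed, which is why A P trades recall for precision and A R the reverse.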
The set of all source word positions covered by some word links in A P is denoted as col(A P ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.1." }, { "text": "Starting from a high precision word alignment A P , we obtain blocks according to (i) a projection algorithm and (ii) a block extension algorithm. Projection Algorithm: We first project source intervals [f\u00b4, f] , where f\u00b4, f \u2208 col(A P ). We compute the minimum target index e\u00b4 and maximum target index e for the word links that fall into the interval [f\u00b4, f]: The block consisting of the target and source words at the link positions is denoted as b. Target and source words in a block are subject to the contiguity condition.", "cite_spans": [ { "start": 203, "end": 210, "text": "[f\u00b4, f]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Block Selection", "sec_num": "2.2." }, { "text": "[f\u00b4, f] \u2192 [e\u00b4, e], where e\u00b4 = min { e : (f, e) \u2208 A P , f \u2208 [f\u00b4, f] } and e = max { e : (f, e) \u2208 A P , f \u2208 [f\u00b4, f] }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block Selection", "sec_num": "2.2." }, { "text": "Extension Algorithm: We expand the alignment links to include alignment points that lie in the neighborhood of the high precision alignment A P and within the high recall alignment A R . The extensions are carried out iteratively until no new alignment links from A R are added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block Selection", "sec_num": "2.2." }, { "text": "Among the candidate blocks obtained according to the projection and extension algorithm, blocks satisfying the following three conditions are kept for use in translation: i. Source phrase ( f ) length \u2264 10 morphemes 1 ii.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block Selection", "sec_num": "2.2." 
}, { "text": "Target phrase ( e ) length \u2264 10 morphemes iii.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block Selection", "sec_num": "2.2." }, { "text": "Block (b) frequency > 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block Selection", "sec_num": "2.2." }, { "text": "Two types of model parameters, block unigram model and word trigram language model, are used in the baseline decoder. Block unigram probability is defined in (1) , where n is the number of distinct blocks:", "cite_spans": [ { "start": 158, "end": 161, "text": "(1)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "\u2211 = = n i i b count b count b p 0 ) ( ) ( ) (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "Word trigram probability is computed at target phrase boundaries only, skipping over words within a target phrase in case the target phrase length \u2265 2. Trigram language model probability between adjacent target phrases is computed, as in is the previous (one or more) target phrase in the hypothesis. e 1 is the first word of i e 2 , e h the last target word in the hypothesis and e h-1 the second to the last target word in the hypothesis. The task of the decoder is to find the block sequence that maximizes the product of the unigram block probability and the trigram language model probability without reordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "In decoder implementation, we use a DP-based beam search procedure. We start with an initial empty hypothesis. 
We maximize over all block segmentations b 1 , ..., b n , where n is the number of blocks covering the input sentence, with the source phrases yielding a segmentation of the input sentence while the target sentence is generated simultaneously. The decoder processes the input sentence 'cardinality synchronously', i.e. all partial hypotheses active at a given point cover the same number of input words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "We prune out weaker hypotheses based on the cost (for block unigram probability and trigram language model probability) they incurred so far. The cheapest final hypothesis, i.e. the hypothesis with the highest probability, with no untranslated source words is the translation output. (In case the length of the current target phrase e i is 1, e 1 is the same as e i .)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "Performance evaluations are carried out on the C-STAR 2003 development test data consisting of 506 segments for both Japanese-to-English (J2E hereafter) and Chinese-to-English (C2E hereafter) translations. BLEU [9] has been used for translation quality evaluations, with 16 reference translations.", "cite_spans": [ { "start": 211, "end": 214, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Performance Enhancing Techniques", "sec_num": "3." }, { "text": "Baseline system performances are given in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baseline system", "sec_num": "3.1."
}, { "text": "Languages J2E C2E Baseline 0.2924 0.2664 Table 2 : Baseline system performances The key properties of the baseline system include (i) block selection from high precision word alignments using the projection and the extension algorithm, (ii) monotone decoding using block unigram probability, and word trigram language model probability at target phrase boundaries only.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baseline system", "sec_num": "3.1." }, { "text": "We have found it effective to select blocks from high recall word alignments according to the projection algorithm and then filter out blocks which do not satisfy a length ratio between the source and the target phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block selection from high recall alignment", "sec_num": "3.2." }, { "text": "Filter out blocks if they satisfy the condition (3):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese-to-English", "sec_num": "3.2.1." }, { "text": "(3) target phrase length \u2265 source phrase length * 2.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese-to-English", "sec_num": "3.2.1." }, { "text": "Target and source phrase length ratio \u2212 2.5 in (3) \u2212 is determined empirically. We start with a value higher than the source and target sentence length ratio (1.03 in our training corpus) and increase the value until the system finds the optimal value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese-to-English", "sec_num": "3.2.1." }, { "text": "Block selection for Japanese-to-English translation takes place in three steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." 
}, { "text": "Step 1 \u2212 Morphological analysis as a preprocessing to TM training, [7] .", "cite_spans": [ { "start": 67, "end": 70, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." }, { "text": "Step 2 \u2212 Block selection from high recall word alignments & filtering according to the source and target length ratio.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." }, { "text": "Step 3 \u2212 Merge blocks with the same source phrase to be translated into punctuations . and ?.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." }, { "text": "Morphological analysis: Japanese overtly marks the sentence types using sentence particles, as in (4) and (5):(4) \u9769 \u898b\u672c \u3092 \u307f\u305b \u3066 \u3044\u305f\u3060\u3051 \u307e\u3059 \u304b \u3002", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." }, { "text": "Can you show me leather samples ? (5) \u6bd2\u866b \u306b \u523a\u3055 \u308c \u307e\u3057 \u305f \u3002 I was stung by a poisonous insect .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." }, { "text": "The question sentence (4) is marked by the particle \u304b and the statement (5) by the particle \u305f. As shown in 5, the role played by a sentence particle is often repeated by a punctuation (\u3002). We delete sentence particles including \u3046, \u306d, \u304c, \u305f, \u306e, \u308f before TM training. The morphemes undergoing deletion analysis are typically those with a high null word translation probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Japanese-to-English", "sec_num": "3.2.2." }, { "text": "We obtain word alignments between English and morphologically analyzed Japanese parallel corpus. 
We apply the projection algorithm to high recall word alignments and filter out blocks satisfying the condition (6). (6) target phrase length > source phrase length * 1.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block selection and filtering:", "sec_num": null }, { "text": "The target-to-source phrase length ratio, 1.5 in (6), is determined empirically in the manner described for C2E translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block selection and filtering:", "sec_num": null }, { "text": "Merge blocks with fixed translations: We merge blocks containing source phrases to be translated into the question marker \"?\" and the period \".\" to ensure that these source phrases are always correctly translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block selection and filtering:", "sec_num": null }, { "text": "Performance improvement by block selection from high recall word alignments and filtering is shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Block selection and filtering:", "sec_num": null }, { "text": "Table 3 : Impact of block selection from high recall word alignments and filtering (BLEU). Baseline: J2E 0.2924, C2E 0.2664; Union + Filtering: J2E 0.3249, C2E 0.2895.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Block selection and filtering:", "sec_num": null }, { "text": "Japanese and English word orders display a high degree of distortion primarily because the Japanese default word order is subject-object-verb whereas the English default word order is subject-verb-object, as in (7) . 
7[\u30b8\u30e3\u30b1\u30c3\u30c8 \u3092] object [\u63a2\u3057 \u3066 \u3044 \u307e\u3059] verb \u3002 I'm looking for a jacket.", "cite_spans": [ { "start": 211, "end": 214, "text": "(7)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "We also observe word order discrepancies between Chinese and English questions, as indicated by the underlines in (8) .", "cite_spans": [ { "start": 114, "end": 117, "text": "(8)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "(8) \u65e5\u672c \u822a\u7a7a \u516c\u53f8 \u7684 \u67dc\u53f0 \u5728 \u54ea\u91cc \uff1f Japan airline counter is where Where is the Japan airline counter?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "We identify words and phrases that indicate a high degree of distortion between the source and the target sentences, for example, by viterbi alignment. We then reorder the source language sentence into the target language word order, as in (9) and 10:", "cite_spans": [ { "start": 240, "end": 243, "text": "(9)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "(9) [\u30b8\u30e3\u30b1\u30c3\u30c8 \u3092] object [\u63a2\u3057 \u3066 \u3044 \u307e\u3059] verb \u3002 \u2192 [\u63a2\u3057 \u3066 \u3044 \u307e\u3059] verb [\u30b8\u30e3\u30b1\u30c3\u30c8 \u3092] object \u3002 (10) \u65e5\u672c \u822a\u7a7a \u516c\u53f8 \u7684 \u67dc\u53f0 \u5728 \u54ea\u91cc \uff1f \u2192 \u54ea\u91cc \u5728 \u65e5\u672c \u822a\u7a7a \u516c\u53f8 \u7684 \u67dc\u53f0\uff1f", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." 
}, { "text": "With reordering of source language sentences, we obtain two sets of parallel corpora: one in which no reordering is applied, and the other in which reordering is applied to the source language corpus. We acquire two sets of blocks from the two sets of parallel training corpora. We combine the two sets of blocks and recompute the block unigram probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "Performance improvement by reordering and block combination is shown in Table 4 : Impact of reordering and block combination", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "We conjecture that the performance improvement by reordering and block combination is partially due to improvement in HMM word alignment. As pointed out in [8] , HMM alignment is good at capturing local distortion whereas distortion models in the IBM source channel models are better at capturing long distance distortion. Reordering source language sentences into the target language word order results in either monotone alignment or local distortion between the source and the target languages.", "cite_spans": [ { "start": 156, "end": 159, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Reordering and block combination", "sec_num": "3.3." }, { "text": "We derive a list of Chinese vocabulary and word bigrams from the word segmented Chinese training corpus. We apply unknown word segmentation as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "For each word w in the input, check to see if w occurs in the vocabulary list. 
If w does not occur in the vocabulary list, compute all possible segmentations of w at each character position. For example, if w consists of three characters C 1 C 2 C 3 , then there are four possible segmentations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Segmentation 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "C 1 C 2 C 3 Segmentation 2: C 1 C 2 | C 3 Segmentation 3: C 1 | C 2 C 3 Segmentation 4: C 1 | C 2 | C 3 (| marks a sub-word boundary)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "For each segmentation, check whether each pair of adjacent sub-words occurs in the bigram list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "i. Select the segmentation with the fewest sub-words not covered by bigrams. Suppose Segmentation 2 and Segmentation 4 contain the bigram sequences shown below (bigrams seen in the training corpus are noted in parentheses):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Segmentation 2: C 1 C 2 | C 3 (seen bigram: C 1 C 2 , C 3 ) Segmentation 4: C 1 | C 2 | C 3 (seen bigram: C 1 , C 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "All sub-words in Segmentation 2 are covered by bigrams, whereas C 3 is not covered by a bigram in Segmentation 4. Therefore, Segmentation 2 is selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "ii. 
If more than one segmentation is equally covered by bigrams, select the segmentation with the fewest sub-words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Suppose Segmentation 2 and Segmentation 4 are covered by bigrams as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Segmentation 2: C 1 C 2 | C 3 Segmentation 4: C 1 | C 2 | C 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Segmentation 2 is chosen in this case since it contains two sub-words, whereas Segmentation 4 contains three sub-words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "iii. If more than one segmentation is equally covered by bigrams and contains the same number of sub-words, the segmentation with the most characters in the first sub-word is selected. Consider Segmentation 2 and Segmentation 3, shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Segmentation 2: C 1 C 2 | C 3 Segmentation 3: C 1 | C 2 C 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "Since the first sub-word in Segmentation 2 contains 2 characters and the first sub-word in Segmentation 3 contains 1 character, Segmentation 2 is selected. 
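The three selection criteria above can be sketched as follows; this is a hedged illustration with an assumed representation (the bigram list as a set of adjacent sub-word pairs), not IBM's implementation.

```python
from itertools import combinations

def segmentations(word):
    # Enumerate every way to split the character sequence at each position.
    n = len(word)
    segs = []
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = (0,) + cuts + (n,)
            segs.append([word[a:b] for a, b in zip(bounds, bounds[1:])])
    return segs

def uncovered(seg, bigrams):
    # A sub-word counts as covered if it forms a seen bigram with a neighbor.
    covered = set()
    for i in range(len(seg) - 1):
        if (seg[i], seg[i + 1]) in bigrams:
            covered.update({i, i + 1})
    return len(seg) - len(covered)

def best_segmentation(word, bigrams):
    # Tie-broken selection: (i) fewest uncovered sub-words,
    # (ii) fewest sub-words, (iii) longest first sub-word.
    return min(segmentations(word),
               key=lambda s: (uncovered(s, bigrams), len(s), -len(s[0])))
```

For example, with the characters 'abc' standing in for C 1 C 2 C 3 and the pair ('ab', 'c') seen in training, best_segmentation('abc', {('ab', 'c')}) returns ['ab', 'c'], mirroring the Segmentation 2 case above.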
Performance improvement by unknown word segmentation is shown in Table 5 (Impact of unknown word segmentation).", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Chinese unknown word segmentation", "sec_num": "3.4." }, { "text": "We adopt a skip operation for non-monotone decoding, [12] , to capture the word order variations between the source and the target languages.", "cite_spans": [ { "start": 53, "end": 57, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "Skip is applied to delay translating one or more source phrases in case the current target phrase should be placed after subsequent target phrases to generate an accurate target sentence word order. We explain the intuition using example (7):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "(7) [ジャケット を] object [探し て い ます] verb 。", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "I'm looking for a jacket.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "Suppose there are two blocks shown in (11) and (12), which cover the entire source word sequence.", "cite_spans": [ { "start": 38, "end": 42, "text": "(11)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." 
}, { "text": "(11) a jacket | \u30b8\u30e3\u30b1\u30c3\u30c8 \u3092 (12) I\u00b4m looking for | \u63a2\u3057 \u3066 \u3044 \u307e\u3059", "cite_spans": [ { "start": 24, "end": 28, "text": "(12)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "To decode the sentence (7), the system selects the blocks (11) and (12) . If the system processes the blocks (11) and (12) monotonically, it will produce an inaccurate translation output \"a jacket I\u00b4m looking for .\" On the other hand, if the system skips to process the block (11) until after it processes (12) and translates \u63a2 \u3057 \u3066 \u3044 \u307e \u3059 first, it will produce an accurate translation output \"I\u00b4m looking for a jacket.\"", "cite_spans": [ { "start": 58, "end": 62, "text": "(11)", "ref_id": "BIBREF10" }, { "start": 67, "end": 71, "text": "(12)", "ref_id": "BIBREF11" }, { "start": 109, "end": 113, "text": "(11)", "ref_id": "BIBREF10" }, { "start": 118, "end": 122, "text": "(12)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "We impose two sets of constraints on skip, stated in (13) and (14) , to prune out highly improbable word order sequences in advance. (13) Do not skip a block whose source phrase ends with a delimiter. (14) Do not skip across a block whose source phrase starts with a delimiter.", "cite_spans": [ { "start": 53, "end": 57, "text": "(13)", "ref_id": "BIBREF12" }, { "start": 62, "end": 66, "text": "(14)", "ref_id": "BIBREF13" }, { "start": 133, "end": 137, "text": "(13)", "ref_id": "BIBREF12" }, { "start": 201, "end": 205, "text": "(14)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "Delimiters are a set of punctuations and function words across which word orders do not change. 
Any source word occurring to the left of a delimiter should occur to the left of the (translation of the) delimiter in its translation. Any source word occurring to the right of a delimiter should occur to the right of the (translation of the) delimiter in its translation. The set of delimiters we use includes, but is not restricted to, {. ? , 。、か 吗}. Delimiters can be automatically acquired by identifying the source words for which there is no crossing between the words to their left and the words to their right in Viterbi alignment for each language pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "Performance improvement by skip is shown in Table 6 (Impact of skip operation in decoding).", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 51, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Skip operation in decoding", "sec_num": "3.5." }, { "text": "Instead of computing trigram language model probabilities only for words occurring at target phrase boundaries, we compute LM probabilities for each word in a target phrase, as schematically shown in (15) , where e h-1 is the second-to-last word in e i-1 .", "cite_spans": [ { "start": 200, "end": 204, "text": "(15)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Language model probability computation", "sec_num": "3.6." }, { "text": "e 1 and e 2 are the first and the second word of e i , respectively, and e j is the j th word in e i , where j > 2. The LM scores for e 1 and e j (j > 1) may be differentiated by different weights, denoted as \u03b1 in (15b, c). The value for \u03b1 may be parameterized for different language pairs. 
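Under assumptions about the form of (15), which is not recoverable from this version of the text, the per-word LM cost of a target phrase can be sketched as follows. Here lm is an assumed trigram model returning p(w | u, v), and applying \u03b1 only to the phrase-internal words follows the cost formulation described in the footnote; both the function and its exact weighting are illustrative, not the paper's implementation.

```python
import math

def phrase_lm_cost(phrase, h2, h1, lm, alpha=1.0):
    # Negative-log-likelihood cost of a target phrase, scoring every word.
    # h2, h1: the last two target words of the hypothesis so far.
    # First word e_1: its trigram history crosses the phrase boundary.
    cost = -math.log(lm(phrase[0], h2, h1))
    u, v = h1, phrase[0]
    for w in phrase[1:]:
        # Phrase-internal words e_j (j > 1), weighted by alpha.
        cost += alpha * -math.log(lm(w, u, v))
        u, v = v, w
    return cost
```

With a uniform toy model lm = lambda w, u, v: 0.25 and alpha = 1, a three-word phrase costs 3 * (-log 0.25); raising alpha increases only the contribution of the words inside the phrase.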
Once we compute the LM probability for each word in a target phrase, the system tends to generate fewer words than when we compute the LM probability for words at phrase boundaries only. (Footnote 6: With the skip operation, applying reordering to the Japanese input does not yield better performance than not reordering; the BLEU scores for Japanese-to-English translation in Tables 6 and 7 are obtained from the Japanese input without reordering. Footnote 7: We have set \u03b1 to 1 for Japanese-to-English and 1.21 for Chinese-to-English translation in the LM cost formula, where the LM probability is represented as a cost using the sum of negative log likelihoods.)", "cite_spans": [], "ref_spans": [ { "start": 674, "end": 688, "text": "Tables 6 and 7", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Language model probability computation", "sec_num": "3.6." }, { "text": "We offset this side effect by adjusting the word generation penalty, [13] , so that the system produces more words in the translation output without losing accuracy.", "cite_spans": [ { "start": 69, "end": 73, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Language model probability computation", "sec_num": "3.6." }, { "text": "Performance improvement by the refined LM probability computation and word generation penalty is shown in Table 7 (Impact of refined LM probability computation).", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 113, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Language model probability computation", "sec_num": "3.6."
}, { "text": "While the experimental results in previous sections indicate that improvements in block selection and decoding techniques improve the translation quality independent of each other, there is an indication that the performance improvement by skip correlates with various block selection techniques. Table 8 shows the apparent correlation in performance improvement by skip according to various block selection techniques for Japanese-to-English translation. Block selection from intersection \u2212 high precision word alignment \u2212 according to the extension algorithm is used in our baseline system. Block selection from union \u2212 high recall word alignment \u2212 results in a performance improvement over the baseline (BLEU score improvement from 0.2924 to 0.3100). Combining blocks derived from reordered and un-reordered training corpora using high recall word alignment (union) and filtering results in the best performance with the baseline decoding (BLEU score 0.3460).", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 304, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Correlation between block selection and skip", "sec_num": "3.7." }, { "text": "Performance improvement by skip is most significant with the block selection technique which results in the highest BLEU score, i.e. 22.2% improvement. We posit that a good block selection technique is more likely to generate blocks whose source phrases coincide with natural units for reordering, e.g. the object \u30b8\u30e3\u30b1\u30c3\u30c8 \u3092 and the verbs \u63a2\u3057 \u3066 \u3044 \u307e\u3059 in (7), accounting for the significant performance improvement by skip.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block selection", "sec_num": null }, { "text": "However, performance improvement by the refined LM probability is least significant with the block selection which results in the highest BLEU score, i.e. 1.9% improvement. 
We attribute this to an overlap in the roles played by the LM probability and other constraints on block selection. Computing the LM probability of each word in a target phrase is equivalent to filtering out candidate blocks whose target phrases are less likely under the LM than others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Block selection", "sec_num": null }, { "text": "We address issues to be worked out before adopting an automatic evaluation metric as the single means of conversational speech translation evaluation: (i) characteristics of spoken language dialogs which typically do not occur in written texts, and yet significantly contribute to the information content of the entire utterance, and (ii) the lack of correlation between human and automatic evaluations. All examples in this section are taken from the BTEC training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spoken language translation evaluation", "sec_num": "4." }, { "text": "Speech Act: Spoken language dialogs crucially depend for successful communication on speech acts such as questions, requests, and suggestions, as well as statements, [17] , [18] , [20] . For instance, out of 20k segments in the BTEC training corpus for Japanese-to-English translation, 7,438 segments contain questions (denoted by the question marker ?), and at least 1,775 segments contain requests (denoted by the phrase please). Examples are given in (16)\u2212 (19) . The fact that (16)\u2212(18) are questions -as opposed to a statement, as in \"I would like to go to the zoo.\" -can be construed only from the question mark \"?\". 
8 The fact that (19) is a request (as opposed to a question, as in \"Do you have a seat in the back?\") can be construed from the function word \"please\".", "cite_spans": [ { "start": 162, "end": 166, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 169, "end": 173, "text": "[18]", "ref_id": "BIBREF17" }, { "start": 176, "end": 180, "text": "[20]", "ref_id": "BIBREF19" }, { "start": 456, "end": 460, "text": "(19)", "ref_id": "BIBREF18" }, { "start": 617, "end": 618, "text": "8", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Characteristics of spoken language dialogs", "sec_num": "4.1." }, { "text": "Negation: Spoken language dialogs often center around the notion of affirmation/negation, especially if the utterances are expressed by yes-no questions. Out of 20k segments in the BTEC training corpus for Japanese-to-English translation, 329 segments contain some form of negation. Negation typically applies over the entire utterance, and incorrect translation of negation often leads to an interpretation opposite to what the speaker intended.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characteristics of spoken language dialogs", "sec_num": "4.1." }, { "text": "Examples (16)\u2212(19) suggest that punctuation plays a major role in the accurate interpretation of speech acts in conversational speech translation, and therefore should be included as legitimate vocabulary items in the evaluation. The real question is how much weight should be given to the information conveyed by the speech act. Speaking in terms of BLEU, is it sufficient to treat the speech act as one more vocabulary item and subsume it under the modified precision and brevity penalty, or do we need a third parameter \u2212 speech act \u2212 in the scoring formula, with an appropriate weight assigned to it? (20)-(23) indicate that the information conveyed by negation is more significant than that conveyed by other lexical items. 
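To make this concrete, a minimal n-gram precision sketch (in Python; `clipped_precision` is an illustrative helper, not the BLEU implementation used in the evaluation) shows that losing the meaning-flipping n't in (23) and losing the benign adverb quite are indistinguishable under modified n-gram precision:

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(hyp, ref, n):
    # modified (clipped) n-gram precision, as used in BLEU
    hyp_counts = Counter(ngrams(hyp, n))
    ref_counts = Counter(ngrams(ref, n))
    overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
    return overlap / max(1, sum(hyp_counts.values()))

ref = "i do n't quite understand .".split()
lost_negation = "i do quite understand .".split()  # meaning flipped
lost_adverb = "i do n't understand .".split()      # meaning preserved

for hyp in (lost_negation, lost_adverb):
    print(clipped_precision(hyp, ref, 1), clipped_precision(hyp, ref, 2))
# both hypotheses score identically: unigram precision 1.0, bigram precision 0.75
```

Both outputs receive the same unigram and bigram precision even though one of them reverses the meaning of the utterance; this is precisely the sense in which negation would need extra weight in the metric.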
Loss of negation in (23), as in \"I do quite understand\", is very likely to result in a communication failure, whereas loss of the adverb quite, as in \"I don't understand\", is not.", "cite_spans": [ { "start": 13, "end": 17, "text": "(19)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Characteristics of spoken language dialogs", "sec_num": "4.1." }, { "text": "Given the significant role played by speech acts and negation, [19] , it seems worthwhile to conduct experiments to precisely measure their impact on overall translation quality and to incorporate them into an automatic evaluation metric accordingly. Table 9 shows the ranks of our system (out of 9 systems) submitted to the Chinese-to-English unrestricted data track. BLEU, GTM, NIST, PER and WER are the five automatic evaluation metrics used in the evaluation. Apparently, automatic evaluations and the Human-Adequacy judgment do not correlate, contrary to what has been reported in previous studies \u2212 [9] for BLEU, [14] for GTM, [15] for NIST \u2212 which all report a strong correlation between automatic and human evaluations. The lack of correlation between the human adequacy judgment and automatic evaluations might be attributed largely to two factors: one is the different genre of the material, and the other the different evaluation parameters.", "cite_spans": [ { "start": 63, "end": 67, "text": "[19]", "ref_id": "BIBREF18" }, { "start": 603, "end": 606, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 617, "end": 621, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 631, "end": 635, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 247, "end": 254, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Characteristics of spoken language dialogs", "sec_num": "4.1."
}, { "text": "Genre: The current evaluation focuses on spoken dialogs consisting of short sentences (8.7 words/sentence on average for Japanese and 7.6 words/sentence on average for Chinese) with many variations in dialog acts (e.g. question, statement, request, etc.), whereas the previous studies focus mainly on written news texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between human and automatic evaluations", "sec_num": "4.2." }, { "text": "Evaluation Parameters: The current evaluation evaluates all lowercased translation output without any punctuations. In addition, part-of-speech tagging is applied to automatic evaluations but not to human evaluations. 9 However, previous studies -reporting a strong correlation between automatic and human evaluations -base their studies on translation output and reference translations where both punctuations and upper/lowercase distinctions are preserved. We have pointed out in Section 4.1 the potential significance of punctuations in conversational speech translation. [15] reports that upper/lower case distinction needs to be preserved in order for automatic evaluations to correlate with human evaluations. Furthermore, none of the previous studies have applied part-of-speech tagging in automatic evaluations.", "cite_spans": [ { "start": 218, "end": 219, "text": "9", "ref_id": "BIBREF8" }, { "start": 575, "end": 579, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Correlation between human and automatic evaluations", "sec_num": "4.2." }, { "text": "Setting all evaluation parameters the same for the current evaluation as previous studies would shed light on the cause for the lack of correlation between human and automatic evaluations. 
If it turns out that human evaluations still do not correlate with automatic evaluations even after setting all evaluation parameters the same, it would indicate that conversational speech translation requires a new evaluation metric to adequately capture the characteristics of spoken language dialogs not present in written texts. Table 10 : Chinese-to-English automatic evaluation scores BLEU and GTM/NIST scores do not correlate with each other. The BLEU score is highest for the system C2E_1, whereas the GTM/NIST scores are highest for the system C2E_3. Our experiments on the C-STAR 2003 development test set show that a BLEU score difference of about 0.03 is statistically significant at the 95% confidence interval, indicating that the BLEU score difference of 0.033 between C2E_1 (0.3619) and C2E_3 (0.3289) is very likely to be statistically significant. 10 With the caveat that the evaluation parameters of the current evaluation differ from those of previous studies, the scores in Table 10 suggest that some automatic evaluation metrics may fit spoken language translation evaluation better than others. Note that BLEU, GTM and NIST all incorporate the notion of precision and the length ratio between the translation output and the reference translation into their scoring formulas. BLEU and GTM crucially differ in how the length ratio is computed. The brevity penalty (BP) plays a less significant role than precision in BLEU, whereas recall plays a role as important as precision in GTM. Spoken language translation evaluation could serve as a test bed for determining which treatment of the length ratio best fits the overall translation evaluation task, a distinction not easily made in an evaluation of written news texts. 
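The difference between the two length treatments can be made concrete with a small sketch (illustrative toy numbers, not the Table 10 scores; `f_measure` stands in for GTM's length treatment, which GTM actually computes over maximum matchings rather than raw counts): BLEU multiplies precision by a brevity penalty that fires only when the output is shorter than the reference, while a GTM-style F-measure folds recall in symmetrically:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # BLEU-style: penalize only hypotheses shorter than the reference
    return 1.0 if hyp_len >= ref_len else math.exp(1.0 - ref_len / hyp_len)

def f_measure(precision, recall):
    # GTM-style: harmonic mean, so recall weighs as much as precision
    return 2 * precision * recall / (precision + recall)

# toy case: a 6-word output against an 8-word reference, all 6 words matched
hyp_len, ref_len, matches = 6, 8, 6
precision = matches / hyp_len   # 1.0
recall = matches / ref_len      # 0.75

bleu_like = precision * brevity_penalty(hyp_len, ref_len)  # ~0.717
gtm_like = f_measure(precision, recall)                    # ~0.857
print(round(bleu_like, 3), round(gtm_like, 3))
```

The same short-but-precise output is scored quite differently under the two length treatments, which is exactly the kind of behavior a spoken language test bed could differentiate.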
11 ", "cite_spans": [ { "start": 1045, "end": 1047, "text": "10", "ref_id": "BIBREF9" }, { "start": 1944, "end": 1946, "text": "11", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 522, "end": 530, "text": "Table 10", "ref_id": "TABREF8" }, { "start": 1174, "end": 1182, "text": "Table 10", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Correlation between human and automatic evaluations", "sec_num": "4.2." }, { "text": "Recent success in machine translation of texts with the adoption of automatic evaluation metric BLEU indicates that a good evaluation metric correlating well with human judgments drives the machine translation technology development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "5." }, { "text": "To come up with a good spoken language translation evaluation metric, however, there are at least two major issues to be worked out. First, we need to figure out what is the correct format of the reference translations to be used by human assessors. A good first approximation might be the format consistent with human transcriptions of speech. Second, we need to 10 [Paul et al. 2004 ] also show the lack of correlation between BLEU and NIST scores of the systems evaluated in C-STAR spoken language translation evaluation in 2003. 11 According to [Melamed et al. 2003 ], BLEU and GTM both correlate well with human adequacy judgments on documents of more than 10 segments with more than 1 reference translation.", "cite_spans": [ { "start": 364, "end": 384, "text": "10 [Paul et al. 2004", "ref_id": null }, { "start": 533, "end": 535, "text": "11", "ref_id": "BIBREF10" }, { "start": 549, "end": 569, "text": "[Melamed et al. 2003", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "5." 
}, { "text": "factor out characteristics of spoken language not present in written texts and decide whether or not these need to be introduced as independent parameters in the evaluation metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "5." }, { "text": "We believe that the notion of precision and brevity penalty in BLEU are applicable to all types of machine translation quality evaluations, and should serve as the baseline parameters for an improved spoken language translation evaluation metric which will drive a rapid improvement of the technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "5." }, { "text": "Morpheme is defined to be the minimal unit of meaning, and may or may not overlap with words, e.g. the Japanese object case marker \u3092 and English plural marker -s are a morpheme but not a word, whereas president in English \u3053\u308c `this' in Japanese are both a morpheme and a word.2 In case the length of i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Basic Traveler's Expression Corpus distributed for the supplied data track training. 4 English-Chinese parallel corpus distributed by Foreign Broadcast Information Service.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Conditions (i) to (iii) can be easily subsumed by incorporating language model probabilities derived from the training corpus,[10],[11], for language model based word segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Yuqing Gao for the Chinese-English parallel corpus we have used in the Chinese-to-English unrestricted data track and Fei Xia for her Chinese word segmentation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6." 
} ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The mathematcs of statistical machine translation: parameter estimation", "authors": [ { "first": "P", "middle": [], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Brown, V. Della Pietra, S. Della Pietra, and R. Mercer. \"The mathematcs of statistical machine translation: parameter estimation\", Computational Linguistics, 19(2):263\u2212311, 1993.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint Conference of Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, C. Tillmann, and H. Ney. 
\"Improved alignment models for statistical machine translation\", Proceedings of the Joint Conference of Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20\u221228, 1999.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A syntax-based statistical translation model", "authors": [ { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39 th ACL\u22122001 Conference", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yamada and K. Knight. \"A syntax-based statistical translation model\", Proceedings of the 39 th ACL\u22122001 Conference, pages 523\u2212530, 2001.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A phrase-based, joint probability model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "W", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Marcu and W. Wong. \"A phrase-based, joint probability model for statistical machine translation\", Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, 2002.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A projection extension algorithm for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann. 
\"A projection extension algorithm for statistical machine translation\", Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 1\u22128, 2003.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical phrase-based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT\u2212NAACL 2003", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. J. Och, and D. Marcu. \"Statistical phrase-based translation\", Proceedings of HLT\u2212NAACL 2003, pages 48\u221254, 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Morphological analysis for statistical machine translation", "authors": [ { "first": "Y-S", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT\u2212NAACL 2004: Companion Volume", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y-S. Lee. \"Morphological analysis for statistical machine translation\", Proceedings of HLT\u2212NAACL 2004: Companion Volume, pages 57\u221260, 2004.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of COLING\u221296", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel, H. Ney, and C. Tillmann. 
\"HMM-based word alignment in statistical translation\", Proceedings of COLING\u221296, pages 836\u2212841, 1996.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bleu: A method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40 th Annual Meeting of ACL 2002", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. \"Bleu: A method for automatic evaluation of machine translation\", Proceedings of the 40 th Annual Meeting of ACL 2002, pages 311\u2212318, 2002.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An iterative algorithm to build Chinese language models", "authors": [ { "first": "X", "middle": [], "last": "Luo", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Annual Meeting of ACL 1996", "volume": "", "issue": "", "pages": "139--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Luo and S. Roukos. 
\"An iterative algorithm to build Chinese language models\", Proceedings of the Annual Meeting of ACL 1996, pages 139\u2212143, 1996.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Language model based Arabic word segmentation", "authors": [ { "first": "Y-S", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "O", "middle": [], "last": "Emam", "suffix": "" }, { "first": "H", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41 st Annual Meeting of ACL 2003", "volume": "", "issue": "", "pages": "399--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y-S. Lee, K. Papineni, S. Roukos, O. Emam, and H. Hassan. \"Language model based Arabic word segmentation\", Proceedings of the 41 st Annual Meeting of ACL 2003, pages 399\u2212406, 2003.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Word reordering and a DP beam search algorithm for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": null, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "97--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann and H. Ney. \"Word reordering and a DP beam search algorithm for statistical machine translation\", Computational Linguistics, 29(1):97\u2212133.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improvements in phrasebased statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT-NAACL 2004", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney. 
\"Improvements in phrase- based statistical machine translation\", Proceedings of HLT-NAACL 2004, pages 257\u2212264, 2004.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Precision and recall of machine translation", "authors": [ { "first": "D", "middle": [], "last": "Melamed", "suffix": "" }, { "first": "R", "middle": [], "last": "Green", "suffix": "" }, { "first": "J", "middle": [], "last": "Turian", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT\u2212NAACL 2004", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Melamed, R. Green, and J. Turian. \"Precision and recall of machine translation\", Proceedings of HLT\u2212NAACL 2004, 2004.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Analysis of NIST Evaluation Data", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "NIST presentation at DARPA IAO Machine Translation Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Doddington. \"Analysis of NIST Evaluation Data\", NIST presentation at DARPA IAO Machine Translation Workshop, Santa Monica, CA, USA, July 22-23, 2002.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards innovative evaluation methodologies for speech translation", "authors": [ { "first": "M", "middle": [], "last": "Paul", "suffix": "" }, { "first": "H", "middle": [], "last": "Nakaiwa", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2004, "venue": "Working Notes of NTCIR\u22124", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Paul, H. Nakaiwa, and M. Federico. 
\"Towards innovative evaluation methodologies for speech translation\", Working Notes of NTCIR\u22124, Tokyo, 2\u22124 June 2004.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Interlingua-based English-Korean two-way speech translation of doctor-patient dialogues with CCLINC", "authors": [ { "first": "Y-S", "middle": [], "last": "Lee", "suffix": "" }, { "first": "D", "middle": [], "last": "Sinder", "suffix": "" }, { "first": "C", "middle": [], "last": "Weinstein", "suffix": "" } ], "year": 2002, "venue": "Machine Translation", "volume": "17", "issue": "", "pages": "213--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y-S. Lee, D. Sinder, and C. Weinstein. \"Interlingua-based English-Korean two-way speech translation of doctor-patient dialogues with CCLINC\", Machine Translation, 17(3):213\u2212243, 2002.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Janus\u2212III translation system: speech-to-speech translation in multiple domains", "authors": [ { "first": "L", "middle": [], "last": "Levin", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "M", "middle": [], "last": "Woszczyna", "suffix": "" }, { "first": "D", "middle": [], "last": "Gates", "suffix": "" }, { "first": "M", "middle": [], "last": "Gavald\u00e0", "suffix": "" }, { "first": "D", "middle": [], "last": "Koll", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2000, "venue": "Machine Translation", "volume": "15", "issue": "", "pages": "3--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Levin, A. Lavie, M. Woszczyna, D. Gates, M. Gavald\u00e0, D. Koll, and A. Waibel. \"The Janus\u2212III translation system: speech-to-speech translation in multiple domains\", Machine Translation, 15:3\u221225. 
2000.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Minimizing cumulative error in discourse context", "authors": [ { "first": "Y", "middle": [], "last": "Qu", "suffix": "" }, { "first": "B", "middle": [], "last": "Dieugenio", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "L", "middle": [], "last": "Levin", "suffix": "" }, { "first": "C", "middle": [ "P" ], "last": "Rose", "suffix": "" } ], "year": 1997, "venue": "Dialogue Processing in Spoken Language Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Qu, B. DiEugenio, A. Lavie, L. Levin and C. P. Rose. \"Minimizing cumulative error in discourse context\", In Dialogue Processing in Spoken Language Systems, Springer Verlag, 1997.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "K", "middle": [], "last": "Ries", "suffix": "" }, { "first": "N", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "E", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "R", "middle": [], "last": "Bates", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "P", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "R", "middle": [], "last": "Martin", "suffix": "" }, { "first": "C", "middle": [ "V" ], "last": "Ess-Dykema", "suffix": "" }, { "first": "M", "middle": [], "last": "Meteer", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "3", "pages": "339--374", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. V. Ess- Dykema, and M. 
Meteer, \"Dialogue act modeling for automatic tagging and recognition of conversational speech\", Computational Linguistics, 26(3):339-374. 2000.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Proceedings of DARPA Machine Translation Evaluation Workshop", "authors": [ { "first": "Y", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "Niyu", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Y-S", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Papineni", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Al-Onaizan, Niyu Ge, Y-S. Lee, K. Papineni, \"IBM Site Report\", Proceedings of DARPA Machine Translation Evaluation Workshop, Alexandria, VA, USA, June 22-23, 2004.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving a statistical MT system with automatically learned rewrite patterns", "authors": [ { "first": "F", "middle": [], "last": "Xia", "suffix": "" }, { "first": "M", "middle": [], "last": "Mccord", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING-2004", "volume": "", "issue": "", "pages": "508--514", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Xia and M. McCord, \"Improving a statistical MT system with automatically learned rewrite patterns\", Proceedings of COLING-2004, pages 508-514, 2004.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "projects source intervals into target intervals. The pair ([f\u00b4, f], [e\u00b4, e]) defines a block alignment link a.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "word sequence in the translation hypothesis. e h is the last word in e 1 i\u22121", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "To the zoo ?(17) This row empty ? (18) And the number and name of the person you are calling ? 
(19) A seat in the back, please.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Speech acts are often denoted by sentence particles in Japanese such as \u304b for a question, \u3046 for a proposal, \u305f for a statement, as well as the phrase \u304f\u3060\u3055\u3044 for a request. Japanese-to-English translation, 329 segments contain some form of negation, as shown in (20)-(23). (20) I ca n't have dessert, really . (21) No, I just got here . (22) Do n't take too much off the top . (23) I do n't quite understand .", "num": null, "uris": null, "type_str": "figure" }, "TABREF2": { "num": null, "type_str": "table", "text": "", "content": "
Languages J2E C2E
Baseline 0.2924 0.2664
Union + Filtering 0.3249 0.2895
Reorder+Combine blocks 0.3460 0.2957
", "html": null }, "TABREF4": { "num": null, "type_str": "table", "text": "", "content": "
6
Languages J2E C2E
Baseline 0.2924 0.2664
Union + Filtering 0.3249 0.2895
Reorder+Combine blocks 0.3460 0.2957
Unknown word segmentation 0.3111
Skip in decoding 0.4228 0.3470
", "html": null }, "TABREF8": { "num": null, "type_str": "table", "text": "shows some of our automatic evaluation scores of Chinese-to-English translations. This information is due to personal communications with", "content": "
Michael Paul.
", "html": null } } } }