{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:23:22.288754Z" }, "title": "EBMT, SMT, Hybrid and More: ATR Spoken Language Translation System", "authors": [ { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "eiichiro.sumita@atr.jp" }, { "first": "Yasuhiro", "middle": [], "last": "Akiba", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Takao", "middle": [], "last": "Doi", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Andrew", "middle": [], "last": "Finch", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Kenji", "middle": [], "last": "Imamura", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Hideo", "middle": [], "last": "Okuma", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Mitsuo", "middle": [], "last": "Shimohata", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "ATR Spoken Language Translation Research Laboratories", "institution": "Keihanna Science City", "location": { "addrLine": "2-2-2 Hikaridai", "postCode": "619-0288", "settlement": "Kyoto", "country": "JAPAN" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces ATR's project named Corpus-Centered Computation (C3), which aims at developing a translation technology suitable for spoken language translation. C3 places corpora at the center of its technology. 
Translation knowledge is extracted from corpora, translation quality is gauged by referring to corpora, the best translation among multiple-engine outputs is selected based on corpora, and the corpora themselves are paraphrased or filtered by automated processes to improve the data quality on which translation engines are based. In particular, this paper reports the hybridization architecture of different machine translation systems, our technologies, their performance on the IWSLT04 task, and paraphrasing methods.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces ATR's project named Corpus-Centered Computation (C3), which aims at developing a translation technology suitable for spoken language translation. C3 places corpora at the center of its technology. Translation knowledge is extracted from corpora, translation quality is gauged by referring to corpora, the best translation among multiple-engine outputs is selected based on corpora, and the corpora themselves are paraphrased or filtered by automated processes to improve the data quality on which translation engines are based. In particular, this paper reports the hybridization architecture of different machine translation systems, our technologies, their performance on the IWSLT04 task, and paraphrasing methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "There are two main strategies used in corpus-based translation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "1. Example-Based Machine Translation (EBMT) [1] :", "cite_spans": [ { "start": 44, "end": 47, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "EBMT uses the corpus directly. EBMT retrieves the translation examples that are best matched to an input expression and then adjusts the examples to obtain the translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "SMT learns statistical models for translation from corpora and dictionaries and then searches for the best translation at run-time according to the statistical models for language and translation. By using the IWSLT04 task, this paper describes two endeavors that are independent at this moment: (a) a hybridization of EBMT and statistical models, and (b) a new approach for SMT, phrase-based HMM. (a) is used in the \"unrestricted\" Japanese-to-English track (Section 2), and (b) is used in \"supplied\" Japanese-to-English and Chinese-to-English tracks (Section 3). In addition, paraphrasing technologies, which are not used in the IWSLT04 task but boost translation performance, are also introduced in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Machine Translation (SMT) [2]:", "sec_num": "2." }, { "text": "No complete translation system has emerged nor is likely to emerge in the foreseeable future. Every approach to translation has its own way of acquiring translation knowledge and using the knowledge. Each system generates its peculiar errors in attempting translation. As a result, translation performance differs sentence-by-sentence, system-bysystem. There is the possibility of boosting translation performance through exploitation of multiple translations generated by different systems. 
Among several possible architectures to integrate multiple translation engines (Section 2.5), we demonstrate the acrchitecture below (Sections from 2.1 to 2.4) as one effective approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid MT System (Unrestricted J-to-E Track)", "sec_num": "2." }, { "text": "It is important to integrate \"different\" types of element machine translation systems in order to boost the overall performance by having them compensate each other. We propose an architecture in which multiple EBMT engines work in parallel and their outputs are passed to a post-process that selects the best candidate according to SMT models. Most EBMT systems employ phrases or sentences as the translation unit so that they can translate while taking a wider perspective in order to handle case relations, idiomatic expressions, sentence structure, and so on. However, when there is ambiguity in translation, EBMT selects the best translation mainly by the similarity between the input and the source part of the example. EBMT's validation of its translation is flawed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Hybridization: Multiple EBMTs Followed By A Selector Based On SMT Models", "sec_num": "2.1." }, { "text": "On the other hand, SMT employing IBM models translates an input sentence by a combination of word transfer and word re-ordering. Therefore, when it is applied to a language pair in which the word order is much different (e.g. English and Japanese), it is difficult to find a globally optimal solution due to the enormous search space. However, SMT can sort translations in the order of their quality according to its statistical models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Hybridization: Multiple EBMTs Followed By A Selector Based On SMT Models", "sec_num": "2.1." }, { "text": "We show two different EBMT systems here, briefly explain each system, and then compare them. Finally, we ex-plain the selector used to determine the best from multiple translations based on SMT models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Hybridization: Multiple EBMTs Followed By A Selector Based On SMT Models", "sec_num": "2.1." }, { "text": "Sumita [3] proposed D3 (Dp-match Driven transDucer), which exploits DP-matching between word sequences.", "cite_spans": [ { "start": 7, "end": 10, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "D3, DP-based EBMT", "sec_num": "2.2.1." }, { "text": "Let's illustrate the process with a simple sample below. Suppose we are translating a Japanese sentence into English. The Japanese input sentence (1-j) is translated into the English sentence (1-e) by utilizing the English sentence (2-e), whose source sentence (2-j) is similar to (1-j). The common parts are unchanged, and the different portions, shown in bold face, are substituted by consulting a bilingual dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D3, DP-based EBMT", "sec_num": "2.2.1." }, { "text": ";;; A Japanese input (1-j) iro/ga/ki/ni/iri/masen ;;; the most similar example in corpus (2-j) dezain/ga/ki/ni/iri/masen (2-e) I do not like the design. ;;; the English output (1-e) I do not like the color.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D3, DP-based EBMT", "sec_num": "2.2.1." }, { "text": "We retrieve the most similar source sentence of examples from a bilingual corpus. 
For this, we use DP-matching, which tells us the edit distance between word sequences while giving us the matched portions between the input and the example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D3, DP-based EBMT", "sec_num": "2.2.1." }, { "text": "The edit distance is calculated as follows. The count of the inserted words, the count of the deleted words, and the semantic distance of the substituted words are summed. Then, this total is normalized by the sum of the lengths of the input and the source part of translation example. The semantic distance between two substituted words is calculated by using the hierachy of a thesaurus [4] .", "cite_spans": [ { "start": 389, "end": 392, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "D3, DP-based EBMT", "sec_num": "2.2.1." }, { "text": "Our language resources in addition to a bilingual corpus are a bilingual dictionary, which is used for generating target sentences, and thesauri of both languages, which are used for incorporating the semantic distance between words into the distance between word sequences. Furthermore, lexical resources are also used for word alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D3, DP-based EBMT", "sec_num": "2.2.1." }, { "text": "The second EBMT is different from the first EBMT in that it parses bitexts of a parallel coupus with grammars for both source and target languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HPAT, Grammar-based EBMT", "sec_num": "2.2.2." }, { "text": "Imamura [5] proposed a new phrase alignment approach called Hierarchical Phrase Alignment (HPA). First, two sentences are tagged and parsed independently. This operation obtains two syntactic trees. Next, words are linked by the word alignment program. Then, HPA retrieves equivalent phrases that satisfy two conditions: 1) words in the pair correspond with no deficiency and no excess; 2) the phrases are of the same syntactic category.", "cite_spans": [ { "start": 8, "end": 11, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "HPAT, Grammar-based EBMT", "sec_num": "2.2.2." }, { "text": "Imamura [6] subsequently proposed HPA-based translation (HPAT). HPAed bilingual trees include all information necessary to automatically generate transfer patterns. Translation is done according to transfer patterns using the TDMT engine [7] . First, the source part of transfer patterns are utilized, and source structure is obtained. Second, structural changes are performed by mapping source patterns to target patterns. Finally, lexical items are inserted by referring to a bilingual dictionary, and then a conventional generation is performed.", "cite_spans": [ { "start": 8, "end": 11, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 238, "end": 241, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "HPAT, Grammar-based EBMT", "sec_num": "2.2.2." }, { "text": "Finally, Imamura [8] proposed a feedback cleaning method that utilizes automatic evaluation to remove incorrect/redundant translation rules. BLEU was utilized to measure translation quality for the feedback process, and the hillclimbing algorithm was applied in searching for the combinatorial optimization. Utilizing the features of this task, incorrect/redundant rules were removed from the initial solution, which contains all rules acquired from the training corpus. 
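For concreteness, the following sketch outlines the kind of BLEU-driven hill-climbing loop described above; it is an illustration only, and the helpers translate_with and corpus_bleu are hypothetical stand-ins rather than the actual HPAT feedback-cleaning implementation.

```python
# Illustrative sketch of greedy feedback cleaning: drop transfer rules whose
# removal does not hurt corpus-level BLEU on a development set. The helpers
# `translate_with(rules, sources)` and `corpus_bleu(hypotheses, references)`
# are hypothetical stand-ins, not the actual HPAT code.

def feedback_clean(rules, sources, references, translate_with, corpus_bleu):
    current = list(rules)  # initial solution: all rules acquired from the corpus
    best_score = corpus_bleu(translate_with(current, sources), references)
    improved = True
    while improved:  # hill climbing: stop at a local optimum
        improved = False
        for rule in list(current):
            candidate = [r for r in current if r is not rule]
            score = corpus_bleu(translate_with(candidate, sources), references)
            if score >= best_score:  # rule is redundant or harmful
                current, best_score = candidate, score
                improved = True
    return current, best_score
```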
Our experiments showed a considerable improvement in MT quality.", "cite_spans": [ { "start": 17, "end": 20, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "HPAT, Grammar-based EBMT", "sec_num": "2.2.2." }, { "text": "As can be seen in Section 2.2.1, Section 2.2.2, and Table 1, the main difference between the two EBMT systems is in their use of grammars. D3 achieves good quality when there is a similar translation example in the parallel corpus; otherwise, D3 may fail to produce a good translation. On the contrary, HPAT produces a modest-quality translation for most of the inputs (Table 2). This is confirmed by the subjective evaluation of quality in Table 3. Here, we show MT quality by using five ranks, S, A, B, C, and D,¹ from good quality to poor quality. This is judged by English native speakers who are also familiar with Japanese. The evaluator investigates bilingual information, i.e., the source sentence and its MT output. This is an overall score that considers both adequacy and fluency, which are the particular scores used in the IWSLT evaluation campaign. The IWSLT evaluator, in contrast, makes a monolingual evaluation, i.e., compares the MT output against a reference translation made in advance by a professional translator, and judges the adequacy and fluency of the MT translation. The portion of translations with rank \"S\" for D3 is very large, while the portions of translations with ranks \"A,\" \"B,\" and \"C\" are relatively small. Thus, the slope is very steep, while the slope of HPAT is gentle.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 59, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 383, "end": 390, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Comparison of Two EBMTs", "sec_num": "2.2.3." }, { "text": "We proposed an SMT-based method of automatically selecting the best translation among outputs generated by multiple machine translation (MT) systems [9].", "cite_spans": [ { "start": 149, "end": 152, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "SMT-based Selector", "sec_num": "2.3." }, { "text": "Conventional approaches to the selection problem include a method that automatically selects the output to which the highest probability is assigned according to a language model (LM) [10]. These existing methods have two problems. First, they do not check whether the information in the source sentence is adequately translated into the MT output, although they do check the fluency of the MT output. Second, they do not take the statistical behavior of the assigned scores into consideration.", "cite_spans": [ { "start": 185, "end": 189, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "SMT-based Selector", "sec_num": "2.3." }, { "text": "The proposed approach scores MT outputs by using not only a language model but also a translation model (TM). To conduct a statistical test later, this scoring is done with each of multiple pairs of language and translation models. The method then checks whether the average TM * LM score of an MT output is significantly higher than that of another MT output. This check uses a multiple comparison test based on the Kruskal-Wallis test [11].", "cite_spans": [ { "start": 439, "end": 443, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "SMT-based Selector", "sec_num": "2.3."
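To make the selection procedure concrete, a minimal sketch follows; the model objects and their log_score methods are assumed interfaces, and the real system replaces the simple mean comparison below with a multiple comparison test based on the Kruskal-Wallis test.

```python
# Minimal sketch of SMT-model-based selection (illustration only). Each element
# of `tm_lm_pairs` is assumed to be a (translation_model, language_model) pair
# trained on a different data split, each exposing a hypothetical log_score method.
from statistics import mean

def select_best(source, mt_outputs, tm_lm_pairs):
    """Return the MT output with the highest average TM*LM score (log domain)."""
    avg_scores = []
    for hyp in mt_outputs:
        scores = [tm.log_score(source, hyp) + lm.log_score(hyp)
                  for tm, lm in tm_lm_pairs]
        avg_scores.append(mean(scores))
    best = max(range(len(mt_outputs)), key=lambda i: avg_scores[i])
    return mt_outputs[best]
```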
}, { "text": "As shown in Table 4 , all of the metrics taken together show that the proposed selector outperforms both element trans-Good: easy to understand, with either some unimportant information missing or flawed grammar; (C) Fair: broken, but understandable with effort; (D) Unacceptable: important information has been translated incorrectly. lation systems; for example, mWER is decreased by 2.55 (about 7.5% reduction) from 28.86 to 26.31. Next, the relationship between translation quality of element systems and gain by the selector was analyzed. Table 5 shows that the proposed selector reduces the number of low-quality translations (ranked \"D\") while it increases the number of high-quality translations (ranked \"S\" to \"B\").", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 544, "end": 552, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Selecting Effect", "sec_num": "2.4.1." }, { "text": "Since the methods are corpus-based, the quantity of the corpus determines the system performance. The corpus used in this experiment is ten times larger than the supplied corpus, and the drastic reduction in mWER has been demonstrated (Table 6 ).", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 243, "text": "(Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Performance vs. Corpus Size", "sec_num": "2.4.2." }, { "text": "However, the quality with the small corpus is not so bad in the subjective evaluation shown in Table 7 . We conjecture that adequacy is not low even with the supplied corpus, and the translation become similar to native English, that is, its fluency improves as the size of corpus increases.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 102, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Performance vs. Corpus Size", "sec_num": "2.4.2." }, { "text": "Related works have proposed ways to merge MT outputs from multiple MT systems [12] in order to output better translations. When the source language and the target language have similar sentence structures, this merging ap- In addition to the merging and selecting approaches, a modification approach can be taken. For example, Marcu [14] proposed a method in which initial translations are constructed by combining bilingual phrases from translation memory, which is followed by modifying the translations by greedy decoding [15] . Watanabe et al. [16] proposed a decoding algorithm in which translations that are similar to the input sentence are retrieved from bilingual corpora and then modified by greedy decoding.", "cite_spans": [ { "start": 78, "end": 82, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 333, "end": 337, "text": "[14]", "ref_id": "BIBREF12" }, { "start": 525, "end": 529, "text": "[15]", "ref_id": "BIBREF13" }, { "start": 548, "end": 552, "text": "[16]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.5." }, { "text": "This section describes an innovative approach to statistical translation modeling, namely the phrase-based HMM translation model. The model directly structures the phrase-based translation approach in a Hidden Markov structure and proposes an efficient way to estimate and induce phrase translation pairs in a uniform fashion. 
In the statistical approach to machine translation, originally proposed in [2] , the problem of translating a source text in a foreign language, f , into a target language, for instance English, e is formulated as the maximization problem of\u00ea = argmax e P (e|f )", "cite_spans": [ { "start": 402, "end": 405, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "The noisy channel modeling of the above problem resulted in\u00ea = argmax e P (f |e)P (e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "Many previous efforts in the phrase-based approach to statistical machine translation basically approximated the former term, P (f |e), as the products of sequence of phrase translations with additional constraints [17, 18, 19] :", "cite_spans": [ { "start": 215, "end": 219, "text": "[17,", "ref_id": "BIBREF15" }, { "start": 220, "end": 223, "text": "18,", "ref_id": "BIBREF16" }, { "start": 224, "end": 227, "text": "19]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (f |e) \u2248 i P (f i |\u0113 ai )", "eq_num": "(3)" } ], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "wheref i is the ith phrase of the phrase-segmented sentenc\u0113 f m 1 for f , and a i is the phrase alignment for the phrasesegmented texts. 2 Instead, we introduced two new hidden variables,f and e, to explicitly capture the phrase translation relationship:", "cite_spans": [ { "start": 137, "end": 138, "text": "2", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (f |e) = f ,\u0113 P (f ,f ,\u0113|e)", "eq_num": "(4)" } ], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "The term P (f ,f ,\u0113|e) is further decomposed into three terms: P (f ,f ,\u0113|e) = P (f |f ,\u0113, e)P (f |\u0113, e)P (\u0113|e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "The first term of Equation 5 represents the probability that a segmented input sentencef can be reordered and generated as the input text of f . The second term indicates the translation probability of the two phrase sequences of\u0113 and f . The last term is the likelihood of the phrase-segmented text e generated from e. We call these terms the Phrase Segmentation Model, the Phrase Translation Model, and the Phrase Ngram Model, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM SMT System (Supplied J-to-E and C-to-E Tracks)", "sec_num": "3." }, { "text": "The phrase ngram model is approximated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (\u0113|e) \u2248 i P (\u0113 i |\u0113 i\u22121 )", "eq_num": "(6)" } ], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "P (\u0113 i |\u0113 i\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "is treated as the bigram constraints of adjacent translated phrases\u0113 i and\u0113 i\u22121 . The phrase ngram model can be easily estimated with the Forward-Backward algorithm by expanding all possible phrase segmentations of e into a lattice structure\u0112 as shown in Figure 1 . Each node in the lattice represents a particular phrase\u0112 i in a sentence e connected by edges with associated probability of P (\u0112 i |\u0112 i ).", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 263, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "The estimation procedure can be roughly summarized as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "1. Initialize the probability table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "2. For each sentence e in the training corpus, estimate the posterior probabilities P (\u0112 i ,\u0112 i |e) on the lattice using the Forward-Backward algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "3. Estimate the prior probabilities based on the maximum likelihood estimation by using the estimated posterior probabilities as the frequency of the occurrence of words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (\u0112 i |\u0112 i ) = e P (\u0112 i ,\u0112 i |e) e \u0112 i P (\u0112 i ,\u0112 i |e)", "eq_num": "(7)" } ], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "4. Iterate steps 2 and 3 until a termination condition is satisfied. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Ngram Model", "sec_num": "3.1." }, { "text": "According to the generative modeling represented in Equation 5, the term P (f |f ,\u0113, e) can be regarded as the distortion probability of how a phrase segmented sentencef will be reordered to form the source sentence f . Instead, we model this as the likelihood of a particular phrase segmentf j observed in f :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segmentation Model", "sec_num": "3.2." }, { "text": "P (f |f ,\u0113, e) \u221d P (f |f ) (8) \u2248 j P (f j |f ) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segmentation Model", "sec_num": "3.2." }, { "text": "The segmentation model is realized as the unigram posterior probability of the phrase ngram model presented in Section 3.1. To briefly summarize, the unigram posterior probability can be efficiently computed by the Forward-Backward algorithm using the lattice structureF for f :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segmentation Model", "sec_num": "3.2." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (F j |f ) = P (F j , f ) F j P (F j , f )", "eq_num": "(10)" } ], "section": "Phrase Segmentation Model", "sec_num": "3.2." }, { "text": "The phrase segmentation model can be viewed as the prior term to assign a certain weight to a particular phrase given a source text. If we restrict the phrase length to 1, i.e. each phrase consisting of only one word, then the phrase segmentation model will assign 1 to all phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segmentation Model", "sec_num": "3.2." }, { "text": "The phrase translation model is approximated so that the phrase translation can be captured as the product of the individual phrase translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Translation Model", "sec_num": "3.3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (f |\u0113, e) \u2248 j P (f j |\u0113 aj )", "eq_num": "(11)" } ], "section": "Phrase Translation Model", "sec_num": "3.3." }, { "text": "where the a i represents phrase alignment as seen in word alignment based translation model, such as the IBM Models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Translation Model", "sec_num": "3.3." }, { "text": "Combining all of the submodels -the phrase ngram model, the phrase segmentation model, and the phrase translation model -Equation 4 can be rewritten as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM Statistical Translation", "sec_num": "3.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (f |e) \u2248 \u0113,f j,i P (f j |f )P (f j |\u0113 i )P (\u0113 i |\u0113 i )", "eq_num": "(12)" } ], "section": "Phrase-based HMM Statistical Translation", "sec_num": "3.4." }, { "text": "If the phrase segmented sentences\u0113 andf are expanded into the corresponding lattice structures of\u0112 andF, then\u0112 The use of the phrase-based HMM structure has already been proposed in [20] in the context of aligning documents and abstracts. In their approach, jump probabilities were explicitly encoded as the state transitions that roughly corresponded to the alignment probabilities in the context of the word-based statistical translation model. The use of the explicit jump or alignment probabilities served for the completeness of the translation modeling at the cost of the enormous search space needed to train the phrase-based HMM structure.", "cite_spans": [ { "start": 182, "end": 186, "text": "[20]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM Statistical Translation", "sec_num": "3.4." }, { "text": "In our approach, the state transitions are governed by the phrase ngram model, bigram of phrase connection probabilities, but this method ignores phrase alignment probabilities. Therefore, the phrase-based HMM translation model is a deficient model. However its simplicity contributes to the faster estimation of parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based HMM Statistical Translation", "sec_num": "3.4." 
}, { "text": "The parameters for the phrase-based HMM translation model can be efficiently estimated by using the Forward-Backward algorithm briefly described in Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "3.5." }, { "text": "For the Forward-Backward procedure, we define two auxiliary variables, \u03b1(e i2 i1 , f j2 j1 ) and \u03b2(e i2 i1 , f j2 j1 ). \u03b1(e i2 i1 , f j2 j1 ) represents the forward estimates of the probability of the phrase e i2 i1 translated into f j2 j1 after the emission of the all phrase combinations presented in e i1\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation", "sec_num": "3.5." }, { "text": ". Similarly, \u03b2(e i2 i1 , f j2 j1 ) represents the backward estimates of the probability of the phrase e i2 i1 translated into f j2 j1 considering the all right phrase combinations of e l i2+1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "Therefore, the Forward-Backward algorithm can be for-mulated to solve the recursions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1(e i2 i1 , f j2 j1 ) = i1\u22122 i =1 f j 2 j 1 \u2229f j 2 j 1 =\u2205 \u03b1(e i1\u22121 i , f j 2 j 1 ) \u00d7P (e i2 i1 |e i1\u22121 i )P (f j2 j1 |e i2 i1 )P (f j2 j1 |f ) (13) \u03b2(e i2 i1 , f j2 j1 ) = l i =i2+2 f j 2 j 1 \u2229f j 2 j 1 =\u2205 \u03b2(e i i2+1 , f j 2 j 1 ) \u00d7P (e i i2+1 |e i 2 i1 )P (f j 2 j 1 |e i i2+1 )P (f j 2 j 1 |f )", "eq_num": "(14)" } ], "section": "1", "sec_num": null }, { "text": "To overcome the problem of local convergence often observed in the EM algorithm [21] , we use the lexicon model from the GIZA++ [22] training as the initial parameters for the phrase translation model. In addition, the phrase ngram model and the phrase segmentation models are individually trained over the monolingual corpus and remained fixed during the HMM iterations.", "cite_spans": [ { "start": 80, "end": 84, "text": "[21]", "ref_id": "BIBREF19" }, { "start": 128, "end": 132, "text": "[22]", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "Equations 13 and 14 involve summation over all possible contexts, either in its left-hand-side or right-hand-side on the lattice structure of\u0112, and the summation over all possible segmentation overF. Since the computation is still enormous, even with the help of dynamic programming, we restrict the possible segmentation to those phrase translation pairs induced before the estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segment Induction", "sec_num": "3.6." }, { "text": "The phrase pairs are induced by first considering all possible bilingual phrase pairs in a training corpus using the product of two phrase translation probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segment Induction", "sec_num": "3.6." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (\u0113|f )P (f |\u0113) = count(\u0113,f ) 2 f count(\u0113,f ) \u0113 count(\u0113,f )", "eq_num": "(15)" } ], "section": "Phrase Segment Induction", "sec_num": "3.6." 
}, { "text": "where count(\u0113,f ) is the cooccurrence frequency of the two phrases\u0113 andf . The basic idea of Equation 15 is to capture the bilingual correspondence while considering two directions. Additional phrases were exhaustively induced based on the intersection/union of the viterbi word alignments of the two directional models, P (e|f ) and P (f |e), computed by GIZA++ [17] .", "cite_spans": [ { "start": 363, "end": 367, "text": "[17]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Segment Induction", "sec_num": "3.6." }, { "text": "After the extraction of phrase translation pairs, their monolingual phrase lexicons were extracted and used as the possible segmentation for the source and target sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Segment Induction", "sec_num": "3.6." }, { "text": "The decision rule to compute the best translation is based on the log-linear combinations of all subcomponents of translation models as presented in [23] .", "cite_spans": [ { "start": 149, "end": 153, "text": "[23]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.7." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e = argmax e 1 Z(f ) j \u03bb j log P r j (e, f )", "eq_num": "(16)" } ], "section": "Decoder", "sec_num": "3.7." }, { "text": "where P r j (e, f ) are the subcomponents of translation models, such as the phrase ngram model or the language model, and \u03bb j is the weight for each model. The weighting parameters, \u03bb j , can be efficiently computed based either on the maximum likelihood criterion [23] by IIS or GIS algorithms or on the minimum error rate criterion [24] by some unconstrained optimization algorithms, such as the Downhill Simplex Method [25] .", "cite_spans": [ { "start": 266, "end": 270, "text": "[23]", "ref_id": "BIBREF21" }, { "start": 335, "end": 339, "text": "[24]", "ref_id": "BIBREF22" }, { "start": 423, "end": 427, "text": "[25]", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.7." }, { "text": "The decoder is taken after the word-graph-based decoder [26] , which allows the multi-pass decoding strategies to incorporate complicated submodel structures. The first pass of the decoding procedure generates the word-graph, or the lattice, of translations for an input sentence by using a beam search. On the first pass, the submodels of all phrase-based HMM translation models were integrated with the wordbased trigram language model and the class 5-gram model. The second pass uses A* strategy to search for the best path of translation on the generated word-graph.", "cite_spans": [ { "start": 56, "end": 60, "text": "[26]", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.7." }, { "text": "The results appear strange in two points: (1) Our proposal didn't work well for the Japanese-to-English track but did work well for the Chinese-to-English track; (2) Our proposal achieved high fluency but marked low adequacy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.8." }, { "text": "The former was attributed to the fact that we had to narrow down the beamwidth for handling long Japanese input. The latter was attributed to the fact that we tuned our parameter to mWER and we exploited phrase models as well. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.8." }, { "text": "This section introduces another feature of C3: paraphrasing and filtering corpora, which are not used in the IWSLT04 task but are useful for boosting MT performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features of C3", "sec_num": "4." }, { "text": "The large variety of possible translations in a corpus causes difficulty in building machine translation on the corpus. Specifically, theis variety makes it more difficult to find appropriate translation examples for D3, to extract good transfer patterns for HPAT, and to estimate the parameters for SMT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features of C3", "sec_num": "4." }, { "text": "We propose ways to overcome these problems by paraphrasing corpora through automated processes or filtering corpora by abandoning inappropriate expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features of C3", "sec_num": "4." }, { "text": "Three methods have been investigated for automatic paraphrasing. (1) Shimohata et al. [27] grouped sentences by the equivalence of the translation and extract rules of paraphrasing by DP-matching. (2) Finch et al. [28] clustered sentences in a paraphrase corpus to obtain pairs that are similar to each other for training SMT models. Then by using the models, the decoder generates a paraphrase. (3) Finch et al. [29] developed a paraphraser based on data-oriented parsing, which utilizes synatactic information within an examplebased framework.", "cite_spans": [ { "start": 86, "end": 90, "text": "[27]", "ref_id": "BIBREF25" }, { "start": 214, "end": 218, "text": "[28]", "ref_id": "BIBREF26" }, { "start": 413, "end": 417, "text": "[29]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing", "sec_num": "4.1." }, { "text": "The experimental results indicate that the EBMT based on normalization of the source side had increased coverage [30] and that the SMT created on the normalized target sentences had a reduced word-error rate [31] . Finch et al. [32] demonstrated that the expansion of reference sentences by paraphrasing is effective for automatic machine translation evaluation.", "cite_spans": [ { "start": 113, "end": 117, "text": "[30]", "ref_id": "BIBREF28" }, { "start": 208, "end": 212, "text": "[31]", "ref_id": "BIBREF29" }, { "start": 228, "end": 232, "text": "[32]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing", "sec_num": "4.1." }, { "text": "In addition, longer sentences, which are inherent in spoken language, can be translated effectively by splitting them into short sentences and then concatenating the translated short sentences. Doi proposed a new splitting method based on N-gram and sentence similarity [33] .", "cite_spans": [ { "start": 270, "end": 274, "text": "[33]", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrasing", "sec_num": "4.1." }, { "text": "Imamura et al. [34] proposed a calculation that measures the literalness of a translation pair and called it Translation Correspondece Rate (TCR). After the word alignment of a translation pair, TCR is calculated as the rate of the aligned word count over the count of words in the translation pair. After abandoning the non-literal parts of the corpus, the HPAT transfer patterns are acquired. 
The effect of this measure has been confirmed by the improvement in translation quality.", "cite_spans": [ { "start": 15, "end": 19, "text": "[34]", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": "4.2." }, { "text": "Our project, called C3, places corpora at the center of speech-to-speech translation technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "In this paper, (1) a hybridization of multiple EBMTs followed by a statistical selector, (2) a new SMT approach, phrase-based HMM SMT, and (3) paraphrasing methods are introduced. Good performance by the translation components is demonstrated through experiments, including the IWSLT04 task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Furthermore, we plan to pursue a better blend of multiple processes: EBMT, SMT, and other innovations such as paraphrasing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "The five grades are defined as follows: (S) Splendid: fluent like a native speaker; (A) Perfect: no problem with either information or grammar; (B) Good: easy to understand, with either some unimportant information missing or flawed grammar; (C) Fair: broken, but understandable with effort; (D) Unacceptable: important information has been translated incorrectly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A phrase is simply a consecutive sequence of words and is not always linguistically coherent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research reported here was supported in part by a contract with the National Institute of Information and Communications Technology (NICT) of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus\". The authors' heartfelt thanks go to Kadokawa-Shoten for providing the Ruigo-Shin-Jiten.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Framework of a Mechanical Translation between Japanese and English by Analogy Principle", "authors": [ { "first": "M", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1984, "venue": "Artificial and Human Intelligence", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nagao, M., \"A Framework of a Mechanical Translation between Japanese and English by Analogy Principle\", Artificial and Human Intelligence, North Holland, 173-180, 1984.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Statistical approach to machine translation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "2", "pages": "79--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., \"A Statistical approach to machine translation,\" Computational Linguistics, 16 (2), pp. 79-85, 1990.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Example-based machine translation using DP-matching between word sequences", "authors": [ { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2001, "venue": "Workshop on DDMT, ACL", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumita, E., \"Example-based machine translation using DP-matching between word sequences\", Workshop on DDMT, ACL, pp. 
1-8, 2001.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Experiments and prospects of examplebased machine translation", "authors": [ { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 1991, "venue": "ACL", "volume": "", "issue": "", "pages": "185--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumita, E., \"Experiments and prospects of example- based machine translation\", ACL, pp. 185-192, 1991.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Hierarchical phrase alignment harmonized with parsing", "authors": [ { "first": "K", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2001, "venue": "NLPRS", "volume": "", "issue": "", "pages": "377--384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Imamura, K., \"Hierarchical phrase alignment harmo- nized with parsing\", NLPRS, pp. 377-384, 2001.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Application of transfer knowledge acquired by hierarchical phrase alignmnet", "authors": [ { "first": "K", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2002, "venue": "TMI", "volume": "", "issue": "", "pages": "74--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Imamura, K., \"Application of transfer knowledge ac- quired by hierarchical phrase alignmnet\", TMI, pp. 74- 84, 2002.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Constituent boundary parsing for examplebased machine translation", "authors": [ { "first": "O", "middle": [], "last": "Furuse", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "105--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Furuse, O., \"Constituent boundary parsing for example- based machine translation\", Coling, pp. 105-111, 1994.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Feedback cleaning of machine translation rules using automatic evaluation", "authors": [ { "first": "K", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "447--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Imamura, K., \"Feedback cleaning of machine transla- tion rules using automatic evaluation\", ACL, pp. 447- 454, 2003.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Using language and translation models to select the best from outputs from multiple MT systems", "authors": [ { "first": "Y", "middle": [], "last": "Akiba", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "8--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akiba, Y., \"Using language and translation models to select the best from outputs from multiple MT sys- tems\", Coling, pp. 8-14, 2002.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A program for automatically selecting the best output from multiple machine translation engines", "authors": [ { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2002, "venue": "MTS", "volume": "", "issue": "", "pages": "63--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Callison-Burch, C., \"A program for automatically se- lecting the best output from multiple machine transla- tion engines\", MTS, pp. 
63-66, 2002.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multiple comparison procedure", "authors": [ { "first": "C", "middle": [], "last": "Hochberg", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hochberg, C., Multiple comparison procedure, Wiley, 1983.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An evaluation of multi-engine MT", "authors": [ { "first": "C", "middle": [], "last": "Hogan", "suffix": "" } ], "year": 1998, "venue": "AMTA", "volume": "", "issue": "", "pages": "113--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hogan, C., \"An evaluation of multi-engine MT\", AMTA, pp. 113-123, 1998.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Toward unified approach to memory-and statistical-based machine translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2001, "venue": "ACL", "volume": "", "issue": "", "pages": "378--385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcu, D., \"Toward unified approach to memory-and statistical-based machine translation\", ACL, pp. 378- 385, 2001.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Fast decoding and optimal decoding for machine translation", "authors": [ { "first": "U", "middle": [], "last": "Germann", "suffix": "" } ], "year": 2001, "venue": "ACL", "volume": "", "issue": "", "pages": "228--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Germann, U., \"Fast decoding and optimal decoding for machine translation\", ACL, pp. 228-235, 2001.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Example-based decoding for statistical machine translation", "authors": [ { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2003, "venue": "MTS", "volume": "", "issue": "", "pages": "410--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Watanabe, T., \"Example-based decoding for statistical machine translation\", MTS, pp. 410-417, 2003.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Statistical phrase-based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2003, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., \"Statistical phrase-based translation\", HLT- NAACL, pp. 48-54, 2003.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The CMU statistical translation system", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2003, "venue": "MTS", "volume": "", "issue": "", "pages": "402--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vogel, S., \"The CMU statistical translation system\", MTS, pp. 402-409, 2003.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A projection extension algorithm for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 2003, "venue": "EMNLP", "volume": "", "issue": "", "pages": "402--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tillmann, C., \"A projection extension algorithm for statistical machine translation\", EMNLP, pp. 
402-409, 2003.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A phrase-based hmm approach to document/abstract alignment", "authors": [ { "first": "Iii", "middle": [], "last": "Daume", "suffix": "" }, { "first": "H", "middle": [], "last": "", "suffix": "" } ], "year": 2004, "venue": "EMNLP", "volume": "", "issue": "", "pages": "119--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daume III, H., \"A phrase-based hmm approach to docu- ment/abstract alignment\", EMNLP, pp. 119-126, 2004.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Maximum likelihood from imcomplete data via the em algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" } ], "year": 1977, "venue": "Journal of Royal Statistical Society", "volume": "", "issue": "39", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, A. P., \"Maximum likelihood from imcom- plete data via the em algorithm\", Journal of Royal Sta- tistical Society, B(39), pp. 1-38, 1977.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Systematical comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F., \"Systematical comparison of various statistical alignment models\", Computational Linguistics, 29(1), pp. 19-51, 2003.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Discriminative training and maximum entropy model for statistical machine translation", "authors": [ { "first": "F", "middle": [], "last": "Och", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F., \"Discriminative training and maximum en- tropy model for statistical machine translation\", ACL, pp. 295-302, 2002.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "F", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F., \"Minimum error rate training in statistical ma- chine translation\", ACL, pp. 160-167, 2003.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Numerical Recipes in C++", "authors": [ { "first": "W", "middle": [ "H" ], "last": "Press", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Press, W. H., \"Numerical Recipes in C++\", Cambridge University Press, 2002.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Generation of word graphs in statistical machine translation", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" } ], "year": 2002, "venue": "EMNLP", "volume": "", "issue": "", "pages": "156--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ueffing, N., \"Generation of word graphs in statistical machine translation\", EMNLP, pp. 
156-163, 2002.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Automatic paraphrasing based on parallel corpus for normalization", "authors": [ { "first": "M", "middle": [], "last": "Shimohata", "suffix": "" } ], "year": 2002, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shimohata, M., \"Automatic paraphrasing based on par- allel corpus for normalization\", LREC, 2002.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Paraphrasing by Statistical machine translation", "authors": [ { "first": "A", "middle": [], "last": "Finch", "suffix": "" } ], "year": 2002, "venue": "FIT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finch, A., \"Paraphrasing by Statistical machine trans- lation\", FIT, 2002.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Data-oriented Paraphrasing", "authors": [ { "first": "A", "middle": [], "last": "Finch", "suffix": "" } ], "year": 2003, "venue": "RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finch, A., \"Data-oriented Paraphrasing\", RANLP, 2003.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Identifying synonymous expressions from a bilingual corpus for exmaple-based machine translation", "authors": [ { "first": "M", "middle": [], "last": "Shimohata", "suffix": "" } ], "year": 2002, "venue": "Workshop on machine translation in Asia, Coling", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shimohata, M., \"Identifying synonymous expressions from a bilingual corpus for exmaple-based machine translation\", Workshop on machine translation in Asia, Coling, 2002.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Statistical machine translation based on paraphrased corpus", "authors": [ { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Watanabe, T., \"Statistical machine translation based on paraphrased corpus\", LREC, 2002.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Using paraphraser to improve machine translation evaluation", "authors": [ { "first": "A", "middle": [], "last": "Finch", "suffix": "" } ], "year": 2004, "venue": "IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Finch, A., \"Using paraphraser to improve machine translation evaluation\", IJCNLP, 2004.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Splitting input sentence for machine translation using language model with sentence similarity", "authors": [ { "first": "T", "middle": [], "last": "Doi", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doi, T., \"Splitting input sentence for machine transla- tion using language model with sentence similarity\", Coling, 2004.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic construction of machine translation knowledge using translation literalness", "authors": [ { "first": "K", "middle": [], "last": "Imamura", "suffix": "" } ], "year": 2003, "venue": "EACL", "volume": "", "issue": "", "pages": "155--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Imamura, K., \"Automatic construction of machine translation knowledge using translation literalness\", 
EACL, pp. 155-162, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Phrase Ngram Model", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Phrase-based HMM Statistical Translation Model Equation 12 can be regarded as a Hidden Markov Model in which each source phraseF j in the latticeF is treated as an observation emitted from a state\u0112 i , a target phrase, in the lattice\u0112, as shown in Figure 2.", "num": null, "type_str": "figure" }, "TABREF0": { "text": "Resources used for two EBMTs in IWSLT04 unresticted Japanese-to-English track.", "content": "
                      D3                    HPAT
bilingual corpus      travel domain (20K)   travel domain (20K)
bilingual dictionary  in-house              in-house
thesaurus             in-house              in-house
grammar               N.A.                  in-house
", "type_str": "table", "num": null, "html": null }, "TABREF1": { "text": "Features of the two EBMTs.", "content": "
           D3         HPAT
Unit       sentence   grammatical unit
Coverage   narrow     wide
Quality    good       modest
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "ATR's Overall Subjective Evaluation -percentages of S, A, B, C, and D ranks.", "content": "
    D3      HPAT
S   57.00   38.60
A   13.00   21.20
B    7.60   17.60
C    5.80    6.00
D   16.60   16.60
", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "Objective Evaluation.", "content": "
       D3      HPAT    SELECT   DIFF.
BLEU   60.36   49.33   63.06    +3.00
NIST   10.35    9.78   10.72    +0.37
GTM    77.70   76.88   79.67    +1.97
mWER   28.86   37.18   26.31    -2.55
mPER   26.07   31.06   23.33    -2.97
", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": "ATR", "content": "
's Overall Subjective Evaluation - cumulative percentages of S, A, B, C, and D ranks.
          D3      HPAT    SELECT   DIFF.
S         57.00   38.60   59.80    +2.80
S,A       70.00   59.80   73.00    +3.00
S,A,B     77.60   77.40   82.40    +4.80
S,A,B,C   83.40   83.40   87.80    +4.40
D         16.60   16.60   12.20    -4.40
", "type_str": "table", "num": null, "html": null }, "TABREF5": { "text": "mWER vs. Corpus size.", "content": "
Training corpus       D3       HPAT
IWSLT-supplied (2K)   45.71    47.28
(20K)                 28.86    37.18
DIFF.                 -16.85   -10.10
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "ATR's Overall Subjective Evaluation -IWSLT supplied corpus. On the other hand, when the source language and the target language have different sentence structures, such as English and Japanese, we often have translations whose structures are different from each other for a single input sentences. Thus, the authors regard the merging approach as less suitable than the approach of selecting.Hybridization can be implemented in several arichitectures, for example, SMT followed by EBMT, SMT and EBMT in parallel, and so on. Which archtecture is best is still an interesting open question.", "content": "
          D3      HPAT    SELECT
S         34.80   25.20   34.00
S,A       47.40   44.20   50.60
S,A,B     62.60   70.40   72.20
S,A,B,C   73.40   80.40   81.80
D         26.60   19.60   18.20
", "type_str": "table", "num": null, "html": null }, "TABREF7": { "text": "Evaluation -IWSLT Chinese-to-English supplied task.", "content": "
System   mWER    Fluency   Adequacy
Top      45.59   38.20     33.38
Our      46.99   38.20     29.50
Bottom   61.69   25.04     29.06
", "type_str": "table", "num": null, "html": null } } } }