{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:21:02.276030Z" }, "title": "The NTT Statistical Machine Translation System for IWSLT2005", "authors": [ { "first": "Hajime", "middle": [], "last": "Tsukada", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories", "location": {} }, "email": "tsukada@cslab.kecl.ntt.co.jp" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories", "location": {} }, "email": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories", "location": {} }, "email": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories", "location": {} }, "email": "kazawa@cslab.kecl.ntt.co.jp" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTT Communication Science Laboratories", "location": {} }, "email": "isozaki@cslab.kecl.ntt.co.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper reports the NTT statistical translation system participating in the evaluation campaign of IWSLT 2005. The NTT system is based on a phrase translation model and utilizes a large number of features with a log-linear model. We studied the various features recently developed in this research field and evaluate the system using supplied data as well as publicly available Chinese, Japanese, and English data. Despite domain mismatch, additional data helped improve translation accuracy.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "This paper reports the NTT statistical translation system participating in the evaluation campaign of IWSLT 2005. The NTT system is based on a phrase translation model and utilizes a large number of features with a log-linear model. We studied the various features recently developed in this research field and evaluate the system using supplied data as well as publicly available Chinese, Japanese, and English data. Despite domain mismatch, additional data helped improve translation accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recently, phrase-based translation combined with other features by log-linear models has become the standard technique for statistical machine translation. Shared task based workshops of machine translation including IWSLT and NIST Machine Translation Workshops showed which features effectively improve translation accuracy. However, it remains unclear whether using of these features all together with our system is helpful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "One unavoidable problem with statistical approaches is training data preparation. Since the amount of training data is generally limited, how to utilize similar monolingual or bilingual resources is an important research topic in statistical machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "In this evaluation campaign, we studied the use of a large number of reportedly effective features with our system and also evaluated both additional monolingual and bilingual corpus to improve translation accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our system adopts the following log-linear decision rule to obtain the maximum likely translation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Models", "sec_num": "2." }, { "text": "e I 1 = argmax e I 1 1 Z(f J 1 ) exp \uf8eb \uf8ed j \u03bb j f j (e I 1 , f J 1 ) \uf8f6 \uf8f8 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Models", "sec_num": "2." }, { "text": "where f j (e I 1 , f J 1 ) represents a feature function and Z(f J 1 ) denotes a normalization term. Feature function scaling factors \u03bb j are efficiently computed based either on the maximum likelihood criterion [1] or the minimum error rate crite-rion [2]. Our system adopts the latter criterion in the experiments.", "cite_spans": [ { "start": 212, "end": 215, "text": "[1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Log-linear Models", "sec_num": "2." }, { "text": "One advantage of log-linear models is the ability to easily combine various features relating to translation models, language models, and lexical reordering models. The feature details are described in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Models", "sec_num": "2." }, { "text": "In statistical machine translation, improving language models strongly impacts translation accuracy. Especially recently, the power of long-span n-grams and the use of huge amounts of training data have been reported [3] . In this evaluation campaign, we combined several long n-gram language models as features.", "cite_spans": [ { "start": 217, "end": 220, "text": "[3]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Language Model Features", "sec_num": "3.1." }, { "text": "To train language models from various corpora in different domains, corpus weighting is necessary to fit the trained language models to the test set domain. Log-linear models naturally provide weighting of a language model trained by each corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model Features", "sec_num": "3.1." }, { "text": "We used the following long n-gram language models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model Features", "sec_num": "3.1." }, { "text": "\u2022 6-gram \u2022 Class-based 9-gram \u2022 Prefix-4 9-gram \u2022 Suffix-4 9-gram", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model Features", "sec_num": "3.1." }, { "text": "All n-grams are based on mixed casing. The prefix-4 (suffix-4) language model takes only 4-letter prefixes (suffixes) of English words. Prefix-4 (suffix-4) roughly means the word stem (inflectional endings). For example, \"Would it be possible to ship it to Japan\" becomes \"Woul+ it be poss+ to ship it to Japa+\" by prefix-4, and \"+ould it be +ible to ship it to +apan\" by suffix-4, where \"+\" at the end or beginning of a word denotes deletion. Prefix-4 and suffix-4 are likely to contribute to word alignment and language modeling, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model Features", "sec_num": "3.1." 
}, { "text": "Our system adopts a phrase-based translation model represented by phrase-based features, which are based on phrase translation pairs extracted by the method proposed by Och and Ney [4] . First, many-to-many word alignment is set by using both one-to-many and many-to-one word alignments generated by GIZA++ toolkit. In the experiment, we used prefix-4 for word-to-word alignment. Using prefix-4 produced better translations than the original form in preliminary experiments.", "cite_spans": [ { "start": 181, "end": 184, "text": "[4]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "Next, phrase pairs consistent with word alignment are extracted. The words in a legal phrase pair are only aligned to each other and not to words outside. Hereafter, we use count(\u1ebd) and count(f ,\u1ebd) to denote the number of extracted phrase\u1ebd and extracted phrase pair (f ,\u1ebd), respectively. We used the following features based on extracted phrase pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "\u2022 Phrase translation probability \u03c6(\u1ebd|f ) and \u03c6(f |\u1ebd), where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "\u03c6(\u1ebd|f ) = count(f ,\u1ebd) f count(f ,\u1ebd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "\u2022 Frequency of phrase pairs count(\u1ebd,f ), count(\u1ebd), and count(f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "\u2022 \u03c7 2 value and Dice coefficient off and\u1ebd", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "\u2022 Phrase extraction probability of source/target, i.e., ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Features", "sec_num": "3.2." }, { "text": "We used the following word-level features, where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "w(f |e) = count(f, e) f count(f , e) , I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "is the number of words in the translation and J is the number of words in the input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "\u2022 Lexical weight p w (f |\u1ebd) and p w (\u1ebd|f ) [6] , where", "cite_spans": [ { "start": 43, "end": 46, "text": "[6]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "p w (f |\u1ebd) = max a J j=1 1 |{i|(i, j) \u2208 a)}| \u2022 \u2200(i,j)\u2208a w(f j |e i ) \u2022 IBM Model 1 score p M1 (f |\u1ebd) and p M1 (\u1ebd|f ), where p M1 (f |\u1ebd) = 1 (\u0128 + 1)JJ j\u0128 i w(f j |\u1ebd i ) \u2022 Viterbi IBM Model 1 score p M1 (f |\u1ebd) and p M1 (\u1ebd|f ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." 
}, { "text": "p M1 (f |\u1ebd) = 1 (\u0128 + 1)JJ j max i w(f j |\u1ebd i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "\u2022 Noisy OR gate p NOR (f |\u1ebd) and p NOR (\u1ebd|f ) [7] , where", "cite_spans": [ { "start": 46, "end": 49, "text": "[7]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "p NOR (f |\u1ebd) = j (1 \u2212 i (1 \u2212 w(f j |\u1ebd i ))) \u2022 Deletion penalty p del (\u1ebd,f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "p del (\u1ebd,f ) = j del(\u1ebd\u0128 1 ,f j ) del(\u1ebd\u0128 1 ,f j ) = \uf8f1 \uf8f2 \uf8f3 1 i does not exist s.t. w(\u1ebd i |f j ) > threshold 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-level Features", "sec_num": "3.3." }, { "text": "We used the following features to control the reordering of phrases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Reordering Features", "sec_num": "3.4." }, { "text": "\u2022 Distortion model d(a i \u2212 b i\u22121 ) = exp \u2212|ai\u2212bi\u22121\u22121| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Reordering Features", "sec_num": "3.4." }, { "text": "where a i denotes the starting position of the foreign phrase translated into the i-th English phrase, and b i\u22121 denotes the end position of the foreign phrase translated into the (i \u2212 1)-th English phrase [6] .", "cite_spans": [ { "start": 206, "end": 209, "text": "[6]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Reordering Features", "sec_num": "3.4." }, { "text": "\u2022 Right monotone model P R (\u1ebd,f ) (and left monotone model P L (\u1ebd,f )) inspired by Och's scheme [8] , where", "cite_spans": [ { "start": 96, "end": 99, "text": "[8]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Reordering Features", "sec_num": "3.4." }, { "text": "P R (f ,\u1ebd) = count R count(f ,\u1ebd) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Reordering Features", "sec_num": "3.4." }, { "text": "and count R denotes the number of right connected monotone phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Reordering Features", "sec_num": "3.4." }, { "text": "The following additional features are used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features", "sec_num": "3.5." }, { "text": "\u2022 number of words that constitute a translation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features", "sec_num": "3.5." }, { "text": "\u2022 number of phrases that constitute a translation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Features", "sec_num": "3.5." }, { "text": "The decoder is based on word graph [9] and uses a multi-pass strategy to generate n-best translations. It generates hypothesized translations in a left-to-right order by combining phrase translations for a source sentence. The first pass of our decoding algorithm generates a word graph, a compact representation of hypothesized translations, using a breadth-first beam search, as in [10] [11][12] [13] . 
Then, n-best translations are extracted from the generated word graph using A * search.", "cite_spans": [ { "start": 35, "end": 38, "text": "[9]", "ref_id": "BIBREF6" }, { "start": 384, "end": 388, "text": "[10]", "ref_id": "BIBREF7" }, { "start": 398, "end": 402, "text": "[13]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4." }, { "text": "The search space for a beam search is constrained by restricting the reordering of source phrases. We have window size constraints that restrict the number of words skipped before selecting a segment of the source sequence [6] [12] . An ITG-constraint [14] is also implemented that prohibits the extension of a hypothesis that violates ITG constraints, which will be useful for language pairs with drastic reordering, such as Japanese-to-English and Korean-to-English translations.", "cite_spans": [ { "start": 223, "end": 226, "text": "[6]", "ref_id": "BIBREF3" }, { "start": 227, "end": 231, "text": "[12]", "ref_id": "BIBREF9" }, { "start": 252, "end": 256, "text": "[14]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4." }, { "text": "During the beam search stage, three kinds of pruning are performed to further reduce the search space [11] . First, observation pruning limits the number of phrase translation candidates to a maximum of N candidates. Second, threshold pruning is performed by computing the most likely partial hypothesis and by discarding hypotheses whose probability is lower than the maximum score multiplied with a threshold. Third, histogram pruning is carried out by restricting the number of hypotheses to a maximum of M candidates. Observation and threshold pruning are also applied to the back pointer to reduce the size of the word graph. In pruning hypotheses, future cost is also estimated on the fly and then integrated with the preceding score for beam pruning.", "cite_spans": [ { "start": 102, "end": 106, "text": "[11]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4." }, { "text": "We estimated future cost as described in [13] . Exact costs for the phrase-based features and word level features can be calculated for each extracted phrase pair. For the language model features, their costs were approximated by using only output words contained by each phrase pair. The upper bound of lexical reordering feature costs can be computed beforehand by considering the possible permutations of phrase pairs for a given input.", "cite_spans": [ { "start": 41, "end": 45, "text": "[13]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4." }, { "text": "After generating a word graph, it is then pruned using the posterior probabilities of edges [15] to further reduce the number of duplicate translations for A * search. An edge is pruned if its posterior score is lower than the highest posterior score in the graph by a certain amount.", "cite_spans": [ { "start": 92, "end": 96, "text": "[15]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4." }, { "text": "To validate the use of the reportedly effective features, we conducted translation experiments using all features introduced in Section 3. Also, we conducted comparable experiments in both supplied and unrestricted data tracks to study the effectiveness of additional language resources. 
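As a companion to the pruning strategy described in Section 4 above, the following minimal sketch shows how observation, threshold, and histogram pruning can be combined during the beam search. It assumes hypotheses are (log-score, state) pairs whose scores already include the future-cost estimate; the function names, data layout, and default limits are illustrative assumptions, not the system's actual settings.

```python
import math

def prune_observations(options, max_candidates=20):
    """Observation pruning: keep at most N translation options per source phrase.
    `options` is a list of (log_score, target_phrase) pairs."""
    return sorted(options, key=lambda o: o[0], reverse=True)[:max_candidates]

def prune_hypotheses(hyps, threshold=1e-3, histogram_size=100):
    """Threshold pruning relative to the best partial hypothesis, followed by
    histogram pruning. `hyps` is a list of (log_score, state) pairs."""
    if not hyps:
        return hyps
    best = max(score for score, _ in hyps)
    cutoff = best + math.log(threshold)          # prob below best * threshold is dropped
    kept = [h for h in hyps if h[0] >= cutoff]
    kept.sort(key=lambda h: h[0], reverse=True)  # histogram pruning: best M survive
    return kept[:histogram_size]
```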
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5." }, { "text": "To obtain comparable results for all source and target language pairs, we concentrated on tracks generating English, i.e., Japanese-to-English, Chinese-to-English, Arabicto-English, and Korean-to-English. The English parts of the corpora are tokenized using LDC's standards. For Arabic, it is simply tokenized by splitting punctuation and then removing Arabic characters denoting \"and\". For other languages, supplied segmentation is used. For unrestricted data tracks, Japanese is segmented using ChaSen 1 , and Chinese is segmented using an LDC segmenter with lexicon entries gathered from supplied data and an LDC corpus. Test sets including ASR 1-best are also re-segmented in the same manner to maintain segmentation consistency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Preparation", "sec_num": "5.1." }, { "text": "We used mixed casing and prefix-4 form for word-toword alignment in the phrase extraction. Also, mixed casing was used for training n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Preparation", "sec_num": "5.1." }, { "text": "6-gram language models and class-based/prefix-4/suffix-4 9gram models trained by the SRI language modeling toolkit [16] were used in both supplied and unrestricted data tracks.", "cite_spans": [ { "start": 115, "end": 119, "text": "[16]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "5.2." }, { "text": "We used the following additional monolingual corpora for language models of unrestricted data tracks: (i) ATR Spoken Language Database publically available from ATR 2 ; (ii) Web pages crawled from discussion groups and FAQs about travel; and (iii) English Gigaword corpus from LDC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "5.2." }, { "text": "As additional bilingual corpora for translation models of unrestricted data tracks, we used the ATR Spoken Language Database for Japanese-to-English translation and the two largest corpora in the LDC collection, LDC2004T08 and LDC2005T10, for Chinese-to-English translation. No additional resources were used for other language pairs. Tables 1 and 2 illustrate the data size of each corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Models", "sec_num": "5.2." }, { "text": "Using the monolingual corpora, a total of 10 n-grams were trained and used as a feature of log-linear models when decoding. Table 3 shows the output language perplexity of each n-gram used in the decoder. On the other hand, Table 4 shows the input language perplexity of the trigram trained by the supplied corpora. Tables 3 and 4 and IWSLT datasets are similar, WEB is closer to IWSLT than Gigaword, and that LDC is very different from IWSLT. Since the collection is enormous in Gigaword, the vocabulary set is first limited to that observed in the English part of supplied corpus and the ATR database. 
Then, for decoding, an actual n-gram language model is estimated on the fly by constraining the vocabulary to that observed in a given test set.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 224, "end": 232, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 317, "end": 331, "text": "Tables 3 and 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Language Models", "sec_num": "5.2." }, { "text": "Following one of the best systems [17] in IWSLT 2004, the feature function scaling factors \u03bb_j are trained by minimum error rate training with a loss function based on NIST scores [18], using development set 1 (CSTAR).", "cite_spans": [ { "start": 34, "end": 38, "text": "[17]", "ref_id": "BIBREF14" }, { "start": 121, "end": 125, "text": "[18]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Other Setups", "sec_num": "5.3." }, { "text": "For Japanese and Korean, ITG constraints on lexical reordering were applied, and for Arabic and Chinese, simple window-size constraints of up to 7 words were used. Table 5 summarizes the overall results of the supplied/unrestricted data tracks. The scores in the table are obtained under comparable conditions for each language pair, so some of them differ from those released by the organizers.", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Other Setups", "sec_num": "5.3." }, { "text": "\" m unrestricted\" denotes that the monolingual corpora are unrestricted but the bilingual corpora are restricted; \" mb unrestricted\" denotes that both the monolingual and bilingual corpora are unrestricted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4." }, { "text": "The table shows that the unrestricted data tracks consistently outperform the restricted data tracks except for Japanese-to-English with ASR output. This may be because re-segmentation of the ASR output yields poor segmentation due to ASR errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.4." }, { "text": "\" mb unrestricted\" is inferior to \" m unrestricted\" in Chinese-to-English translation, whereas the former is better than the latter in Japanese-to-English translation. This may be because the additional bilingual resources are similar to the task domain for Japanese-to-English but different from it for Chinese-to-English, as shown in Tables 3 and 4 .", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 323, "text": "Tables 3 and 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4." }, { "text": "The overall results suggest that our feature design could not cope with the domain mismatch of the bilingual corpora but could cope with a small mismatch of the monolingual corpora. Although Gigaword differs most from the supplied corpus in terms of perplexity, as shown in Table 3 , its n-gram surprisingly contributes more to translation than the other n-grams in terms of the feature function scaling factors of the log-linear model. It would be interesting to study this finding in more detail.", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4." }, { "text": "This paper has reported the NTT statistical machine translation system used in the evaluation campaign. A log-linear model naturally enabled the weighting of various features, including language models. 
As a result, we obtained competitive accuracies. The log-linear model effectively utilized n-grams trained on out-of-domain corpora, and we improved translation accuracy over using the supplied data alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "These experiments simply used all available features; however, more careful selection or design of features may further improve translation accuracy and is worth studying.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Compared to other sites, our results are better in terms of NIST scores but inferior in terms of BLEU scores. This is because the feature function scaling factors are trained with a loss function based on NIST scores. We also suspect overfitting of the feature function scaling factors. We need to continue studying both the training methods for the scaling factors and the loss functions in order to improve other translation metrics as well as the NIST score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Our training tool for phrase translation models is an extension of the one provided by Philipp Koehn under an MIT-NTT collaboration contract. [2] F. J. Och, \"Minimum error rate training in statistical machine translation,\" in Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), July 2003, pp. 160-167.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7." }, { "text": "1 http://chasen.naist.jp 2 http://www.red.atr.jp/product/index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Google statistical machine translation system for the 2005 NIST MT evaluation (unpublished)", "authors": [], "year": 2005, "venue": "Machine Translation Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "--, \"The Google statistical machine translation system for the 2005 NIST MT evaluation (unpublished),\" in Machine Translation Workshop, 2005.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney, \"The alignment template approach to statistical machine translation,\" Computational Linguistics, vol. 30, no. 4, pp. 417-449, December 2004.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic extraction of translation patterns in parallel corpora", "authors": [ { "first": "M", "middle": [], "last": "Kitamura", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 1997, "venue": "IPSJ Transactions", "volume": "38", "issue": "4", "pages": "727--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Kitamura and Y. Matsumoto, \"Automatic extraction of translation patterns in parallel corpora,\" IPSJ Transactions, vol. 38, no. 4, pp. 
727-736, 1997.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical phrasebased translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proc. of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)", "volume": "", "issue": "", "pages": "127--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. J. Och, and D. Marcu, \"Statistical phrase- based translation,\" in Proc. of Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics (HLT- NAACL), May-June 2003, pp. 127-133.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improvements in phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney, \"Improvements in phrase-based statistical machine translation,\" in Proc. of Human Lan- guage Technology Conference of the North American Chapter of the Association for Computational Linguis- tics (HLT-NAACL), May 2004, pp. 257-264.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A smorgasbord of features for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "A", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "S", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "D", "middle": [], "last": "Smith", "suffix": "" }, { "first": "K", "middle": [], "last": "Eng", "suffix": "" }, { "first": "V", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)", "volume": "", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Ya- mada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev, \"A smorgasbord of fea- tures for statistical machine translation,\" in Proc. of the Human Language Technology Conference of the North American Chapter of the Association for Com- putational Linguistics (HLT-NAACL), May 2004, pp. 
161-168.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generation of word graphs in statistical machine translation", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. of Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "156--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Ueffing, F. J. Och, and H. Ney, \"Generation of word graphs in statistical machine translation,\" in Proc. of Empirical Methods in Natural Language Processing (EMNLP), July 2002, pp. 156-163.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Decoding algorithm in statistical machine translation", "authors": [ { "first": "Y.-Y", "middle": [], "last": "Wang", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 1997, "venue": "Proc. of the 35th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y.-Y. Wang and A. Waibel, \"Decoding algorithm in sta- tistical machine translation,\" in Proc. of the 35th An- nual Meeting of the Association for Computational Lin- guistics, 1997.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word reordering and a dynamic programming beam search algorithm for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "97--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann and H. Ney, \"Word reordering and a dy- namic programming beam search algorithm for statis- tical machine translation,\" Computational Linguistics, vol. 29, no. 1, pp. 97-133, March 2003.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The CMU statistical machine translation system", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "A", "middle": [], "last": "Tribble", "suffix": "" }, { "first": "M", "middle": [], "last": "Eck", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "Proc. of MT Summit IX", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel, Y. Zhang, F. Huang, A. Venugopal, B. Zhao, A. Tribble, M. Eck, and A. Waibel, \"The CMU statisti- cal machine translation system,\" in Proc. of MT Summit IX, September 2003.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "PHARAOH: User manual and description for version 1.2, UCS Information Science Institute", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. 
Koehn, PHARAOH: User manual and description for version 1.2, UCS Information Science Institute, August 2004.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improved word alignment using a symmetric lexicon model", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of 20th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "36--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens, E. Matusov, and H. Ney, \"Improved word alignment using a symmetric lexicon model,\" in Proc. of 20th International Conference on Computational Linguistics (COLING), August 2004, pp. 36-42.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Word graphs for statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "Proc. of the ACL Workshop on Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "191--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney, \"Word graphs for statistical ma- chine translation,\" in Proc. of the ACL Workshop on Building and Using Parallel Texts, Ann Arbor, Michi- gan, June 2005, pp. 191-198.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. of 7th International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke, \"SRILM -an extensible language model- ing toolkit,\" in Proc. of 7th International Conference on Spoken Language Processing, 2002.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Alignment templates: the RWTH SMT system", "authors": [ { "first": "O", "middle": [], "last": "Bender", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Bender, R. Zens, E. Matusov, and H. Ney, \"Align- ment templates: the RWTH SMT system,\" in Proc. of the International Workshop on Spoken Language Translation, Kyoto, Japan, 2004, pp. 79-84.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. of HLT 2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Doddington, \"Automatic evaluation of machine translation quality using n-gram co-occurrence statis- tics,\" in Proc. 
of HLT 2002, 2002.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "of extracted source/target phrases # of source/target phrases appearing in the corpus \u2022 Phrase pair extraction probability, i.e., # of sentences phrase pairs extracted # of sentences phrase pairs appearing in the corpus \u2022 Adjusted Dice coefficient, which is an extension of the measure proposed in [5], i.e., Dice(f ,\u1ebd)log(count(f ,\u1ebd) + 1)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "F. J. Och and H. Ney, \"Discriminative training and maximum entropy models for statistical machine translation,\" in Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), July 2002, pp. 295-302.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Monolingual corpora for unrestricted data track", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF3": { "text": "Bilingual corpora for unrestricted data track", "type_str": "table", "num": null, "html": null, "content": "
Test sets    Japanese IWSLT    Japanese ATR    Chinese IWSLT    Chinese LDC
devset1      16.9              29.5            56.6             462
devset2      17.6              32.9            56.1             449
testset      24.5              28.6            50.7             432
" }, "TABREF4": { "text": "Input language perplexity of trigram trained by supplied corpora", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF6": { "text": "Output language perplexity of n-grams for decoding Language pairs Translation input Training data BLEU scores NIST scores", "type_str": "table", "num": null, "html": null, "content": "
Language pairs    Translation input    Training data      BLEU scores    NIST scores
AE                transcription        supplied           0.4350         9.1821
AE                transcription        m unrestricted     0.4764         9.3674
CE                transcription        supplied           0.3275         8.0768
CE                transcription        m unrestricted     0.4112         8.8418
CE                transcription        mb unrestricted    0.3943         8.6804
CE                ASR 1-best           supplied           0.2739         6.5185
CE                ASR 1-best           mb unrestricted    0.2965         6.9416
JE                transcription        supplied           0.3669         7.9669
JE                transcription        m unrestricted     0.3679         8.1207
JE                transcription        mb unrestricted    0.3932         8.6442
JE                ASR 1-best           supplied           0.3881         8.3855
JE                ASR 1-best           mb unrestricted    0.3762         8.3502
KE                transcription        supplied           0.3218         7.8489
KE                transcription        m unrestricted     0.3497         8.0160
" }, "TABREF7": { "text": "NTT results of evaluation campaign ing of the Association for Computational Linguistics (ACL), July 2003, pp. 160-167.", "type_str": "table", "num": null, "html": null, "content": "" } } } }