{ "paper_id": "Y15-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:41:57.843535Z" }, "title": "English to Chinese Translation: How Chinese Character Matters?", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "zhaohai@cs.sjtu.edu.cn" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word segmentation is helpful in Chinese natural language processing in many aspects. However it is showed that different word segmentation strategies do not affect the performance of Statistical Machine Translation (SMT) from English to Chinese significantly. In addition, it will cause some confusions in the evaluation of English to Chinese SMT. So we make an empirical attempt to translation English to Chinese in the character level, in both the alignment model and language model. A series of empirical comparison experiments have been conducted to show how different factors affect the performance of character-level English to Chinese SMT. We also apply the recent popular continuous space language model into English to Chinese SMT. The best performance is obtained with the BLEU score 41.56, which improve baseline system (40.31) by around 1.2 BLEU score.", "pdf_parse": { "paper_id": "Y15-1031", "_pdf_hash": "", "abstract": [ { "text": "Word segmentation is helpful in Chinese natural language processing in many aspects. However it is showed that different word segmentation strategies do not affect the performance of Statistical Machine Translation (SMT) from English to Chinese significantly. In addition, it will cause some confusions in the evaluation of English to Chinese SMT. So we make an empirical attempt to translation English to Chinese in the character level, in both the alignment model and language model. A series of empirical comparison experiments have been conducted to show how different factors affect the performance of character-level English to Chinese SMT. We also apply the recent popular continuous space language model into English to Chinese SMT. The best performance is obtained with the BLEU score 41.56, which improve baseline system (40.31) by around 1.2 BLEU score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word segmentation is necessary in most Chinese language processing doubtlessly, because there are no natural spaces between characters in Chinese text (Xi et al., 2012) . 
It is defined in this paper as character-based segmentation if Chinese sentence is segmented into characters, otherwise as word segmentation.", "cite_spans": [ { "start": 151, "end": 168, "text": "(Xi et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Statistical Machine Translation (SMT) in which Chinese is target language, few work have shown that better word segmentation will lead to better result in SMT (Zhao et al., 2013; Chang et al., 2008; Zhang et al., 2008) . Recently Xi et al. (2012) demonstrate that Chinese character alignment can improve both of alignment quality and translation performance, which also motivates us the hypothesis whether word segmentation is not even necessary for SMT where Chinese as target language.", "cite_spans": [ { "start": 162, "end": 181, "text": "(Zhao et al., 2013;", "ref_id": "BIBREF24" }, { "start": 182, "end": 201, "text": "Chang et al., 2008;", "ref_id": "BIBREF6" }, { "start": 202, "end": 221, "text": "Zhang et al., 2008)", "ref_id": "BIBREF20" }, { "start": 233, "end": 249, "text": "Xi et al. (2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From the view of evaluation, the difference between the word-based segmentation methods will also makes the evaluation of SMT where Chinese as target language confusing. The automatic evaluation methods (such as BLEU and NIST BLEU score) in SMT are mostly based on n-gram precision. If the segmentation of test sets are different, the elements of the n-gram of test sets will also be different, which means that the evaluation is made on different test sets. To evaluate the quality of Chinese translation output, the International Workshop on Spoken Language Translation in 2005 (IWSLT'2005) used the word-level BLEU metric (Papineni et al.,2002) . However, IWSLT'08 and NIST'08 adopted character-level evaluation metrics to rank the submitted systems. Although there are also a lot of other works on automatic evaluation of SMT, such as METEOR (Lavie and Agarwal, 2007) , GTM (Melamed et al., 2003) and TER (Snover et al., 2006) , whether word or character is more suitable for automatic evaluation of Chinese translation output has not been systematically investigated (Li et al., 2011) . Recently, different kinds of characterlevel SMT evaluation metrics are proposed, which also support that character-level SMT may have its own advantage accordingly (Li et al., 2011; Liu and Ng, 2012) .", "cite_spans": [ { "start": 580, "end": 592, "text": "(IWSLT'2005)", "ref_id": null }, { "start": 625, "end": 647, "text": "(Papineni et al.,2002)", "ref_id": null }, { "start": 846, "end": 871, "text": "(Lavie and Agarwal, 2007)", "ref_id": null }, { "start": 874, "end": 900, "text": "GTM (Melamed et al., 2003)", "ref_id": null }, { "start": 909, "end": 930, "text": "(Snover et al., 2006)", "ref_id": "BIBREF16" }, { "start": 1072, "end": 1089, "text": "(Li et al., 2011)", "ref_id": null }, { "start": 1256, "end": 1273, "text": "(Li et al., 2011;", "ref_id": null }, { "start": 1274, "end": 1291, "text": "Liu and Ng, 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditionally, Back-off N-gram Language Models (BNLM) (Chen and Goodman, 1996; Chen and Goodman, 1998; Stolcke, 2002) are being widely used for probability estimation. 
For a better probability estimation method, recently, Continuous-Space Language Models (CSLM), especially Neural Network Language Models (NNLM) (Bengio et al., 2003; Schwenk, 2007; Le et al., 2011) are being used in SMT (Schwenk et al., 2006; Son et al., 2010; Schwenk et al., 2012; Son et al., 2012; Wang et al., 2013) . These works have shown that CSLMs can improve the BLEU scores of SMT when compared with BNLMs, on the condition that the training data for language modeling are in the same size. However, in practice, CSLMs have not been widely used in SMT mainly due to high computational costs of training and using CSLMs. Since the using costs of CSLMs are very high, it is difficult to use C-SLMs in decoding directly. A common approach in SMT using CSLMs is the two pass approach, or nbest reranking. In this approach, the first pass uses a BNLM in decoding to produce an n-best list. Then, a CSLM is used to rerank those n-best translations in the second pass (Schwenk et al., 2006; Son et al., 2010; Schwenk et al., 2012; Son et al., 2012) . Nearly all of the previous works only conduct CSLMs on English, we conduct CSLM on Chinese in this paper. Vaswani et al. propose a method for reducing the training cost of CSLM and apply it into SMT decoder (Vaswani et al., 2013) . Some other studies try to implement neural network LM or translation model for SMT (Gao et al., 2014; Devlin et al., 2014; Zhang et al., 2014; Auli et al., 2013; Liu et al., 2013; Sundermeyer et al., 2014; Cho et al., 2014; Zou et al., 2013; Lauly et al., 2014; Kalchbrenner and Blunsom, 2013) .", "cite_spans": [ { "start": 54, "end": 78, "text": "(Chen and Goodman, 1996;", "ref_id": "BIBREF7" }, { "start": 79, "end": 102, "text": "Chen and Goodman, 1998;", "ref_id": null }, { "start": 103, "end": 117, "text": "Stolcke, 2002)", "ref_id": null }, { "start": 312, "end": 333, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF3" }, { "start": 334, "end": 348, "text": "Schwenk, 2007;", "ref_id": "BIBREF14" }, { "start": 349, "end": 365, "text": "Le et al., 2011)", "ref_id": null }, { "start": 388, "end": 410, "text": "(Schwenk et al., 2006;", "ref_id": "BIBREF12" }, { "start": 411, "end": 428, "text": "Son et al., 2010;", "ref_id": "BIBREF17" }, { "start": 429, "end": 450, "text": "Schwenk et al., 2012;", "ref_id": "BIBREF13" }, { "start": 451, "end": 468, "text": "Son et al., 2012;", "ref_id": "BIBREF18" }, { "start": 469, "end": 487, "text": "Wang et al., 2013)", "ref_id": null }, { "start": 1139, "end": 1161, "text": "(Schwenk et al., 2006;", "ref_id": "BIBREF12" }, { "start": 1162, "end": 1179, "text": "Son et al., 2010;", "ref_id": "BIBREF17" }, { "start": 1180, "end": 1201, "text": "Schwenk et al., 2012;", "ref_id": "BIBREF13" }, { "start": 1202, "end": 1219, "text": "Son et al., 2012)", "ref_id": "BIBREF18" }, { "start": 1429, "end": 1451, "text": "(Vaswani et al., 2013)", "ref_id": null }, { "start": 1537, "end": 1555, "text": "(Gao et al., 2014;", "ref_id": null }, { "start": 1556, "end": 1576, "text": "Devlin et al., 2014;", "ref_id": null }, { "start": 1577, "end": 1596, "text": "Zhang et al., 2014;", "ref_id": "BIBREF22" }, { "start": 1597, "end": 1615, "text": "Auli et al., 2013;", "ref_id": "BIBREF2" }, { "start": 1616, "end": 1633, "text": "Liu et al., 2013;", "ref_id": null }, { "start": 1634, "end": 1659, "text": "Sundermeyer et al., 2014;", "ref_id": null }, { "start": 1660, "end": 1677, "text": "Cho et al., 2014;", "ref_id": null }, { "start": 1678, "end": 1695, "text": "Zou et al., 2013;", "ref_id": "BIBREF25" }, { 
"start": 1696, "end": 1715, "text": "Lauly et al., 2014;", "ref_id": null }, { "start": 1716, "end": 1747, "text": "Kalchbrenner and Blunsom, 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder is organized as follows: In Section 2, we will review the background of English to Chinese SMT. The character based SMT will be proposed in Section 3. In Section 4, the experiments will be conducted and the results will be analyzed . We will conclude our work in the Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The ancient Chinese (or Classical Chinese, \u6587\u8a00\u6587) can be conveniently split into characters, for most characters in ancient Chinese still keep understood by one who only knows modern Chinese (or Written Vernacular Chinese, \u767d\u8bdd\u6587) words. For example, \"\u4e09\u4eba\u884c\uff0c\u5219\u5fc5\u6709\u6211\u5e08\u7109\u3002\" is one of the popular sentences in the Analects (\u8bba\u8bed), and its corresponding modern Chinese words and English meaning are shown in TABLE 1. From the table, we can see that the characters in ancient Chinese have independent meaning, but most of the characters in modern Chinese do not, and they must combine together into words to make sense. If we split modern Chinese sentences into characters, the semantic meaning in the words will partially lose. Whether or not this semantic function of Chinese word can be partly replaced by the alignment model and Language Model (LM) of character-based SMT will be shown in this paper. SMT as a research domain started in the late 1980s at IBM (Brown et al., 1993) , which maps individual words to words and allows for deletion and insertion of words. Lately, various research-es have shown better translation quality with phrase translation. Phrase-based SMT can be traced back to Och's alignment template model (Och and Ney, 2004) , which can be re-framed as a phrase translation system. Other researchers augmented their systems with phrase translation, such as Yamada and Knight (Yamada and Knight, 2001) , who used phrase translation in a syntax-based model. The phrase translation model is based on the noisy channel model. Bayes rule is mostly used to reformulate the translation probability for translating a foreign sentence f into target e as:", "cite_spans": [ { "start": 940, "end": 964, "text": "IBM (Brown et al., 1993)", "ref_id": null }, { "start": 1213, "end": 1232, "text": "(Och and Ney, 2004)", "ref_id": "BIBREF10" }, { "start": 1383, "end": 1408, "text": "(Yamada and Knight, 2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "argmax e p(e|f ) = argmax e p(f |e)p(e)", "eq_num": "(1)" } ], "section": "Ancient", "sec_num": null }, { "text": "This allows for the probabilities of an LM p(e) and a separated translation model p(f |e). During decoding, the foreign input sentence f is segmented into a sequence of phrases f i 1 . It is assumed a uniform probability distribution over all possible segmentations. Each foreign phrase f i in f i 1 is translated into an target phrase e i . The target phrases may be reordered. Phrase translation is modeled by a probability distribution \u2126(f i |e i ) . 
Recall that due to the Bayes rule, the translation direction is inverted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "Reordering of the output phrases is modeled by a relative distortion probability distribution d(start i , end i\u22121 ), where start i denotes the start position of the foreign phrase that is translated into the ith target phrase, and end i\u22121 denotes the end position of the foreign phrase that was translated into the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "(i \u2212 1) \u2212 th target phrase. A simple distortion model d(start i , end i\u22121 ) = \u03b1 |start i \u2212end i\u22121 \u22121|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "with an appropriate value for the parameter \u03b1 is set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "In order to calibrate the output length, a factor \u03c9 (called word cost) for each generated English word in addition to the tri-gram LM p LM is proposed. This is a simple means to optimize performance. Usually, this factor is larger than 1, biasing toward longer output. In summary, the best output sentence given a foreign input sentence f according to the model is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "argmaxep(e|f ) = argmaxep(f |e)pLM (e)\u03c9 length(e) , (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "where p(f |e) is decomposed into:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(f i 1 |e i 1 ) = \u03d5 i 1 \u2126(f i |e i )d(start i , end i\u22121 ).", "eq_num": "(3)" } ], "section": "Ancient", "sec_num": null }, { "text": "In this paper, the f stands for English and the e stands for Chinese. In short, there are three main parts both in the English to Chinese and Chinese to English SMT: the alignment p(f |e), the LM p(e) and the parameters training (tuning). When Chinese is the foreign language, there is only the alignment model p(f |e) containing Chinese language processing. Contrarily, when Chinese is the target language, both the the alignment part p(f |e) and the LM p(e) will help retrieve the sematic meaning in the characters which is originally represented by words. So it is possible that we can process the English to Chinese in character level without word segmentation, which may also avoid the confusion in the evaluation part as proposed above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "3 Character-based versus Word-based SMT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "The standards of segmentation between word-based and character-based English to Chinese translation are different, as well as the standard of the evaluation of them. That is, the test data contains words as the smallest unit for word-based SMT, and characters for character-based SMT. So the translated sentences of word-based translation will be converted into character-based sentence, and evaluated together with character-based translation BLEU score for fair comparison. 
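The conversion from word-segmented output to character-level tokens for this evaluation can be sketched as follows. This is a minimal illustration (the function name and regular expression are ours, not part of any toolkit); runs of Latin letters, digits and other non-Chinese symbols are kept as single tokens, matching the treatment of non-Chinese text described below.

```python
import re

# One token per Chinese character; consecutive non-Chinese, non-space
# symbols (Latin letters, digits, ...) stay together as a single token.
_SPLIT = re.compile(r'[\u4e00-\u9fff]|[^\u4e00-\u9fff\s]+')

def to_character_tokens(segmented_line):
    """Convert a word-segmented Chinese line into character-level tokens."""
    tokens = []
    for word in segmented_line.split():
        tokens.extend(_SPLIT.findall(word))
    return tokens

# Toy example: a word-segmented hypothesis is re-tokenized before computing
# character-level BLEU against a reference converted in the same way.
print(to_character_tokens("年 增长 百分之 200"))
# ['年', '增', '长', '百', '分', '之', '200']
```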
We select two popular segmentation segmenters, one of which is based on Forward Maximum Matching (FMM) algorithm with the lexicon of (Low et al., 2005) , and the other is based on Conditional Random Fields (CRF) with the same implementation of (Zhao et al., 2006) . Because most Chinese words contains 1 to 4 characters, so we set the word-based LM as default trigram in SRILM, and character-based LM for 5-gram. All the different methods share the same other default parameters in the toolkits which will be further introduced in Section 4. There seems to be no ambiguity in different character segmentations, however English characters, numbers and other symbols are also contained in the corpus. If they are split into \"characters\" like \"\u5e74 \u589e \u957f \u767e \u5206 \u4e4b 2 0 0\" ( 200 % increment per year) or \"J o r d a n \u662f \u4f1f \u5927 \u7684 \u7bee \u7403 \u8fd0 \u52a8 \u5458\" (Jordan is a great basketball player), they will cause a lot of misun-derstanding. So the segmentation is only used for Chinese characters, and the foreign letters, numbers and other symbols in Chinese text are still kept consequent.", "cite_spans": [ { "start": 609, "end": 627, "text": "(Low et al., 2005)", "ref_id": null }, { "start": 720, "end": 739, "text": "(Zhao et al., 2006)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "Shown in Table 2 , the BLEU score of SMT system with character-based segmenter is much higher than both FMM and CRF segmenters. The wordbased English to Chinese SMT system is trained and tuned in word level and evaluated in character level, so we use the character-based LM to re-score the nbest-list of the results of the FMM and CRF segmenters. Firstly we convert the translated 1000-best candidates for each sentence into characters. Then calculate their LM scores by the character-based LM, and replace the word-based LM score with character-based LM score. At last we re-calculate the global score to get the new 1-best candidate with the same tuning weight as before. The BLEU score of re-ranked method is slightly higher than before, but still much less than the result of character segmenter. Although we can not conclude the character-based segmenter is better simply according to this experiment, this result gives us the confidence that our approach is reasonable and feasible at least.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Ancient", "sec_num": null }, { "text": "We use the patent data for the Chinese to English patent translation subtask from the NTCIR-9 patent translation task (Goto et al., 2011). The parallel training, development, and test data consists of 1 million (M), 2,000, and 2,000 sentences, respectively 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiment", "sec_num": "4" }, { "text": "The basic settings of the NTCIR-9 English to Chinese translation baseline system (Goto et al., 2011) was followed 2 . The Moses phrase-based SMT system was applied (Koehn et al., 2007) , together with GIZA++ (Och and Ney, 2003) for alignment and MERT (Och, 2003) for tuning on the development data. 14 standard SMT features were used: five translation model scores, one word penalty score, seven distortion scores and one LM score. 
The translation performance was measured by the caseinsensitive BLEU on the tokenized test data 3 .", "cite_spans": [ { "start": 164, "end": 184, "text": "(Koehn et al., 2007)", "ref_id": null }, { "start": 208, "end": 227, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF9" }, { "start": 251, "end": 262, "text": "(Och, 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiment", "sec_num": "4" }, { "text": "In this subsection we investigate two factors in the phrase alignment. Four different kinds of methods for heuristics and three kinds of maximum length of phrases in phrase table are used for word alignment, with other default parameters in the toolkits. The results are shown in Table 3 . The grow \u2212 diag \u2212 f inal \u2212 and, which will be set as default without special statement in the following sections, is shown better than other settings, and the BLEU score do not increase as the maximum length of phrases increases. ", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "The Alignment", "sec_num": "4.1" }, { "text": "In this part, we will investigate how the factors in the n-gram LM influence the whole system. The scale of the training corpus is one of the most important factors to LM. And \"more data is better data\" (Brants and Xu, 2009) has been proved to be one of the most important rules for constructing a LMs. First we randomly divide the whole training sets into 4 parts equally. We build the LM with 1, 2 and 4 parts (i.e. for 1/4, 1/2 and the whole corpus respectively), with other setting as default. Then, we add the dictionary information to the LM. The pr stands for the size of the dictionary and the pf stands for the characters' frequency in the dictionary. The results in Table 4 show that using the whole corpus We select the three most popular smoothing algorithms, Witten-Bell, Kneser-Ney (KN), and improved Kneser-Ney (improved KN), and compare their performance in the character-level English to Chinese SMT task. As shown in Table 5 , when n is too small , the result is less satisfactory, and the BLEU score continues increase as n increases. However, the BLEU score begins to decrease when the LM becomes too long. The best 9-gram LM with Witten-Bell smoothing, corresponding to 5-gram to 7-gram in word-based LM, which is the widestly used in word-bases English to Chinese SMT.", "cite_spans": [ { "start": 203, "end": 224, "text": "(Brants and Xu, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 676, "end": 683, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 935, "end": 942, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "The N -gram Language Model", "sec_num": "4.2" }, { "text": "We have shown that the different lengths of n-gram LMs make a significant influence in the English to Chinese translation. The 4-gram BLEU score is broadly accepted as the evaluate standard when we tune the other parameters using the minimum error rate training, which means that the MERT stage will not stop until it reaches the highest 4-gram BLEU on the development set. However, the same sentence To evaluate this hypothesis, the alignment model is set the same as the best performance in Table 3 , and 5-gram LM with improved KN smoothing is set for LM. The results in Table 6 show that singly increasing the n-gram of MERT can not improve the performance of SMT. 
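To make the smoothing algorithms compared in Section 4.2 concrete, the following is a small self-contained sketch of an interpolated Witten-Bell character n-gram model. It is only an illustration of the estimator (the class and its methods are our own naming; the experiments reported in this paper use SRILM), following the usual formulation P(w|h) = (c(h,w) + T(h)·P(w|h')) / (c(h) + T(h)), where h' drops the oldest character of the context h and T(h) is the number of distinct characters observed after h.

```python
from collections import defaultdict

class WittenBellCharLM:
    """Interpolated Witten-Bell n-gram model over Chinese characters (toy sketch)."""

    def __init__(self, order):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> char -> count

    def train(self, sentences):
        vocab = set()
        for sent in sentences:
            chars = ['<s>'] * (self.order - 1) + list(sent) + ['</s>']
            vocab.update(chars)
            for i in range(self.order - 1, len(chars)):
                for n in range(self.order):              # collect contexts of every length
                    context = tuple(chars[i - n:i])
                    self.counts[context][chars[i]] += 1
        self.vocab_size = len(vocab)

    def prob(self, char, context):
        context = tuple(context[-(self.order - 1):]) if self.order > 1 else ()
        return self._prob(char, context)

    def _prob(self, char, context):
        if len(context) == 0:
            backoff = 1.0 / self.vocab_size              # uniform backstop distribution
        else:
            backoff = self._prob(char, context[1:])      # drop the oldest character
        seen = self.counts.get(context)
        if not seen:
            return backoff
        total = sum(seen.values())                       # c(h)
        types = len(seen)                                # T(h)
        return (seen.get(char, 0) + types * backoff) / (total + types)

# Toy usage: a 5-gram character model trained on two short sentences.
lm = WittenBellCharLM(order=5)
lm.train(["我们喜欢机器翻译", "我们喜欢翻译"])
print(lm.prob('翻', list("我们喜欢机器")))
```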
", "cite_spans": [], "ref_spans": [ { "start": 493, "end": 500, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 574, "end": 581, "text": "Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "The Tuning", "sec_num": "4.3" }, { "text": "We have investigated how different factors affect the performance of English to Chinese SMT. However, most of the other factors are fixed when we discuss one single factor. So in this subsection, we analyze how the combined factors perform in the whole system. Firstly, we combine the parameters of the smoothing methods and the maximum length of phrases together. The LM is set to 9-gram and grow \u2212 diag \u2212 f inal \u2212 and is set for alignment, which has the best BLEU score in n-gram LM experiments. Other factors is set as default in the toolkits. The results are shown in Table 7 . Then, the length of n-gram MERT and the different order n-gram LM are tuned together. We set the Improved KN as the smoothing method, and others as default in the toolkits. The results are shown in Table 8 . At last, the length of n-gram MERT and the smoothing methods are tuned together. The LM is set as 9-gram, the best BLEU score in n-gram LM experiments, and other factors set as default in the toolkits. The results are shown in Table 9 .", "cite_spans": [], "ref_spans": [ { "start": 572, "end": 579, "text": "Table 7", "ref_id": "TABREF12" }, { "start": 780, "end": 787, "text": "Table 8", "ref_id": "TABREF14" }, { "start": 1017, "end": 1024, "text": "Table 9", "ref_id": "TABREF16" } ], "eq_spans": [], "section": "Parameter Combinations", "sec_num": "4.4" }, { "text": "Among different parameters-combined setting, BLEU score is from 38.08 to 40.75, and the best performance is not gained when all the factors which singly perform best are put together. The highest BLEU score occurs when the 9-gram LM, the 7gram MERT method and the improved KN smoothing algorithm. This BLEU score is about one percent higher than our baseline. At last, we show three parameter combinations with their NIST scores that bring the best performance up to now in Table 10 .", "cite_spans": [], "ref_spans": [ { "start": 474, "end": 482, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Parameter Combinations", "sec_num": "4.4" }, { "text": "Traditional Backoff N -gram LMs (BNLMs) have been widely used in many NLP tasks (Jia and Zhao, 2014; Zhang et al., 2012; Xu and Zhao, 2012) . Recently, Continuous-Space Language Models (CSLMs), especially Neural Network Language Models (NNLMs) (Bengio et al., 2003; Schwenk, 2007; Mikolov et al., 2010; Le et al., 2011) , are actively used in SMT (Schwenk et al., 2006; Schwenk et al., 2006; Schwenk et al., 2012; Son et al., 2012; Niehues and Waibel, 2012) . These models have demonstrated that CSLMs can improve BLEU scores of SMT over n-gram LMs with the same sized corpus for LM training. 
An attractive feature of C-SLMs is that they can predict the probabilities of ngrams outside the training corpus more accurately.", "cite_spans": [ { "start": 89, "end": 100, "text": "Zhao, 2014;", "ref_id": "BIBREF22" }, { "start": 101, "end": 120, "text": "Zhang et al., 2012;", "ref_id": "BIBREF21" }, { "start": 121, "end": 139, "text": "Xu and Zhao, 2012)", "ref_id": null }, { "start": 244, "end": 265, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF3" }, { "start": 266, "end": 280, "text": "Schwenk, 2007;", "ref_id": "BIBREF14" }, { "start": 281, "end": 302, "text": "Mikolov et al., 2010;", "ref_id": null }, { "start": 303, "end": 319, "text": "Le et al., 2011)", "ref_id": null }, { "start": 347, "end": 369, "text": "(Schwenk et al., 2006;", "ref_id": "BIBREF12" }, { "start": 370, "end": 391, "text": "Schwenk et al., 2006;", "ref_id": "BIBREF12" }, { "start": 392, "end": 413, "text": "Schwenk et al., 2012;", "ref_id": "BIBREF13" }, { "start": 414, "end": 431, "text": "Son et al., 2012;", "ref_id": "BIBREF18" }, { "start": 432, "end": 457, "text": "Niehues and Waibel, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "A CSLM implemented in a multi-layer neural network contains four layers: the input layer projects all words in the context h i onto the projection layer (the first hidden layer); the second hidden layer and the output layer achieve the non-liner probability estimation and calculate the LM probability P (w i |h i ) for the given context (Schwenk, 2007) .", "cite_spans": [ { "start": 338, "end": 353, "text": "(Schwenk, 2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "The CSLM calculates the probabilities of al- Table 11 : CSLM Re-rank and decoding for TOP Performance l words in the vocabulary of the corpus given the context at once. However, due to too high computational complexity, the CSLM is only used to calculate the probabilities of a subset of the whole vocabulary. This subset is called a short-list, which consists of the most frequent words in the vocabulary. The CSLM also calculates the sum of the probabilities of all words not in the short-list by assigning a neuron. The probabilities of other words not in the short-list are obtained from an Backoff N-gram LM (BNLM) (Schwenk, 2007; Schwenk, 2010; Wang et al., 2013; Wang et al., 2015) . Let w i , h i be the current word and history, respectively. 
The CSLM with a BNLM calculates the probability of w i given h i , P (w i |h i ), as follows:", "cite_spans": [ { "start": 620, "end": 635, "text": "(Schwenk, 2007;", "ref_id": "BIBREF14" }, { "start": 636, "end": 650, "text": "Schwenk, 2010;", "ref_id": "BIBREF15" }, { "start": 651, "end": 669, "text": "Wang et al., 2013;", "ref_id": null }, { "start": 670, "end": 688, "text": "Wang et al., 2015)", "ref_id": null } ], "ref_spans": [ { "start": 45, "end": 53, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "P (w i |h i ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 Pc(w i |h i ) \u2211 w\u2208V 0 Pc(w|h i ) P s (h i ) if w i \u2208 V 0 P b (w i |h i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "otherwise (4) where V 0 is the short-list, P c (\u2022) is the probability calculated by the CSLM, \u2211 w\u2208V 0 P c (w|h i ) is the summary of probabilities of the neuron for all the words in the short-list, P b (\u2022) is the probability calculated by the BNLM, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P s (h i ) = \u2211 v\u2208V 0 P b (v|h i ).", "eq_num": "(5)" } ], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "We may regard that the CSLM redistributes the probability mass of all words in the short-list, which is calculated by using the n-gram LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "Due to too high computational cost, it is difficult to use CSLMs in decoding directly. As mentioned in the introduction, a common approach in SMT using CSLMs is a two-pass procedure, or nbest re-ranking. In this approach, the first pass uses a BNLM in decoding to produce an n-best list. Then, a CSLM is used to re-rank those n-best translations in the second pass (Schwenk et al., 2006; Son et al., 2010; Schwenk et al., 2012; Son et al., 2012) .", "cite_spans": [ { "start": 365, "end": 387, "text": "(Schwenk et al., 2006;", "ref_id": "BIBREF12" }, { "start": 388, "end": 405, "text": "Son et al., 2010;", "ref_id": "BIBREF17" }, { "start": 406, "end": 427, "text": "Schwenk et al., 2012;", "ref_id": "BIBREF13" }, { "start": 428, "end": 445, "text": "Son et al., 2012)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "Because CSLM outperforms BNLM in probability estimation accuracy and BNLM outperforms C-SLM in computational time. To integrate CSLM more efficiently into decoding, some existing approaches calculate the probabilities of the n-grams before decoding and store them (Wang et al., 2013; Wang et al., 2014; Arsoy et al., 2013; Arsoy et al., 2014) in n-gram format. That is, n-grams from BNLM are used as the input of CSLM, and the output probabilities of CSLM together with the corresponding n-grams of BNLM constitute converted C-SLM. 
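The short-list combination of Eq. (4) that such a converted LM stores can be sketched as follows; in the conversion approach these values are simply computed ahead of decoding for every n-gram of the BNLM. The probability tables below are toy stand-ins for a trained CSLM and BNLM, and all names are ours rather than part of any toolkit.

```python
def combined_prob(char, history, short_list, cslm_prob, bnlm_prob):
    """Combine CSLM and back-off n-gram probabilities as in Eq. (4).

    Characters in the short-list V0 take the CSLM probability, renormalized
    over V0 and rescaled by the back-off mass P_s(h) of Eq. (5); every other
    character falls back to the BNLM directly.
    """
    if char in short_list:
        cslm_mass = sum(cslm_prob(v, history) for v in short_list)
        p_s = sum(bnlm_prob(v, history) for v in short_list)   # Eq. (5)
        return cslm_prob(char, history) / cslm_mass * p_s
    return bnlm_prob(char, history)

# Toy stand-ins over a five-character vocabulary.
vocab = ['我', '们', '翻', '译', '好']
short_list = {'我', '们', '翻'}                             # most frequent characters
cslm_prob = lambda w, h: {'我': 0.5, '们': 0.3, '翻': 0.2}.get(w, 0.0)
bnlm_prob = lambda w, h: 1.0 / len(vocab)                   # uniform back-off model

print(combined_prob('我', ('们',), short_list, cslm_prob, bnlm_prob))  # 0.3
print(combined_prob('好', ('们',), short_list, cslm_prob, bnlm_prob))  # 0.2
```

With these toy tables the combined distribution still sums to one over the vocabulary, which is exactly the redistribution of short-list probability mass described above.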
The converted CSLM is directly used in SMT, and its decoding speed is as fast as the n-gram LM.", "cite_spans": [ { "start": 264, "end": 283, "text": "(Wang et al., 2013;", "ref_id": null }, { "start": 284, "end": 302, "text": "Wang et al., 2014;", "ref_id": null }, { "start": 303, "end": 322, "text": "Arsoy et al., 2013;", "ref_id": "BIBREF0" }, { "start": 323, "end": 342, "text": "Arsoy et al., 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "From the above tables, we find the most important parameter for character-based English to Chinese translation is the LM, and other parameters just have a minor influence. To verify this observation, we use 9-gram character based CSLM (Schwenk et al., 2006) , with 4096 characters in the short list, the projection layer of dimension 256 and the hidden layer of dimension 192 are set in the CSLM exper-iments. (1) We add the CSLM score as the additional feature to re-rank the 1000-best candidates in the top three performance In Table 10 . The weight parameters were tuned by using Z-MERT (Zaidan, 2009) . This method is called CSLM Re-rank. (2) We follow (Wang et al., 2013)'s method and convert CSLM into n-gram LM. This converted CSLM can be directly applied to SMT decoding and called CSLM-decoding.", "cite_spans": [ { "start": 235, "end": 257, "text": "(Schwenk et al., 2006)", "ref_id": "BIBREF12" }, { "start": 590, "end": 604, "text": "(Zaidan, 2009)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 530, "end": 538, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "It is shown in Table 11 that the BLEU score nearly improve by 0.4 point to 0.6 point (CSLM Re-rank) and 0.6 point to 0.9 point (CSLM-decoding). This indicates that the CSLMs affect the performance of character based SMT in a significant way. This may indicate that the LM can take part place of the segmentation for character based English to Chinese SMT. A better character-based English to Chinese translation can be obtained by building a better LM.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 23, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Continues Space Language Model", "sec_num": "4.5" }, { "text": "Because the role of word segmentation in English to Chinese translation is arguable, an attempt of character-based English to Chinese translation seems to be necessary. In this paper, we have shown why character-based English to Chinese translation is necessary and feasible, and investigated how different factors perform in the system from the alignment, LM and the tuning aspects. Several empirical studies, including recent popular CSLM, have been done to show how to determine a optimal parameters for better SMT performance, and the results show that the LM is the most important factor for character-based English to Chinese translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Since we are the participants of NTCIR-9, so we have the bilingual sides of the evaluation data.2 We are aware that the original NTCIR patentMT baseline is designed for Chinese-English translation. 
In this paper, we follow the same setting of the baseline system, only convert the source language and the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is available at http://www.itl.nist.gov/iad/ mig/tests/mt/2009/ PACLIC 29", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We appreciate the anonymous reviewers for valuable comments and suggestions on our paper. Rui Wang, Hai Zhao and Bao-Liang Lu were partially supported by the National Natural Science Foundation of China (No. 60903119, No. 61170114, and No. 61272248) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Association for Computational Linguistics, ACL '96, pages 310-318, Santa Cruz, California, USA. Association for Computational Linguistics. Stanley F. Chen and Joshua Goodman. 1998 ", "cite_spans": [ { "start": 139, "end": 179, "text": "Stanley F. Chen and Joshua Goodman. 1998", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Converting neural network language models into back-off language models for efficient decoding in automatic speech recognition", "authors": [ { "first": "Ebru", "middle": [], "last": "Arsoy", "suffix": "" }, { "first": "Stanley", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Sethy", "suffix": "" } ], "year": 2013, "venue": "Proceeding of International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ebru Arsoy, Stanley F. Chen, Bhuvana Ramabhadran, and Abhinav Sethy. 2013. Converting neural network language models into back-off language models for ef- ficient decoding in automatic speech recognition. In Proceeding of International Conference on Acoustic- s, Speech and Signal Processing, Vancouver, Canada, May.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Converting neural network language models into back-off language models for efficient decoding in automatic speech recognition", "authors": [ { "first": "Ebru", "middle": [], "last": "Arsoy", "suffix": "" }, { "first": "Stanley", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Sethy", "suffix": "" } ], "year": 2014, "venue": "Speech, and Language Processing", "volume": "22", "issue": "", "pages": "184--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ebru Arsoy, Stanley F. Chen, Bhuvana Ramabhadran, and Abhinav Sethy. 2014. Converting neural net- work language models into back-off language models for efficient decoding in automatic speech recognition. 
IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 22(1):184-192.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Joint language and translation modeling with recurrent neural networks", "authors": [ { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1044--1054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation model- ing with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1044-1054, Seattle, Washington, USA, October.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research (JMLR)", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Research (JMLR), 3:1137-1155, March.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Distributed language models", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Tutorial Abstracts, NAACL-Tutorials '09", "volume": "", "issue": "", "pages": "3--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants and Peng Xu. 2009. Distributed lan- guage models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, Companion Volume: Tutorial Abstracts, NAACL-Tutorials '09, pages 3-4, Boulder, Colorado, USA. Association for Computational Lin- guistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Comput. Linguist", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. 
The mathematic- s of statistical machine translation: parameter estima- tion. Comput. Linguist., 19(2):263-311, June.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Optimizing Chinese word segmentation for machine translation performance", "authors": [ { "first": "Pi-Chuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08", "volume": "", "issue": "", "pages": "224--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmen- tation for machine translation performance. In Pro- ceedings of the Third Workshop on Statistical Machine Translation, StatMT '08, pages 224-232, Columbus, Ohio, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th annual meeting on t neural network based language model. In INTER-SPEECH", "volume": "", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An empir- ical study of smoothing techniques for language mod- eling. In Proceedings of the 34th annual meeting on t neural network based language model. In INTER- SPEECH, pages 1045-1048.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Continuous space language models using restricted boltzmann machines", "authors": [ { "first": "Jan", "middle": [], "last": "Niehues", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the International Workshop for Spoken Language Translation, IWSLT 2012", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Niehues and Alex Waibel. 2012. Continuous space language models using restricted boltzmann machines. In Proceedings of the International Workshop for Spo- ken Language Translation, IWSLT 2012, pages 311- 318, Hong Kong.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Comput. Linguist", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systemat- ic comparison of various statistical alignment models. Comput. Linguist., 29(1):19-51, March.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Comput. 
Linguist", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignmen- t template approach to statistical machine translation. Comput. Linguist., 30(4):417-449, December.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics, pages 160-167, Sapporo, Japan, July. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Continuous space language models for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dchelotte", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL on Main conference poster sessions, COLING-ACL '06", "volume": "", "issue": "", "pages": "723--730", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Daniel Dchelotte, and Jean-Luc Gau- vain. 2006. Continuous space language models for statistical machine translation. In Proceedings of the COLING/ACL on Main conference poster sessions, COLING-ACL '06, pages 723-730, Sydney, Australi- a. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Large, pruned or continuous space language models on a gpu for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Rousseau", "suffix": "" }, { "first": "Mohammed", "middle": [], "last": "Attik", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, WLM '12", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Anthony Rousseau, and Mohammed Attik. 2012. Large, pruned or continuous space lan- guage models on a gpu for statistical machine transla- tion. In Proceedings of the NAACL-HLT 2012 Work- shop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, WLM '12, pages 11-19, Montreal, Canada, June. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Continuous space language models", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2007, "venue": "Computer Speech and Language", "volume": "21", "issue": "3", "pages": "492--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk. 2007. Continuous space language models. 
Computer Speech and Language, 21(3):492- 518.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Continuous-space language models for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2010, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "", "issue": "", "pages": "137--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk. 2010. Continuous-space language models for statistical machine translation. The Prague Bulletin of Mathematical Linguistics, pages 137-146.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of trans- lation edit rate with targeted human annotation. In In Proceedings of Association for Machine Translation in the Americas, pages 223-231.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Training continuous space language models: some practical issues", "authors": [ { "first": "Le", "middle": [], "last": "Hai Son", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", "volume": "", "issue": "", "pages": "778--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le Hai Son, Alexandre Allauzen, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2010. Training continuous space language models: some practical issues. In Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 778-788, Cambridge, Massachusetts, October. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Continuous space translation models with neural networks", "authors": [ { "first": "Le", "middle": [], "last": "Hai Son", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference 39th Annual Meeting on Association for Computational Linguistics, ACL '01", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le Hai Son, Alexandre Allauzen, and Fran\u00e7ois Yvon. 2012. Continuous space translation models with neu- ral networks. In Proceedings of the 2012 Conference 39th Annual Meeting on Association for Computation- al Linguistics, ACL '01, pages 523-530, Toulouse, France. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems", "authors": [ { "first": "Omar", "middle": [ "F" ], "last": "Zaidan", "suffix": "" } ], "year": 2009, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "91", "issue": "", "pages": "79--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar F. Zaidan. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics, 91:79-88.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improved statistical machine translation by multiple Chinese word segmentation", "authors": [ { "first": "Ruiqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Keiji", "middle": [], "last": "Yasuda", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiqiang Zhang, Keiji Yasuda, and Eiichiro Sumita. 2008. Improved statistical machine translation by multiple Chinese word segmentation. In Proceedings of the Third Workshop on Statistical Machine Trans- lation, StatMT '08, pages 216-223, Columbus, Ohio, USA. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A machine learning approach to convert CCGbank to Penn treebank", "authors": [ { "first": "Xiaotian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Hui", "suffix": "" } ], "year": 2012, "venue": "Proceedings of 24th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "535--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaotian Zhang, Hai Zhao, and Cong Hui. 2012. A ma- chine learning approach to convert CCGbank to Penn treebank. In Proceedings of 24th International Con- ference on Computational Linguistics, pages 535-542, Mumbai, India, December.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning hierarchical translation spans", "authors": [ { "first": "Jingyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "183--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2014. Learning hierarchical translation spans. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 183- 188, Doha, Qatar, October.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An improved Chinese word segmentation system with conditional random field", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "162--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An im- proved Chinese word segmentation system with con- ditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 162-165, Sydney, Australia, July. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An empirical study on word segmentation for Chinese machine translation", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 14th international conference on Computational Linguistics and Intelligent Text Processing", "volume": "2", "issue": "", "pages": "248--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Masao Utiyama, Eiichiro Sumita, and Bao- Liang Lu. 2013. An empirical study on word seg- mentation for Chinese machine translation. In Pro- ceedings of the 14th international conference on Com- putational Linguistics and Intelligent Text Processing -Volume 2, CICLing'13, pages 248-263, Berlin, Hei- delberg. Springer-Verlag.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Bilingual word embeddings for phrase-based machine translation", "authors": [ { "first": "Will", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1393--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christo- pher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 1393-1398, Seattle, Washington, USA, October.", "links": null } }, "ref_entries": { "TABREF1": { "type_str": "table", "text": "Ancient Chinese and Modern Chinese", "num": null, "html": null, "content": "" }, "TABREF3": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
" }, "TABREF5": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
: Comparison Between Word-based Translation and Character-based Translation
for language training is necessary and using the dictionary information does not improve the translation performance.
Size of the Corpus   BLEU (dev)   BLEU (test)
1/4 Corpus           42.30        39.76
1/2 Corpus           42.51        40.19
The whole Corpus     42.80        40.31
Dictionaries
pr=10k pf=5          42.63        40.01
pr=10k pf=10         42.60        40.17
pr=20k pf=10         42.73        40.02
No Dictionary        42.80        40.31
" }, "TABREF6": { "type_str": "table", "text": "", "num": null, "html": null, "content": "" }, "TABREF8": { "type_str": "table", "text": "Different Smoothing Methods for LM becomes longer if the character based segmentation is applied. That is, four words may be segmented into around 10 characters. Will the system gain a better performance if the n-gram of BLEU score in the MERT convergence standard increases as the ngram in the LM increases?", "num": null, "html": null, "content": "
" }, "TABREF10": { "type_str": "table", "text": "Different Setting on MERT", "num": null, "html": null, "content": "
" }, "TABREF12": { "type_str": "table", "text": "Parameter Combinations of Smoothing Methods and Maximum Length of Phrase Alignment", "num": null, "html": null, "content": "
" }, "TABREF14": { "type_str": "table", "text": "Parameter Combinations of n-gram LM and ngram MERT", "num": null, "html": null, "content": "
" }, "TABREF16": { "type_str": "table", "text": "Parameter Combinations of n-gram MERT and Smoothing Methods", "num": null, "html": null, "content": "
" } } } }