{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:18:25.156672Z" }, "title": "ISI's 2005 Statistical Machine Translation Entries", "authors": [ { "first": "Steve", "middle": [], "last": "DeNeefe", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": { "addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", "postCode": "90292", "region": "CA" } }, "email": "sdeneefe@isi.edu" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": { "addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", "postCode": "90292", "region": "CA" } }, "email": "knight@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "ISI entered two statistical machine translation systems in the IWSLT evaluation this year: one was phrase-based and the other syntax-based. The syntax-based system represents the results of a current research effort, while the phrase-based system is representative of the current techniques in state-of-the-art machine translation. This paper primarily describes the syntax-based system and its comparison to the phrase-based system. We will give a brief overview of the components of the systems and discuss the performance on the IWSLT development data, the evaluation results, and some post-evaluation results.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "ISI entered two statistical machine translation systems in the IWSLT evaluation this year: one was phrase-based and the other syntax-based. The syntax-based system represents the results of a current research effort, while the phrase-based system is representative of the current techniques in state-of-the-art machine translation. This paper primarily describes the syntax-based system and its comparison to the phrase-based system.
We will give a brief overview of the components of the systems and discuss the performance on the IWSLT development data, the evaluation results, and some post-evaluation results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical phrase-based machine translation is currently the state-of-the-art in many translation tasks, achieving results often surpassing other methods [1] . Systems that strive to improve on the current results of phrase-based machine translation often incorporate a higher-level notion of the structure in language, sometimes just the hierarchical structure [2] and other times a fully syntactic model. For the past several years, the ISI/USC machine translation group has been investigating how to use syntactic information to improve translation quality beyond the capability of our existing phrase-based translation system, and in the process has created a new syntax-based translation system. Both systems have similarities: they are both statistical and trained on bilingual parallel data, both combine their translation model with several other knowledge sources in a log-linear manner, and both require parameter tuning to determine the weights of the individual components. The syntax-based system is different in two main respects: the translation model incorporates syntactic structure on the target language side (in our case, English), and the decoder uses a parser-like method to create syntactic trees as output hypotheses.", "cite_spans": [ { "start": 155, "end": 158, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 363, "end": 366, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Simply put, our syntax model translates phrases in the source language into syntactic chunks in the target language.
For example, when translating from Chinese into English, our system learns simple rules that translate words or phrases, such as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "NPB(PRP(I)) \u2194 \u2022 \u2022 \u2022 NN(hotel) \u2194 \u00cb \u00cb \u00cb\u00a1 \u00a1 \u00a1 NP-C(NPB(DT(this) NN(address))) \u2194 Y Y Y\u00c7 \u00c7 \u00c7 \u2022 \u2022 \u2022OE OE OE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "It also learns phrases with \"holes\" in the source language (represented here by the variable x0), as long as they conform to a syntactic structure in the target language:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "NP-C(NPB(PRP$(my) x0:NN)) \u2194 \u2022 \u2022 \u2022 { { { x0 NP-C(NPB(PRP$(my) x0:NN)) \u2194 \u2022 \u2022 \u2022 x0 PP(TO(to) NP-C(NPB(x0:NNP NNP(park)))) \u2194 V V V x0 \u00da \u00da \u00da\u00c9 \u00c9 \u00c9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "Other rules bring together already translated phrases, such as the following rules, which take a translated verb next to a translated noun phrase and combine them into a verb phrase:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "VP(x0:VBZ x1:NP-C) \u2194 x0 x1 VP(x0:VBZ x1:NP-C) \u2194 x1 x0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "The first rule combines the pair in order.
The second takes a noun phrase located before a verb, switches the order, then builds the final verb phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "To learn these rules automatically, we first word-aligned a bilingual parallel corpus using GIZA++ [3] . We then parsed the target side 1 using our own implementation of Collins Model 2 [5] , [6] . This resulted in a large set of tree-string pairs, aligned at the word level. From this set, a list of translation rules was extracted, in the manner described by [7] . Probabilities were applied according to a relative frequency model conditioned on the root non-terminals of the left-hand sides of the rules. Table 2 : Brevity Penalties on Syntax and Phrase systems on both evaluation data and blind test sets for the same runs as shown in Table 1 .", "cite_spans": [ { "start": 99, "end": 102, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 186, "end": 189, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 192, "end": 195, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 361, "end": 364, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 509, "end": 516, "text": "Table 2", "ref_id": null }, { "start": 640, "end": 647, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Syntax-based Translation Model", "sec_num": "1.1." }, { "text": "For the evaluation run we integrated a smoothed bigram model into our decoder search, and generated lists of 25,000 hypotheses for each sentence, then re-ranked these results using a smoothed trigram model.
2 We used the SRI Language Modeling Toolkit to train both language models, and trained on the English half of the supplied parallel training corpus, which contained 192,362 words (7,803 unique) after preprocessing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "1.2." }, { "text": "To train the individual model weights of the log-linear model, we split the provided development data into two parts, nearly equal in size. For Chinese, Arabic, and Japanese, devset 1 was used as blind test data, while devset 2 was reserved for development training of the weights. Since only one devset was supplied for Korean, we split this devset in two, and used the first 253 lines for testing, while the second 253 were reserved for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Weight Training", "sec_num": "1.3." }, { "text": "The syntax system does not yet have a reliable automatic parameter tuning method. Instead, we used a much slower exhaustive method to train our model weights. We ran our decoder on the development set using hundreds of parameter settings, each time recording the BLEU score. The settings that resulted in the highest BLEU score were then run on the blind test corpus, along with our baseline settings, to ensure that we had made some improvement. 2 This method was very time-consuming, so we only had time to tune values for the Chinese development set. We used these same parameters for translating the other three languages. 2 Due to the search space complexity of combining our translation model with a language model, we were at the time unable to integrate a trigram language model into the search process. In our post-evaluation runs of the syntax system, we did use an integrated trigram model, and did no re-ranking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Weight Training", "sec_num": "1.3." }, { "text": "Our syntax decoder implements a probabilistic CKY-style parsing algorithm with beams. It applies the translation rules to the Chinese sentence and builds its way, step-by-step, to the top of an English parse structure, as discussed in [8] . This results in an English syntax tree corresponding to the Chinese sentence, which guarantees that the output has some kind of globally coherent syntactic structure.", "cite_spans": [ { "start": 235, "end": 238, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax-based Decoder", "sec_num": "1.4." }, { "text": "The phrase-based machine translation system we entered in the evaluation is the same as last year [9] , [10] , except that it was trained solely on the supplied data. It used the smoothed trigram language model in an integrated fashion, and the model weights were trained using the minimum error rate training method described in [11] . For training this system, we used the same training/testing split of the development data described above.", "cite_spans": [ { "start": 98, "end": 101, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 104, "end": 108, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 330, "end": 334, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The Contrastive System: Phrase-Based MT", "sec_num": "1.5." }, { "text": "In Table 1 , we report three sets of BLEU scores for both the syntax and phrase systems: one for our blind test set (measured on devset 1 after training on devset 2), 3 one for the final evaluation results, and one for a post-evaluation run.
Note that for the syntax system, the evaluation and test scores are relatively comparable, while for the phrase system, the evaluation scores are much lower than the test scores. This was an error on our part while running the phrase-based system on the evaluation data: we did not correctly re-collect the phrase tables with respect to the evaluation source data, so our phrase system did not have all the relevant phrase pairs while decoding. After this problem was discovered, we fixed our phrase tables and re-ran the same system. The results are shown in Table 1 in the phrase-based post-evaluation column, and are more consistent with our expectations for this system.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 803, "end": 810, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "2." }, { "text": "After the evaluation, we were also able to run the syntax system with an integrated trigram model. Those results, again tuned only on the Chinese development data, are shown in Table 1 's syntax-based post-evaluation column. Table 2 shows the brevity penalties for the same runs as Table 1 . Again, note the severe penalties given to the phrase-based system on the evaluation runs (second column), as compared to the post-evaluation run (third column) and blind test results (first column). The syntax system, on the other hand, produces short sentences consistently for all languages except Chinese, an indication that tuning for each language might be advantageous.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 225, "end": 232, "text": "Table 2", "ref_id": null }, { "start": 282, "end": 289, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "2."
}, { "text": "We were quite surprised at the poor evaluation scores of our phrase-based system. As our post-evaluation results (and the results of other teams) demonstrate, these scores were certainly not indicative of the caliber of the phrase-based approach. Even a good system can be thwarted by user error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "On the other hand, our syntax system's performance was a more pleasant surprise. Especially in our post-evaluation run of the Chinese system, when using the trigram language model integrated into the search, the syntax-based system achieved results close to those of the phrase-based system. This is surprising because the syntax system is currently not able to learn phrase pairs to the same level as a phrase-based system. With a training dataset as small as the one provided for this evaluation, our system encountered many unknown words in the test data. Thus the resulting sentences were sometimes short on content words. But apparently the strengths of the syntax-based approach made up for this deficiency in part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "Also worth mentioning is that questions comprised a large percentage of the data in this evaluation. This is an area where a syntax-based method could really shine, or do quite poorly, depending on the quality of the parsing and how well the model handles large-scale movements in the tree. 4 Since we trained our parser on text that contains very few questions, it is unlikely that the resulting parse trees for questions were of very high quality. Manual inspection of our translations also shows that questions were not translated well.
Better quality parsing of questions is one of the areas we will be investigating.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3." }, { "text": "Actually, we bootstrapped our parser by first training it on the Penn Treebank [4] and then using the resulting parsing model to parse the English half of the supplied training data. We then re-trained a second-generation parser on this data, which we used to parse the same data a second time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As mentioned before, scores for Korean were measured only on the first half of devset 1. 4 Our current translation model does allow movements, but perhaps not at the scale necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT-NAACL 2003)", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation.
In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT-NAACL 2003), pp. 48-54.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005)", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pp. 263-270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, volume 29, number 1, pp. 19-51.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993.
Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, volume 19, number 2, pp. 313-330.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Head-driven statistical models for natural language parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "589--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, volume 29, number 4, pp. 589-673.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Intricacies of Collins' Parsing Model", "authors": [ { "first": "Daniel", "middle": [ "M" ], "last": "Bikel", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "", "pages": "479--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel M. Bikel. 2004. Intricacies of Collins' Parsing Model. Computational Linguistics, volume 30, number 4, pp. 479-511.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "What's in a translation rule", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference", "volume": "", "issue": "", "pages": "273--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule?
In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT-NAACL 2004), pp. 273-280.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Interactively Exploring a Machine Translation Model", "authors": [ { "first": "Steve", "middle": [], "last": "DeNeefe", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Hayward", "middle": [ "H" ], "last": "Chan", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005) Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "97--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steve DeNeefe, Kevin Knight, and Hayward H. Chan. 2005. Interactively Exploring a Machine Translation Model. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005) Interactive Poster and Demonstration Sessions, pp. 97-100.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, volume 30, number 4, pp.
417-449.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The ISI/USC MT system", "authors": [ { "first": "Ignacio", "middle": [], "last": "Thayer", "suffix": "" }, { "first": "Emil", "middle": [], "last": "Ettelaie", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Dragos", "middle": [ "Stefan" ], "last": "Munteanu", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Quamrul", "middle": [], "last": "Tipu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "59--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ignacio Thayer, Emil Ettelaie, Kevin Knight, Daniel Marcu, Dragos Stefan Munteanu, Franz Josef Och, and Quamrul Tipu. 2004. The ISI/USC MT system. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT 2004), pp. 59-60.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Minimum error rate training for statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003)", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), pp. 160-167.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "html": null, "text": "BLEU scores on Syntax and Phrase systems on both evaluation data and blind test sets. The post-evaluation phrase system is using the correctly trained phrase tables.
The post-evaluation syntax system is using a trigram language model integrated into the decoder search.", "num": null, "content": "
Table 1 (BLEU scores)
Language | Phrase-based Pre-eval (blind test) | Phrase-based Evaluation | Phrase-based Post-eval (correctly trained) | Syntax-based Pre-eval (blind test) | Syntax-based Evaluation | Syntax-based Post-eval (trigram model)
Arabic | 53.79 | 37.39 | 50.16 | 43.84 | 39.62 | 44.47
Chinese | 32.1 | 33.23 | 41.16 | 25.73 | 37.64 | 40.08
Japanese | 44.07 | 28.31 | 33.82 | 36.66 | 27.41 | 29.98
Korean | 35.48 | 23.74 | 30.02 | 26.2 | 25.22 | 27.65
Table 2 (Brevity penalties)
Language | Phrase-based Pre-eval (blind test) | Phrase-based Evaluation | Phrase-based Post-eval (correctly trained) | Syntax-based Pre-eval (blind test) | Syntax-based Evaluation | Syntax-based Post-eval (trigram model)
Arabic | 0.9544 | 0.7528 | 0.9591 | 0.8444 | 0.8157 | 0.8989
Chinese | 0.9562 | 0.8897 | 0.9750 | 0.9312 | 0.9742 | 0.9757
Japanese | 0.9704 | 0.8715 | 0.9529 | 0.8885 | 0.7421 | 0.9042
Korean | 0.9734 | 0.9231 | 0.9997 | 0.8344 | 0.8365 | 0.9466
" } } } }