{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:19:59.414439Z" }, "title": "Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "", "affiliation": {}, "email": "pkoehn@inf.ed.ac.uk" }, { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "", "affiliation": {}, "email": "amittai@mit.edu" }, { "first": "Alexandra", "middle": [], "last": "Birch Mayne", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Our participation in the IWSLT 2005 speech translation task is our first effort to work on limited domain speech data. We adapted our statistical machine translation system that performed successfully in previous DARPA competitions on open domain text translations. We participated in the supplied corpora transcription track. We achieved the highest BLEU score in 2 out of 5 language pairs and had competitive results for the other language pairs.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "Our participation in the IWSLT 2005 speech translation task is our first effort to work on limited domain speech data. We adapted our statistical machine translation system that performed successfully in previous DARPA competitions on open domain text translations. We participated in the supplied corpora transcription track. We achieved the highest BLEU score in 2 out of 5 language pairs and had competitive results for the other language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The statistical machine translation group at the University of Edinburgh has been focused on open domain text translation, so we welcomed the challenge to work on the IWSLT 2005 limited domain speech translation task. 
We participated in the transcription translation tasks for all five language pairs, using only the supplied corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our MT system was originally developed for translation of European parliament texts from German to English (Koehn et al., 2003) . We extended the system while working on the DARPA challenges to translate Chinese and Arabic news texts into English (Koehn, 2004a; Koehn et al., 2005) . Now, we were faced with the challenge of speech data in mostly Asian languages.", "cite_spans": [ { "start": 107, "end": 127, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF4" }, { "start": 247, "end": 261, "text": "(Koehn, 2004a;", "ref_id": "BIBREF1" }, { "start": 262, "end": 281, "text": "Koehn et al., 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The translation of transcribed speech differs in many ways from our traditional translation scenario: much less training data is available, the domain is more limited, and the text style is very different: short questions and statements. In some respects, the task is easier, since smaller training corpora result in faster training times for the system. But it also meant that we had to re-examine various components of our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we present an overview of our current out-of-the-box system in the next section. It includes a more detailed treatment of models added over the last year, especially a novel lexicalised reordering model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental work went into the adaptation of our system to the IWSLT'05 translation tasks. This is described in Section 3. 
We used a Linux cluster of about 50 machines, which allowed extensive optimisation of key components of our system, especially word alignment, lexicalised reordering, and reordering limits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, we report on our results in the competition and some post-evaluation analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system employs a phrase-based statistical machine translation model (Koehn et al., 2003) that uses the Pharaoh decoder (Koehn, 2004b) . In this section, we will give an overview of the system.", "cite_spans": [ { "start": 72, "end": 92, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF4" }, { "start": 123, "end": 137, "text": "(Koehn, 2004b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "In phrase-based SMT models, the input (foreign) sentence is segmented into so-called phrases: sequences of adjacent words that need not be linguistically motivated. Each phrase is mapped into the target language (English). Phrases may be reordered. See Figure 1 for an illustration. ", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 284, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phrase-Based Statistical MT", "sec_num": "2.1" }, { "text": "Mathematically, we employ a log-linear approach in our translation system. 
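As a toy sketch of this decision rule (with hypothetical feature functions and weights, not our actual models), the system scores each candidate translation by a weighted sum of features and keeps the best one:

```python
def loglinear_best(candidates, feature_functions, weights):
    """Pick the translation e maximising sum_m lambda_m * h_m(e, f),
    i.e. the log-linear decision rule over a fixed candidate set."""
    def score(pair):
        e, f = pair
        return sum(w * h(e, f) for w, h in zip(weights, feature_functions))
    return max(candidates, key=score)

# Two toy feature functions (purely illustrative, not our real models):
# a word penalty and a crude length-ratio stand-in for a language model.
h_word_penalty = lambda e, f: -len(e.split())
h_length_ratio = lambda e, f: -abs(len(e.split()) - len(f.split()))

best = loglinear_best(
    [("that is fine", "das ist gut"), ("that is very good indeed", "das ist gut")],
    [h_word_penalty, h_length_ratio],
    [0.5, 1.0],
)
# best[0] == "that is fine"
```

In the real decoder the candidate set is not enumerated explicitly but explored by beam search; the weighted-sum scoring is the same.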
We search for the most probable English sentence e given some foreign sentence f by maximising a weighted sum of feature functions h_m(e, f):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{e} = \\arg\\max_e p(e|f) = \\arg\\max_e \\sum_{m=1}^{M} \\lambda_m h_m(e, f)", "eq_num": "(2)" } ], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "The log-linear model provides a natural framework to integrate many components and to weigh them according to their performance. We are using the following feature functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 language model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 phrase translation probability (both directions)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 lexical translation probability (both directions)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 word penalty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 phrase penalty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 linear reordering penalty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "\u2022 lexicalised reordering", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "The language model is a smoothed trigram model trained on the target-side training data.", "cite_spans": [], 
"ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "The most important component of the system is the phrase translation table. We extract phrase pairs from the training corpus by first aligning the words in the corpus, extracting phrase pairs that are consistent with the word alignment, and then assigning probabilities (or scores) to the obtained phrase translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear Model", "sec_num": "2.2" }, { "text": "Word alignments are obtained by first using the GIZA++ toolkit in both translation directions and then symmetrising the two alignments. (Figure 2: Obtaining a high precision, low recall word alignment by intersecting two GIZA++ alignments.) Since the IBM Models implemented in GIZA++ are not able to map one target (English) word to multiple source (foreign) words, this method of symmetrisation, called the refined method (Och and Ney, 2003) , effectively overcomes the deficiency. Figure 2 shows the first step in the symmetrisation process: the intersection of the two GIZA++ alignments is taken. Only word alignment points that occur in both alignments are preserved. This is the intersection alignment.", "cite_spans": [ { "start": 412, "end": 431, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 146, "end": 154, "text": "Figure 2", "ref_id": null }, { "start": 472, "end": 480, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.3" }, { "text": "In a second step, additional alignment points are added. Only alignment points that are in either of the two GIZA++ alignments (i.e., in the union of the two alignments) are considered. In the growing step, potential alignment points that connect currently unaligned words and that neighbour established alignment points are added. 
Neighbouring can either be defined as directly to the left, right, top, or bottom (resulting in the grow alignment), or also include the diagonal neighbourhood (resulting in the grow-diag alignment).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.3" }, { "text": "In a final step, alignment points that do not neighbour established alignment points are added. In a method called grow(-diag)-final this is done for alignment points between words of which at least one is currently unaligned. In the grow(-diag)-final-and method, only alignment points between two unaligned words are added. See Figure 3 for an illustration. The grey points in the matrix are potential alignment points that occur in the union, but not in the intersection, of the two GIZA++ alignments. (Figure 3: Adding additional alignment points. Potential points are points in the union of the two GIZA++ alignments (grey). In the growing step, neighbouring points are added when they connect at least one unaligned word. In a final step, outlying points may be added; see Section 2.3.)", "cite_spans": [], "ref_spans": [ { "start": 329, "end": 337, "text": "Figure 3", "ref_id": null }, { "start": 484, "end": 492, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.3" }, { "text": "GROW-DIAG-FINAL(e2f,f2e): Three neighbouring points are added. The alignment point between did and a is added in the grow(-diag)-final method, but not in grow(-diag)-final-and, since the Spanish word a is unaligned, but the English word did is not. 
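A simplified sketch of this symmetrisation, treating alignments as sets of (e, f) index pairs (it folds the grow-diag and final steps into one function and omits the -and variant):

```python
def grow_diag_final(e2f, f2e):
    """Symmetrise two directed word alignments (sets of (e, f) index
    pairs): start from their intersection, grow by adding neighbouring
    points from the union that connect at least one unaligned word,
    then add remaining union points touching an unaligned word."""
    union = e2f | f2e
    alignment = e2f & f2e
    neighbours = [(-1, 0), (0, -1), (1, 0), (0, 1),
                  (-1, -1), (-1, 1), (1, -1), (1, 1)]  # incl. diagonals
    added = True
    while added:  # growing step: repeat until nothing more can be added
        added = False
        for (e, f) in sorted(alignment):
            for (de, df) in neighbours:
                pe, pf = e + de, f + df
                if (pe, pf) in union and (pe, pf) not in alignment:
                    e_free = all(a[0] != pe for a in alignment)
                    f_free = all(a[1] != pf for a in alignment)
                    if e_free or f_free:
                        alignment.add((pe, pf))
                        added = True
    # final step: remaining union points where a word is still unaligned
    for (pe, pf) in sorted(union - alignment):
        if all(a[0] != pe for a in alignment) or all(a[1] != pf for a in alignment):
            alignment.add((pe, pf))
    return alignment
```

For the -and variant, the final step would require both words of a candidate point to be unaligned rather than at least one.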
Figure 4 presents the symmetrisation method in pseudo code.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 277, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.3" }, { "text": "neighbouring = ((-1,0),(0,-1),(1,0),(0,1), (-1,-1),(-1,1),(1,-1),(1,1)) alignment = intersect(e2f,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Alignment", "sec_num": "2.3" }, { "text": "We now extract phrase pairs for the phrase translation table. Any phrase pair that is consistent with the word alignment is collected. We define consistent as: the words in the phrase pair have to be aligned to each other and not to any words outside. See Figure 5 for an illustration. Note that unaligned words may be included within and at the border of extracted phrase pairs (third example in Figure 5 ). Each phrase pair, however, must include at least one alignment point.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 260, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 393, "end": 401, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "2.4" }, { "text": "Using word-level alignments to induce phrase-based translation models is common practice in the statistical machine translation community. It has been adopted by most groups participating in the NIST MT Evaluation (Lee and Przybocki, 2005) .", "cite_spans": [ { "start": 213, "end": 238, "text": "(Lee and Przybocki, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "2.4" }, { "text": "In contrast to this, Marcu and Wong (2002) have defined a method for directly estimating phrasal translation models from parallel corpora, rather than using heuristic methods to induce phrase alignments from word alignments. Their joint probability phrase-based model is computationally demanding, and as such has not been applied to large data sets. 
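Returning to the heuristic extraction described above, the consistency criterion can be sketched as follows (a minimal sketch over inclusive word-index spans, not our actual implementation):

```python
def extract_phrase_pairs(alignment, e_len, f_len, max_len=4):
    """Collect all phrase pairs consistent with a word alignment:
    no alignment point may link a word inside the candidate phrase
    pair to a word outside it, and at least one alignment point
    must lie inside.  Spans are inclusive (start, end) word indices."""
    pairs = []
    for e1 in range(e_len):
        for e2 in range(e1, min(e_len, e1 + max_len)):
            for f1 in range(f_len):
                for f2 in range(f1, min(f_len, f1 + max_len)):
                    inside = [(e, f) for (e, f) in alignment
                              if e1 <= e <= e2 and f1 <= f <= f2]
                    # consistent: every point is either fully inside
                    # or fully outside the candidate spans
                    consistent = inside and all(
                        (e1 <= e <= e2) == (f1 <= f <= f2)
                        for (e, f) in alignment)
                    if consistent:
                        pairs.append(((e1, e2), (f1, f2)))
    return pairs
```

Because the check only rules out points crossing the span boundary, unaligned words inside or at the border of a span are admitted, matching the behaviour described above.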
Our group has been implementing a scalable version of the joint probability model (Mayne, 2005) , and we hope to submit it as a contrastive system in next year's IWSLT.", "cite_spans": [ { "start": 21, "end": 42, "text": "Marcu and Wong (2002)", "ref_id": "BIBREF6" }, { "start": 433, "end": 446, "text": "(Mayne, 2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Extraction", "sec_num": "2.4" }, { "text": "The phrase translation probability is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Scoring", "sec_num": "2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(\\bar{f}|\\bar{e}) = \\frac{\\mathrm{count}(\\bar{f}, \\bar{e})}{\\sum_{\\bar{f}'} \\mathrm{count}(\\bar{f}', \\bar{e})}", "eq_num": "(3)" } ], "section": "Phrase Scoring", "sec_num": "2.5" }, { "text": "where count(f\u0304, \u0113) gives the total number of times the phrase f\u0304 is aligned with the phrase \u0113 in the parallel corpus. Phrase translation probabilities are lexically weighted as in (Koehn et al., 2003) , shown in Equation (4), where n is the length of \u0113 and a is the word-level alignment between the phrases \u0113 and f\u0304. Since a phrase alignment may have multiple possible word-level alignments, we retain a set of alignments and take the most frequent. The word and phrase penalties add constant factors (\u03c9 and \u03c0) for each word or phrase generated.", "cite_spans": [ { "start": 176, "end": 196, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Scoring", "sec_num": "2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_{lw}(\\bar{f}|\\bar{e}, a) = \\prod_{i=1}^{n} \\frac{1}{|\\{j|(i, j) \\in a\\}|} \\sum_{\\forall (i, j) \\in a} p(f_j|e_i)", "eq_num": "(4)" } ], "section": "Phrase Scoring", "sec_num": "2.5" }, { "text": "Our original reordering model only considers the distance of movements. 
The reordering penalty adds a factor \u03b4^n for movements over n words. The movement distance is measured on the foreign side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering", "sec_num": "2.6" }, { "text": "Our current system includes a lexicalised reordering model. For each phrase pair, we learn how likely it directly follows a previous phrase (monotone), is swapped with a previous phrase (swap), or is not connected to the previous phrase at all (discontinuous). See Figure 6 for an illustration.", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 274, "text": "Figure 6", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Reordering", "sec_num": "2.6" }, { "text": "When collecting phrase pairs, we can classify them into these three categories based on:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering", "sec_num": "2.6" }, { "text": "\u2022 monotone: a word alignment point to the top left exists \u2022 swap: an alignment point to the top right exists \u2022 discontinuous: no alignment point to the top left or top right. Given these counts, we can learn probability distributions of the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering", "sec_num": "2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_r(\\mathrm{orientation}|\\bar{e}, \\bar{f})", "eq_num": "(5)" } ], "section": "Reordering", "sec_num": "2.6" }, { "text": "For the estimation of the probability distribution, we smooth the collected counts. This lexicalised reordering model is motivated by similar work by Tillmann (2004) .", "cite_spans": [ { "start": 150, "end": 165, "text": "Tillmann (2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Reordering", "sec_num": "2.6" }, { "text": "Recall that the components of our machine translation system are combined in a log-linear way. 
The weights of the feature functions, or model components, are set by minimum error rate training. We reimplemented a method suggested by Och (2003) .", "cite_spans": [ { "start": 231, "end": 241, "text": "Och (2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminative Training", "sec_num": "2.7" }, { "text": "In short, we optimise the value of the parameter weights \u03bb_m by iteratively: (a) running the decoder with the current best weight setting, (b) extracting an n-best list of possible translations, and (c) finding a better weight setting that re-ranks the n-best list, so that a better translation score is obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Training", "sec_num": "2.7" }, { "text": "To score translation quality, we employ the BLEU score (Papineni et al., 2002) . The search for the best weight setting is a line search for each \u03bb_m, which is repeated until no improvement can be achieved.", "cite_spans": [ { "start": 55, "end": 78, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminative Training", "sec_num": "2.7" }, { "text": "We thank David Chiang of the University of Maryland for providing us with a faster version of our implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Training", "sec_num": "2.7" }, { "text": "In a period of one month, we optimised our system for the IWSLT'05 task. We chose to only participate in the transcription task using the supplied data, since we did not have adequate additional resources or tools for these language pairs, and also did not have enough time to investigate these.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptations to IWSLT'05 Task", "sec_num": "3" }, { "text": "The advantage of limiting ourselves to this track was that we could quickly train our system. 
Training the entire system (from corpus preparation through word alignment to model building) took only 15 minutes of CPU time, instead of about a week for the large-scale Arabic-English DARPA/NIST translation challenge. Hence, we were able to run many experiments to optimise performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptations to IWSLT'05 Task", "sec_num": "3" }, { "text": "We decided to use the 2003 test set as the tuning set for minimum error rate training, and the 2004 test set as the test set for development. All performance numbers reported in this section are %BLEU scores computed with our own evaluation script. This script takes as reference length the closest reference sentence length, as in the official evaluation, but does not eliminate punctuation, as done there.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptations to IWSLT'05 Task", "sec_num": "3" }, { "text": "In our experiments, we tried to find the best configuration for each component. (Table 2: BLEU scores for systems trained using different alignment methods.)", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Adaptations to IWSLT'05 Task", "sec_num": "3" }, { "text": "We also carried out experiments to optimise GIZA++ parameters, but this did not yield any significant improvements. We would like to revisit these experiments in the future, since we did not have sufficient time for a thorough treatment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptations to IWSLT'05 Task", "sec_num": "3" }, { "text": "We also tried to deal with language-specific problems, as previously done for German-English (Collins et al., 2005) . We created hand-written rules that move the Japanese verb from the end of the sentence to the beginning. However, we could not consistently achieve improvements using these rules. 
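Such a rule can be sketched as follows (a hypothetical sketch, not our actual rules; the input is a tokenised sentence, romanised here for readability):

```python
def move_final_verb(tokens):
    """Crude hand-written reordering rule: assume the last token of a
    Japanese sentence is the verb and move it to the front, bringing
    the word order closer to English before translation."""
    if len(tokens) < 2:
        return list(tokens)
    if tokens[-1] in {"\u3002", "?", "."}:  # keep sentence-final punctuation
        body = tokens[:-1]
        if len(body) < 2:
            return list(tokens)
        return [body[-1]] + body[:-1] + [tokens[-1]]
    return [tokens[-1]] + list(tokens[:-1])

# e.g. ["watashi", "wa", "hon", "o", "yomu"]
#   -> ["yomu", "watashi", "wa", "hon", "o"]
```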
Since we did not have a part-of-speech tagger for Japanese, we had to rely on the assumption that the last word of a Japanese sentence is the verb. We did not apply these rules in our official submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptations to IWSLT'05 Task", "sec_num": "3" }, { "text": "Our experience with GIZA++ alignments has been that IBM Model training performs poorly for source words that occur only once in the training corpus. These words are often incorrectly aligned to many target words. This effect creates problems with phrase extraction, since alignment points effectively limit possible phrase pairs. If one word is aligned to many words that are spread throughout the sentence, many reasonable phrase pairs cannot be extracted because of the consistency constraints of our phrase extraction algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Word Alignment", "sec_num": "3.1" }, { "text": "Since we deal with much smaller data sets than we are used to, we expected to have more problems with singleton words and their adverse effect on phrase extraction. Hence, we explored a number of alignment methods, ranging from our default method (grow-diag-final), which establishes many word alignment points, to the sparsest method of allowing only alignment points that occur in the intersection of the bidirectional alignments (intersect). The effect of the alignment method on the number of alignment points and the number of extracted phrase pairs is exemplified in Table 1 for the Japanese-English training data. Note the difference between the default method and the intersection method: the intersection only establishes about a third of the number of alignment points (79,200 vs. 
282,110) , causing the number of extracted distinct phrase pairs to explode by a factor of about 40.", "cite_spans": [ { "start": 784, "end": 804, "text": "(79,200 vs. 282,110)", "ref_id": null } ], "ref_spans": [ { "start": 571, "end": 578, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Optimising Word Alignment", "sec_num": "3.1" }, { "text": "However, having a phrase table of 2.6 million distinct phrase pairs is not a computational problem for our system. In fact, for Arabic-English translation, we often work with phrase tables of up to 100 million distinct phrase pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Word Alignment", "sec_num": "3.1" }, { "text": "We carried out experiments using five different alignment methods for the different language pairs. For each alignment method and language pair, we trained a system and optimised it using minimum error rate training. The evaluation of the effect of the different alignment methods on translation quality presents a mixed picture: while the default method does not result in higher performance than the sparser methods for any language pair, no single alignment method emerges as optimal for all language pairs. For two language pairs, Japanese-English and Chinese-English, the intersection method comes out ahead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Word Alignment", "sec_num": "3.1" }, { "text": "Since we had just implemented lexicalised reordering in our system, we used the IWSLT'05 translation task as a testbed to investigate its best configuration. 
We consider the following choices in the lexicalised reordering model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "\u2022 Do we distinguish between monotone, swap, and discontinuous ordering (orientation), or just test for monotone ordering (monotonicity)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "\u2022 Do we condition on the identity of the foreign phrase (f), or on both the foreign and English phrase (fe)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "\u2022 Do we model reordering with respect to the previously translated phrase only, or also with respect to the following translated phrase (bidirectional)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "These three options lead to eight possible configurations for the lexicalised reordering model. We built translation systems for all possible configurations for all five language pairs. For all the language pairs, no single lexicalised reordering method emerged as significantly better than the others. However, any lexicalised reordering method is better than no lexicalised reordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "In Table 3 , you can see which configuration scored best for each language pair. Again, a very mixed picture emerged. The only consistent result is that conditioning on the identity of both the foreign and English phrase is superior. 
Any of the remaining four possible configurations comes out ahead for at least one of the language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "Since we optimised the word alignment method and the lexicalised reordering method in an integrated fashion, the best word alignment method changed for Arabic-English, Korean-English, and Chinese-English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "We would like to stress that the differences are mostly not significant enough to make a strong claim about which word alignment method or which lexicalised reordering method works best. However, we can clearly state that lexicalised reordering is beneficial for all language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Lexicalised Reordering", "sec_num": "3.2" }, { "text": "After settling on a word alignment and lexicalised reordering method for each language pair in the previous experiments, we concluded our adaptation experiments by optimising the reordering distance limit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Reordering Distance Limit", "sec_num": "3.3" }, { "text": "Ideally, we would allow reordering of any distance, since movements over long distances do occur when translating. One example is the movement of the Japanese verb from the end of the sentence to a position at the beginning, just after the subject, in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Reordering Distance Limit", "sec_num": "3.3" }, { "text": "However, our previous experience has shown that the reordering model is not strong enough to correctly guide long distance movements. 
In fact, when we completely prohibited movements over more than four words, we achieved better translation results than when allowing more distant reordering. While the lexicalised reordering model has been shown to be beneficial, it is still a very local model. Decisions are made for a particular phrase based on its empirical reordering behaviour with respect to directly neighbouring phrases. For instance, for a Japanese verb to be translated into English, we will learn that it is typically reordered, but not how far. Nevertheless, we wanted to carry out experiments with larger reordering limits. Recall that reordering distance is measured with respect to movements of foreign phrases. If we first translate the first foreign word, and then continue with the fifth word, we measure this as a movement over three words (the three foreign words 2, 3, and 4 are skipped). Table 4 displays the translation performance for systems with different reordering limits. Note that we did not have to retrain the models for these experiments, but we did have to re-optimise the model weights using minimum error rate training.", "cite_spans": [], "ref_spans": [ { "start": 960, "end": 967, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Optimising Reordering Distance Limit", "sec_num": "3.3" }, { "text": "The results suggest that reordering limits more permissive than a maximum movement distance of 4 words are beneficial. 
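The distance computation and limit check described above can be sketched as follows (a minimal sketch; 1-based foreign word positions):

```python
def reordering_distance(prev_end, next_start):
    """Distortion distance on the foreign side (1-based word positions):
    how many foreign words are jumped over between the end of the
    previously translated phrase and the start of the next one;
    0 for a monotone continuation."""
    return abs(next_start - prev_end - 1)

def within_limit(prev_end, next_start, limit=4):
    """A reordering limit prunes any jump larger than `limit` words."""
    return reordering_distance(prev_end, next_start) <= limit

# Translating foreign word 1 first and continuing with word 5 skips
# words 2, 3 and 4: a movement over three words.
```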
While being aware of the limited statistical significance of these results, we are inclined to cautiously state that for translations involving Asian languages, a maximum reordering limit of 8 (or even higher) seems to be better than the traditional 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimising Reordering Distance Limit", "sec_num": "3.3" }, { "text": "For the translation of the test data of the IWSLT'05 translation task, we used the optimised configuration and parameter settings, as obtained by our adaptation experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The results are displayed in Table 5 . Compared to the performance of the other participants, we are very satisfied with the results. We scored first place in two of the five tracks, and had very respectable showings in the other tracks.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "A closer look at the numbers, however, reveals one striking oddity: for almost all language pairs, we incur a heavy length penalty, which has a devastating effect on the NIST score. Obviously, our output is almost always too short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The culprit for this is our minimum error rate training, which optimises the BLEU score. It uses the shortest of the reference sentences as the basis to compute the length penalty. This inherently causes an optimisation towards very short output. However, the official evaluation uses the closest reference length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "In a post-evaluation experiment, we altered our minimum error rate training to optimise towards the average reference sentence length. 
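The three reference-length conventions at play here can be sketched as follows (a simplified, sentence-level sketch of the BLEU brevity penalty, not the official scoring script):

```python
import math

def brevity_penalty(candidate_len, ref_lens, mode="closest"):
    """BLEU-style brevity penalty under different reference-length
    choices: 'shortest' (what our MERT implicitly optimised towards),
    'closest' (the official evaluation) and 'average' (our
    post-evaluation fix)."""
    if mode == "shortest":
        r = min(ref_lens)
    elif mode == "average":
        r = sum(ref_lens) / len(ref_lens)
    else:  # closest reference length; ties broken towards the shorter
        r = min(sorted(ref_lens), key=lambda x: abs(x - candidate_len))
    c = candidate_len
    return 1.0 if c >= r else math.exp(1.0 - r / c)
```

Under the 'shortest' convention a short candidate is penalised least, so tuning against it drives the output length down; tuning towards the average reference length removes that bias.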
The effect on test scores is displayed in Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Due to the more lenient length penalty, our NIST scores improve dramatically. In the case of Japanese-English, the score more than doubles, from 4.0784 to 8.1209. The effect on the BLEU scores is less pronounced: for four out of five language pairs, we achieved slightly higher BLEU scores; for Chinese-English, the BLEU score drops.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Our participation in the IWSLT'05 Evaluation Campaign seems to confirm one of the selling points of statistical machine translation: the ability to quickly build machine translation systems for new language pairs. While we had no prior experience with building systems for Korean and Japanese, and only very limited knowledge about any of the non-English languages, we were able to build competitive systems for all the language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "(Table 6: Optimisation to the average reference sentence length instead of the shortest reference length (length penalty in parentheses). Note the improved length penalties and vastly improved NIST scores; 4 out of 5 BLEU scores are higher as well, the exception being Chinese-English.)", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 110, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Our adaptation experiments revealed that translation tasks for speech transcriptions in a limited domain, using small training corpus sizes, require different settings of our translation system than we traditionally used for open domain text translation with much larger training corpora. 
We were also able to verify the benefits of our novel lexicalised reordering model, which consistently led to significant performance gains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Clause restructuring for statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "I", "middle": [], "last": "Kucerova", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05)", "volume": "", "issue": "", "pages": "531--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M., Koehn, P., and Kucerova, I. (2005). Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), pages 531-540, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The foundation for statistical machine translation at MIT", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Machine Translation Evaluation Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. (2004a). The foundation for statistical machine translation at MIT. 
In Proceedings of Machine Translation Evaluation Workshop 2004.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Pharaoh: a beam search decoder for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "6th Conference of the Association for Machine Translation in the Americas, AMTA, Lecture Notes in Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. (2004b). Pharaoh: a beam search decoder for statistical machine translation. In 6th Conference of the Association for Machine Translation in the Americas, AMTA, Lecture Notes in Computer Science. Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Edinburgh system description for the 2005 NIST MT evaluation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "A", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "A", "middle": [ "B" ], "last": "Mayne", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "D", "middle": [], "last": "Talbot", "suffix": "" }, { "first": "M", "middle": [], "last": "White", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Machine Translation Evaluation Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., Axelrod, A., Mayne, A. B., Callison-Burch, C., Osborne, M., Talbot, D., and White, M. (2005). Edinburgh system description for the 2005 NIST MT evaluation. 
In Proceedings of Machine Translation Evaluation Workshop 2005.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Statistical phrase based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., Och, F. J., and Marcu, D. (2003). Statistical phrase based translation. In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "NIST 2005 machine translation evaluation official results. Official release of automatic evaluation scores for all submissions", "authors": [ { "first": "A", "middle": [], "last": "Lee", "suffix": "" }, { "first": "M", "middle": [], "last": "Przybocki", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, A. and Przybocki, M. (2005). NIST 2005 machine translation evaluation official results. 
Official release of automatic evaluation scores for all submissions.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A phrase-based, joint probability model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "W", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcu, D. and Wong, W. (2002). A phrase-based, joint probability model for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Scaling the joint probability phrase based statistical translation model", "authors": [ { "first": "A", "middle": [ "B" ], "last": "Mayne", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mayne, A. B. (2005). Scaling the joint probability phrase based statistical translation model. Master's thesis, University of Edinburgh.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Minimum error rate training for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. (2003). Minimum error rate training for statistical machine translation. 
In Proceedings of the 41st Annual Meeting of the Association of Computational Linguistics (ACL).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. and Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-52.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association of Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association of Computational Linguistics (ACL).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A unigram orientation model for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tillmann, C. (2004). A unigram orientation model for statistical machine translation. In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Phrase-based SMT: Input is segmented into phrases, each is mapped into an output phrase and may be reordered" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "f2e); GROW-DIAG(); FINAL(e2f); FINAL(f2e); GROW-DIAG(): iterate until no new points added for english word e = 0 ... en for foreign word f = 0 ... fn if ( e aligned with f ) for each neighbouring point ( e-new, f-new ): if ( ( e-new not aligned and f-new not aligned ) and ( e-new, f-new ) in union( e2f, f2e ) ) add alignment point ( e-new, f-new ) FINAL(a): for english word e-new = 0 ... en for foreign word f-new = 0 ... fn if ( ( e-new not aligned or f-new not aligned ) and ( e-new, f-new ) in alignment a ) add alignment point ( e-new, f-new ) Figure 4: Pseudo-code of the grow-diag-final method to symmetrise word alignments. See Section 2.3 for variations of this method." 
}, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Definition of consistent word alignments: Words of an extracted phrase pair have to be aligned to each other and nothing else." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Possible orientations of phrases: monotone (m), swap (s), or discontinuous (d)" }, "TABREF1": { "type_str": "table", "num": null, "content": "
Language Pair    | final (default) | final-and | grow-diag | grow | intersect
Arabic-English   | 48.8 | 48.5 | 49.9 | 39.9 | 47.5
Japanese-English | 40.4 | 39.9 | 39.0 | 39.1 | 45.1
Korean-English   | 33.9 | 35.7 | 27.7 | 13.5 | 35.4
Chinese-English  | 28.9 | 32.4 | 31.7 | 32.8 | 34.6
English-Chinese  | 15.4 | 9.6 | 8.1 | 15.4 | 15.2
", "html": null, "text": "Different word alignment methods and their effect on the phrase table: Since alignment points restrict possible phrase pairs, fewer alignment points lead to larger phrase tables." }, "TABREF2": { "type_str": "table", "num": null, "content": "
displays resulting %BLEU scores on the IWSLT'04 test set (using our BLEU scoring script described at the beginning of this section).
", "html": null, "text": "" }, "TABREF3": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Best lexicalised reordering methods, compared against the baseline (using only distance-based reordering penalty): Improvements for all language pairs" }, "TABREF6": { "type_str": "table", "num": null, "content": "
Language Pair    | BLEU          | NIST          | WER    | PER    | METEOR | GTM
Arabic-English   | 0.5180 (0.98) | 9.7749 (0.94) | 0.3860 | 0.3323 | 0.7270 | 0.6613
Japanese-English | 0.3941 (0.95) | 8.1209 (0.91) | 0.5489 | 0.4599 | 0.5971 | 0.4890
Korean-English   | 0.3859 (1.00) | 8.4455 (0.99) | 0.5617 | 0.4559 | 0.6221 | 0.4980
Chinese-English  | 0.4364 (1.00) | 9.0834 (0.99) | 0.5043 | 0.4089 | 0.6841 | 0.5914
English-Chinese  | 0.2230 (0.91) | 5.2391 (0.97) | 0.6037 | 0.5149 | 0.0955 | 0.5657
", "html": null, "text": "Official Results: The scores for our official submission to the IWSLT'05 Evaluation Campaign (length penalty in parentheses), and rank among participants according to the BLEU score." } } } }