{
"paper_id": "2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:22:46.624254Z"
},
"title": "Experimenting with Phrase-Based Statistical Translation within the IWSLT 2004 Chinese-to-English Shared Translation Task",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NUK University of Montreal Saarbr\u00fccken National University of Kaohsiung",
"location": {
"country": "Canada Germany Taiwan"
}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Carl",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NUK University of Montreal Saarbr\u00fccken National University of Kaohsiung",
"location": {
"country": "Canada Germany Taiwan"
}
},
"email": "carl@iai.uni-sb.de"
},
{
"first": "Oliver",
"middle": [],
"last": "Streiter",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NUK University of Montreal Saarbr\u00fccken National University of Kaohsiung",
"location": {
"country": "Canada Germany Taiwan"
}
},
"email": "ostreiter@nuk.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the system we built for the Chinese to English track of the IWSLT 2004 evaluation campaign. A one month effort was devoted to this exercise, starting from scratch and making use as much as possible of freely available packages. We show that a decent phrase-based translation engine can be built within this short time frame.",
"pdf_parse": {
"paper_id": "2004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the system we built for the Chinese to English track of the IWSLT 2004 evaluation campaign. A one month effort was devoted to this exercise, starting from scratch and making use as much as possible of freely available packages. We show that a decent phrase-based translation engine can be built within this short time frame.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine Translation is a very active field nowadays strongly anchored into a paradigm of performance. Evaluation exercises such as those conducted within the TIDES project are pushing system designers to constantly improve their systems. Currently, many of the top-performing systems are phrase-based statistical machine translation (SMT) engines. The fact that SMT systems are among the best ones in those evaluation exercises is not surprizing considering the peculiarities of the translation tasks considered. The popularity of phrase-based models (PBMs) in SMT is neither a surprise, since PBMs allow to a certain extent to cope with local word reordering across languages, as well as to account for local context modelling. [1] also credit PBMs for being somehow tolerant to tokenization errors, an interesting characteristic when dealing with languages such as Chinese, the source language under consideration in this study.",
"cite_spans": [
{
"start": 729,
"end": 732,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This effervescent activity comes with some bonus. Several freely available valuable packages (e.g. Giza++ [2] , Pharaoh [3] , SRILM [4] ) make possible the fast development of a phrase-based translation engine. Other packages allow to quickly evaluate a system according to a gold standard (e.g. MTEVAL http://www.nist.gov/speech/ tests/mt/mt2001/resource and GTM http:// nlp.cs.nyu.edu/GTM). This paper reports on the one month effort we spent building a system for the Chinese-to-English track of the IWSLT workshop, relying intensively on these packages.",
"cite_spans": [
{
"start": 106,
"end": 109,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 120,
"end": 123,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 132,
"end": 135,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Very recently, several authors [5, 6] proposed at the same time an astonishingly simple but powerful model which we designate hereafter as Flat Phrase-Based model (FPBM). A FPBM is simply a collection of pairs of sequences of words with one or several scores (or probabilities) attached to them. The main difference between a FPBM and an alignment template (AT) model [7] being that the former does not attempt to model internal reordering of phrases. Thus, FPBMs as such do not have generalization capabilities. Zens and Ney [8] give an experimental comparison of both models on three different test sets. On the German-English Verbmobil task, the AT engine outperforms the PB engine they tested, while on the other tasks -the Spanish-English Xerox task, and the French-English Hansards task -they observed the opposite. Tom\u00e0s et al. [9] recently revisited the AT model and report that combining it with a FPBM brings some improvements.",
"cite_spans": [
{
"start": 31,
"end": 34,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 35,
"end": 37,
"text": "6]",
"ref_id": "BIBREF5"
},
{
"start": 368,
"end": 371,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 526,
"end": 529,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 835,
"end": 838,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Models",
"sec_num": "2."
},
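{
"text": "Since a FPBM is nothing more than a scored collection of phrase pairs, it can be represented by a plain dictionary. The following minimal Python sketch illustrates this (the phrase pairs shown are invented placeholders, not entries from the IWSLT data):

# A flat phrase-based model (FPBM): a mapping from a source phrase to
# its scored target candidates. No internal structure or reordering is
# modelled, hence no generalization beyond the stored pairs.
fpbm = {
    'qing wen': [('excuse me', 0.7), ('may i ask', 0.3)],
    'duo shao qian': [('how much', 0.9), ('how much money', 0.1)],
}

def candidates(source_phrase):
    # Return the scored target phrases for a source phrase, if any.
    return fpbm.get(source_phrase, [])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Models",
"sec_num": "2."
},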
{
"text": "The recipe given in [5, 6] for acquiring a FPBM is simple: use a word-alignment to identify in an heuristic way an alignment relation at the so-called phrasal level. Both articles propose relative frequency counts to score each pair of phrases. Several authors noted that the relative frequency estimator is particularly inappropriate to the task, since many phrases (and especially long ones) are seen only few times, sometimes only once. [5, 10] proposed to score a pair of phrases according to an IBM-like word-based model, the lexicon probabilities p(f |n) being learned by relative frequency over the word alignment set. This idea has also been tested by Vogel et al. [10] . Also, in [8] , the authors propose to score a pair of phrases according to a smoothed probabilistic word bilingual lexicon. And Vogel et al. [1] demonstrated experimentally that rating phrases according to an informationbased score yields noticeable improvements.",
"cite_spans": [
{
"start": 20,
"end": 23,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 24,
"end": 26,
"text": "6]",
"ref_id": "BIBREF5"
},
{
"start": 440,
"end": 443,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 444,
"end": 447,
"text": "10]",
"ref_id": "BIBREF9"
},
{
"start": 673,
"end": 677,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 689,
"end": 692,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 821,
"end": 824,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Models",
"sec_num": "2."
},
{
"text": "Other variations to the recipe mentioned above have been extensively investigated by the CMU team and summarized in [10] . They investigated variations on the way the word alignment is produced, considering for instance a bilingual bracketing alignment [11] , an alignment technique also tried at the same workshop by [12] .",
"cite_spans": [
{
"start": 116,
"end": 120,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 253,
"end": 257,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 318,
"end": 322,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Models",
"sec_num": "2."
},
{
"text": "Zhang et al. [13] proposed an alternative way to collect phrases without requiring word alignment. They rely instead on point-wise mutual information between source and target words to identify phrases in both languages. This has a clear advantage over methods that rely on a monolingual segmentation step, followed by a bilingual mapping one, as for instance the one described in [14] .",
"cite_spans": [
{
"start": 13,
"end": 17,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 381,
"end": 385,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Models",
"sec_num": "2."
},
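{
"text": "To make the criterion of [13] concrete, the following Python sketch computes a point-wise mutual information table from sentence-aligned data; counting co-occurrence at the sentence-pair level and whitespace tokenization are our simplifying assumptions, not the exact procedure of [13]:

import math
from collections import Counter

def pmi_table(bitext):
    # bitext: list of (source_sentence, target_sentence) strings.
    src_cnt, tgt_cnt, pair_cnt = Counter(), Counter(), Counter()
    for src, tgt in bitext:
        s_words, t_words = set(src.split()), set(tgt.split())
        src_cnt.update(s_words)
        tgt_cnt.update(t_words)
        pair_cnt.update((s, t) for s in s_words for t in t_words)
    n = len(bitext)
    # PMI(s, t) = log p(s, t) / (p(s) p(t)), estimated by relative frequency.
    return {(s, t): math.log((c / n) / ((src_cnt[s] / n) * (tgt_cnt[t] / n)))
            for (s, t), c in pair_cnt.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Models",
"sec_num": "2."
},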
{
"text": "We developed a translation engine around the freely available package Pharaoh [3] . This package is provided with a binary file, as well as a carefully written user manual. The core of this decoder is a beam search engine optimizing a noisy channel model, as described in equation 1, where",
"cite_spans": [
{
"start": 78,
"end": 81,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our system",
"sec_num": "3."
},
{
"text": "$s_1^I = s_1, \\ldots, s_I$ stands for the best sequence of I phrases that fully covers the source sentence s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our system",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\hat{e} = \\operatorname{argmax}_{e} p(c|e)\\, p_{lm}(e)^{\\lambda_{lm}}\\, \\omega^{|e| \\times \\lambda_{\\omega}} = \\operatorname{argmax}_{e,I} p_{\\phi}(c_1^I \\mid e_1^I)\\, p_{lm}(e)^{\\lambda_{lm}}\\, \\omega^{|e| \\times \\lambda_{\\omega}}",
"eq_num": "(1)"
}
],
"section": "Our system",
"sec_num": "3."
},
{
"text": "Here, an independence assumption is further assumed between phrases, and the transfer model p \u03c6 is formulated as in equation 2, where \u03c6 is a FPBM, and d is a simple distortion model depending on a i the starting position of the foreign phrase c i , and b i\u22121 the ending position of the native phrase e i\u22121 (see [5] for more).",
"cite_spans": [
{
"start": 311,
"end": 314,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our system",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_{\\phi}(c_1^I \\mid e_1^I) = \\prod_{i=1}^{I} \\phi(c_i \\mid e_i)^{\\lambda_{\\phi}}\\, d(a_i - b_{i-1})^{\\lambda_{d}}",
"eq_num": "(2)"
}
],
"section": "Our system",
"sec_num": "3."
},
{
"text": "What really matters from the user point of view of this package, is the fact that the decoder takes as input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "p(c I",
"sec_num": null
},
{
"text": "\u2022 a pair of FPBMs, one for each direction, the direct model (in our case \u03c6(e|c)) being used for nbest-list rescoring, a functionality of Pharaoh we did not use in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "p(c I",
"sec_num": null
},
{
"text": "\u2022 a target language model (English), in the format output by the SRILM package [4] ,",
"cite_spans": [
{
"start": 79,
"end": 82,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "p(c I",
"sec_num": null
},
{
"text": "\u2022 a set of weights applied in a log-linear fashion to the different models, namely: \u03bb \u03c6 , the weight given to the transfer model; \u03bb lm , the weight given to the language model, \u03bb \u03c9 , the word-penalty weight and \u03bb d , the weight given to the distortion model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "p(c I",
"sec_num": null
},
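{
"text": "To make the role of these weights concrete, here is a Python sketch of how a single hypothesis would be scored under the log-linear combination of equations 1 and 2; the function and parameter names are ours, not Pharaoh's:

def hypothesis_score(phrase_pairs, lm_logprob, weights):
    # phrase_pairs: list of (phi_logprob, distortion, target_length) per
    # phrase pair, with distortion = a_i - b_{i-1} as in equation 2.
    score = weights['lm'] * lm_logprob
    target_words = 0
    for phi_logprob, distortion, target_length in phrase_pairs:
        score += weights['phi'] * phi_logprob
        score += weights['d'] * -abs(distortion)  # simple distortion penalty
        target_words += target_length
    score += weights['w'] * target_words          # word penalty
    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our system",
"sec_num": null
},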
{
"text": "We trained the language models of the target part of the training corpus (20 000 English sentences) with the SRILM package 1 . In order to feed the phrase extractor, we first wordaligned the training bitext making use of the Giza++ package. Since [5] shown that the degree of the IBM model from which the viterbi alignment is computed was not playing a crucial role, we used the viterbi approximation computed by Giza++ for the IBM model 3 (training IBM model 4 is more demanding, since we need to train the word classes that are conditioning the distortion probabilities of this model).",
"cite_spans": [
{
"start": 123,
"end": 124,
"text": "1",
"ref_id": "BIBREF0"
},
{
"start": 247,
"end": 250,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "p(c I",
"sec_num": null
},
{
"text": "The only thing we had really to implement was a prescription to get the FPBMs required by Pharaoh. This is described in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "p(c I",
"sec_num": null
},
{
"text": "We tried two kinds of strategies to compute the FPBMs. The first one, is directly following the approach described in [5, 6] and is detailed in section 4.1. The second one is a simple string-based approach described in section 4.2.",
"cite_spans": [
{
"start": 118,
"end": 121,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 122,
"end": 124,
"text": "6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase extraction",
"sec_num": "4."
},
{
"text": "A nowadays standard practice among the PBM practitioners consists in aligning the bitext at the word level making use of word-alignment models trained in both directions (here C\u2192E and E\u2192C). This double alignment process makes senses since the underlying alignment model (most often an IBM model [16] ) is not symmetrical. Two sets of links between words are then distinguished. We call P (for Precision) the set of links that are present in both alignment directions, and R (for Recall) the links that are present in at least one alignment (C\u2192E or E\u2192C). Note that P \u2286 R. The word alignment retained is constituted of the P-links, as well as some R-links in the neighborhood of P.",
"cite_spans": [
{
"start": 295,
"end": 299,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
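{
"text": "A minimal Python sketch of the P and R link sets, assuming each directional Viterbi alignment is given as a set of (source position, target position) links:

def precision_recall_links(c2e_links, e2c_links):
    # P: links present in both alignment directions (intersection);
    # R: links present in at least one direction (union). P is a subset of R.
    p = set(c2e_links) & set(e2c_links)
    r = set(c2e_links) | set(e2c_links)
    return p, r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},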
{
"text": "We implemented a variant of this approach which is strongly inspired by [5, 6] . Although the principle is very straightforward, we did not find in those articles a precise enough description of the algorithm. So, for the sake of completeness, we report in algorithm 1 the pseudo-code of the variant that we implemented. Our algorithm works in 4 steps. First, the P-links are considered (line 6), then extended by considering R-links (lines 9-21). Third, independent boxes are collected (lines 24-33). An independent box ((x 1 , x 2 ), (y 1 , y 2 )) represents a region in the alignment matrix where none of the source words S x2",
"cite_spans": [
{
"start": 72,
"end": 75,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 76,
"end": 78,
"text": "6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "x1 is aligned to a word not belonging to T y2 y1 and vice-versa:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200x \u2208 [x 1 , x 2 ], \u2200y : (x, y), y \u2208 [y 1 , y 2 ] \u2200y \u2208 [y 1 , y 2 ], \u2200x : (x, y), x \u2208 [x 1 , x 2 ]",
"eq_num": "(3)"
}
],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "where is an alignment relation made explicit by step 1 and 2 of the algorithm. The fourth and last step of the algorithm (lines 36-42) consists in electing pairs of phrases, any sequence of adjacent (on the source side) boxes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "The pseudo-code of our variant makes use of a data structure T [x] (resp. T [y]) which stands for the target (resp. source) positions associated to the source (resp. target) position x (resp. y):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T [x] = {y : (x, y)}, \u2200x \u2208 [1, |S|] T [y] = {x : (x, y)}, \u2200y \u2208 [1, |T |]",
"eq_num": "(4)"
}
],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "We also need a few functions to simplify the description of the algorithm. The first function maintains the T structure during step 1 and 2 of the projection algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "function add(x, y) T [x] \u2190 T [x] \u222a {y} T [y] \u2190 T [y] \u222a {x}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "The second one, called during the extension stage verifies that (x, y) is a valid link to extend on:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "function neighbor(x, y) if (x, y) \u2208 R, \u2208P then if T [x] = {} or T [y] = {} then a \u2190 a \u222a (x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "The third function collects the pairs of phrases after checking some few length properties and is called during step 4 of the algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "function add(x 1 , x 2 , y 1 , y 2 ) x \u2190 x 2 \u2212 x 1 + 1 y \u2190 y 2 \u2212 y 1 + 1 if x \u2208 [minLength, maxLength] then if y \u2208 [minLength, maxLength] then if (max(x, y)/min(x, y)) \u2264 ratio then res \u2190 res \u222a (S x2 x1 , T y2 y1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
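{
"text": "As an illustration of this final filtering step, the following Python sketch mirrors the length and ratio checks of the third function and turns the surviving boxes into phrase pairs; it is a simplified stand-in for our variant, with 0-indexed inclusive box coordinates:

def keep_box(x1, x2, y1, y2, min_length=1, max_length=8, ratio=2.0):
    # Length checks applied before a box becomes a phrase pair.
    x, y = x2 - x1 + 1, y2 - y1 + 1
    return (min_length <= x <= max_length
            and min_length <= y <= max_length
            and max(x, y) / min(x, y) <= ratio)

def collect_pairs(boxes, src_words, tgt_words):
    # boxes: list of ((x1, x2), (y1, y2)) independent boxes.
    pairs = []
    for (x1, x2), (y1, y2) in boxes:
        if keep_box(x1, x2, y1, y2):
            pairs.append((' '.join(src_words[x1:x2 + 1]),
                          ' '.join(tgt_words[y1:y2 + 1])))
    return pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},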
{
"text": "The second phrase extractor performs simple string operations. It is intended to capture obvious redundancies at the sentence and phrasal level in the training corpus. It is based on the simplifying assumption that if two strings are in relation of translation and if part of them also are, then we can induce a specific translation relation between the other parts. This is the idea formulated in the algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String-based extractor",
"sec_num": "4.2."
},
{
"text": "In practice, we factor out the prefix and suffix test carried out in lines 10 and 11 of Algorithm 2 by sorting the training corpus using as sort key: a) the Chinese sentence, b) the English sentence, c) the inverted Chinese sentence and d) the inverted English sentence. Iterating from the top to the bottom of these lists, whenever a line contains it's preceding line, the preceding line is subtracted and the new pair of phrases added to the training corpus. The process was stopped when the productivity of the algorithm decreased, producing about 60 000 new pairs of phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String-based extractor",
"sec_num": "4.2."
},
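{
"text": "One pass of this sort-based prefix test can be sketched in Python as follows; handling suffixes (done above by sorting inverted sentences) is analogous, and the variable names are ours:

def prefix_pass(bitext):
    # bitext: list of (chinese, english) sentence pairs.
    ordered = sorted(bitext)  # sort key: Chinese sentence, then English
    new_pairs = set()
    for (c1, e1), (c2, e2) in zip(ordered, ordered[1:]):
        # If a line contains its preceding line as a prefix on both sides,
        # subtract it and keep the remainders as a new phrase pair.
        if c2.startswith(c1) and e2.startswith(e1):
            alpha, beta = c2[len(c1):].strip(), e2[len(e1):].strip()
            if alpha and beta:
                new_pairs.add((alpha, beta))  # (Chinese, English) remainders
    return new_pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String-based extractor",
"sec_num": "4.2."
},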
{
"text": "We examined two ways of scoring the pairs of phrases (s, t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase ranking",
"sec_num": "5."
},
{
"text": "Both are estimates of the conditional probability p(t|s). The first estimator is relative frequency (equation 5) which, as mentioned earlier, largely overestimates the probability of rare phrases. Table 1 reports the frequency distributions of the pair of phrases observed for different settings on the training corpus (20 000 pairs of sentences). Approximatively 90% of the observed pairs appear only once in the training corpus, and around 70% of the parameters are set to unity by the relative frequency estimator.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Phrase ranking",
"sec_num": "5."
},
{
"text": "An alternative is to resort to IBM model 1 [16] to score a pair. This is done by computing equation 6. if p = 2 then ",
"cite_spans": [
{
"start": 43,
"end": 47,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase ranking",
"sec_num": "5."
},
{
"text": "Algorithm 1 (continued), step 3, collecting independent boxes: for each source position, repeat: Xm \u2190 X; Ym \u2190 Y; for all x \u2208 X do Y \u2190 Y \u222a T[x]; if Y != Ym then for all y \u2208 Y do X \u2190 X \u222a T[y]; until X = Xm and Y = Ym; then b \u2190 b \u222a ((min{x : x \u2208 X}, max{x : x \u2208 X}), (min{y : y \u2208 Y}, max{y : y \u2208 Y})) and x \u2190 max{x : x \u2208 X} + 1. Step 4, electing the pairs of phrases: for each box b_i = ((x_mi, x_Mi), (y_mi, y_Mi)) do add(x_mi, x_Mi, y_mi, y_Mi); then for j : i + 1 \u2192 |b|, with b_j = ((x_mj, x_Mj), (y_mj, y_Mj)), if x_Mi + 1 = x_mj then add(x_mi, x_Mj, y_mi, y_Mj).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-alignment-based extractor",
"sec_num": "4.1."
},
{
"text": "Algorithm 2 (string-based phrase extractor), compositionality step: repeat: for all (E1, C1) \u2208 res and (E2, C2) \u2208 res, if C2 = C1\u03b1 or C1 = C2\u03b1, and E2 = E1\u03b2 or E1 = E2\u03b2, then res \u2190 res \u222a (\u03b2, \u03b1); until convergence of res.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String-based extractor",
"sec_num": "4.2."
},
{
"text": "p rel (t|s) = |(t, s)| |t| (5) p ibm (t|s) = (|S| + 1) \u2212|T | |T | i=1 j\u2208[0,|S|] p(t i |s j ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase ranking",
"sec_num": "5."
},
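{
"text": "A Python sketch of the two estimators, assuming pair and source counts collected from the extracted phrases and a lexical table lex[(t_word, s_word)] = p(t_word|s_word) learned from the word alignments:

def p_rel(pair_counts, src_counts, s, t):
    # Relative-frequency estimate of p(t|s) (equation 5).
    return pair_counts[(s, t)] / src_counts[s]

def p_ibm1(lex, s_words, t_words):
    # IBM model 1 score of a phrase pair (equation 6); 'NULL' stands for
    # the empty source position j = 0.
    prob = (len(s_words) + 1) ** (-len(t_words))
    for t in t_words:
        prob *= sum(lex.get((t, s), 0.0) for s in ['NULL'] + list(s_words))
    return prob",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase ranking",
"sec_num": "5."
},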
{
"text": "During this exercise, we only used the corpora made available by the organizers, the characteristic of which are reported in Table 2 . No pre-processing was done to try to reinforce the parallelism between the two languages. Neither did we try to account for class of tokens such as numbers or dates. We did not change either the tokenization provided, but did convert the English into lowercase. Punctuation marks were left as is in the corpora, but removed after translation, as required by the organizers. The TRAIN corpus was split into TRAIN-Q and TRAIN-A corpus, gathering interrogative and affirmative sentences respectively. See section 7.5 for the motivations behind this split. The CSTAR corpus contains 506 Chinese sentences with ",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Corpora",
"sec_num": "6."
},
{
"text": "In this section, we summarize the experiments we did with the above described phrase-based acquisition methods. We ran the decoder on the CSTAR corpus. The best parameter setting would be the one we would use for translating the official test set. As discussed in few moments, many things have been tried, some useful, some not, and much script code has been churned out, with some inevitable bugs (recall that we devoted one month for the full exercise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},
{
"text": "Repeating the experiments with a less stringent schedule (that is, after the official test), we detected and corrected several bugs. The results that are reported here are mainly those we measured after the competition (after correcting the few bugs we found), but we also report in section 7.6 the results of the translations we officially submitted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},
{
"text": "One of the goal of the IWSLT exercise was to evaluate the salience of different evaluation metrics. For our own purpose, we computed a subset of those metrics: the BLEU 3 and NIST scores using the mteval script. We also computed MWER and MSER measures with an in-house tool as follow:",
"cite_spans": [
{
"start": 169,
"end": 170,
"text": "3",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MWER(T S 1 , R N 1 ) = 100 S S i=1 min r\u2208[1,N ] ed(T i , R r i ) (7) MSER(T S 1 , R N 1 ) = 100 S S i=1 \u03b4 min r\u2208[1,N ] ed(T i , R r i ) (8) with \u03b4(x) = 0 if x = 0 1 otherwise",
"eq_num": "(9)"
}
],
"section": "Experiments",
"sec_num": "7."
},
{
"text": "where T S 1 is the set of S candidate translations to be evaluated, T i being the candidate translation of the ith source sentence, R N 1 stands for the set of N reference translations to the S source sentence; R r i being the rth reference translation of the ith source sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},
{
"text": "ed(a, b) is the classic edit distance between a and b (counting 1 for insertion, substitution and deletion) normalized by the total number of operations involved to map a into b (counting as well the identity operation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},
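{
"text": "A Python sketch of both measures; as a simplification, the normalization below divides by the length of the longer of the two word sequences rather than by the exact operation count of the optimal edit path:

def word_error_rate(cand, ref):
    # Word-level edit distance (insertions, substitutions and deletions
    # cost 1), normalized by the longer sequence length.
    a, b = cand.split(), ref.split()
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[-1][-1] / max(len(a), len(b), 1)

def mwer_mser(candidates, references):
    # references[i] is the list of reference translations of sentence i.
    wers = [min(word_error_rate(c, r) for r in refs)
            for c, refs in zip(candidates, references)]
    mwer = 100.0 * sum(wers) / len(wers)
    mser = 100.0 * sum(w > 0 for w in wers) / len(wers)
    return mwer, mser",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},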
{
"text": "The first experiment we conducted was to compare the performance of a word-based translation engine to the performance of Pharaoh seeded with a FPBM acquired by the approach described in section 4.1 (ratio = 2, maxLength = 8, and minLength = 1). Each parameter of this model was estimated by relative frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From word-based models to phrase-based models",
"sec_num": "7.1."
},
{
"text": "The word-based SMT engine is an extension to a trigram language model of the inverted dynamic programming approach described in [17] . This decoder which is designed for an IBM model 2 had been implemented before the IWSLT exercise. The performance of both the word-based and the phrase-based engines are reported in Table 3 .",
"cite_spans": [
{
"start": 128,
"end": 132,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "From word-based models to phrase-based models",
"sec_num": "7.1."
},
{
"text": "As can be observed, the performances of both decoders do elicit differences, notably on the NIST score, but not as much as we would have expected at first, especially if we consider the simplicity of the word-based model (WBM) embedded into our word-based engine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "From word-based models to phrase-based models",
"sec_num": "7.1."
},
{
"text": "As a raw check that our word-based engine was not too buggy, we ran the Pharaoh decoder with the transfer parameters of the IBM model 2 converted into the appropriate format. The results are reported in line 3 of Table 3 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "From word-based models to phrase-based models",
"sec_num": "7.1."
},
{
"text": "One important thing we learned is that significant improvements -as measured by the automatic metrics we computed -can be obtained by tuning the engine adequately. What we call tuning here is the choice of the decoder parameters (or meta parameters) we can control via the built-in options of Pharaoh. This is done without modifying the model themselves (translation or language models), but finding the appropriate value of: \u03bb lm , the weight given to the language model (see equation 1); \u03bb \u03c9 the word penalty (see equation 1); \u03bb \u03c6 , the weight given to the translation model (see equa-tion 2), and \u03bb d , the weight given to the distortion model: (see equation 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decoder parameters",
"sec_num": "7.2."
},
{
"text": "Since we had only a few parameters to tune, we applied a poor man's strategy: a) sample uniformly the range of each parameter, b) generate all the combinations of parameter values, and c) translate the full test corpus with all these configurations generated. Clearly, there are more clever ways to tune the decoder, but we had at our disposal around 30 processors, and thanks to the decoder speed, it was manageable to tune the decoder for a set of models within a few hours of computation 4 . We made the arbitrary choice of optimizing the performance as measured by the NIST metric.",
"cite_spans": [
{
"start": 491,
"end": 492,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decoder parameters",
"sec_num": "7.2."
},
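{
"text": "This strategy amounts to an exhaustive grid search, sketched below in Python; decode and evaluate stand for running Pharaoh and the MTEVAL script respectively (both are stubs here, not actual wrappers):

import itertools

def grid_search(decode, evaluate, grid):
    # grid: dict mapping a meta-parameter name to the values to sample.
    best_score, best_params = float('-inf'), None
    names = sorted(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(decode(params))  # e.g. NIST on the CSTAR corpus
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# e.g. grid_search(decode, evaluate,
#                  {'lm': [0.5, 1.0, 1.5], 'phi': [0.5, 1.0, 1.5],
#                   'w': [-1.0, 0.0, 1.0], 'd': [0.5, 1.0]})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decoder parameters",
"sec_num": "7.2."
},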
{
"text": "We report in Table 4 the influence of tuning for a translation model obtained by the FPBMs used in the previous experiment (relative frequency estimator, ratio = 2, maxLength = 8, minLength = 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the decoder parameters",
"sec_num": "7.2."
},
{
"text": "The first line of this table shows the performance we obtained with the default configuration of Pharaoh. The third line shows the configuration of the decoder which yields the largest NIST score. Clearly, tuning is very important, since we obtained a relative gain over the default configuration (line 1) of 23%. If we had to tune only one parameter, then the word penalty would be the one to tune, since it brings alone a relative improvement of 14%, which represents 61% of the higher gain observed. The performance of the decoder, tuned for the word penalty only is reported in the second line of Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 601,
"end": 608,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the decoder parameters",
"sec_num": "7.2."
},
{
"text": "From now on, the performance reported for a given set of models are those obtained after tuning the NIST metric. Table 4 : Performances measured on the CSTAR corpus without tuning (line 1), after tuning the word penalty weight (line 2), and after the tuning of all the parameters (line 3). ",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the decoder parameters",
"sec_num": "7.2."
},
{
"text": "We observed that merging a FPBM with a word-based model enlarges its coverage. Merging two FPBMs (a word-based model can be seen as a special case of a FPBM) p 1 (t|s) and p 2 (t|s) was done by copying the parameters p i (t|s) when s was not in the other model. In cases both models had s in common, the union of the target phrases associated to s by both models was considered. In cases where both models had the same pair of phrases (s, t), its score was averaged over the two models. The parameters were finally normalized so that t p(t|s) = 1, \u2200s. Table 5 shows (line 2) that merging the FPBM described above (ratio = 2, minLength = 1, maxLength = 8) with a word-based model resulting from IBM training yields a relative improvement over the phrase-based model alone of 3.7%. Extending the resulting model with the pairs of phrases obtained by the methodology described in section 4.2 only slightly improves the performance (line 3). Actually this gain is probably due to the fact that the TRAIN and TEST corpora share some source sentences, and that the second phrase acquisition method includes the pairs of sentences of the training corpus as parameters.",
"cite_spans": [],
"ref_spans": [
{
"start": 552,
"end": 559,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Merging different FPBMs",
"sec_num": "7.3."
},
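{
"text": "A Python sketch of the merging procedure described above, for two models given as nested dictionaries {source phrase: {target phrase: p(t|s)}}:

from collections import defaultdict

def merge_fpbm(m1, m2):
    merged = defaultdict(dict)
    for model in (m1, m2):
        for s, targets in model.items():
            for t, p in targets.items():
                if t in merged[s]:
                    merged[s][t] = (merged[s][t] + p) / 2.0  # shared pair: average
                else:
                    merged[s][t] = p  # pair in only one model: copy
    for targets in merged.values():  # renormalize so that sum_t p(t|s) = 1
        z = sum(targets.values())
        for t in targets:
            targets[t] /= z
    return dict(merged)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging different FPBMs",
"sec_num": "7.3."
},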
{
"text": "The improvement observed by merging the WBM with the FPBM is somehow surprising considering the very harsh way we did the merging. We tried a cleaner linear combination of both models without better improvements. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merging different FPBMs",
"sec_num": "7.3."
},
{
"text": "We report in this section the influence of the way a pair of phrases is scored within the translation model. The baseline model we consider here is the merged FPBM of the last section (line 3 of Table 5 ), a model of 306 585 parameters trained by relative frequency. Rating these parameters by equation 6 yields a relative improvement in the NIST score of 3% (line 2 of Table 6 ). For a given set of phrase parameters, Pharaoh allows to provide several scores, in which case, a specific weight must be given to each model. We tuned a model with two scores, one computed by relative frequency, the other one computed by equation 6. Thus, the tuning of the decoder was involving 5 parameters. The result of this experiment can be seen in line 3 of Table 6 . A slight increase of the NIST metric as well as an improvement in BLEU% score can be observed.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 370,
"end": 377,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 746,
"end": 753,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Scoring phrases with IBM model 1",
"sec_num": "7.4."
},
{
"text": "Inspecting the last model, we observed that around half the parameters (150 127) where set to 1 by each score. This is due to the fact that up to now, we systematically normalized the parameters so that the stochastic constraints hold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring phrases with IBM model 1",
"sec_num": "7.4."
},
{
"text": "We tried a last model where the IBM model 1 score was not normalized in the cases where only one target phrase was associated to a given source sentence (we also tried with less success a model where no normalization was carried out at all for the IBM score). The result of this experiment is shown in line 4 of Table 6 : an improvement is observed on the NIST score, but at the detriment of the word error rate. ",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Scoring phrases with IBM model 1",
"sec_num": "7.4."
},
{
"text": "Based on the observation that around 40% of the training sentences were interrogatives, we investigated whether splitting the training corpus into two parts (interrogative sentences versus affirmative ones) and training separately on these two corpora could lead to some improvements. Splitting the corpus was done by explicitly looking for the presence (or absence) of the question mark word at the end of the Chinese sentences. At translation time, the sentences ending with the question mark were translated first with the specific question configuration. The other sentences were translated with the affirmative configuration. The two translation sets were then merged appropriately to get a final translation of the source test corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Specific models",
"sec_num": "7.5."
},
{
"text": "We first tried to train two different translation models. None of the trials we made resulted in an improvement. The fact that Pharaoh does not lend itself to combining different translation models that do not have the exact same set of parameters might be a reason for our lack of success in this experiment. We found however that combining a specific language model with the one trained on the full corpus leads to a slight improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Specific models",
"sec_num": "7.5."
},
{
"text": "This time, we conducted separately two tunings -one for affirmative sentences, one for questions -over the 6 parameters now controlling the decoder: two parameters for the language models (one for the specific model, one for the general one), two parameters for the translation model (one for the relative frequency score, one for the IBM score), one parameter for the distortion model, and one for the word penalty. The best improvement on the NIST score is reported in line 4 of Table 7 . We observe that it is not correlated with improvements in the other metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 481,
"end": 488,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Specific models",
"sec_num": "7.5."
},
{
"text": "According to the experiences we conducted on the CSTAR corpus, we identified several variants that we wanted to submit. They are enumerated in increasing order of their expected merit as estimated by the NIST metric 5 . The variant we submitted for manual evaluation was the QA one, the last one in this list. We also submitted a translation involving manual intervention in order to measure the usefulness of the automatic translations for human postediting. One way of measuring the usefulness of a MT system is to see whether a post-editor can enhance the amount of correct translations without seeing the source text being translated. Therefore, for each source sentence, we presented a subject 6 with translations (produced by the above variants) from which he produced a final translation. He could do that by selecting one translation among the generated translations and enhancing its quality though slight modifications .",
"cite_spans": [
{
"start": 699,
"end": 700,
"text": "6",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The translations we submitted before the deadline",
"sec_num": "7.6."
},
{
"text": "Out of the 500 target sentences that were produced in this way, 423 (84.6%) were just selections of one of the automatic translations. Out of these 423 translations, 85 (20%) were produced by the word-based engine (ibm2+3g).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The translations we submitted before the deadline",
"sec_num": "7.6."
},
{
"text": "In many cases it was impossible to guess a meaning from the translations. Particularly and in most cases for longer sentences, it was hopeless to amend the translation (e.g. very sorry to have in case two people eight with two suitcases sit in the difference from here). These translations were just copied. In other cases, almost any choice of the produced translations was a priori equally good, as in: how many minutes on foot ? how many minutes ? how many minutes does it take to get to the station by taxi ? 6 One of the authors of this paper, not familiar with Chinese.",
"cite_spans": [
{
"start": 513,
"end": 514,
"text": "6",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The translations we submitted before the deadline",
"sec_num": "7.6."
},
{
"text": "In total, 77 sentences were manually modified. Some obvious errors, such as repetitions of words, missing or additional articles and incomplete phrases, were corrected and the word position was adjusted. A one-sentence session is illustrated in Table 8 ; the full session can be seen at www. iro.umontreal.ca/\u02dcfelipe/iwslt/manual. This submission is called manual in Table 9 which shows the scores that were returned to us by the organizers.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 8",
"ref_id": "TABREF10"
},
{
"start": 367,
"end": 374,
"text": "Table 9",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "The translations we submitted before the deadline",
"sec_num": "7.6."
},
{
"text": "As can be observed in Table 9 , the order of merit of the variants we tried as measured on the CSTAR corpus is close to the one we observe on the TEST corpus. The exception is for the QA variant which performed worst on the latter corpus than the merge variant. We also observed a gain in performance for the manual version. ",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 9",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "The translations we submitted before the deadline",
"sec_num": "7.6."
},
{
"text": "We took the opportunity of the IWSLT 2004 shared task to experiment with phrase-based models. Although FPBMs are conceptually very simple, we found that many factors must be considered to get the best out of them, and that a great amount of time must be spent to monitor the improvements. Due to lack of time, we studied in this exercise only few of the factors that can affect the performance. We did not for example study the impact of word alignment techniques on our phrase acquisition method. Neither did we test carefully the different variants of the phrase extractors we imple-mented. Finally, we did not find time to analyse why a given variant was working better than another very close one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "8."
},
{
"text": "However, we experienced the importance of adequately tuning the meta-parameters of the decoder. We also observed that improvements could be obtained by merging the parameters of phrase-based and word-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "8."
},
{
"text": "This work was manageable in a short period of time, thanks to the availability of the Pharaoh decoder. A byproduct that we found useful is that this decoder offers a reference performance against which we can compare another decoder. In particular, we verified that our word-based engine had reasonable performances compared to Pharaoh seeded with the same transfer model. The greatest frustration we had after accomplishing this work was to contemplate the numerous experiments we could have done but did not. One bottleneck into a systematic exploration of phrase-based variants is the tuning required after each change in any step of the acquisition of a FPBM. We plan to consider a better way of tuning the parameters toward a given metric or set of metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "8."
},
{
"text": "We did not investigate the many smoothing options this package handles, but applied the setting recommended in[15].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For some reasons, a few sentences had only 15 translations. Therefore, the reference we consider in this study is constituted only of the first 15 translations provided for each source sentence.3 For readability reasons, we report BLEU% scores, that is, the BLEU score multiplied by 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Actually, an important part of the tuning process is devoted to computing the NIST scores with the MTEVAL script, as well as loading the parameters into Pharaoh",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "At the time of the exercise, the performance of the QA-model we had was significantly higher than that of other variants we considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Phrase Pair Rescoring with Term Weightings for Statistical Machine Translation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao B., Vogel S. and Waibel A., \"Phrase Pair Rescor- ing with Term Weightings for Statistical Machine Translation\", In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Barcelona, Spain, July 2004",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improved Statistical Alignment Models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Conference of the Association for Computational Linguistic (ACL)",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och F.J. and Ney H., \"Improved Statistical Align- ment Models\", in Proceedings of the Conference of the Association for Computational Linguistic (ACL), Hongkong, China, pp. 440-447, 2000",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pharaoh: a Beam Search Decoder for Phrase-Based SMT",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "To appear in Proceedings of the Conference of the Association for Machine Translation in the Americas (AMTA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn P., \"Pharaoh: a Beam Search Decoder for Phrase-Based SMT\", To appear in Proceedings of the Conference of the Association for Machine Translation in the Americas (AMTA), 2004",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SRILM -An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference for Speech and Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke A., \"SRILM -An Extensible Language Model- ing Toolkit\", In Proceedings of the International Con- ference for Speech and Language Processing (ICSLP), Denver, Colorado, September 2002",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical Phrase-Based Translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Human Language Technology Conference (HLT)",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn P., Och F.J. and Marcu D., \"Statistical Phrase- Based Translation\", In Proceedings of the Human Lan- guage Technology Conference (HLT), pp. 127-133, 2003",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Projection Extension Algorithm for Statistical Machine Translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tillmann C., \"A Projection Extension Algorithm for Statistical Machine Translation\", In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2003",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved Alignment Models for Statistical Machine Translation",
"authors": [
{
"first": "",
"middle": [
"F J"
],
"last": "Och",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och. F.J., Tillmann C. and Ney H., \"Improved Align- ment Models for Statistical Machine Translation\", in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 20-28, College Park, Maryland, USA, 1999",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improvements in Phrase-Based Statistical Machine Translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zens R. and Ney H., \"Improvements in Phrase-Based Statistical Machine Translation\", In Proceedings of the Human Language Technology Conference (HLT- NAACL), Boston, MA, pp. 257-264, May 2004",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining phrasebased and template-based aligned models in statistical translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tom\u00e0s",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the First Iberian Conference on Pattern Recognition and Image Analysis",
"volume": "",
"issue": "",
"pages": "1020--1031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Tom\u00e0s and F. Casacuberta., \"Combining phrase- based and template-based aligned models in statistical translation\", In Proceedings of the First Iberian Confer- ence on Pattern Recognition and Image Analysis, pp. 1020-1031, Mallorca, Spain, June 2003",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The CMU Statistical Machine Translation System",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tribble",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of MT Summit",
"volume": "",
"issue": "",
"pages": "110--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vogel S., Zhang Y., Huang F., Tribble A., Venugopal A., Zhao B., Waibel A., \"The CMU Statistical Machine Translation System\", in Proceedings of MT Summit, pp.110-117, New Orleans, Louisiana, September, 2003",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word Alignment Based on Bilingual Bracketing",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003 Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond",
"volume": "",
"issue": "",
"pages": "15--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao B. and Vogel S., \"Word Alignment Based on Bilingual Bracketing\", In Proceedings of HLT-NAACL 2003 Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pp. 15- 18, Edmonton, Alberta, Canada, May, 2003",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical Translation alignment with Compositionnality Constraints",
"authors": [
{
"first": "M",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond",
"volume": "",
"issue": "",
"pages": "19--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simard M. and Langlais P., \"Statistical Translation alignment with Compositionnality Constraints\", HLT- NAACL Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, Edmon- ton, Canada, May 31, pp.19-22, 2003",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Integrated Phrase Segmentation and alignment Algorithm for Statistical Machine Translation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Conference on Natural Language Processing and Knowledge Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang Y., Vogel S. and Waibel A., \"Integrated Phrase Segmentation and alignment Algorithm for Statisti- cal Machine Translation\", In Proceedings of the In- ternational Conference on Natural Language Process- ing and Knowledge Engineering (NLP-KE), Beijing, China, 2003",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unit Completion for a Computer-aided Translation Typing System",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of Applied Natural Language Processing (ANLP)",
"volume": "",
"issue": "",
"pages": "135--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langlais P., Foster G. and Lapalme G., \"Unit Comple- tion for a Computer-aided Translation Typing System\", In Proceedings of Applied Natural Language Process- ing (ANLP), Seattle, Washington, pp. 135-141, May, 2000",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Beam Search Decoder for Phrase-Based Statistical Machine Translation Models",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn P., \"A Beam Search Decoder for Phrase-Based Statistical Machine Translation Models\", Technical Manual of the Pharaoh decoder, 2003.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Pietra",
"middle": [
"S A"
],
"last": "Della",
"suffix": ""
},
{
"first": "Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown P.F, Della Pietra S.A, Della Pietra V.J and Mer- cer R.L., \"The Mathematics of Statistical Machine Translation: Parameter Estimation\", in Computational Linguistics, vol. 19 (2), pp. 263-311, 1993",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A DP based Search Algorithm for Statistical Machine Translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Niessen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the International Conference On Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "960--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niessen S., Vogel S. Ney H. and Tillmann C., \"A DP based Search Algorithm for Statistical Machine Trans- lation\", in Proceedings of the International Conference On Computational Linguistics (COLING), pp. 960- 966, 1998",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "-1,y-1); neighbor(x+1,y-1); 17: neighbor(x-1,y+1); neighbor(x+1,y-1); 18: else 19: neighbor(x-1,y); neighbor(x+1,y); 20: neighbor(x,y-1); neighbor(x,y+1); 21: for all (x, y) \u2208 a do add(x, y) 22: until |a| = 0 23: 24: Step3: Collect independent boxes 25: b \u2190 {} 26: for x : 1 \u2192 |S| do 27: X \u2190 {x}; Y \u2190 {} 28: repeat 29:"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "41:for j : i + 1 \u2192 |b| do42: let ((x mj , x Mj ), (y mj , y Mj )) = b j 43: if x Mi + 1 = x mj then 44:"
},
"TABREF0": {
"content": "<table><tr><td>5:</td><td/></tr><tr><td colspan=\"2\">6: Step1: P-projection</td></tr><tr><td colspan=\"2\">7: for all (x, y) \u2208 P do add(x, y)</td></tr><tr><td>8:</td><td/></tr><tr><td colspan=\"2\">9: Step2: Extension</td></tr><tr><td colspan=\"2\">10: for p : 1 \u2192 2 do</td></tr><tr><td>11:</td><td>repeat</td></tr><tr><td>12:</td><td>a \u2190 {}</td></tr><tr><td>13:</td><td>for s : 1 \u2192 |S| do</td></tr><tr><td>14:</td><td>for all t \u2208 T [s] do</td></tr><tr><td>15:</td><td/></tr></table>",
"text": "Koehn-Tilmann-like variant for phrase extraction Require: P, R, minLength, maxLength, ratio Ensure: res contains all the pairs of phrases 1: Initialization 2: res \u2190 {} 3: for all x : 1 \u2192 |S| do T [x] \u2190 {} 4: for all y : 1 \u2192 |T | do T [y] \u2190 {}",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>4:</td><td>res \u2190 res \u222a (E i , C i )</td></tr><tr><td>5:</td><td/></tr><tr><td colspan=\"2\">6: Applying compositionality</td></tr><tr><td colspan=\"2\">7: repeat</td></tr><tr><td>8:</td><td/></tr></table>",
"text": "a training corpus Ensure: res contains the pair of phrases 1: Initialization 2: res \u2190 {} 3: for i : 1 \u2192 |T | do",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td colspan=\"6\">min max |model| %f1 %f2 %f3+ %p = 1</td></tr><tr><td>1</td><td>8</td><td>166 481 90.6</td><td>4.9</td><td>4.5</td><td>74.6</td></tr><tr><td>2</td><td>8</td><td>153 512 92.7</td><td>4.3</td><td>3.0</td><td>78.5</td></tr><tr><td>2</td><td>4</td><td>73 369 87.0</td><td>7.1</td><td>5.9</td><td>68.7</td></tr></table>",
"text": "Frequency distribution of pairs of phrases observed in the training corpus for different values of minLength and maxLength. A ratio of 2.0 was applied. %f1, %f2 and %f3+ stand for the percentage of parameters (pairs of phrases) seen 1, 2 or at least 3 times in the TRAIN corpus. %p = 1 stands for the percentage of parameters that have a relative frequency of 1. English reference translations2 . It was available four weeks before the official test and was used to gain some expertise on the phrase-based models.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td/><td/><td>Chinese</td><td/><td>English</td><td/></tr><tr><td>corpus</td><td>|pair|</td><td colspan=\"2\">tokens words</td><td colspan=\"2\">tokens words</td></tr><tr><td>TRAIN</td><td colspan=\"5\">20 000 182 904 7 643 188 935 7 181</td></tr><tr><td colspan=\"6\">TRAIN-A 11 884 112 000 6 456 116 343 6 008</td></tr><tr><td>TRAIN-Q</td><td>8 116</td><td colspan=\"2\">70 904 4 024</td><td colspan=\"2\">72 592 3 900</td></tr><tr><td>CSTAR</td><td>506</td><td>3 515</td><td>870</td><td>-</td><td>-</td></tr><tr><td>TEST</td><td>500</td><td>3 794</td><td>893</td><td>-</td><td>-</td></tr></table>",
"text": "Main characteristics of the corpora used in this study.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>engine</td><td>NIST</td><td colspan=\"3\">BLEU% MWER MSER</td></tr><tr><td>ibm2+3g</td><td colspan=\"2\">5.0726 26.57</td><td>60.56</td><td>94.47</td></tr><tr><td>Pharaoh</td><td colspan=\"2\">5.5646 26.16</td><td>59.70</td><td>94.27</td></tr><tr><td colspan=\"3\">wbm by Pharaoh 4.8417 15.54</td><td>64.95</td><td>97.63</td></tr></table>",
"text": "Performances measured on the CSTAR corpus of the word-based engine (line 1), and the phrase-based engine (line 2). Line 3 shows the performance of the Pharaoh decoder fed with an IBM model 2 transfer model",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table><tr><td>config</td><td>|p| NIST</td><td colspan=\"3\">BLEU% MWER MSER</td></tr><tr><td>FPBM</td><td colspan=\"2\">6.8401 28.44</td><td>56.25</td><td>94.07</td></tr><tr><td>+ WBM</td><td colspan=\"2\">7.0766 31.38</td><td>54.88</td><td>93.28</td></tr><tr><td>+ SPBM</td><td colspan=\"2\">7.0926 31.78</td><td>54.56</td><td>92.69</td></tr></table>",
"text": "Performances of the merged model measured on the CSTAR corpus. SBPM stands for the string-based phrase model extracted by the approach described in section 4.2.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td>model</td><td>NIST</td><td colspan=\"3\">BLEU% MWER MSER</td></tr><tr><td>relfreq</td><td colspan=\"2\">7.0926 31.78</td><td>54.56</td><td>92.69</td></tr><tr><td>ibm</td><td colspan=\"2\">7.3067 32.98</td><td>53.86</td><td>92.49</td></tr><tr><td>relfreq&amp;ibm</td><td colspan=\"2\">7.3118 34.48</td><td>52.73</td><td>91.90</td></tr><tr><td colspan=\"3\">relfreq&amp;pn-ibm 7.4219 34.6</td><td>53.02</td><td>91.70</td></tr></table>",
"text": "Influence of the function used to score a parameter. relfreq stands for the relative frequency estimator, ibm for the IBM model 1 scoring (equation 6), and pn-ibm for the partially normalized IBM score.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td>config</td><td>NIST</td><td colspan=\"3\">BLEU% MWER MSER</td></tr><tr><td colspan=\"3\">relfreq&amp;ibm 7.3118 34.48</td><td>52.73</td><td>91.90</td></tr><tr><td>A</td><td colspan=\"2\">7.1862 34.21</td><td>53.12</td><td>91.18</td></tr><tr><td>Q</td><td colspan=\"2\">6.4995 34.92</td><td>52.12</td><td>93.00</td></tr><tr><td colspan=\"3\">specific-lm 7.4702 33.64</td><td>53.27</td><td>91.90</td></tr><tr><td>A</td><td colspan=\"2\">7.3229 33.66</td><td>53.08</td><td>90.85</td></tr><tr><td>Q</td><td colspan=\"2\">6.7010 33.58</td><td>53.55</td><td>93.50</td></tr><tr><td colspan=\"4\">ibm2+3g our word-based translation engine,</td></tr><tr><td colspan=\"5\">straight the model obtained by the extraction method de-</td></tr><tr><td colspan=\"2\">scribed in section 4.1,</td><td/><td/></tr><tr><td colspan=\"5\">merge the best model obtained by merging word associ-</td></tr><tr><td colspan=\"5\">ations and phrase associations acquired by the two</td></tr><tr><td colspan=\"2\">methods we described,</td><td/><td/></tr><tr><td colspan=\"5\">QA the engine combining a general language model with</td></tr><tr><td colspan=\"5\">one specifically trained on the interrogative (resp. af-</td></tr><tr><td colspan=\"4\">firmative) sentences of the TRAIN corpus,</td></tr></table>",
"text": "Performances of the merged model measured on the CSTAR corpus. A and Q stand for the performance measured on the subset of respectively affirmative and interrogative sentences of the test corpus.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF10": {
"content": "<table><tr><td>trans1</td><td>take a bath for a twin room .</td></tr><tr><td>trans2</td><td>please take a bath for a double .</td></tr><tr><td>trans3</td><td>take a bath of double .</td></tr><tr><td>trans4</td><td>take one twin room with bath .</td></tr><tr><td>trans5</td><td>have a bath for double .</td></tr><tr><td>trans6</td><td>have a twin room with bath , please .</td></tr><tr><td>trans7</td><td>have a double room with bath , please .</td></tr><tr><td colspan=\"2\">manual please,</td></tr></table>",
"text": "Illustration of the manual experiment. The user was presented here with 7 different translations, and produced his own one out of them.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF11": {
"content": "<table><tr><td>config</td><td colspan=\"3\">BLEU% NIST GTM WER PER</td></tr><tr><td colspan=\"2\">ibm2+3g 27.27</td><td>6.55</td><td>62.49 58.12 48.82</td></tr><tr><td>straight</td><td>30.92</td><td>7.52</td><td>66.93 56.05 47.90</td></tr><tr><td>merge</td><td>35.32</td><td>8.00</td><td>68.60 51.74 43.86</td></tr><tr><td>QA</td><td>33.89</td><td>7.85</td><td>68.55 53.24 45.14</td></tr><tr><td>manual</td><td>36.93</td><td>8.13</td><td>68.42 49.62 42.53</td></tr></table>",
"text": "Quality of the translations submitted before the deadline for the TEST corpus. The QA variant is the one we submitted for manual evaluation. The figures reported are rounded versions of the ones reported by the organizers.",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}