{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:16:50.565188Z"
},
"title": "Ngram-based versus Phrase-based Statistical Machine Translation",
"authors": [
{
"first": "Josep",
"middle": [
"M"
],
"last": "Crego",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": "jmcrego@gps.tsc.upc.edu"
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
},
{
"first": "Jos\u00e9",
"middle": [
"B"
],
"last": "Mari\u00f1o",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work were both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work were both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "From the initial word-based translation models [1] , research on statistical machine translation has been strongly boosted. At the end of the last decade the use of context in the translation model (phrase-based approach) lead to a clear improvement in translation quality ( [2] , [3] , [4] ). Nowadays the introduction of some reordering abilities is of crucial importance for some language pairs and is an important focus of research in the area of SMT.",
"cite_spans": [
{
"start": 47,
"end": 50,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 273,
"end": 278,
"text": "( [2]",
"ref_id": null
},
{
"start": 281,
"end": 284,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 287,
"end": 290,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In parallel to the phrase-based approach, the ngrambased approach [5] also introduces the word context in the translation model, what allows to obtain comparable results under monotone conditions (as shown in [6] ). The addition of reordering abilities in the phrase-based approach is achieved by enabling a certain level of reordering in the source sentence. Though, the translation process consists of a composition of phrases, where the sequential composition of the phrases source words corresponds to the source sentence reordered. This procedure poses additional difficulties when applied to the ngram-based approach, because the characteristics of the ngram-based translation model. Despite of this, recent works ( [7] , [8] ) have shown how applying a reordering schema in the training process the ngram-based approach can also take advantage of the distortion capabilities.",
"cite_spans": [
{
"start": 66,
"end": 69,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 209,
"end": 212,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 722,
"end": 725,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 728,
"end": 731,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper we study the differences and similarities of both approaches (ngram-based and phrase-based), focusing on the translation model, where the translation context is differently taken into account. We also investigate the differences in the translation (bilingual) units (tuples and phrases) and show efficiency results in terms of computation time and memory size for both systems. We have extended the comparison in [6] to a Chinese to English task (where the use of distortion capabilities implies a clear improvement in translation quality), and using a much larger Spanish to English task corpus.",
"cite_spans": [
{
"start": 427,
"end": 430,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In section 2 we introduce the modeling underlying both SMT systems, the additional models taken into account in the log-linear combination of features (see equation 1), and the bilingual units extraction methods (namely tuples and phrases). In section 3 is discussed the decoder used in both systems (MARIE) [9] , giving details of pruning and reordering techniques. The comparison framework, experiments and results are shown in section 4, while conclusions are detailed in section 5.",
"cite_spans": [
{
"start": 308,
"end": 311,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Alternatively to the classical source channel approach, statistical machine translation models directly the posterior probability p(e I 1 |f J 1 ) as a log-linear combination of feature models [10] , based on the maximum entropy framework, as shown in [11] . This simplifies the introduction of several additional models explaining the translation process, as the search becomes:",
"cite_spans": [
{
"start": 193,
"end": 197,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 252,
"end": 256,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2."
},
{
"text": "arg max e I 1 {exp( i \u03bb i h i (e, f ))} (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2."
},
{
"text": "where the feature functions h i are the system models (translation model, language model, reordering model, ...), and the \u03bb i weights are typically optimized to maximize a scoring function on a development set. The Translation Model is based on bilingual units (here called tuples and phrases). A bilingual unit consists of two monolingual fragments, where each one is supposed to be the translation of its counterpart. During training, the system learns a dictionary of these bilingual fragments, the actual core of the translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2."
},
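{
"text": "A minimal Python sketch of the log-linear combination in equation (1); the feature names, weights and scores below are illustrative placeholders, not the configuration actually used in this work:

import math

def loglinear_score(feature_scores, weights):
    # Combine feature model scores h_i(e, f) with their weights lambda_i.
    # Both arguments are dicts keyed by feature name.
    return sum(weights[name] * h for name, h in feature_scores.items())

# Hypothetical partial hypothesis: log-probabilities of each model.
weights = {'translation_model': 1.0, 'language_model': 0.9, 'word_penalty': -0.3}
scores = {'translation_model': math.log(0.02),
          'language_model': math.log(0.004),
          'word_penalty': 7.0}
best = loglinear_score(scores, weights)
# The decoder keeps the hypothesis e_1^I maximizing this sum; the exp()
# in equation (1) is monotonic, so it can be dropped during the search.
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "2."
},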
{
"text": "The Translation Model can be thought of a Language Model of bilingual units (here called tuples). These tuples define a monotonous segmentation of the training sentence pairs (f J",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "1 , e I 1 ), into K units (t 1 , ..., t K ). The Translation Model is implemented using an Ngram language model, (for N = 3): Figure 1 shows an example of tuples extraction from a word-to-word aligned sentence pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(e, f ) = P r(t K 1 ) = K k=1 p(t k | t k\u22122 , t k\u22121 )",
"eq_num": "(2)"
}
],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
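{
"text": "As an illustration of equation (2), the following sketch scores a tuple sequence with a trigram model in the log domain; the probability lookup trigram_logprob is a hypothetical stand-in for a smoothed bilingual language model (e.g. one built with SRILM):

def tuple_sequence_logprob(tuples, trigram_logprob):
    # p(e, f) = prod_k p(t_k | t_{k-2}, t_{k-1}); trigram_logprob(u, v, w)
    # is assumed to return log p(w | u, v), with smoothing and back-off
    # handled by the language model toolkit.
    padded = ['<s>', '<s>'] + list(tuples)
    return sum(trigram_logprob(padded[k - 2], padded[k - 1], padded[k])
               for k in range(2, len(padded)))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},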
{
"text": "Bilingual units (tuples) are extracted from any word-toword alignment according to the following constraints [6] :",
"cite_spans": [
{
"start": 109,
"end": 112,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "\u2022 a monotonous segmentation of each bilingual sentence pairs is produced,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "\u2022 no word inside the tuple is aligned to words outside the tuple, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "\u2022 no smaller tuples can be extracted without violating the previous constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "As a consequence of these constraints, only one segmentation is possible for a given sentence pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
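{
"text": "The constraints above can be turned into a small extraction procedure. The sketch below is an illustration, not the actual TALP implementation: each tuple is grown from a single source word until no alignment link crosses its boundary, which yields the unique minimal monotone segmentation; tuples with an empty side correspond to NULL-linked words and would be post-processed as described in the next paragraph.

def extract_tuples(src, tgt, links):
    # src, tgt: lists of words; links: set of (src_idx, tgt_idx) alignment pairs.
    tuples = []
    s0 = t0 = 0                      # start of the current tuple
    while s0 < len(src):
        j, i = s0 + 1, t0            # tentative end (exclusive) on each side
        grew = True
        while grew:                  # close the tuple under the alignment
            grew = False
            for s, t in links:
                if s0 <= s < j and t >= i:
                    i, grew = t + 1, True
                if t0 <= t < i and s >= j:
                    j, grew = s + 1, True
        tuples.append((src[s0:j], tgt[t0:i]))
        s0, t0 = j, i
    if t0 < len(tgt) and tuples:     # trailing unaligned target words
        s_last, t_last = tuples[-1]
        tuples[-1] = (s_last, t_last + tgt[t0:])
    return tuples
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},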
{
"text": "Resulting from this procedure, some tuples consist of a monolingual fragment linked to the NULL word (words#NULL and NULL#words). Those tuples with a NULL word in its source side are not kept as bilingual units. To use these tuples in decoding it should appear a NULL word in the input sentence (test to translate). Though, we assign the target words of these tuples to the next tuple in the tuples sequence of the sentence (training). In the example of figure1, if the NULL word would be contained in the source side, its counterpart (does) would be assigned to the next tuple (does the flight last#dura el vuelo).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "A complementary approach to translation with reordering can be followed if we allow for a certain reordering in the training data. This means that the translation units are modified so that they are not forced to sequentially produce the source and target sentences anymore. The reordering procedure in training tends to monotonize the word-to-word alignment through changing the word order of the source sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "The rationale of this approach is double, on the one hand, it makes sense when applied into a decoder with reordering capabilities as the one presented in the following section, and on the other hand, the unfolding technique generates shorter tuples, alleviating the problem of embedded units (tuples only appearing within long distance alignments, not having any translation in isolation). A very relevant problem in a Chinese to English task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "The unfolding technique is here outlined: It uses the word-to-word alignments obtained by any alignment procedure. It is decomposed in two steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "\u2022 First an iterative procedure, where words in one side are grouped when linked to the same word (or group) in the other side. The procedure loops grouping words in both sides until no new groups are obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
{
"text": "\u2022 The second step consists of outputting the resulting groups (unfolded tuples), keeping the word order of target sentece words. Though, the tuples sequence modifies the source sentence word order. As can be seen,to produce the source sentence, the extracted unfolded tuples must be reordered. It is not the case of the target sentence, as it can be produced in order using both sequence of units. Figure 1 shows the bilingual units extracted using the extract-tuples and extract-unfold-tuples methods, for a given word-to-word aligned sentence pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},
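{
"text": "A sketch of the unfolding procedure (illustrative only): step 1 merges words into groups by computing the connected components of the alignment graph, which is equivalent to the iterative grouping described above, and step 2 emits the groups in target word order; unaligned (NULL) words are ignored here for brevity.

def extract_unfold_tuples(src, tgt, links):
    parent = {}
    def find(x):                      # union-find over word positions
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    # step 1: group words linked to the same word (or group) on the other side
    for s, t in links:
        union(('src', s), ('tgt', t))
    groups = {}
    for s, t in links:
        g = groups.setdefault(find(('src', s)), {'src': set(), 'tgt': set()})
        g['src'].add(s)
        g['tgt'].add(t)
    # step 2: output the groups following the target-side word order
    ordered = sorted(groups.values(), key=lambda g: min(g['tgt']))
    return [([src[i] for i in sorted(g['src'])],
             [tgt[j] for j in sorted(g['tgt'])]) for g in ordered]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ngram-based Translation Model",
"sec_num": "2.1."
},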
{
"text": "The basic idea of phrase-based translation is to segment the given source sentence into phrases, then translate each phrase and finally compose the target sentence from these phrase translations [12] .",
"cite_spans": [
{
"start": 195,
"end": 199,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},
{
"text": "Given a sentence pair and a corresponding word alignment, phrases are extracted following the criterion in [13] and the modification in phrase length in [14] . A phrase (or bilingual phrase) is any pair of m source words and n target words that satisfies two basic constraints: It is infesible to build a dictionary with all the phrases (recent papers show related work to tackle this problem, see [15] ). That is why we limit the maximum size of any given phrase. Also, the huge increase in computational and storage cost of including longer phrases does not provide a significant improve in quality [16] as the probability of reappearence of larger phrases decreases.",
"cite_spans": [
{
"start": 107,
"end": 111,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 153,
"end": 157,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 398,
"end": 402,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 601,
"end": 605,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},
{
"text": "In our system we considered two length limits. We first extract all the phrases of length X or less (usually X equal to 3 or 4). Then, we also add phrases up to length Y (Y greater than X) if they cannot be generated by smaller phrases. Basically, we select additional phrases with source words that otherwise would be missed because of cross or long alignments [14] .",
"cite_spans": [
{
"start": 362,
"end": 366,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},
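{
"text": "The two-level extraction can be sketched as follows (an approximation for illustration; the exact criterion and the extension cited above may differ in details): all alignment-consistent phrase pairs up to length X are collected, and longer pairs up to length Y are added only when they cover a source word that no shorter pair covers.

def extract_phrases(src, tgt, links, x=4, y=7):
    def consistent(i1, i2, j1, j2):
        linked = False
        for s, t in links:
            if (i1 <= s <= i2) != (j1 <= t <= j2):
                return False          # an alignment link crosses the boundary
            linked |= i1 <= s <= i2
        return linked                 # require at least one link inside
    spans = [(i1, i2, j1, j2)
             for i1 in range(len(src)) for i2 in range(i1, min(i1 + y, len(src)))
             for j1 in range(len(tgt)) for j2 in range(j1, min(j1 + y, len(tgt)))
             if consistent(i1, i2, j1, j2)]
    short = [p for p in spans if p[1] - p[0] < x and p[3] - p[2] < x]
    covered = {s for i1, i2, _, _ in short for s in range(i1, i2 + 1)}
    extra = [p for p in spans if p not in short
             and any(s not in covered for s in range(p[0], p[1] + 1))]
    return [(src[i1:i2 + 1], tgt[j1:j2 + 1]) for i1, i2, j1, j2 in short + extra]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},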
{
"text": "Given the collected phrase pairs, we estimate the phrase translation probability distribution by relative frecuency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (f |e) = N (f, e) N (e)",
"eq_num": "(3)"
}
],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},
{
"text": "where N(f,e) means the number of times the phrase f is translated by e. If a phrase e has N > 1 possible translations, then each one contributes as 1/N [12] .",
"cite_spans": [
{
"start": 152,
"end": 156,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},
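{
"text": "A sketch of this estimation (one possible reading of the 1/N weighting; the input format is a hypothetical assumption):

from collections import defaultdict

def phrase_relative_frequency(extracted):
    # `extracted` holds one entry per training sentence pair: the list of
    # phrase pairs (f, e) extracted from it, each phrase being a hashable
    # object such as a tuple of words.
    n_fe = defaultdict(float)        # N(f, e)
    n_e = defaultdict(float)         # N(e)
    for pairs in extracted:
        by_e = defaultdict(list)
        for f, e in pairs:
            by_e[e].append(f)
        for e, fs in by_e.items():
            w = 1.0 / len(fs)        # each of the N candidate translations adds 1/N
            for f in fs:
                n_fe[(f, e)] += w
                n_e[e] += w
    return {(f, e): c / n_e[e] for (f, e), c in n_fe.items()}   # P(f | e)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Translation Model",
"sec_num": "2.2."
},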
{
"text": "Both systems share the additional features which follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "\u2022 Firstly, we consider the target language model. It actually consists of an n-gram model, in which the probability of a translation hypothesis is approximated by the product of word 3-gram probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(T k ) \u2248 k n=1 p(w n |w n\u22122 , w n\u22121 )",
"eq_num": "(4)"
}
],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "where T k refers to the partial translation hypothesis and w n to the n th word in it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "As default language model feature, we use a standard word-based trigram language model generated with smoothing Kneser-Ney and interpolation of higher and lower order ngrams (by using SRILM [17] ).",
"cite_spans": [
{
"start": 190,
"end": 194,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "\u2022 The following two feature functions correspond to a forward and backwards lexicon models. These models provides lexicon translation probabilities for each tuple based on the word-to-word IBM model 1 probabilities [18] . These lexicon models are computed according to the following equation:",
"cite_spans": [
{
"start": 215,
"end": 219,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "p((t, s) n ) = 1 (I + 1) J J j=1 I i=0 p IBM 1 (t i n |s j n ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "where s j n and t i n are the j th and i th words in the source and target sides of tuple (t, s) n , being J and I the corresponding total number words in each side of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "For computing the forward lexicon model, IBM model 1 probabilities from GIZA++ [19] source-to-target alignments are used. In the case of the backwards lexicon model, GIZA++ target-to-source alignments are used instead.",
"cite_spans": [
{
"start": 79,
"end": 83,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
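{
"text": "A sketch of the lexicon feature of equation (5) for a single unit; the lookup p_ibm1 is a hypothetical stand-in for the GIZA++-trained IBM model 1 table, and the backward model uses the target-to-source table in the same way:

def ibm1_lexicon_score(tgt_words, src_words, p_ibm1, null_word='NULL'):
    # p((t, s)_n) = 1/(I+1)^J * prod_j sum_{i=0..I} p_ibm1(t_i | s_j),
    # where index i = 0 stands for the NULL word.
    I, J = len(tgt_words), len(src_words)
    score = 1.0 / (I + 1) ** J
    for s in src_words:                       # product over source words j
        score *= sum(p_ibm1(t, s) for t in [null_word] + list(tgt_words))
    return score
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},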
{
"text": "\u2022 The last feature in common we consider corresponds to a word penalty model. This function introduces a sentence length penalization in order to compensate the system preference for short output sentences. This penalization depends on the total number of words contained in the partial translation hypothesis, and it is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wp(T k ) = exp(number of words in T k )",
"eq_num": "(6)"
}
],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "where, again, T k refers to the partial translation hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.3."
},
{
"text": "In addition to the features from the section above, we use two more functions which get better scores in the phrase-based translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase features",
"sec_num": "2.4."
},
{
"text": "\u2022 As translation model in the phrase-based system we use the conditional probability. Note that no smoothing is performed, which may cause an overestimation of the probability of rare phrases. This is specially harmful given a bilingual phrase where the source part has a big frecuency of appearence but the target part appears rarely. That is why we use the posterior phrase probability, we compute again the relative frequency but replacing the count of the target phrase by the count of the source phrase [18] .",
"cite_spans": [
{
"start": 508,
"end": 512,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase features",
"sec_num": "2.4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (e|f ) = N (f, e) N (f )",
"eq_num": "(7)"
}
],
"section": "Phrase features",
"sec_num": "2.4."
},
{
"text": "where N'(f,e) means the number of times the phrase e is translated by f. If a phrase f has N > 1 possible translations, then each one contributes as 1/N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase features",
"sec_num": "2.4."
},
{
"text": "Adding this feature function we reduce the number of cases in which the overall probability is overestimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase features",
"sec_num": "2.4."
},
{
"text": "\u2022 Finally, the last feature is the widely used phrase penalty [12] which is a constant cost per produced phrase. Here, a negative weight, which means reducing the costs per phrase, results in a preference for adding phrases. Alternatively, by using a positive scaling factors, the system will favor less phrases.",
"cite_spans": [
{
"start": 62,
"end": 66,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase features",
"sec_num": "2.4."
},
{
"text": "In SMT decoding, translated sentences are built incrementally from left to right in form of hypotheses, allowing for discontinuities in the source sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "3."
},
{
"text": "A Beam search algorithm with pruning is used to find the optimal path. The search is performed by building partial translations (hypotheses), which are stored in several lists. These lists are pruned out according to the accumulated probabilities of their hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "3."
},
{
"text": "Worst hypotheses with minor probabilities are discarded to make the search feasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "3."
},
{
"text": "Hypotheses are stored in different lists depending on the number of source and target words already covered. Figure 2 shows an example of the search graph structure. It can be decomposed into three levels:",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Search Graph Structure",
"sec_num": "3.1."
},
{
"text": "\u2022 Hypotheses. In figure 2, represented using '*'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Graph Structure",
"sec_num": "3.1."
},
{
"text": "\u2022 Lists. In figure 2 , the boxes with a tag corresponding to its covering vector. Every list contains an ordered set of hypotheses (all the hypotheses in a list have translated the same words of the source sentence).",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Search Graph Structure",
"sec_num": "3.1."
},
{
"text": "\u2022 Groups (of lists). In figure 2, delimited using dotted lines. Every group contains an ordered set of lists, corresponding to the lists of hypotheses covering the same number of source words (to order the lists in one group the cost of their best hypothesis is used). When the search is restricted to monotonous translations, only one list is allowed on each group of lists. The search loops expanding available hypotheses. The expansion proceeds incrementally starting in the group of lists covering 1 source word, ending with the group of lists covering J \u2212 1 source words (J is the size in words of the source sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Graph Structure",
"sec_num": "3.1."
},
{
"text": "See [9] for further details.",
"cite_spans": [
{
"start": 4,
"end": 7,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search Graph Structure",
"sec_num": "3.1."
},
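{
"text": "The organisation into hypotheses, lists and groups can be sketched as follows (illustrative only; the hypothesis record and its fields are assumptions, not MARIE's actual data structures):

from collections import defaultdict, namedtuple

# Hypothetical hypothesis record: coverage = frozenset of already translated
# source positions, score = accumulated log-linear score.
Hyp = namedtuple('Hyp', 'coverage score target')

def organize_search_graph(hypotheses):
    lists = defaultdict(list)         # one list per coverage vector
    for h in hypotheses:
        lists[h.coverage].append(h)
    groups = defaultdict(dict)        # one group per number of covered words
    for cov, hyps in lists.items():
        hyps.sort(key=lambda h: h.score, reverse=True)
        groups[len(cov)][cov] = hyps
    # expansion then proceeds from the group covering 1 source word up to J - 1
    return groups
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Graph Structure",
"sec_num": "3.1."
},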
{
"text": "The search graph structure is thought to perform very accurate comparisons (only hypotheses covering the same source words are compared) in order to allow for very high pruning levels. Despite of this, the number of lists when allowing for reordering grows exponentially (an upper bound is 2 J , where J is the number of words of the source sentence) and forces the search to be further pruned out for efficiency reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning Hypotheses",
"sec_num": "3.2."
},
{
"text": "Only the best N hypotheses are kept on each list (histogram pruning, b), with best scores within a margin, given the best score in the list (threshold pruning, t). Not just the lists, but the groups are pruned out, following the same pruning strategies (B and T ). To score a list, the cost of its best scored hypothesis is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning Hypotheses",
"sec_num": "3.2."
},
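{
"text": "Both pruning strategies on a single list can be sketched as follows (scores are assumed to be log-probabilities, so the threshold t is an additive margin; groups are pruned analogously with B and T, scoring each list by its best hypothesis):

def prune(hyps, b, t):
    # histogram pruning: keep at most the b best-scored hypotheses
    kept = sorted(hyps, key=lambda h: h.score, reverse=True)[:b]
    # threshold pruning: drop hypotheses more than t below the best score
    if kept:
        best = kept[0].score
        kept = [h for h in kept if h.score >= best - t]
    return kept
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning Hypotheses",
"sec_num": "3.2."
},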
{
"text": "When allowing for reordering, the pruning strategies are not enough to reduce the combinatory explosion without an important lost in translation performance. With this purpose, two reordering strategies are used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering capabilities",
"sec_num": "3.3."
},
{
"text": "\u2022 A distortion limit (m). A source word (phrase or tuple)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering capabilities",
"sec_num": "3.3."
},
{
"text": "\u2022 A reorderings limit (j). Any translation path is only allowed to perform j reordering jumps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering capabilities",
"sec_num": "3.3."
},
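{
"text": "A minimal check of the two constraints (an illustration of the idea only; the exact bookkeeping in MARIE may differ): first_uncovered is the leftmost source position not yet translated and next_start is the first source position of the unit about to be translated.

def reordering_allowed(first_uncovered, next_start, jumps_used, m, j):
    if next_start == first_uncovered:
        return True                    # monotone extension: no jump needed
    distance = next_start - first_uncovered
    # distortion limit m (measured in words) and reordering-jumps limit j
    return distance <= m and jumps_used + 1 <= j
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering capabilities",
"sec_num": "3.3."
},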
{
"text": "The use of the reordering strategies suppose a necessary trade-off between quality and efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering capabilities",
"sec_num": "3.3."
},
{
"text": "Experiments have been carried out using two databases: the EPPS database (Spanish-English) and the BTEC [20] database (Chinese-English).",
"cite_spans": [
{
"start": 104,
"end": 108,
"text": "[20]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "4.1."
},
{
"text": "The BTEC is a small corpus translation task, used in the IWSLT'04 spoken language campaign 1 . Table 1 shows the main statistics of the used data, namely number of sentences, words, vocabulary, and mean sentence lengths for each language.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "4.1."
},
{
"text": "The EPPS data set corresponds to the parliamentary session transcriptions of the European Parliament and is currently available at the Parliament's website (http://www.euro parl.eu.int/). In the case of the results presented here, we have used the version of the EPPS data that was made available by RWTH Aachen University through the TC-STAR consortium 2 Table 2 , presents some basic statistics of training, development and test data sets for each considered language: English (en) and Spanish (es). More specifically, the statistics presented in table 2 are, the total number of sentences, the total number of words and the vocabulary size (or total number of distinct words).",
"cite_spans": [
{
"start": 354,
"end": 355,
"text": "2",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 356,
"end": 363,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "4.1."
},
{
"text": "We used GIZA++ to perform the word alignment of the whole training corpus, and refined the links by the union of both alignment directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Units",
"sec_num": "4.2."
},
{
"text": "In the phrase-based model, we extract phrases up to length 4 and, in addition, those phrases up to length 7 which could not be generated by smaller phrases. This lengths are applied to the BTEC corpus. In the case of the EPPS task, we extract phrases up to length 3, without any extension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Units",
"sec_num": "4.2."
},
{
"text": "The regular tuples extraction method was used in the monotone configuration of the ngram-model, while the un- folded extraction method was used in the reordering configuration. Figure 3 shows how tuples and phrases vocabulary sets are related. In addition, an extended tuples vocabulary set (tuples') is shown, which is built by concatenation of tuples. Consecutive tuples of each training sentence are concatenated building a new set of bilingual units. Pruned tuples in the sentence sequence are not taken into account to build the extended set.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Units",
"sec_num": "4.2."
},
{
"text": "This extended set approaches the tuples set to the phrases set. Also it allows us to show how many phrases units can be reached with the tuples units. In table 3 are given the vocabulary sizes of these sets for the BTEC corpus using the unfolding method to extract tuples. In principle, all tuples should be included as phrases. However, there are longer tuples that have been pruned out as phrases. There are also some tuples extracted from word-to-null alignments (39 word-to-null tuples). Table 4 shows the number of Ngrams used by the decoder to translate the test file. For the phrase-based system, only 1grams (phrases) are used. The difference in number of loaded units implies a substantial impact in efficiency (in terms of computing time and memory size).",
"cite_spans": [],
"ref_spans": [
{
"start": 492,
"end": 499,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Units",
"sec_num": "4.2."
},
{
"text": "In this section we introduce the experiments that have been carried out in order to evaluate both approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "All the computation time and memory size results are approximated. The experiments were performed on a Pentium System 1gr 2gr 3gr 4gr PB zh2en 59,610 ---NB zh2en 8,999 23,335 3,429 1,999 PB es2en 7,017,894 ---NB es2en 335,299 1,426,582 767,827 - IV (Xeon 3.06GHz), with 4Gb of RAM memory.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 233,
"text": "Pentium System 1gr 2gr 3gr 4gr PB zh2en 59,610 ---NB zh2en 8,999 23,335 3,429 1,999 PB es2en 7,017,894 ---NB es2en",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "All the experiments reported in this paper have been performed setting the order of the target language model to N = 3. The order of the bilingual language model used for the BTEC task was N = 4, for the EPPS task was N = 3. When applying non-monotone decoding, the reordering constraints were set to m = 5 and j = 3 (in both Ngram-based and phrase-based approaches). Regarding the pruning adjustments, b and t are set to 10 units for the BTEC task and to 50 for the EPPS task, when applying reordering the B and T pruning values are also set to 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "The evaluation in the BTEC task has been carried out using references and translations in lowercase and without punctuation marks. We applied the SIMPLEX algorithm to optimize the model weights (on the development set) [21] . Results in the test set with 16 references are reported. Table 5 shows the number of 1-grams, 2-grams, 3-grams and 4-grams used when translating the test file using the best configuration of each system (allowing for reordering).",
"cite_spans": [
{
"start": 219,
"end": 223,
"text": "[21]",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "The experiments in table 6 correspond to the Chinese to English translation task under the phrase-based SMT system. Results corresponding to the same translation task, under the ngram-based SMT system, are shown in table 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "The Spanish to English translation task results under the phrase-based SMT system, are shown in table 8. Results corresponding to the same translation task, under the ngrambased SMT system, are shown in table 9. The regular tuples extraction method was used in all cases as the translation was always performed under monotone conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "As can be seen, very similar results are achieved by both systems, when translating with the baseline and extended configurations. Thresholds for confidence margins are \u00b11.6 and \u00b10.6 (respectively for the Chinese-to-English and Spanish-to-English tasks given the number of words in the test sets for the mWER measure). In both cases the additional models (either the IBM1 lexicon model or the posterior probability model) seem to be used by the corresponding systems as a way to refine the translation probabilities. Examples of these situations are the overestimation problem introduced in previous sections for the phrase-based approach, and the apparition of bad tuples following incorrect word-toword alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3."
},
{
"text": "In this paper we have performed a comparison of two stateof-the-art statistical machine translation approaches, which only differ in the modeling of the translation context. The comparison was made as fair as possible, in terms of using the same training/development/test corpora, word-to-word alignment, decoder and additional shared models (ibm1, word penalty, target LM and reordering model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "The comparison has been performed on two different translation tasks (in terms of reordering needs and related to the corpus size). Similar accuracy results in all tasks are reached for the baseline configurations. When upgrading the systems with additional features, slight differences appear. Although improvements added by each feature depends on the task and system, similar performances are reached in the best system's configurations. Under reordering conditions, the ngram-based system seems to take advantage of the unfolding method applied in training, outperforming the phrase-based system. However, last results obtained for the IWSLT'05 show an opposite behaviour of both systems, see [22] and [23] .",
"cite_spans": [
{
"start": 697,
"end": 701,
"text": "[22]",
"ref_id": "BIBREF21"
},
{
"start": 706,
"end": 710,
"text": "[23]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "We can conclude that both approaches have a similar performance in terms of translation quality. The slight differences seen in the experiments are related to how the systems take advantage of each feature model and to the current system's implementation. In terms of the memory size and computation time, the ngram-based system has obtained consistently better results. This indicates how even though using a smaller vocabulary of bilingual units, it has been more efficiently built and managed. The last characteristic becomes of great importance when working with large databases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "This work has been partially funded by the European Union under the integrated project TC-STAR -Technology and Corpora for Speech to Speech Translation -(IST-2002-FP6-506738, http://www.tc-star.org), the Spanish government, under grant TIC-2002-04447-C02 (Aliado Project), Universitat Polit\u00e8cnica de Catalunya and the TALP Research Center under UPC-RECERCA and TALP-UPC-RECERCA grants. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6."
},
{
"text": "www.slt.atr.jp/IWSLT2004 2 TC-STAR (Technology and Corpora for Speech to Speech Translation) is an European Community project funded by the Sixth Framework Programme. More information can be found at the consortium website:http: //www.tc-star.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The mathematics of statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, S. Della Pietra, V. Della Pietra, and R. Mer- cer, \"The mathematics of statistical machine transla- tion,\" Computational Linguistics, vol. 19, no. 2, pp. 263-311, 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Phrase-based statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "KI -2002: Advances in artificial intelligence",
"volume": "2479",
"issue": "",
"pages": "18--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens, F. Och, and H. Ney, \"Phrase-based statistical machine translation,\" in KI -2002: Advances in arti- ficial intelligence, M. Jarke, J. Koehler, and G. Lake- meyer, Eds. Springer Verlag, September 2002, vol. LNAI 2479, pp. 18-32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Yamada and K. Knight, \"A syntax-based statistical translation model,\" 39th Annual Meeting of the Associ- ation for Computational Linguistics, pp. 523-530, July 2001.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Conf. on Empirical Methods in Natural Language Processing, EMNLP'02",
"volume": "",
"issue": "",
"pages": "133--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and W. Wong, \"A phrase-based, joint prob- ability model for statistical machine translation,\" Proc. of the Conf. on Empirical Methods in Natural Language Processing, EMNLP'02, pp. 133-139, July 2002.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An\u00e1lisis de las relaciones cruzadas en el alineado estad\u00edstico para la traducci\u00f3n autom\u00e1tica",
"authors": [
{
"first": "A",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
}
],
"year": 2002,
"venue": "II Jornadas en Tecnolog\u00eda del Habla",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. de Gispert and J. Mari\u00f1o, \"An\u00e1lisis de las relaciones cruzadas en el alineado estad\u00edstico para la traducci\u00f3n autom\u00e1tica,\" II Jornadas en Tecnolog\u00eda del Habla, De- cember 2002.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Finite-statebased and phrase-based statistical machine translation",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "De Gispert",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the 8th Int. Conf. on Spoken Language Processing, ICSLP'04",
"volume": "",
"issue": "",
"pages": "37--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Crego, J. Mari\u00f1o, and A. de Gispert, \"Finite-state- based and phrase-based statistical machine translation,\" Proc. of the 8th Int. Conf. on Spoken Language Process- ing, ICSLP'04, pp. 37-40, October 2004.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reordered search and tuple unfolding for ngram-based smt",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the MT Summit X",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Crego, J. Mari\u00f1o, and A. Gispert, \"Reordered search and tuple unfolding for ngram-based smt,\" Proc. of the MT Summit X, September 2005.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Novel reordering approaches in phrase-based statistical machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kanthak",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond",
"volume": "",
"issue": "",
"pages": "167--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kanthak, D. Vilar, E. Matusov, R. Zens, and H. Ney, \"Novel reordering approaches in phrase-based statisti- cal machine translation,\" Proceedings of the ACL Work- shop on Building and Using Parallel Texts: Data- Driven Machine Translation and Beyond, pp. 167-174, June 2005.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An ngrambased statistical machine translation decoder",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the 9th European Conference on Speech Communication and Technology, Interspeech'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Crego, J. Mari\u00f1o, and A. Gispert, \"An ngram- based statistical machine translation decoder,\" Proc. of the 9th European Conference on Speech Communica- tion and Technology, Interspeech'05, September 2005.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney, \"Discriminative training and max- imum entropy models for statistical machine transla- tion,\" 40th Annual Meeting of the Association for Com- putational Linguistics, pp. 295-302, July 2002.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra, \"A maxi- mum entropy approach to natural language processing,\" Computational Linguistics, vol. 22, no. 1, pp. 39-72, March 1996.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improvements in phrase-based statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens, F. Och, and H. Ney, \"Improvements in phrase-based statistical machine translation,\" Proc. of the Human Language Technology Conference, HLT- NAACL'2004, pp. 257-264, May 2004.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney, \"The alignment template approach to statistical machine translation,\" Computational Lin- guistics, vol. 30, no. 4, pp. 417-449, December 2004.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving the phrase-based statistical translation by modifying phrase extraction and including new features",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. R. Costa-juss\u00e0 and J. Fonollosa, \"Improving the phrase-based statistical translation by modifying phrase extraction and including new features,\" Proceedings of the ACL Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, June 2005.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Scaling phrase-based statistical machine translation to larger corpora and longer phrases",
"authors": [
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Scroeder",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Callison-Burch, C. Bannard, and J. Scroeder, \"Scal- ing phrase-based statistical machine translation to larger corpora and longer phrases,\" ACL05, June 2005.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical phrasebased translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F. Och, and D. Marcu, \"Statistical phrase- based translation,\" Proc. of the Human Language Tech- nology Conference, HLT-NAACL'2003, May 2003.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Srilm -an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 7th Int. Conf. on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, \"Srilm -an extensible language modeling toolkit,\" Proc. of the 7th Int. Conf. on Spoken Language Processing, ICSLP'02, September 2002.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A smorgasbord of features for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Eng",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "161--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Ya- mada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev, \"A smorgasbord of features for statistical machine translation,\" Proc. of the Human Language Technology Conference, HLT- NAACL'2004, pp. 161-168, May 2004.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Giza++ software",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och, \"Giza++ software. http://www- i6.informatik.rwth-aachen.de/\u02dcoch/ soft- ware/giza++.html,\" RWTH Aachen University, Tech. Rep., 2003.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Toward a broad-coverage bilingual curpus for speech translation of travel conversations in the real world",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "147--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, \"Toward a broad-coverage bilingual cur- pus for speech translation of travel conversations in the real world,\" LREC 2002, pp. 147-152, May 2002.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A simplex method for function minimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nelder",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mead",
"suffix": ""
}
],
"year": 1965,
"venue": "The Computer Journal",
"volume": "7",
"issue": "",
"pages": "308--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nelder and R. Mead, \"A simplex method for function minimization,\" The Computer Journal, vol. 7, pp. 308- 313, 1965.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Talp: The upc tuple-based smt system",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the Int. Workshop on Spoken Language Translation, IWSLT'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Crego, J. Mari\u00f1o, and A. Gispert, \"Talp: The upc tuple-based smt system,\" Proc. of the Int. Workshop on Spoken Language Translation, IWSLT'05, October 2005.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tuning a phrasebased statistical translation system for the iwslt 2005 chinese to english and arabic to english tasks",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. R. Costa-juss\u00e0 and J. Fonollosa, \"Tuning a phrase- based statistical translation system for the iwslt 2005 chinese to english and arabic to english tasks,\" IWSLT05, October 2005.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Different bilingual units (tuples) are extracted using the extract-tuples and extract-unfold-tuples methods."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Words are consecutive along both sides of the bilingual phrase, No word on either side of the phrase is aligned to a word out of the phrase."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Search graph corresponding to a source sentence with four words. Details of constraints are given in following sections."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Phrases and tuples vocabulary sets."
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Corpus: Basic statistics for the considered</td></tr><tr><td>training, The Development data set and the Test data set have</td></tr><tr><td>2 references, (M and k stands for millions and thousands, respectively)</td></tr><tr><td>September 2004, the development data used included parlia-mentary session transcriptions from October 21st until Octo-ber 28th, 2004, and the test data from November 15th until November 18th, 2004.</td></tr></table>",
"text": "EuroParl",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>each set (rows), vocabulary, number of units</td></tr><tr><td>extracted from the corpus and intersection with the phrases</td></tr><tr><td>vocabulary set are shown (for the Chinese to English trans-lation task). The unfolded tuples are used to build the tuple</td></tr><tr><td>sets.</td></tr></table>",
"text": "For",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>System</td><td>1gr</td><td>2gr</td><td>3gr</td><td>4gr</td></tr><tr><td>PB zh2en</td><td>2,518</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"3\">NB zh2en 1,653 1,241</td><td>284</td><td>89</td></tr><tr><td colspan=\"2\">PB es2en 15,619</td><td>-</td><td>-</td><td>-</td></tr><tr><td>NB es2en</td><td colspan=\"3\">2,988 8,490 9,333</td><td>-</td></tr></table>",
"text": "Number of N grams (translation model) loaded by the decoder to translate the test file. PB and NB stands for phrase-based and ngram-based, the first two rows correspond to the Chinese to English task, while the last two rows are related to the Spanish to English task.",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>of N grams used by the decoder when trans-</td></tr><tr><td>lating the test file. PB and NB stands for phrase-based and</td></tr><tr><td>ngram-based,the first two rows correspond to the Chinese to English task, while the last two rows are related to the Span-</td></tr><tr><td>ish to English task.</td></tr></table>",
"text": "Number",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Ngram-based</td><td colspan=\"4\">mWER BLEU TIME (sec) SIZE (Mb)</td></tr><tr><td>Baseline</td><td>49.68</td><td>35,41</td><td>17</td><td>1.2</td></tr><tr><td>Baseline + IBM1</td><td>48.42</td><td>35.75</td><td>21</td><td>1.4</td></tr><tr><td>Baseline + IBM1 + Reord.</td><td>45.30</td><td>41.66</td><td>225</td><td>1.6</td></tr></table>",
"text": "Results for the Chinese to English translation task using the phrase-based translation model and different features. The baseline uses translation model, language model, word penalty and phrase penalty. The IBM1 is used in both directions. The last row shows the best system and it includes reordering.",
"html": null
},
"TABREF8": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Phrase-based</td><td colspan=\"4\">mWER BLEU TIME (sec) SIZE (Mb)</td></tr><tr><td>Baseline</td><td>39.35</td><td>48.84</td><td>900</td><td>1,180</td></tr><tr><td>Baseline + P(e|f) + IBM1</td><td>35.10</td><td>54.19</td><td>1084</td><td>1,640</td></tr></table>",
"text": "Results for the Chinese to English translation task using the ngram-based translation model and different features. The baseline configuration uses translation model, language model and word penalty. The IBM1 is used in both directions. The last row shows the best system and it includes reordering",
"html": null
},
"TABREF9": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Ngram-based</td><td colspan=\"4\">mWER BLEU TIME (sec) SIZE (Mb)</td></tr><tr><td>Baseline</td><td>39.61</td><td>48.49</td><td>641</td><td>580</td></tr><tr><td>Baseline + IBM1</td><td>34.86</td><td>54.38</td><td>801</td><td>600</td></tr></table>",
"text": "Results for the Spanish to English translation task using the phrase translation model and different features. The baseline uses translation model, language model, word penalty and phrase penalty. The IBM1 model is used in both directions.",
"html": null
},
"TABREF10": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Results for the Spanish to English translation task using the phrase translation model and different features. The IBM1 is used in both directions. The baseline uses translation model, language model and word penalty. The IBM1 model is used in both directions.",
"html": null
}
}
}
}