{
"paper_id": "I13-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:15:26.217678Z"
},
"title": "Ensemble Triangulation for Statistical Machine Translation *",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Razmara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Simon Fraser University Burnaby",
"location": {
"region": "BC",
"country": "Canada"
}
},
"email": "razmara@sfu.ca"
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Simon Fraser University Burnaby",
"location": {
"region": "BC",
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "State-of-the-art statistical machine translation systems rely heavily on training data, and insufficient training data usually results in poor translation quality. One solution to alleviate this problem is triangulation. Triangulation uses a third language as a pivot through which another source-target translation system can be built. In this paper, we dynamically create multiple such triangulated systems and combine them using a novel approach called ensemble decoding. Experimental results of this approach show significant improvements in the BLEU score over the direct source-target system. Our approach also outperforms a strong linear mixture baseline.",
"pdf_parse": {
"paper_id": "I13-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "State-of-the-art statistical machine translation systems rely heavily on training data, and insufficient training data usually results in poor translation quality. One solution to alleviate this problem is triangulation. Triangulation uses a third language as a pivot through which another source-target translation system can be built. In this paper, we dynamically create multiple such triangulated systems and combine them using a novel approach called ensemble decoding. Experimental results of this approach show significant improvements in the BLEU score over the direct source-target system. Our approach also outperforms a strong linear mixture baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The objective of current statistical machine translation (SMT) systems is to build cheap and rapid corpus-based SMT systems without involving human translation expertise. Such SMT systems rely heavily on their training data. State-of-the-art SMT systems automatically extract translation rules (e.g. phrase pairs), learn segmentation models, re-ordering models, etc., and find tuning weights solely from data, and hence they rely heavily on high-quality training data. There are many language pairs for which there is no parallel data, or the available data is not sufficiently large to build a reliable SMT system. For example, there is no Chinese-Farsi parallel text, although there exists sufficient parallel data between each of these two languages and English. For SMT, an important research direction is to improve the quality of translation when there is no parallel data between a pair of languages, or the available data is insufficient or of poor quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One approach that has been recently proposed is triangulation. Triangulation is the process of translating from a source language to a target language via an intermediate language (aka pivot, or bridge). This is very useful specifically for low-resource languages, as SMT systems built using small parallel corpora perform poorly due to data sparsity. In addition, ambiguities in translating from one language into another may disappear if a translation into some other language is available.",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(aka pivot, or bridge)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One obvious benefit of triangulation is to increase the coverage of the model on the input text. In other words, we can reduce the number of out-of-vocabulary words (OOVs), which are a major cause of poor quality translations, using other paths to the target language. This can be especially helpful when the model is built using a small amount of parallel data. Figure 1 shows how triangulation can be useful in reducing the number of OOVs when translating from French to English through three pivot languages: Spanish (es), German (de) and Italian (it). The solid lines show the number of OOVs for a direct MT system with regard to a multi-language parallel test set (Section 6.2 contains the details about the data sets) and the dotted lines show the OOVs in the triangulated (src pvt tgt) systems. The number of OOVs on triangulated paths can never be less than the first edge (i.e. src pvt) and it is usually higher than the second edge (i.e. pvt tgt) as well. Thus, the choice of intermediate language is very important in triangulation.",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 371,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
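The OOV accounting described above can be made concrete with a small sketch. The data layout below (a `src_pvt` dictionary of pivot translations and a `pvt_tgt_vocab` coverage set) is hypothetical, not the paper's actual implementation; it only illustrates why a triangulated path can never have fewer OOVs than its first edge:

```python
# Count OOVs for a direct system versus a triangulated (src->pvt->tgt)
# path. A source word survives triangulation only if at least one of
# its pivot translations is itself covered by the pvt->tgt table.
# All names here are illustrative.

def direct_oovs(test_words, vocab):
    """OOVs of a direct system: words absent from its source vocabulary."""
    return sum(1 for w in test_words if w not in vocab)

def triangulated_oovs(test_words, src_pvt, pvt_tgt_vocab):
    """OOVs of a triangulated path: uncovered on either edge counts as OOV."""
    oov = 0
    for w in test_words:
        pivots = src_pvt.get(w, [])
        if not any(p in pvt_tgt_vocab for p in pivots):
            oov += 1
    return oov
```

Because a word covered on the first edge may still map only to pivot words the second edge cannot translate, `triangulated_oovs` is always at least `direct_oovs` computed over the src-pvt vocabulary.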
{
"text": "Figure 1 also shows how combining multiple triangulated systems can reduce this number from 2600 (16%) OOVs to 1536 (9%) OOVs. Thus, combining triangulated systems with the original src tgt system is a good idea. When combining multiple systems, the upper bound on the number of OOVs is the minimum among all OOVs in the different triangulations. These OOV rates provide useful hints, among other clues, as to which pivot languages will be more useful. In Figure 1 , we can expect Italian (it) to help more than Spanish (es) and both to help more than German (de) in translation from French (fr) to English (en), which we confirmed in our experimental results (Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 456,
"end": 464,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 660,
"end": 669,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to providing translations for otherwise untranslatable phrases, triangulation can find new translations for current phrases. The conditional distributions used for the translation model have been estimated on small amounts of data and hence are not robust due to data sparseness. Using triangulation, these distributions are smoothed and become more reliable as a result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For each pivot language for which there exists parallel data with the source and the target language, we can create a src tgt system by bridging through the pivot language. If there are a number of such pivot languages with corresponding data, we can use mixture approaches to combine them in order to build a stronger model. We propose to apply the ensemble decoding approach in this triangulation scenario. Ensemble decoding allows us to combine hypotheses from different models dynamically in the decoder. We experimented with 12 different language pairs and 3 pivot languages for each of them. Experimental results of this approach show significant improvements in BLEU and METEOR scores over the direct source-target system in all 12 language pairs. We also compare to a strong linear mixture baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Use of pivot languages in machine translation dates back to the early days of machine translation. Boitet (1988) discusses the choice of pivot languages, natural or artificial (e.g. interlingua), in machine translation. Schubert (1988) argues that a proper choice for an intermediate language for high-quality machine translation is a natural language due to the inherent lack of expressiveness in artificial languages. Previous work in applying pivot languages in machine translation can be categorized into these divisions:",
"cite_spans": [
{
"start": 99,
"end": 112,
"text": "Boitet (1988)",
"ref_id": "BIBREF1"
},
{
"start": 220,
"end": 235,
"text": "Schubert (1988)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this approach, a src pvt translation system translates the source input into the pivot language and a second pvt tgt system takes the output of the previous system and translates it into the target language. Utiyama and Isahara (2007) used this approach to triangulate between Spanish, German and French through English. However, instead of using only the best translation, they took the n-best translations and translated them into the target language. MERT (Och, 2003) has been used to tune the weights for the new feature set, which consists of src pvt and pvt tgt feature functions. The highest scoring sentence in the target language is used as the final translation. They showed that using 15 hypotheses on the pvt side is generally superior to using only the one-best hypothesis.",
"cite_spans": [
{
"start": 211,
"end": 237,
"text": "Utiyama and Isahara (2007)",
"ref_id": "BIBREF20"
},
{
"start": 452,
"end": 463,
"text": "(Och, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Cascades",
"sec_num": "2.1"
},
{
"text": "Given a pvt tgt MT system, one can translate the pivot side of a src-pvt parallel corpus into the target language and create a noisy src-tgt parallel corpus. This can also be exploited in the other direction, meaning that a pvt src MT system can be used to translate the pivot side of a pvt-tgt bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Synthesis",
"sec_num": "2.2"
},
{
"text": "de Gispert and Marino (2006) , for example, translated the Spanish side of an English-Spanish bitext into Catalan using an available Spanish-Catalan SMT system. Then, they built an English-Catalan MT system by training on this new parallel corpus.",
"cite_spans": [
{
"start": 3,
"end": 28,
"text": "Gispert and Marino (2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Synthesis",
"sec_num": "2.2"
},
{
"text": "In this approach, instead of translating the input sentences from a source language to a pivot language and from that to a target language, triangulation is done on the phrase level by triangulating two phrase-tables: src pvt and pvt tgt:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Table Triangulation",
"sec_num": "2.3"
},
{
"text": "(\\bar{f}, \\bar{e}) \\in T_{FE} \\iff \\exists\\, \\bar{i} : (\\bar{f}, \\bar{i}) \\in T_{FI} \\wedge (\\bar{i}, \\bar{e}) \\in T_{IE}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Table Triangulation",
"sec_num": "2.3"
},
{
"text": "where $\\bar{f}$, $\\bar{i}$ and $\\bar{e}$ are phrases in the source F, pivot I and target E languages respectively, and T is a set representing a phrase-table. Utiyama and Isahara (2007) also experimented with phrase-table triangulation. They compared both triangulation approaches using Spanish, French and German as the source and target languages and English as the only pivot language. They showed that phrase-table triangulation is superior to MT system cascades, but neither of them outperformed the direct src tgt system.",
"cite_spans": [
{
"start": 132,
"end": 158,
"text": "Utiyama and Isahara (2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Table Triangulation",
"sec_num": "2.3"
},
{
"text": "The phrase-table triangulation approach with multiple pivot languages has also been investigated in several works (Cohn and Lapata, 2007; Wu and Wang, 2007). These triangulated phrase-tables are combined using linear and log-linear mixture models. They also successfully combined the mixed phrase-table with a src-tgt phrase-table to achieve a higher BLEU score. Bertoldi et al. (2008) formulated phrase triangulation in the decoder, where they also consider the phrase-segmentation model between src-pvt and the reordering model between src-tgt.",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "(Cohn and Lapata, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 137,
"end": 155,
"text": "Wu and Wang, 2007)",
"ref_id": "BIBREF23"
},
{
"start": 370,
"end": 392,
"text": "Bertoldi et al. (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Table Triangulation",
"sec_num": "2.3"
},
{
"text": "Besides machine translation, the use of pivot languages has found applications in other NLP areas. Gollins and Sanderson (2001) used a similar idea in cross-lingual information retrieval, where query terms were translated through multiple pivot languages into the target language and the translations were combined to reduce the error. Pivot languages have also been successfully used in inducing translation lexicons (Mann and Yarowsky, 2001) as well as word alignments for resource-poor languages (Kumar et al., 2007; Wang et al., 2006). Callison-Burch et al. (2006) used pivot languages to extract paraphrases for unknown words.",
"cite_spans": [
{
"start": 98,
"end": 126,
"text": "Gollins and Sanderson (2001)",
"ref_id": "BIBREF8"
},
{
"start": 413,
"end": 438,
"text": "(Mann and Yarowsky, 2001)",
"ref_id": "BIBREF13"
},
{
"start": 493,
"end": 513,
"text": "(Kumar et al., 2007;",
"ref_id": "BIBREF12"
},
{
"start": 514,
"end": 532,
"text": "Wang et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 535,
"end": 563,
"text": "Callison-Burch et al. (2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Table Triangulation",
"sec_num": "2.3"
},
{
"text": "In this paper, we compare our approach with two baselines. A simple baseline is the direct system between the source and target languages which is trained on the same amount of parallel data as the triangulated ones. In addition, we implemented a phrase-table triangulation method (Cohn and Lapata, 2007; Wu and Wang, 2007; Utiyama and Isahara, 2007) . This approach presents a probabilistic formulation for triangulation by marginalizing out the pivot phrases, and factorizing using the chain rule:",
"cite_spans": [
{
"start": 281,
"end": 304,
"text": "(Cohn and Lapata, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 305,
"end": 323,
"text": "Wu and Wang, 2007;",
"ref_id": "BIBREF23"
},
{
"start": 324,
"end": 350,
"text": "Utiyama and Isahara, 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "p(\\bar{e} \\mid \\bar{f}) = \\sum_{\\bar{i}} p(\\bar{e}, \\bar{i} \\mid \\bar{f}) = \\sum_{\\bar{i}} p(\\bar{e} \\mid \\bar{i}, \\bar{f})\\, p(\\bar{i} \\mid \\bar{f}) \\approx \\sum_{\\bar{i}} p(\\bar{e} \\mid \\bar{i})\\, p(\\bar{i} \\mid \\bar{f})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "where $\\bar{f}$, $\\bar{e}$ and $\\bar{i}$ are phrases in the source, target and intermediate languages respectively. In this equation, a conditional independence assumption has been made: the source phrase $\\bar{f}$ and the target phrase $\\bar{e}$ are assumed independent given their corresponding pivot phrase(s) $\\bar{i}$. The equation requires that all phrases in the src pvt direction must also appear in pvt tgt. All missing phrases are simply dropped from the final phrase-table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
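The marginalization above can be sketched with toy dictionaries standing in for the two component phrase-tables; the data layout is illustrative, not the actual phrase-table format:

```python
# Sketch of the Section 3 phrase-table triangulation: marginalize out
# pivot phrases via the chain-rule approximation
#   p(e|f) ~= sum_i p(e|i) p(i|f).
# src_pvt[f][i] = p(i|f); pvt_tgt[i][e] = p(e|i). Names are illustrative.

def triangulate(src_pvt, pvt_tgt):
    tri = {}
    for f, pivots in src_pvt.items():
        for i, p_i_f in pivots.items():
            # pivot phrases missing from the pvt->tgt table are dropped,
            # mirroring the requirement stated in the text
            for e, p_e_i in pvt_tgt.get(i, {}).items():
                tri.setdefault(f, {})
                tri[f][e] = tri[f].get(e, 0.0) + p_e_i * p_i_f
    return tri
```

Note the inner sum accumulates over all bridging pivot phrases, matching the baseline's sum formulation (the paper's dynamic variant in Section 5.1 instead keeps only the max-scoring bridge).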
{
"text": "Using this approach, a triangulated sourcetarget phrase-table is generated for each pivot language. Then, linear and log-linear mixture methods are used to combine these phrase-tables into a single phrase-table in order to be used in the decoder. We implemented the linear mixture approach, since linear mixtures often outperform log-linear ones (Cohn and Lapata, 2007) . We then compare the results of these baselines with our approach over multiple language pairs (Section 6.2). In linear mixture models, each feature in the mixture phrase-table is computed as a linear interpolation of corresponding features in the component phrase-tables using a weight vector \u03bb.",
"cite_spans": [
{
"start": 346,
"end": 369,
"text": "(Cohn and Lapata, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "p(\\bar{e} \\mid \\bar{f}) = \\sum_i \\lambda_i \\, p_i(\\bar{e} \\mid \\bar{f}) \\qquad p(\\bar{f} \\mid \\bar{e}) = \\sum_i \\lambda_i \\, p_i(\\bar{f} \\mid \\bar{e}) \\qquad \\forall i: \\lambda_i > 0, \\quad \\sum_i \\lambda_i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
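A minimal sketch of this linear interpolation, under the assumption that each phrase-table feature is represented as a dictionary mapping phrase pairs to probabilities (the layout is illustrative):

```python
# Linear interpolation of phrase-table features (mixture baseline):
# each mixed feature value is a lambda-weighted sum of the corresponding
# values in the component tables. Pairs absent from a component
# contribute probability 0.

def interpolate(tables, lambdas):
    """tables: list of dicts (src_phrase, tgt_phrase) -> p; lambdas sum to 1."""
    assert abs(sum(lambdas) - 1.0) < 1e-9
    mixed = {}
    for table, lam in zip(tables, lambdas):
        for pair, p in table.items():
            mixed[pair] = mixed.get(pair, 0.0) + lam * p
    return mixed
```

Applied twice, this reproduces the two-step combination described in the text: once with uniform weights over the triangulated tables, then once to interpolate the result with the direct table.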
{
"text": "Following Cohn and Lapata (2007) , we combined triangulated phrase-tables with uniform weights into a single phrase table and then interpolated it with the phrase-table of the direct system.",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "Cohn and Lapata (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3"
},
{
"text": "SMT log-linear models (Koehn, 2010) find the most likely target language output e given the source language input f using a vector of feature functions \u03c6:",
"cite_spans": [
{
"start": 22,
"end": 35,
"text": "(Koehn, 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "p(e \\mid f) \\propto \\exp\\left( \\mathbf{w} \\cdot \\boldsymbol{\\phi} \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "Ensemble decoding combines several models dynamically at decoding time. The scores are combined for each partial hypothesis using a user-defined mixture operation over component models. Ensemble decoding has been successfully applied to domain adaptation in SMT, where it performed better than approaches that pre-compute linear mixtures of different models. Several mixture operations were proposed, allowing the user to encode beliefs about the relative strengths of the component models. These mixture operations receive two or more probabilities and return the mixture probability $p(\\bar{e} \\mid \\bar{f})$ for each rule $\\bar{f} \\rightarrow \\bar{e}$ used in the decoder. The different options for these operations are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "p(e \\mid f) \\propto \\exp\\left( \\mathbf{w}_1 \\cdot \\boldsymbol{\\phi}_1 \\oplus \\mathbf{w}_2 \\cdot \\boldsymbol{\\phi}_2 \\oplus \\cdots \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "\u2022 Weighted Sum (wsum) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "p(\\bar{e} \\mid \\bar{f}) \\propto \\sum_{m=1}^{M} \\lambda_m \\exp\\left( \\mathbf{w}_m \\cdot \\boldsymbol{\\phi}_m \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "where m denotes the index of component models, M is the total number of them and \u03bb m is the weight for component m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "\u2022 Weighted Max (wmax) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "p(\\bar{e} \\mid \\bar{f}) \\propto \\max_m \\lambda_m \\exp\\left( \\mathbf{w}_m \\cdot \\boldsymbol{\\phi}_m \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "\u2022 Model Switching (Switch): for each cell, the model that has the highest weighted best-rule score wins:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "\\psi(\\bar{f}, n) = \\lambda_n \\max_{\\bar{e}} \\left( \\mathbf{w}_n \\cdot \\boldsymbol{\\phi}_n(\\bar{e}, \\bar{f}) \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "The probability of each phrase-pair $(\\bar{e}, \\bar{f})$ is then computed using an indicator $\\delta(\\bar{f}, m)$ that is 1 if model m is the winning model for $\\bar{f}$ and 0 otherwise:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "p(\\bar{e} \\mid \\bar{f}) = \\sum_{m=1}^{M} \\delta(\\bar{f}, m) \\, p_m(\\bar{e} \\mid \\bar{f})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
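As a concrete sketch, the three mixture operations can be written in log-space as follows. The function names are illustrative, and each component m is assumed to contribute a log-linear rule score s_m = w_m · φ_m together with an ensemble weight λ_m; this is a sketch of the operations' definitions, not the actual decoder code:

```python
import math

def wsum(scores, lambdas):
    # Weighted Sum: the (unnormalized) component probabilities are
    # linearly mixed, then mapped back to log-space.
    return math.log(sum(l * math.exp(s) for s, l in zip(scores, lambdas)))

def wmax(scores, lambdas):
    # Weighted Max: the single strongest weighted component decides.
    return math.log(max(l * math.exp(s) for s, l in zip(scores, lambdas)))

def switch(scores, lambdas):
    # Model Switching: pick the winner by the weighted score psi,
    # then use only that model's own score.
    winner = max(range(len(scores)),
                 key=lambda m: lambdas[m] * math.exp(scores[m]))
    return scores[winner]
```

The contrast is visible directly: `wsum` blends all components, `wmax` and `switch` commit to one, with `switch` discarding the weight from the final probability and using it only for selection.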
{
"text": "5 Our Approach",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Decoding",
"sec_num": "4"
},
{
"text": "Given a src pvt and a pvt tgt system which are independently trained and tuned on their corresponding parallel data, these two systems can be triangulated dynamically in the decoder. For each source phrase $\\bar{f}$, the decoder consults the src pvt system to get its translations on the pivot side $\\bar{i}$ along with their scores. Each of these pivot-side translations is then queried in the pvt tgt system to obtain its target-side translations with their corresponding scores. Finally, a $(\\bar{f}, \\bar{e})$ pair is constructed from each $(\\bar{f}, \\bar{i})$ and $(\\bar{i}, \\bar{e})$ pair, whose score is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Triangulation",
"sec_num": "5.1"
},
{
"text": "p_I(\\bar{f} \\mid \\bar{e}) \\propto \\max_{\\bar{i}} \\exp\\left( \\mathbf{w}_1 \\cdot \\boldsymbol{\\phi}_1^{FI}(\\bar{f}, \\bar{i}) + \\mathbf{w}_2 \\cdot \\boldsymbol{\\phi}_2^{IE}(\\bar{i}, \\bar{e}) \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Triangulation",
"sec_num": "5.1"
},
{
"text": "This method requires the language model score of the src pvt system. However, for simplicity we do not use pivot-side language models, and hence the score of the src pvt system does not include the language model and word penalty scores. In this formulation, for a given source and target phrase pair $(\\bar{f}, \\bar{e})$, if there are multiple bridging pivot phrases $\\bar{i}$, we only use the one that yields the highest score. This is in contrast with previous work, which takes the sum over all such pivot phrases (Cohn and Lapata, 2007; Utiyama and Isahara, 2007). We use max as it outperformed sum in our preliminary experiments.",
"cite_spans": [
{
"start": 499,
"end": 522,
"text": "(Cohn and Lapata, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 523,
"end": 549,
"text": "Utiyama and Isahara, 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Triangulation",
"sec_num": "5.1"
},
{
"text": "It is noteworthy that in computing the score for $p_I(\\bar{f} \\mid \\bar{e})$, the scores from src pvt and pvt tgt are added with equal weight. However, there is no reason why this should be the case: two different weights could be assigned to these two scores to emphasize one system relative to the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Triangulation",
"sec_num": "5.1"
},
{
"text": "A naive implementation of phrase triangulation in the decoder would require O(n^2) steps for each source sub-span, where n is the average translation fan-out (i.e. the number of possible translations) of each phrase. However, since the phrase candidates from both src pvt and pvt tgt are already sorted, we use a lazy algorithm that reduces the computational complexity to O(n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Triangulation",
"sec_num": "5.1"
},
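One way to realize such a lazy enumeration is a heap-based merge in the style of cube pruning. The sketch below assumes log-space scores and candidate lists sorted by descending score; all names are illustrative and not taken from the actual decoder:

```python
import heapq

def lazy_triangulate(src_pvt, pvt_tgt, k):
    """Lazily enumerate up to k best (target, score) pairs for one source phrase.

    src_pvt: list of (pivot, log_score) sorted by descending score.
    pvt_tgt: dict pivot -> list of (target, log_score), each sorted descending.
    """
    heap = []  # min-heap over negated combined scores
    for pi, (pvt, s1) in enumerate(src_pvt):
        cands = pvt_tgt.get(pvt)
        if cands:
            tgt, s2 = cands[0]
            heapq.heappush(heap, (-(s1 + s2), pi, 0, tgt))
    best = {}
    while heap and len(best) < k:
        neg, pi, ti, tgt = heapq.heappop(heap)
        if tgt not in best:
            # max over bridging pivots: the first (highest) score wins
            best[tgt] = -neg
        pvt, s1 = src_pvt[pi]
        cands = pvt_tgt[pvt]
        if ti + 1 < len(cands):
            nxt, s2 = cands[ti + 1]
            heapq.heappush(heap, (-(s1 + s2), pi, ti + 1, nxt))
    return sorted(best.items(), key=lambda kv: -kv[1])
```

Because each pop advances only one candidate list by one position, the work grows linearly with the number of emitted candidates rather than with the full cross-product of the two lists.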
{
"text": "If we can make use of multiple pivot languages, a system can be created on-the-fly for each pivot language by triangulation, and these systems can then be combined in the decoder using the ensemble decoding discussed in Section 4. Following previous work, these triangulated phrase-tables can also be combined with the direct system to produce an even stronger model. However, we do not combine them in two steps; instead, all triangulated systems and the direct one are combined in a single step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Triangulated Systems",
"sec_num": "5.2"
},
{
"text": "Ensemble decoding is aware of full model scores when it compares, ranks and prunes hypotheses. This includes the language model, word, phrase and glue rule penalty scores as well as standard phrase-table probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Triangulated Systems",
"sec_num": "5.2"
},
{
"text": "Since ensemble decoding combines the scores of common hypotheses across multiple systems rather than combining their feature values as in mixture models, it can be used to triangulate heterogeneous systems such as phrase-based, hierarchical phrase-based, and syntax-based with completely different feature types. Considering that ensemble decoding can be used in these diverse scenarios, it offers an attractive alternative to current phrase-table triangulation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Triangulated Systems",
"sec_num": "5.2"
},
{
"text": "Component weights control the contribution of each model in the ensemble. A tuning procedure should assign higher weights to the models that produce higher quality translations and lower weights to weak models in order to control their noise propagation in the ensemble. In the ensemble decoder, since we do not have explicit gradient information for the objective function, we use a direct optimizer for tuning. We used Condor (Vanden Berghen and Bersini, 2005) which is a publicly available toolkit based on Powell's algorithm.",
"cite_spans": [
{
"start": 436,
"end": 462,
"text": "Berghen and Bersini, 2005)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Component Weights",
"sec_num": "5.3"
},
{
"text": "The ensemble of three triangulated models and a direct one requires tuning in a 4-dimensional space, one dimension per system. If, on average, the tuner evaluates the decoder n times along each direction of the optimization space, there need to be n^4 ensemble decoder evaluations, which is very time consuming. Instead, we resorted to a simpler approach for tuning: each triangulated model is separately tuned against the direct model with a fixed weight (we used a weight of 1). In other words, three ensemble models are created, each from a single triangulated model plus the direct one. These ensembles are tuned separately and, once completed, their weights comprise the final tuned weights. Thus, the total number of ensemble evaluations reduces from O(n^4) to O(3n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Component Weights",
"sec_num": "5.3"
},
{
"text": "In addition to this significant complexity reduction, this method enables parallelism in tuning, since the three individual tuning branches can now be run independently. The final tuned weights are not necessarily a local optimum, and one can run further optimization steps around this point to reach even better solutions, which should lead to higher BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning Component Weights",
"sec_num": "5.3"
},
{
"text": "For our experiments, we used the Europarl corpus (v7) (Koehn, 2005) for training sets and ACL/WMT 2005 data for dev/test sets (2k sentence pairs), following Cohn and Lapata (2007). Our goal in this paper was to understand how multiple languages can help in triangulation, the improvement in coverage of the unseen data due to triangulation, and the importance of choosing the right languages as pivot languages. Thus, we needed to run experiments on a large number of language pairs, and for each language pair we wanted to work with many pivot languages. To this end, we created small sub-corpora from Europarl by sampling 10,000 sentence pairs and conducted our experiments on them. As we will show, using larger data than this would result in prohibitively large triangulated phrase-tables. Table 2 shows the number of words on both sides of the language pairs used in our corpora.",
"cite_spans": [
{
"start": 54,
"end": 67,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF10"
},
{
"start": 158,
"end": 180,
"text": "Cohn and Lapata (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 796,
"end": 803,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "The ensemble decoder is built on top of an in-house implementation of a Hiero-style MT system (Chiang, 2005) called Kriya (Sankaran et al., 2012), which obtains scores equal to or better than the state-of-the-art in phrase-based and hierarchical phrase-based translation over a wide variety of language pairs and data sets. (Table 1: Results of i) single-pivot triangulation; ii) baseline systems including direct systems and linear mixture of triangulated phrase-tables; iii) ensemble triangulation results based on different mixture operations. The mixture and ensemble methods are based on multi-pivot triangulation. These methods are built on 10k sentence-pair corpora.) It uses the following standard features: forward and backward relative-frequency and lexical TM probabilities; LM; word, phrase and glue-rule penalties. GIZA++ (Och and Ney, 2000) has been used for word alignment with a phrase length limit of 10. In both systems, feature weights were optimized using MERT (Och, 2003). We used the target sides of the Europarl corpus (2M sentences) to build 5-gram language models, smoothed using the Kneser-Ney method. We used SRILM (Stolcke, 2002) as the language model toolkit. Table 1 shows the BLEU scores when using two languages from {fr, en, es, de} as source and target, and the other two languages plus it as intermediate languages. The first group of numbers are BLEU scores for triangulated systems through the specified pivot language. For example, translating from de to es through en (i.e. de en es) obtains a 15.94% BLEU score. The second group shows the BLEU scores of the baseline systems, including the direct system between the source and target languages and the linear mixture of the three triangulated systems. The BLEU scores of ensemble decoding using different mixture operations are also shown. As the table shows, our approach outperforms the direct systems in all 12 language pairs, while the mixture model systems fail to improve over the direct system baseline for some of the language pairs. Our approach also outperforms the mixture models in most cases. Overall, ensemble decoding with wmax as the mixture operation performs best among the different systems and baselines. Figure 3 shows the average BLEU score of the direct system, mixture models and wmax on all 12 systems. On average, the wmax method obtains 0.33 BLEU points more than the mixture models. We also computed the Meteor scores (Denkowski and Lavie, 2011) for all systems; the results are summarized in Figure 4. As the figure illustrates, our ensemble decoding approach with wmax outperforms the mixture models in 11 of 12 language pairs based on Meteor scores. Figure 2 shows the phrase-table coverage of the test set for different language pairs. The coverage is defined as the percentage of unigrams in the source side of the test set for which the corresponding phrase-table has translations. The first set of bars shows the coverage of the direct systems and the second shows that of the combined triangulated systems for three pivot languages. Finally, the last set of bars indicates the coverage when the direct phrase-table is combined with the triangulated ones. In all language pairs, the combined triangulated phrase-tables have higher coverage than the direct phrase-tables. As expected, the coverage increases when these two phrase-tables are aggregated. The table below the figure shows the number of rules for each system and language pair after filtering based on the source side of the test set. This illustrates why running experiments on larger amounts of parallel data is prohibitive for hierarchical phrase-based models. (Figure 4: Meteor score difference between mixture models and direct systems, as well as the difference between the ensemble decoding approach with wmax and the direct systems.)",
"cite_spans": [
{
"start": 93,
"end": 107,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 115,
"end": 143,
"text": "Kriya (Sankaran et al., 2012",
"ref_id": null
},
{
"start": 816,
"end": 835,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF14"
},
{
"start": 960,
"end": 971,
"text": "(Och, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 1128,
"end": 1143,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF19"
},
{
"start": 2413,
"end": 2440,
"text": "(Denkowski and Lavie, 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 1",
"ref_id": null
},
{
"start": 1175,
"end": 1182,
"text": "Table 1",
"ref_id": null
},
{
"start": 2184,
"end": 2192,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2491,
"end": 2499,
"text": "Figure 4",
"ref_id": null
},
{
"start": 2652,
"end": 2660,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
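The unigram coverage statistic defined above is straightforward to compute. The following is a minimal Python sketch (function and variable names are ours, not from the paper's code); it treats coverage as the fraction of source-side unigram types that have at least one phrase-table entry:

```python
def unigram_coverage(test_source_sents, phrase_table_sources):
    """Fraction of source-side unigram types in the test set that
    have at least one entry in the phrase table."""
    seen, covered = set(), set()
    for sent in test_source_sents:
        for tok in sent.split():
            seen.add(tok)
            if tok in phrase_table_sources:
                covered.add(tok)
    return len(covered) / len(seen) if seen else 0.0

# Toy example: 3 of the 4 unigram types are covered.
table = {"das", "haus", "ist"}
print(unigram_coverage(["das haus ist klein"], table))  # -> 0.75
```

A token-based (rather than type-based) variant would count each occurrence instead of deduplicating; the paper's wording is compatible with either reading.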
{
"text": "L1-L2 | L1 tokens (K) | L2 tokens (K)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "Cohn and Lapata (2007) showed that the pivot language should be close to the source or the target language in order to be effective. For example, when translating between Romance languages (Italian, Spanish, etc.), the pivot language should also be a Romance language. In addition to those findings, based on the results presented in Table 1 , here are some observations for these five European languages:",
"cite_spans": [
{
"start": 5,
"end": 22,
"text": "and Lapata (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Choice of Pivot Language",
"sec_num": "6.3.1"
},
{
"text": "\u2022 When translating from or to de, en is the best pivot language;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Pivot Language",
"sec_num": "6.3.1"
},
{
"text": "\u2022 Generally de is not a suitable pivot language for any translation pair;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Pivot Language",
"sec_num": "6.3.1"
},
{
"text": "\u2022 When translating from en to any other language, fr is the best pivot;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Pivot Language",
"sec_num": "6.3.1"
},
{
"text": "\u2022 it is the best intermediate language when translating from fr or es to other languages, except when translating to de, for which en is the best pivot language (cf. the first observation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of Pivot Language",
"sec_num": "6.3.1"
},
{
"text": "In this paper, we introduced a novel approach to triangulation that performs phrase-table triangulation and model combination on the fly in the decoder. The ensemble decoder uses the full hypothesis score for triangulation and combination and is therefore able to mix hypotheses from heterogeneous systems. Another advantage of this method over the phrase-table triangulation approach is that it is applicable even when no parallel data between the source and target languages is available for tuning: we use the src-tgt tuning set only to optimize hyper-parameters, whereas phrase-table triangulation methods use it to learn the MT log-linear feature weights, for which a tuning set is much more essential. Empirical results also showed that this method with wmax outperforms the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Future work includes imposing restrictions on the generated triangulated rules in order to keep only those with strong support from the word alignments. By exploiting such constraints, we can experiment with larger parallel corpora. Specifically, a more natural experimental setup for triangulation, which we would like to try, is to use a small direct system with large src-pvt and pvt-tgt systems. This resembles the actual situation for resource-poor language pairs. We will also experiment with a larger number of pivot languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Currently, most research in this area focuses on triangulation over paths containing only one pivot language. We can also analyze our method when using more languages in the triangulation chain and see whether there is any gain in doing so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Finally, in current methods, all (f , \u012b) phrase pairs of the src-pvt system for which there exists no (\u012b, \u0113) pair in pvt-tgt are simply discarded. However, in most cases such \u012b phrases can be segmented into smaller phrases (or rules, for Hiero systems) that can then be triangulated individually. This segmentation is itself a decoding problem and requires an efficient algorithm to be practical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
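The triangulation step discussed above can be sketched as composing the src-pvt and pvt-tgt phrase tables through their shared pivot phrases, marginalizing over the pivot; pivot phrases with no pvt-tgt entry are exactly the ones that get discarded. This is a simplified illustration with hypothetical identifiers, not the paper's implementation:

```python
from collections import defaultdict

def triangulate(src_pvt, pvt_tgt):
    """Compose two phrase tables through shared pivot phrases.
    src_pvt: {src_phrase: {pvt_phrase: prob}}; pvt_tgt analogous.
    Pivot phrases absent from pvt_tgt are silently dropped (the case
    the paper proposes to rescue by segmenting them further)."""
    src_tgt = defaultdict(dict)
    for f, pivots in src_pvt.items():
        for i, p_fi in pivots.items():
            for e, p_ie in pvt_tgt.get(i, {}).items():
                # Marginalize over the pivot: p(e|f) += p(e|i) * p(i|f)
                src_tgt[f][e] = src_tgt[f].get(e, 0.0) + p_ie * p_fi
    return dict(src_tgt)

src_pvt = {"maison": {"house": 0.8, "home": 0.2}}
pvt_tgt = {"house": {"casa": 0.9}}  # "home" has no entry -> dropped
result = triangulate(src_pvt, pvt_tgt)  # p(casa|maison) = 0.8 * 0.9
```

In the toy example, the probability mass routed through "home" is lost, which is the coverage gap the segmentation idea targets.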
{
"text": "http://www.statmt.org/wpt05/mt-shared-task/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Phrase-based statistical machine translation with pivot languages",
"authors": [
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Barbaiani",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cattoni",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of IWSLT",
"volume": "",
"issue": "",
"pages": "143--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Bertoldi, M. Barbaiani, M. Federico, and R. Cattoni. 2008. Phrase-based statistical machine translation with pivot languages. Proceedings of IWSLT, pages 143-149.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pros and cons of the pivot and transfer approaches in multilingual machine translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Boitet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maxwell",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "93--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Boitet. 1988. Pros and cons of the pivot and transfer approaches in multilingual machine translation. Maxwell et al. (1988), pages 93-106.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved statistical machine translation using paraphrases",
"authors": [
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Callison-Burch, P. Koehn, and M. Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 17-24. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL '05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL '05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263-270, Morristown, NJ, USA. ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Machine translation by triangulation: Making effective use of multi-parallel corpora",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In Proceedings of the 45th",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Annual Meeting of the Association of Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "728--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association of Computational Linguistics, pages 728-735, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Catalan-english statistical machine translation without parallel corpus: bridging through spanish",
"authors": [
{
"first": "A",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Marino",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of 5th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "65--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. de Gispert and J.B. Marino. 2006. Catalan-English statistical machine translation without parallel corpus: bridging through Spanish. In Proc. of 5th International Conference on Language Resources and Evaluation (LREC), pages 65-68.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. In Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving cross language retrieval with triangulated translation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Gollins",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sanderson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "90--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Gollins and M. Sanderson. 2001. Improving cross language retrieval with triangulated translation. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 90-95. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The proper place of men and machines in language translation",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Translation",
"volume": "12",
"issue": "1/2",
"pages": "3--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Kay. 1997. The proper place of men and machines in language translation. Machine Translation, 12(1/2):3-23.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT summit",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving word alignment with bridge languages",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "42--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar, Franz Josef Och, and Wolfgang Macherey. 2007. Improving word alignment with bridge languages. In EMNLP-CoNLL, pages 42-50. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multipath translation lexicon induction via bridge languages",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the ACL, pages 440-447, Hong Kong, China, October.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41st Annual Meeting of the ACL, Sapporo, July. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mixing multiple translation models in statistical machine translation",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Razmara",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Baskaran Sankaran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2012,
"venue": "The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "1",
"issue": "",
"pages": "940--949",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Majid Razmara, George Foster, Baskaran Sankaran, and Anoop Sarkar. 2012. Mixing multiple translation models in statistical machine translation. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 1: Long Papers, pages 940-949. The Association for Computer Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Kriya -an end-to-end hierarchical phrase-based mt system",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Baskaran Sankaran",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Razmara",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2012,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "97",
"issue": "97",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baskaran Sankaran, Majid Razmara, and Anoop Sarkar. 2012. Kriya - an end-to-end hierarchical phrase-based MT system. The Prague Bulletin of Mathematical Linguistics, 97(97), April.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Implicitness as a guiding principle in machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "599--601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Schubert. 1988. Implicitness as a guiding principle in machine translation. In Proceedings of the 12th conference on Computational linguistics - Volume 2, pages 599-601. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing, pages 257-286.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A comparison of pivot methods for phrase-based statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NAACL-HLT",
"volume": "7",
"issue": "",
"pages": "484--491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Utiyama and H. Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In Proceedings of NAACL-HLT, volume 7, pages 484-491.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "CONDOR, a new parallel, constrained extension of powell's UOBYQA algorithm: Experimental results and comparison with the DFO algorithm",
"authors": [
{
"first": "Hugues",
"middle": [],
"last": "Frank Vanden Berghen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bersini",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Computational and Applied Mathematics",
"volume": "181",
"issue": "",
"pages": "157--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Vanden Berghen and Hugues Bersini. 2005. CONDOR, a new parallel, constrained extension of Powell's UOBYQA algorithm: Experimental results and comparison with the DFO algorithm. Journal of Computational and Applied Mathematics, 181:157-175, September.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Word alignment for languages with scarce resources using bilingual corpora of other language pairs",
"authors": [
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "874--881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Wang, H. Wu, and Z. Liu. 2006. Word alignment for languages with scarce resources using bilingual corpora of other language pairs. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 874-881. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Pivot language approach for phrase-based statistical machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2007,
"venue": "Machine Translation",
"volume": "21",
"issue": "3",
"pages": "165--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Wu and H. Wang. 2007. Pivot language approach for phrase-based statistical machine translation. Machine Translation, 21(3):165-181.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Number of OOVs when translating directly from fr to en (solid lines), triangulating through es, de or it individually (dotted lines), and when combining multiple triangulation systems with the direct system. OOV numbers are based on a multi-language parallel test set and the models are built on small corpora (10k sentence pairs), which are not multi-parallel.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Switching (Switch): Each cell in the CKY chart is populated only by rules from one of the models; the other models' rules are discarded. Each component model is considered an expert on different spans of the source. A binary indicator function \u03b4(f , m) picks a component model for each span. The criterion for choosing a model for each cell, \u03c8(f , n), is based on the max top score, i.e.",
"type_str": "figure",
"num": null
},
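The "max top score" switching criterion described in the caption above can be sketched in a few lines. This is a simplified illustration with hypothetical names, not the authors' decoder:

```python
def pick_model(span_rules_per_model):
    """span_rules_per_model: one list of rule scores per component
    model, each list holding the scores of that model's rules
    applicable to the current source span. Returns the index of the
    model whose single best rule scores highest (max top score);
    only that model's rules would populate the CKY cell."""
    best_m, best_score = None, float("-inf")
    for m, scores in enumerate(span_rules_per_model):
        if scores and max(scores) > best_score:
            best_m, best_score = m, max(scores)
    return best_m

print(pick_model([[0.2, 0.5], [0.7, 0.1], []]))  # -> 1
```

Models with no applicable rules for the span (the empty list) are simply never selected, which matches the expert-per-span intuition.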
"FIGREF2": {
"uris": null,
"text": "Figure 2: Coverage for i) the direct system; ii) the combined triangulated system with three pivot languages; and iii) the combination of the triangulated phrase-tables and the direct one. The table shows the number of rules for each system and language pair after filtering based on the source side of the test set. (The combined tri + direct tables contain between 83M and 152M rules across the language pairs.)",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "The average BLEU scores of the direct system, mixture models and wmax ensemble triangulation approach over all 12 language pairs.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>src\u2193</td><td/><td>tgt \u2192</td><td>en</td><td>es</td><td>fr</td><td>src\u2193</td><td/><td>tgt \u2192</td><td>de</td><td>es</td><td>fr</td></tr><tr><td/><td>pivots</td><td>en es fr</td><td colspan=\"3\">-14.47 14.39 13.45 15.94 13.62 -13.43 -</td><td/><td>pivots</td><td>de es fr</td><td colspan=\"2\">-12.95 14.09 23.25 20.47 17.38 -20.78 -</td></tr><tr><td/><td/><td>it</td><td colspan=\"3\">14.14 14.90 11.67</td><td/><td/><td>it</td><td colspan=\"2\">13.00 23.18 19.02</td></tr><tr><td>de</td><td colspan=\"2\">direct mixture</td><td colspan=\"3\">21.94 20.70 17.37 21.86 22.30 18.28</td><td>en</td><td colspan=\"2\">direct mixture</td><td colspan=\"2\">17.57 28.81 24.58 17.91 28.89 24.30</td></tr><tr><td/><td colspan=\"2\">wmax</td><td colspan=\"3\">22.49 21.32 18.22</td><td/><td colspan=\"2\">wmax</td><td colspan=\"2\">17.77 29.17 25.39</td></tr><tr><td/><td colspan=\"2\">wsum</td><td colspan=\"3\">22.22 21.42 17.98</td><td/><td colspan=\"2\">wsum</td><td colspan=\"2\">17.68 29.33 24.70</td></tr><tr><td/><td colspan=\"2\">switch</td><td colspan=\"3\">22.59 21.80 17.70</td><td/><td colspan=\"2\">switch</td><td colspan=\"2\">17.77 29.32 24.98</td></tr><tr><td>src\u2193</td><td/><td>tgt \u2192</td><td>de</td><td>en</td><td>fr</td><td>src\u2193</td><td/><td>tgt \u2192</td><td>de</td><td>en</td><td>es</td></tr><tr><td/><td>pivots</td><td>de en fr</td><td colspan=\"3\">-14.50 12.48 22.81 18.84 23.28 -18.55 -</td><td/><td>pivots</td><td>de en es</td><td colspan=\"2\">-14.84 14.35 23.59 20.15 22.96 -27.84 -</td></tr><tr><td/><td/><td>it</td><td colspan=\"3\">13.69 23.14 23.44</td><td/><td/><td>it</td><td colspan=\"2\">14.08 24.08 30.38</td></tr><tr><td>es</td><td colspan=\"2\">direct mixture</td><td colspan=\"3\">16.30 28.11 29.83 17.75 28.99 29.47</td><td>fr</td><td colspan=\"2\">direct mixture</td><td colspan=\"2\">16.56 28.79 35.27 17.39 28.83 35.27</td></tr><tr><td/><td colspan=\"2\">wmax</td><td colspan=\"3\">17.34 29.23 
30.54</td><td/><td colspan=\"2\">wmax</td><td colspan=\"2\">17.67 29.95 36.07</td></tr><tr><td/><td colspan=\"2\">wsum</td><td colspan=\"3\">16.79 28.79 30.12</td><td/><td colspan=\"2\">wsum</td><td colspan=\"2\">17.41 28.62 35.98</td></tr><tr><td/><td colspan=\"2\">switch</td><td colspan=\"3\">16.53 29.16 29.68</td><td/><td colspan=\"2\">switch</td><td colspan=\"2\">17.78 28.79 36.33</td></tr></table>",
"text": "The BLEU scores of ensemble decoding using different mixture operations."
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Number of tokens in each language pair in the training data."
}
}
}
}