{ "paper_id": "D09-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:39:53.260531Z" }, "title": "Improved Statistical Machine Translation Using Monolingually-Derived Paraphrases", "authors": [ { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "", "affiliation": { "laboratory": "Lab at the Institute for Advanced Computer Studies (UMIACS", "institution": "University of Maryland College Park", "location": { "postCode": "20742-7505", "region": "MD", "country": "USA" } }, "email": "ymarton@umiacs.umd.edu" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "addrLine": "3400 N. Charles Street (CSEB 226-B) Baltimore", "postCode": "21218", "region": "MD" } }, "email": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "", "affiliation": { "laboratory": "Lab at the Institute for Advanced Computer Studies (UMIACS", "institution": "University of Maryland College Park", "location": { "postCode": "20742-7505", "region": "MD", "country": "USA" } }, "email": "resnik@umiacs.umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Untranslated words still constitute a major problem for Statistical Machine Translation (SMT), and current SMT systems are limited by the quantity of parallel training texts. Augmenting the training data with paraphrases generated by pivoting through other languages alleviates this problem, especially for the so-called \"low density\" languages. But pivoting requires additional parallel texts. We address this problem by deriving paraphrases monolingually, using distributional semantic similarity measures, thus providing access to larger training resources, such as comparable and unrelated monolingual corpora. We present what is to our knowledge the first successful integration of a collocational approach to untranslated words with an end-to-end, state of the art SMT system demonstrating significant translation improvements in a low-resource setting.", "pdf_parse": { "paper_id": "D09-1040", "_pdf_hash": "", "abstract": [ { "text": "Untranslated words still constitute a major problem for Statistical Machine Translation (SMT), and current SMT systems are limited by the quantity of parallel training texts. Augmenting the training data with paraphrases generated by pivoting through other languages alleviates this problem, especially for the so-called \"low density\" languages. But pivoting requires additional parallel texts. We address this problem by deriving paraphrases monolingually, using distributional semantic similarity measures, thus providing access to larger training resources, such as comparable and unrelated monolingual corpora. We present what is to our knowledge the first successful integration of a collocational approach to untranslated words with an end-to-end, state of the art SMT system demonstrating significant translation improvements in a low-resource setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Phrase-based systems, flat and hierarchical alike (Koehn et al., 2003; Koehn, 2004b; Koehn et al., 2007; Chiang, 2005; Chiang, 2007) , have achieved a much better translation coverage than wordbased ones (Brown et al., 1993) , but untranslated words remain a major problem in SMT. For example, according to Callison-Burch et al. 
(2006) , an SMT system with a training corpus of 10,000 words learned only 10% of the vocabulary; the same system learned about 30% with a training corpus of 100,000 words; and even with a large training corpus of nearly 10,000,000 words it reached only about 90% coverage of the source vocabulary. Coverage at higher-order n-gram levels is even harder to achieve. This problem plays a major part in reducing machine translation quality, as reflected by both automatic measures such as BLEU (Papineni et al., 2002) and human judgment tests. Accurately improving translation coverage is therefore important for SMT systems.", "cite_spans": [ { "start": 50, "end": 70, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF15" }, { "start": 71, "end": 84, "text": "Koehn, 2004b;", "ref_id": "BIBREF18" }, { "start": 85, "end": 104, "text": "Koehn et al., 2007;", "ref_id": "BIBREF16" }, { "start": 105, "end": 118, "text": "Chiang, 2005;", "ref_id": "BIBREF5" }, { "start": 119, "end": 132, "text": "Chiang, 2007)", "ref_id": "BIBREF6" }, { "start": 204, "end": 224, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" }, { "start": 307, "end": 335, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" }, { "start": 808, "end": 831, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first solution that might come to mind is to use larger parallel training corpora. However, current state-of-the-art SMT systems cannot learn from non-aligned corpora, while sentence-aligned parallel corpora (bitexts) are a limited resource (see Section 2 for a discussion of automatically-compiled bitexts). Another direction might be to make use of non-parallel corpora for training. However, this requires developing techniques to extract alignments or translations from them, and in a sufficiently fast, memory-efficient, and scalable manner. One approach that can, in principle, exploit both alignments from bitexts and non-parallel corpora is the distributional collocational approach, e.g., as used by Fung and Yee (1998) and Rapp (1999) . However, the systems described there are not easily scalable, and require pre-computation of a very large collocation-count matrix. Related attempts propose generating bitexts from comparable and \"quasi-comparable\" bilingual texts by iteratively bootstrapping documents, sentences, and words (Fung and Cheung, 2004) , or by using a maximum entropy classifier (Munteanu and Marcu, 2005) . Alignment accuracy remains a challenge for them.", "cite_spans": [ { "start": 729, "end": 748, "text": "Fung and Yee (1998)", "ref_id": "BIBREF13" }, { "start": 753, "end": 764, "text": "Rapp (1999)", "ref_id": "BIBREF31" }, { "start": 1059, "end": 1082, "text": "(Fung and Cheung, 2004)", "ref_id": "BIBREF12" }, { "start": 1126, "end": 1152, "text": "(Munteanu and Marcu, 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent work has proposed augmenting the training data with paraphrases generated by pivoting through other languages (Callison-Burch et al., 2006; Madnani et al., 2007) . This indeed alleviates the vocabulary coverage problem, especially for the so-called \"low density\" languages. 
However, these approaches still require bitexts where one side contains the original source language.", "cite_spans": [ { "start": 117, "end": 146, "text": "(Callison-Burch et al., 2006;", "ref_id": "BIBREF3" }, { "start": 147, "end": 168, "text": "Madnani et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paradigm described in this paper involves constructing monolingual distributional profiles (DPs; a.k.a. word association profiles, or co-occurrence vectors) of out-of-vocabulary words and phrases in the source language; then, generating paraphrase candidates from phrases that co-occur in similar contexts, and assigning them similarity scores. The highest-ranking paraphrases are used to augment the translation phrase table. The table augmentation idea is similar to that of Callison-Burch et al. (2006) , but our proposed paradigm does not require using a limited resource such as parallel texts in order to generate paraphrases. Moreover, our proposed paradigm can, in principle, achieve large-scale acquisition of paraphrases with high semantic similarity. However, using parallel training texts in pivoting techniques offers the potential advantage of implicit translational knowledge, in the form of sentence alignments, while our approach is unguided in this respect. Therefore, we conducted experiments to find out how these relative advantages play out. We present here, to our knowledge for the first time, positive results of integrating distributional monolingually-derived paraphrases in an end-to-end state-of-the-art SMT system.", "cite_spans": [ { "start": 471, "end": 524, "text": "Callison-Burch et al. (2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the rest of this paper we discuss related work in Section 2, describe the distributional hypothesis and distributional profiles in Section 3, and present the monolingually-derived paraphrase generation system in Section 4. We report our experiments and results in Section 5, and conclude by discussing the implications and future research directions in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This is not the first attempt to ameliorate the out-of-vocabulary (OOV) word problem in statistical machine translation and other natural language processing tasks. This work is most closely related to that of Callison-Burch et al. (2006) , who also translate source-side paraphrases of the OOV phrases. There, paraphrases are generated from bitexts of various language pairs, by \"pivoting\": translating the OOV phrases to an additional language (or languages) and back to the source language. The quality of these paraphrases is estimated by marginalizing translation probabilities to and from the additional language side(s) e, as follows: p(f_2|f_1) = \\sum_e p(e|f_1) p(f_2|e) (sketched below). A major disadvantage of their approach is that it relies on the availability of parallel corpora in other languages. While this works for English and many European languages, it is far less likely to help when translating from other source languages, for which bitexts are scarce or non-existent. Also, the pivoting approach is inherently noisy (in both the paraphrase candidates' preservation of the correct sense and their translational likelihood), and it is likely to fare poorly with out-of-domain translation. 
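To make the pivot-based scoring concrete, here is a minimal sketch of the marginalization; it is our illustration, not code from any of the cited systems, and the toy probability tables (p_e_given_f, p_f_given_e) are hypothetical stand-ins for real phrase-table distributions:

```python
from collections import defaultdict

def pivot_paraphrase_scores(f1, p_e_given_f, p_f_given_e):
    """Score paraphrase candidates f2 for a source phrase f1 by
    pivoting through phrases e: p(f2|f1) = sum_e p(e|f1) * p(f2|e)."""
    scores = defaultdict(float)
    for e, p_ef in p_e_given_f.get(f1, {}).items():      # f1 -> pivot e
        for f2, p_fe in p_f_given_e.get(e, {}).items():  # pivot e -> f2
            if f2 != f1:
                scores[f2] += p_ef * p_fe
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example with made-up numbers:
p_e_given_f = {"under control": {"unter kontrolle": 0.8, "im griff": 0.2}}
p_f_given_e = {"unter kontrolle": {"under control": 0.5, "in check": 0.4},
               "im griff": {"under control": 0.6, "in hand": 0.3}}
print(pivot_paraphrase_scores("under control", p_e_given_f, p_f_given_e))
# [('in check', 0.32...), ('in hand', 0.06...)]
```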
One advantage of the bitext-dependent pivoting approach is the use of the additional human knowledge that is encapsulated in the parallel sentence alignment. However, we argue that the ability to use much larger resources for paraphrasing should trump the human knowledge advantage.", "cite_spans": [ { "start": 215, "end": 243, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More recently, Callison-Burch (2008) has improved the performance of this pivoting technique by imposing syntactic constraints on the paraphrases. The limitation of such an approach is its reliance on a good parser (in addition to its reliance on bitexts), but a good parser is not available for all languages, especially not for resource-poor ones. Another approach using a pivoting technique augments the human reference translation with paraphrases, creating additional translation \"references\" (Madnani et al., 2007) . Both approaches have shown gains in BLEU score. Barzilay and McKeown (2001) extract paraphrases from a monolingual parallel corpus containing multiple translations of the same source. In addition to the parallel corpus usage limitations described above, this technique is further limited by the small size of such materials, which are even scarcer than the resources in the pivoting case. Dolan et al. (2004) explore generating paraphrases using edit distance over headlines of time- and topic-clustered news articles; they do not address the OOV problem directly, as their focus is sentence-level paraphrases; although they use a standard SMT measure, alignment error rate (AER), they report results only on alignment quality, and not on an end-to-end SMT system. Much of the previous research focused on morphological analysis in order to reduce type sparseness; Callison-Burch et al. (2006) list some of the influential work in that direction.", "cite_spans": [ { "start": 15, "end": 36, "text": "Callison-Burch (2008)", "ref_id": "BIBREF4" }, { "start": 493, "end": 515, "text": "(Madnani et al., 2007)", "ref_id": "BIBREF21" }, { "start": 566, "end": 593, "text": "Barzilay and McKeown (2001)", "ref_id": "BIBREF0" }, { "start": 908, "end": 927, "text": "Dolan et al. (2004)", "ref_id": "BIBREF8" }, { "start": 1390, "end": 1418, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Work that relies on the distributional hypothesis using bilingual comparable corpora (without the need for bitexts) typically uses a seed lexicon for \"bridging\" source-language phrases with their target-language paraphrases (Fung and Yee, 1998; Rapp, 1999; Diab and Finch, 2000) . This approach is sometimes viewed as, or combined with, an information retrieval (IR) approach, and normalizes strength-of-association measures (see Section 3) with IR-related measures such as TF/IDF (Fung and Yee, 1998) . 
To date, reported implementations suffer from scalability issues, as they pre-compute and hold in memory a huge collocation matrix; we know of no report of using this approach in an end-to-end SMT system.", "cite_spans": [ { "start": 226, "end": 246, "text": "(Fung and Yee, 1998;", "ref_id": "BIBREF13" }, { "start": 247, "end": 258, "text": "Rapp, 1999;", "ref_id": "BIBREF31" }, { "start": 259, "end": 280, "text": "Diab and Finch, 2000)", "ref_id": "BIBREF7" }, { "start": 483, "end": 503, "text": "(Fung and Yee, 1998)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another approach aiming to reduce the OOV rate concentrates on increasing parallel training set size without using more dedicated human translation (Resnik and Smith, 2003; Oard et al., 2003) .", "cite_spans": [ { "start": 144, "end": 168, "text": "(Resnik and Smith, 2003;", "ref_id": "BIBREF32" }, { "start": 169, "end": 187, "text": "Oard et al., 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The distributional hypothesis and distributional profiles. Natural language processing (NLP) applications that assume the distributional hypothesis (Harris, 1940; Firth, 1957) typically keep track of word co-occurrences in distributional profiles (a.k.a. collocation vectors, or context vectors). Each distributional profile DP_u (for some word u) keeps counts of the co-occurrence of u with all words within a usually fixed distance from each of its occurrences (a sliding window) in some training corpus. More advanced profiles keep \"strength of association\" (SoA) information between u and each of the co-occurring words, which is calculated from the counts of u, the counts of the other word, their co-occurrence count, and the count of all words in the corpus (corpus size). The information on the other words with respect to u is typically kept in a vector whose dimensions correspond to all words in the training corpus. This is described in Equation 1, where V is the training corpus vocabulary:", "cite_spans": [ { "start": 148, "end": 162, "text": "(Harris, 1940;", "ref_id": "BIBREF14" }, { "start": 163, "end": 175, "text": "Firth, 1957)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "DP_u = { < w_i, SoA(u, w_i) > | u, w_i \u2208 V } for all i s.t. 1 \u2264 i \u2264 |V|", "eq_num": "(1)" } ], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "Semantic similarity between words u and v can be estimated by calculating the similarity (vector distance) between their profiles. 
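As an illustration of how such profiles can be built and compared, here is a minimal sketch of ours (not the authors' implementation). It uses raw co-occurrence counts as a stand-in SoA and the cosine measure of Eq. 3 below, and treats the target as a token tuple so that the phrasal generalization described later in this section works too:

```python
from collections import Counter
from math import sqrt

def build_profile(tokens, target, window=6):
    """Flat (non-positional) distributional profile: counts of every
    word occurring within `window` tokens of an occurrence of `target`
    (a tuple of tokens). Raw co-counts stand in here for a real SoA
    such as the log-likelihood ratio used in the paper."""
    target = tuple(target)
    n = len(target)
    profile = Counter()
    for i in range(len(tokens) - n + 1):
        if tuple(tokens[i:i + n]) == target:
            for j in range(max(0, i - window), min(len(tokens), i + n + window)):
                if not i <= j < i + n:  # skip the target occurrence itself
                    profile[tokens[j]] += 1
    return profile

def psim(dp_u, dp_v):
    """Profile similarity: cosine between two SoA vectors (cf. Eq. 3)."""
    num = sum(dp_u[w] * dp_v[w] for w in dp_u.keys() & dp_v.keys())
    den = sqrt(sum(c * c for c in dp_u.values())) * \
          sqrt(sum(c * c for c in dp_v.values()))
    return num / den if den else 0.0

tokens = "the doctor treated the patient and the physician treated the patient".split()
print(psim(build_profile(tokens, ("doctor",)), build_profile(tokens, ("physician",))))
```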
Slightly more formally, the distributional hypothesis assumes that if we had access to the hypothetical true (psycholinguistic) semantic similarity function over word pairs,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "semsim(u, v), then \u2200u, v, w \u2208 V, [semsim(u, v) > semsim(u, w)] \u21d2 [psim(DP_u, DP_v) > psim(DP_u, DP_w)],", "eq_num": "(2)" } ], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "where V is the language vocabulary, DP_word is the distributional profile of word, and psim() is a 2-place vector similarity function (all further described below). Paraphrasing and other NLP applications that are based on the distributional hypothesis assume entailment in the reverse direction: the right-hand side of Formula (2) (profile/vector similarity) entails the left-hand side (semantic similarity).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "The sliding window and word association (SoA) measures. Some researchers count positional collocations in a sliding window, i.e., the co-counts and SoA measures are calculated per relative position (e.g., for some word/token u, position 1 is the token immediately after u; position -2 is the token preceding the token that precedes u) (Rapp, 1999) ; other researchers use non-positional (which we dub here flat) collocations, meaning that they count all token occurrences within the sliding window, regardless of their positions in it relative to u (McDonald, 2000; Mohammad and Hirst, 2006) . We use here flat collocations in a 6-token sliding window. Besides simple co-occurrence counts within sliding windows, other SoA measures include functions based on TF/IDF (Fung and Yee, 1998) , mutual information (PMI) (Lin, 1998) , conditional probabilities (Schuetze and Pedersen, 1997) , the chi-square test, and the log-likelihood ratio (Dunning, 1993) .", "cite_spans": [ { "start": 334, "end": 346, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 543, "end": 559, "text": "(McDonald, 2000;", "ref_id": "BIBREF23" }, { "start": 560, "end": 585, "text": "Mohammad and Hirst, 2006)", "ref_id": "BIBREF24" }, { "start": 758, "end": 778, "text": "(Fung and Yee, 1998)", "ref_id": "BIBREF13" }, { "start": 806, "end": 817, "text": "(Lin, 1998)", "ref_id": "BIBREF20" }, { "start": 846, "end": 875, "text": "(Schuetze and Pedersen, 1997)", "ref_id": "BIBREF34" }, { "start": 923, "end": 938, "text": "(Dunning, 1993)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "Profile similarity measures. A profile similarity function psim(DP_u, DP_v) is typically defined as a two-place function, taking vectors as arguments, each vector representing a distributional profile of words u and v, respectively, and whose cells contain the SoA of u (or v) with each word (\"collocate\") in the known vocabulary. Similarity can be (and has been) estimated in several ways, e.g., the cosine coefficient, the Jaccard coefficient, the Dice coefficient, and the City-Block measure. The formula for the cosine similarity measure is given in Eq. 
3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "psim(DP u , DP v ) = cos(DP u , DP v ) = w i \u2208V SoA(u, w i )SoA(v, w i ) w i \u2208V SoA(u, w i ) 2 w i \u2208V SoA(v, w i ) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "(3) In principle, any SoA can be used with any profile similarity measure. However, in practice, only some SoA/similarity measure combinations do well, and finding the best combination is still more art than science. Some successful combinations are cos CP (Schuetze and Pedersen, 1997) , (Lin, 1998) , City LL (Rapp, 1999) , and Jensen-Shannon divergence of conditional probabilities (JSD CP ). We use here cosine of loglikelihood vectors (McDonald, 2000) .", "cite_spans": [ { "start": 257, "end": 286, "text": "(Schuetze and Pedersen, 1997)", "ref_id": "BIBREF34" }, { "start": 289, "end": 300, "text": "(Lin, 1998)", "ref_id": "BIBREF20" }, { "start": 311, "end": 323, "text": "(Rapp, 1999)", "ref_id": "BIBREF31" }, { "start": 440, "end": 456, "text": "(McDonald, 2000)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "Lin P M I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "Phrasal distributional profiles. Word DPs can be generalized to phrasal DPs, simply by counting words that co-occur within a sliding window around the target phrase's occurrences (i.e., counting occurrences of words up to 6 words before or after the target phrase). For example, when building a DP for the target phrase counting words in the previous sentence, then simply is in relative position -2, and sliding is in relative position 5. Searching for similar phrasal DPs poses an additional challenge over the word DP case (see Section 4), but there is no additional difficulty in building the phrasal profile itself as described above. In preliminary experiments we found no gain in using phrasal collocates (i.e., count how many times a phrase of more than one word co-occurs in a sliding window around the target word/phrase).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocational Profiles", "sec_num": "3" }, { "text": "The system design is as follows: upon receiving OOV phrase phr, build distributional profile DP phr . Next, gather contexts: for each occurrence of phr, keep surrounding (left and right) context L__R. For each such context, gather paraphrase candidates X which occur between L and R in other locations in the training corpus, i.e., all X such that LXR occur in the corpus. Finally, rank all candidates X, by building distributional profile DP X and measuring profile similarity between DP X and DP phr , for each X. Output k-best candidates above a certain similarity score threshold. The rest of this section describes this system in more detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" }, { "text": "Build phrasal profile DP phr . Build a profile of all word collocates, as described in Section 3. Use sliding window of size M axP os = 6. If phr is very frequent (above some threshold of t occurrences), uniformly sample only t occurrences, multiplying the gathered co-counts by factor of count(phr)/t. 
We set t = 10000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" }, { "text": "Gather context. The challenge in choosing the relevant context is this: if it is very short and/or very frequent (e.g., \"the __ is\"), then it might not be very informative, in the sense that many words can appear in that context (in this example, practically any noun); however, if it is too long (too specific), then it might not occur enough times elsewhere (or at all) in the training corpus. Therefore, to balance between these two extremes, we use the following heuristics. Start small: set the left part of the context L to be a single word/token to the left of the phrase phr. If it is stoplisted, append the next word to the left (now having a bigram left context instead of a unigram), and repeat until the left context is not in the stoplist. Repeat similarly for R, the context to the right of phr. Add the resulting L__R context to a context list. We stoplist \"promiscuous\" words, i.e., those that have more than StoplistThreshold collocates in the training corpus, using the above MaxPos parameter value. We also stoplist bigrams which occur more than t times and consist solely of stoplisted unigrams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" }, { "text": "Gather candidates. For each gathered context in the context list, gather all paraphrase candidate phrases X that connect the left-hand-side context L with the right-hand-side context R, i.e., gather all X such that the sequence LXR occurs in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" }, { "text": "In practice, to keep search complexity low, we limit X to be up to length MaxPhraseLen. Also, to further speed up runtime, we uniformly sample the context occurrences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" }, { "text": "Rank candidates. For each candidate X, build a distributional profile DP_X, and evaluate psim(DP_phr, DP_X).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" }, { "text": "Output k-best candidates. Output the k-best paraphrase candidates for phrase phr, in descending order of similarity. We set k = 20. Filter out paraphrases with a score less than minScore.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Searching and Scoring Phrasal Paraphrases", "sec_num": "4" },
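To make the gather-and-rank procedure above concrete, here is a minimal sketch of ours under simplifying assumptions: single-token contexts only, and no stoplist-driven context widening, occurrence sampling, or suffix-array pattern matching. It reuses the toy build_profile and psim helpers sketched in Section 3:

```python
def paraphrase_candidates(tokens, phr, k=20, min_score=0.3, max_phrase_len=4):
    """Gather contexts L _ R around each occurrence of phr (a token
    tuple), collect every X such that L X R occurs elsewhere in the
    corpus, and rank the X by profile similarity to phr."""
    n = len(phr)
    contexts = {(tokens[i - 1], tokens[i + n])
                for i in range(1, len(tokens) - n)
                if tuple(tokens[i:i + n]) == phr}
    candidates = set()
    for i in range(len(tokens)):                 # i indexes the left context L
        for m in range(1, max_phrase_len + 1):   # m is the candidate length
            j = i + 1 + m                        # j indexes the right context R
            if j < len(tokens) and (tokens[i], tokens[j]) in contexts:
                x = tuple(tokens[i + 1:j])
                if x != phr:
                    candidates.add(x)
    dp_phr = build_profile(tokens, phr)
    ranked = sorted(((x, psim(dp_phr, build_profile(tokens, x)))
                     for x in candidates), key=lambda t: -t[1])
    return [(x, s) for x, s in ranked if s >= min_score][:k]
```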
{ "text": "We examined the application of the system's paraphrases to handling unknown phrases when translating from English into Chinese (E2C) and from Spanish into English (S2E). For all baselines we used the phrase-based statistical machine translation system Moses (Koehn et al., 2007) , with the default model features, weighted in a log-linear framework (Och and Ney, 2002) . Feature weights were set with minimum error rate training (Och, 2003) on a development set using BLEU (Papineni et al., 2002) as the objective function. Test results were evaluated using BLEU and TER (Snover et al., 2005) . The phrase translation probabilities were determined using maximum likelihood estimation over phrases induced from word-level alignments produced by performing Giza++ training (Och and Ney, 2000) on both source and target sides of the parallel training sets. When the baseline system encountered unknown words in the test set, its behavior was simply to reproduce the foreign word in the translated output.", "cite_spans": [ { "start": 258, "end": 278, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF16" }, { "start": 349, "end": 368, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF28" }, { "start": 429, "end": 440, "text": "(Och, 2003)", "ref_id": "BIBREF29" }, { "start": 473, "end": 496, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF30" }, { "start": 571, "end": 592, "text": "(Snover et al., 2005)", "ref_id": "BIBREF35" }, { "start": 771, "end": 790, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "The paraphrase-augmented systems were identical to the corresponding baseline system, with the exception of additional (paraphrase-based) translation rules and additional feature(s). Similarly to Callison-Burch et al. (2006) , we added the following feature:", "cite_spans": [ { "start": 197, "end": 225, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "h(e, f) = \\begin{cases} psim(DP_f, DP_{f'}) & \\text{if phrase table entry } (e, f) \\text{ is generated from } (e, f') \\text{ using monolingually-derived paraphrases}^1 \\\\ 1 & \\text{otherwise} \\end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "Note that it is possible to construct a new translation rule from f to e via more than one pair of a source-side phrase and its paraphrase; e.g., if f_1 is a paraphrase of f, and so is f_2, and both f_1, f_2 translate to the same e, then both lead to the construction of the new rule translating f to e, but with potentially different feature scores. In order to eliminate this duplication and leverage these alternate paths to increase our confidence in the new rule, we did the following: for each paraphrase f of some source-side phrases f_i, with respective similarity scores sim(f_i, f), we calculated an aggregate score asim with a \"quasi-online-updating\" method (sketched below), as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "asim_i = asim_{i-1} + (1 - asim_{i-1}) \\cdot sim(f_i, f)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": ", where asim_0 = 0. The aggregate score asim is updated in an \"online\" fashion with each pair f_i, f as they are processed, but only the final asim_k score is used, after all k pairs have been processed. Simple arithmetic shows that this method is insensitive to the order in which the paraphrases are processed. We augment the phrase table with only a single rule from f to e, carrying the feature values of the phrase f_i for which the score sim(f_i, f) was highest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" },
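A small sketch of this aggregation (our reading of the update rule; the closed form 1 - prod_i(1 - sim(f_i, f)) makes the claimed order-insensitivity explicit):

```python
from itertools import permutations

def aggregate_sim(sims):
    """Quasi-online update: asim_i = asim_{i-1} + (1 - asim_{i-1}) * sim_i,
    starting from asim_0 = 0. Equivalent to 1 - prod(1 - s) over all s."""
    asim = 0.0
    for s in sims:
        asim += (1.0 - asim) * s
    return asim

scores = [0.5, 0.4, 0.3]
assert all(abs(aggregate_sim(p) - aggregate_sim(scores)) < 1e-12
           for p in permutations(scores))
print(aggregate_sim(scores))  # 0.79 = 1 - 0.5*0.6*0.7
```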
{ "text": "For the English-Chinese (E2C) baseline system, we trained on the LDC Sinorama and FBIS texts (LDC2005T10 and LDC2003E14), and segmented the Chinese side with the Stanford Segmenter (Tseng et al., 2005) . After tokenization and filtering, this bitext contained 231,586 lines (6.4M + 5.1M tokens). We trained a trigram language model on the Chinese side. We then split the bitext into 32 even slices, and constructed a reduced set of about 29,000 lines (sentences) by using only every eighth slice. The purpose of creating this subset model was to simulate a resource-poor language. See Table 1 . For development, we used the Chinese-English NIST MT 2005 evaluation set, taking one of the English references as source, and the Chinese source as a single reference translation. We tested the system using the English-Chinese NIST MT evaluation 2008 test set with its four reference translations.", "cite_spans": [ { "start": 181, "end": 201, "text": "(Tseng et al., 2005)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 583, "end": 590, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "English-to-Chinese Translation", "sec_num": "5.1" }, { "text": "We augmented the E2C baseline models with paraphrases generated as described above, training on the British National Corpus (BNC) v3 (Burnard, 2000) and the first 3 million lines of the English Gigaword v2 APW, totaling 187M terms after tokenization and removal of numbers and punctuation. We generated paraphrases for phrases up to six tokens in length, and used an arbitrary similarity threshold of minScore = 0.3. We experimented with three variants: adding a single additional feature for all paraphrases (1-6grams); using only paraphrases of unigrams (1grams); and adding two features, one sensitive only to unigrams, and the other only to the rest (1 + 2-6grams). All features had the same design as described in Section 5, each had an associated weight (as all other features), and all feature weights in each system, including the baseline, were tuned using a separate minimum error rate training for each system.", "cite_spans": [ { "start": 133, "end": 148, "text": "(Burnard, 2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "English-to-Chinese Translation", "sec_num": "5.1" }, { "text": "Results are shown in Table 2 . For the E2C systems, for which we had four reference translations for the test set, we used the shortest reference length, and used the NIST-provided script to split the output words into Chinese characters before evaluation. Statistical significance for the BLEU results was calculated using Koehn's (2004) pair-wise bootstrapping test with a 95% confidence interval.", "cite_spans": [ { "start": 327, "end": 340, "text": "(Koehn, 2004)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "English-to-Chinese Translation", "sec_num": "5.1" }, { "text": "On the E2C 29,000-line subset, the augmented system had a significant gain of 1.7 BLEU points over its baseline. On the full-size model, results were negative. Note that our E2C full-size baseline is reasonably strong: its character-based BLEU score is slightly higher than that of the JHU-UMD system that participated in the NIST 2008 MT evaluation (constrained training track), although we used a subset of that system's training materials, and a smaller language model. 
Results there ranged from 15.69 to 30.38 BLEU (ignoring a seeming outlier of 3.93).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English-to-Chinese Translation", "sec_num": "5.1" }, { "text": "In order to permit a more direct comparison with the pivoting technique, we also experimented with Spanish-to-English (S2E) translation, following Callison-Burch et al. (2006) . For the baseline we used the Spanish and English sides of the Europarl multilingual parallel corpus (Koehn, 2005) , with the standard training, development, and test sets. We created training subset models of 10,000, 20,000, and 80,000 aligned sentences, as described in Callison-Burch et al. (2006) . We also re-trained after adding the JRC-Acquis-v3 corpus 2 to the paraphrase training set, and then also adding the LDC Spanish Gigaword (LDC2006T12), truncating the resulting corpus after the first 150M lines. We lowercased these training sets, tokenized them, and removed punctuation marks and numbers; the resulting training set sizes are detailed in Table 1 . We generated paraphrases for phrases up to four tokens in length, and used two arbitrary similarity thresholds, minScore = 0.3 (as in the E2C experiments) and 0.6, the latter enforcing higher-precision paraphrasing. We experimented with these variants: a single feature for all paraphrases (1-4grams); using only paraphrases of unigrams (1grams); and using two features: one sensitive only to unigrams and bigrams, and the other to the rest (1-2 + 3-4grams).", "cite_spans": [ { "start": 150, "end": 178, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" }, { "start": 277, "end": 290, "text": "(Koehn, 2005)", "ref_id": "BIBREF19" }, { "start": 448, "end": 476, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 828, "end": 835, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Spanish-to-English Translation", "sec_num": "5.2" }, { "text": "Results are shown in Table 4 . We used BLEU over lowercased outputs to evaluate all S2E systems, and Koehn's significance test as above.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Spanish-to-English Translation", "sec_num": "5.2" }, { "text": "On the S2E 10,000-line subset, both the 1grams and 1-4grams models achieved significant gains of 0.4 BLEU points over the baseline. We concluded from a manual evaluation of the 10,000-line models that the two major weaknesses of the baseline system were (not surprisingly) the number of untranslated (OOV) words/phrases, followed by the number of superfluous words/phrases. On the larger subset models, no system significantly outperformed the baseline. Note that our S2E baselines' scores are higher than those of Callison-Burch et al. (2006) , since we evaluate lowercased outputs instead of recased ones.", "cite_spans": [ { "start": 510, "end": 538, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-to-English Translation", "sec_num": "5.2" }, { "text": "We have shown that monolingually-derived paraphrases, based on distributional semantic similarity measures over a source-language corpus, can improve the performance of statistical machine translation (SMT) systems. 
Our proposed method has the advantage of not relying on bitexts in order to generate the paraphrases, and therefore gives access to large amounts of monolingual training data, for which creating bitexts of equivalent size is generally infeasible. We have not yet trained our system on nearly as large a corpus as it can handle, and indeed we see this as a natural next step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "Results support the assumption that a larger monolingual paraphrase training set yields better paraphrases: our S2E 1-4grams model performed significantly better than the baseline when using wmt09+acquis for paraphrasing, but when using only wmt09, the model had a smaller advantage that did not reach significance. However, for the S2E 1grams model, there was a slight decrease in performance when switching the paraphrasing corpus from wmt09+acquis to wmt09+acquis+afp. This effect might be due to the genre or unbalanced content of the additional corpus, or perhaps at this corpus size paraphrases of higher-level n-grams benefited from the additional text much more than paraphrases of unigrams did. The two rightmost columns in Table 5 show that although the Spanish monolingual paraphrases for the unigram baile improve when using the larger corpus (e.g., danza and un baile become the third and fourth top candidates, pushing much worse candidates far down the list), the two top paraphrase candidates remained unchanged. However, for the 4gram a favor del informe, antonymous candidates, which are bad and misleading for translation, are pushed down from the first and third spots by synonymous, better candidates. Table 3 contains additional examples of good and bad top paraphrase candidates, also in English. Paraphrases of phrases seem to be of lower quality than those of unigrams, as can be seen at the bottom of the table.", "cite_spans": [], "ref_spans": [ { "start": 742, "end": 749, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "These results also show that our method is especially useful in settings involving low-density languages or special domains: the smaller subset models, emulating a resource-poor language situation, show higher gains than larger models (which are supersets of the smaller subset models) when augmented with paraphrases derived from the same paraphrase training set. This was validated in two very different language pairs: English to Chinese, and Spanish to English. We believe that larger monolingual training sets for paraphrasing can help languages with richer resources as well, and we intend to explore this too.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "Although the gains in the Spanish-English subsets are somewhat smaller than those of the pivoting technique reported in Callison-Burch et al. (2006) , e.g., 0.7 BLEU for the 10k subset, we take these results as a proof of concept that can yield better gains with larger monolingual training sets.
", "cite_spans": [ { "start": 111, "end": 139, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "[Table 5: Comparison of Spanish paraphrases: by pivoting, and by two monolingual corpora. Ordered from best to worst score. The flattened table rows (paraphrase candidate lists for the unigram baile and the 4gram a favor del informe) are omitted here. A second flattened table of example system outputs for a Spanish source sentence, comparing the reference, the baseline, the pivoting (MW) system, and the wmt09+acquis and wmt09+acquis+afp paraphrase-augmented systems, is likewise omitted.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "Pivoting techniques (translating back and forth) rely on limited resources (bitexts), and are subject to shifts in meaning due to their inherent double translation step. In contrast, large monolingual resources are relatively easy to collect, and our system involves only a single translation/paraphrasing step per target phrase. Table 5 also shows an example comparison with the pivoting paraphrases used in Callison-Burch et al. (2006) . It seems that the pivoting paraphrases might suffer more from having frequent function words as top candidates, which might be a by-product of their alignment \"promiscuity\". However, the top-antonymous-candidate problem seems mainly to plague the monolingual distributional paraphrases (but improves with larger corpora). 
See also Table 6 .", "cite_spans": [ { "start": 857, "end": 885, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 777, "end": 784, "text": "Table 5", "ref_id": null }, { "start": 1219, "end": 1226, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "Paraphrase quality remains an issue with this method (as with all other paraphrasing methods). Some possible ways of improving it, besides using larger corpora, are: using syntactic information (Callison-Burch, 2008) , using semantic knowledge such as a thesaurus or WordNet to perform word sense disambiguation (WSD) (Resnik, 1999; Mohammad and Hirst, 2006) , improving the similarity measure, and refining the similarity threshold. We would like to explore ways of incorporating syntactic knowledge that do not sacrifice coverage as much as in Callison-Burch (2008) ; incorporating semantic knowledge to disambiguate phrasal senses; using context to help sense disambiguation (Erk and Pad\u00f3, 2008) ; and optimizing the similarity threshold for use in SMT, for example on a held-out dataset: too high a threshold reduces coverage, while too low a threshold results in bad paraphrases and translations.", "cite_spans": [ { "start": 198, "end": 220, "text": "(Callison-Burch, 2008)", "ref_id": "BIBREF4" }, { "start": 320, "end": 334, "text": "(Resnik, 1999;", "ref_id": "BIBREF33" }, { "start": 335, "end": 360, "text": "Mohammad and Hirst, 2006)", "ref_id": "BIBREF24" }, { "start": 548, "end": 569, "text": "Callison-Burch (2008)", "ref_id": "BIBREF4" }, { "start": 680, "end": 700, "text": "(Erk and Pad\u00f3, 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "The method presented here is quite general, and therefore different similarity measures, including other corpus-based ones, can be plugged in to generate paraphrases. We are also looking into using DPs with word-sense disambiguation: it has been shown that similarity is often judged by the semantic distance of the closest senses of the two target words (Mohammad and Hirst, 2006) , and that paraphrases generated this way are likely to be of higher quality (Marton et al., 2009) ; hence, the overall performance of an SMT system using them is likely to improve further as well.", "cite_spans": [ { "start": 356, "end": 382, "text": "(Mohammad and Hirst, 2006)", "ref_id": "BIBREF24" }, { "start": 460, "end": 481, "text": "(Marton et al., 2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "One potential advantage of using bitexts for paraphrase generation is the use of implicit human knowledge, i.e., sentence alignments. The concern that not using this knowledge would prove detrimental to the performance of SMT systems augmented by paraphrases as described here was largely put to rest, as our method improved the tested subset SMT systems' quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "1 http://www.statmt.org/wmt09 2 http://wt.jrc.it/lt/Acquis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Many thanks to Chris Dyer for his help with the E2C set, and to Adam Lopez for his implementation of pattern matching with Suffix Array. 
This research was partially supported by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-001 and NSF award 0838801, by the EuroMatrixPlus project funded by the European Commission, and by the US National Science Foundation under grant IIS-0713448. The views and findings are the authors' alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extracting paraphrases from a parallel corpus", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of ACL-2001.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A D" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "P.F. Brown, S.A.D. Pietra, V.J.D. Pietra, and R.L. Mercer. 1993. The mathematics of statistical machine translation. Computational Linguistics, 19(2):263-313.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reference Guide for the British National Corpus", "authors": [ { "first": "Lou", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 2000, "venue": "Oxford University Computing Services", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lou Burnard. 2000. Reference Guide for the British National Corpus. Oxford University Computing Services, Oxford, England, world edition.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Improved statistical machine translation using paraphrases", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Miles", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2006, "venue": "Proceedings NAACL-2006", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings NAACL-2006.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Syntactic constraints on paraphrases extracted from parallel corpora", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. 
In Proceedings of EMNLP 2008, Waikiki, Hawai'i.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL-05", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL-05, pages 263-270.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "201--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A statistical word-level translation model for comparable corpora", "authors": [ { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Finch", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Conference on Content-Based Multimedia Information Access (RIAO)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mona Diab and Steve Finch. 2000. A statistical word-level translation model for comparable corpora. In Proceedings of the Conference on Content-Based Multimedia Information Access (RIAO).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources", "authors": [ { "first": "B", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Dolan, C. Quirk, and C. Brockett. 2004. Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics of the Association for Computational Linguistics, Geneva, Switzerland.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "T", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. 
Computational Linguistics, 19(1):61-74.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A structured vector space model for word meaning in context", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2008)", "volume": "", "issue": "", "pages": "897--906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk and Sebastian Pad\u00f3. 2008. A structured vector space model for word meaning in context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2008), pages 897-906, Honolulu, HI.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A synopsis of linguistic theory 1930-55", "authors": [ { "first": "John", "middle": [ "R" ], "last": "Firth", "suffix": "" } ], "year": 1957, "venue": "Studies in Linguistic Analysis", "volume": "", "issue": "", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Firth. 1957. A synopsis of linguistic theory 1930-55. Studies in Linguistic Analysis, (special volume of the Philological Society):1-32. Distributional Hypothesis.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multilevel bootstrapping for extracting parallel sentences from a quasi-comparable corpus", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th international conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung and Percy Cheung. 2004. Multilevel bootstrapping for extracting parallel sentences from a quasi-comparable corpus. In Proceedings of the 20th international conference on Computational Linguistics, page 1051, Geneva, Switzerland. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An IR approach for translating new words from nonparallel, comparable texts", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Lo Yuen", "middle": [], "last": "Yee", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING-ACL98", "volume": "", "issue": "", "pages": "414--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from nonparallel, comparable texts. In Proceedings of COLING-ACL98, pages 414-420, Montreal, Canada.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Review of Louis H. Gray, Foundations of Language", "authors": [ { "first": "Zellig", "middle": [ "S" ], "last": "Harris", "suffix": "" } ], "year": 1940, "venue": "Language", "volume": "16", "issue": "3", "pages": "216--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S. Harris. 1940. Review of Louis H. Gray, Foundations of Language (New York: Macmillan, 1939). 
Language, 16(3):216-231.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "127--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, pages 127-133.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Annual Meeting of the Association for Computational Linguistics (ACL), demonstration session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), demonstration session, Prague, Czech Republic.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Pharaoh: A beam search decoder for phrase-based statistical machine translation models", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004b. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. 
In Proceedings of AMTA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of MT-Summit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. A parallel corpus for statistical machine translation. In Proceedings of MT-Summit.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An information-theoretic definition of similarity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 15th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, pages 296-304, San Francisco, CA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using paraphrases for parameter tuning in statistical machine translation", "authors": [ { "first": "Nitin", "middle": [], "last": "Madnani", "suffix": "" }, { "first": "Necip Fazil", "middle": [], "last": "Ayan", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitin Madnani, Necip Fazil Ayan, Philip Resnik, and Bonnie Dorr. 2007. Using paraphrases for parameter tuning in statistical machine translation. In Proceedings of the ACL Workshop on Statistical Machine Translation.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Estimating semantic distance using soft semantic constraints in knowledge-source / corpus hybrid models", "authors": [ { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuval Marton, Saif Mohammad, and Philip Resnik. 2009. Estimating semantic distance using soft semantic constraints in knowledge-source / corpus hybrid models. In Proceedings of EMNLP, Singapore.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Environmental determinants of lexical processing effort", "authors": [ { "first": "S", "middle": [], "last": "McDonald", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. McDonald. 2000. Environmental determinants of lexical processing effort. Ph.D.
thesis, University of Edinburgh.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Distributional measures of concept-distance: A task-oriented evaluation", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Graeme Hirst. 2006. Distributional measures of concept-distance: A task-oriented evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2006), Sydney, Australia.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improving machine translation performance by exploiting non-parallel corpora", "authors": [ { "first": "Dragos Stefan", "middle": [], "last": "Munteanu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "4", "pages": "477--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4):477-504.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Desperately seeking Cebuano", "authors": [ { "first": "Doug", "middle": [], "last": "Oard", "suffix": "" }, { "first": "David", "middle": [], "last": "Doermann", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Daqing", "middle": [], "last": "He", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "William", "middle": [], "last": "Byrne", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Leuski", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doug Oard, David Doermann, Bonnie Dorr, Daqing He, Philip Resnik, William Byrne, Sanjeev Khudanpur, David Yarowsky, Anton Leuski, Philipp Koehn, and Kevin Knight. 2003. Desperately seeking Cebuano. In Proceedings of HLT-NAACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improved statistical alignment models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models.
In Proceedings of the 38th Annual Meeting of the ACL, pages 440-447.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the ACL, pages 160-167.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Corpus-based comprehensive and diagnostic MT evaluation: Initial Arabic, Chinese, French, and Spanish results", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "John", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Florence", "middle": [], "last": "Reeder", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL Human Language Technology Conference", "volume": "", "issue": "", "pages": "124--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, John Henderson, and Florence Reeder. 2002. Corpus-based comprehensive and diagnostic MT evaluation: Initial Arabic, Chinese, French, and Spanish results. In Proceedings of the ACL Human Language Technology Conference, pages 124-127, San Diego, CA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Automatic identification of word translations from unrelated English and German corpora", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "519--525", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. In Proceedings of the 37th Annual Conference of the Association for Computational Linguistics, pages 519-525.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The web as a parallel corpus", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "3", "pages": "349--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Noah Smith. 2003. The web as a parallel corpus.
Computational Linguistics, 29(3):349-380.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1999, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "11", "issue": "", "pages": "95--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research (JAIR), 11:95-130.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A cooccurrence-based thesaurus and two applications to information retrieval", "authors": [ { "first": "Hinrich", "middle": [], "last": "Schuetze", "suffix": "" }, { "first": "Jan", "middle": [ "O" ], "last": "Pedersen", "suffix": "" } ], "year": 1997, "venue": "Information Processing and Management", "volume": "33", "issue": "3", "pages": "307--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Schuetze and Jan O. Pedersen. 1997. A cooccurrence-based thesaurus and two applications to information retrieval. Information Processing and Management, 33(3):307-318.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A study of translation error rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie J. Dorr, Richard Schwartz, John Makhoul, Linnea Micciulla, and Ralph Weischedel. 2005. A study of translation error rate with targeted human annotation. Technical Report LAMP-TR-126, CS-TR-4755, UMIACS-TR-2005-58, University of Maryland, July 2005.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A conditional random field word segmenter", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Pichuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter. In Fourth SIGHAN Workshop on Chinese Language Processing.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "type_str": "table", "text": "Training set sizes (million tokens).", "content": "
Set | # Tokens (Source + Target)
E2C 29K | 0.8 + 0.6
E2C Full | 6.4 + 5.1
bnc+apw | 187
S2E 10K | 0.3 + 0.3
S2E 20K | 0.6 + 0.6
S2E 80K | 2.3 + 2.3
wmt09 | 84
wmt09+acquis | 139
wmt09+acquis+afp | 402
", "html": null }, "TABREF2": { "num": null, "type_str": "table", "text": "", "content": "
E2C Results: character-based BLEU and TER scores. All models have one additional feature over baseline, except for the \"1 + 2-6\" models that have one feature for unigrams and another feature for bigrams to 6-grams. Paraphrases with score < .3 were filtered out. *** = significance test over baseline with p < 0.0001, using Koehn's (2004) pairwise bootstrap resampling test for BLEU with 95% confidence interval.
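The bootstrap test cited in this caption works by repeatedly resampling the test set with replacement and counting how often one system outscores the other. The following is a minimal sketch of that procedure, not the evaluation code behind these tables; the paired_bootstrap name and the corpus-level bleu callable are illustrative assumptions (any standard BLEU implementation could be plugged in).

```python
# Minimal sketch of a Koehn (2004)-style paired bootstrap resampling test.
# `bleu(hyps, refs)` is an assumed corpus-level BLEU scorer supplied by the
# caller; the rest is standard resampling with replacement.
import random

def paired_bootstrap(sys_a, sys_b, refs, bleu, n_samples=1000, seed=0):
    """Fraction of resampled test sets on which system A outscores system B."""
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_samples):
        # Resample segment indices with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        score_a = bleu([sys_a[i] for i in idx], [refs[i] for i in idx])
        score_b = bleu([sys_b[i] for i in idx], [refs[i] for i in idx])
        if score_a > score_b:
            wins_a += 1
    # A winning on >= 95% of samples corresponds to the 95% confidence level.
    return wins_a / n_samples
```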
Paraphrase | Score
Source: deal
agreement | 0.56
accord | 0.53
talks | 0.45
contract | 0.42
peace deal | 0.33
merger | 0.32
agreement is | 0.30
Source: fall
rise | 0.87
slip | 0.82
tumbled today | 0.68
fell today | 0.67
tumble | 0.65
fall tokyo ap stock prices fell | 0.56
are mixed | 0.54
Source: to provide any other
to give any | 0.74
to give further | 0.70
to provide any | 0.68
to give any other | 0.62
to provide further | 0.61
to provide other | 0.53
to reveal any | 0.52
to provide any further | 0.48
to disclose any | 0.47
to publicly discuss the | 0.43
Source: we have a situation that
uncontroversial question about our | 0.66
obviously with the developments this morning | 0.65
community staffing of community centres | 0.64
perhaps we are getting rather impatient | 0.63
er around the inner edge | 0.60
interested in going to the topics | 0.60
and that is the day that | 0.60
as a as a final point | 0.59
left which it may still have | 0.56
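The scores above are distributional similarity scores between the source phrase and each candidate, with low scorers filtered out (cf. the "score < .3" cutoff in the caption). As a rough illustration only, here is a sketch of scoring and thresholding candidates with cosine similarity over sparse context-count vectors; the helper names and the choice of cosine over raw counts are assumptions for this sketch, and the paper's actual distributional measures may differ.

```python
# Hedged sketch: ranking paraphrase candidates by distributional similarity
# and dropping those below a threshold. Vectors map context words to counts.
import math
from collections import Counter

def cosine(v1: Counter, v2: Counter) -> float:
    """Cosine similarity between two sparse context-count vectors."""
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

def rank_paraphrases(source_vec, candidate_vecs, min_score=0.3):
    """Rank candidates by similarity to the source phrase; drop low scorers."""
    scored = ((cand, cosine(source_vec, vec))
              for cand, vec in candidate_vecs.items())
    return sorted((cs for cs in scored if cs[1] >= min_score),
                  key=lambda cs: cs[1], reverse=True)

# Toy usage (made-up context counts):
# src = Counter({"music": 4, "floor": 2})
# cands = {"dance": Counter({"music": 3, "floor": 1})}
# rank_paraphrases(src, cands)  ->  [("dance", 0.98...)]
```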
", "html": null }, "TABREF3": { "num": null, "type_str": "table", "text": "", "content": "
English paraphrases from E2C 29K-bitext systems.
", "html": null }, "TABREF5": { "num": null, "type_str": "table", "text": "S2E Results: Lowercase BLEU and TER. Paraphrases with score < minScore were filtered out. *** = significance test over baseline with p < 0.0001, usingKoehn's (2004) pair-wise bootstrap test for BLEU with 95% confidence interval.", "content": "
pivot | wmt09+acquis | wmt09+acquis+afp
Source: baile
danza | el baile | el baile
bailar | baile y | baile y
a | de david palomar y la | danza
dans | viejo como quien se acomoda una | un baile
empresa | por juli\u00e1n estrada el tercero de | teatro
coro | |
", "html": null }, "TABREF6": { "num": null, "type_str": "table", "text": "S2E translation examples on 10k-bitext systems. Some translation differences are in bold.", "content": "", "html": null } } } }