{ "paper_id": "W12-0102", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:15:14.045098Z" }, "title": "Measuring Comparability of Documents in Non-Parallel Corpora for Efficient Extraction of (Semi-)Parallel Translation Equivalents", "authors": [ { "first": "Fangzhong", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "Centre for Translation Studies University Of Leeds", "institution": "", "location": { "postCode": "LS2 9JT", "settlement": "Leeds", "country": "UK" } }, "email": "" }, { "first": "Bogdan", "middle": [], "last": "Babych", "suffix": "", "affiliation": { "laboratory": "Centre for Translation Studies University Of Leeds", "institution": "", "location": { "postCode": "LS2 9JT", "settlement": "Leeds", "country": "UK" } }, "email": "b.babych@leeds.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present and evaluate three approaches to measure comparability of documents in non-parallel corpora. We develop a task-oriented definition of comparability, based on the performance of automatic extraction of translation equivalents from the documents aligned by the proposed metrics, which formalises intuitive definitions of comparability for machine translation research. We demonstrate application of our metrics for the task of automatic extraction of parallel and semiparallel translation equivalents and discuss how these resources can be used in the frameworks of statistical and rule-based machine translation.", "pdf_parse": { "paper_id": "W12-0102", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present and evaluate three approaches to measure comparability of documents in non-parallel corpora. We develop a task-oriented definition of comparability, based on the performance of automatic extraction of translation equivalents from the documents aligned by the proposed metrics, which formalises intuitive definitions of comparability for machine translation research. We demonstrate application of our metrics for the task of automatic extraction of parallel and semiparallel translation equivalents and discuss how these resources can be used in the frameworks of statistical and rule-based machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Parallel corpora have been extensively exploited in different ways in machine translation (MT) -both in Statistical (SMT) and more recently, in Rule-Based (RBMT) architectures: in SMT aligned parallel resources are used for building translation phrase tables and calculating translation probabilities; and in RBMT, they are used for automatically building bilingual dictionaries of translation equivalents and automatically deriving bilingual mappings for frequent structural patterns. However, large parallel resources are not always available, especially for under-resourced languages or narrow domains. 
Therefore, in recent years, the use of cross-lingual comparable corpora has attracted considerable attention in the MT community (Sharoff et al., 2006; Fung and Cheung, 2004a; Munteanu and Marcu, 2005; Babych et al., 2008) .", "cite_spans": [ { "start": 735, "end": 757, "text": "(Sharoff et al., 2006;", "ref_id": "BIBREF24" }, { "start": 758, "end": 781, "text": "Fung and Cheung, 2004a;", "ref_id": "BIBREF5" }, { "start": 782, "end": 807, "text": "Munteanu and Marcu, 2005;", "ref_id": "BIBREF16" }, { "start": 808, "end": 828, "text": "Babych et al., 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most of the applications of comparable corpora focus on discovering translation equivalents to support machine translation, such as bilingual lexicon extraction (Rapp, 1995; Rapp, 1999; Morin et al., 2007; Yu and Tsujii, 2009; Li and Gaussier, 2010; Prachasson and Fung, 2011) , parallel phrase extraction (Munteanu and Marcu, 2006) , and parallel sentence extraction (Fung and Cheung, 2004b; Munteanu and Marcu, 2005; Munteanu et al., 2004; Smith et al., 2010) .", "cite_spans": [ { "start": 161, "end": 173, "text": "(Rapp, 1995;", "ref_id": "BIBREF20" }, { "start": 174, "end": 185, "text": "Rapp, 1999;", "ref_id": "BIBREF21" }, { "start": 186, "end": 205, "text": "Morin et al., 2007;", "ref_id": "BIBREF14" }, { "start": 206, "end": 226, "text": "Yu and Tsujii, 2009;", "ref_id": "BIBREF27" }, { "start": 227, "end": 249, "text": "Li and Gaussier, 2010;", "ref_id": "BIBREF10" }, { "start": 250, "end": 276, "text": "Prachasson and Fung, 2011)", "ref_id": null }, { "start": 306, "end": 332, "text": "(Munteanu and Marcu, 2006)", "ref_id": "BIBREF15" }, { "start": 368, "end": 392, "text": "(Fung and Cheung, 2004b;", "ref_id": "BIBREF6" }, { "start": 393, "end": 418, "text": "Munteanu and Marcu, 2005;", "ref_id": "BIBREF16" }, { "start": 419, "end": 441, "text": "Munteanu et al., 2004;", "ref_id": "BIBREF17" }, { "start": 442, "end": 461, "text": "Smith et al., 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Comparability between documents is often understood as belonging to the same subject domain, genre or text type, so this definition relies on these vague linguistic concepts. The problem with this definition then is that it cannot be exactly benchmarked, since it becomes hard to relate automated measures of comparability to such inexact and unmeasurable linguistic concepts. Research on comparable corpora needs not only good measures for comparability, but also a clearer, technologicallygrounded and quantifiable definition of comparability in the first place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we relate comparability to usefulness of comparable texts for MT. In particular, we propose a performance-based definition of comparability, as the possibility to extract parallel or quasi-parallel translation equivalents -words, phrases and sentences which are translations of each other. This definition directly relates comparability to texts' potential to improve the quality of MT by adding extracted phrases to phrase tables, training corpus or dictionaries. 
It also can be quantified as the rate of successful extraction of translation equivalents by automated tools, such as proposed in Munteanu and Marcu (2006) .", "cite_spans": [ { "start": 609, "end": 634, "text": "Munteanu and Marcu (2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Still, successful detection of translation equivalents from comparable corpora very much de-pends on the quality of these corpora, specifically on the degree of their textual equivalence and successful alignment on various text units. Therefore, the goal of this work is to provide comparability metrics which can reliably identify crosslingual comparable documents from raw corpora crawled from the Web, and characterize the degree of their similarity, which enriches comparable corpora with the document alignment information, filters out documents that are not useful and eventually leads to extraction of good-quality translation equivalents from the corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To achieve this goal, we need to define a scale to assess comparability qualitatively, metrics to measure comparability quantitatively, and the sources to get comparable corpora from. In this work, we directly characterize comparability by how useful comparable corpora are for the task of detecting translation equivalents in them, and ultimately to machine translation. We focus on document-level comparability, and use three categories for qualitative definition of comparability levels, defined in terms of granularity for possible alignment:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Parallel: Traditional parallel texts that are translations of each other or approximate translations with minor variations, which can be aligned on the sentence level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Strongly-comparable: Texts that talk about the same event or subject, but in different languages. For example, international news about oil spill in the Gulf of Mexico, or linked articles in Wikipedia about the same topic. These documents can be aligned on the document level on the basis of their origin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Weakly-comparable: Texts in the same subject domain which describe different events. For example, customer reviews about hotel and restaurant in London. These documents do not have an independent alignment across languages, but sets of texts can be aligned on the basis of belonging to the same subject domain or sub-domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present three different approaches to measure the comparability of crosslingual (especially under-resourced languages) comparable documents: a lexical mapping based approach, a keyword based approach, and a machine translation based approach. The experimental results show that all of them can effectively predict the comparability levels of the compared document pairs. We then further investigate the applicability of the proposed metrics by measuring their impact on the task of parallel phrase extraction from comparable corpora. 
It turns out that, higher comparability level predicted by the metrics consistently lead to more number of parallel phrase extracted from comparable documents. Thus, the metrics can help select more comparable document pairs to improve the performance of parallel phrase extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. Section 2 discusses previous work. Section 3 introduces our comparability metrics. Section 4 presents the experimental results and evaluation. Section 5 describes the application of the metrics. Section 6 discusses the pros and cons of the proposed metrics, followed by conclusions and future work in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The term \"comparability\", which is the key concept in this work, applies to the level of corpora, documents and sub-document units. However, so far there is no widely accepted definition of comparability. For example, there is no agreement on the degree of similarity that documents in comparable corpora should have or on the criteria for measuring comparability. Also, most of the work that performs translation equivalent extraction in comparable corpora usually assumes that the corpora they use are reliably comparable and focuses on the design of efficient extraction algorithms. Therefore, there has been very little literature discussing the characteristics of comparable corpora (Maia, 2003) . In this section, we introduce some representative work which tackles comparability metrics.", "cite_spans": [ { "start": 688, "end": 700, "text": "(Maia, 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some studies (Sharoff, 2007; Maia, 2003; McEnery and Xiao, 2007) analyse comparability by assessing corpus composition, such as structural criteria (e.g., format and size), and linguistic criteria (e.g., topic, domain, and genre). Kilgarriff and Rose (1998) measure similarity and homogeneity between monolingual corpora. They generate word frequency list from each corpus and then apply \u03c7 2 statistic on the most frequent n (e.g., 500) words of the compared corpora.", "cite_spans": [ { "start": 13, "end": 28, "text": "(Sharoff, 2007;", "ref_id": "BIBREF23" }, { "start": 29, "end": 40, "text": "Maia, 2003;", "ref_id": "BIBREF12" }, { "start": 41, "end": 64, "text": "McEnery and Xiao, 2007)", "ref_id": "BIBREF13" }, { "start": 231, "end": 257, "text": "Kilgarriff and Rose (1998)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The work which deals with comparability measures in cross-lingual comparable corpora is closer to our work. Saralegi et al. (2008) measure the degree of comparability of comparable corpora (English and Basque) according to the distribution of topics and publication dates of documents. They compute content similarity for all the document pairs between two corpora. These similarity scores are then input as parameters for the EMD (Earth Mover's Distance) distance measure, which is employed to calculate the global compatibility of the corpora. Munteanu and Marcu (2005; select more comparable document pairs in a cross-lingual information retrieval based manner by using a toolkit called Lemur 1 . 
The retrieved document pairs then serve as input for the tasks of parallel sentence and sub-sentence extraction. Smith et al. (2010) treat Wikipedia as a comparable corpus and use \"interwiki\" links to identify aligned comparable document pairs for the task of parallel sentence extraction. Li and Gaussier (2010) propose a comparability metric which can be applied at both document level and corpus level and use it as a measure to select more comparable texts from other external sources into the original corpora for bilingual lexicon extraction. The metric measures the proportion of words in the source language corpus translated in the target language corpus by looking up a bilingual dictionary. They evaluate the metric on the rich-resourced English-French language pair, thus good dictionary resources are available. However, this is not the case for under-resourced languages in which reliable language resources such as machine-readable bilingual dictionaries with broad word coverage or word lemmatizers might be not publicly available.", "cite_spans": [ { "start": 108, "end": 130, "text": "Saralegi et al. (2008)", "ref_id": "BIBREF22" }, { "start": 546, "end": 571, "text": "Munteanu and Marcu (2005;", "ref_id": "BIBREF16" }, { "start": 813, "end": 832, "text": "Smith et al. (2010)", "ref_id": "BIBREF25" }, { "start": 990, "end": 1012, "text": "Li and Gaussier (2010)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To measure the comparability degree of document pairs in different languages, we need to translate the texts or map lexical items from the source language into the target languages so that we can compare them within the same language. Usually this can be done by using bilingual dictionaries (Rapp, 1999; Li and Gaussier, 2010; Prachasson and Fung, 2011) or existing machine translation tools. Based on this process, in this section we present three different approaches to measure the 1 Available at http://www.lemurproject.org/ comparability of comparable documents.", "cite_spans": [ { "start": 292, "end": 304, "text": "(Rapp, 1999;", "ref_id": "BIBREF21" }, { "start": 305, "end": 327, "text": "Li and Gaussier, 2010;", "ref_id": "BIBREF10" }, { "start": 328, "end": 354, "text": "Prachasson and Fung, 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparability Metrics", "sec_num": "3" }, { "text": "It is straightforward that we expect a bilingual dictionary can be used for lexical mapping between a language pair. However, unlike the language pairs in which both languages are rich-resourced (e.g., English-French, or English-Spanish) and dictionary resources are relatively easy to obtain, it is likely that bilingual dictionaries with good word coverage are not publicly available for underresourced languages (e.g., English-Slovenian, or English-Lithuanian). In order to address this problem, we automatically construct dictionaries by using word alignment on large-scale parallel corpora (e.g., Europarl and JRC-Acquis 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical mapping based metric", "sec_num": "3.1" }, { "text": "Specifically, GIZA++ toolkit (Och and Ney, 2000) with default setting is used for word alignment on the JRC-Acquis parallel corpora (Steinberger et al., 2006) . The aligned word pairs together with the alignment probabilities are then converted into dictionary entries. 
For example, in Estonian-English language pair, the alignment example \"kompanii company 0.625\" in the word alignment table means the Estonian word \"kompanii\" can be translated as (or aligned with) the English candidate word \"company\" with a probability of 0.625. In the dictionary, the translation candidates are ranked by translation probability in descending order. Note that the dictionary collects inflectional form of words, but not only base form of words. This is because the dictionary is directly generated from the word alignment results and no further word lemmatization is applied.", "cite_spans": [ { "start": 29, "end": 48, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF18" }, { "start": 132, "end": 158, "text": "(Steinberger et al., 2006)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical mapping based metric", "sec_num": "3.1" }, { "text": "Using the resulting dictionary, we then perform lexical mapping in a word-for-word mapping strategy. We scan each word in the source language texts to check if it occurs in the dictionary entries. If so, the first translation candidate are recorded as the corresponding mapping word. If there are more than one translation candidate, the second candidate will also be kept as the mapping result if its translation probability is higher than 0.3 3 . For non-English and English language pair, the non-English texts are mapped into English. If both languages are non-English (e.g., Greek-Romanian), we use English as a pivot langauge and map both the source and target language texts into English 4 . Due to the lack of reliable linguistic resources in non-English languages, mapping texts from non-English language into English can avoid language processing in non-English texts and allows us to make use of the rich resources in English for further text processing, such as stop-word filtering and word lemmatization 5 . Finally, cosine similarity measure is applied to compute the comparability strength of the compared document pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical mapping based metric", "sec_num": "3.1" }, { "text": "The lexical mapping based metric takes all the words in the text into account for comparability measure, but if we only retain a small number of representative words (keywords) and discard all the other less informative words in each document, can we judge the comparability of a document pair by comparing these words? Our intuition is that, if two document share more keywords, they should be more comparable. To validate this, we then perform keyword extraction by using a simple TFIDF based approach, which has been shown effective for keyword or keyphrase extraction from the texts (Frank et al., 1999; Hulth, 2003; Liu et al., 2009) .", "cite_spans": [ { "start": 587, "end": 607, "text": "(Frank et al., 1999;", "ref_id": "BIBREF4" }, { "start": 608, "end": 620, "text": "Hulth, 2003;", "ref_id": "BIBREF7" }, { "start": 621, "end": 638, "text": "Liu et al., 2009)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Keyword based metric", "sec_num": "3.2" }, { "text": "More specifically, the keyword based metric can be described as below. First, similar to the lexical mapping based metric, bilingual dictionaries are used to map non-English texts into English. Thus, only the English resources are applied for stop-word filtering and word lemmatization, which are useful text preprocessing steps for keyword extraction. 
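For illustration, a minimal sketch of this dictionary-based mapping step (shared by the lexical mapping based metric and the keyword based metric) is given below; it assumes a dictionary that maps each source-language word to a list of (translation, probability) candidates sorted by descending probability, and the function and variable names are purely illustrative.

def map_tokens_to_english(tokens, dictionary, second_threshold=0.3):
    # Word-for-word mapping of source-language tokens into English.
    # 'dictionary' maps a source word to [(translation, probability), ...],
    # sorted by probability in descending order (built from GIZA++ alignments).
    mapped = []
    for token in tokens:
        candidates = dictionary.get(token)
        if not candidates:
            continue  # out-of-vocabulary words are simply omitted
        mapped.append(candidates[0][0])  # the first candidate is always kept
        if len(candidates) > 1 and candidates[1][1] > second_threshold:
            mapped.append(candidates[1][0])  # keep the second candidate only if reliable enough
    return mapped
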
We then use TFIDF to measure the weight of words in the document and rank the words by their TFIDF weights in descending order. The top n (e.g., 30) words are extracted as keywords to represent the document. Finally, the comparability of each document pair is determined by applying cosine similarity to their key-word lists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyword based metric", "sec_num": "3.2" }, { "text": "Bilingual dictionary is used for word-for-word translation in the lexical mapping based metric and words which do not occur in the dictionary will be omitted. Thus, the mapping result is like a list of isolated words and information such as word order, syntactic structure and named entities can not be preserved. Therefore, in order to improve the text translation quality, we turn to the state-of-the-art SMT systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "In practice, we use Microsoft translation API 6 to translate texts in under-resourced languages (e.g, Lithuanian and Slovenian) into English and then explore several features for comparability metric design, which are listed as below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "\u2022 Lexical feature: Lemmatized bag-of-word representation of each document after stopword filtering. Lexical similarity (denoted by W L ) of each document pair is then obtained by applying cosine measure to the lexical feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "\u2022 Structure feature: We approximate it by the number of content words (adjectives, adverbs, nouns, verbs and proper nouns) and the number of sentences in each document, denoted by C D and S D respectively. The intuition is that, if two documents are highly comparable, their number of content words and their document length should be similar. The structure similarity (denoted by W S ) of two documents D1 and D2 is defined as bellow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "W S = 0.5 * (C D1 /C D2 ) + 0.5 * (S D1 /S D2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "suppose that C D1 <=C D2 , and S D1 <=S D2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "\u2022 Keyword feature: Top-20 words (ranked by TFIDF weight) of each document. keyword similarity (denoted by W K ) of two documents is also measured by cosine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "\u2022 Named entity feature: Named entities of each document. If more named entities cooccur in two documents, they are very likely to talk about the same event or subject and thus should be more comparable. We use Stanford named entity recognizer 7 to extract named entities from the texts (Finkel et al., 2005) . 
Again, cosine is then applied to measure the similarity of named entities (denoted by W N ) between a document pair.", "cite_spans": [ { "start": 286, "end": 307, "text": "(Finkel et al., 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "We then combine these four different types of score in an ensemble manner. Specifically, a weighted average strategy is applied: each individual score is associated with a constant weight, indicating the relative confidence (importance) of the corresponding type of score. The overall comparability score (denoted by SC) of a document pair is thus computed as below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "SC = \u03b1 * W L + \u03b2 * W S + \u03b3 * W K + \u03b4 * W N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "where \u03b1, \u03b2, \u03b3, and \u03b4 \u2208 [0, 1], and \u03b1 + \u03b2 + \u03b3 + \u03b4 = 1. SC should be a value between 0 and 1, and larger SC value indicates higher comparability level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation based metrics", "sec_num": "3.3" }, { "text": "To investigate the reliability of the proposed comparability metrics, we perform experiments for 6 language pairs which contain underresoured languages: German-English (DE-EN), Estonian-English (ET-EN), Lithuanian-English (LT-EN), Latvian-English (LV-EN), Slovenian-English (SL-EN) and Greek-Romanian (EL-RO). A comparable corpus is collected for each language pair. Based on the definition of comparability levels (see Section 1), human annotators fluent in both languages then manually annotated the comparability degree (parallel, stronglycomparable, and weakly-comparable) at the document level. Hence, these bilingual comparable corpora are used as gold standard for experiments. The data distribution for each language pair, i.e., number of document pairs in each comparability level, is given in Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data source", "sec_num": "4.1" }, { "text": "We adopt a simple method for evaluation. For each language pair, we compute the average scores for all the document pairs in the same comparability level, and compare them to the gold Table 1 : Data distribution of gold standard corpora standard comparability labels. In addition, in order to better reveal the relation between the scores obtained from the proposed metrics and comparability levels, we also measure the Pearson correlation between them 8 . For the keyword based metric, top 30 keywords are extracted from each text for experiment. For the machine translation based metric, we empirically set \u03b1 = 0.5, \u03b2 = \u03b3 = 0.2, and \u03b4 = 0.1. This is based on the assumption that, lexical feature can best characterize the comparability given the good translation quality provided by the powerful MT system, while keyword and named entity features are also better indicators of comparability than the simple document length information. 
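To make the combination explicit, the sketch below computes the structure similarity and the overall score SC for one document pair under these weight settings; it assumes the four feature similarities have already been obtained, and the function names are illustrative only.

def structure_similarity(content_words_1, sentences_1, content_words_2, sentences_2):
    # W_S = 0.5 * (C_D1 / C_D2) + 0.5 * (S_D1 / S_D2), with the smaller count
    # always in the numerator so that each ratio stays within [0, 1].
    c_ratio = min(content_words_1, content_words_2) / max(content_words_1, content_words_2)
    s_ratio = min(sentences_1, sentences_2) / max(sentences_1, sentences_2)
    return 0.5 * c_ratio + 0.5 * s_ratio

def combined_score(w_l, w_s, w_k, w_n, alpha=0.5, beta=0.2, gamma=0.2, delta=0.1):
    # SC = alpha*W_L + beta*W_S + gamma*W_K + delta*W_N, with the weights summing to 1;
    # the defaults are the empirical settings used in our experiments.
    return alpha * w_l + beta * w_s + gamma * w_k + delta * w_n

For instance, with W_L=0.8, W_S=0.9, W_K=0.7 and W_N=0.6, combined_score returns 0.78 under the default weights.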
The results for the lexical mapping based metric, the keyword based metric and the machine translation based metric are listed in Table 2 ably reflect the comparability levels across different language pairs, as the average scores for higher comparable levels are always significantly larger than those of lower comparable levels, namely SC(parallel)>SC(stronglycomparable)>SC(weakly-comparable). In addition, in all the three metrics, the Pearson correlation scores are very high (over 0.93) across different language pairs, which indicate that there is strong correlation between the comparability scores obtained from the metrics and the corresponding comparability level. Moreover, from the comparison of Table 2 , 3, and 4, we also have several other findings. Firstly, the performance of keyword based metric (see Table 3 ) is comparable to the lexical mapping based metric (see Table 2 ) as their comparability scores for the corresponding comparability levels are similar. This means it is reasonable to determine the comparability level by only comparing a small number of keywords of the texts. Secondly, the scores obtained from the machine translation based metric (see Table 4 ) are significantly higher than those in both the lexical mapping based metric and the keyword based metric. Clearly, this is due to the advantages of using the state-of-theart MT system. In comparison to the approach of using dictionary for word-for-word mapping, it can provide much better text translation which allows detecting more proportion of lexical over-lapping and mining more useful features in the translated texts. Thirdly, in the lexical mapping based metric and keyword based metric, we can also see that, although the average scores for EL-RO (both under-resourced languages) conform to the comparability levels, they are much lower than those of the other 5 language pairs. The reason is that, the size of the parallel corpora in JRC-Acquis for these 5 language pairs are significantly larger (over 1 million parallel sentences) than that of EL-EN, RO-EN 9 , and EL-RO, thus the resulting dictionaries of these 5 language pairs also contain many more dictionary entries.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 1", "ref_id": null }, { "start": 1068, "end": 1075, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1647, "end": 1654, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1758, "end": 1765, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1823, "end": 1830, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 2120, "end": 2127, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4.2" }, { "text": "The experiments in Section 4 confirm the reliability of the proposed metrics. The comparability metrics are thus useful for collecting highquality comparable corpora, as they can help filter out weakly comparable or non-comparable document pairs from the raw crawled corpora. But are they also useful for other NLP tasks, such as translation equivalent detection from comparable corpora? In this section, we further measure the impact of the metrics on parallel phrase extraction (PPE) from comparable corpora. 
Our intuition is that, if document pairs are assigned higher comparability scores by the metrics, they should be more comparable and thus more parallel phrases can be extracted from them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Application", "sec_num": "5" }, { "text": "The algorithm of parallel phrase extraction, which develops the approached presented in Munteanu and Marcu (2006) , uses lexical overlap and structural matching measures (Ion, 2012). Taking a list of bilingual comparable document pairs as input, the extraction algorithm involves the following steps.", "cite_spans": [ { "start": 88, "end": 113, "text": "Munteanu and Marcu (2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Application", "sec_num": "5" }, { "text": "1. Split the source and target language documents into phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Application", "sec_num": "5" }, { "text": "2. Compute the degree of parallelism for each candidate pair of phrases by using the bilingual dictionary generated from GIZA++ (base dictionary), and retain all the phrase pairs with a score larger than a predefined parallelism threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Application", "sec_num": "5" }, { "text": "3. Apply GIZA++ to the retained phrase pairs to detect new dictionary entries and add them to the base dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Application", "sec_num": "5" }, { "text": "Step 2 and 3 for several times (empirically set at 5) by using the augmented dictionary, and output the detected phrase pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat", "sec_num": "4." }, { "text": "Phrases which are extracted by this algorithm are frequently not exact translation equivalents. Below we give some English-German examples of extracted equivalents with their corresponding alignment scores: Even though some of the extracted phrases are not exact translation equivalents, they may still be useful resources both for SMT and RBMT if these phrases are passed through an extra preprocessing stage, of if the engines are modified specifically to work with semi-parallel translation equivalents extracted from comparable texts. We address this issue in the discussion section (see Section 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat", "sec_num": "4." }, { "text": "For evaluation, we measure how the metrics affect the performance of parallel phrase extraction algorithm on 5 language pairs (DE-EN, ET-EN, LT-EN, LV-EN, and SL-EN). A large raw comparable corpus for each language pair was crawled from the Web, and the metrics were then applied to assign comparability scores to all the document pairs in each corpus. For each language pair, we set three different intervals based on the comparability score (SC) and randomly select 500 document pairs in each interval for evaluation. For the MT based metric, the three intervals are (1) 0.1<=SC<0.3, (2) 0.3<=SC<0.5, and (3) SC>=0.5. For the lexical mapping based metric and keyword based metric, since their scores are lower than those of the MT based metric for each comparability level, we set three lower intervals at (1) 0.1<=SC<0.2, (2) 0.2<=SC<0.4, and (3) SC>=0.4. 
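A rough sketch of this interval-based sampling step is given below (shown with the intervals of the MT based metric; the helper name is illustrative and not part of our toolkit).

import random

# Score intervals for the MT based metric; for the lexical mapping and keyword
# based metrics the lower boundaries 0.1, 0.2 and 0.4 are used instead.
MT_INTERVALS = [(0.1, 0.3), (0.3, 0.5), (0.5, float('inf'))]

def sample_pairs_by_interval(scored_pairs, intervals=MT_INTERVALS, n_per_interval=500):
    # 'scored_pairs' is a list of (document_pair, SC) items; the pairs falling
    # into each interval are collected and 500 of them are randomly sampled.
    buckets = [[] for _ in intervals]
    for pair, sc in scored_pairs:
        for i, (low, high) in enumerate(intervals):
            if low <= sc < high:
                buckets[i].append(pair)
                break
    return [random.sample(bucket, min(n_per_interval, len(bucket))) for bucket in buckets]
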
The experiment focuses on counting the number of extracted parallel phrases with parallelism score>=0.4 10 , and computes the average number of extracted phrases per 100000 words (the sum of words in the source and target language documents) for each interval. In addition, the Pearson correlation measure is also applied to measure the correlation between the interval 11 of comparability scores and the number of extracted parallel phrases. The results which summarize the impact of the three metrics to the performance of parallel phrase extraction are listed in Table 5 , 6, and 7, respectively. Table 6 : Impact of the keyword based metric to parallel phrase extraction From Table 5 , 6, and 7, we can see that for all the 5 language pairs, based on the average number of extracted aligned phrases, clearly we have interval (3)>(2)>(1). In other words, in any of the three metrics, a higher comparability level always leads to significantly more number Table 7 : Impact of the machine translation based metric to parallel phrase extraction of aligned phrases extracted from the comparable documents. Moreover, although the lexical mapping based metric and the keyword based metric produce lower comparability scores than the MT based metric (see Section 4), they have similar impact to the task of parallel phrase extraction. This means, the comparability score itself does not matter much, as long as the metrics are reliable and proper thresholds are set for different metrics.", "cite_spans": [], "ref_spans": [ { "start": 1425, "end": 1432, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 1459, "end": 1466, "text": "Table 6", "ref_id": null }, { "start": 1539, "end": 1546, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 1817, "end": 1824, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Repeat", "sec_num": "4." }, { "text": "In all the three metrics, the Pearson correlation scores are very close to 1 for all the language pairs, which indicate that the intervals of comparability scores obtained from the metrics are in line with the performance of equivalent extraction algorithm. Therefore, in order to extract more parallel phrases (or other translation equivalents) from comparable corpora, we can try to improve the corpus comparability by applying the comparability metrics beforehand to add highly comparable document pairs in the corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeat", "sec_num": "4." }, { "text": "We have presented three different approaches to measure comparability at the document level. In this section, we will analyze the advantages and limitations of the proposed metrics, and the feasibility of using semi-parallel equivalents in MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Using bilingual dictionary for lexical mapping is simple and fast. However, as it adopts the wordfor-word mapping strategy and out-of-vocabulary (OOV) words are omitted, the linguistic structure of the original texts is badly hurt after mapping. Thus, apart from lexical information, it is difficult to explore more useful features for the comparability metrics. The TFIDF based keyword extraction approach allows us to select more representative words and prune a large amount of less informative words from the texts. The keywords are usually relevant to subject and domain terms, which is quite useful in judging the comparability of two documents. 
Both the lexical mapping based approach and the keyword based approach use dictionary for lexical translation, thus rely on the availability and completeness of the dictionary resources or large scale parallel corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pros and cons of the metrics", "sec_num": "6.1" }, { "text": "For the machine translation based metric, it provides much better text translation than the dictionary-based approach so that the comparability of two document can be better revealed from the richer lexical information and other useful features, such as named entities. However, the text translation process is expensive, as it depends on the availability of the powerful MT systems 12 and takes much longer than the simple dictionary based translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pros and cons of the metrics", "sec_num": "6.1" }, { "text": "In addition, we use a translation strategy of translating texts from under-resourced (or lessresourced) languages into rich-resourced language. In case that both languages are underresourced languages, English is used as the pivot langauge for translation. This can compensate the shortage of the linguistic resources in the underresourced languages and take advantages of various resources in the rich-resourced languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pros and cons of the metrics", "sec_num": "6.1" }, { "text": "We note that modern SMT and RBMT systems take maximal advantage of strictly parallel phrases, but they still do not use full potential of the semi-parallel translation equivalents, of the type that is illustrated in the application section (see Section 5). Such resources, even though they are not exact equivalents contain useful information which is not used by the systems. In particular, the modern decoders do not work with under-specified phrases in phrase tables, and do not work with factored semantic features. For example, the phrase:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using semi-parallel equivalents in MT systems", "sec_num": "6.2" }, { "text": "But a successful mission -seiner\u00fcberaus erfolgreichen Mission abgebremst", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using semi-parallel equivalents in MT systems", "sec_num": "6.2" }, { "text": "The English side contains the word but, which pre-supposes contrast, and on the Greman side words\u00fcberaus erfolgreichen (\"generally successful\") and abgebremst (\"slowed down\") -which taken together exemplify a contrast, since they have different semantic prosodies. In this example the semantic feature of contrast can be extracted and reused in other contexts. However, this would require the development of a new generation of decoders or rule-based systems which can successfully identify and reuse such subtle semantic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using semi-parallel equivalents in MT systems", "sec_num": "6.2" }, { "text": "The success of extracting good-quality translation equivalents from comparable corpora to improve machine translation performance highly depends on \"how comparable\" the used corpora are. In this paper, we propose three different comparability measures at the document level. The experiments show that all the three approaches can effectively determine the comparability levels of comparable document pairs. 
We also further investigate the impact of the metrics on the task of parallel phrase extraction from comparable corpora. It turns out that higher comparability scores always lead to significantly more parallel phrases extracted from comparable documents. Since better quality of comparable corpora should have better applicability, our metrics can be applied to select highly comparable document pairs for the tasks of translation equivalent extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "7" }, { "text": "In the future work, we will conduct more comprehensive evaluation of the metrics by capturing its impact to the performance of machine translation systems with extended phrase tables derived from comparable corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "7" }, { "text": "The JRC-Acquis covers 22 European languages and provides large-scale parallel corpora for all the 231 language pairs.3 From the manual inspection on the word alignment results, we find that if the alignment probability is higher than 0.3, it is more reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Generally in JRC-Acquis, the size of parallel corpora for most of non-English langauge pairs is much smaller than that of language pairs which contain English. Therefore, the resulting bilingual dictionaries which contain English have better word coverage as they have many more dictionary entries.5 We use WordNet(Fellbaum, 1998) for word lemmatization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available athttp://code.google.com/p/microsofttranslator-java-api/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://nlp.stanford.edu/software/CRF-NER.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For correlation measure, we use numerical calibration to different comparability degrees: \"Parallel\", \"stronglycomparable\" and \"weakly-comparable\" are converted as 3, 2, and 1, respectively. The correlation is then computed between the numerical comparability levels and the corresponding average comparability scores automatically derived from the metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Remember that in our experiment, English is used as the pivot language for non-English langauge pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A manual evaluation of a small set of extracted data shows that parallel phrases with parallelism score >=0.4 are more reliable.11 For the purpose of correlation measure, the three intervals are numerically calibrated as \"1\", \"2\", and \"3\", respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Alternatively, we can also train MT systems for text translation by using the available SMT toolkits (e.g., Moses) on large scale parallel corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Radu Ion at RACAI for providing us the toolkit of parallel phrase extraction, and the three anonymous reviewers for valuable comments. 
This work is supported by the EU funded ACCURAT project (FP7-ICT-2009-4-248347) at the Centre for Translation Studies, University of Leeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generalising Lexical Translation Strategies for MT Using Comparable Corpora", "authors": [ { "first": "Bogdan", "middle": [], "last": "Babych", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Hartley", "suffix": "" } ], "year": 2008, "venue": "Proceedings of LREC 2008", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bogdan Babych, Serge Sharoff and Anthony Hartley. 2008. Generalising Lexical Translation Strategies for MT Using Comparable Corpora. Proceedings of LREC 2008, Marrakech, Morocco.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Looking for candidate translational equivalents in specialized, comparable corpora", "authors": [ { "first": "Yun-Chuang", "middle": [], "last": "Chiao", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING 2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. Looking for candidate translational equivalents in specialized, comparable corpora. Proceedings of COLING 2002, Taipei, Taiwan.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA, USA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling", "authors": [ { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL 2005", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Finkel, Trond Grenager, and Christopher Man- ning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sam- pling. Proceedings of ACL 2005, University of Michigan, Ann Arbor, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Domain-specific keyphrase extraction", "authors": [ { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Gordon", "middle": [], "last": "Paynter", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Witten", "suffix": "" } ], "year": 1999, "venue": "Proceedings of IJCAI 1999", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eibe Frank, Gordon Paynter and Ian Witten. 1999. Domain-specific keyphrase extraction. 
Proceedings of IJCAI 1999, Stockholm, Sweden.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Mining very non-parallel corpora: Parallel sentence and lexicon extraction via bootstrapping and EM", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung and Percy Cheung. 2004a. Mining very non-parallel corpora: Parallel sentence and lexicon extraction via bootstrapping and EM. Proceedings of EMNLP 2004, Barcelona, Spain.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi-level bootstrapping for extracting parallel sentences from a quasicomparable corpus", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COL-ING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung and Percy Cheung. 2004b. Multi-level bootstrapping for extracting parallel sentences from a quasicomparable corpus. Proceedings of COL- ING 2004, Geneva, Switzerland.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improved Automatic Keyword Extraction Given More Linguistic Knowledge. Proceedings of EMNLP", "authors": [ { "first": "Anette", "middle": [], "last": "Hulth", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anette Hulth. 2003. Improved Automatic Keyword Extraction Given More Linguistic Knowledge. Pro- ceedings of EMNLP 2003, Sapporo, Japan.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "PEXACC: A Parallel Data Mining Algorithm from Comparable Corpora", "authors": [ { "first": "", "middle": [], "last": "Radu Ion", "suffix": "" } ], "year": 2012, "venue": "Proceedings of LREC 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radu Ion. 2012. PEXACC: A Parallel Data Mining Algorithm from Comparable Corpora. Proceedings of LREC 2012, Istanbul, Turkey.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Measures for corpus similarity and homogeneity", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Rose", "suffix": "" } ], "year": 1998, "venue": "Proceedings of EMNLP 1998", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff and Tony Rose. 1998. Measures for corpus similarity and homogeneity. Proceedings of EMNLP 1998, Granada, Spain.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving corpus comparability for bilingual lexicon extraction from comparable corpora", "authors": [ { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Gaussier", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COL-ING 2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Li and Eric Gaussier. 2010. Improving cor- pus comparability for bilingual lexicon extraction from comparable corpora. 
Proceedings of COL- ING 2010, Beijing, China.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised Approaches for Automatic Keyword Extraction Using Meeting Transcripts", "authors": [ { "first": "Feifan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Deana", "middle": [], "last": "Pennell", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL 2009", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feifan Liu, Deana Pennell, Fei Liu and Yang Liu. 2009. Unsupervised Approaches for Automatic Keyword Extraction Using Meeting Transcripts. Proceedings of NAACL 2009, Boulder, Colorado, USA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "What are comparable corpora? Proceedings of the Corpus Linguistics workshop on Multilingual Corpora: Linguistic requirements and technical perspectives", "authors": [ { "first": "Belinda", "middle": [], "last": "Maia", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Belinda Maia. 2003. What are comparable corpora? Proceedings of the Corpus Linguistics workshop on Multilingual Corpora: Linguistic requirements and technical perspectives, 2003, Lancaster, U.K.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Parallel and comparable corpora?", "authors": [ { "first": "Anthony", "middle": [], "last": "Mcenery", "suffix": "" }, { "first": "Zhonghua", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2007, "venue": "Incorporating Corpora: Translation and the Linguist. Translating Europe. Multilingual Matters", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony McEnery and Zhonghua Xiao. 2007. Par- allel and comparable corpora? In Incorporating Corpora: Translation and the Linguist. Translating Europe. Multilingual Matters, Clevedon, UK.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bilingual terminology mining -using brain, not brawn comparable corpora", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Daille", "suffix": "" }, { "first": "Korchi", "middle": [], "last": "Takeuchi", "suffix": "" }, { "first": "Kyo", "middle": [], "last": "Kageura", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL 2007", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Morin, Beatrice Daille, Korchi Takeuchi and Kyo Kageura. 2007. Bilingual terminology mining -using brain, not brawn comparable cor- pora. Proceedings of ACL 2007, Prague, Czech Re- public.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Extracting parallel sub-sentential fragments from nonparallel corpora", "authors": [ { "first": "Dragos", "middle": [], "last": "Munteanu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL 2006, Syndey", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos Munteanu and Daniel Marcu. 2006. Ex- tracting parallel sub-sentential fragments from non- parallel corpora. 
Proceedings of ACL 2006, Syn- dey, Australia.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving machine translation performance by exploiting non-parallel corpora", "authors": [ { "first": "Dragos", "middle": [], "last": "Munteanu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "4", "pages": "477--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos Munteanu and Daniel Marcu. 2005. Improv- ing machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4): 477-504.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improved machine translation performance via parallel sentence extraction from comparable corpora", "authors": [ { "first": "Dragos", "middle": [], "last": "Munteanu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos Munteanu, Alexander Fraser and Daniel Marcu. 2004. Improved machine translation performance via parallel sentence extraction from comparable corpora. Proceedings of HLT-NAACL 2004, Boston, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "Franz", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ACL 2000", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Och and Hermann Ney. 2000. Improved Statis- tical Alignment Models. Proceedings of ACL 2000, Hongkong, China.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rare Word Translation Extraction from Aligned Comparable Documents", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Prochasson", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL-HLT 2011", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Prochasson and Pascale Fung. 2011. Rare Word Translation Extraction from Aligned Compa- rable Documents. Proceedings of ACL-HLT 2011, Portland, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Identifying Word Translation in Non-Parallel Texts", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 1995, "venue": "Proceedings of ACL 1995", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Rapp. 1995. Identifying Word Translation in Non-Parallel Texts. Proceedings of ACL 1995, Cambridge, Massachusetts, USA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Automatic identification of word translations from unrelated English and German corpora", "authors": [ { "first": "Reinhard", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL 1999", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and Ger- man corpora. 
Proceedings of ACL 1999, College Park, Maryland, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic Extraction of Bilingual Terms from Comparable Corpora in a Popular Science Domain", "authors": [ { "first": "Xabier", "middle": [], "last": "Saralegi", "suffix": "" }, { "first": "Inaki", "middle": [], "last": "Vicente", "suffix": "" }, { "first": "Antton", "middle": [], "last": "Gurrutxaga", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Workshop on Comparable Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xabier Saralegi, Inaki Vicente and Antton Gurrutxaga. 2008. Automatic Extraction of Bilingual Terms from Comparable Corpora in a Popular Science Domain. Proceedings of the Workshop on Compa- rable Corpora, LREC 2008, Marrakech, Morocco.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Classifying Web corpora into domain and genre using automatic feature identification", "authors": [ { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "" } ], "year": 2007, "venue": "Proceedings of 3rd Web as Corpus Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Serge Sharoff. 2007. Classifying Web corpora into domain and genre using automatic feature identifi- cation. Proceedings of 3rd Web as Corpus Work- shop, Louvain-la-Neuve, Belgium.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Using Comparable Corpora to Solve Problems Difficult for Human Translators", "authors": [ { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "Bogdan", "middle": [], "last": "Babych", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Hartley", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL 2006", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Serge Sharoff, Bogdan Babych and Anthony Hartley. 2006. Using Comparable Corpora to Solve Prob- lems Difficult for Human Translators. Proceedings of ACL 2006, Syndey, Australia.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment", "authors": [ { "first": "Jason", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL 2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Smith, Chris Quirk and Kristina Toutanova. 2010. Extracting Parallel Sentences from Compa- rable Corpora using Document Level Alignment. Proceedings of NAACL 2010, Los Angeles, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages", "authors": [ { "first": "Ralf", "middle": [], "last": "Steinberger", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Pouliquen", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Widiger", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC 2006", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat and Dan Tufis. 2006. The JRC- Acquis: A multilingual aligned parallel corpus with 20+ languages. 
Proceedings of LREC 2006, Genoa, Italy.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Extracting bilingual dictionary from comparable corpora with dependency heterogeneity", "authors": [ { "first": "Kun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Proceedings of HLT-NAACL 2009", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Yu and Junichi Tsujii. 2009. Extracting bilingual dictionary from comparable corpora with depen- dency heterogeneity. Proceedings of HLT-NAACL 2009, Boulder, Colorado, USA.", "links": null } }, "ref_entries": { "TABREF2": { "html": null, "num": null, "text": "", "type_str": "table", "content": "
Table 2: Average comparability scores for lexical mapping based metric
Overall, from the average scores for each comparability level presented in Tables 2, 3, and 4, we can see that the scores obtained from the three comparability metrics can reli-
" }, "TABREF3": { "html": null, "num": null, "text": "", "type_str": "table", "content": "
Table 3: Average comparability scores for keyword based metric
Language pair | parallel | strongly-comparable | weakly-comparable | correlation
DE-EN | 0.912 | 0.622 | 0.326 | 0.999
ET-EN | 0.765 | 0.547 | 0.310 | 0.999
LT-EN | 0.755 | 0.613 | 0.308 | 0.984
LV-EN | 0.770 | 0.627 | 0.236 | 0.966
SL-EN | 0.779 | 0.582 | 0.373 | 0.988
EL-RO | 0.863 | 0.446 | 0.214 | 0.988
" }, "TABREF4": { "html": null, "num": null, "text": "", "type_str": "table", "content": "
Table 4: Average comparability scores for machine translation based metric
" }, "TABREF6": { "html": null, "num": null, "text": "", "type_str": "table", "content": "
Table 5: Impact of the lexical mapping based metric to parallel phrase extraction
Language pair | 0.1<=SC<0.2 | 0.2<=SC<0.4 | SC>=0.4 | correlation
DE-EN | 1007 | 1340 | 2151 | 0.972
ET-EN | 438 | 650 | 1050 | 0.984
LT-EN | 306 | 442 | 765 | 0.973
LV-EN | 600 | 966 | 1722 | 0.980
SL-EN | 715 | 1026 | 1854 | 0.967
" } } } }