{
"paper_id": "I13-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:15.591724Z"
},
"title": "Ranking Translation Candidates Acquired from Comparable Corpora",
"authors": [
{
"first": "Rima",
"middle": [],
"last": "Harastani",
"suffix": "",
"affiliation": {
"laboratory": "LINA UMR CNRS 6241",
"institution": "University of Nantes",
"location": {
"addrLine": "2 rue de la Houssini\u00e8re",
"postBox": "BP 92208",
"postCode": "44322",
"settlement": "Nantes",
"country": "France"
}
},
"email": "rima.harastani@univ-nantes.fr"
},
{
"first": "B\u00e9atrice",
"middle": [],
"last": "Daille",
"suffix": "",
"affiliation": {
"laboratory": "LINA UMR CNRS 6241",
"institution": "University of Nantes",
"location": {
"addrLine": "2 rue de la Houssini\u00e8re",
"postBox": "BP 92208",
"postCode": "44322",
"settlement": "Nantes",
"country": "France"
}
},
"email": "beatrice.daille@univ-nantes.fr"
},
{
"first": "Emmanuel",
"middle": [],
"last": "Morin",
"suffix": "",
"affiliation": {
"laboratory": "LINA UMR CNRS 6241",
"institution": "University of Nantes",
"location": {
"addrLine": "2 rue de la Houssini\u00e8re",
"postBox": "BP 92208",
"postCode": "44322",
"settlement": "Nantes",
"country": "France"
}
},
"email": "emmanuel.morin@univ-nantes.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Domain-specific bilingual lexicons extracted from domain-specific comparable corpora provide, for one term, a list of ranked translation candidates. This study proposes to re-rank these translation candidates. We suggest that a term and its translation appear in comparable sentences that can be extracted from domain-specific comparable corpora. For a source term and a list of translation candidates, we propose a method to identify and align the best source and target sentences that contain the term and its translation candidates. We report results for two language pairs (French-English and French-German) using domain-specific comparable corpora. Our method significantly improves the top 1, top 5 and top 10 precisions of a domain-specific bilingual lexicon, and thus provides better user-oriented results.",
"pdf_parse": {
"paper_id": "I13-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Domain-specific bilingual lexicons extracted from domain-specific comparable corpora provide, for one term, a list of ranked translation candidates. This study proposes to re-rank these translation candidates. We suggest that a term and its translation appear in comparable sentences that can be extracted from domain-specific comparable corpora. For a source term and a list of translation candidates, we propose a method to identify and align the best source and target sentences that contain the term and its translation candidates. We report results for two language pairs (French-English and French-German) using domain-specific comparable corpora. Our method significantly improves the top 1, top 5 and top 10 precisions of a domain-specific bilingual lexicon, and thus provides better user-oriented results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Comparable corpora have been the subject of interest for extracting bilingual lexicons by several researchers (Rapp, 1995; Fung and Mckeown, 1997; Rapp, 1999; Koehn and Knight, 2002; Morin et al., 2008; Bouamor et al., 2013, among others) . Rapp (1995) was the first to suggest that if a word A co-occurs frequently with another word B in one language, then the translation of A and the translation of B should co-occur frequently in another language. Approaches emerging from (Rapp, 1995) make different assumptions to extract a bilingual lexicon from comparable corpora. However, they are all based on the assumption that a translation pair shares some similar context in comparable corpora. We refer to such approaches, which depend on co-occurrences of words to extract a bilingual lexicon, as distributional approaches. Results obtained from distributional approaches vary according to many parameters. For example, one of the parameters that impacts the performance of distributional approaches is the way the context of a word is defined. Various approaches defined contexts differently: windows (Rapp, 1999) , sentences or paragraphs (Fung and Mckeown, 1997) , or by taking into consideration syntax dependencies based on POS tags (Gamallo, 2007) . However, the most common way the context of a word is defined is by choosing words within windows centered around the word (Laroche and Langlais, 2010) , usually of small sizes (e.g. a window of size 3 is used by Rapp (1999)).",
"cite_spans": [
{
"start": 110,
"end": 122,
"text": "(Rapp, 1995;",
"ref_id": "BIBREF15"
},
{
"start": 123,
"end": 146,
"text": "Fung and Mckeown, 1997;",
"ref_id": "BIBREF3"
},
{
"start": 147,
"end": 158,
"text": "Rapp, 1999;",
"ref_id": "BIBREF16"
},
{
"start": 159,
"end": 182,
"text": "Koehn and Knight, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 183,
"end": 202,
"text": "Morin et al., 2008;",
"ref_id": "BIBREF12"
},
{
"start": 203,
"end": 238,
"text": "Bouamor et al., 2013, among others)",
"ref_id": null
},
{
"start": 241,
"end": 252,
"text": "Rapp (1995)",
"ref_id": "BIBREF15"
},
{
"start": 477,
"end": 489,
"text": "(Rapp, 1995)",
"ref_id": "BIBREF15"
},
{
"start": 1098,
"end": 1110,
"text": "(Rapp, 1999)",
"ref_id": "BIBREF16"
},
{
"start": 1137,
"end": 1161,
"text": "(Fung and Mckeown, 1997)",
"ref_id": "BIBREF3"
},
{
"start": 1234,
"end": 1249,
"text": "(Gamallo, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 1375,
"end": 1403,
"text": "(Laroche and Langlais, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 1465,
"end": 1476,
"text": "Rapp (1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain-specific comparable corpora have been used for bilingual terminology extraction. These corpora are of modest sizes since large domain-specific corpora are not available for many domains (Morin et al., 2008) . As a matter of fact, distributional approaches perform best with large comparable corpora, and thus they often give lower precisions when applied to domain-specific comparable corpora (Chiao and Zweigenbaum, 2002) .",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "(Morin et al., 2008)",
"ref_id": "BIBREF12"
},
{
"start": 398,
"end": 427,
"text": "(Chiao and Zweigenbaum, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of our work is to find translations of terms in domain-specific comparable corpora. Taking a list of ranked translation candidates (provided by a distributional method) for a term, we aim to improve the ranking of the correct translations that are not ranked first in the list. Obviously, the more translation candidates are considered for a term, the more correct translations are found. For example, Rapp (1999) obtains a precision of 72% when only the first translation candidate is considered correct. However, he reports an 89% precision when the first 10 translation candidates are provided as translations for a word.",
"cite_spans": [
{
"start": 411,
"end": 422,
"text": "Rapp (1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study proposes to take the best translation candidates provided by a distributional approach, and tries to re-rank them in order to improve the top 1, top 5 and top 10 precisions. We suggest that a source term and its correct translation appear in comparable sentences. Comparable sentences are sentences that share parallel data (e.g. word overlap, long matched sequences, bilingual compound nouns). We proceed by first extracting sentences for a source term, as well as sentences for each of its provided translation candidates. For each translation pair (i.e. source term and a translation candidate), each extracted source sentence is aligned with at most one of the extracted sentences for the translation candidate. The aligned sentences are used to re-rank the translation candidates of the source term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides being used by our approach to re-rank translations, comparable sentences that contain a term and its translation in corpora are promising, as they may provide useful examples to a user or a human translator who needs to verify a translation pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we present our approach and assumptions. In Section 3, we describe our method to extract sentences that best represent a term in corpora. In Section 4, we explain a method to score a sentence containing a term with a sentence containing its translation candidate. We evaluate our approach in Section 5 on two domain-specific corpora for the French-English and French-German language pairs, and report improvements in the top 1, top 5, and top 10 precisions. We conclude in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A term may appear in several contexts, but some can be more interesting and more informative than others. In Table 1 , an example of two sentences in which the term \"tumor\" appears is given. These sentences were extracted from an English corpus related to the domain of \"Breast Cancer\". Sentence (A) is considered to be more informative and more representative of the context of \"tumor\" than sentence (B). It also contains terms that are highly related to the \"Breast Cancer\" subject (e.g. chemotherapy, histological).",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Assumptions and Approach",
"sec_num": "2"
},
{
"text": "Our assumption is that the best context (represented by sentences) can be extracted for a term as well as for its translation candidates, and that these extracted sentences can be aligned in order to re-rank the translation candidates of the term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assumptions and Approach",
"sec_num": "2"
},
{
"text": "After obtaining some candidate translations for a term by applying a distributional method, we score a source term (t s ) with its target translation candidate (t t ) as follows: we first extract the n best sentences that contain t s in the source corpus, as well as the n best sentences that contain its translation candidate in the target corpus. Then, we align each of the best sentences of t s with at most one sentence of t t , using a method that depends on lexical similarity. Finally, the translation pair (t s ,t t ) is scored according to the scores of the aligned sentences between t s and t t . The scoring method is illustrated in Figure 1 . We combine the resulting score with the initial score provided by the distributional method; the combined scores are then used to re-rank the translation candidates of the term. Table 1 shows sentences (A) and (B) containing the term \"tumor\": (A) Chemotherapy was also administered to patients with smaller primary tumors with histological grade 2 or 3 or with negative hormone receptors. (B) The size of any captured image corresponding to the tumor was estimated. Parallel sentence (or fragment) extraction from comparable corpora has received the attention of a number of researchers (Fung and Cheung, 2004; Munteanu and Marcu, 2005; Munteanu and Marcu, 2006; Smith et al., 2010; Hunsicker et al., 2012, among others) , to enrich the parallel text used by statistical machine translation (SMT) systems. They conducted experiments with large corpora (mainly news stories) which were noisy-parallel, comparable (containing topic alignments or articles published in similar circumstances), or very non-parallel (Fung and Cheung, 2004) . Usually, these approaches perform document-level alignments before extracting parallel sentences. The domain-specific corpora we use contain few documents (ranging from 38 to 262 documents per corpus) and no parallel sentences. Furthermore, they are of modest size (about 0.3 M to 0.5 M words), so even if there were some parallel fragments, this phenomenon would be rare. Nevertheless, we assume that some features used in state-of-the-art parallel sentence extraction methods can be used to identify comparable sentences that contain a translation pair.",
"cite_spans": [
{
"start": 1238,
"end": 1261,
"text": "(Fung and Cheung, 2004;",
"ref_id": "BIBREF2"
},
{
"start": 1262,
"end": 1287,
"text": "Munteanu and Marcu, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 1288,
"end": 1313,
"text": "Munteanu and Marcu, 2006;",
"ref_id": "BIBREF14"
},
{
"start": 1314,
"end": 1333,
"text": "Smith et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 1334,
"end": 1371,
"text": "Hunsicker et al., 2012, among others)",
"ref_id": null
},
{
"start": 1655,
"end": 1678,
"text": "(Fung and Cheung, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 1",
"ref_id": null
},
{
"start": 921,
"end": 929,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Assumptions and Approach",
"sec_num": "2"
},
{
"text": "Our goal is not to extract parallel sentences; rather, we need to find, for a translation pair, bilingual sentences that are comparable. For example, consider that we need to score the correct translation pair (FR \"clinique\", EN \"clinical\", where FR and EN signify French and English), and that we have two sentences, the first containing \"clinique\" and the second containing \"clinical\" (see Figure 2 ). The two sentences are not parallel; however, they both contain the following information: a clinical examination detects the size of a tumor. Finding this kind of comparability in sentences helps to increase the score of correct translation pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 355,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Assumptions and Approach",
"sec_num": "2"
},
{
"text": "For a term (t), we aim to extract the n best sentences that represent its context in the corpus. We suggest that the sentences that best represent t contain words that are: (a) strongly associated with t in the corpus, and (b) highly specific to the domain of the corpus. A word in a sentence containing t is scored by means of two measures, association and domain specificity, which are presented in the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "1. Association with t: word associations are computed according to log-likelihood scores that are based on the co-occurrences of words in a window of size (s=7) around t. The top (m=30) words associated with t, together with their scores, are denoted by v m (the context vector of t of size m). The association between a word (w) and t is computed from the occurrences summarized in the contingency table (see Table 2 ), where occ(t,w) is the number of co-occurrences of t and w, and \u00acw signifies all words except w. The log-likelihood association measure is computed as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 441,
"end": 449,
"text": "Table 2)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "Table 2: row t: a=occ(t,w), b=occ(t,\u00acw); row \u00act: c=occ(\u00act,w), d=occ(\u00act,\u00acw) (columns w and \u00acw)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "association(t, w) = a log(a) + b log(b) + c log(c) + d log(d) + (N ) log(N ) \u2212 (a + b) log(a + b) \u2212 (a + c) log(a + c) \u2212 (b + d) log(b + d) \u2212 (c + d) log(c + d)",
"eq_num": "(1)"
}
],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
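The log-likelihood association of Equation 1 can be sketched as follows. This is a minimal Python sketch under our own assumptions: raw co-occurrence counts a, b, c, d are already available, 0 \u00b7 log(0) is taken as 0 for empty cells, and all function names and example counts are ours, not from the paper or TermSuite.

```python
import math

def log_likelihood(a, b, c, d):
    """Log-likelihood association score from the 2x2 contingency table
    (Equation 1): a=occ(t,w), b=occ(t,not-w), c=occ(not-t,w), d=occ(not-t,not-w)."""
    n = a + b + c + d

    def xlogx(x):
        # Convention: 0 * log(0) = 0, so empty cells do not break the formula.
        return x * math.log(x) if x > 0 else 0.0

    return (xlogx(a) + xlogx(b) + xlogx(c) + xlogx(d) + xlogx(n)
            - xlogx(a + b) - xlogx(a + c) - xlogx(b + d) - xlogx(c + d))

def normalized_associations(counts):
    """Divide each score by the largest one obtained with t, so that the
    scores lie in [0, 1]. `counts` maps a context word w to its
    (a, b, c, d) contingency cells."""
    scores = {w: log_likelihood(*cells) for w, cells in counts.items()}
    top = max(scores.values()) or 1.0
    return {w: s / top for w, s in scores.items()}
```

When the two words are statistically independent the score is (numerically) zero, and strongly co-occurring words score high before normalization.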
{
"text": "where N = a+b+c+d. The association between w and t is then divided by the largest association score obtained with t, so that the score lies in [0,1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "2. Domain specificity: the specificity of a word is its relative frequency in the domain-specific corpus (dc={w 1 ,w 2 ,..,w n }) divided by its relative frequency in a general language corpus (gc={w 1 ,w 2 ,..,w m }). It is defined in (Khurshid et al., 1994) as follows:",
"cite_spans": [
{
"start": 235,
"end": 258,
"text": "(Khurshid et al., 1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ds(w) = rvf dc (w) rvf gc (w)",
"eq_num": "(2)"
}
],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
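Equation 2 and its normalization can be illustrated as below. This is a minimal Python sketch under our own assumptions: corpora are given as tokenized word lists, and words absent from the general corpus are simply skipped (the paper does not specify this case); names are ours.

```python
from collections import Counter

def domain_specificities(domain_tokens, general_tokens):
    """Domain specificity (Equation 2): relative frequency in the
    domain-specific corpus divided by relative frequency in the general
    corpus, then normalized by the largest specificity in the corpus."""
    dc, gc = Counter(domain_tokens), Counter(general_tokens)
    dc_total, gc_total = sum(dc.values()), sum(gc.values())
    ds = {}
    for w, f in dc.items():
        if w in gc:  # our assumption: skip words unseen in the general corpus
            ds[w] = (f / dc_total) / (gc[w] / gc_total)
    if not ds:
        return {}
    top = max(ds.values())
    return {w: s / top for w, s in ds.items()}
```

On a toy pair of corpora, a word over-represented in the domain corpus gets the maximal normalized specificity of 1.0.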
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "rvf dc (w) = f req dc (w) / \u2211 w i \u2208dc f req dc (w i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "is the relative frequency in the specific corpus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "rvf gc (w) = f req gc (w) / \u2211 w i \u2208gc f req gc (w i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "is the relative frequency in the general corpus, and f req signifies frequency. The specificity of a term is normalized by dividing it by the largest specificity value in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "To extract the n best sentences for term t, we give a score to each sentence S that contains t and words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "Source sentence: L'examen radiologique doit \u00eatre associ\u00e9 \u00e0 un examen clinique m\u00e9dical simultan\u00e9, capable de d\u00e9tecter des tumeurs de tr\u00e8s petites dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Best Sentences Extraction for a Term",
"sec_num": "3"
},
{
"text": "There was no association between the tumor size detected during clinical examination, mammography, MRI or histopathological analyses and presence of residual disease.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target sentence:",
"sec_num": null
},
{
"text": "(examen, examination), (clinique, clinical), (d\u00e9tecter, detected), (tumeurs, tumor), (dimensions, size) Figure 2 : Example of source and target sentences that contain the translation pair (FR clinique and EN clinical)",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 113,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Connected words:",
"sec_num": null
},
{
"text": "w1, w2, ..., w n as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connected words:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(S) = \u2211 i=1..n ( ds(w i ) + [w i \u2208 v m ] \u00b7 association(w i , t) )",
"eq_num": "(3)"
}
],
"section": "Connected words:",
"sec_num": null
},
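Equation 3 and the sentence-ranking step it feeds can be sketched as follows. This is a minimal Python sketch under our own assumptions: sentences are plain lists of words after stop-word removal, `ds` and `assoc` are precomputed score dictionaries, and all function names are ours, not from the paper or TermSuite.

```python
def score_sentence(words, ds, assoc):
    """Equation 3: each word contributes its domain specificity ds(w),
    plus its association with t when it belongs to t's context vector v_m
    (here, when it is a key of `assoc`)."""
    return sum(ds.get(w, 0.0) + assoc.get(w, 0.0) for w in words)

def best_sentences(sentences, ds, assoc, n=70, min_len=5):
    """Rank the sentences containing t: sentences shorter than 5 words
    (after stop-word removal) are discarded, and the n best are kept
    (n=70 follows the paper's experimental settings)."""
    kept = [s for s in sentences if len(s) >= min_len]
    return sorted(kept, key=lambda s: score_sentence(s, ds, assoc), reverse=True)[:n]
```

A more specific sentence with strongly associated context words outranks a generic one, mirroring sentences (A) and (B) of Table 1.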
{
"text": "We discard any sentence whose length is less than 5 words (after removing the stop words). All sentences containing t are then ranked according to their scores. For a translation pair (t s ,t t ), the n best sentences for t s as well as for t t are extracted following the method explained above. The next step consists of aligning the n best sentences of a source term t s with the n best sentences of each of its proposed translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Connected words:",
"sec_num": null
},
{
"text": "We suggest that if a source term (t s ) is translated by a target term (t t ), then they must share some comparable sentences. The more a translation pair shares sentences with high comparability, the higher its score should be. The ratio between the lengths of two comparable sentences should be less than 2, following (Munteanu and Marcu, 2005) . We also suppose that the overlap between two comparable sentences should be greater than 3 (including the translation pair). Like previous works on extracting parallel sentences from comparable corpora, our approach depends mostly on lexical information between sentences by using a bilingual lexicon.",
"cite_spans": [
{
"start": 320,
"end": 346,
"text": "(Munteanu and Marcu, 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "Suppose that we have a source sentence S s ={w 1 ,w 2 ,t s ,...,w n } and a target sentence S t ={w' 1 ,w' 2 ,t t ,...,w' n } (after removing the stop words), where t s (respectively t t ) may occur at any position in S s (respectively S t ), with a set of possible connected words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "M ={(w 1 ,w' 1 ),(w 2 ,w' 2 ),.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": ".,(w n ,w' n )} obtained using a bilingual dictionary. An optimal alignment A (in which each word in the sentence S s is connected to at most one word in the sentence S t ) is estimated according to a linear function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "Taking the optimal alignment A, feature functions (each taking values in [0,1]) are used to compute a score between the two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "1. The cosine similarity between the two sentences (Fung and Cheung, 2004) penalized by the number of unconnected words: each word in S s (respectively S t ) is weighted by its score in the context vector",
"cite_spans": [
{
"start": 51,
"end": 74,
"text": "(Fung and Cheung, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "v m (respectively v' m ) of t s (respectively t t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": ". If a word is missing from the context vector, it is assigned a fixed minimal weight. The first feature function is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "f 1 (S ts , S tt ) = cosine(S ts , S tt ) / |UnConnectedWords| (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "where |UnConnectedWords| is the number of unconnected words between the two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "2. Positions of the connected words in the source sentence (respectively target sentence) in comparison to the position of the source term (respectively target term): the nearer the connected words are to the term t in the sentence, the greater the score of this feature function will be. Besides, we suppose that for two connected words (w i ,w' i ), the distance between w i and t s should be close to the distance between w' i and t t . The positions distance is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "pos distance (S ts , S tt ) = \u2211 (w i ,w' i ) \u2208A (pos s + pos t + |pos s \u2212 pos t |) / (|S ts | + |S tt | + ||S ts | \u2212 |S tt ||)",
"eq_num": "(5)"
}
],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "pos s = |pos(w i ) \u2212 pos(t s )| and pos t = |pos(w' i ) \u2212 pos(t t )|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "The pos distance is then divided by |A| to be normalized. The positions similarity is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f 2 (S ts , S tt ) = 1 \u2212 pos distance / |A|",
"eq_num": "(6)"
}
],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
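Equations 5 and 6 can be sketched together as below. This is a minimal Python sketch under our own assumptions: word positions are integer indices after stop-word removal, the alignment is given as (source position, target position) pairs, and all argument names are ours.

```python
def f2(alignment, pos_ts, pos_tt, len_s, len_t):
    """Feature f2 (Equations 5-6): rewards connected words that sit close
    to the terms, with similar source/target offsets. `pos_ts`/`pos_tt`
    are the positions of the term and of its candidate; `len_s`/`len_t`
    are the sentence lengths."""
    if not alignment:
        return 0.0
    num = sum(abs(ps - pos_ts) + abs(pt - pos_tt)
              + abs(abs(ps - pos_ts) - abs(pt - pos_tt))
              for ps, pt in alignment)
    # Normalize by the sentence lengths and their difference (Equation 5),
    # then by the number of connections |A| (Equation 6).
    pos_distance = num / (len_s + len_t + abs(len_s - len_t))
    return 1.0 - pos_distance / len(alignment)
```

With a single connected pair one position away from each term in two 5-word sentences, the feature rewards the symmetric, term-adjacent connection.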
{
"text": "3. Longest contiguous span: it is defined by (Munteanu and Marcu, 2005) as being the longest \"pair of substrings in which the words in one substring are connected only to words in the other substring\". We require the length of a span to be greater than 2. The longest span is divided by the length of the shorter sentence, then:",
"cite_spans": [
{
"start": 45,
"end": 71,
"text": "(Munteanu and Marcu, 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f 3 (S ts , S tt ) = span(S ts , S tt ) / min(|S ts |, |S tt |)",
"eq_num": "(7)"
}
],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "4. Number of connected bi-grams: this feature function is defined as the number of found connected bi-grams divided by the number of connected words in A, then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f 4 (S ts , S tt ) = bi-grams(S ts , S tt ) / |A|",
"eq_num": "(8)"
}
],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "The optimal alignment A is the alignment that minimizes the squared Euclidean distance between the two sentence vectors and the pos distance . We choose this minimization function for reasons of computational efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "We follow (Hunsicker et al., 2012) in considering the final score between a sentence pair as the weighted sum of all feature functions, such as the following:",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Hunsicker et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "score(S s , S t ) = \u2211 i=1..4 w i \u00b7 f i (S ts , S tt ) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "where \u2211 i=1..4 w i = 1. Contrary to previous works that use parallel corpora to train their models and define the weights of the feature functions, we set the weights manually, because we do not have an annotated parallel corpus. Nevertheless, this should not have a significant impact on our results, since our goal is not to extract parallel sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Alignment for Translation Pairs",
"sec_num": "4"
},
{
"text": "For a translation pair (t s ,t t ), each of the n best representative sentences of t s is aligned with at most one of the n best representative sentences of t t . A target sentence can be aligned to multiple source sentences. The score of the translation pair is the average of the scores of the sentence alignments. We refer to this procedure as the sentence alignment method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking translation pairs",
"sec_num": "4.1"
},
{
"text": "The re-ranking is done by combining the score obtained by the sentence alignment method for a translation pair with its initial score that is obtained by a distributional method. The scores are combined by the weighted geometric mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking translation pairs",
"sec_num": "4.1"
},
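The final scoring of a sentence pair (Equation 9) and the re-ranking combination by weighted geometric mean can be sketched as follows. This is a minimal Python sketch; the default weight values (0.4/0.2/0.2/0.2 for the features, 0.3/0.7 for the combination) are the ones reported in the paper's experimental settings, and the function names are ours.

```python
def alignment_score(features, weights=(0.4, 0.2, 0.2, 0.2)):
    """Equation 9: weighted sum of the four feature functions f1..f4.
    The weights sum to 1."""
    return sum(w * f for w, f in zip(weights, features))

def combined_score(dist_score, align_score, w_dist=0.3, w_align=0.7):
    """Re-ranking score for a translation pair: weighted geometric mean of
    the distributional score and the sentence-alignment score."""
    return (dist_score ** w_dist) * (align_score ** w_align)
```

Translation candidates are then sorted by `combined_score` instead of the distributional score alone.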
{
"text": "We first need to extract translations for a list of domain-specific terms in comparable corpora. In order to do this, we pre-process corpora and align terms with the free tool TermSuite 5 (Rocheteau and Daille, 2011) . The distributional method that is implemented in TermSuite is the one described in (Rapp, 1999) . TermSuite provides a chosen number of translations for a term. Translations are ranked according to the scores provided by the distributional method. We try to enhance the top candidate translations of each reference source term by applying our re-ranking method.",
"cite_spans": [
{
"start": 188,
"end": 216,
"text": "(Rocheteau and Daille, 2011)",
"ref_id": "BIBREF17"
},
{
"start": 302,
"end": 314,
"text": "(Rapp, 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "To carry out the distributional approach with TermSuite, we need comparable corpora, bilingual dictionaries, and a list of source reference terms to translate. Our method requires the same resources, as well as general language monolingual corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Comparable corpora: we carry out experiments with comparable corpora in two different domains and two language pairs, French-English and French-German. The first are medical corpora in the sub-domain of breast cancer, which contain approximately 0.37 M to 0.5 M words for each language. The second corpora belong to the renewable energy domain, more specifically to the sub-domain of wind energy, and contain about 0.3 M to 0.35 M words for each language. The Breast Cancer corpora were collected from an online medical portal, while the Wind Energy corpora were crawled using the Babouk crawler (Groc, 2011). Both corpora have been collected using some seed terms and contain no parallel sentences. \u2022 General language corpora: for each language, a general language corpus is obtained and used in computing the specificities of words to the domain-specific corpora. These contain 12003, 3903 and 44365 unique single words for French, English and German respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Reference lists: we have built a list of reference single-word terms (SWTs) for each corpus and for each language pair. Each source term in the list is domain-specific, has a frequency greater than 5 in the source corpus, and has been manually aligned with one gold translation that exists in the target corpus. For the Breast Cancer corpora, for each language pair we built a list that contains 122 translation pairs. As for the Wind Energy corpora, for each language pair we built a list that includes 96 translation pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "For the sentence alignment method, we manually set the same parameters for the Breast Cancer and Wind Energy corpora. For each term and each translation candidate, we extract the 70 best sentences, where sentences with the same score are ranked at the same position; however, we take a maximum of 200 sentences per term. If a term occurs fewer than 70 times in the corpus, we extract all the sentences that include it. We do not extract a large number of sentences per term because the alignment process would be computationally expensive; besides, our assumption is that if a translation pair is valid, then its best representative sentences are comparable. When extracting sentences for a term, we discard any sentence shorter than 5 words (after removing stop words). Sentences are simply delimited by punctuation marks (\"?\", \"!\", \".\"). We point out that the words in a sentence containing a term t that are used to compute the score of this sentence and as context for t are the words appearing within a window of size n=20 around t (at most 10 words before t and at most 10 words after t in the sentence, after removing stop words). 6 The dictionaries were obtained from http://catalog.elra.info/product_info.php?products_id=666 and http://catalog.elra.info/product_info.php?products_id=668",
"cite_spans": [
{
"start": 440,
"end": 441,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
{
"text": "To score a translation pair by aligning its sentences (see equation 9), the largest weight, 0.4, is attributed to the first feature function (see equation 4); the remaining weights are each set to 0.2. When combining the scores of the distributional and sentence alignment methods by the weighted geometric mean, the weight of the former is set to 0.3 and the weight of the latter to 0.7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2"
},
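The weighted combination just described can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function name is ours, and only the weights 0.3 and 0.7 come from the text.

```python
# Sketch of the score-combination step: a weighted geometric mean of the
# distributional score and the sentence-alignment score. The weights
# (0.3 for the distributional score, 0.7 for the alignment score) are
# those reported in the paper; everything else is illustrative.

def combine_scores(dist_score, align_score, w_dist=0.3, w_align=0.7):
    """Weighted geometric mean of two non-negative scores."""
    return (dist_score ** w_dist) * (align_score ** w_align)

# A candidate with a strong alignment score is favoured, since the
# alignment score carries the larger weight.
print(combine_scores(0.4, 0.6))
```

Note that with a geometric mean a candidate whose alignment score is 0 gets a combined score of 0, regardless of its distributional score.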
{
"text": "The precision of a bilingual lexicon is computed at several cut-offs, taking the n best translations for each term (top 1, top 5, etc.). The precision is the number of correct translations found divided by the number of source terms in the reference list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5.3"
},
{
"text": "The Mean Reciprocal Rank (MRR) is also used to evaluate the obtained results. The reciprocal rank for a given source term is the multiplicative inverse of the rank of the first correct target translation. The MRR is the average of the reciprocal ranks over the aligned source reference terms. MRR values range between 0 and 1, where higher values indicate better system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5.3"
},
{
"text": "MRR = \\frac{1}{|Q|} \\sum_{i=1}^{|Q|} \\frac{1}{rank_i} \\qquad (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MRR",
"sec_num": null
},
{
"text": "where |Q| is the number of source terms to be aligned. If the correct translation of a term has not been found, then its corresponding \"1/rank_i\" is taken to be 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MRR",
"sec_num": null
},
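As a concrete illustration of these measures, here is a minimal sketch (ours, not the authors' code) computing precision at n and the MRR of equation 10. The data layout is an assumption: `ranked` maps each source term to its ordered candidate list, and `gold` maps it to its single reference translation; the example terms are invented.

```python
# Illustrative implementation of the evaluation measures of Section 5.3.
# precision_at_n: fraction of source terms whose gold translation appears
# among the top n candidates. mrr: mean of 1/rank_i over all source
# terms, with 1/rank_i = 0 when the gold translation is absent.

def precision_at_n(ranked, gold, n):
    hits = sum(1 for term, cands in ranked.items() if gold[term] in cands[:n])
    return hits / len(gold)

def mrr(ranked, gold):
    total = 0.0
    for term, cands in ranked.items():
        if gold[term] in cands:
            total += 1.0 / (cands.index(gold[term]) + 1)  # ranks start at 1
    return total / len(gold)

# Toy example: two source terms with invented candidate lists.
ranked = {"tumeur": ["tumor", "cancer"], "pale": ["rotor", "wing", "blade"]}
gold = {"tumeur": "tumor", "pale": "blade"}
print(precision_at_n(ranked, gold, 1))  # 0.5
print(mrr(ranked, gold))                # (1 + 1/3) / 2
```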
{
"text": "The results of the distributional approach (baseline) with both language pairs and both corpora are given in Table 4 (P1 denotes the precision when 1 translation candidate is provided per term). We notice that the results on the Breast Cancer corpora are better than those obtained on Wind Energy. This may be explained by the fact that the Wind Energy corpora are smaller and less technical.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.4"
},
{
"text": "The results are also significantly better for the French-English language pair than for French-German. In fact, domain-specific corpora contain many terms that are compound nouns. In German, many compound nouns are written as single units (e.g. the German term \"Produktionsstandort\" is translated into French as \"site de production\"). Therefore, the distributional approach may treat such German terms as one word when computing co-occurrences. One way to overcome this problem would be to perform compound splitting before applying the distributional approach (Macherey et al., 2011).",
"cite_spans": [
{
"start": 586,
"end": 609,
"text": "(Macherey et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.4"
},
{
"text": "To analyze the results obtained by the distributional method in more depth, we measured the comparability of the Wind Energy corpora for the different language pairs, using the comparability measure presented by Li et al. (2011). For the French-English corpora we obtained a comparability value of 0.81, and for the French-German corpora a value of 0.70. Our French-German corpora are thus less comparable than the French-English corpora, which partly explains the worse results obtained with the French-German pair using the distributional method. Table 4 : Results obtained with the distributional method (baseline). FR-EN signifies French-English, and FR-GR signifies French-German.",
"cite_spans": [
{
"start": 208,
"end": 224,
"text": "Li et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 600,
"end": 607,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.4"
},
{
"text": "In order to improve these results, especially the top 1, top 5 and top 10 precisions, we re-rank the translation candidates for each source term by combining their initial scores with the scores obtained from aligning their sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breast Cancer Wind Energy FR-EN FR-GR FR-EN FR-GR",
"sec_num": null
},
{
"text": "Let us suppose that, for a source term t_s, we want to re-rank its top 5 translation candidates L_top5 = {t_t1, t_t2, t_t3, t_t4, t_t5} provided by the distributional method. Following the approach presented in Section 3, we extract the best ranked sentences for t_s, and do the same for each translation candidate in L_top5. Then, for each translation pair (e.g. t_s and t_t1), we try to align each sentence extracted for t_s with the sentence, among those extracted for t_t1, that yields the highest score with it, using the approach described in Section 4. A source sentence can be aligned with at most one target sentence and is assigned a score (equal to 0 if the sentence is not aligned). The score between t_s and t_t1 is the average of the alignment scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breast Cancer Wind Energy FR-EN FR-GR FR-EN FR-GR",
"sec_num": null
},
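The pair-scoring step above can be sketched as follows. This is our illustrative reading, not the authors' code: `sim` is a placeholder for the feature-based sentence score of Section 4, the toy word-overlap similarity is invented, and treating each target sentence as usable at most once is our assumption.

```python
# Sketch of scoring a translation pair (t_s, t_t) from its extracted
# sentences: each source sentence is greedily paired with its
# best-scoring target sentence, and the pair score is the average over
# source sentences (an unaligned source sentence contributes 0).

def pair_score(src_sentences, tgt_sentences, sim):
    if not src_sentences:
        return 0.0
    used = set()                      # assumption: a target sentence aligns once
    total = 0.0
    for s in src_sentences:
        scored = [(sim(s, t), i) for i, t in enumerate(tgt_sentences)
                  if i not in used]
        if scored:
            best, i = max(scored)
            if best > 0:
                used.add(i)
                total += best
    return total / len(src_sentences)

# Toy similarity: number of words shared by the two sentences.
overlap = lambda s, t: len(set(s.split()) & set(t.split()))
print(pair_score(["wind turbine power", "blade rotor"],
                 ["turbine power output", "rotor speed"], overlap))  # 1.5
```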
{
"text": "Following the above procedure, we take the best n=20 translation candidates proposed by the distributional method for each term and re-rank them. This evaluation strategy is denoted RR1 in Tables 5 and 6, which summarize the results obtained on our corpora for the two language pairs. For example, on the French-English Breast Cancer list, re-ranking the top 20 translation candidates provided for each source term improved the top 1 precision by approximately 5%. Moreover, before re-ranking, 43.24% of the correct translations found in the top 20 results were ranked at the first position; after re-ranking, this percentage increases to 52.70%, which means that re-ranking significantly improved the ranks of the correct translations. An improvement of approximately 6% in the top 1 precision is obtained when using 20 translation candidates to re-rank the results of the French-English Wind Energy list. However, smaller improvements were obtained with the French-German language pair, as there were not many correct translations among the first 20 translations provided for each term by the distributional method.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 239,
"text": "Tables 5 and 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Breast Cancer Wind Energy FR-EN FR-GR FR-EN FR-GR",
"sec_num": null
},
{
"text": "While performing experiments, we noticed that re-ranking the first 5 translation candidates for each term may increase the top 1 precision more than re-ranking, for example, the first 20. We therefore follow a different strategy (denoted RR2) for re-ranking translations. To determine which translation candidate will be ranked at position n (starting from 1) for a term, we first re-rank the top m translations proposed for the term, where m = 2(n-1)+5 rounded to the nearest multiple of 5. The translation candidate at position 1 takes position n in the new ranked list and is not further re-ranked. Then, we determine the translation candidate that will be ranked at position (n+1) in the new ranked list. We repeat this process until obtaining 10 translation candidates for each term in the new ranked list. Table 6 : Results obtained on both the Breast Cancer and Wind Energy French-German corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Breast Cancer Wind Energy FR-EN FR-GR FR-EN FR-GR",
"sec_num": null
},
{
"text": "For example, given a list of translation candidates provided for a term: to determine which translation candidate will be ranked at the first position, we re-rank the list of the top 5 (L_top5) translation candidates provided for the term, and put the translation now ranked first into a list we name L_taken. To determine which translation candidate will be in the second position, we re-rank the list (L_top5 - L_taken) and add the translation ranked first to L_taken. To determine which translation will be ranked in the third position, we re-rank the list (top 10 - L_taken), put the translation ranked first into L_taken, and so on. Results obtained using this strategy are presented in Tables 5 and 6 (under RR2).",
"cite_spans": [],
"ref_spans": [
{
"start": 746,
"end": 760,
"text": "Tables 5 and 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Breast Cancer Wind Energy FR-EN FR-GR FR-EN FR-GR",
"sec_num": null
},
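The RR2 procedure can be sketched as follows. This is our illustrative reading of the text, not the authors' code: `rerank` is a placeholder for the sentence-alignment re-ranking of a candidate pool, and the alphabetical stand-in re-ranker in the example is invented.

```python
# Sketch of the RR2 strategy: to fill position n of the new list (n
# starting at 1), re-rank the remaining candidates among the top
# m = 2(n-1)+5 rounded to the nearest multiple of 5, and take the best
# one. For n = 1, 2 this gives a pool drawn from the top 5; for
# n = 3, 4 from the top 10; and so on, matching the worked example.

def round_to_multiple_of_5(m):
    return 5 * round(m / 5)

def rr2(candidates, rerank, depth=10):
    taken = []                          # the new ranked list (L_taken)
    for n in range(1, depth + 1):
        m = round_to_multiple_of_5(2 * (n - 1) + 5)
        pool = [c for c in candidates[:m] if c not in taken]
        if not pool:
            break
        taken.append(rerank(pool)[0])   # best of the re-ranked pool
    return taken

# With alphabetical order as a stand-in re-ranker, RR2 progressively
# widens the pool it draws from.
print(rr2(["e", "d", "c", "b", "a", "j", "i", "h", "g", "f"], sorted))
```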
{
"text": "The RR2 strategy gave better top 1 precision and MRR than RR1 on the French-English Breast Cancer corpora, and better top 10 precision on the French-English Wind Energy corpora; RR1 gave a better MRR on the Wind Energy corpora. In general, the results of the two strategies were comparable, and RR1 gave stable improvements when re-ranking a list of 20 candidates for each term. Both RR1 and RR2 significantly improved the baseline results for the French-English and French-German language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Breast Cancer Wind Energy FR-EN FR-GR FR-EN FR-GR",
"sec_num": null
},
{
"text": "In this paper, we proposed a method to re-rank the top translation candidates acquired by a distributional method from comparable corpora. We assumed that some sentences are more representative of a term than others, and that a term and its correct translation share comparable sentences that can be extracted from comparable corpora. We suggested aligning sentences that best represent a term with sentences that best represent its translation candidates to re-rank these translation candidates. Our experiments showed improvements in precision and MRR measures for two language pairs and two domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our re-ranking method was tested with SWTs, and we aim to further evaluate it with multi-word terms (MWTs). Moreover, the best aligned sentences for a term and its translation candidates could be presented in a user-oriented evaluation, to see whether they help in validating a translation pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This tool is available at http://code.google.com/p/ttcproject/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable remarks. This work was supported by the French National Research Agency under grant ANR-12-CORD-0020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Context vector disambiguation for bilingual lexicon extraction from comparable corpora",
"authors": [
{
"first": "Dhouha",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nasredine",
"middle": [],
"last": "Semmar",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "759--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhouha Bouamor, Nasredine Semmar, and Pierre Zweigenbaum. 2013. Context vector disambigua- tion for bilingual lexicon extraction from compa- rable corpora. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics (Short Papers), volume 2 of ACL '13, pages 759-764, Sofia, Bulgaria.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Looking for candidate translational equivalents in specialized, comparable corpora",
"authors": [
{
"first": "Yun-Chuang",
"middle": [],
"last": "Chiao",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. Looking for candidate translational equivalents in specialized, comparable corpora. In Proceedings of the 19th international conference on Computational linguistics, volume 2 of COLING '02, pages 1-5.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mining verynon-parallel corpora: Parallel sentence and lexicon extraction via bootstrapping and EM",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '04",
"volume": "",
"issue": "",
"pages": "57--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung and Percy Cheung. 2004. Mining very- non-parallel corpora: Parallel sentence and lexicon extraction via bootstrapping and EM. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing, EMNLP '04, pages 57- 63, Barcelona, Spain.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finding terminology translations from non-parallel corpora",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th Annual Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "192--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung and Kathleen Mckeown. 1997. Finding terminology translations from non-parallel corpora. In Proceedings of the 5th Annual Workshop on Very Large Corpora, pages 192-202.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning bilingual lexicons from comparable English and Spanish corpora",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Gamallo",
"suffix": ""
}
],
"year": 2007,
"venue": "Machine Translation Summit",
"volume": "",
"issue": "",
"pages": "191--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Gamallo. 2007. Learning bilingual lexicons from comparable english and spanish corpora. In Machine Translation Summit 2007, pages 191-198.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Babouk : Focused Web Crawling for Corpus Compilation and Automatic Terminology Extraction",
"authors": [
{
"first": "Groc",
"middle": [],
"last": "Cl\u00e9ment De",
"suffix": ""
}
],
"year": 2011,
"venue": "The IEEE/WIC/ACM International Conferences on Web Intelligence",
"volume": "",
"issue": "",
"pages": "497--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cl\u00e9ment De Groc. 2011. Babouk : Focused Web Crawling for Corpus Compilation and Automatic Terminology Extraction. In The IEEE/WIC/ACM International Conferences on Web Intelligence, pages 497-498, Lyon, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hybrid parallel sentence mining from comparable corpora",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Hunsicker",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Ion",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Stefanescu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16th Annual Conference of the European Association for Machine Translation, EAMT '12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Hunsicker, Radu Ion, and Dan Stefanescu. 2012. Hybrid parallel sentence mining from com- parable corpora. In Proceedings of the 16th An- nual Conference of the European Association for Machine Translation, EAMT '12, Trento, Italy.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What is a term? the semiautomatic extraction of terms from text",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Khurshid",
"suffix": ""
},
{
"first": "Davies",
"middle": [],
"last": "Andrea",
"suffix": ""
},
{
"first": "Fulford",
"middle": [],
"last": "Heather",
"suffix": ""
},
{
"first": "Rogers",
"middle": [],
"last": "Margaret",
"suffix": ""
}
],
"year": 1994,
"venue": "Translation Studies: An Interdiscipline",
"volume": "",
"issue": "",
"pages": "267--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmad Khurshid, Davies Andrea, Fulford Heather, and Rogers Margaret. 1994. What is a term? the semi- automatic extraction of terms from text. In Trans- lation Studies: An Interdiscipline, John Benjamins Publishing Company, Amsterdam, pages 267-278.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning a translation lexicon from monolingual corpora",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL Workshop on Unsupervised Lexical Acquisition",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Kevin Knight. 2002. Learn- ing a translation lexicon from monolingual corpora. In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition, pages 9-16.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Revisiting context-based projection methods for termtranslation spotting in comparable corpora",
"authors": [
{
"first": "Audrey",
"middle": [],
"last": "Laroche",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "617--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Audrey Laroche and Philippe Langlais. 2010. Re- visiting context-based projection methods for term- translation spotting in comparable corpora. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 617-625.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Clustering comparable corpora for bilingual lexicon extraction",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "473--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Li, Eric Gaussier, and Akiko Aizawa. 2011. Clus- tering comparable corpora for bilingual lexicon ex- traction. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguis- tics: Human Language Technologies: short papers, volume 2 of HLT '11, pages 473-478, Portland, Ore- gon.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Languageindependent compound splitting with morphological operations",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Ashok",
"middle": [
"C"
],
"last": "Popat",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1395--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Macherey, Andrew M. Dai, David Talbot, Ashok C. Popat, and Franz Och. 2011. Language- independent compound splitting with morphological operations. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguis- tics: Human Language Technologies, volume 1 of HLT '11, pages 1395-1404, Portland, Oregon.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Brains, not brawn: The use of smart comparable corpora in bilingual terminology mining",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Morin",
"suffix": ""
},
{
"first": "B\u00e9atrice",
"middle": [],
"last": "Daille",
"suffix": ""
},
{
"first": "Koichi",
"middle": [],
"last": "Takeuchi",
"suffix": ""
},
{
"first": "Kyo",
"middle": [],
"last": "Kageura",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Trans. Speech Lang. Process",
"volume": "7",
"issue": "1",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuel Morin, B\u00e9atrice Daille, Koichi Takeuchi, and Kyo Kageura. 2008. Brains, not brawn: The use of smart comparable corpora in bilingual termi- nology mining. ACM Trans. Speech Lang. Process., 7(1):1-23.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving machine translation performance by exploiting non-parallel corpora",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dragos",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Munteanu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "",
"pages": "477--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2005. Im- proving machine translation performance by exploit- ing non-parallel corpora. Computational Linguis- tics, 31:477-504.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting parallel sub-sentential fragments from nonparallel corpora",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dragos",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Munteanu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2006. Ex- tracting parallel sub-sentential fragments from non- parallel corpora. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and the 44th annual meeting of the Association for Com- putational Linguistics, pages 81-88, Sydney, Aus- tralia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Identifying word translations in non-parallel texts",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL '95",
"volume": "",
"issue": "",
"pages": "320--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL '95, pages 320-322, Cambridge, Massachusetts.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic identification of word translations from unrelated English and German corpora",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, ACL '99",
"volume": "",
"issue": "",
"pages": "519--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th annual meet- ing of the Association for Computational Linguistics on Computational Linguistics, ACL '99, pages 519- 526, College Park, Maryland.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "TTC TermSuite: A UIMA Application for Multilingual Terminology extraction from Comparable Corpora",
"authors": [
{
"first": "Jerome",
"middle": [],
"last": "Rocheteau",
"suffix": ""
},
{
"first": "B\u00e9atrice",
"middle": [],
"last": "Daille",
"suffix": ""
}
],
"year": 2011,
"venue": "the 5th International Joint Conference on Natural Language Processing, IJCNLP '11",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome Rocheteau and B\u00e9atrice Daille. 2011. TTC TermSuite: A UIMA Application for Multilingual Terminology extraction from Comparable Corpora. In the 5th International Joint Conference on Natu- ral Language Processing, IJCNLP '11, pages 9-12, Chiang Mai, Thailand.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Extracting parallel sentences from comparable corpora using document level alignment",
"authors": [
{
"first": "Jason",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "403--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from com- parable corpora using document level alignment. In Human Language Technologies: The 2010 An- nual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 403-411.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Method to score a translation pair (source term and target term)",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Contingency table for t and w",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"text": "resumes the sizes of monolingual parts of corpora.",
"content": "<table><tr><td colspan=\"3\">Language Breast Cancer Wind Energy</td></tr><tr><td>French</td><td>531,240</td><td>313,943</td></tr><tr><td>English</td><td>528,428</td><td>314,549</td></tr><tr><td>German</td><td>378,474</td><td>358,602</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"text": "Sizes in number of words of corpora for each language and for each domain",
"content": "<table><tr><td>\u2022 Bilingual dictionaries: general language</td></tr><tr><td>bilingual dictionaries 6 for the French-English</td></tr><tr><td>and French-German language pairs were ob-</td></tr><tr><td>tained. The French-English dictionary con-</td></tr><tr><td>tains 145,542 single-word entries and the</td></tr><tr><td>French-German dictionary contains 118,776</td></tr><tr><td>single-word entries.</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "22% 31.96% 35.24% 16.66% 23.95% 22.91% P5 45.08% 52.45% 52.45% 38.54% 45.83% 44.79% P10 53.27% 57.37% 57.37% 45.83% 48.95% 52.",
"content": "<table><tr><td/><td colspan=\"2\">Breast Cancer</td><td/><td colspan=\"2\">Wind Energy</td><td/></tr><tr><td/><td>Baseline</td><td>RR1</td><td colspan=\"2\">RR2 Baseline</td><td>RR1</td><td>RR2</td></tr><tr><td>P1</td><td colspan=\"6\">26.08%</td></tr><tr><td>MRR</td><td>0.338</td><td>0.396</td><td>0.419</td><td>0.249</td><td>0.324</td><td>0.319</td></tr><tr><td colspan=\"7\">Table 5: Results obtained on both Breast Cancer and Wind Energy French-English Corpora</td></tr><tr><td/><td colspan=\"2\">Breast Cancer</td><td/><td colspan=\"2\">Wind Energy</td><td/></tr><tr><td/><td>Baseline</td><td>RR1</td><td colspan=\"2\">RR2 Baseline</td><td>RR1</td><td>RR2</td></tr><tr><td>P1</td><td colspan=\"3\">9.16% 11.47% 11.47%</td><td>3.12%</td><td>7.29%</td><td>5.20%</td></tr><tr><td>P5</td><td colspan=\"3\">18.85% 21.31% 21.31%</td><td colspan=\"3\">9.37% 10.41% 10.41%</td></tr><tr><td>P10</td><td colspan=\"3\">26.22% 27.04% 27.04%</td><td colspan=\"3\">10.41% 13.51% 13.51%</td></tr><tr><td>MRR</td><td>0.139</td><td>0.160</td><td>0.162</td><td>0.051</td><td>0.088</td><td>0.075</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}