{
"paper_id": "I08-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:41:15.026647Z"
},
"title": "A Comparative Study for Query Translation using Linear Combination and Confidence Measure",
"authors": [
{
"first": "Youssef",
"middle": [],
"last": "Kadri",
"suffix": "",
"affiliation": {
"laboratory": "Laboratoire RALI",
"institution": "DIRO Universit\u00e9 de Montr\u00e9al CP 6128",
"location": {
"postCode": "H3C3J7",
"settlement": "Montr\u00e9al",
"country": "Canada"
}
},
"email": "kadriyou@iro.umontreal.ca"
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": "",
"affiliation": {
"laboratory": "Laboratoire RALI, DIRO Universit\u00e9 de Montr\u00e9al CP 6128",
"institution": "",
"location": {
"postCode": "H3C3J7",
"settlement": "Montr\u00e9al",
"country": "Canada"
}
},
"email": "nie@iro.umontreal.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In Cross Language Information Retrieval (CLIR), query terms can be translated to the document language using Bilingual Dictionaries (BDs) or Statistical Translation Models (STMs). Combining different translation resources can also be used to improve the performance. Unfortunately, the most studies on combining multiple resources use simple methods such as linear combination. In this paper, we drew up a comparative study between linear combination and confidence measures to combine multiple translation resources for the purpose of CLIR. We show that the linear combination method is unable to combine correctly different types of resources such as BDs and STMs. While the confidence measure method is able to re-weight the translation candidate more radically than in linear combination. It reconsiders each translation candidate proposed by different resources with respect to additional features. We tested the two methods on different test CLIR collections and the results show that the confidence measure outperforms the linear combination method.",
"pdf_parse": {
"paper_id": "I08-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "In Cross Language Information Retrieval (CLIR), query terms can be translated to the document language using Bilingual Dictionaries (BDs) or Statistical Translation Models (STMs). Combining different translation resources can also be used to improve the performance. Unfortunately, the most studies on combining multiple resources use simple methods such as linear combination. In this paper, we drew up a comparative study between linear combination and confidence measures to combine multiple translation resources for the purpose of CLIR. We show that the linear combination method is unable to combine correctly different types of resources such as BDs and STMs. While the confidence measure method is able to re-weight the translation candidate more radically than in linear combination. It reconsiders each translation candidate proposed by different resources with respect to additional features. We tested the two methods on different test CLIR collections and the results show that the confidence measure outperforms the linear combination method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cross Language Information Retrieval (CLIR) tries to determine documents written in a language from a query written in another language. Query translation is widely considered as the key problem in this task (Oard, 1998) . In previous researches, various approaches have been proposed for query translation: using a bilingual dictionary, using an off-theshelf machine translation system or using a parallel corpus. It is also found that when multiple translation resources are used, the translation quality can be improved, comparing to using only one translation resource (Xu, 2005) . Indeed, every translation tool or resource has its own limitations. For example, a bilingual dictionary can suggest common translations, but they remain ambiguous -translations for different senses of the source word are mixed up. Machine translation systems usually employ sophisticated methods to determine the best translation sentence, for example, syntactic analysis and some semantic analysis. However, it usually output only one translation for a source word, while it is usually preferred that a source query word be translated by multiple words in order to produce a desired query expansion effect. In addition, the only word choice made by a machine translation system can be wrong. Finally, parallel corpora contain useful information about word translation in particular areas. One can use such a corpus to train a statistical translation model, which can then be used to translate a query. This approach has the advantage that few manual interventions are required to produce the statistical translation model. In addition, each source word can be translated by several related target words and the latter being weighted. However, among the proposed translation words, there may be irrelevant ones.",
"cite_spans": [
{
"start": 208,
"end": 220,
"text": "(Oard, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 573,
"end": 583,
"text": "(Xu, 2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, one can take advantage of several translation resources and tools in order to produce better query translations. The key problem is the way to combine the resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common method used in previous studies is to assign a weight to each resource. Then all the translation candidates are weighted and then combined linearly (Nie, 2000) . However, this kind of combination assigns a single confidence score to all the translations from the same translation resource. In reality, a translation resource does not cover all the words with equal confidence. For some words, its translations can be accurate, while for some others, they are inappropriate. By using a linear combination, the relative order among the translation candidates is not changed. In practice, a translation with a low score can turn out to be a better translation when other information becomes available.",
"cite_spans": [
{
"start": 157,
"end": 168,
"text": "(Nie, 2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, the English word \"nutritional\" is translated into French by a statistical translation model trained on a set of parallel texts as follows: {nutritive 0.32 (nutritious), alimentaire 0.21 (food)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We observe that the most common translation word \"alimentaire\" only takes the second place with lower probability than \"nutritive\". If these translations are combined linearly with another resource (say a BD), it is unlikely that the correct translation word \"alimentaire\" gain larger weight than \"nutritive\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This example shows that we have to reconsider the relative weights of the translation candidates when another translation resource is available. The purpose of this reconsideration is to determine how reasonable a translation candidate is given all the information now available. In so doing, the initial ranking of translation candidates can be changed. As a matter of fact using the method of confidence measures that we propose in this paper, we are able to reorder the translation candidates as follows: {alimentaire 0.38, nutritive 0.23, valeur 0.11 (value)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The weight of the correct translation \"alimentaire\" is considerably increased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we will propose to use a new method based on confidence measure to re-weight the translation candidates. In the re-weighting, the original weight according to each translation resource is only considered as one factor. The final weight is determined by combining all the available factors. In our implementation, the factors are combined in neural networks, which produce a final confidence measure for each of the translation candidates. This final weight is not a simple linear combination of the original weights, but a recalculation according to all the information available, which is not when each translation resource is estimated separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The advantages of this approach are twofold. On one hand, the confidence measure allows us to adjust the original weights of the translations and to select the best translation terms according to all the information. On the other hand, the confidence measures also provide us with a new weighting for the translation candidates that are comparable across different translation resources. Indeed, when we try to combine a statistical translation model with a bilingual dictionary, we had to assign a weight to a candidate from the bilingual dictionary. This weight is not directly compatible with the probability assigned in the former.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remaining sections of this paper, we will first describe the principle of confidence measure in section 2. In section 3, we will compare two methods to combine different translation resources: linear combination and confidence measure. Section 4 provides a description on how the parameters are tuned. Section 5 outlines the different steps for computing confidence measures. Finally, we present the results of our experiments on both English-French and English-Arabic CLIR. Our experiments will show that the method using confidence measure significantly outperforms the traditional approach using linear combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Confidence measure is often used to re-rank or reweight some outputs produced by separate means. For example, in speech recognition and understanding (Hazen et al., 2002) , one tries to re-rank the result of speech recognition according to additional information using confidence measure. used confidence measures in a translation prediction task. The goal is to re-rank the translation candidates according to additional information. Confidence measure is defined as the probability of correctness of a candidate. In the case of translation, given a candidate translation t E for a source word t F , the confidence measure is",
"cite_spans": [
{
"start": 150,
"end": 170,
"text": "(Hazen et al., 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": ") , , | ( F t t correct P E F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": ", where F is a set of other features of the translation context (e.g. the POS-tag of the word, the previous translations words, etc.). In both applications, significant gains have been observed when using a confidence estimation layer within the translation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": "The problem of query translation is similar to general translation described in . We are presented with several translation resources, each being built separately. Our goal now is to use all of them together. As we discussed earlier, we want to take advantage of the additional information (other translation resources as well as additional linguistic analysis on the query) in order to re-weight each of the translation candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": "In previous studies, neural networks have been commonly used to produce confidence measures. The inputs to the neural networks are translation candidates from different resources, their original weights and various other properties of them (e.g. POS-tag, probability in a language model, etc.). The output of the neural networks is a confidence measure assigned to a translation candidate from a translation resource. This confidence measure is used to re-rank the whole set of candidates from all the resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": "In this study, we will use the same approach to combine different translation resources and to produce confidence measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": "The neural networks need to be trained on a set of training data. Such data are available in both speech recognition and machine translation. However, in the case of CLIR, the goal of query translation is not strictly equivalent to machine translation. Indeed, in query translation, we are not limited to the correct literal translations. Not literal translation words that are strongly related to the query are also highly useful. These latter related words can produce a desired query expansion effect in IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": "Given this situation, we can no longer use a parallel corpus as our training data as in the case of machine translation. Modifications are necessary. We will describe the modified way we use to create the training data in section 4. The informative features we use will be described n section 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confidence measure",
"sec_num": "2"
},
{
"text": "Assume a query Q E written in a source language E and a document D F written in a target language F, we would like to determine a score of relevance of D F to Q E . However, as they are not directly comparable, a form of translation is needed. Let us describe the model that we will use to determine its score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "Various theoretical models have been developed for IR, including vector space model, Boolean model and probabilistic model. Recently, language modeling is widely used in IR, and it has been show to produce very good experimental results. In addition, language modeling also provides a solid theoretical framework for integrating more aspects in IR such as query translation. Therefore, we will use it as our basic framework in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "In language modeling framework, the relevance score of the document D F to the query Q E is determined as the negative KL-divergence between the query's language model and the document's language model (Zhai, 2001a) . It is defined as follows:",
"cite_spans": [
{
"start": 202,
"end": 215,
"text": "(Zhai, 2001a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 \u221d F t F F E F F E D t p Q t p D Q R ) | ( log ) | ( ) , (",
"eq_num": "(1)"
}
],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "To avoid the problem of attributing zero probability to query terms not occurring in document D F , smoothing techniques are used to estimate p(t F |D F ). One can use the Jelinek-Mercer smoothing technique which is a method of interpolating between the document and collection language models (Zhai, 2001b) . The smoothed p(t F |D F ) is calculated as follows:",
"cite_spans": [
{
"start": 294,
"end": 307,
"text": "(Zhai, 2001b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": ") | ( ) | ( ) 1 ( ) | ( F F ML F F ML F F C t p D t p D t p \u03bb \u03bb + \u2212 = (2) where | | ) , ( ) | ( F F F F F ML D D t tf D t p = and | | ) , ( ) | ( F F F F F ML C C t tf C t p =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "are the maximum likelihood estimates of a unigram language model based on respectively the given document D F and the collection of documents C F . \u03bb is a parameter that controls the influence of each model. In CLIR, the term 1representing the query model can be estimated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "\u2211 \u2211 = = E E q E E E E F q E E F E F Q q p Q q t p Q q t p Q t p ) | ( ) , | ( ) | , ( ) | ( \u2211 \u2248 E q E E ML E F Q q p q t p ) | ( ) | ( (3) where ) | ( E E ML Q q p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "is the maximum likelihood estimation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "| | ) , ( ) | ( E E E E E ML Q Q q tf Q q p = and ) | ( E F q t p is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "the translation model. Putting (3) in (1), we obtain the general CLIR score formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211\u2211 \u221d F E t F F q E E ML E F F E D t p Q q p q t p D Q R ) | ( log ) | ( ) | ( ) , (",
"eq_num": "(4)"
}
],
"section": "General CLIR Problem",
"sec_num": "3"
},
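To make the scoring formula concrete, here is a minimal Python sketch of equations (2) and (4); the smoothing weight, the corpus statistics and the translation table `p_trans` are illustrative assumptions, not values from the paper.

```python
import math
from collections import Counter

LAMBDA = 0.3  # Jelinek-Mercer interpolation weight (assumed value)

def p_ml(term, counts, length):
    # maximum likelihood estimate tf(term, text) / |text|; counts is a Counter
    return counts[term] / length if length else 0.0

def p_doc(t_f, doc, coll, doc_len, coll_len):
    # smoothed document model p(t_F|D_F), equation (2)
    return (1 - LAMBDA) * p_ml(t_f, doc, doc_len) + LAMBDA * p_ml(t_f, coll, coll_len)

def clir_score(query_tokens, p_trans, doc, coll, doc_len, coll_len):
    # R(Q_E, D_F) of equation (4); p_trans[q_e] is a dict {t_f: p(t_f|q_e)}
    q_counts, q_len = Counter(query_tokens), len(query_tokens)
    score = 0.0
    for q_e, tf in q_counts.items():
        p_q = tf / q_len  # p_ML(q_E|Q_E)
        for t_f, p_t in p_trans.get(q_e, {}).items():
            p_d = p_doc(t_f, doc, coll, doc_len, coll_len)
            if p_d > 0.0:
                score += p_t * p_q * math.log(p_d)
    return score
```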
{
"text": "In our work, we do not change the document model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": ") | ( F F D t p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "from monolingual IR. Our focus will be put on the estimation of the translation model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": ") | ( E F q t p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "-the translation probability from a source query term q E to a target word t F , in particular, when several translation resources are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "Let us now describe two different ways to combine different translation resources for the estimation of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": ") | ( E F q t p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": ": by linear combination and by confidence measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General CLIR Problem",
"sec_num": "3"
},
{
"text": "The first intuitive method to combine different translation resources is by a linear combination. This means that the final translation model is estimated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "\u2211 = i E F i i q E F q t p z q t p E ) | ( ) | ( \u03bb (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "where \u03bb i is the parameter assigned to the translation resource i and E q z is a normalization factor so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "1 ) | ( = \u2211 F t E F q t p . ) | ( E F i q t p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "is the probability of translating the source word q E to the target word t F by the resource i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
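The linear mixture of equation (5) reduces to a few lines; a hedged sketch, in which the resource distributions and weights below are purely illustrative:

```python
# Sketch of equation (5): mix per-resource translation distributions
# with weights lambda_i, then renormalize over the candidate set.
def combine_linear(candidates_by_resource, lambdas):
    # candidates_by_resource: list of dicts {target_word: p_i(t_F|q_E)}
    mixed = {}
    for lam, cand in zip(lambdas, candidates_by_resource):
        for t_f, p in cand.items():
            mixed[t_f] = mixed.get(t_f, 0.0) + lam * p
    z = sum(mixed.values())  # normalization factor z_{q_E}
    return {t_f: p / z for t_f, p in mixed.items()} if z else mixed

# The "nutritional" example from the introduction (BD entries and weights assumed):
stm = {"nutritive": 0.32, "alimentaire": 0.21}
bd = {"alimentaire": 0.5, "nutritif": 0.5}
print(combine_linear([stm, bd], [0.6, 0.4]))
```

Note that the final weight is a fixed blend of the original scores: no external feature can promote a candidate beyond what the per-resource probabilities allow.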
{
"text": "In order to determine the appropriate parameter for each translation resource, we use the EM algorithm to find values which maximize the loglikelihood LL of a set C of training data according to the combined model, i.e.: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": ") ( ) | ( log ) , ( ) ( ) , ( | | 1 1 | | 1 i i j k C e f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "Where (f, e)\u2208C is a pair of parallel sentences;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "| | ) , ( # ) , ( C e f e f p =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "is the prior probability of the pair of sentences (f, e) in the corpus C, |f| is the length of the target sentence f and |e| is the length of the source sentence e. \u03bb k is the coefficient related to resource k that we want to optimize and n is the number of resources. t k (f j |e i ) is the probability of translating the source word e i with the target word f j with each resource. p(e i ) is the prior probability of the source word e i in the corpus C. Note that the validation data set C on which we optimize the parameters must be different from the one used to train our baseline models. The training corpora are as follows: For English-Arabic, we use the Arabic-English parallel news corpus 1 . This corpus consists of around 83 K pairs of aligned sentences. For English-French, we use a bitext extracted from two parallel corpora: The Hansard 2 corpus and the Web corpus (Kadri, 2004) . It consists of around 60 K pairs of aligned sentences.",
"cite_spans": [
{
"start": 880,
"end": 893,
"text": "(Kadri, 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
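The EM re-estimation of the mixture weights can be sketched as follows; this simplifies the sentence-level objective of equation (6) to independent word pairs so the E and M steps stay visible, and all data structures are assumed.

```python
# Hedged EM sketch for tuning mixture weights lambda_k on held-out
# word-aligned pairs; models[k] maps (source, target) pairs to t_k(f|e).
def em_mixture_weights(pairs, models, n_iter=20):
    n = len(models)
    lambdas = [1.0 / n] * n
    for _ in range(n_iter):
        counts = [0.0] * n
        for e, f in pairs:
            probs = [lambdas[k] * models[k].get((e, f), 1e-9) for k in range(n)]
            z = sum(probs)
            for k in range(n):
                counts[k] += probs[k] / z  # E-step: posterior of resource k
        total = sum(counts)
        lambdas = [c / total for c in counts]  # M-step: re-estimate weights
    return lambdas
```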
{
"text": "1 http://www.ldc.upenn.edu/ Arabic-English Parallel News Part 1 (LDC2004T18) 2 LDC provides a version of this corpus: http://www.ldc.upenn.edu/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "The component models for English-Arabic CLIR are: a STM built on a set of parallel Web pages (Kadri, 2004) , another STM built on the English-Arabic United Nations corpus (Fraser, 2002) , Ajeeb 3 bilingual dictionary and Almisbar 4 bilingual dictionary. For English-French CLIR, we use three component models: a STM built on Hansard corpus, another STM built on parallel Web pages and the Freedict 5 bilingual dictionary.",
"cite_spans": [
{
"start": 93,
"end": 106,
"text": "(Kadri, 2004)",
"ref_id": "BIBREF5"
},
{
"start": 171,
"end": 185,
"text": "(Fraser, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "4"
},
{
"text": "The question considered in confidence measure is: Given a translation candidate, is it correct and how confident are we on its correctness?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "Confidence measure aims to answer this question. Given a translation candidate t F for a source term q E and a set F of other features, confidence measure corresponds to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": ") , , | 1 ( F q t C p E F i = .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "We can use this measure as an estimate of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": ") | ( E F q t p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": ", i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 = = i E F i q E F F q t C p z q t p E ) , , | 1 ( ) | (",
"eq_num": "(7)"
}
],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "where F is the set of features that we use. We will see several features to help determine the confidence measure of a translation candidate, for example, the translation probability, the reverse translation probability, language model features, and so on. We will describe these features in more detail in section 5.2. In general, we can consider confidence measure as P(C=1|X), given X-the source word, a translation and a set of features. We use a Multi Layer Perceptron (MLP) to estimate the probability of correctness P(C=1|X) of a translation. Neural networks have the ability to use input data of different natures and they are well-suited for classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "Our training data can be viewed as a set of pairs (X,C), where X is a vector of features relative to a translation 6 used as the input of the network, and C is the desired output (the correctness of the translation 0/1). The MLP implements a non-linear mapping of the input features by combining layers of linear transformation and non-linear transfer function. Formally, the MLP implements a discriminant function for an input X of the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": ")) ( ( ) ; ( X W h V o X g \u00d7 \u00d7 = \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "(8) where \u03b8 ={W,V}, W is a matrix of weights between input and hidden layers and V is a vector of weights between hidden and output layers; h is an activation function for the hidden units which nonlinearly transforms the linear combination of inputs X W \u00d7 ; o is also a non-linear activation function but for the output unit, that transforms the MLP output to the probability estimate P(C=1|X). Under these conditions, our MLP was trained to minimize an objective function of error rate (Section 4.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
{
"text": "In our experiments, we used a batch gradient descent optimizer. During the test stage, the confidence of a translation X is estimated with the above discriminant function g(X; \u03b8); where \u03b8 is the set of weights optimized during the learning stage. These parameters are expected to correlate with the true probability of correctness P(C=1|X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Confidence Measures",
"sec_num": "5"
},
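A minimal numpy sketch of such a confidence MLP, with one hidden layer as in equation (8), a sigmoid output, and batch gradient descent on the cross-entropy objective of section 5.1; the paper does not specify the activation functions or whether biases are used, so the tanh hidden units and the omission of biases are assumptions.

```python
import numpy as np

def train_confidence_mlp(X, C, hidden=50, lr=0.1, epochs=500, seed=0):
    # X: (n, d) feature matrix; C: (n,) array of 0/1 correctness labels.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))  # input-to-hidden weights
    V = rng.normal(scale=0.1, size=hidden)       # hidden-to-output weights
    for _ in range(epochs):
        H = np.tanh(X @ W)                       # hidden activations h(W x X)
        p = 1.0 / (1.0 + np.exp(-(H @ V)))       # output o(...) = P(C=1|X)
        g = (p - C) / n                          # gradient of mean cross-entropy wrt logits
        V -= lr * (H.T @ g)
        W -= lr * (X.T @ (np.outer(g, V) * (1.0 - H ** 2)))
    return W, V

def confidence(x, W, V):
    # g(X; theta) of equation (8) for a single feature vector x
    return 1.0 / (1.0 + np.exp(-(np.tanh(x @ W) @ V)))
```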
{
"text": "A natural metric for evaluating probability estimates is the negative log-likelihood (or cross entropy CE) assigned to the test corpus by the model normalized by the number of examples in the test corpus (Blatz et al., 2003) . This metric evaluates the probabilities of correctness. It measures the cross entropy between the empirical distribution on the two classes (correct/incorrect) and the confidence model distribution across all the examples X (i) in the corpus. Cross entropy is defined as follows:",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Blatz et al., 2003)",
"ref_id": "BIBREF0"
},
{
"start": 451,
"end": 454,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
{
"text": "\u2211 \u2212 = i i i n X C P CE ) | ( log ) ( ) ( 1 (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
{
"text": "where C (i) is 1 if the translation X (i) is correct, 0 otherwise. To remove dependence on the prior probability of correctness, Normalized Cross Entropy (NCE) is used:",
"cite_spans": [
{
"start": 8,
"end": 11,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "b b CE CE CE NCE ) ( \u2212 =",
"eq_num": "("
}
],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
{
"text": "10) The baseline CE b is a model that assigns fixed probabilities of correctness based on the empirical class frequencies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") / log( ) / ( ) / log( ) / ( 1 1 0 0 n n n n n n n n CE b \u2212 \u2212 =",
"eq_num": "("
}
],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
{
"text": "11) where n 0 and n 1 are the numbers of correct and incorrect translations among n test cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The objective function to minimize",
"sec_num": "5.1"
},
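These metrics are straightforward to compute; a short sketch of equations (9)-(11), assuming the model's probabilities are strictly between 0 and 1:

```python
import math

def cross_entropy(p_correct, labels):
    # equation (9): p_correct[i] = model's P(C=1|X_i), labels[i] in {0, 1}
    n = len(labels)
    return -sum(math.log(p if c == 1 else 1.0 - p)
                for p, c in zip(p_correct, labels)) / n

def nce(p_correct, labels):
    # equations (10) and (11); n0/n1 are the empirical class counts
    n = len(labels)
    n0 = sum(labels)  # correct translations
    n1 = n - n0       # incorrect translations
    ce_b = -(n0 / n) * math.log(n0 / n) - (n1 / n) * math.log(n1 / n)
    return (ce_b - cross_entropy(p_correct, labels)) / ce_b
```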
{
"text": "The MLP tends to capture the relationship between the correctness of the translation and the features, and its performance depends on the selection of informative features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "We selected intuitively seven classes of features hypothesized to be informative for the correctness of a translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Translation model index: an index representing the resource of translation that produced the translation candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Translation probabilities: the probability of translating a source word with a target word. These probabilities are estimated with IBM model 1 (Brown et al., 1993) on parallel corpora. For translations from bilingual dictionaries, as no probability is provided, we carry out the following process to assign a probability to each translation pair (e, f) in a bilingual dictionary: We trained a statistical translation model on a parallel corpus. Then for each translation pair (e,f) of the bilingual dictionary, we looked up the resulting translation model and extracted the probability assigned by this translation model to the translation pair in question. Finally, the probability is normalized by the Laplace smoothing method:",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "\u2211 = + + = n i i STM STM BD e f p e f p e f p 1 1 ) | ( 1 ) | ( ) | ( (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Where n is the number of translations proposed by the bilingual dictionary to the word e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
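A sketch of this probability assignment for dictionary entries, following equation (12) as reconstructed above; the `stm` lookup table and the candidate list are assumed structures.

```python
def bd_probabilities(e, bd_translations, stm):
    # stm maps (source, target) pairs to IBM model 1 probabilities;
    # Laplace smoothing adds 1 per candidate before normalizing.
    n = len(bd_translations)
    raw = [stm.get((e, f), 0.0) for f in bd_translations]
    denom = sum(raw) + n
    return {f: (p + 1.0) / denom for f, p in zip(bd_translations, raw)}
```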
{
"text": "Translation ranking: This class of features includes two features: The rank of the translation provided by each resource and the probability difference between the translation and the highest probability translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Reverse translation information: This includes the probability of translation of a target word to a source word. Other features measure the rank of source word in the list of translations of the target word and if the source word holds in the best translations of the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Translation \"Voting\": This feature aims to know whether the translation is voted by more than one resource. The more a same translation is voted the more likely it may be correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Source sentence-related features: One feature measures the frequency of the source word in the source sentence. Another feature measures the number of source words in the source sentence that have a translation relation with the translation in question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "We use the unigram, the bigram and the trigram language models for source and target words on the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language model features:",
"sec_num": null
},
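As an illustration of how these seven feature classes could be packed into the input vector X of the MLP, here is a hedged sketch; the paper does not spell out the exact encodings, so every structure below is an assumption.

```python
# Hypothetical feature assembly for one candidate (src, tgt) from resource k.
# resources[k] is {source: {target: prob}}, rev[k] the reverse direction,
# lm maps a word to assumed (unigram, bigram, trigram) scores.
def make_features(src, tgt, k, resources, rev, lm):
    cands = resources[k].get(src, {})
    p = cands.get(tgt, 0.0)
    ranked = sorted(cands.values(), reverse=True)
    rank = ranked.index(p) + 1 if p in ranked else len(ranked) + 1
    votes = sum(tgt in r.get(src, {}) for r in resources)  # translation "voting"
    return [
        float(k),                             # translation model index
        p,                                    # translation probability
        float(rank),                          # rank within this resource
        (ranked[0] - p) if ranked else 0.0,   # gap to the top candidate
        rev[k].get(tgt, {}).get(src, 0.0),    # reverse translation probability
        float(votes),
        *lm.get(tgt, (0.0, 0.0, 0.0)),        # language model features
    ]
```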
{
"text": "The corpus used for training confidence is the same as the corpus for tuning parameters for the linear combination. It is a set of aligned sentences. Source sentences are translated to the target language word by word using baseline models. We translated each source word with the most probable 7 translations for the translation models and the best five translations provided by the bilingual dictionaries. Translations are then compared to the reference sentence to build a labeled corpus: a translation of a source word is considered to be correct if it occurs in the reference sentence. The word order is ignored, but the number of occurrences is taken into account. This metric fits well our context of IR: IR models are based on \"bag of words\" principle and the order of words is not considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training for confidence measures",
"sec_num": "5.3"
},
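The labeling scheme can be sketched directly; the bag-of-words matching that respects occurrence counts is from the paper, while the data structures are assumed.

```python
from collections import Counter

def label_translations(proposed, reference_tokens):
    # proposed: list of (source_word, translation) candidates for one sentence;
    # a translation is correct if an unused occurrence exists in the reference.
    budget = Counter(reference_tokens)
    labeled = []
    for src, trans in proposed:
        if budget[trans] > 0:
            budget[trans] -= 1
            labeled.append((src, trans, 1))  # correct
        else:
            labeled.append((src, trans, 0))  # incorrect
    return labeled
```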
{
"text": "We test with various numbers of hidden units (from 5 to 100). We used the NCE metric to compare the performance of different architectures. The MLP with 50 hidden units gave the best performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training for confidence measures",
"sec_num": "5.3"
},
{
"text": "To test the performance of individual features, we experimented with each class of features alone. The best features are the translation \"voting\", language model features and the translation probabilities. The translation \"voting\" is very informative because it presents the translation probability attributed by each resource to the translation in question. The translation ranking, the reverse translation information, the translation model index and the source sentence-related features provide some marginally useful information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training for confidence measures",
"sec_num": "5.3"
},
{
"text": "The experiments are designed to test whether the confidence measure approach is effective for query translation, and how it compares with the traditional linear combination. We will conduct two series of experiments, one for English-French CLIR and another for English-Arabic CLIR. 7 The translations with the probability p(f|e)\u22650.1",
"cite_spans": [
{
"start": 282,
"end": 283,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLIR experiments",
"sec_num": "6"
},
{
"text": "English-French CLIR: We use English queries to retrieve French documents. In our experiments, we use two document collections: one from TREC 8 and another from CLEF 9 (SDA). Both collections contain newspaper articles. TREC collection contains 141 656 documents and CLEF collection 44 013 documents. We use 4 query sets: 3 from TREC (TREC6 (25 queries), TREC7 (28 queries), TREC8 (28 queries)) and one from CLEF (40 queries).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "English-Arabic CLIR: For these experiments, we use English queries to retrieve Arabic documents. The test corpus is the Arabic TREC collection which contains 383 872 documents. For topics, we use two sets: TREC2001 (25 queries) and TREC2002 (50 queries). Documents and queries are stemmed and stopwords are removed. The Porter stemming is used to stem English queries and French documents. Arabic documents are stemmed using linguisticbased stemming method (Kadri, 2006) . The query terms are translated with the baseline models (Section 4). The resulting translations are then submitted to the information retrieval process. We tested with different ways to assign weights to translation candidates: translations from each resource, linear combination and confidence measures.",
"cite_spans": [
{
"start": 457,
"end": 470,
"text": "(Kadri, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "When using each resource separately, we attribute the IBM 1 translation probabilities to our translations. For each query term, we take only translations with the probability p(f|e)\u22650.1 when using translation models and the five best translations when using bilingual dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "The tuned parameters assigned to each translation resource are as follows: English-Arabic CLR: STM-Web: 0.29, STM-UN: 0.34, Ajeeb BD: 0.14, Almisbar BD: 0.22. English-French CLR: STM-Web: 0.3588, STM-Hansard: 0.6408, Freedict BD: 0.0003.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear combination (LC)",
"sec_num": "6.2"
},
{
"text": "These weights produced the best log-likelihood of the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear combination (LC)",
"sec_num": "6.2"
},
{
"text": "For CLIR, the above combinations are used to combine translation candidates from different resources. The tables below show the CLIR effectiveness (mean average precision -MAP) of individual models and the linear combination. STM-Hansard 0.25 (64%) 0.24 (70%) 0.33 (75%) 0.30 (75%) Freedict BD 0.17 (43%) 0.11 (32%) 0.13 (29%) 0.14 35%Linear Comb. 0.26 (66%) 0.26 (76%) 0.36 (81%) 0.30 75%Table2. English-French CLIR performance (MAP) with individual models and linear combination",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear combination (LC)",
"sec_num": "6.2"
},
{
"text": "We observe that the performance is quite different from one model to another. The low score recorded by the STMs for English-Arabic CLIR compared to the score of STMs for English-French CLIR is possibly due to the small data set on which the English-Arabic STMs are trained. A set of 2816 English-Arabic pairs of documents is not enough to build a reasonable STM. For English-Arabic CLIR, BDs present better performance than STMs because they cover almost all query terms and they provide multiple good translations to each query term. When combining all the resources, the performance is supposed to be better because we would like to take advantage of each of the models. However, we see that the combined model performs even worse than one of the models -Ajeeb BD for English-Arabic CLIR. This shows that the linear combination is not necessarily a good way to combine different translation resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear combination (LC)",
"sec_num": "6.2"
},
{
"text": "An example of English queries is shown in Table 3: \"What measures are being taken to develop tourism in Cairo?\". The Arabic translation provided by TREC to the word \"measures\" is: \u202b.\"\u0625\ufe9f\u0631\u0627\ufe80\u0627\u062a\"\u202c We see clearly that translations with different resources are different. Some resources propose inappropriate translations such as \u202b\"\ufee3\ufedc\ufef3\u0627\u0644\"\u202c or \u202b.\"\ufee3\ufef3\u0632\u0627\u0646\"\u202c Even if two resources suggest the same translations, the weights are different. For this query, the linear combination produces better query translation terms than every resource taken alone: The most probable translations are selected from the combined list. However, this method is unable to attribute an appropriate weight to the best translation \u202b;\"\u0625\ufe9f\u0631\u0627\ufe80\u0627\u062a\"\u202c it is selected but ranked at third position with a weak weight. In these experiments, we use confidence measures as weights for translations. According to these confidence measures, we select the translations with the best confidences for each query term. The following tables show the results: In terms of MAP, we see clearly that the results using confidence measures are better than those obtained with the linear combination. The twotailed t-test shows that the improvement brought by confidence measure over linear combination is statistically significant at the level P<0.05. This improvement in CLIR performance is attributed to the ability of confidence measure to re-weight each translation candidate. The final sets of translations (and their probabilities) are more reasonable than in linear combination. The tables below show some examples where we get a large improvement in average precision when using confidence measures to combine resources. The first example is the TREC 2001 query \"What measures are being taken to develop tourism in Cairo?\". The translation of the query term \"measures\" to Arabic using the two methods is presented in table 6. The second example is the TREC6 query \"Acupuncture\". Table 7 presents the translation of this query term is to French using the two techniques: In the example of table 6, confidence measure has been able to redeem the best translation \u202b\"\u0625\ufe9f\u0631\u0627\ufe80\u0627\u062a\"\u202c and rescore it with a stronger weight than the other incorrect or inappropriate ones. The same effect is observed in the example of table 7. Confidence measure has been able to increase the correct translation \"acupuncture\" to a higher level than the other incorrect ones. These examples show the potential advantage of confidence measure over linear combination: The confidence measure does not blindly trust all the translations from different resources. It tests their validity on new validation data. Thus, the translation candidates are rescored and filtered according to a more reliable weight.",
"cite_spans": [],
"ref_spans": [
{
"start": 1928,
"end": 1935,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linear combination (LC)",
"sec_num": "6.2"
},
{
"text": "Multiple translation resources are believed to contribute in improving the quality of query translation. However, in most previous studies, only linear combination has been used. In this study, we propose a new method based on confidence measure to combine different translation resources. The confidence measure estimates the probability of correctness of a translation, given a set of features available. The measure is used to weight the translation candidates in a unified manner. It is also expected that the new measure is more reasonable than the original measures because of the use of additional features. Our experiments on both English-Arabic and English-French CLIR have shown that confidence measure is a better way to combine translation resources than linear combination. This shows that confidence measure is a promising approach to combine non homogenous resources and can be further improved on several aspects. For example, we can optimize this technique by identi-fying other informative features. Other techniques for computing confidence estimates can also be used in order to improve the performance of CLIR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://www.ajeeb.com/ 4 http://www.almisbar.com/ 5 http://www.freedict.com/ 6 By translation, we mean the pair of source word and its translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://trec.nist.gov/ 9 http://www.clef-campaign.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Confidence estimation for machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sanchis",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ueffing",
"suffix": ""
}
],
"year": 2003,
"venue": "CLSP/JHU 2003 Summer Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Blatz, E. Fitzgerald, G. Foster, S. Gandrabur, C. Goutte, A. Kulesza, A. Sanchis and N. Ueffing. 2003. Confidence estimation for machine translation. Technical Report, CLSP/JHU 2003 Summer Work- shop, Baltimore MD.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. Pietra, V. J. Pietra and R. L. Mercer. 1993. The mathematics of statistical machine transla- tion: Parameter estimation. Computational Linguis- tics, 19(2):263-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "TREC 2002 Cross-lingual Retrieval at BBN. TREC11 conference",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Fraser, J. Xu and R. Weischedel. 2002. TREC 2002 Cross-lingual Retrieval at BBN. TREC11 conference.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Confidence Estimation for Text Prediction",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the CoNLL 2003 Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Gandrabur and G. Foster. 2003. Confidence Estima- tion for Text Prediction. Proceedings of the CoNLL 2003 Conference, Edmonton.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Recognition confidence scoring for use in speech understanding systems",
"authors": [
{
"first": "T",
"middle": [
"J"
],
"last": "Hazen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Burianek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech and Language",
"volume": "16",
"issue": "",
"pages": "49--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. J. Hazen, T. Burianek, J. Polifroni and S. Seneff. 2002. Recognition confidence scoring for use in speech understanding systems. Computer Speech and Language, 16:49-67.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Query translation for English-Arabic cross language information retrieval",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kadri",
"suffix": ""
},
{
"first": "J",
"middle": [
"Y"
],
"last": "Nie",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the TALN conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Kadri and J. Y. Nie. 2004. Query translation for English-Arabic cross language information retrieval. Proceedings of the TALN conference.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Effective stemming for Arabic information retrieval. The challenge of Arabic for NLP/MT Conference",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kadri",
"suffix": ""
},
{
"first": "J",
"middle": [
"Y"
],
"last": "Nie",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Kadri and J. Y. Nie. 2006. Effective stemming for Arabic information retrieval. The challenge of Ara- bic for NLP/MT Conference. The British Computer Society. London, UK.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual information retrieval based on parallel texts from the Web",
"authors": [
{
"first": "J",
"middle": [
"Y"
],
"last": "Nie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "2000",
"issue": "",
"pages": "188--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Y. Nie, M. Simard and G Foster. 2000. Multilingual information retrieval based on parallel texts from the Web. In LNCS 2069, C. Peters editor, CLEF2000:188-201, Lisbon.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cross-Language Information Retrieval",
"authors": [
{
"first": "D",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Diekema",
"suffix": ""
}
],
"year": 1998,
"venue": "Annual review of Information science",
"volume": "",
"issue": "",
"pages": "223--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. W. Oard and A. Diekema. 1998. Cross-Language Information Retrieval. In M. Williams (ed.), Annual review of Information science, 1998:223-256.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Empirical studies on the impact of lexical resources on CLIR performance",
"authors": [
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2005,
"venue": "formation processing & management",
"volume": "41",
"issue": "",
"pages": "475--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Xu and R. Weischedel. 2005. Empirical studies on the impact of lexical resources on CLIR performance. In- formation processing & management, 41(3):475-487.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Model-based feedback in the language modeling approach to information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Zhai and J. Lafferty. 2001a. Model-based feedback in the language modeling approach to information re- trieval. CIKM 2001 Conference.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A study of smoothing methods for language models applied to ad hoc information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ACM-SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Zhai and J. Lafferty. 2001b. A study of smoothing methods for language models applied to ad hoc in- formation retrieval. Proceedings of the ACM-SIGIR.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">Linear Comb. Acupuncture 0.13 (acupuncture), sevrage</td></tr><tr><td/><td>0.13 (severing), hypnose 0.13 (hypnosis)</td></tr><tr><td>Conf. meas.</td><td>Acupuncture 0.21, sevrage 0.17, hypnose</td></tr><tr><td/><td>0.14</td></tr><tr><td colspan=\"2\">Table7. Translation examples to French</td></tr></table>",
"html": null,
"text": "Trans.Model Translation(s) of term \"measures\" Linear Comb. \u202b\ufe97\u062f\u0627\ufe91\ufef3\u0631\u202c 0.61, \u202b\ufee3\ufed7\ufef3\u0627\u0633\u202c 0.037, \u202b\u0625\ufe9f\u0631\u0627\ufe80\u0627\u062a\u202c 0.029, \u202b\ufed7\ufef3\u0627\u0633\u202c 0.020 Conf. meas. \u202b\u0625\ufe9f\u0631\u0627\ufe80\u0627\u062a\u202c 0.51, \u202b\ufed7\u062f\u0631\u202c 0.10, \u202b\ufed7\ufef3\u0627\u0633\u202c 0.06 Table6. Translation examples to Arabic Trans.model Translation(s) of term \"Acupuncture\""
}
}
}
}