{ "paper_id": "O12-5002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:51.773080Z" }, "title": "TQDL: Integrated Models for Cross-Language Document Retrieval", "authors": [ { "first": "Long-Yue", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory", "institution": "Macau S. A. R", "location": { "country": "China" } }, "email": "" }, { "first": "Derek", "middle": [ "F" ], "last": "Wong", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory", "institution": "Macau S. A. R", "location": { "country": "China" } }, "email": "" }, { "first": "Lidia", "middle": [ "S" ], "last": "Chao", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory", "institution": "Macau S. A. R", "location": { "country": "China" } }, "email": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory", "institution": "Macau S. A. R", "location": { "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes an integrated approach to Cross-Language Information Retrieval (CLIR) which integrates four statistical models: a translation model, a query generation model, a document retrieval model, and a length filter model. Given a document in the source language, it is first translated into the target language by the statistical machine translation model. The query generation model then selects the most relevant words in the translated version of the document as a query.
Instead of retrieving all the target documents with the query, the length-based model filters out a large number of irrelevant candidates according to their length information. Finally, the remaining documents in the target language are scored by the document retrieval model, which computes the similarity between the query and each document.", "pdf_parse": { "paper_id": "O12-5002", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes an integrated approach to Cross-Language Information Retrieval (CLIR) which integrates four statistical models: a translation model, a query generation model, a document retrieval model, and a length filter model. Given a document in the source language, it is first translated into the target language by the statistical machine translation model. The query generation model then selects the most relevant words in the translated version of the document as a query. Instead of retrieving all the target documents with the query, the length-based model filters out a large number of irrelevant candidates according to their length information. Finally, the remaining documents in the target language are scored by the document retrieval model, which computes the similarity between the query and each document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Unlike traditional parallel-corpus-based models that rely on the IBM algorithm, we divide our CLIR model into four independent parts that work together to deal with term disambiguation, query generation and document retrieval. Besides, the TQDL method can efficiently solve the problems of translation ambiguity and query expansion for disambiguation, which are major issues in Cross-Language Information Retrieval. Another contribution is the length filter, which is trained from a parallel corpus according to the length ratio between the two languages.
This not only improves the recall value, by dynamically filtering out many useless documents, but also increases efficiency through a smaller search space. Therefore, precision can be improved, but not at the cost of recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "With the flourishing development of the Internet, the amount of information from a variety of domains is growing rapidly. Especially after the advent of the World Wide Web (WWW) in the 1990s, the amount of online information from the government, scientific and business communities has risen dramatically. Although much work has been done to develop effective and efficient retrieval systems for monolingual resources, the diversity and the explosive growth of information in different languages drove a great need for information retrieval that could cross language boundaries (Ballesteros et al., 1988) .", "cite_spans": [ { "start": 582, "end": 608, "text": "(Ballesteros et al., 1988)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The issues of CLIR have been discussed for several decades. Its task addresses a situation in which a user tries to search a set of documents written in one language using a query in a different language (Kishida, 2005) . It is of great significance, allowing people to access information resources written in non-native languages and aligning documents for statistical machine translation (SMT) systems, whose quality is heavily dependent upon the amount of parallel sentences used in constructing the system.", "cite_spans": [ { "start": 204, "end": 219, "text": "(Kishida, 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we focus on the problems of translation ambiguity, query generation and search scoring, which are key to retrieval performance.
First of all, in order to increase the probability that the best translation, i.e. the one that occurs in the target documents, is selected from multiple candidates, the context and the most likely probability of the whole sentence should be considered. We therefore apply the document translation approach using an SMT model instead of query translation, although the latter may require fewer computational resources. After the source documents are translated into the target language, the problem is transformed from a bilingual environment into a monolingual one, where conventional IR techniques can be used for document retrieval. Secondly, some terms in a document are selected as a query that can distinguish the document from the others. However, some words occur too frequently to be useful and cannot distinguish target documents. This mostly includes two cases: in one, the word frequency is high in all the documents of a set, and such a word is usually classified as a stop word; in the other, the frequency is moderate in several documents of a set. Such words are poor at distinguishing documents. Thus, the query generation model should pick words that occur frequently in a given document but infrequently in the other documents. Finally, the document retrieval model evaluates the similarity between the query and each document. This model should give a higher score to the target document that covers the most relevant words of the given query. However, another problem is that word overlap between a query and a wrong document is more probable when the document and the query are expressed in the same language. For example, suppose document A is larger and contains another, smaller document B; the retrieval system would then be confused by a query built from the information of B. In order to solve this problem, the length ratio of a language pair is considered.
As the search space is reduced, both the retrieval speed and the recall value are clearly improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two cases to be considered when investigating the method. In one case, the lengths of the documents are uneven, which makes it hard to balance the scores between large and small documents. In the other case, the contents of the documents are very similar, which makes them difficult to distinguish for retrieval. The results of the experiments reveal that the proposed model performs very well in both cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The paper is organized as follows. Related work is reviewed and discussed in Section 2. The proposed CLIR approach based on statistical models is described in Section 3. The resources and configurations of the experiments for evaluating the system are detailed in Section 4. Results, discussion and a comparison between different strategies are given in Section 5, followed by a conclusion and future improvements to end the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The issues of CLIR have been discussed from different perspectives for several decades. In this section, we briefly describe some related methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "From a statistical perspective, the CLIR problem can be treated as document alignment. Given a set of parallel documents, the alignment that maximizes the probability over all possible alignments is retrieved (Gale & Church, 1991) as follows:", "cite_spans": [ { "start": 209, "end": 230, "text": "(Gale & Church, 1991)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2."
}, { "text": "argmax_A Pr(A | D_s, D_t) ≈ argmax_A ∏_{(L_s ↔ L_t) ∈ A} Pr(L_s ↔ L_t) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "where A is an alignment, D_s and D_t are the source and target documents, respectively, L_s and L_t are text segments in the two languages, L_s ↔ L_t is an individual aligned pair, and an alignment A is a set consisting of L_s ↔ L_t pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Regarding matching strategies for CLIR, query translation is the most widely used method due to its tractability (Gao et al., 2001) . However, it is relatively difficult to resolve the problem of term ambiguity because \"queries are often short and short queries provide little context for disambiguation\" (Oard & Diekema, 1998) . Hence, some researchers have used the document translation method as the opposite strategy to improve translation quality, since more varied context within each document is available for translation (Braschler & Schauble, 2001; Franz et al., 1999) .", "cite_spans": [ { "start": 106, "end": 124, "text": "(Gao et al., 2001)", "ref_id": "BIBREF10" }, { "start": 298, "end": 320, "text": "(Oard & Diekema, 1998)", "ref_id": "BIBREF15" }, { "start": 520, "end": 548, "text": "(Braschler & Schauble, 2001;", "ref_id": "BIBREF2" }, { "start": 549, "end": 568, "text": "Franz et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "However, another problem introduced by this approach is word (term) disambiguation, because a word may have multiple possible translations (Oard & Diekema, 1998) . Significant efforts have been devoted to this problem. Davis and Ogden (1997) applied a part-of-speech (POS) method which requires POS tagging software for both languages. Federico et al.
presented a novel statistical method to score and rank target documents by integrating probabilities computed by a query-translation model and a query-document model (Federico & Bertoldi, 2002) . However, this approach does not aim at describing how users actually create queries, which has a key effect on retrieval performance. Due to the availability of parallel corpora in multiple languages, some authors have tried to extract beneficial information for CLIR by using SMT techniques. S\u00e1nchez-Mart\u00ednez et al. (S\u00e1nchez-Mart\u00ednez & Carrasco, 2011) applied SMT technology to generate and translate queries in order to retrieve long documents.", "cite_spans": [ { "start": 145, "end": 167, "text": "(Oard & Diekema, 1998)", "ref_id": "BIBREF15" }, { "start": 225, "end": 247, "text": "Davis and Ogden (1997)", "ref_id": "BIBREF6" }, { "start": 524, "end": 551, "text": "(Federico & Bertoldi, 2002)", "ref_id": "BIBREF7" }, { "start": 873, "end": 908, "text": "(S\u00e1nchez-Mart\u00ednez & Carrasco, 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Researchers such as Federico and S\u00e1nchez-Mart\u00ednez have attempted to estimate translation probabilities from a parallel corpus according to the well-known algorithm developed by IBM (Brown et al., 1993) . The algorithm can automatically generate a bilingual term list, with a set of probabilities that a term is translated into its equivalents in another language, from the sentence alignments in a parallel corpus. IBM Model 1 is the simplest of the five models and is often used for CLIR. The fundamental idea of Model 1 is to estimate each translation probability so that the probability of the sentence translation is maximized
}, { "text": "P(t | s) = ε / (l + 1)^m × ∏_{j=1}^{m} ∑_{i=0}^{l} P(t_j | s_i) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "where t is a sequence of terms t_1, \u2026, t_m in the target language, s is a sequence of terms s_1, \u2026, s_l in the source language, P(t_j | s_i) is the translation probability, and ε is a parameter (ε = P(m | s), the probability of generating a target sentence of length m from the source sentence s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Eq. (2) tries to balance the probability of translation and the query selection, but a problem remains: it tends to select terms consisting of more words as the query because of their lower frequency, while cutting the length of terms may affect the quality of translation. Besides, IBM Model 1 only proposes translations word by word and ignores the context words in the query. This observation suggests that a disambiguation process can be added to select the correct translation words (Oard & Diekema, 1998) . In our method, however, this conflict is resolved through context.", "cite_spans": [ { "start": 566, "end": 588, "text": "(Oard & Diekema, 1998)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "If translated sentences share cognates, then the character lengths of those cognates are correlated (Yang & Li, 2004) . Brown et al. (1991) and Gale and Church (1991) have developed models based on the relationship between the lengths of sentences that are mutual translations.
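The Model 1 likelihood of Eq. (2) is simple enough to sketch directly. The snippet below is an illustrative re-implementation, not the authors' code; the function name and the toy lexicon probabilities are invented for the example, and the model's NULL source token (position 0) is included explicitly.

```python
def ibm1_likelihood(target, source, trans_prob, epsilon=1.0):
    """IBM Model 1 likelihood P(t|s) per Eq. (2):
    epsilon / (l+1)^m * prod_j sum_i P(t_j | s_i),
    summing over source positions 0..l, where position 0 is NULL."""
    source = ["NULL"] + list(source)
    l, m = len(source) - 1, len(target)
    p = epsilon / (l + 1) ** m
    for t_j in target:
        p *= sum(trans_prob.get((t_j, s_i), 0.0) for s_i in source)
    return p

# Toy (hypothetical) lexical translation probabilities, not trained values
probs = {("la", "the"): 0.7, ("casa", "house"): 0.8,
         ("la", "house"): 0.05, ("casa", "the"): 0.05}
p = ibm1_likelihood(["la", "casa"], ["the", "house"], probs)
```

Because every target word sums over every source position, each additional target word multiplies in one more (usually small) factor, which is why Model 1 favors shorter, rarer terms.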
Although it has been suggested that length-based methods are language-independent (Gale & Church, 1991) , they actually rely on length correlations arising from the historical relationships of the languages being aligned.", "cite_spans": [ { "start": 100, "end": 117, "text": "(Yang & Li, 2004)", "ref_id": "BIBREF22" }, { "start": 120, "end": 139, "text": "Brown et al. (1991)", "ref_id": "BIBREF3" }, { "start": 144, "end": 166, "text": "Gale and Church (1991)", "ref_id": "BIBREF9" }, { "start": 360, "end": 381, "text": "(Gale & Church, 1991)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The length-based model assumes that each term in L_s is responsible for generating some number of terms in L_t. This leads to a further approximation that encapsulates the dependence in a single parameter δ, where δ(l_s, l_t) is a function of l_s and l_t that can be designed according to different language pairs. The length-based method is developed based on the approximation given in Eq. (3). The approach relies on four models: the translation model, which generates the most probable translation of the source documents; the query generation model, which determines which words in a document are more favorable to use in a query; the length filter model, which dynamically creates a subset of candidates for retrieval according to length information; and the document retrieval model, which evaluates the similarity between a given query and each document in the target document set. The workflow of the approach for CLIR is shown in Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 919, "end": 925, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Pr(L_s ↔ L_t | L_s, L_t) ≈ Pr(L_s ↔ L_t | δ(l_s, l_t)) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2."
}, { "text": "Currently, the best performing statistical machine translation systems are based on phrase-based models, which translate small word sequences at a time. Generally speaking, such translation models allow contiguous sequences of words to be translated as a whole. Phrasal translation is certainly significant for CLIR (Ballesteros & Croft, 1997) , as stated in Section 1, and it does a good job of dealing with term disambiguation.", "cite_spans": [ { "start": 313, "end": 340, "text": "(Ballesteros & Croft, 1997)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1" }, { "text": "In this work, documents are translated using the translation model provided by Moses, where the log-linear model is considered for training the phrase-based system models (Och & Ney, 2002) ", "cite_spans": [ { "start": 171, "end": 188, "text": "(Och & Ney, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1" }, { "text": "P(t | s) = exp(∑_{m=1}^{M} λ_m h_m(t, s)) / ∑_{t'} exp(∑_{m=1}^{M} λ_m h_m(t', s)) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1" }, { "text": "where h_m indicates a set of different feature models, λ_m are the corresponding scaling factors, and the denominator can be ignored during the maximization process. The most important models in Eq. (4) are normally the phrase-based models, which are applied in both the source-to-target and target-to-source directions. The source document maximizes the equation to generate the translation containing the words most likely to occur in the target document set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.1" }, { "text": "After translating the source document into the target language, the system should select a certain number of words as a query for searching instead of using the whole translated text.
There are two reasons for this: one is the computational cost, and the other is that unimportant words degrade the similarity score. This is also why Internet search engines often return nothing when a whole text is chosen as a query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "In this paper, we apply a classical algorithm which is commonly used by search engines as a central tool for scoring and ranking the relevance of a document given a user query. Term Frequency-Inverse Document Frequency (TF-IDF) calculates a value for each word in a document from the frequency of the word in that particular document, weighted by the inverse proportion of documents in which the word appears (Ramos, 2003) . Given a document collection D, a word w, and an individual document d \u2208 D, we calculate P(w, d):", "cite_spans": [ { "start": 415, "end": 428, "text": "(Ramos, 2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w, d) = f(w, d) × log(|D| / f(w, D))", "eq_num": "(5)" } ], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "where f(w, d) denotes the number of times w appears in d, |D| is the size of the corpus, and f(w, D) indicates the number of documents in D in which w appears (Berger et al., 2000) .", "cite_spans": [ { "start": 163, "end": 184, "text": "(Berger et al., 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "In implementation, if w is an Out-of-Vocabulary (OOV) term, the denominator f(w, D) becomes zero, which is problematic (division by zero). Thus, our model sets log(|D| / f(w, D)) = 1 (IDF = 1) when this situation occurs.
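Eq. (5), together with the IDF = 1 fallback for OOV terms, can be sketched in a few lines. This is an illustrative re-implementation under assumed inputs (pre-tokenized documents); the function and variable names are ours, not the authors'.

```python
import math
from collections import Counter

def tfidf(doc, collection):
    """Score each word of doc by Eq. (5):
    P(w, d) = f(w, d) * log(|D| / f(w, D)),
    with log(|D| / f(w, D)) := 1 when w appears in no indexed document."""
    n = len(collection)
    df = Counter()                      # document frequency f(w, D)
    for d in collection:
        df.update(set(d))
    tf = Counter(doc)                   # term frequency f(w, d)
    return {w: f * (math.log(n / df[w]) if df[w] else 1.0)
            for w, f in tf.items()}

docs = [["press", "the", "red", "button"],
        ["the", "green", "button"],
        ["red", "alert", "red", "alert"]]
scores = tfidf(docs[2], docs)
# "alert" occurs only in the scored document, so it outranks "red"
```

Ranking the resulting dictionary by value and keeping the top-scoring words yields the query terms described in this section.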
Additionally, a list of stop-words in the target language is used in query generation to remove words that have high frequency but little discriminating power. Numbers are treated as useful terms in our model, since they also play an important role in distinguishing documents. Finally, after evaluating and ranking all the words in a document by their scores, we take a portion of the (n-best) words for constructing the query, guided by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "Size_q = [λ_percent × Len_d] (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "Size_q is the number of terms in the query. λ_percent is a manually defined percentage, which determines Size_q according to Len_d, the length of the document. The model uses the top Size_q words as the query. In other words, the larger the document, the more words are selected as the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Generation Model", "sec_num": "3.2" }, { "text": "In order to use the generated query for retrieving documents, the core algorithm of the document retrieval model is derived from the Vector Space Model (VSM).
Our system uses this model to calculate the similarity of each indexed document to the input query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3" }, { "text": "The final scoring formula is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3" }, { "text": "Score(q, d) = coord(q, d) × ∑_{t in q} tf(t, d) × idf(t) × bst × norm(t, d) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3" }, { "text": "where tf(t, d) is the term frequency factor for term t in document d, idf(t) is the inverse document frequency of term t, and coord(q, d) is a factor based on how many of the query terms occur in the document. bst is a boost weight for each term in the query. norm(t, d) encapsulates a few (indexing-time) boost and length factors, for instance weights for each document and field. In summary, this model takes into account many factors that can affect the overall score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Retrieval Model", "sec_num": "3.3" }, { "text": "In order to obtain a suitable filter, we first analyzed the golden data 1 of the ACL Workshop on SMT 2011, which covers 5 languages (Spanish, English, French, German and Czech) and 10 language pairs. The English-Spanish language pair was used for the analysis, and the data of the corpus are summarized in Table 1 . Figure 2 plots the distribution of the number of words in each pair of aligned sentences, where l_t is the length of the English sentence and l_s is the length of the Spanish sentence. The expectation is c = E(l_t / l_s) = 1.0073, with correlation R^2 = 0.9157.
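The ratio statistics and the widened 4δ filtering rule described in this section can be sketched as follows. The lengths below are toy numbers, not the WMT data, and reading the interval C of Eq. (9) as a window relative to the source length (δ taken as a fraction of length_s) is our interpretation of the text.

```python
def length_stats(pairs):
    """c = E(l_t / l_s) and the average delta of Eq. (8),
    delta = E(|l_t - l_s| / l_s), over aligned length pairs (l_s, l_t)."""
    c = sum(lt / ls for ls, lt in pairs) / len(pairs)
    delta = sum(abs(lt - ls) / ls for ls, lt in pairs) / len(pairs)
    return c, delta

def length_filter(src_len, cand_lens, delta):
    """F of Eq. (9) with the widened 4*delta threshold: keep a target
    length only if it lies within src_len * (1 - 4*delta, 1 + 4*delta)."""
    lo, hi = src_len * (1 - 4 * delta), src_len * (1 + 4 * delta)
    return [l for l in cand_lens if lo <= l <= hi]

# Toy aligned sentence lengths (source, target)
pairs = [(20, 21), (10, 9), (30, 32)]
c, delta = length_stats(pairs)
kept = length_filter(20, [15, 19, 30, 60], delta)
```

Only candidates inside the window survive, which is how the filter shrinks the search space before the retrieval model scores anything.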
This shows that the data points are not substantially scattered in the plot, and most lie along the regression line. Therefore, it is suitable to design a filter based on the length ratio. ", "cite_spans": [], "ref_spans": [ { "start": 296, "end": 303, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 306, "end": 307, "text": "2", "ref_id": null } ], "eq_spans": [], "section": "Length Filter Model", "sec_num": "3.4" }, { "text": "To obtain an estimated length threshold (δ) for the filter model, the function δ(l_s, l_t) can be designed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The length ratio of Spanish-English sentences.", "sec_num": null }, { "text": "δ(l_s, l_t) = |l_t - l_s| / l_s (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The length ratio of Spanish-English sentences.", "sec_num": null }, { "text": "where l_s and l_t stand for the lengths of the two sides of an aligned sentence pair in the corpus we used. Finally, we obtained an average δ of around 0.15. In implementation, we choose 4δ instead of δ to avoid abnormal cases in which the right document would be discarded by the filter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The length ratio of Spanish-English sentences.", "sec_num": null }, { "text": "Filter F describes the relation between bilingual sentences based on the length ratio. Since western languages are similar in terms of word representation, the length ratio can be simply estimated as 1:1. Given a certain document in the source language, F collects a subset of candidates for retrieval according to the average length ratio. F is designed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. 
The length ratio of Spanish-English sentences.", "sec_num": null }, { "text": "F = 1 if length_t ∈ C, and F = 0 if length_t ∉ C, where C = [length_s - δ, length_s + δ] (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The length ratio of Spanish-English sentences.", "sec_num": null }, { "text": "where length_s is the length of the source document and length_t is the length of a target document. δ is the average threshold obtained through Eq. (8), and C is a confidence interval. If length_t falls within C, F is 1 and the document has a chance to be retrieved; otherwise F is 0 and the document is skipped during searching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The length ratio of Spanish-English sentences.", "sec_num": null }, { "text": "In order to evaluate the retrieval performance of the proposed model on texts across languages, we use the Europarl corpus 2 , which is a collection of parallel texts in 11 languages from the proceedings of the European Parliament (Koehn, 2005) . The corpus is commonly used for the construction and evaluation of statistical machine translation systems. It consists of spoken records of sessions held at the European Parliament, labeled with corresponding IDs. The corpus is quite suitable for training the proposed probabilistic models between different language pairs (e.g. English-Spanish, English-French, English-German, etc.), as well as for evaluating the retrieval performance of the system. The datasets (training and test sets) were collected for this evaluation: the chapters from April 1998 to October 2006 were used as the training set for model construction, both for training the Language Model (LM) and the Translation Model (TM), while the chapters from April 1996 to March 1998 were used as the test set for evaluating the performance of the model.
Besides, each paragraph (split by label) is treated as a document, in order to deal with low discrimination power. The analytical data of the corpus are presented in Table 2 . The TestSet contains 23,342 documents, whose average length is 309 words; in fact, 30% of the documents are considerably longer or shorter than the average. Table 1 summarizes the number of documents, sentences and words, and the average number of words per document.", "cite_spans": [ { "start": 232, "end": 245, "text": "(Koehn, 2005)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 1279, "end": 1286, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1436, "end": 1443, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "The most frequent and basic evaluation metrics for information retrieval are precision, the fraction of the retrieved documents that are relevant, and recall, the fraction of the relevant documents that are retrieved (Manning et al., 2008) . For reporting the evaluation of our method, we used the F1 measure together with the recall and precision values. The F1 measure (F) is formulated by Van Rijsbergen as a combination of recall (R) and precision (P) with equal weights, in the following form:", "cite_spans": [ { "start": 128, "end": 150, "text": "(Manning et al., 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F = 2PR / (P + R)", "eq_num": "(12)" } ], "section": "Evaluation Metrics", "sec_num": "4.2" }, { "text": "In order to evaluate our proposed model, the following tools have been used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "The probabilistic LMs are constructed on monolingual corpora by using SRILM (Stolcke et al., 2002) .
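The evaluation metrics above reduce to a few lines of code. This sketch assumes sets of retrieved and relevant document IDs; the function and variable names are ours.

```python
def precision_recall_f1(retrieved, relevant):
    """Precision, recall, and the equally weighted F1 of Eq. (12),
    F = 2PR / (P + R), returning 0 for empty denominators."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

p, r, f = precision_recall_f1({"d1", "d2", "d3", "d4"}, {"d1", "d2"})
# P = 0.5, R = 1.0, F1 = 2/3
```

Because F1 is the harmonic mean of P and R, it rewards systems only when both values are high at once, which is why it is used for reporting alongside the individual scores.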
We use GIZA++ (Och & Ney, 2003) to train the word alignment models for the different language pairs of the Europarl corpus, and the phrase pairs that are consistent with the word alignment are extracted. To construct the phrase-based statistical machine translation model, we use the open-source Moses (Koehn et al., 2007) toolkit, and the translation model is trained based on the log-linear model, as given in Eq. (4). The workflow of constructing the translation model is illustrated in Fig. 3 and consists of the following main steps 3 :", "cite_spans": [ { "start": 80, "end": 102, "text": "(Stolcke et al., 2002)", "ref_id": "BIBREF21" }, { "start": 119, "end": 136, "text": "(Och & Ney, 2003)", "ref_id": "BIBREF17" }, { "start": 411, "end": 431, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 599, "end": 605, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "(1) Preparation of the aligned parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "(2) Preprocessing of the training data: tokenization, case conversion, and sentence filtering, where sentences longer than fifty words are removed from the corpus in order to comply with the requirements of Moses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "(3) A 5-gram LM is trained on the Spanish data with the SRILM toolkit (Stolcke, 2002).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "3 See http://www.statmt.org/wmt09/baseline.html for a detailed description of Moses training options.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.3" }, { "text": "(4) The phrase-based SMT model is then trained on the prepared English-Spanish parallel corpus based on the log-linear model, using the nine training steps suggested in Moses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "25", "sec_num": null }, { "text": "Once the LM and TM have been obtained, we evaluate the proposed method with the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Main workflow of training phase", "sec_num": null }, { "text": "(1) The source documents are first translated into the target language using the constructed translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Main workflow of training phase", "sec_num": null }, { "text": "(2) The word candidates are scored and ranked by a TF-IDF algorithm, and the n-best candidates are then selected to form the query according to Eqs. (5) and (6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Main workflow of training phase", "sec_num": null }, { "text": "(3) All the target documents are stored and indexed using Apache Lucene 4 as our default search engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Main workflow of training phase", "sec_num": null }, { "text": "(4) In retrieval, the target documents are scored and ranked by the document retrieval model, which returns the list of the most relevant documents according to Eq. (7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Main workflow of training phase", "sec_num": null }, { "text": "A number of experiments have been performed to investigate our proposed method under different settings. 
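Step (2) of the testing phase above, TF-IDF-based query generation, can be sketched as follows. This is a minimal illustration assuming a plain tf·idf weighting; the exact form of Eqs. (5) and (6) may differ, and the function and variable names are our own:

```python
import math
from collections import Counter

def build_query(doc_tokens, corpus, size_q=0.08):
    """Rank the words of one (translated) document by TF-IDF over the
    indexed collection and keep the top size_q fraction as the query.
    A sketch only; the paper's exact weighting may differ."""
    n_docs = len(corpus)
    # document frequency of each term across the collection
    df = Counter()
    for d in corpus:
        df.update(set(d))
    tf = Counter(doc_tokens)
    scores = {
        t: (tf[t] / len(doc_tokens)) * math.log(n_docs / (1 + df[t]))
        for t in tf
    }
    n_best = max(1, int(len(scores) * size_q))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_best]
```

With size_q = 0.08 (the 8.0% setting used later in the experiments), a document of 1,000 distinct candidate words yields an 80-word query.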
In order to evaluate the performance of the three independent models, we first conducted experiments to test each of them individually before evaluating the whole TQDL platform. The performance of the method is evaluated in terms of the average precision, that is, how often the target document is included within the first N-best candidate documents when retrieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "In this experiment, we want to evaluate the performance of the proposed system in retrieving documents (monolingual environment) given the query. We assume that the translations of the source documents are available, so the step of obtaining a translation of the input document can be skipped. Under this assumption, the CLIR problem can be treated as normal IR in a monolingual environment. In conducting the experiment, we used all of the source documents of the TestSet. The steps are similar to those of the testing phase described in Section 4.2, excluding the translation step. The empirical results under different configurations are presented in Table 3 , where the first column gives the number of documents returned against the number of words/terms used as the query. The results show that the proposed method gives very high retrieval accuracy, with a precision of 100%, when the top 18% of the words are used as the query. When the top 5 candidate documents are considered, the approach consistently achieves 100% retrieval accuracy with query sizes between 8% and 18%. This fully illustrates the effectiveness of the retrieval model.", "cite_spans": [], "ref_spans": [ { "start": 663, "end": 670, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Monolingual Environment Information Retrieval", "sec_num": "5.1" }, { "text": "The overall retrieval performance of the system will be affected by the quality of translation. 
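Translation quality is measured below with BLEU. For reference, the core of the metric (clipped modified n-gram precision combined with a brevity penalty) can be sketched as follows; this is an illustrative single-reference implementation, not the evaluation script actually used in the experiments:

```python
import math
from collections import Counter

def corpus_bleu(hyps, refs, max_n=4):
    """Corpus-level BLEU (Papineni et al., 2002) with one reference per
    hypothesis: clipped n-gram precisions for n=1..max_n, combined by a
    geometric mean and scaled by the brevity penalty. No smoothing is
    applied in this sketch."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    match = [0] * max_n   # clipped n-gram matches
    total = [0] * max_n   # n-grams proposed by the hypotheses
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            h, r = ngrams(hyp, n), ngrams(ref, n)
            match[n - 1] += sum(min(c, r[g]) for g, c in h.items())
            total[n - 1] += max(0, len(hyp) - n + 1)
    if 0 in match:
        return 0.0  # degenerate case; real toolkits smooth instead
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    bp = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```

Identical hypothesis and reference corpora score 100; standard toolkits additionally handle multiple references and smoothing.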
To assess the performance of the translation model we built, we employ the commonly used evaluation metric BLEU. BLEU (Bilingual Evaluation Understudy) is a classical automatic method for evaluating the translation quality of an MT system (Papineni et al., 2002). In this evaluation, the translation model is created using the parallel corpus, as described in Section 4. We use another 5,000 sentences from the TestSet1 for evaluation 5 .", "cite_spans": [ { "start": 373, "end": 396, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "The BLEU score we obtained is 32.08. This is higher than the score of 30.1 reported by Koehn (Koehn, 2005) for the same language pair of the Europarl corpus. Although we did not use exactly the same data to construct the translation model, the value of 30.1 was presented as a baseline for English-Spanish translation quality on Europarl.", "cite_spans": [ { "start": 115, "end": 128, "text": "(Koehn, 2005)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "The BLEU score shows that our translation model performs very well, thanks to the large amount of training data we used and the pre-processing tasks we designed for cleaning the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality", "sec_num": "5.2" }, { "text": "In this section, the proposed model is tested without the length filter model. Table 4 presents the F-measure given by the TQDL system without the length filter model. As illustrated, it can only achieve up to 94.7% when the desired document must be returned as the most relevant document among the candidates. 
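The precision, recall and F-measure figures discussed in the rest of this section can be related by a small helper. This reflects our reading of the protocol (precision over the documents actually searched, recall over all documents including those discarded in pre-processing); the names are hypothetical:

```python
def prf(hits, attempted, total):
    """Precision = hits / attempted (queries run over the kept documents),
    recall = hits / total (all documents, including those discarded in
    pre-processing), and F = the harmonic mean of the two. A sketch of
    the evaluation; the paper's exact definitions may differ slightly."""
    p = hits / attempted if attempted else 0.0
    r = hits / total if total else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

This makes the trade-off concrete: discarding documents can raise precision while lowering recall, and F falls whenever the two diverge.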
Although it achieved a very good performance in these experiments, 6.6% of the documents were discarded in the pre-processing. To investigate how the performance changes when abnormal documents (too large or too small) are removed, the query size Size q was set to a constant value (8.0%), which achieves the best precision as shown in Table 4 . We believe that abnormal documents are the main obstacle to improving the performance of the system. Therefore, we removed the documents whose lengths fall outside a certain threshold. Fig. 4 plots the variations of P, R and F as the length scope increases. As expected, the precision increases as more abnormal documents are discarded from the dataset. However, the recall declines sharply, which also causes the F-measure to fall. By the time the precision is close to 100%, nearly 15% of the documents have been removed from the dataset. High precision thus often comes at the cost of a reduced recall rate. The F-measure peaks at only 95%, so it is hard to improve the performance of CLIR using such traditional methods.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 650, "end": 657, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 847, "end": 853, "text": "Fig. 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "TQDL without Filter for CLIR", "sec_num": "5.3" }, { "text": "In order to obtain a higher retrieval rate, our model has been improved in several respects. First, we generate the query with a dynamic size, which better handles documents that are similar in both length and content. In other words, the longer the document, the more words are used to retrieve the target documents. The query size Size q is therefore treated as a hidden variable in our document retrieval model. In addition, all the indexed documents can be filtered with the filter function F in Eq. 
(9), which alleviates the bias toward selecting longer documents when word overlap occurs between shorter and longer documents, because a given source document is searched only within a subset of candidates defined by its length. This improves the precision without discarding any so-called \"abnormal\" documents from the dataset, so the P, R and F values are always the same. Table 5 presents the F values given by TQDL with the length filter model. Comparing the results in Tables 4 and 5 shows that the length filter model yields an improvement of 4.5% in F-measure and achieves a success rate of more than 99% in the case that the desired candidate is ranked in first place. Above all, no documents in the dataset are wasted. Fig. 5 presents an ideal distribution of the evaluation, in which P and R are close to the F line. In this comparison, the query size Size q was still set to a constant value (8.0%). As N increases, the evaluations without the filter remain at a low level, while those with the filter achieve good and stable performance. Finally, the precision and recall values are close to the F-measure, all remaining at a high level (99%-100%).", "cite_spans": [], "ref_spans": [ { "start": 885, "end": 892, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 1274, "end": 1275, "text": "5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "TQDL with Filter for CLIR", "sec_num": "5.4" }, { "text": "This article presents the TQDL statistical approach for CLIR, which has been explored for retrieving both large and similar documents. Different from the traditional parallel corpora-based model, which relies on the IBM algorithm, we divided our CLIR model into four independent parts that work together to handle term disambiguation, query generation and document retrieval. The results showed that this method performs well in CLIR not only for large documents but also for similar documents. 
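As a concrete illustration, the length filter component can be sketched as follows. Here `ratio` (the trained target/source length ratio) and `delta` (the tolerance) are hypothetical names standing in for the parameters estimated from the parallel corpus:

```python
def length_filter(source_length, candidates, ratio, delta):
    """Keep only target documents whose length lies within a window
    around the expected translated length of the source document.
    `candidates` maps document ids to their lengths; `ratio` and
    `delta` are hypothetical stand-ins for the trained parameters."""
    expected = source_length * ratio
    return [doc_id for doc_id, length in candidates.items()
            if expected - delta <= length <= expected + delta]
```

Only the surviving candidates are then scored by the document retrieval model, which is what lets the filter prune a large share of irrelevant documents cheaply.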
This fully illustrates the discriminative power of the proposed method. It is of great significance both to cross-language search on the Internet and to the production of parallel corpora for statistical machine translation systems. In future work, the TQDL system will be evaluated on the Chinese language, which is more challenging and more meaningful for CLIR. We also plan to apply the proposed models to significantly different language pairs such as Portuguese-Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "It can be downloaded from http://www.statmt.org/wmt11/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available online at http://www.statmt.org/europarl/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://lucene.apache.org.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See http://www.statmt.org/wmt09/baseline.html for a detailed description of Moses evaluation options.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially supported by the Research Committee of University of Macau under grant UL019B/09-Y3/EEE/LYP01/FST, and also supported by the Science and Technology Development Fund of Macau under grant 057/2009/A2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical methods for cross-language information retrieval. 
Cross-language information retrieval", "authors": [ { "first": "L", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "23--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ballesteros, L., & Croft, W. B. (1988). Statistical methods for cross-language information retrieval. Cross-language information retrieval, 23-40.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Phrasal translation and query expansion techniques for cross-language information retrieval", "authors": [ { "first": "L", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1997, "venue": "ACM SIGIR Forum", "volume": "", "issue": "SI", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ballesteros, L. & Croft, W. B. (1997). Phrasal translation and query expansion techniques for cross-language information retrieval. ACM SIGIR Forum, 31(SI), 84-91.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Experiments with the eurospider retrieval system for clef 2000. Cross-Language Information Retrieval and Evaluation", "authors": [ { "first": "M", "middle": [], "last": "Braschler", "suffix": "" }, { "first": "P", "middle": [], "last": "Schauble", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "140--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Braschler, M., & Schauble, P. (2001). Experiments with the eurospider retrieval system for clef 2000. 
Cross-Language Information Retrieval and Evaluation, 140-148.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Aligning sentences in parallel corpora", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Lai", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th annual meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "169--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., Lai, J. C. & Mercer, R. L. (1991). Aligning sentences in parallel corpora. In Proceedings of the 29th annual meeting on Association for Computational Linguistics, 169-176.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "S", "middle": [ "A D" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., Pietra, V. J. D., Pietra, S. A. D. & Mercer, R. L. (1993). The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2), 263-311. 
MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bridging the lexical chasm: statistical approaches to answer-finding", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "R", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "D", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "D", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "V", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, A., Caruana, R., Cohn, D., Freitag, D. & Mittal, V. (2000). Bridging the lexical chasm: statistical approaches to answer-finding. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, 192-199.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Quilt: Implementing a large-scale cross-language text retrieval system", "authors": [ { "first": "M", "middle": [ "W" ], "last": "Davis", "suffix": "" }, { "first": "W", "middle": [ "C" ], "last": "Ogden", "suffix": "" } ], "year": 1997, "venue": "ACM SIGIR Forum", "volume": "", "issue": "SI", "pages": "92--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davis, M. W. & Ogden, W. C. (1997). Quilt: Implementing a large-scale cross-language text retrieval system. 
ACM SIGIR Forum, 31(SI), 92-98.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Statistical cross-language information retrieval using n-best query translations", "authors": [ { "first": "M", "middle": [], "last": "Federico", "suffix": "" }, { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "167--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Federico, M. & Bertoldi, N. (2002). Statistical cross-language information retrieval using n-best query translations. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, 167-174.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Ad hoc, cross-language and spoken document information retrieval at IBM", "authors": [ { "first": "M", "middle": [], "last": "Franz", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Mccarley", "suffix": "" }, { "first": "R", "middle": [ "T" ], "last": "Ward", "suffix": "" } ], "year": 1999, "venue": "NIST Special Publication: The 8th Text Retrieval Conference (TREC-8)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz, M., McCarley, J. S. & Ward, R. T. (1999). Ad hoc, cross-language and spoken document information retrieval at IBM. 
NIST Special Publication: The 8th Text Retrieval Conference (TREC-8).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Identifying word correspondences in parallel texts", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "152--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A. & Church, K. W. (1991). Identifying word correspondences in parallel texts. In Proceedings of the workshop on Speech and Natural Language, 152-157.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving query translation for cross-language information retrieval using statistical models", "authors": [ { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "J", "middle": [ "Y" ], "last": "Nie", "suffix": "" }, { "first": "E", "middle": [], "last": "Xun", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "96--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, J., Nie, J. Y., Xun, E., Zhang, J., Zhou, M., & Huang, C. (2001). Improving query translation for cross-language information retrieval using statistical models.In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, 96-104.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Technical issues of cross-language information retrieval: a review. 
Information processing & management", "authors": [ { "first": "K", "middle": [], "last": "Kishida", "suffix": "" } ], "year": 2005, "venue": "", "volume": "41", "issue": "", "pages": "433--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishida, K. (2005). Technical issues of cross-language information retrieval: a review. Information processing & management, 41(3), 433-455.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. MT summit, 5.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "H", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "A", "middle": [], "last": "Birch", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "M", "middle": [], "last": "Federico", "suffix": "" }, { "first": "N", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "B", "middle": [], "last": "Cowan", "suffix": "" } ], "year": 2007, "venue": "Annual meeting-association for computational linguistics", "volume": "45", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., et al. (2007). Moses: Open source toolkit for statistical machine translation. 
Annual meeting-association for computational linguistics, 45(2), 2.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Introduction to information retrieval", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "P", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "1", "issue": "", "pages": "140--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D., Raghavan, P. & Sch\u00fctze, H. (2008). Introduction to information retrieval (Vol. 1). Cambridge University Press Cambridge, 140-159.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Cross-language information retrieval. Annual review of Information science", "authors": [ { "first": "D", "middle": [ "W" ], "last": "Oard", "suffix": "" }, { "first": "A", "middle": [ "R" ], "last": "Diekema", "suffix": "" } ], "year": 1998, "venue": "", "volume": "33", "issue": "", "pages": "223--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oard, D. W., & Diekema, A. R. (1998). Cross-language information retrieval. Annual review of Information science, 33, 223-256.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. & Ney, H. (2002). 
Discriminative training and maximum entropy models for statistical machine translation.In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, 295-302.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. & Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational linguistics, 29(1), 19-51. MIT Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [ "J" ], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T. & Zhu, W. J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, 311-318.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using tf-idf to determine word relevance in document queries", "authors": [ { "first": "J", "middle": [], "last": "Ramos", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the First Instructional Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramos, J. (2003). 
Using tf-idf to determine word relevance in document queries. In Proceedings of the First Instructional Conference on Machine Learning.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Document translation retrieval based on statistical machine translation techniques", "authors": [ { "first": "F", "middle": [], "last": "S\u00e1nchez-Mart\u00ednez", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Carrasco", "suffix": "" } ], "year": 2011, "venue": "Applied Artificial Intelligence", "volume": "25", "issue": "5", "pages": "329--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "S\u00e1nchez-Mart\u00ednez, F. & Carrasco, R. C. (2011). Document translation retrieval based on statistical machine translation techniques. Applied Artificial Intelligence, 25(5), 329-340.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "SRILM-an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the international conference on spoken language processing", "volume": "2", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A. & others. (2002). SRILM-an extensible language modeling toolkit. In Proceedings of the international conference on spoken language processing, 2, 901-904.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Building parallel corpora by automatic title alignment using length-based and text-based approaches. Information processing & management", "authors": [ { "first": "C", "middle": [ "C" ], "last": "Yang", "suffix": "" }, { "first": "K", "middle": [], "last": "Wing Li", "suffix": "" } ], "year": 2004, "venue": "", "volume": "40", "issue": "", "pages": "939--955", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, C. C. & Wing Li, K. (2004). Building parallel corpora by automatic title alignment using length-based and text-based approaches. 
Information processing & management, 40(6), 939-955.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "ls-\u03b4, ls+\u03b4]) The proposed approach for CLIR 20Long-Yue WANG et al." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": ", and is represented as:" }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "The changes of evaluation when removing data" }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "The changes of evaluation with N-BestFig." }, "TABREF0": { "text": "", "num": null, "content": "
Dataset | No. of Sentences | Size of corpus (No. of Characters) | Ave. No. Characters
English | 3,003 | 74,753 | 25
Spanish | 3,003 | 79,426 | 26
", "html": null, "type_str": "table" }, "TABREF1": { "text": "", "num": null, "content": "
Training Set | 2,900 | 1,902,050 | 23,411,545 | 50
TestSet | 23,342 | 80,000 | 7,217,827 | 309
", "html": null, "type_str": "table" }, "TABREF3": { "text": "", "num": null, "content": "", "html": null, "type_str": "table" }, "TABREF4": { "text": "", "num": null, "content": "
Retrieved | Query Size (Size q in %)
Documents (N-Best) | 2 | 4 | 8 | 10 | 14 | 18 | 20
1 | 0.794 | 0.910 | 0.993 | 0.989 | 0.986 | 1.000 | 0.989
5 | 0.921 | 0.964 | 1.000 | 1.000 | 1.000 | 1.000 | 0.996
10 | 0.942 | 0.971 | 1.000 | 1.000 | 1.000 | 1.000 | 0.996
20 | 0.946 | 0.978 | 1.000 | 1.000 | 1.000 | 1.000 | 0.996
", "html": null, "type_str": "table" }, "TABREF5": { "text": "", "num": null, "content": "
Retrieved Documents | Query Size (Size q in %)
(N-Best) | 2.0 | 4.0 | 6.0 | 8.0 | 10.0
1 | 0.905 | 0.943 | 0.942 | 0.947 | 0.941
2 | 0.922 | 0.949 | 0.949 | 0.953 | 0.950
5 | 0.932 | 0.950 | 0.953 | 0.963 | 0.960
10 | 0.936 | 0.954 | 0.960 | 0.968 | 0.971
20 | 0.941 | 0.958 | 0.974 | 0.979 | 0.981
", "html": null, "type_str": "table" }, "TABREF6": { "text": "", "num": null, "content": "
Retrieved Documents | Query Size (Size q in %)
(N-Best) | 2.0 | 4.0 | 6.0 | 8.0 | 10.0
1 | 0.958 | 0.975 | 0.983 | 0.990 | 0.992
2 | 0.967 | 0.979 | 0.986 | 0.993 | 0.996
5 | 0.971 | 0.982 | 0.987 | 0.993 | 0.996
10 | 0.974 | 0.983 | 0.988 | 0.995 | 0.996
20 | 0.974 | 0.983 | 0.990 | 0.995 | 0.996
", "html": null, "type_str": "table" } } } }