{
"paper_id": "I08-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:40:23.397556Z"
},
"title": "A Study on Effectiveness of Syntactic Relationship in Dependence Retrieval Model",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100080",
"settlement": "Beijing",
"country": "China"
}
},
"email": "dingfan@ict.ac.cn"
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "wangbin@ict.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To relax the Term Independence Assumption, Term Dependency is introduced and it has improved retrieval precision dramatically. There are two kinds of term dependencies, one is defined by term proximity, and the other is defined by linguistic dependencies. In this paper, we take a comparative study to reexamine these two kinds of term dependencies in dependence language model framework. Syntactic relationships, derived from a dependency parser, Minipar, are used as linguistic term dependencies. Our study shows: 1) Linguistic dependencies get a better result than term proximity. 2) Dependence retrieval model achieves more improvement in sentence-based verbose queries than keywordbased short queries.",
"pdf_parse": {
"paper_id": "I08-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "To relax the Term Independence Assumption, Term Dependency is introduced and it has improved retrieval precision dramatically. There are two kinds of term dependencies, one is defined by term proximity, and the other is defined by linguistic dependencies. In this paper, we take a comparative study to reexamine these two kinds of term dependencies in dependence language model framework. Syntactic relationships, derived from a dependency parser, Minipar, are used as linguistic term dependencies. Our study shows: 1) Linguistic dependencies get a better result than term proximity. 2) Dependence retrieval model achieves more improvement in sentence-based verbose queries than keywordbased short queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For the sake of computational simplicity, Term Independence Assumption (TIA) is widely used in most retrieval models. It states that terms are statistically independent from each other. Though unreasonable, TIA did not cause very bad performance. However, relaxing the assumption by adding term dependencies into the retrieval model is still a basic IR problem. Relaxing TIA is not easy because improperly relaxing may introduce much noisy information which will hurt the final performance. Defining the term dependency is the first step in dependence retrieval model. Two research directions are taken to define the term dependency. The first is to treat term dependencies as term proximity, for example, the Bi-gram Model (F. Song and W. B. Croft, 1999) and Markov Random Field Model (D. Metzler and W. B. Croft, 2005) in language model. The second direction is to derive term dependencies by using some linguistic structures, such as POS block (Lioma C. and Ounis I., 2007) or Noun/Verb Phrase (Mitra et al., 1997) , Maximum Spanning Tree (C. J. van Rijsbergen, 1979) and Linkage Model (Gao et al., 2004) etc.",
"cite_spans": [
{
"start": 728,
"end": 755,
"text": "Song and W. B. Croft, 1999)",
"ref_id": null
},
{
"start": 767,
"end": 820,
"text": "Random Field Model (D. Metzler and W. B. Croft, 2005)",
"ref_id": null
},
{
"start": 947,
"end": 976,
"text": "(Lioma C. and Ounis I., 2007)",
"ref_id": null
},
{
"start": 997,
"end": 1017,
"text": "(Mitra et al., 1997)",
"ref_id": null
},
{
"start": 1053,
"end": 1070,
"text": "Rijsbergen, 1979)",
"ref_id": null
},
{
"start": 1083,
"end": 1107,
"text": "Model (Gao et al., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though linguistic information is intensively used in QA (Question Answering) and IE (Information Extraction) task, it is seldom used in document retrieval (T. Brants, 2004) . In document retrieval, how effective linguistic dependencies would be compared with term proximity still needs to be explored thoroughly.",
"cite_spans": [
{
"start": 159,
"end": 172,
"text": "Brants, 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use syntactic relationships derived by a popular dependency parser, Minipar (D. Lin, 1998) , as linguistic dependencies. Minipar is a broad-coverage parser for the English language. It represents the grammar as a network of nodes and links, where the nodes represent grammatical categories and the links represent types of dependency. We extract the dependencies between content words as term dependencies.",
"cite_spans": [
{
"start": 94,
"end": 108,
"text": "(D. Lin, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To systematically compare term proximity with syntactic dependencies, we study the dependence retrieval models in language model framework and present a smooth-based dependence language model (SDLM). It can incorporate these two kinds of term dependencies. The experiments in TREC collections show that SDLM with syntactic relationships achieves better result than with the term proximity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 reviews some previous relevant work, Section 3 presents the definition of term dependency using syntactic relationships derived by Minipar. Section 4 presents in detail the smoothbased dependence language model. A series of experiments on TREC collections are presented in Section 5. Some conclusions are summarized in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generally speaking, when using term dependencies in language modeling framework, two problems should be considered: The first is to define and identify term dependencies; the second is to integrate term dependencies into a weighting schema. Accordingly, this section briefly reviews some recent relevant work, which is summarized into two parts: the definition of term dependencies and weight of term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In definition of term dependencies, there are two main methods: shallow parsing by some linguistic tools and term proximity with co-occurrence information. Both queries and documents are represented as a set of terms and term dependencies among terms. Table 1 summarizes some recent related work according to the method they use to identify term dependencies in queries and documents. (Gao et al., 2004) . It introduces a dependency structure, called linkage model. The linkage structure assumes that term dependencies in a sentence form an acyclic, planar graph, where two related terms are linked. LDM (Gao et al., 2005) represents the related terms as linguistic concepts, which can be semantic chunks (e.g. named entities like person name, location name, etc.) and syntactic chunks (e.g. noun phrases, verb phrases, etc.).",
"cite_spans": [
{
"start": 385,
"end": 403,
"text": "(Gao et al., 2004)",
"ref_id": null
},
{
"start": 600,
"end": 622,
"text": "LDM (Gao et al., 2005)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition of Term Dependencies",
"sec_num": "2.1"
},
{
"text": "In the part II of table 1, CULM (M. Srikanth and R. Srihari, 2003) is a concept unigram language model. The parser tree of a user query is used to identify the concepts in the query. Term sequence in a concept is treated as bi-grams in the document model. RP (Recognized Phrase, S. Liu et al., 2004) uses some linguistic tools and statistical tools to recognize four types of phrase in the query, including proper names, dictionary phrase, simple phrase and complex phrase. A phrase is in a document if all its content words appear in the document within a certain window size. The four kinds of phrase correspond to variant window size.",
"cite_spans": [
{
"start": 36,
"end": 66,
"text": "Srikanth and R. Srihari, 2003)",
"ref_id": null
},
{
"start": 259,
"end": 299,
"text": "(Recognized Phrase, S. Liu et al., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document",
"sec_num": null
},
{
"text": "In the part IV of table 1, BG (bi-gram language model) is the simplest model which assumes term dependencies exist only between adjacent words both in queries and documents. WPLM (word pairs in language model, Alvarez et al., 2004) relax the co-occurrence window size in documents to 5 and relax the order constraint in bi-gram model. MRF (Markov Random Field) classify the term dependencies in queries into sequential dependence and full dependence, which respectively corresponds to ordered and unordered co-occurrence within a predefine-sized window in documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document",
"sec_num": null
},
{
"text": "From above discussion we can see that when the query is sentence-based, parsing method is preferred to proximity method. When the query is keyword-based, proximity method is preferred to parsing method. Thorsten (T. Brants, 2004) note: the longer the queries, the bigger the benefit of NLP. This conclusion also holds for the definition of query term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document",
"sec_num": null
},
{
"text": "In dependence retrieval model, the final relevance score of a query and a document consists of both the independence score and dependence score, such as Bahadur Lazarsfeld expansion (R. M. Losee, 1994) in classical probabilistic IR models. However, Spark Jones et al. point out that without a theoretically motivated integration model, documents containing dependencies (e.g. phrases) may be over-scored if they are weighted in the same way as single words (Jones et al., 1998) . Smoothing strategy in language modeling framework provide such an elegant solution to incorporate term dependencies.",
"cite_spans": [
{
"start": 457,
"end": 477,
"text": "(Jones et al., 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "In the simplest bi-gram model, the probability of bi-gram (q i-1 ,q i ) in document D is smoothed by its unigram:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": ") | ( ) | ( ) , | ( , ) , | ( ) 1 ( ) | ( ) , | ( 1 1 1 1 1 D q P D q q P D q q P where D q q P D q P D q q P i i i i i i i i i i smoothed \u2212 \u2212 \u2212 \u2212 \u2212 \u2261 \u00d7 \u2212 + \u00d7 = \u03bb \u03bb (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "Further, the probability of bi-gram (q i-1 ,q i ) in document P(q i |q i-1 ,D) can be smoothed by its probability in collection P(q i |q i-1 ,C). If P(q i |q i-1 ,D) is smoothed as Equation 1, the relevance score of query Q={q 1 q 2 \u2026q m } and document D is: (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": ") | ( ) | ( ) | ( log ) | , ( , ) | , ( ) | ( log ) ) | ( ) | ( ) | ( 1 1 log( ) | ( log ) ) | ( ) , | ( ) 1 ( log( ) | ( log )) , | ( ) 1 ( ) | ( log( ) | ( log ) , | ( log ) | ( log ) | ( log 1 1 1 ... 2 1 ... 1 ... 2 1 1 ... 1 ... 2 1 ... 1 ... 2 1 1 ... 2 1 1 D q P D q P D q q P D q q MI usually D q q MI D q P D q P D q P D q q P D q P D q P D q q P D q P D q q P D q P D q P D q q P D q P D Q P i i i i i i m i i i smoothed m i i m i i i i i m i i m i i i i m i i m i i i i m i i i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "In Equation 2, the first score term is independence unigram score and the second score term is smoothed dependence score. Usually \u03bb is set to 0.9, i.e., the dependence score is given a less weight compared with the independence score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
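To make the interpolation in Equations (1) and (2) concrete, here is a minimal Python sketch of the smoothed bi-gram document score. The tokenization, the toy document, and the guard against zero probabilities are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def smoothed_bigram_score(query_terms, doc_terms, lam=0.9):
    """Interpolated bi-gram score in the spirit of Equations (1)-(2).

    lam weights the unigram (independence) part and (1 - lam) the bi-gram
    (dependence) part, matching the paper's typical setting lam = 0.9.
    """
    n = len(doc_terms)
    unigram = Counter(doc_terms)
    bigram = Counter(zip(doc_terms, doc_terms[1:]))

    def p_uni(w):                         # P(w | D), unsmoothed for brevity
        return unigram[w] / n if n else 0.0

    def p_bi(prev, w):                    # P(w | prev, D) = P(prev w | D) / P(prev | D)
        return bigram[(prev, w)] / unigram[prev] if unigram[prev] else 0.0

    score = math.log(p_uni(query_terms[0]) or 1e-12)
    for prev, cur in zip(query_terms, query_terms[1:]):
        mixed = lam * p_uni(cur) + (1 - lam) * p_bi(prev, cur)    # Equation (1)
        score += math.log(mixed or 1e-12)
    return score

doc = "ethnic makeup of the us population is changing".split()
print(smoothed_bigram_score("ethnic makeup population".split(), doc))
```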
{
"text": "DM (Gao et al., 2004) , which can be regarded as the generalization of the bi-gram model, gives the relevance score of a document as:",
"cite_spans": [
{
"start": 3,
"end": 21,
"text": "(Gao et al., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "\u2211 \u2211 \u2208 = + + = L j i j i m i i D L q q MI D L P D q P D Q P ) , ( ... 1 ) , | , ( ) | ( log ) | ( log ) | ( log (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "In Equation (3),L is the set of term dependencies in query Q. The score function consists of three parts: a unigram score, a smoothing factor logP(L|D), and a dependence score MI(q i ,q j |L,D).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "MRF (D. Metzler and W. B. Croft, 2005) combines the score of full independence, sequential dependence and full dependence in an interpolated way with the weight (0.8, 0.1, 0.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "Though these above models are derived from different theories, smoothing is an important part when incorporating term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Term Dependencies",
"sec_num": "2.2"
},
{
"text": "Term dependencies defined as term proximity may contain many \"noisy\" dependencies. It's our belief that parsing technique can filter out some of these noises and syntactic relationship is a clue to define parser, Minipar, to extract the syntactic dependency between words. In this section we will discuss the extraction of syntactic dependencies and the indexing schemes of term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Parsing of Queries and Documents",
"sec_num": "3"
},
{
"text": "term dependencies. We use a popular dependency s ary an des in the parsing result are single w A dependency relationship is an asymmetric bin relationship between a word called head (or governor, parent), and another word called modifier (or dependent, daughter). Dependency grammars represent sentence structures as a set of dependency relationships. For example, Figure 1 takes the description field of TREC topic 651 as an example and shows part of the parsing result of Minipar.",
"cite_spans": [],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extraction of Syntactic Dependencie",
"sec_num": "3.1"
},
{
"text": "In Figure 1 , Cat is the lexical category of word, d Rel is a label assigned to the syntactic dependencies, such as subject (sub), object (obj), adjunct (mod:A), prepositional attachment (Prep:pcomp-n), etc. Since function words have no meaning, the dependency relationships including function words, such as N:det:Det, are ignored. Only the dependency relationships between content words are extracted. However, prepositional attachment is an exception. A prepositional noun phrase contains two parts: (N:mod:Prep) and (Prep:pcomp-n:N). We combine these two parts and get a relationship between nouns.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extraction of Syntactic Dependencie",
"sec_num": "3.1"
},
{
"text": "Mostly, the no ords. When the nodes are proper names, dictionary phrases, or compound words connected by hyphen, there are more than one word in the node. For example, the 5 th and 6 th relationship in Figure 1 describes a compound word \"make up\". We divide these nodes into bi-grams, which assume dependencies exist between adjacent words inside the nodes. If the compound-word node has a relationship with other nodes, each word in the compoundword node is assumed to have a relationship with the other nodes. Finally, the term dependencies are represented as word pairs. The direction of syntactic dependencies is ignored.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 210,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extraction of Syntactic Dependencie",
"sec_num": "3.1"
},
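The extraction rules of Section 3.1 can be sketched roughly as below. The triple format, the relation labels, and the content-word test are simplified placeholders and do not reproduce the actual Minipar output or tag set.

```python
# A rough sketch of the extraction rules, assuming Minipar-like triples
# (head, relation, modifier) plus a lexical-category lookup. Labels are
# illustrative placeholders, not the real Minipar inventory.
FUNCTION_RELS = {"det", "aux", "punc"}            # relations involving function words
CONTENT_CATS = {"N", "V", "A", "Adv"}             # crude content-word categories (assumed)

def extract_term_dependencies(triples, cats):
    """Return undirected content-word dependency pairs from parse triples."""
    pairs = set()
    for head, rel, mod in triples:
        if rel in FUNCTION_RELS:
            continue
        if cats.get(head) in CONTENT_CATS and cats.get(mod) in CONTENT_CATS:
            pairs.add(frozenset((head, mod)))     # direction is ignored
    # prepositional attachment: N -mod-> Prep -pcomp-n-> N collapses to a noun-noun pair
    prep_head = {mod: head for head, rel, mod in triples
                 if rel == "mod" and cats.get(mod) == "Prep"}
    for head, rel, mod in triples:
        if rel == "pcomp-n" and head in prep_head:
            pairs.add(frozenset((prep_head[head], mod)))
    return pairs

cats = {"makeup": "N", "ethnic": "A", "of": "Prep", "population": "N", "the": "Det"}
triples = [("makeup", "mod", "ethnic"), ("makeup", "det", "the"),
           ("makeup", "mod", "of"), ("of", "pcomp-n", "population")]
print(extract_term_dependencies(triples, cats))
# compound-word nodes (e.g. "make up") would additionally be split into adjacent bi-grams
```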
{
"text": "Parsing is a time-consuming process. documents parsing should be an off-line process. The parsing results, recognized as term dependencies, should be organized efficiently to support the computation of relevance score at the retrieval step. As a supplement of regular documents\u2194words inverted index, the indexing of term dependencies is organized as documents\u2192dependencies lists. For example, Document A has n unique words; each of these n words has relationships with at least one other word. Then the term dependencies inside these n words can be represented as a halfangle matrix as Figure 2 shows. The (i,j)-th element of the matrix is the number times that tid i and tid j have a dependency in document A. The matrix has the size of (n-1)*n/2 and it is stored as list of size (n-1)*n/2. Each document corresponds to such a matrix. When accessing the term dependencies index, the global word id in the regular index is firstly converted to the internal id according to the word's appearance order in the document. The internal id is the index of the half-angle matrix. Using the internal id pair, we can get its position in the matrix.",
"cite_spans": [],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Indexing of Term Dependencies",
"sec_num": "3.2"
},
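A minimal sketch of the flattened triangular storage described above, assuming 0-based internal ids assigned in order of first appearance; the class and method names are illustrative.

```python
# Flattened lower-triangular ("half-angle") matrix of dependency counts for one
# document, as in Figure 2. The global-id -> internal-id conversion is assumed
# to happen elsewhere, against the regular inverted index.
class DependencyMatrix:
    def __init__(self, n_unique_words):
        self.n = n_unique_words
        self.cells = [0] * (self.n * (self.n - 1) // 2)   # (n-1)*n/2 cells

    def _pos(self, i, j):
        i, j = (j, i) if i < j else (i, j)                 # order so that i > j
        return i * (i - 1) // 2 + j                        # row offset plus column

    def add(self, i, j):
        self.cells[self._pos(i, j)] += 1                   # one more dependency between i and j

    def count(self, i, j):
        return self.cells[self._pos(i, j)]

m = DependencyMatrix(4)
m.add(0, 2); m.add(2, 0); m.add(1, 3)
print(m.count(0, 2), m.count(3, 1))                        # -> 2 1
```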
{
"text": "From the discussion in section 2.2, we can se smoothing is very important not only in unigram language model, but also in dependence language model. Taking the smoothed unigram model (C. Zhai and J. Lafferty, 2001) as the example, the re-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smooth-based Dependence Model",
"sec_num": "4"
},
{
"text": "D D Q w D DML UG Q C w p D w p Q w c D Q RSV \u03b1 \u03b1 log | | ) | ( ) | ( log ) , ( ) , ( + = \u2211 \u2229 \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smooth-based Dependence Model",
"sec_num": "4"
},
{
"text": "In Equation (4), c(w,Q) is the frequency of w Q. The equation has three parts: P DML (w|D), \u03b1 D and P( (4) in w|C). P DML (w|D) is the discounted maximum likelihood estimation of unigram P(w|D), \u03b1 D is the smoothing coefficient of document D, and P(w|C) is collection language model. If we use a smoothing strategy as the smoothed MI in Equation 2, and replace term w with term pair (w i ,w j ), we can get the smoothed dependence model as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smooth-based Dependence Model",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 \u2229 \u2208 \u00d7 + = D L w w j i smooth j j i j i C w w p D Q w w c ) , ( 0 ) ) | , ( ) | 1 log( ) , , ( \u03bb",
"eq_num": "(5)"
}
],
"section": "Smooth-based Dependence Model",
"sec_num": "4"
},
{
"text": "In Equation 5, \u03bb 0 is the smoothing coefficient. P sm (w ,w |D) and P sm (w ,w |C) is the smoothed w ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smooth-based Dependence Model",
"sec_num": "4"
},
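As a concrete reading of the unigram component RSV_UG in Equation (4), the sketch below uses Dirichlet-prior smoothing with α_D = μ/(|D| + μ) and P_DML(w|D) = c(w,D)/(|D| + μ). The toy collection statistics and function names are assumptions for illustration, not the authors' implementation.

```python
import math
from collections import Counter

def rsv_unigram(query_terms, doc_terms, coll_prob, mu=2000.0):
    """Dirichlet-smoothed unigram RSV in the spirit of Equation (4).

    coll_prob maps a word to P(w|C); mu is the Dirichlet prior (2000 in the paper).
    """
    d_len = len(doc_terms)
    tf = Counter(doc_terms)
    alpha_d = mu / (d_len + mu)                      # smoothing coefficient of the document
    score = 0.0
    for w, c_wq in Counter(query_terms).items():
        if tf[w] == 0 or w not in coll_prob:
            continue                                 # only words seen in D contribute here
        p_dml = tf[w] / (d_len + mu)                 # discounted maximum likelihood estimate
        score += c_wq * math.log(p_dml / (alpha_d * coll_prob[w]))
    return score + len(query_terms) * math.log(alpha_d)

coll_prob = {"ethnic": 0.001, "makeup": 0.0005, "population": 0.002}
doc = "ethnic makeup of the us population".split()
print(rsv_unigram("ethnic population".split(), doc, coll_prob))
```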
{
"text": "We use two parts to estimat one is the weight of the term ips in D, P(w i ,w j |R,D), the other is the weight of the term co-occurrence in D, P co (w i ,w j |D). These two parts are defined as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w ,w |D)",
"sec_num": "4.1"
},
{
"text": "| D | )/ (w C D) | P(w D) | P(w D) | P(w D) | w , (w P | D | R)/ , w , (w C D) R, | w , P(w j i D j i \u00d7 = tid 1 tid 2 \u2026 tid n-1 tid n tid 1 tid 2 \u2026 tid n-1 tid n \u23aa \u23aa \u23aa \u23ad \u23aa \u23aa \u23aa \u23ac \u23ab \u23aa \u23aa \u23aa \u23a9 \u23aa \u23aa \u23aa \u23a8 \u23a7 0 * * * * 1 0 ... * * 0 3 ... * * 4 5 .. 0 * 2 0 ... 1 0 i D i j i j i CO = = (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w ,w |D)",
"sec_num": "4.1"
},
{
"text": "|D| is the document length, C D (w i ,w j ,R) denotes the count of the dependency (w i ,w ) in the docum j i co 1 j ent D, and C D (w i ) is the frequency of word w i in D. P smooth (w i ,w j |D) is defined as a combination of the two parts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w ,w |D)",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P ) \u03bb - (1 D) R, | w , P(w \u03bb D) | w , (w P j i 1 j i smooth \u00d7 + D) | w , (w \u00d7 =",
"eq_num": "(7)"
}
],
"section": "Smoothing P(w ,w |D)",
"sec_num": "4.1"
},
{
"text": "Figure 2. Half-angle matrix of term dependencies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w ,w |D)",
"sec_num": "4.1"
},
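A small sketch of the document-side smoothing in Equations (6) and (7). The dependency counts C_D(w_i, w_j, R) are assumed to come either from the parser or from a proximity window, and the names and toy data are illustrative.

```python
from collections import Counter

def p_smooth_doc(wi, wj, dep_counts, doc_terms, lam1=0.5):
    """Equations (6)-(7): mix the dependency estimate with the co-occurrence estimate.

    dep_counts is a Counter over unordered word pairs, i.e. C_D(w_i, w_j, R).
    """
    d_len = len(doc_terms)
    tf = Counter(doc_terms)
    p_rel = dep_counts[frozenset((wi, wj))] / d_len            # P(w_i, w_j | R, D)
    p_co = (tf[wi] / d_len) * (tf[wj] / d_len)                 # P_co(w_i, w_j | D)
    return lam1 * p_rel + (1.0 - lam1) * p_co                  # Equation (7)

doc = "ethnic makeup of the us population is changing".split()
deps = Counter({frozenset(("ethnic", "makeup")): 1, frozenset(("makeup", "population")): 1})
print(p_smooth_doc("ethnic", "makeup", deps, doc))
```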
{
"text": "bability of term pair . We use docum To directly estimate the pro (w ,w ) in the collection is not easy i j ent frequency of term pair (w i ,w j ) as its approximation. Same as P smooth (w i ,w j |D), P smooth (w i ,w j |C) consists of two parts: one is the document frequency of term pair (w i ,w j ), DF(w i ,w j ), the other is the averaged document frequency of w i and w j . Then, P smooth (w i ,w j |C) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D j i D j i j i smooth C w DF w DF C w w DF C w w P | | ) ( ) ( ) 1 ( | | ) , ( ) | , ( 2 \u00d7 \u00d7 \u2212 + 2 \u00d7 = \u03bb \u03bb",
"eq_num": "(8)"
}
],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
{
"text": "In Equation 8, |C| D is the count of Document in Collection C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
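A corresponding sketch for the collection-side estimate of Equation (8). The exact form of the back-off term is reconstructed from a garbled equation, so treat it as an approximation; the document-frequency numbers are toy values.

```python
def p_smooth_coll(wi, wj, pair_df, word_df, n_docs, lam2=0.5):
    """Equation (8): pair document frequency backed off to individual document frequencies."""
    p_pair = pair_df.get(frozenset((wi, wj)), 0) / n_docs              # DF(w_i, w_j) / |C|_D
    p_back = (word_df.get(wi, 0) / n_docs) * (word_df.get(wj, 0) / n_docs)
    return lam2 * p_pair + (1.0 - lam2) * p_back

pair_df = {frozenset(("ethnic", "makeup")): 120}
word_df = {"ethnic": 4000, "makeup": 2500}
print(p_smooth_coll("ethnic", "makeup", pair_df, word_df, n_docs=500000))
```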
{
"text": "Finally, if substituting Equation 7and (8) into Eq ). The final retrieval status value of th s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") , ( ) , ( ) , ( D Q RSV D Q RSV D Q RSV UG DEP SDLM + =",
"eq_num": "(9)"
}
],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
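Putting the pieces together, the sketch below scores the dependence part as in Equation (5) and adds the unigram score as in Equation (9). It reuses the illustrative helpers sketched earlier (p_smooth_doc, p_smooth_coll, rsv_unigram); none of this is the authors' code.

```python
import math

def rsv_dep(query_pairs, dep_counts, doc_terms, pair_df, word_df, n_docs, lam0=1.0):
    """Equation (5): smoothed-MI style score over query term pairs found in the document."""
    score = 0.0
    for (wi, wj), c_q in query_pairs.items():
        p_d = p_smooth_doc(wi, wj, dep_counts, doc_terms)        # Equation (7)
        p_c = p_smooth_coll(wi, wj, pair_df, word_df, n_docs)    # Equation (8)
        if p_d > 0 and p_c > 0:
            score += c_q * math.log(1.0 + lam0 * p_d / p_c)
    return score

def rsv_sdlm(query_terms, query_pairs, doc_terms, dep_counts,
             coll_prob, pair_df, word_df, n_docs):
    """Equation (9): the final score is the sum of the dependence and unigram scores."""
    return (rsv_dep(query_pairs, dep_counts, doc_terms, pair_df, word_df, n_docs)
            + rsv_unigram(query_terms, doc_terms, coll_prob))
```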
{
"text": "5 Experiments and Result pendencies is more effective than we systematically compared their performance on two kinds of queries. One is verbose queries (the description field of TREC topics), the other is short queries (the title field of TREC topics). Since the verbose queries are sentence-level, they are parsed by Minipar to get the syntactic dependencies. In short queries, term proximity is used to define the dependencies, which assume every two words in the queries have a dependency. Our smooth-based dependence language model (SDLM) is used as dependence retrieval model in the experiments. If defining C D (w i j n (6) to different meanings, we can get a dependence model with syntactic dependencie, SDLM_Syn, or a dependence model with term proximity, SDLM_Prox. In SDLM_Syn, C D (w i ,w j ,R) is the count of syntactic dependencies between w i and w j in D. In SDLM_Prox, C D (w i ,w j ,R) is the number of times the terms w i and w j appear within a window N terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
{
"text": "We use Dirichlet-Prior smoothed KL-Divergence model as the unigram model in Equation (9). The Dirichlet-Prior smoothing set to 2000. This unigram model, UG, is also the baseline in the experiments. The main evaluation metric in this study is the non-interpolated average precision (AvgPr.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
{
"text": "We evaluated the smooth-based dependence language model in two document collections and four query collectio tions are shown in Table 2 . Three retrieval models are evaluated in the TREC collections: UG, SDLM_Syn and SDLM_Prox. Besides the pa 0 1 2 LM_Prox has one more parameter than SDLM_Syn. It is the window size N of C D (w i ,w j ,R). In the experiments, we tried the window size N of 5, 10, 20 and 40 to find the optimal setting. We find the optimal N is 10. This size is close to sentence length and it is used in the following experiments. Table 2 . TREC collections eter ,\u03bb 2 ) were trained on three query se 700. Each query set was divided into two halves, and we applied twofo Param s (\u03bb 0 ,\u03bb 1 ts: 51-200, 351-450 and 651ld cross validation to get the final result. We trained (\u03bb 0 ,\u03bb 1 ,\u03bb 2 ) by directly maximizing MAP (mean average precision). Since the parameter range was limited, we used a linear search method at step 0.1 to find the optimal setting of (\u03bb 0 ,\u03bb 1 ,\u03bb 2 ). Table 3 and Table 4 respectively. The settings of (\u03bb 0 ,\u03bb 1 ,\u03bb 2 ) used in the experiments are al improvement over UG and SDLM_Syn has robust improvement over SDLM_Prox. In short queries, SDLM has slight improvement over UG and SDLM_Syn is comparative with SDLM_Prox.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 2",
"ref_id": null
},
{
"start": 549,
"end": 556,
"text": "Table 2",
"ref_id": null
},
{
"start": 991,
"end": 998,
"text": "Table 3",
"ref_id": null
},
{
"start": 1003,
"end": 1010,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Smoothing P(w i ,w j |C)",
"sec_num": "4.2"
},
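The parameter training described above can be sketched as a simple grid search with two-fold cross validation. The evaluate_map callback and the fake evaluator below are placeholders for an actual retrieval run and MAP computation.

```python
import itertools

def train_lambdas(query_ids, evaluate_map, step=0.1):
    """Linear grid search over (lambda0, lambda1, lambda2) with two-fold cross validation.

    evaluate_map(lambdas, queries) -> MAP is assumed to run retrieval and score it.
    """
    grid = [round(step * k, 1) for k in range(int(round(1 / step)) + 1)]   # 0.0, 0.1, ..., 1.0
    half = len(query_ids) // 2
    folds = [(query_ids[:half], query_ids[half:]),
             (query_ids[half:], query_ids[:half])]
    test_maps = []
    for train_q, test_q in folds:
        best = max(itertools.product(grid, repeat=3),
                   key=lambda lams: evaluate_map(lams, train_q))   # maximize MAP on the training half
        test_maps.append(evaluate_map(best, test_q))
    return sum(test_maps) / len(test_maps)

# toy usage with a fake evaluator standing in for an actual retrieval run
fake_map = lambda lams, qs: -abs(lams[0] - 0.3) - abs(lams[1] - 0.5) - abs(lams[2] - 0.2)
print(train_lambdas(list(range(651, 701)), fake_map))
```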
{
"text": "To study the effectiveness of syntactic dependencies in detail, Figure 3 and 4 compare SDLM_Syn and UG, SDLM_Syn and SDLM_Prox topic by topic in verbose queries.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Coll",
"sec_num": null
},
{
"text": "As shown in Figure 3 and Figure 4 , SDLM_Syn achieves substantial improvements over UG in the majority of queries. While SDLM_Syn is comparative with SDLM_Prox in most of the queries, SDLM_Syn still get some noticeable improvements over SDLM_Prox.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 25,
"end": 33,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Coll",
"sec_num": null
},
{
"text": "From Table 3 and 4, we can see while the parameters (\u03bb 0 ,\u03bb 1 ,\u03bb 2 ) change a lot in two different document collections, there is little change in the same document collection. This shows the robustness of our smooth-based dependence language model.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coll",
"sec_num": null
},
{
"text": "In this paper we have systematically studied the effectiveness of syntactic dependencies compared with term proximity in dependence retrieval model. To compare the effectiveness of syntactic dependencies and term proximity, we develop a smooth-based dependence language model that can incorporate different term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Experiments on four TREC collections indicate the effectiveness of syntactic dependencies: In verbose queries, the improvement of syntactic dependencies over term proximity is noticeable; In short queries, the improvement is not noticeable. For keywords-based short queries with average length of 2-3 words, the term dependencies in the queries are very few. So the improvement of dependence retrieval model over independence unigram model is very limited. Meanwhile, the difference between syntactic dependencies and term proximity is not noticeable. For dependence retrieval model, we can get the same conclusion as Thorsten Brants: the longer the queries are, the bigger the benefit of NLP is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": ": \"How is the ethnic makeup of the U.S. population changing?\""
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "pair (w i ,w j ) in document D and collection C."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "UG vs. SDLM_Syn in verbose queries: Top Left (51-200), Top Right (351-450), Bottom Left (hard topics in 351-450), and Bottom Right"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "verbose queries and short queries are listed in so listed. A star mark after the change percent value indicates a statistical significant difference at the 0.05 level(one-sided Wilcoxon test"
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "SDLM_Prox vs. SDLM_Syn in verbose queries: Top Left (51-200), Top Right (351-450), Bottom Left (hard topics in 351-450), Bottom Right (651-700)"
}
}
}
}