{ "paper_id": "I13-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:14:36.472743Z" }, "title": "Precise Information Retrieval Exploiting Predicate-Argument Structures", "authors": [ { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyoto University", "location": {} }, "email": "" }, { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rakuten Institute of Technology", "location": {} }, "email": "keiji.shinzato@mail.rakuten.com" }, { "first": "Tomohide", "middle": [], "last": "Shibata", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyoto University", "location": {} }, "email": "shibata@i.kyoto-u.ac.jp" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kyoto University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A concept can be linguistically expressed in various syntactic constructions. Such syntactic variations spoil the effectiveness of incorporating dependencies between words into information retrieval systems. This paper presents an information retrieval method for normalizing syntactic variations via predicate-argument structures. We conduct experiments on standard test collections and show the effectiveness of our approach. Our proposed method significantly outperforms a baseline method based on word dependencies.", "pdf_parse": { "paper_id": "I13-1005", "_pdf_hash": "", "abstract": [ { "text": "A concept can be linguistically expressed in various syntactic constructions. Such syntactic variations spoil the effectiveness of incorporating dependencies between words into information retrieval systems. This paper presents an information retrieval method for normalizing syntactic variations via predicate-argument structures. 
We conduct experiments on standard test collections and show the effectiveness of our approach. Our proposed method significantly outperforms a baseline method based on word dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Most conventional approaches to information retrieval (IR) deal with words as independent terms. In query sentences 1 and documents, however, dependencies exist between words. 2 To capture these dependencies, some extended IR models have been proposed in the last decade (Jones, 1999; Lee et al., 2006; Song et al., 2008; Shinzato et al., 2008) . These models, however, did not achieve consistent significant improvements over models based on independent words.", "cite_spans": [ { "start": 176, "end": 177, "text": "2", "ref_id": null }, { "start": 271, "end": 284, "text": "(Jones, 1999;", "ref_id": "BIBREF8" }, { "start": 285, "end": 302, "text": "Lee et al., 2006;", "ref_id": "BIBREF10" }, { "start": 303, "end": 321, "text": "Song et al., 2008;", "ref_id": "BIBREF22" }, { "start": 322, "end": 344, "text": "Shinzato et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the reasons for this is the linguistic variations of syntax, that is, languages are syntactically expressed in various ways. For instance, the same or similar meaning can be expressed using the passive voice or the active voice in a sentence. Previous approaches based on dependencies cannot identify such variations. 
This is because they use the output of a dependency parser, which generates syntactic (grammatical) dependencies built upon surface word sequences. (Footnote 1: In this paper, we handle queries written in natural language.)", "cite_spans": [ { "start": 444, "end": 445, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(Footnote 2: While dependencies between words are sometimes considered to be the co-occurrence of words in a sentence, in this paper we consider dependencies to be syntactic or semantic dependencies between words.) Consider, for example, the following sentence in a document:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) YouTube was acquired by Google.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dependency parsers based on the Penn Treebank and the head percolation table (Collins, 1999) judge the head of \"YouTube\" as \"was\" (\"YouTube\u2190was\"; hereafter, we denote a dependency by \"modifier\u2190head\"). This dependency, however, cannot be matched with the dependency \"YouTube\u2190acquire\" in a query like:", "cite_spans": [ { "start": 77, "end": 92, "text": "(Collins, 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) I want to know the details of the news that Google acquired YouTube.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, even if a dependency link in a query matches that in a document, a mismatch of dependency type can cause another problem. 
This is because previous models did not distinguish dependency types. For example, the dependency \"YouTube\u2190acquire\" in query sentence (2) can be found in the following irrelevant document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) Google acquired PushLife for $25M ... YouTube acquired Green Parrot Pictures ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While this document does indeed contain the dependency \"YouTube\u2190acquire,\" its type is different; specifically, the query dependency is accusative while the document dependency is nominative. That is to say, ignoring differences in dependency types can lead to inaccurate information retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose an IR method that does not use syntactic dependencies, but rather predicate-argument structures, which are normalized forms of sentence meanings. For example, query sentence (2) is interpreted as the following predicate-argument structure (hereafter, we denote a predicate-argument structure by \u27e8\u2022 \u2022 \u2022 \u27e9): 3 (4) \u27e8NOM:Google acquire ACC:YouTube\u27e9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) is also represented as the same predicate-argument structure, and documents including this sentence can be regarded as relevant documents. Conversely, the irrelevant document (3) has different predicate-argument structures from (4), as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence", "sec_num": null }, { "text": "(5) a. \u27e8NOM:Google acquire ACC:PushLife\u27e9, b. 
\u27e8NOM:YouTube acquire ACC:Green Parrot Pictures\u27e9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence", "sec_num": null }, { "text": "In this way, by considering this kind of predicate-argument structure, more precise information retrieval is possible. We mainly evaluate our proposed method using the NTCIR test collection, which consists of approximately 11 million Japanese web documents. We also conduct an experiment on the TREC Robust 2004 test collection, which consists of around half a million English documents, to validate the applicability of our approach to languages other than Japanese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence", "sec_num": null }, { "text": "This paper is organized as follows. Section 2 introduces related work, and section 3 describes our proposed method. Section 4 presents the experimental results and discussion. Section 5 presents our conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence", "sec_num": null }, { "text": "There have been two streams of related work that consider dependencies between words in a query sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "One stream is based on linguistically-motivated approaches that exploit natural language analysis to identify dependencies between words. For example, Jones proposed an information retrieval method that exploits linguistically-motivated analysis, especially dependency relations (Jones, 1999) . However, Jones noted that dependency relations did not contribute to significantly improving performance due to the low accuracy and robustness of syntactic parsers. 
Subsequently, both the accuracy and robustness of dependency parsers were dramatically improved (Nivre and Scholz, 2004; McDonald et al., 2005) , with such parsers being applied more recently to information retrieval (Lee et al., 2006; Song et al., 2008; Shinzato et al., 2008) . (Footnote 3: The dependency types used in this paper are NOM (nominative), ACC (accusative), DAT (dative), ALL (allative), GEN (genitive), CMI (comitative), LOC (locative), ABL (ablative), CMP (comparative), DEL (delimitative) and TOP (topic marker).) For example, Shinzato et al. investigated the use of the syntactic dependencies output by a dependency parser and reported a slight improvement over a baseline method that used only words. However, the use of dependency parsers still introduces the problems stated in the previous section because such parsers handle only syntactic dependencies.", "cite_spans": [ { "start": 279, "end": 292, "text": "(Jones, 1999)", "ref_id": "BIBREF8" }, { "start": 557, "end": 581, "text": "(Nivre and Scholz, 2004;", "ref_id": "BIBREF15" }, { "start": 582, "end": 604, "text": "McDonald et al., 2005)", "ref_id": "BIBREF11" }, { "start": 678, "end": 696, "text": "(Lee et al., 2006;", "ref_id": "BIBREF10" }, { "start": 697, "end": 714, "text": "Song et al., 2008", "ref_id": "BIBREF22" }, { "start": 916, "end": 934, "text": "Shinzato et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The second stream of research has attempted to integrate dependencies between words into information retrieval models. These models include a dependence language model (Gao et al., 2004) , a Markov Random Field model (Metzler and Croft, 2005) , and a quasi-synchronous dependence model (Park et al., 2011) . However, they focus on integrating term dependencies into their respective models without explicitly considering any syntactic or semantic structures in language. Therefore, the purpose of these studies can be considered different from ours. 
Park and Croft (2010) proposed a method that exploits the typed dependencies obtained from analyzing query sentences in order to rank query terms and select the most effective ones. They did not, however, use typed dependencies for indexing documents.", "cite_spans": [ { "start": 168, "end": 186, "text": "(Gao et al., 2004)", "ref_id": "BIBREF6" }, { "start": 217, "end": 242, "text": "(Metzler and Croft, 2005)", "ref_id": "BIBREF12" }, { "start": 286, "end": 305, "text": "(Park et al., 2011)", "ref_id": "BIBREF18" }, { "start": 550, "end": 571, "text": "Park and Croft (2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The work that is closest to our present work is that of Miyao et al. (2006) , who proposed a method for the semantic retrieval of relational concepts in the domain of biomedicine. They retrieved sentences that match a given query using predicate-argument structures via a framework of region algebra. That is, they approached the task of sentence matching, which is not the same as document retrieval (or ranking). As for the types of queries they used, although their method could handle natural language queries, they used short queries like \"TNF activate IL6.\" Because of the heavy computational load of region algebra, if a query matches several thousand sentences, for example, then it requires several thousand seconds to return all sentence matches (though it takes on average 0.01 seconds to return the first matched sentence).", "cite_spans": [ { "start": 56, "end": 75, "text": "Miyao et al. (2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In the area of question answering, predicate-argument structures have been used to precisely match a query with a passage in a document (e.g., (Narayanan and Harabagiu, 2004; Shen and Lapata, 2007; Bilotti et al., 2010) ). 
However, the candidate documents from which an answer is extracted are retrieved using conventional search engines that do not exploit predicate-argument structures.", "cite_spans": [ { "start": 142, "end": 173, "text": "(Narayanan and Harabagiu, 2004;", "ref_id": "BIBREF14" }, { "start": 174, "end": 196, "text": "Shen and Lapata, 2007;", "ref_id": "BIBREF20" }, { "start": 197, "end": 218, "text": "Bilotti et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "3 Information retrieval exploiting predicate-argument structures", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Our key idea is to exploit the normalization of linguistic expressions based on their predicate-argument structures to improve information retrieval. The process of information retrieval systems can be decomposed into offline processing and online processing. During offline processing, analysis is first applied to a document collection. For example, typical analyses for English include tokenization and stemming, while those for Japanese include morphological analysis. In addition, previous models using the dependencies between words also used dependency parsing. In this paper, we employ predicate-argument structure analysis, which is detailed in the next subsection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "Following the initial analysis, indexing is performed to produce an inverted index. In most cases, words are indexed as terms, but several previous approaches have also indexed dependencies between words as terms (e.g., (Shinzato et al., 2008) ). In our study, however, we do not use syntactic dependencies directly, but rather consider predicate-argument structures. To bring this predicate-argument structure information into the index, we handle predicate-argument structures as a set of typed semantic dependencies. 
Dependency types are expressed as term features, that is, additional information attached to each term, such as the list of positions at which the term occurs.", "cite_spans": [ { "start": 220, "end": 243, "text": "(Shinzato et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "As for online processing, we first apply the predicate-argument structure analysis to a query sentence, and then create terms including words and typed semantic dependencies extracted from the predicate-argument structures. Then, we search the inverted index for documents containing these terms, and finally rank these documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "In the following subsections, we describe in more detail the procedures of predicate-argument structure analysis, indexing, query processing, and document ranking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "We apply predicate-argument structure analysis to both queries and documents. 
Predicate-argument structure analysis normalizes the following linguistic expressions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "\u2022 relative clause", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "\u2022 passive voice (the predicate is normalized to active voice)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "\u2022 causative (the predicate is normalized to normal form)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "\u2022 intransitive (the predicate is normalized to transitive)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "\u2022 giving and receiving expressions (the predicate is normalized to a giving expression)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "In the case of Japanese, we use the morphological analyzer JUMAN, 4 and the predicate-argument structure analyzer KNP (Kawahara and Kurohashi, 2006) . 5 The accuracy of the syntactic dependencies output by KNP is around 89% and that of predicate-argument relations is around 81% on web sentences. Examples of this predicate-argument structure analysis are shown in Figures 1 and 2. Figure 1 shows an example of relative clause normalization by predicate-argument structure analysis. The syntactic dependencies of the two sentences are different, but this difference is resolved by using predicate-argument structures. Figure 2 shows an example of intransitive verb normalization by predicate-argument structure analysis. 
In this example, the syntactic dependencies are the same, but different verbs are used. 6 The analyzer canonicalizes the intransitive verb to its corresponding transitive verb, and also produces the same predicate-argument structure for the two sentences.", "cite_spans": [ { "start": 117, "end": 147, "text": "(Kawahara and Kurohashi, 2006)", "ref_id": "BIBREF9" }, { "start": 150, "end": 151, "text": "5", "ref_id": null }, { "start": 801, "end": 802, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 1", "ref_id": null }, { "start": 610, "end": 618, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "If we apply our method to English, deep parsers such as the Stanford Parser 7 and Enju 8 can be employed to achieve predicate-argument structure analysis. The Stanford Parser can output typed semantic dependencies that conform to the Stanford dependencies (de Marneffe et al., 2006) . Enju is an HPSG parser that outputs predicate-argument structures, and arguments are typed as Arg1, Arg2, and so forth. The representation of the dependency types in Enju is the same as that of PropBank (Palmer et al., 2005) . (c) \u27e8 \u27e9 (NOM:Tom ACC:bread bake) Figure 1 : An example of relative clause normalization by predicate-argument structure analysis in Japanese. (a) is a normal-order sentence and (b) is a sentence that contains a relative clause, \" \" (which Tom bakes). Arrows represent syntactic dependencies. Dotted arrows represent semantic dependencies that constitute predicate-argument structures. 
Both sentences are normalized to the predicate-argument structure (c).", "cite_spans": [ { "start": 256, "end": 282, "text": "(de Marneffe et al., 2006)", "ref_id": "BIBREF3" }, { "start": 489, "end": 509, "text": "(Palmer et al., 2005", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 537, "end": 545, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "In this way, though our framework itself is language-independent, our method depends on the availability of a predicate-argument structure analyzer for the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of predicate-argument structures", "sec_num": "3.2" }, { "text": "Our method builds an inverted index from the results of the predicate-argument structure analysis. First, word lemmas are registered as terms. We then need to integrate the predicate-argument structure information into the index. One possibility is to represent each predicate-argument structure as a term, but this method leads to a data sparseness problem. This is because the number of arguments in predicate-argument structures varies greatly not only in documents, but also in queries because of information granularity. For example, to express the same event, a predicate-argument structure can omit time or place information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexing", "sec_num": "3.3" }, { "text": "Instead, we decompose a predicate-argument structure into a set of typed semantic dependencies. A typed semantic dependency is defined as a typed dependency between a predicate and an argument that the predicate governs. 
For instance, the predicate-argument structure in Figure 2 can be decomposed into the following two typed semantic dependencies:", "cite_spans": [], "ref_spans": [ { "start": 271, "end": 279, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Indexing", "sec_num": "3.3" }, { "text": "(6) a. NOM \u2190 (Tom NOM \u2190 raise) b. ACC \u2190 (tension ACC \u2190 raise)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexing", "sec_num": "3.3" }, { "text": "These typed semantic dependencies are registered as dependency terms in the index. The type information is encoded as a term feature, which is an additional field for each dependency term. This term feature consists of both dependency type information and predicate information. We consider major postpositions in Japanese as dependency types (Table 1 ). If a dependency type is not listed in this table, then this type is regarded as a special type which we classify as \"other.\" In addition, a dependency that is not a relation between a predicate and its argument is also classified as \"other\" (e.g., a dependency between verbs).", "cite_spans": [], "ref_spans": [ { "start": 344, "end": 352, "text": "(Table 1", "ref_id": null } ], "eq_spans": [], "section": "Indexing", "sec_num": "3.3" }, { "text": "The predicate information in the term feature refers to the original predicate type for canonicalized predicates. There are four types: passive, causative, intransitive, and giving expression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexing", "sec_num": "3.3" }, { "text": "Hereafter, we describe the steps of online processing. When a query sentence is input, both predicate-argument structure analysis and term extraction are applied to the query sentence in the same way as in indexing. 
The extracted terms, consisting of words and typed semantic dependencies, are used to retrieve documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query processing", "sec_num": "3.4" }, { "text": "Note that unnecessary expressions like \" \" (please tell me) in a query sentence are not used to extract terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query processing", "sec_num": "3.4" }, { "text": "Using the results of the query processing, documents are then retrieved and ranked. First, documents are retrieved by accessing the inverted index using the terms extracted from the query analysis. Here, we have two options for the logical operator on the terms. If we apply the logical operator AND, we impose a constraint that all the terms must be contained in a retrieved document. Conversely, if we apply the logical operator OR, a retrieved document should have at least one of the terms. In this study, we use the logical operator OR to retrieve as many documents as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "(a) Tom-TOP tension-NOM rise (Tom's tension rises) (b) Tom-TOP tension-ACC raise (Tom raises (his) tension) (c) \u27e8NOM: ACC: \u27e9 (NOM:Tom ACC:tension raise) Figure 2 : An example of intransitive verb normalization by predicate-argument structure analysis in Japanese. (a) is an intransitive sentence and (b) is a transitive sentence. Arrows represent syntactic dependencies (they are also semantic dependencies in this case). Both sentences are normalized to the predicate-argument structure (c). In particular, the intransitive verb \" \" (rise) is a different word from the transitive verb \" \" (raise), but both are canonicalized to the same transitive verb \" \" (raise) in the predicate-argument structure. Table 1 : Dependency type information in Japanese. The first row is the list of dependency types used in our method. The second row gives the translations of the first row, where adj means adjuncts such as adverbs.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 2", "ref_id": null }, { "start": 584, "end": 591, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "This means that we do not apply any methods of selecting or ranking query terms, 9 but rely only on document scoring to examine the effectiveness of the use of predicate-argument structures. Following document retrieval, a relevancy score is assigned to each document, and the documents are ranked according to these relevancy scores. We use Okapi BM25 (Robertson et al., 1992) for estimating the relevancy score between a query and a document. This measure was originally proposed for models based on terms of independent words, but we slightly extend it to estimate relevancy for typed semantic dependencies that are extracted from predicate-argument structures. Our relevancy score is calculated as a weighted sum of the score of words and the score of dependencies. The score of dependencies is further calculated as a weighted sum of the following two scores: the score of dependencies with consistent (matched) type and that with inconsistent (mismatched) type. 
In particular, the score of dependencies with inconsistent type is reduced compared to the score of dependencies with consistent type.", "cite_spans": [ { "start": 293, "end": 317, "text": "(Robertson et al., 1992)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "We denote the set of words in a query q as T_qw, and the set of dependencies in q as T_qd. This set of dependencies is further divided into two types according to the consistency of dependency features: T_qd^C (consistent) and T_qd^I (inconsistent). We define the relevancy score between query q and document d as follows: (Footnote 9: We only discard unnecessary expressions in a query as described in subsection 3.4.)", "cite_spans": [ { "start": 298, "end": 299, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R(q, d) = \\sum_{t \\in T_{qw}} BM(t, d) + \\beta \\left\\{ \\sum_{t \\in T_{qd}^{C}} BM(t, d) + \\gamma \\sum_{t \\in T_{qd}^{I}} BM(t, d) \\right\\},", "eq_num": "(1)" } ], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "where \u03b2 is a parameter for adjusting the ratio of the score calculated from dependency relations to that from words, and \u03b3 is a parameter for decreasing the weight of inconsistent dependency types. 
The score BM(t, d) is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "BM(t, d) = IDF(t) \\times \\frac{(k_1 + 1) F_{dt}}{K + F_{dt}} \\times \\frac{(k_3 + 1) F_{qt}}{k_3 + F_{qt}}, \\quad (2) \\qquad IDF(t) = \\log \\frac{N - n + 0.5}{n + 0.5}, \\qquad K = k_1 \\left\\{ (1 - b) + b \\frac{l_d}{l_{ave}} \\right\\},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "where F_dt is the frequency with which t appears in document d, F_qt is the frequency with which t appears in q, N is the number of documents being searched, n is the document frequency of t, l_d is the length of document d (in words), and l_ave is the average document length. Finally, we set the Okapi parameters as k_1 = 1, k_3 = 0 and b = 0.6. We use the following relevancy score for a baseline method that uses only syntactic dependencies, which is explained in section 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "R(q, d) = \\sum_{t \\in T_{qw}} BM(t, d) + \\beta \\sum_{t \\in T_{qd}} BM(t, d). \\quad (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "This equation is the same as the relevancy score used in Shinzato et al. (2008) .", "cite_spans": [ { "start": 57, "end": 79, "text": "Shinzato et al. (2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Document retrieval and scoring", "sec_num": "3.5" }, { "text": "In this section, we evaluate and analyze our proposed method on the standard test collections of Japanese and English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "We implemented our proposed method using the open search engine infrastructure TSUBAKI (Shinzato et al., 2008 ) as a base system. TSUBAKI generates an inverted index from linguistic analyses in an XML format. 
Note that although TSUBAKI has a facility for using a synonym lexicon, we did not use it, so that our comparisons would not be affected by synonym expansion.", "cite_spans": [ { "start": 87, "end": 109, "text": "(Shinzato et al., 2008", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "4.1.1" }, { "text": "We evaluated our proposed method by using the test collection built for the NTCIR-3 (Eguchi et al., 2003) and NTCIR-4 (Eguchi et al., 2004) workshops. These workshops shared a target document set, which consists of 11,038,720 web pages from Japanese domains. We used a high-performance computing environment to perform predicate-argument structure analysis and indexing on these documents. It took three days for the analysis and two days for the indexing. For the evaluation, we used 127 informational topics (descriptions) defined in the test collections (47 from NTCIR-3 and 80 from NTCIR-4). We also had an additional 65 topics that were not used for evaluation in NTCIR-3; we used these 65 topics for parameter tuning. The relevance of each document with respect to a topic was judged as highly relevant, relevant, partially relevant, irrelevant or unjudged. We regarded the highly relevant, relevant, and partially relevant documents as correct answers.", "cite_spans": [ { "start": 84, "end": 105, "text": "(Eguchi et al., 2003)", "ref_id": "BIBREF4" }, { "start": 110, "end": 139, "text": "NTCIR-4 (Eguchi et al., 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "4.1.1" }, { "text": "For each topic, we retrieved 1,000 documents, ranked according to the score R(q, d) in equation (1). We tuned the parameters to \u03b2 = 0.18 and \u03b3 = 0.85 on the additional 65 topics, maximizing their mean average precision (MAP) score. 
We then assessed retrieval performance according to MAP, P@3 (Precision at 3), P@5, P@10 and nDCG@10 (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002). Note that unjudged documents were treated as irrelevant when computing the scores. For the graded relevance of nDCG@10, we mapped highly relevant, relevant, and partially relevant to the values 3, 2, and 1, respectively. MAP P@3 P@5 P@10 nDCG@10 word 0.1665 0.4233 0.4159 0.3706 0.2323 word+dep 0.1704 0.4233 0.4095 0.3730 0.2313 word+pa 0.1727** 0.4418* 0.4175 0.3794* 0.2370** Table 2 : Retrieval performance of two baseline methods (\"word\" and \"word+dep\") and our proposed method (\"word+pa\"). ** and * mean that the differences between \"word+dep\" and \"word+pa\" are statistically significant with p < 0.05 and p < 0.10, respectively. MAP P@3 P@5 P@10 nDCG@10 word 0.2085 0.4312 0.4302 0.3960 0.2455 word+dep 0.2120 0.4392 0.4286 0.3913 0.2433 word+pa 0.2139** 0.4524 0.4333 0.3976** 0.2484** Table 3 : Retrieval performance without unjudged documents. ** means that the differences between \"word+dep\" and \"word+pa\" are statistically significant with p < 0.05. Table 2 lists the retrieval performance. In this table, \"word\" is a baseline method that uses only words as terms, and \"word+dep\" is another baseline method that uses words and untyped syntactic dependencies as terms. These untyped syntactic dependencies are also available in the results of the predicate-argument structure analyzer KNP. \"word+pa\" is our proposed model, which considers predicate-argument structures.
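The evaluation measures above can be sketched as follows. This is a generic implementation using the common log2-discounted form of DCG, not the exact evaluation scripts used in the experiments; the relevance grades follow the 3/2/1 mapping described above.

```python
import math

def precision_at_k(grades, k):
    """P@k: fraction of the top-k results with any positive relevance grade."""
    return sum(1 for g in grades[:k] if g > 0) / k

def average_precision(grades, num_relevant):
    """AP of one ranked list (grades viewed as binary relevance);
    MAP is the mean of AP over all topics."""
    hits, total = 0, 0.0
    for rank, g in enumerate(grades, start=1):
        if g > 0:
            hits += 1
            total += hits / rank
    return total / num_relevant if num_relevant else 0.0

def ndcg_at_k(grades, judged_grades, k=10):
    """nDCG@k with graded relevance (3 = highly relevant, 2 = relevant,
    1 = partially relevant, 0 = irrelevant/unjudged)."""
    def dcg(gs):
        return sum(g / math.log2(rank + 1) for rank, g in enumerate(gs[:k], start=1))
    ideal = dcg(sorted(judged_grades, reverse=True))
    return dcg(grades) / ideal if ideal > 0 else 0.0
```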
We also applied the Wilcoxon signed-rank test to the differences between \"word+dep\" and \"word+pa.\"", "cite_spans": [ { "start": 356, "end": 387, "text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 775, "end": 782, "text": "Table 2", "ref_id": null }, { "start": 1195, "end": 1202, "text": "Table 3", "ref_id": null }, { "start": 1363, "end": 1370, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental setup", "sec_num": "4.1.1" }, { "text": "We can see that our proposed method \"word+pa\" outperformed the baselines \"word\" and \"word+dep\" in all the metrics. In particular, the difference between \"word+dep\" and \"word+pa\" in MAP was statistically significant with p = 0.01134. In addition, P@3 is higher than the baselines by approximately 1.9 points. This means that our model can provide more relevant documents at the top of the ranked results. The baseline \"word+dep\" outperformed the baseline \"word\" in MAP, which is used as a metric for optimizing the parameters, but did not outperform \"word\" in P@5 and nDCG@10. That is to say, \"word+dep\" was not consistently better than \"word.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieval performance evaluation", "sec_num": "4.1.2" }, { "text": "Generally, relevance judgments on a standard test collection are created using a pooling method, which judges a certain number of documents submitted by every participating system. Systems that are developed after the creation of the test collection possibly retrieve unjudged documents, but these are usually handled as irrelevant documents, even though they may contain relevant documents. In addition, the number of unjudged documents is likely to increase with the complexity of the system. To alleviate this bias, we evaluated the three systems without the inclusion of unjudged documents. Table 3 lists the evaluation results. From this table, we can see that \"word\" was likely to defeat \"word+dep,\" but \"word+pa\" consistently outperformed the two baseline methods. Figure 3 : An improved example of relative clause normalization by predicate-argument structure analysis in Japanese. (a) is a part of the query sentence and (b) is a part of a relevant document. Arrows represent syntactic dependencies and dotted arrows represent semantic dependencies. These sentences are normalized to the predicate-argument structures (a') and (b'), respectively. MAP P@3 P@5 P@10 nDCG@10 word+dep 0.1769 0.4444 0.4254 0.3921 0.2373 word+pa 0.1790** 0.4577 0.4317 0.3984* 0.2424** Table 4 : Retrieval performance including additional judgments. The meaning of ** and * is the same as in the previous tables.", "cite_spans": [], "ref_spans": [ { "start": 244, "end": 252, "text": "Figure 3", "ref_id": null }, { "start": 750, "end": 757, "text": "Table 4", "ref_id": null }, { "start": 1231, "end": 1238, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Retrieval performance evaluation", "sec_num": "4.1.2" }, { "text": "We also evaluated unjudged documents manually. We asked a person who is a certified librarian to judge them. These documents comprise the unjudged documents which appeared in the top 10 results of the two methods (\"word+dep\" and \"word+pa\") for each topic. Table 4 lists the retrieval performances reflecting the inclusion of these additional judgments. From this table, the result of the proposed method is consistently better than that of the baseline using syntactic dependencies.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 263, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Retrieval performance evaluation", "sec_num": "4.1.2" }, { "text": "By introducing the normalization by predicate-argument structures, our proposed method can retrieve relevant documents that the baseline methods either cannot retrieve at all or rank below the top 1,000 documents.
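The paired significance testing used above can be sketched as follows: the Wilcoxon signed-rank statistic W+ is computed over per-topic score differences between the two systems. The per-topic AP values below are illustrative, not the paper's data, and a full test (deriving a p-value from the statistic) would typically use a library routine such as scipy.stats.wilcoxon.

```python
def wilcoxon_w_plus(x, y):
    """Wilcoxon signed-rank statistic W+ for paired samples: drop zero
    differences, rank the |differences| (average ranks for ties), and sum
    the ranks of the positive differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # group tied magnitudes and assign them the average rank
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Hypothetical per-topic AP for "word+dep" and "word+pa" (not the paper's data)
ap_dep = [0.21, 0.35, 0.10, 0.42, 0.15]
ap_pa = [0.25, 0.36, 0.12, 0.45, 0.10]
print(wilcoxon_w_plus(ap_pa, ap_dep))  # prints 10.0
```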
Figures 3 and 4 show examples that were improved by the proposed method (\"word+pa\") compared with the baseline method (\"word+dep\"). (I wish to find out about differences in the ingredients and miso stock used to make ozoni soup at New Years in each region.)", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 215, "text": "Figures 3 and 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussions", "sec_num": "4.1.3" }, { "text": "(in some places, they put salmon, salmon roe and potato in ozoni soup in Hokkaido)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. \u2022 \u2022 \u2022", "sec_num": null }, { "text": "While different verbs are used to express almost the same meaning in these sentences, they are normalized to the predicate-argument structures (a') and (b') in Figure 4. The whole predicate-argument structures are different, but they contain the same typed semantic dependency: Figure 4 : An improved example of intransitive verb normalization by predicate-argument structure analysis in Japanese. (a) is a part of the query sentence and (b) is a part of a relevant document. These sentences are normalized to the predicate-argument structures (a') and (b'), respectively. In particular, the intransitive verb \" \" (exist) is a different word from the transitive verb \" \" (put), but both are canonicalized to the same transitive verb \" \" (put) in the predicate-argument structures.", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 168, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 278, "end": 286, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "b. \u2022 \u2022 \u2022", "sec_num": null }, { "text": "DAT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. \u2022 \u2022 \u2022", "sec_num": null }, { "text": "\u2190 (ozoni soup DAT \u2190put).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b.
\u2022 \u2022 \u2022", "sec_num": null }, { "text": "\u2190 (ozoni soup DAT \u2190put).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. \u2022 \u2022 \u2022", "sec_num": null }, { "text": "Generally speaking, linguistic variations can be roughly divided into two types: syntactic variations and lexical variations. Among syntactic variations, we handled those related to predicate-argument structures in this study. In our future work, we intend to investigate the remaining syntactic variations, such as nominal compounds and paraphrases consisting of larger trees than predicate-argument structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. \u2022 \u2022 \u2022", "sec_num": null }, { "text": "The other type is lexical variations, namely synonymous words and phrases. In our approach, they are partially handled in the normalization process to predicate-argument structures. Although handling lexical variations is not the main focus of this paper, we will investigate the effect of incorporating a lexicon of synonymous words and phrases into our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. \u2022 \u2022 \u2022", "sec_num": null }, { "text": "To validate the effectiveness of the proposed method in languages other than Japanese, we also conducted an experiment on English. We used the TREC Robust 2004 test collection (Voorhees, 2004), which consists of 528,155 English documents and 250 topics (TREC topics 301-450 and 601-700). We used the description queries in these topics, which are written in natural language. Stopwords were removed from the parse of each description, and dependencies that contained a stopword as either the modifier or the head were also removed. We used the INQUERY stopword list (Allan et al., 2000). Other experimental settings were the same as in the Japanese evaluation. Table 5 lists the retrieval performance.
In this table, \"word\" is a baseline method that uses only lemmatized words as terms, and \"word+dep\" is another baseline method that uses lemmatized words and syntactic dependencies that are analyzed by the state-of-the-art dependency parser MaltParser. 10 \"word+pa\" is our proposed method, which considers predicate-argument structures converted from the typed semantic dependencies output by the Stanford Parser. 11 We can see that our proposed method \"word+pa\" outperformed the baselines \"word\" and \"word+dep\" in all the metrics on this English test collection as well. MAP P@3 P@5 P@10 nDCG@10 word 0.1344 0.4498 0.4016 0.3297 0.3527 word+dep 0.1350 0.4337 0.4112 0.3317 0.3517 word+pa 0.1396* 0.4618** 0.4257** 0.3482** 0.3659** Table 5 : Retrieval performance of two baseline methods (\"word\" and \"word+dep\") and our proposed method (\"word+pa\") on the TREC test collection. The meaning of ** and * is the same as in the previous tables.", "cite_spans": [ { "start": 176, "end": 192, "text": "(Voorhees, 2004)", "ref_id": "BIBREF23" }, { "start": 555, "end": 575, "text": "(Allan et al., 2000)", "ref_id": "BIBREF0" }, { "start": 1473, "end": 1475, "text": "11", "ref_id": null } ], "ref_spans": [ { "start": 647, "end": 654, "text": "Table 5", "ref_id": null }, { "start": 1096, "end": 1103, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation on English Test Collection", "sec_num": "4.2" }, { "text": "This paper described an information retrieval method that exploits predicate-argument structures to precisely capture the dependencies between words. Experiments on the standard test collections of Japanese and English indicated the effectiveness of our approach.
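The passive-construction normalization applied to the English typed dependencies (the rule stated in footnote 11: converting \"nsubjpass\" to \"dobj\" and \"agent\" to \"nsubj\") can be sketched as follows. The dependency triples below are hand-written illustrations, not actual Stanford Parser output.

```python
# Rule from footnote 11: map passive relations onto their active-voice
# counterparts so both voices index identical typed dependency terms.
PASSIVE_MAP = {"nsubjpass": "dobj", "agent": "nsubj"}

def normalize(deps):
    """Rewrite (head, relation, dependent) triples with the passive rule."""
    return {(head, PASSIVE_MAP.get(rel, rel), dep) for head, rel, dep in deps}

# "Tom baked the bread" vs. "The bread was baked by Tom"
active = {("bake", "nsubj", "Tom"), ("bake", "dobj", "bread")}
passive = {("bake", "nsubjpass", "bread"), ("bake", "agent", "Tom")}

# After normalization the passive clause yields the same typed index terms
# as the active one, so either phrasing matches a query using the other.
assert normalize(passive) == normalize(active)
```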
In particular, the proposed method outperformed a baseline method that uses syntactic dependencies output by a dependency parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "For future work, we plan to optimize ranking by using machine learning techniques such as support vector regression, and to capture any remaining syntactic differences that express similar meanings (i.e., paraphrasing). We used the Okapi BM25 system as our baseline in this study. We will also employ a language model-based information retrieval system as a baseline to confirm the robustness of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "In this paper, we use the following abbreviations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.maltparser.org/ 11 To normalize passive constructions, we applied a rule that converts the dependency type \"nsubjpass\" to \"dobj\" and \"agent\" to \"nsubj.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by JSPS KAKENHI Grant Number 23680015.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "INQUERY and TREC-9", "authors": [ { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Margaret", "middle": [ "E" ], "last": "Connell", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "David", "middle": [], "last": "Fisher", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Li", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Ninth Text REtrieval Conference", "volume": "", "issue": "", "pages": "551--562", 
"other_ids": {}, "num": null, "urls": [], "raw_text": "James Allan, Margaret E. Connell, W. Bruce Croft, Fangfang Feng, David Fisher, and Xiaoyan Li. 2000. INQUERY and TREC-9. In Proceedings of the Ninth Text REtrieval Conference, pages 551- 562.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Rank learning for factoid question answering with linguistic and semantic constraints", "authors": [ { "first": "W", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Bilotti", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Elsas", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CIKM2010", "volume": "", "issue": "", "pages": "459--468", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew W Bilotti, Jonathan Elsas, Jaime Carbonell, and Eric Nyberg. 2010. Rank learning for fac- toid question answering with linguistic and seman- tic constraints. In Proceedings of CIKM2010, pages 459-468. ACM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1999. Head-Driven Statistical Mod- els for Natural Language Parsing. Ph.D. 
thesis, University of Pennsylvania.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "the 5th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In the 5th International Conference on Language Re- sources and Evaluation.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The web retrieval task and its evaluation in the third NTCIR workshop", "authors": [ { "first": "Koji", "middle": [], "last": "Eguchi", "suffix": "" }, { "first": "Keizo", "middle": [], "last": "Oyama", "suffix": "" }, { "first": "Emi", "middle": [], "last": "Ishida", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Kando", "suffix": "" }, { "first": "Kazuko", "middle": [], "last": "Kuriyama", "suffix": "" } ], "year": 2003, "venue": "Proceedings of SIGIR2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koji Eguchi, Keizo Oyama, Emi Ishida, Noriko Kando, and Kazuko Kuriyama. 2003. The web retrieval task and its evaluation in the third NTCIR workshop. 
In Proceedings of SIGIR2003.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Overview of web task at the fourth NTCIR workshop", "authors": [ { "first": "Koji", "middle": [], "last": "Eguchi", "suffix": "" }, { "first": "Keizo", "middle": [], "last": "Oyama", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" }, { "first": "Haruko", "middle": [], "last": "Ishikawa", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Fourth NTCIR Workshop on Research in Information Access Technologies Information Retrieval, Question Answering and Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koji Eguchi, Keizo Oyama, Akiko Aizawa, and Haruko Ishikawa. 2004. Overview of web task at the fourth NTCIR workshop. In Proceedings of the Fourth NTCIR Workshop on Research in Information Ac- cess Technologies Information Retrieval, Question Answering and Summarization.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Dependence language model for information retrieval", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Guangyuan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Guihong", "middle": [], "last": "Cao", "suffix": "" } ], "year": 2004, "venue": "Proceedings of SIGIR2004", "volume": "", "issue": "", "pages": "170--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Jian-Yun Nie, Guangyuan Wu, and Gui- hong Cao. 2004. Dependence language model for information retrieval. 
In Proceedings of SIGIR2004, pages 170-177.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Cumulated gain-based evaluation of ir techniques", "authors": [ { "first": "Kalervo", "middle": [], "last": "J\u00e4rvelin", "suffix": "" }, { "first": "Jaana", "middle": [], "last": "Kek\u00e4l\u00e4inen", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems", "volume": "20", "issue": "4", "pages": "422--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of ir techniques. ACM Transactions on Information Systems, 20(4):422- 446.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "What is the role of NLP in text retrieval", "authors": [ { "first": "Karen Sparck", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1999, "venue": "Natural language information retrieval", "volume": "", "issue": "", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sparck Jones. 1999. What is the role of NLP in text retrieval? In T. Strzalkowski, editor, Natural language information retrieval, pages 1-24. Kluwer.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A fully-lexicalized probabilistic model for Japanese syntactic and case structure analysis", "authors": [ { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL2006", "volume": "", "issue": "", "pages": "176--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Kawahara and Sadao Kurohashi. 2006. A fully-lexicalized probabilistic model for Japanese syntactic and case structure analysis. 
In Proceed- ings of HLT-NAACL2006, pages 176-183.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dependency structure applied to language modeling for information retrieval", "authors": [ { "first": "Changki", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Myung-Gil", "middle": [], "last": "Gary Geunbae Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Jang", "suffix": "" } ], "year": 2006, "venue": "ETRI Journal", "volume": "28", "issue": "3", "pages": "337--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changki Lee, Gary Geunbae Lee, and Myung-Gil Jang. 2006. Dependency structure applied to language modeling for information retrieval. ETRI Journal, 28(3):337-346.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Online large-margin training of dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL2005", "volume": "", "issue": "", "pages": "91--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL2005, pages 91-98.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A markov random field model for term dependencies", "authors": [ { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2005, "venue": "Proceedings of SIGIR2005", "volume": "", "issue": "", "pages": "472--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Donald Metzler and W. Bruce Croft. 2005. A markov random field model for term dependencies. 
In Pro- ceedings of SIGIR2005, pages 472-479.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semantic retrieval for the accurate identification of relational concepts in massive textbases", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Katsuya", "middle": [], "last": "Masuda", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Kazuhiro", "middle": [], "last": "Yoshida", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Ninomiya", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING-ACL2006", "volume": "", "issue": "", "pages": "1017--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Miyao, Tomoko Ohta, Katsuya Masuda, Yoshi- masa Tsuruoka, Kazuhiro Yoshida, Takashi Ni- nomiya, and Jun'ichi Tsujii. 2006. Semantic re- trieval for the accurate identification of relational concepts in massive textbases. In Proceedings of COLING-ACL2006, pages 1017-1024.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Question answering based on semantic structures", "authors": [ { "first": "Srini", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING2004", "volume": "", "issue": "", "pages": "184--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srini Narayanan and Sanda Harabagiu. 2004. Ques- tion answering based on semantic structures. 
In Pro- ceedings of COLING2004, pages 184-191.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Deterministic dependency parsing of English text", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Scholz", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING2004", "volume": "", "issue": "", "pages": "64--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING2004, pages 64-70.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The proposition bank: An annotated corpus of semantic roles", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated cor- pus of semantic roles. Computational Linguistics, 31(1):71-106.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Query term ranking based on dependency parsing of verbose queries", "authors": [ { "first": "Jae-Hyun", "middle": [], "last": "Park", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 2010, "venue": "Proceedings of SIGIR2010", "volume": "", "issue": "", "pages": "829--830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jae-Hyun Park and W. Bruce Croft. 2010. Query term ranking based on dependency parsing of ver- bose queries. 
In Proceedings of SIGIR2010, pages 829-830.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Quasi-synchronous dependence model for information retrieval", "authors": [ { "first": "Jae-Hyun", "middle": [], "last": "Park", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of CIKM2011", "volume": "", "issue": "", "pages": "17--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jae-Hyun Park, W. Bruce Croft, and David A. Smith. 2011. Quasi-synchronous dependence model for in- formation retrieval. In Proceedings of CIKM2011, pages 17-26.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Okapi at TREC", "authors": [ { "first": "Stephen", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Micheline", "middle": [], "last": "Hancock-Beaulieu", "suffix": "" }, { "first": "Aarron", "middle": [], "last": "Gull", "suffix": "" }, { "first": "Marianna", "middle": [], "last": "Lau", "suffix": "" } ], "year": 1992, "venue": "Proceedings of Text REtrieval Conference", "volume": "", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen E. Robertson, Steve Walker, Micheline Hancock-Beaulieu, Aarron Gull, and Marianna Lau. 1992. Okapi at TREC. In Proceedings of Text RE- trieval Conference, pages 21-30.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Using semantic roles to improve question answering", "authors": [ { "first": "Dan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP-CoNLL2007", "volume": "", "issue": "", "pages": "12--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Shen and Mirella Lapata. 2007. 
Using semantic roles to improve question answering. In Proceed- ings of EMNLP-CoNLL2007, pages 12-21.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "TSUBAKI: An open search engine infrastructure for developing new information access methodology", "authors": [ { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "" }, { "first": "Tomohide", "middle": [], "last": "Shibata", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Chikara", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of IJCNLP2008", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keiji Shinzato, Tomohide Shibata, Daisuke Kawahara, Chikara Hashimoto, and Sadao Kurohashi. 2008. TSUBAKI: An open search engine infrastructure for developing new information access methodology. In Proceedings of IJCNLP2008, pages 189-196.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A novel retrieval approach reflecting variability of syntactic phrase representation", "authors": [ { "first": "Young-In", "middle": [], "last": "Song", "suffix": "" }, { "first": "Kyoung-Soo", "middle": [], "last": "Han", "suffix": "" }, { "first": "Sang-Bum", "middle": [], "last": "Kim", "suffix": "" }, { "first": "So-Young", "middle": [], "last": "Park", "suffix": "" }, { "first": "Hae-Chang", "middle": [], "last": "Rim", "suffix": "" } ], "year": 2008, "venue": "Journal of Intelligent Information Systems", "volume": "31", "issue": "3", "pages": "265--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young-In Song, Kyoung-Soo Han, Sang-Bum Kim, So-Young Park, and Hae-Chang Rim. 2008. A novel retrieval approach reflecting variability of syn- tactic phrase representation. 
Journal of Intelligent Information Systems, 31(3):265-286.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Overview of the TREC 2004 robust retrieval track", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Text REtrieval Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen M. Voorhees. 2004. Overview of the TREC 2004 robust retrieval track. In Proceedings of Text RE- trieval Conference 2004.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "text": "Figure 3 is an example of the effect of normalizing relative clauses. The following sentences are the original query and a part of relevant document: (7) a. (I want to find shops that make bread with natural yeast.) b. (\u2022 \u2022 \u2022 only the bread that (someone) makes using only salt and yeast \u2022 \u2022 \u2022 )Here, (a) is a query and (b) is a sentence in a relevant document. These sentences have different syntactic dependencies as illustrated inFigure3, but they are normalized to the predicateargument structures (a') and (b') inFigure 3. The whole predicate-argument structures are different, but they contain the same typed semantic depen-", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "is an example of the effect of normalizing intransitive verbs. The following sentences are the original sentences in a query and a relevant document:(9) a.", "type_str": "figure" }, "TABREF0": { "html": null, "content": "
(a)? ?? ?(b)? ?? ?
Tom-NOM bread-ACC bakeTom-NOM bake bread
(Tom bakes bread)(bread which Tom bakes)
(c) \u27e8NOM:ACC:
).
", "num": null, "text": "http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN 5 http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KNP 6 In many cases, the lemma of a transitive verb is not the same as that of its corresponding intransitive verb in Japanese.", "type_str": "table" } } } }