|
{ |
|
"paper_id": "I05-1045", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:24:40.074428Z" |
|
}, |
|
"title": "Exploring Syntactic Relation Patterns for Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"addrLine": "Building 17, Postfach 15 11 50", |
|
"postCode": "66041", |
|
"settlement": "Saarbruecken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "dshen@coli.uni-sb.de" |
|
}, |
|
{ |
|
"first": "Geert-Jan", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kruijff", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"addrLine": "Building 17, Postfach 15 11 50", |
|
"postCode": "66041", |
|
"settlement": "Saarbruecken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"addrLine": "Building 17, Postfach 15 11 50", |
|
"postCode": "66041", |
|
"settlement": "Saarbruecken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "dietrich.klakow@lsv.uni-saarland.de" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we explore the syntactic relation patterns for opendomain factoid question answering. We propose a pattern extraction method to extract the various relations between the proper answers and different types of question words, including target words, head words, subject words and verbs, from syntactic trees. We further propose a QA-specific tree kernel to partially match the syntactic relation patterns. It makes the more tolerant matching between two patterns and helps to solve the data sparseness problem. Lastly, we incorporate the patterns into a Maximum Entropy Model to rank the answer candidates. The experiment on TREC questions shows that the syntactic relation patterns help to improve the performance by 6.91 MRR based on the common features.", |
|
"pdf_parse": { |
|
"paper_id": "I05-1045", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we explore the syntactic relation patterns for opendomain factoid question answering. We propose a pattern extraction method to extract the various relations between the proper answers and different types of question words, including target words, head words, subject words and verbs, from syntactic trees. We further propose a QA-specific tree kernel to partially match the syntactic relation patterns. It makes the more tolerant matching between two patterns and helps to solve the data sparseness problem. Lastly, we incorporate the patterns into a Maximum Entropy Model to rank the answer candidates. The experiment on TREC questions shows that the syntactic relation patterns help to improve the performance by 6.91 MRR based on the common features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Question answering is to find answers for open-domain natural language questions in a large document collection. A typical QA system usually consists of three basic modules: 1. Question Processing (QP) Module, which finds some useful information from questions, such as expected answer type and key words; 2. Information Retrieval (IR) Module, which searches a document collection to retrieve a set of relevant sentences using the key words; 3. Answer Extraction (AE) Module, which analyzes the relevant sentences using the information provided by the QP module and identify the proper answer. In this paper, we will focus on the AE module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
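
{

"text": "As a minimal illustration of this three-module pipeline (a sketch only; the object and method names below are hypothetical and are not taken from the paper), the control flow could be written in Python as:\n\ndef answer_question(question, collection, qp, ir, ae):\n    # 1. Question Processing: expected answer type and key words\n    q_info = qp.analyze(question)\n    # 2. Information Retrieval: sentences relevant to the key words\n    sentences = ir.retrieve(collection, q_info['key_words'])\n    # 3. Answer Extraction: identify the proper answer using the QP information\n    return ae.extract(sentences, q_info)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},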
|
{ |
|
"text": "In order to find the answers, some evidences, such as expected answer types and surface text patterns, are extracted from answer sentences and incorporated in the AE module using a pipelined structure, a scoring function or some statistical-based methods. However, the evidences extracted from plain texts are not sufficient to identify a proper answer. For examples, for \"Q1910: What are pennies made of?\", the expected answer type is unknown; for \"Q21: Who was the first American in space?\", the surface patterns may not detect the long-distance relations between the question key phrase \"the first American in space\" and the answer \"Alan Shepard\" in \"\u2026 that carried Alan Shepard on a 15 -minute suborbital flight in 1961 , making him the first American in space.\" To solve these problems, more evidences need to be extracted from the more complex data representations, such as parse trees.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we explore the syntactic relation patterns (SRP) for the AE module. An SRP is defined as a kind of relation between a question word and an answer candidate in the syntactic tree. Different from the textual patterns, the SRPs capture the relations based on the sentence syntactic structure rather than the sentence surface. Therefore, they may get the deeper understanding of the relations and capture the long range dependency between words regardless of their ordering and distance in the surface text. Based on the observation of the task, we find that the syntactic relations between different types of question words and answers vary a lot with each other. We classify the question words into four classes, including target words, head words, subject phrases and verbs, and generate the SRPs for them respectively. Firstly, we generate the SRPs from the training data and score them based on the support and confidence measures. Next, we propose a QA-specific tree kernel to calculate the similarity between two SRPs in order to match the patterns from the unseen data into the pattern set. The tree kernel makes the partial matching between two patterns and helps to solve the data sparseness problem. Lastly, we incorporate the SRPs into a Maximum Entropy Model along with some common features to classify the answer candidates. The experiment on TREC questions shows that the syntactic relation patterns improve the performance by 6.91 MRR based on the common features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although several syntactic relations, such as subject-verb and verb-object, have been also considered in some other systems, they are basically extracted using a small number of hand-built rules. As a result, they are limited and costly. In our task, we automatically extract the various relations between different question words and answers and more tolerantly match the relation patterns using the tree kernel.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The relations between answers and question words have been explored by many successful QA systems based on certain sentence representations, such as word sequence, logic form, parse tree, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the simplest case, a sentence is represented as a sequence of words. It is assumed that, for certain type of questions, the proper answers always have certain surface relations with the question words. For example, \"Q: When was X born?\", the proper answers often have such relation \"<X> ( <Answer>--\" with the question phrase X . [14] first used a predefined pattern set in QA and achieved a good performance at TREC10. [13] further developed a bootstrapping method to learn the surface patterns automatically. When testing, most of them make the partial matching using regular expression. However, such surface patterns strongly depend on the word ordering and distance in the text and are too specific to the question type.", |
|
"cite_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 337, |
|
"text": "[14]", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 427, |
|
"text": "[13]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "LCC [9] explored the syntactic relations, such as subject, object, prepositional attachment and adjectival/adverbial adjuncts, based on the logic form transformation. Furthermore they used a logic prover to justify the answer candidates. The prover is accurate but costly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 7, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Most of the QA systems explored the syntactic relations on the parse tree. Since such relations do not depend on the word ordering and distance in the sentence, they may cope with the various surface expressions of the sentence. ISI [7] extracted the relations, such as \"subject-verb\" and \"verb-object\", in the answer sentence tree and compared with those in the question tree. IBM's Maximum Entropy-based model [10] integrated a rich feature set, including words co-occurrence scores, named entity, dependency relations, etc. For the dependency relations, they considered some predefined relations in trees by partial matching. BBN [15] also considered the verbargument relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 236, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 416, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 637, |
|
"text": "[15]", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, most of the current QA systems only focus on certain relation types, such as verb-argument relations, and extract them from the syntactic tree using some heuristic rules. Therefore, extracting such relations is limited in a very local context of the answer node, such as its parent or sibling nodes, and does not involve long range dependencies. Furthermore, most of the current systems only concern the relations to certain type of question words, such as verb. In fact, different types of question words may have different indicative relations with the proper answers. In this paper, we will automatically extract more comprehensive syntactic relation patterns for all types of question words, partially match them using a QA-specific tree kernel and evaluate their contributions by integrating them into a Maximum Entropy Model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we will discuss how to extract the syntactic relation patterns. Firstly, we briefly introduce the question processing module which provides some necessary information to the answer extraction module. Secondly, we generate the dependency tree of the answer sentence and map the question words into the tree using a Modified Edit Distance (MED) algorithm. Thirdly, we define and extract the syntactic relation patterns in the mapped dependency tree. Lastly, we score and filter the patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Generating", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The key words are extracted from the questions. Considering that different key words may have different syntactic relations with the answers, we divide the key words into the following four types:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Processing Module", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1. Target Words, which are extracted from what / which questions. Such words indicate the expected answer types, such as \"party\" in \"Q1967: What party led \u2026?\". 2. Head Words, which are extracted from how questions. Such words indicate the expected answer heads, such as \"dog\" in the \"Q210: How many dogs pull \u2026?\" 3. Subject Phrases, which are extracted from all types of questions. They are the base noun phrases of the questions except the target words and the head words. 4. Verbs, which are the main verbs extracted from non-definition questions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Processing Module", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The key words described above are identified and classified based on the question parse tree. We employ the Collins Parser [2] to parse the questions and the answer sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Processing Module", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "From this section, we start to introduce the AE module. Firstly, the answer sentences are tagged with named entities and parsed. Secondly, the parse trees are transformed to the dependency trees based on a set of rules. To simplify a dependency tree, some special rules are used to remove the non-useful nodes and dependency information. The rules include 1. Since the question key words are always NPs and verbs, only the syntactic relations between NP and NP / NP and verb are considered. 2. The original form of Base Noun Phrase (BNP) is kept and the dependency relations within the BNPs are not considered, such as adjective-noun. A base noun phrase is defined as the smallest noun phrase in which there are no noun phrases embedded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Key Words Mapping", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "An example of the dependency tree is shown in Figure 1 . We regard all BNP nodes and leaf nodes as answer candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 54, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Key Words Mapping", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Next, we map the question key words into the simplified dependency trees. We propose a weighted edit distance (WED) algorithm, which is to find the similarity between two phrases by computing the minimal cost of operations needed to transform one phrase into the other, where an operation is an insertion, deletion, or substitution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fig. 1. Dependency tree and Tagged dependency tree", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Different from the commonly-used edit distance algorithm [11] , the WED defines the more flexible cost function which incorporates the morphological and semantic alternations of the words. The morphological alternations indicate the inflections of noun/verb. For example, for Q2149: How many Olympic gold medals did Carl Lewis win? We map the verb win to the nominal winner in the answer sentence \"Carl Lewis, winner of nine Olympic gold medals, thinks that \u2026\". The morphological alternations are found based on a stemming algorithm and the \"derivationally related forms\" in WordNet [8] . The semantic alternations consider the synonyms of the words. Some types of the semantic relations in WordNet enable the retrieval of synonyms, such as hypernym, hyponym, etc. For example, for Q212: Who invented the electric guitar? We may map the verb invent to its direct hypernym create in answer sentences. Based on the observation of the task, we set the substitution costs of the alternations as follows: Identical words have cost 0; Words with the same morphological root have cost 0. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 61, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 586, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fig. 1. Dependency tree and Tagged dependency tree", |
|
"sec_num": null |
|
}, |
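
{

"text": "A minimal sketch of the weighted edit distance over token sequences, with the graded substitution cost supplied as a function (illustrative only; a real implementation would consult a stemmer and WordNet for the cost table, and the toy_sub_cost helper below only handles case folding):\n\ndef weighted_edit_distance(src, tgt, sub_cost, ins_del_cost=1.0):\n    # Standard dynamic program; only the substitution cost is customized.\n    n, m = len(src), len(tgt)\n    d = [[0.0] * (m + 1) for _ in range(n + 1)]\n    for i in range(1, n + 1):\n        d[i][0] = i * ins_del_cost\n    for j in range(1, m + 1):\n        d[0][j] = j * ins_del_cost\n    for i in range(1, n + 1):\n        for j in range(1, m + 1):\n            d[i][j] = min(d[i - 1][j] + ins_del_cost,    # deletion\n                          d[i][j - 1] + ins_del_cost,    # insertion\n                          d[i - 1][j - 1] + sub_cost(src[i - 1], tgt[j - 1]))\n    return d[n][m]\n\ndef toy_sub_cost(w1, w2):\n    # Toy stand-in for the graded costs listed above (identical = 0,\n    # same morphological root = 0.2, ..., unrelated words = 1).\n    if w1 == w2:\n        return 0.0\n    if w1.lower() == w2.lower():\n        return 0.2\n    return 1.0\n\nprint(weighted_edit_distance(['Carl', 'Lewis'], ['carl', 'lewis'], toy_sub_cost))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Fig. 1. Dependency tree and Tagged dependency tree",

"sec_num": null

},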
|
{ |
|
"text": "A syntactic relation pattern is defined as the smallest subtree which covers an answer candidate node and one question key word node in the dependency tree. To capture different relations between answer candidates and different types of question words, we generate four pattern sets, called PSet_target, PSet_head, PSet_subject and PSet_verb, for the answer candidates. The patterns are extracted from the training data. Some pattern examples are shown in Table 1 . For a question Q, there are a set of relevant sentences SentSet. The extraction process is as follows: ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 463, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Extraction", |
|
"sec_num": "3.3" |
|
}, |
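
{

"text": "Since a pattern is the smallest subtree covering the answer candidate node and a question key word node, it can be read off the tree via their lowest common ancestor. A rough sketch (illustrative only, not the authors' implementation; the Node class is a minimal stand-in for a dependency tree node):\n\nclass Node:\n    def __init__(self, label, parent=None):\n        self.label = label\n        self.parent = parent\n\ndef path_to_root(node):\n    path = []\n    while node is not None:\n        path.append(node)\n        node = node.parent\n    return path  # node, its parent, ..., the root\n\ndef pattern_root(ac_node, qword_node):\n    # The root of the smallest covering subtree is the lowest common\n    # ancestor of the two nodes; the pattern keeps only the two paths.\n    ancestors = set(id(n) for n in path_to_root(qword_node))\n    for n in path_to_root(ac_node):\n        if id(n) in ancestors:\n            return n\n    return None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Syntactic Relation Pattern Extraction",

"sec_num": "3.3"

},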
|
{ |
|
"text": "The patterns extracted in section 3.3 are scored by support and confidence measures. Support and confidence measures are most commonly used to evaluate the association rules in the data mining area. The support of a rule is the proportion of times the rule applies. The confidence of a rule is the proportion of times the rule is correct. In our task, we score a pattern by measuring the strength of the association rule from the pattern to the proper answer (the pattern is matched => the answer is correct We score the patterns in the PSet_target, PSet_head, PSet_subject and PSet_verb respectively. If the support value is less than the threshold sup t or the confidence value is less than the threshold conf t , the pattern is removed from the set. In the experiment, we set sup t 0.01 and conf t 0.5. Table 1 lists the support and confidence of the patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 806, |
|
"end": 813, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Scoring", |
|
"sec_num": "3.4" |
|
}, |
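
{

"text": "Following this description (support: how often a pattern applies; confidence: how often it indicates a proper answer when it applies), the scoring and filtering step could be sketched as follows; the exact normalization used in the paper may differ:\n\ndef score_and_filter(instances, sup_t=0.01, conf_t=0.5):\n    # instances: list of (pattern, is_correct) pairs collected from the\n    # training answer candidates of one pattern set (e.g. PSet_target).\n    total = len(instances)\n    kept = {}\n    for p in set(pattern for pattern, _ in instances):\n        applies = sum(1 for q, _ in instances if q == p)\n        correct = sum(1 for q, ok in instances if q == p and ok)\n        support = applies / total       # proportion of times the rule applies\n        confidence = correct / applies  # proportion of times the rule is correct\n        if support >= sup_t and confidence >= conf_t:\n            kept[p] = (support, confidence)\n    return kept",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Syntactic Relation Pattern Scoring",

"sec_num": "3.4"

},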
|
{ |
|
"text": "Since we build the pattern sets based on the training data in the current experiment, the pattern sets may not be large enough to cover all of the unseen cases. If we make the exact match between two patterns, we will suffer from the data sparseness problem. So a partial matching method is required. In this section, we will propose a QAspecific tree kernel to match the patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A kernel function 1 2 ( , ): [0, ] K x x \u00d7 \u2192 X X", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "R , is a similarity measure between two objects 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "x and 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "x with some constraints. It is the most important component of kernel methods [16] . Tree kernels are the structure-driven kernels used to calculate the similarity between two trees. They have been successfully accepted in the natural language processing applications, such as parsing [4] , part of speech tagging and named entity extraction [3] , and information extraction [5, 17] Figure 2 shows an example of the pattern tree T_ac#target.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 82, |
|
"text": "[16]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 288, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 345, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 378, |
|
"text": "[5,", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 382, |
|
"text": "17]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 391, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Matching", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The core idea of the tree kernel ( , ) 1 2 K T T is that the similarity between two trees T 1 and T 2 is the sum of the similarity between their subtrees. It can be calculated by dynamic programming and can capture the long-range relations between two nodes. The kernel we use is similar to [17] except that we define a task-specific matching function and similarity function, which are two primitive functions to calculate the similarity between two nodes in terms of their attributes. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 295, |
|
"text": "[17]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Relation Pattern Matching", |
|
"sec_num": "4" |
|
}, |
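
{

"text": "A simplified recursive sketch in the spirit of the kernel described above (illustrative only: it sums node similarity over all child pairs rather than running the full dynamic program over child subsequences, and the matching and similarity functions are placeholders for the QA-specific ones defined over the node attributes):\n\ndef node_match(n1, n2):\n    # Placeholder matching function over node attributes.\n    return n1.get('tag') == n2.get('tag') and n1.get('role') == n2.get('role')\n\ndef node_sim(n1, n2):\n    # Placeholder similarity function: fraction of shared attribute values.\n    shared = sum(1 for k in n1 if n1.get(k) == n2.get(k))\n    return shared / max(len(n1), len(n2), 1)\n\ndef tree_kernel(t1, t2):\n    # Similarity of two pattern trees = similarity of their root nodes\n    # plus the summed similarity of their matchable subtrees.\n    if not node_match(t1['node'], t2['node']):\n        return 0.0\n    k = node_sim(t1['node'], t2['node'])\n    for c1 in t1.get('children', []):\n        for c2 in t2.get('children', []):\n            k += tree_kernel(c1, c2)\n    return k",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Syntactic Relation Pattern Matching",

"sec_num": "4"

},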
|
{ |
|
"text": "In addition to the syntactic relation patterns, many other evidences, such as named entity tags, may help to detect the proper answers. Therefore, we use maximum entropy to integrate the syntactic relation patterns and the common features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ME-Based Answer Extraction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[1] gave a good description of the core idea of maximum entropy model. In our task, we use the maximum entropy model to rank the answer candidates for a question, ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Entropy Model", |
|
"sec_num": "5.1" |
|
}, |
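
{

"text": "Written out in the standard conditional log-linear form (the exact parameterization used in the paper is not reproduced here; f_i are the binary features over an answer candidate a, the question q and the sentence s, lambda_i are the weights trained with GIS, and the denominator sums over all answer candidates a' of the sentence), the ranking model is of the form:\n\nP(a \\mid q, s) = \\frac{\\exp\\big(\\sum_i \\lambda_i f_i(a, q, s)\\big)}{\\sum_{a'} \\exp\\big(\\sum_i \\lambda_i f_i(a', q, s)\\big)}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Maximum Entropy Model",

"sec_num": "5.1"

},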
|
{ |
|
"text": "For the baseline maximum entropy model, we use four types of common features:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "1. Named Entity Features: For certain question target, if the answer candidate is tagged as certain type of named entity, one feature fires. 2. Orthographic Features: They capture the surface format of the answer candidates, such as capitalizations, digits and lengths, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For certain question target, if the word in the answer candidate belongs to a certain syntactic / POS type, one feature fires. 4. Triggers: For some how questions, there are always some trigger words which are indicative for the answers. For example, for \"Q2156: How fast does Randy Johnson throw?\", the word \"mph\" may help to identify the answer \"98-mph\" in \"Johnson throws a 98-mph fastball.\" Table 3 shows some examples of the common features. All of the features are the binary features. In addition, many other features, such as the answer candidate frequency, can be extracted based on the IR output and are thought as the indicative evidences for the answer extraction [10] . However, in this paper, we are to evaluate the answer extraction module independently, so we do not incorporate such features in the current model. In order to evaluate the effectiveness of the automatically generated syntactic relation patterns, we also manually build some heuristic rules to extract the relation features from the trees and incorporate them into the baseline model. The baseline model uses 20 rules. Some examples of the hand-extracted relation features are listed as follows,", |
|
"cite_spans": [ |
|
{ |
|
"start": 676, |
|
"end": 680, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 402, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Tag Features:", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "If the ac node is the same of the qtarget node, one feature fires. If the ac node is the sibling of the qtarget node, one feature fires. If the ac node is the child of the qsubject node, one feature fires. \u2026 Next, we will discuss the use of the syntactic relation features. Firstly, for each answer candidate, we extract the syntactic relations between it and all mapped question key words in the sentence tree. Then for each extracted relation, we match it in the pattern set PSet_target, PSet_head, PSet_subject or PSet_verb. A tree kernel discussed in Section 4 is used to calculate the similarity between two patterns. Finally, if the maximal similarity is above a threshold \u03bb , the pattern with the maximal similarity is chosen and the corresponding feature fires. The experiments will evaluate the performance and the coverage of the pattern sets based on different \u03bb values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Tag Features:", |
|
"sec_num": "3." |
|
}, |
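
{

"text": "Reading the matching step just described as code (a sketch under the assumption that pattern_sets maps each question-word type to its retained patterns and that tree_kernel is the QA-specific kernel of Section 4; all names are illustrative):\n\ndef fire_pattern_features(candidate_relations, pattern_sets, tree_kernel, lam=0.6):\n    # candidate_relations: {question_word_type: extracted relation pattern}\n    # for one answer candidate in the sentence tree.\n    fired = []\n    for qtype, relation in candidate_relations.items():\n        best_sim, best_pattern = 0.0, None\n        for pattern in pattern_sets.get(qtype, []):\n            sim = tree_kernel(relation, pattern)\n            if sim > best_sim:\n                best_sim, best_pattern = sim, pattern\n        if best_pattern is not None and best_sim > lam:\n            fired.append((qtype, best_pattern))  # the corresponding feature fires\n    return fired",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Syntactic Tag Features:",

"sec_num": "3."

},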
|
{ |
|
"text": "We apply the AE module to the TREC QA task. Since this paper focuses on the AE module alone, we only present those sentences containing the proper answers to the AE module based on the assumption that the IR module has got 100% precision. The AE module is to identify the proper answers from the given sentence collection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We use the questions of TREC8, 9, 2001 and 2002 for training and the questions of TREC2003 for testing. The following steps are used to generate the data:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "1. Retrieve the relevant documents for each question based on the TREC judgments. 2. Extract the sentences, which match both the proper answer and at least one question key word, from these documents. 3. Tag the proper answer in the sentences based on the TREC answer patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In TREC 2003, there are 413 factoid questions in which 51 questions (NIL questions) are not returned with the proper answers by TREC. According to our data generation process, we cannot provide data for those NIL questions because we cannot get the sentence collections. Therefore, the AE module will fail on all of the NIL questions and the number of the valid questions should be 362 (413 -51). In the experiment, we still test the module on the whole question set (413 questions) to keep consistent with the other's work. The training set contains 1252 questions. The performance of our system is evaluated using the mean reciprocal rank (MRR). Furthermore, we also list the percentages of the correct answers respectively in terms of the top 5 answers and the top 1 answer returned. No post-processes are used to adjust the answers in the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
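
{

"text": "For reference, the mean reciprocal rank over the returned answer lists is computed in the standard way (a sketch; the is_correct predicate would be backed by the TREC answer patterns):\n\ndef mean_reciprocal_rank(ranked_answers_per_question, is_correct):\n    # ranked_answers_per_question: one ranked candidate list per question.\n    # is_correct(question_index, answer) -> bool.\n    total = 0.0\n    for qi, answers in enumerate(ranked_answers_per_question):\n        for rank, answer in enumerate(answers[:5], start=1):  # top 5 answers\n            if is_correct(qi, answer):\n                total += 1.0 / rank\n                break\n    return total / len(ranked_answers_per_question)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment",

"sec_num": "6"

},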
|
{ |
|
"text": "In order to evaluate the effectiveness of the syntactic relation patterns in the answer extraction, we compare the modules based on different feature sets. The first ME module ME1 uses the common features including NE features, Orthographic features, Syntactic Tag features and Triggers. The second ME module ME2 uses the common features and some hand-extracted relation features, described in Section 5.2. The third module ME3 uses the common features and the syntactic relation patterns which are automatically extracted and partial matched with the methods proposed in Section 3 and 4. Table 4 shows the overall performance of the modules. Both ME2 and ME3 outperform ME1 by 3.15 MRR and 6.91 MRR respectively. This may indicate that the syntactic relations between the question words and the answers are useful for the answer extraction. Furthermore, ME3 got the higher performance (+3.76 MRR) than ME2. The probable reason may be that the relations extracted by some heuristic rules in ME2 are limited in the very local contexts of the nodes and they may not be sufficient. On the contrary, the pattern extraction methods we proposed can explore the larger relation space in the dependency trees. Table 5 . Performances for two pattern matching methods PartialMatch ExactMatch Furthermore, we evaluate the effectiveness of the pattern matching method in Section 4. We compare two pattern matching methods: the exact matching (ExactMatch) and the partial matching (PartialMatch) using the tree kernel. Table 5 shows the performances for the two pattern matching methods. For PartialMatch, we also evaluate the effect of the parameter \u03bb (described in Section 5.2) on the performance. In Table 5 , the best PartialMatch ( \u03bb = 0.6) outperforms ExactMatch by 1.48 MRR.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 589, |
|
"end": 596, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1506, |
|
"end": 1513, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1690, |
|
"end": 1697, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Since the pattern sets extracted from the training data is not large enough to cover the unseen cases, ExactMatch may have too low coverage and suffer with the data sparseness problem when testing, especially for PSet_subject (24.32% coverage using Ex-actMatch vs. 49.94% coverage using PartialMatch). In addition, even the model with ExactMatch is better than ME2 (common features + hand-extracted relations) by 2.28 MRR. It indicates that the relation patterns explored with the method proposed in Section 3 are more effective than the relations extracted by the heuristic rules. Table 6 shows the size of the pattern sets PSet_target, PSet_head, PSet_subject", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 589, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "and PSet_verb and their coverage for the test data based on different \u03bb values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "PSet_verb gets the low coverage (<5% coverage). The probable reason is that the verbs in the answer sentences are often different from those in the questions, therefore only a few question verbs can be matched in the answer sentences. PSet_head also gets the relatively low coverage since the head words are only exacted from how questions and there are only 49/413 how questions with head words in the test data. We further evaluate the contributions of different types of patterns. We respectively combine the pattern features in different pattern set and the common features. Some findings can be concluded from Table 7 : All of the patterns have the positive effects based on the common features, which indicates that all of the four types of the relations are helpful for answer extraction. Furthermore, P_target (+4.21 MRR) and P_subject (+2.47 MRR) are more beneficial than P_head (+1.25 MRR) and P_verb (+0.19 MRR). This may be explained that the target and subject patterns may have the effect on the more test data than the head and verb patterns since PSet_target and PSet_subject have the higher coverage for the test data than PSet_head and PSet_verb, as shown in Table 6 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 615, |
|
"end": 622, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 1177, |
|
"end": 1184, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper, we study the syntactic relation patterns for question answering. We extract the various syntactic relations between the answers and different types of question words, including target words, head words, subject words and verbs and score the extracted relations based on support and confidence measures. We further propose a QA-specific tree kernel to partially match the relation patterns from the unseen data to the pattern sets. Lastly, we incorporate the patterns and some common features into a Maximum Entropy Model to rank the answer candidates. The experiment shows that the syntactic relation patterns improve the performance by 6.91 MRR based on the common features. Moreover, the contributions of the pattern matching methods are evaluated. The results show that the tree kernel-based partial matching outperforms the exact matching by 1.48 MRR. In the future, we are to further explore the syntactic relations using the web data rather than the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A maximum entropy approach to natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Berger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "39--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Berger, A., Della Pietra, S., Della Pietra, V.: A maximum entropy approach to natural lan- guage processing. Computational Linguistics (1996), vol. 22, no. 1, pp. 39-71", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A New Statistical Parser Based on Bigram Lexical Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of ACL-96", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "184--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, M.: A New Statistical Parser Based on Bigram Lexical Dependencies. In: Pro- ceedings of ACL-96 (1996) 184-191", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "New Ranking Algorithms for Parsing and Tagging: Kernel over Discrete Structures, and the Voted Perceptron", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceeings of ACL-2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, M.: New Ranking Algorithms for Parsing and Tagging: Kernel over Discrete Structures, and the Voted Perceptron. In: Proceeings of ACL-2002 (2002).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Convolution Kernels for Natural Language", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Duffy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, M., Duffy, N.: Convolution Kernels for Natural Language. Advances in Neural Information Processing Systems 14, Cambridge, MA. MIT Press (2002)", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Dependency Tree Kernels for Relation Extraction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Culotta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sorensen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of ACL-2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Culotta, A., Sorensen, J.: Dependency Tree Kernels for Relation Extraction. In: Proceed- ings of ACL-2004 (2004)", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Generalized iterative scaling for log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Darroch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ratcliff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "The annuals of Mathematical Statistics", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "1470--1480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Darroch, J., Ratcliff, D.: Generalized iterative scaling for log-linear models. The annuals of Mathematical Statistics (1972), vol. 43, pp. 1470-1480", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multiple-Engine Question Answering in TextMap", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Melz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the TREC-2003 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Echihabi, A., Hermjakob, U., Hovy, E., Marcu, D., Melz, E., Ravichandran, D.: Multiple- Engine Question Answering in TextMap. In: Proceedings of the TREC-2003 Conference, NIST (2003)", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "WordNet -An Electronic Lexical Database", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fellbaum, C.: WordNet -An Electronic Lexical Database. MIT Press, Cambridge, MA (1998)", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Answer Mining by Combining Extraction Techniques with Abductive Reasoning", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bowden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bensley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the TREC-2003 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harabagiu, S., Moldovan, D., Clark, C., Bowden, M., Williams, J., Bensley, J.: Answer Mining by Combining Extraction Techniques with Abductive Reasoning. In: Proceedings of the TREC-2003 Conference, NIST (2003)", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "IBM's Statistical Question Answering System -TREC 11", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the TREC-2002 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ittycheriah, A., Roukos, S.: IBM's Statistical Question Answering System -TREC 11. In: Proceedings of the TREC-2002 Conference, NIST (2002)", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Binary Codes Capable of Correcting Deletions, Insertions and Reversals", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Levenshtein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "Doklady Akademii Nauk SSSR", |
|
"volume": "163", |
|
"issue": "4", |
|
"pages": "845--848", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Levenshtein, V. I.: Binary Codes Capable of Correcting Deletions, Insertions and Rever- sals. Doklady Akademii Nauk SSSR 163(4) (1965) 845-848", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Re-ranker: What's the difference?", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Workshop on Multilingual Summarization and Question Answering, ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ravichandran, D., Hovy, E., Och, F. J.: Statistical QA -Classifier vs. Re-ranker: What's the difference? In: Proceedings of Workshop on Multilingual Summarization and Question Answering, ACL (2003)", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning Surface Text Patterns for a Question Answering System", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of ACL-2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ravichandran, D., Hovy, E.: Learning Surface Text Patterns for a Question Answering System. In: Proceedings of ACL-2002 (2002) 41-47", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Patterns of Potential Answer Expressions as Clues to the Right Answer", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Soubbotin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Soubbotin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the TREC-10 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soubbotin, M. M., Soubbotin, S. M.: Patterns of Potential Answer Expressions as Clues to the Right Answer. In: Proceedings of the TREC-10 Conference, NIST (2001)", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "TREC 2002 QA at BBN: Answer Selection and Confidence Estimation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Licuanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the TREC-2002 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xu, J., Licuanan, A., May, J., Miller, S., Weischedel, R.: TREC 2002 QA at BBN: Answer Selection and Confidence Estimation. In: Proceedings of the TREC-2002 Conference, NIST (2002)", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Statistical Learning Theory", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vapnik, V.: Statistical Learning Theory, John Wiley, NY, (1998) 732.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Kernel Methods for Relation Extraction", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Zelenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Aone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Richardella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1083--1106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zelenko, D., Aone, C., Richardella, A.: Kernel Methods for Relation Extraction. Journal of Machine Learning Research (2003) 1083-1106.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "2; Words with the hypernym or hyponym relations have cost verb of the question SUB: the subject words of the question TGT_HYP: the hypernym of the target word of the question live BNP NER_PER SUB Ellington BNP NER_LOC TGT_HYP BNP Washington his early NNP 20s NER_DAT Q1916: What city did Duke Ellington live in? A: Ellington lived in Washington until his early 20s. 0.4; Words in the same SynSet have cost 0.6; Words with subsequence relations have cost 0.8; otherwise, words have cost 1. Figure 1 also shows an example of the tagged dependency tree.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "An example of the pattern tree T_ac#target", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "parameters, which are trained with Generalized Iterative Scaling[6]. A Gaussian Prior is used to smooth the ME model.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td>PatternSet</td><td>Patterns</td><td>Sup.</td><td>Conf.</td></tr><tr><td/><td>(NPB~AC~TGT)</td><td>0.55</td><td>0.22</td></tr><tr><td>PSet_target</td><td>(NPB~AC~null (NPB~null~TGT))</td><td>0.08</td><td>0.06</td></tr><tr><td/><td>(NPB~null~null (NPB~AC~null) (NPB~null~TGT))</td><td>0.02</td><td>0.09</td></tr><tr><td>PSet_head</td><td>(NPB~null~null (CD~AC~null) (NPB~null~HEAD))</td><td>0.59</td><td>0.67</td></tr><tr><td/><td>(VP~null~null (NPB~null~SUB) (NPB~null~null</td><td>0.04</td><td>0.33</td></tr><tr><td>PSet_subject</td><td>(NPB~AC~null))) (NPB~null~null (NPB~null~SUB) (NPB~AC~null))</td><td>0.02</td><td>0.18</td></tr><tr><td>PSet_verb</td><td>(VP~null~VERB (NPB~AC~null))</td><td>0.18</td><td>0.16</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "1. for each question Q in the training data 2. question processing model extract the key words of Q 3. for each sentence s in SentSet a) parse s b) map the question key words into the parse tree c) tag all BNP nodes in the parse tree as answer candidates. Examples of the patterns in the four pattern sets" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table><tr><td>Attributes</td><td/><td>Examples</td></tr><tr><td>Type</td><td>POS tag</td><td>CD, NNP, NN\u2026</td></tr><tr><td/><td>syntactic tag</td><td>NP, VP, \u2026</td></tr><tr><td>Orthographic</td><td>Is Digit?</td><td>DIG, DIGALL</td></tr><tr><td/><td>Is Capitalized?</td><td>CAP, CAPALL</td></tr><tr><td/><td>length of phrase</td><td>LNG1, LNG2#3, LNGgt3</td></tr><tr><td>Role1</td><td>Is answer candidate?</td><td>true, false</td></tr><tr><td>Role2</td><td>Is question key words?</td><td>true, false</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Attributes of the nodes" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table><tr><td>Features</td><td>Examples</td><td>Explanation</td></tr><tr><td>NE</td><td>NE#DAT_QT_DAT</td><td>ac is NE (DATE) and qtarget is DATE</td></tr><tr><td/><td>NE#PER_QW_WHO</td><td>ac is NE (PERSON) and qword is WHO</td></tr><tr><td>Ortho-</td><td>SSEQ_Q</td><td>ac is a subsequence of question</td></tr><tr><td>graphic</td><td>CAP_QT_LOC LNGlt3_QT_PER</td><td>ac is capitalized and qtarget is LOCATION the length of ac \u2264 3 and qtarget is PERSON</td></tr><tr><td>Syntactic</td><td>CD_QT_NUM</td><td>syn. tag of ac is CD and qtarget is NUM</td></tr><tr><td>Tag</td><td>NNP_QT_PER</td><td>syn. tag of ac is NNP and qtarget is PERSON</td></tr><tr><td>Triggers</td><td>TRG_HOW_DIST</td><td>ac matches the trigger words for HOW questions which</td></tr><tr><td/><td/><td>ask for distance</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Examples of the common features" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table><tr><td/><td>ME1</td><td>ME2</td><td>ME3</td></tr><tr><td>Top1</td><td>44.06</td><td>47.70</td><td>51.81</td></tr><tr><td>Top5</td><td>53.27</td><td>55.45</td><td>58.85</td></tr><tr><td>MRR</td><td>47.75</td><td>50.90</td><td>54.66</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Overall performance" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"content": "<table><tr><td/><td>size</td><td>\u03bb =1</td><td>\u03bb =0.8</td><td colspan=\"2\">coverage (*%) \u03bb =0.6 \u03bb =0.4</td><td>\u03bb =0.2</td><td>\u03bb =0</td></tr><tr><td>PSet_target</td><td>45</td><td>49.85</td><td>53.73</td><td>57.01</td><td>58.14</td><td>58.46</td><td>58.46</td></tr><tr><td>PSet_head</td><td>42</td><td>5.82</td><td>6.48</td><td>6.69</td><td>6.80</td><td>6.80</td><td>6.80</td></tr><tr><td>PSet_subject</td><td>123</td><td>24.32</td><td>44.82</td><td>49.94</td><td>51.29</td><td>51.84</td><td>51.84</td></tr><tr><td>PSet_verb</td><td>125</td><td>2.21</td><td>3.49</td><td>3.58</td><td>3.58</td><td>3.58</td><td>3.58</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Size and coverage of the pattern sets" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"content": "<table><tr><td>Combination of features</td><td>MRR</td></tr><tr><td>common features</td><td>47.75</td></tr><tr><td>common features + P_target</td><td>51.96</td></tr><tr><td>common features + P_head</td><td>49.00</td></tr><tr><td>common features + P_subject</td><td>50.22</td></tr><tr><td>common features + P_verb</td><td>47.94</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Performance on feature combination" |
|
} |
|
} |
|
} |
|
} |