{ "paper_id": "A00-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:12:26.506117Z" }, "title": "Ranking suspected answers to natural language questions using predictive annotation", "authors": [ { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "", "affiliation": {}, "email": "radev@umich@edu" }, { "first": "John", "middle": [], "last": "Prager", "suffix": "", "affiliation": {}, "email": "jprager@us.ibm.com" }, { "first": "Valerie", "middle": [], "last": "Samn", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we describe a system to rank suspected answers to natural language questions. We process both corpus and query using a new technique, predictive annotation, which augments phrases in texts with labels anticipating their being targets of certain kinds of questions. Given a natural language question, our IR system returns a set of matching passages, which we then rank using a linear function of seven predictor variables. We provide an evaluation of the techniques based on results from the TREC Q&A evaluation in which our system participated.", "pdf_parse": { "paper_id": "A00-1021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we describe a system to rank suspected answers to natural language questions. We process both corpus and query using a new technique, predictive annotation, which augments phrases in texts with labels anticipating their being targets of certain kinds of questions. Given a natural language question, our IR system returns a set of matching passages, which we then rank using a linear function of seven predictor variables. We provide an evaluation of the techniques based on results from the TREC Q&A evaluation in which our system participated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Question Answering is a task that calls for a combination of techniques from Information Retrieval and Natural Language Processing. The former has the advantage of years of development of efficient techniques for indexing and searching large collections of data, but lacks of any meaningful treatment of the semantics of the query or the texts indexed. NLP tackles the semantics, but tends to be computationally expensive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We have attempted to carve out a middle ground, whereby we use a modified IR system augmented by shallow NL parsing. Our approach was motivated by the following problem with traditional IR systems. Suppose the user asks \"Where did happen?\". If the system does no pre-processing of the query, then \"where\" will be included in the bag of words submitted to the search engine, but this will not be helpful since the target text will be unlikely to contain the word \"where\". If the word is stripped out as a stop-word, then * The work presented in this paper was performed while the first and third authors were at 1BM Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the search engine will have no idea that a location is sought. 
Our approach, called predictive annotation, is to augment the query with semantic category markers (which we call QA-Tokens) , in this case with the PLACES token, and also to label with QA-Tokens all occurrences in text that are recognized entities, (for example, places). Then traditional bag-ofwords matching proceeds successfully, and will return matching passages. The answer-selection process then looks for and ranks in these passages occurrences of phrases containing the particular QA-Token(s) from the augmented query. This classification of questions is conceptually similar to the query expansion in (Voorhees, 1994) but is expected to achieve much better performance since potentially matching phrases in text are classified in a similar and synergistic way.", "cite_spans": [ { "start": 177, "end": 187, "text": "QA-Tokens)", "ref_id": null }, { "start": 674, "end": 690, "text": "(Voorhees, 1994)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our system participated in the official TREC Q&A evaluation. For 200 questions in the evaluation set, we were asked to provide a list of 50-byte and 250-byte extracts from a 2-GB corpus. The results are shown in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Some techniques used by other participants in the TREC evaluation are paragraph indexing, followed by abductive inference (Harabagiu and Maiorano, 1999) and knowledge-representation combined with information retrieval (Breck et al., 1999) . Some earlier systems related to our work are FaqFinder (Kulyukin et al., 1998) , MURAX (Kupiec, 1993) , which uses an encyclopedia as a knowledge base from which to extract answers, and PROFILE (Radev and McKeown, 1997) which identifies named entities and noun phrases that describe them in text.", "cite_spans": [ { "start": 122, "end": 152, "text": "(Harabagiu and Maiorano, 1999)", "ref_id": "BIBREF2" }, { "start": 218, "end": 238, "text": "(Breck et al., 1999)", "ref_id": "BIBREF0" }, { "start": 296, "end": 319, "text": "(Kulyukin et al., 1998)", "ref_id": "BIBREF3" }, { "start": 328, "end": 342, "text": "(Kupiec, 1993)", "ref_id": "BIBREF4" }, { "start": 435, "end": 460, "text": "(Radev and McKeown, 1997)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our system ( Figure 1 ) consists of two pieces: an IR component (GuruQA) that which returns matching texts, and an answer selection compo-neat (AnSel/Werlect) that extracts and ranks potential answers from these texts. This paper focuses on the process of ranking potential answers selected by the IR engine, which is itself described in (Prager et al., 1999) . 
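As a rough illustration of this query-augmentation step, the minimal sketch below (hypothetical template list and function name, not the actual GuruQA code) replaces the question word with the QA-Token(s) it anticipates and keeps the remaining terms for ordinary bag-of-words matching; the corpus side is annotated symmetrically by the components described in Section 2.1.

# Illustrative sketch only; the deployed system uses roughly 200 question
# templates and the Textract/Resporator annotators described in Section 2.1.
QUESTION_TEMPLATES = [
    ('where',    'PLACE$'),                    # Where -> PLACE$
    ('how long', '@SYN(LENGTH$ DURATION$)'),   # How long -> LENGTH$ or DURATION$
    ('who',      '@SYN(PERSON$ NAME$)'),       # Who -> PERSON$ or NAME$
]

def augment_query(question):
    # Replace the leading question word(s) with the anticipated QA-Token(s).
    words = question.lower().replace('?', '').split()
    for pattern, qa_token in QUESTION_TEMPLATES:
        pattern_words = pattern.split()
        if words[:len(pattern_words)] == pattern_words:
            return [qa_token] + words[len(pattern_words):]
    return words

print(augment_query('Where did it happen?'))
# ['PLACE$', 'did', 'it', 'happen']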
", "cite_spans": [ { "start": 338, "end": 359, "text": "(Prager et al., 1999)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System description", "sec_num": "2" }, { "text": "In the context of fact-seeking questions, we made the following observations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 In documents that contain the answers, the query terms tend to occur in close proximity to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 The answers to fact-seeking questions are usually phrases: \"President Clinton\", \"in the Rocky Mountains\", and \"today\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 These phrases can be categorized by a set of a dozen or so labels ( Figure 2 ) corresponding to question types.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 78, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 The phrases can be identified in text by pattern matching techniques (without full NLP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "As a result, we defined a set of about 20 categories, each labeled with its own QA-Token, and built an IR system which deviates from the traditional model in three important aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 We process the query against a set of approximately 200 question templates which, may replace some of the query words with a set of QA-Tokens, called a SYNclass. Thus \"Where\" gets mapped to \"PLACES\", but \"How long \" goes to \"@SYN(LENGTH$, DURATIONS)\". Some templates do not cause complete replacement of the matched string. For example, the pattern \"What is the population\" gets replaced by \"NUMBERS population'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 Before indexing the text, we process it with Textract (Byrd and Ravin, 1998; Wacholder et al., 1997) , which performs lemmatization, and discovers proper names and technical terms. We added a new module (Resporator) which annotates text segments with QA-Tokens using pattern matching. Thus the text \"for 5 centuries\" matches the DURATIONS pattern \"for :CARDINAL _timeperiod\", where :CAR-DINAL is the label for cardinal numbers, and _timeperiod marks a time expression.", "cite_spans": [ { "start": 56, "end": 78, "text": "(Byrd and Ravin, 1998;", "ref_id": "BIBREF1" }, { "start": 79, "end": 102, "text": "Wacholder et al., 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "\u2022 GuruQA scores text passages instead of documents. 
We use a simple documentand collection-independent weighting scheme: QA-Tokens get a weight of 400, proper nouns get 200 and any other word -100 (stop words are removed in query processing after the pattern template matching operation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "The density of matching query tokens within a passage is contributes a score of 1 to 99 (the highest scores occur when all matched terms are consecutive).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "Predictive Annotation works best for Where, When, What, Which and How+adjective questions than for How+verb and Why questions, since the latter are typically not answered by phrases. However, we observed that \"by\" + the present participle would usually indicate the description of a procedure, so we instantiate a METHODS QA-Token for such occurrences. We have no such QA-Token for Why questions, but we do replace the word \"why\" with \"@SYN(result, cause, because)\", since the occurrence of any of these words usually betokens an explanation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "3 Answer selection So far, we have described how we retrieve relevant passages that may contain the answer to a query. The output of GuruQA is a list of 10 short passages containing altogether a large number (often more than 30 or 40) of potential answers in the form of phrases annotated with QA-Tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Information Retrieval component", "sec_num": "2.1" }, { "text": "We now describe two algorithms, AnSel and Werlect, which rank the spans returned by Gu-ruQA. AnSel and Werlect 1 use different approaches, which we describe, evaluate and compare and contrast. The output of either system consists of five text extracts per question that contain the likeliest answers to the questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer ranking", "sec_num": "3.1" }, { "text": "The role of answer selection is to decide which among the spans extracted by GuruQA are most likely to contain the precise answer to the questions. Figure 3 contains an example of the data structure passed from GuruQA to our answer selection module. 
The input consists of four items:", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Sample Input to AnSel/Werlect", "sec_num": "3.2" }, { "text": "\u2022 a query (marked with tokens in the example),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Input to AnSel/Werlect", "sec_num": "3.2" }, { "text": "\u2022 a list of 10 passages (one of which is shown above),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Input to AnSel/Werlect", "sec_num": "3.2" }, { "text": "\u2022 a list of annotated text spans within the passages, annotated with QA-Tokens, and 1 from ANswer SELect and ansWER seLECT, respectively", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Input to AnSel/Werlect", "sec_num": "3.2" }, { "text": "\u2022 the SYN-class corresponding to the type of question (e.g., \"PERSONS NAMES\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Input to AnSel/Werlect", "sec_num": "3.2" }, { "text": "The text in Figure 3 contains five spans (potential answers), of which three (\"Biography of Margaret Thatcher\", \"Hugo Young\", and \"Margaret Thatcher\") are of types included in the SYN-class for the question (PERSON NAME). The full output of GuruQA for this question includes a total of 14 potential spans (5 PERSONs and 9 NAMEs).", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Sample Input to AnSel/Werlect", "sec_num": "3.2" }, { "text": "The answer selection module has two outputs: internal (phrase) and external (text passage).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Output of AnSel/Werlect", "sec_num": "3.3" }, { "text": "The internal output is a ranked list of spans as shown in Table 1 . It represents a ranked list of the spans (potential answers) sent by GuruQA.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 65, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Internal output:", "sec_num": null }, { "text": "External output: The external output is a ranked list of 50-byte and 250-byte extracts. These extracts are selected in a way to cover the highest-ranked spans in the list of potential answers. Examples are given later in the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Internal output:", "sec_num": null }, { "text": "The external output was required for the TREC evaluation while system's internal output can be used in a variety of applications, e.g., to highlight the actual span that we believe is the answer to the question within the context of the passage in which it appears.
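As a rough sketch of how an external output can be derived from the internal one, the helper below (hypothetical name, simplified logic) cuts an extract of at most N bytes centered on a highly ranked span; the actual system additionally enforces distinct extracts and word boundaries.

def extract_window(passage, span, nbytes=50):
    # Hypothetical helper: return an extract of at most nbytes characters
    # centered on a highly ranked candidate span.
    start = passage.find(span)
    if start < 0:
        return passage[:nbytes]
    pad = max(0, (nbytes - len(span)) // 2)
    begin = max(0, start - pad)
    return passage[begin:begin + nbytes]

passage = 'THE IRON LADY; A Biography of Margaret Thatcher by Hugo Young'
print(extract_window(passage, 'Hugo Young'))
# prints a window of at most 50 bytes around 'Hugo Young'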

Question 1: Who is the author of the book, \"The Iron Lady: A Biography of Margaret Thatcher\"?

Query: @excwin(*dynamic* @weight (200 * Iron_Lady) @weight (200 Biography_of_Margaret_Thatcher) @weight(200 Margaret) @weight(100 author) @weight(100 book) @weight(100 iron) @weight(100 lady) @weight(100 :) @weight(100 biography) @weight(100 thatcher) @weight(400 @syn(PERSON$ NAME$)))

Document: LA090290-0118

Passage score: 1020.8114

Passage: THE IRON LADY; A Biography of Margaret Thatcher by Hugo Young (in a man's world, Margaret Thatcher evinces such an exclusionary attitude toward women.
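A minimal sketch of this four-part input, using hypothetical field names (the interface between GuruQA and AnSel/Werlect is not specified at this level of detail in the paper):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AnswerSelectionInput:
    # Hypothetical container for the four items listed in Section 3.2.
    query: str                              # weighted, QA-Token-augmented query
    passages: List[Tuple[str, str, float]]  # (document id, passage text, GuruQA score)
    spans: List[Tuple[str, str]]            # (candidate span, QA-Token), e.g. ('Hugo Young', 'PERSON$')
    syn_class: List[str]                    # expected answer types, e.g. ['PERSON$', 'NAME$']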

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Internal output:", "sec_num": null }, { "text": "In this section we describe the corpora used for training and evaluation as well as the questions contained in the training and evaluation question sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of corpus and question sets", "sec_num": "4" }, { "text": "For both training and evaluation, we used the TREC corpus, consisting of approximately 2 GB of articles from four news agencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": null }, { "text": "To train our system, we used 38 questions (see Figure 4 ) for which the answers were provided by NIST.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Training set TR38", "sec_num": "4.2" }, { "text": "The majority of the 200 questions (see Figure 5 ) in the evaluation set (T200) were not substan-tially different from these in TR38, although the introduction of \"why\" and \"how\" questions as well as the wording of questions in the format \"Name X\" made the task slightly harder.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Test set T200", "sec_num": "4.3" }, { "text": "Q: Why did David Koresh ask the FBI for a word processor? A: to record his revelations. Q: How tall is the Matterhorn? A: 14,776 feet 9 inches Q: How tall is the replica of the Matterhorn at Disneyland? A: 147-foot Figure 6 . Q: Why did David Koresh ask the FBI for a word processor? Q: Name the first private citizen to fly in space. Q: What is considered the costliest disaster the insurance industry has ever faced? Q: What did John Hinckley do to impress Jodie Foster? Q: How did Socrates die? Figure 6 : Sample harder questions from T200.", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 223, "text": "Figure 6", "ref_id": null }, { "start": 498, "end": 506, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Questlon/Answer (T200)", "sec_num": null }, { "text": "AnSel uses an optimization algorithm with 7 predictive variables to describe how likely a given span is to be the correct answer to a question. The variables are illustrated with examples related to the sample question number 10001 from TR38 \"Who was Johnny Mathis' high school track coach?\". 
The potential answers (extracted by GuruQA) are shown in Table 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AnSel", "sec_num": "5" }, { "text": "The seven span features described below were found to correlate with the correct answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Number: position of the span among M1 spans returned from the hit-list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Rspanno: position of the span among all spans returned within the current passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Count: number of spans of any span class retrieved within the current passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Notinq: the number of words in the span that do not appear in the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Type: the position of the span type in the list of potential span types. Example: Type (\"Lou Vasquez\") = 1, because the span type of \"Lou Vasquez\", namely \"PER-SON\" appears first in the SYN-class \"PER-SON ORG NAME ROLE\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Avgdst: the average distance in words between the beginning of the span and query words that also appear in the passage. Example: given the passage \"Tim O'Donohue, Woodbridge High School's varsity baseball coach, resigned Monday and will be replaced by assistant Johnny Ceballos, Athletic Director Dave Cowen said.\" and the span \"Tim O'Donohue\", the value of avgdst is equal to 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Sscore: passage relevance as computed by Gu-ruQA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "Number: the position of the span among all retrieved spans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "5.1" }, { "text": "The TOTAL score for a given potential answer is computed as a linear combination of the features described in the previous subsection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AnSel algorithm", "sec_num": "5.2" }, { "text": "TOTAL = ~ w~ , fi i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AnSel algorithm", "sec_num": "5.2" }, { "text": "The Mgorithm that the training component of AnSel uses to learn the weights used in the formula is shown in Figure 7 .", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 116, "text": "Figure 7", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "AnSel algorithm", "sec_num": "5.2" }, { "text": "For each tuple in training set : i. Compute features for each span 2. Compute TOTAL score for each span using current set of weights Figure 8 . 
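Written out, the scoring function is TOTAL = sum over i of w_i * f_i across the seven features above. A minimal sketch using the trained weights listed with Figure 7 (values reproduced from that caption, so treat the exact signs as approximate):

# AnSel score: TOTAL = sum over i of w_i * f_i
WEIGHTS = {
    'number':  -0.3,
    'rspanno': -0.5,
    'count':    3.0,
    'notinq':   2.0,
    'type':    15.0,
    'avgdst':  -1.0,
    'sscore':   1.5,
}

def total_score(features):
    # features: dict mapping each feature name above to its value for one span
    return sum(w * features[name] for name, w in WEIGHTS.items())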
For lack of space, we are omitting the 250-byte extracts.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 157, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "AnSel algorithm", "sec_num": "5.2" }, { "text": "The Werlect algorithm used many of the same features of phrases used by AnSel, but employed a different ranking scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Werlect", "sec_num": "6" }, { "text": "Unlike AnSel, Werlect is based on a two-step, rule-based process approximating a function with interaction between variables. In the first stage of this algorithm, we assign a rank to -7.53 -9.93 -12.57 -15.87 -19.07 -19.36 -25.22 -25.37 -25.47 -28.37 -29.57 -30,87 -37.40 -40.06 -49.80 -52.52 -56.27 -59.42 -62.77 -71.17 -211.33 -254.16 -259.67 every relevant phrase within each sentence according to how likely it is to be the target answer. Next, we generate and rank each N-byte fragment based on the sentence score given by GuruQA, measures of the fragment's relevance, and the ranks of its component phrases. Unlike AnSel, Werlect was optimized through manual trial-and-error using the TR38 questions.", "cite_spans": [ { "start": 184, "end": 345, "text": "-7.53 -9.93 -12.57 -15.87 -19.07 -19.36 -25.22 -25.37 -25.47 -28.37 -29.57 -30,87 -37.40 -40.06 -49.80 -52.52 -56.27 -59.42 -62.77 -71.17 -211.33 -254.16 -259.67", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "6.1" }, { "text": "Step One: Feature Selection The features considered in Werlect that were also used by AnSel, were Type, Avgdst and Sscore. Two additional features were also taken into account:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.2", "sec_num": null }, { "text": "NotinqW: a modified version of Notinq. As in AnSel, spans that are contained in the query are given a rank of 0. However, partial matches are weighted favorably in some cases. For example, if the question asks, \"Who was Lincoln's Secretary of State?\" a noun phrase that contains \"Secretary of State\" is more likely to be the answer than one that does not. In this example, the phrase, \"Secretary of State William Seward\" is the most likely candidate. This criterion also seems to play a role in the event that Resporator fails to identify rel-evant phrase types. For example, in the training question, \"What shape is a porpoise's tooth?\" the phrase \"spade-shaped\" is correctly selected from among all nouns and adjectives of the sentences returned by Guru-QA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.2", "sec_num": null }, { "text": "Frequency: how often the span occurs across different passages. For example, the test question, \"How many lives were lost in the Pan Am crash in Lockerbie, Scotland?\" resulted in four potential answers in the first two sentences returned by Guru-QA. Table 3 shows the frequencies of each term, and their eventual influence on the span rank. The repeated occurrence of \"270\", helps promote it to first place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.2", "sec_num": null }, { "text": "Step two: ranking the sentence spans", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.3", "sec_num": null }, { "text": "After each relevant span is assigned a rank, we rank all possible text segments of 50 (or 250) bytes from the hit list based on the sum of the phrase ranks plus additional points for other words in the segment that match the query. 
The algorithm used by Werlect is shown in Figure 9 . 2 270 7 1 (ranked highest) noted that on the 14 questions we were unable to classify with a QA-Token, Werlect (runs W50 and W250) achieved an MRAR of 3.5 to Ansel's 2.0.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 282, "text": "Figure 9", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "6.3", "sec_num": null }, { "text": "The cumulative RAR of A50 on T200 (Table 4) is 63.22 (i.e., we got 49 questions among the 198 right from our first try and 39 others within the first five answers).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.3", "sec_num": null }, { "text": "The performance of A250 on T200 is shown in Table 5 . We were able to answer 71 questions with our first answer and 38 others within our first five answers (cumulative RAR = 85.17).", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 51, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "6.3", "sec_num": null }, { "text": "To better characterize the performance of our system, we split the 198 questions into 20 groups of 10 questions. Our performance on groups of questions ranged from 0.87 to 5.50 MRAR for A50 and from 1.98 to 7.5 MRAR for A250 (Table 6 ). ", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 233, "text": "(Table 6", "ref_id": null } ], "eq_spans": [], "section": "6.3", "sec_num": null }, { "text": "In this section, we describe the performance of our system using results from our four official runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "7" }, { "text": "For each question, the performance is computed as the reciprocal value of the rank (RAR) of the highest-ranked correct answer given by the system. For example, if the system has given the correct answer in three positions: second, third, and fifth, RAR for that question is ! 2\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation scheme", "sec_num": "7.1" }, { "text": "The Mean Reciprocal Answer Rank (MRAR) is used to compute the overall performance of systems participating in the TREC evaluation: Table 6 : Performance on groups of ten questions Finally, Table 7 shows how our official runs compare to the rest of the 25 official submissions. Our performance using AnSel and 50byte output was 0.430. The performance of Werlect was 0.395. On 250 bytes, AnSel scored 0.319 and Werlect -0.280.", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 6", "ref_id": null }, { "start": 189, "end": 196, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Evaluation scheme", "sec_num": "7.1" }, { "text": "We presented a new technique, predictive annotation, for finding answers to natural language questions in text corpora. We showed that a system based on predictive annotation can deliver very good results compared to other competing systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We described a set of features that correlate with the plausibility of a given text span being a good answer to a question. We experi- Table 7 : Comparison of our system with the other participants mented with two algorithms for ranking potential answers based on these features. 
We discovered that a linear combination of these features performs better overall, while a non-linear algorithm performs better on unclassified questions.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" } ], "back_matter": [ { "text": "We would like to thank Eric Brown, Anni Coden, and Wlodek Zadrozny from IBM Research for useful comments and collaboration. We would also like to thank the organizers of the TREC Q~zA evaluation for initiating such a wonderful research initiative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Question answering from large document collections", "authors": [ { "first": "Eric", "middle": [], "last": "Breck", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "David", "middle": [], "last": "House", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Light", "suffix": "" }, { "first": "Inderjeet", "middle": [], "last": "Mani", "suffix": "" } ], "year": 1999, "venue": "Proceedings of AAAI Fall Symposium on Question Answering Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Breck, John Burger, David House, Marc Light, and Inderjeet Mani. 1999. Ques- tion answering from large document collec- tions. In Proceedings of AAAI Fall Sympo- sium on Question Answering Systems, North Falmouth, Massachusetts.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Identifying and extracting relations in text", "authors": [ { "first": "Roy", "middle": [], "last": "Byrd", "suffix": "" }, { "first": "Yael", "middle": [], "last": "Ravin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of NLDB", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Byrd and Yael Ravin. 1998. Identifying and extracting relations in text. In Proceed- ings of NLDB, Klagenfurt, Austria.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Finding answers in large collections of texts : Paragraph indexing + abductive inference", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Maiorano", "suffix": "" } ], "year": 1999, "venue": "Proceedings of AAAI Fall Symposium on Question Answering Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu and Steven J. Maiorano. 1999. Finding answers in large collections of texts : Paragraph indexing + abductive in- ference. In Proceedings of AAAI Fall Sympo- sium on Question Answering Systems, North Falmouth, Massachusetts.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Answering questions for an organization online", "authors": [ { "first": "Vladimir", "middle": [], "last": "Kulyukin", "suffix": "" }, { "first": "Kristian", "middle": [], "last": "Hammond", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Burke", "suffix": "" } ], "year": 1998, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Kulyukin, Kristian Hammond, and Robin Burke. 1998. Answering questions for an organization online. 
In Proceedings of AAAI, Madison, Wisconsin.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MURAX: A robust linguistic approach for question answering using an ondine encyclopedia", "authors": [ { "first": "Julian", "middle": [ "M" ], "last": "Kupiec", "suffix": "" } ], "year": 1993, "venue": "Proceedings, 16th Annual International A CM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian M. Kupiec. 1993. MURAX: A robust linguistic approach for question answering us- ing an ondine encyclopedia. In Proceedings, 16th Annual International A CM SIGIR Con- ference on Research and Development in In- formation Retrieval.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The use of predictive annotation for question answering in TREC8", "authors": [ { "first": "John", "middle": [], "last": "Prager", "suffix": "" }, { "first": "R", "middle": [], "last": "Dragomir", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Anni", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Valerie", "middle": [], "last": "Coden", "suffix": "" }, { "first": "", "middle": [], "last": "Samn", "suffix": "" } ], "year": 1999, "venue": "Proceedings o/TREC-8", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Prager, Dragomir R. Radev, Eric Brown, Anni Coden, and Valerie Samn. 1999. The use of predictive annotation for question an- swering in TREC8. In Proceedings o/TREC- 8, Gaithersburg, Maryland.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Building a generation knowledge source using internet-accessible newswire", "authors": [ { "first": "R", "middle": [], "last": "Dragomir", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Radev", "suffix": "" }, { "first": "", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 5th Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "221--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir R. Radev and Kathleen R. McKe- own. 1997. Building a generation knowledge source using internet-accessible newswire. In Proceedings of the 5th Conference on Applied Natural Language Processing, pages 221-228, Washington, DC, April.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Query expansion using lexical-semantic relations", "authors": [ { "first": "Ellen", "middle": [], "last": "Voorhees", "suffix": "" } ], "year": 1994, "venue": "Proceedings of A CM SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Voorhees. 1994. Query expansion using lexical-semantic relations. In Proceedings of A CM SIGIR, Dublin, Ireland.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Disambiguation of proper names in text", "authors": [ { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" }, { "first": "Yael", "middle": [], "last": "Ravin", "suffix": "" }, { "first": "Misook", "middle": [], "last": "Choi", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Applied Natural Language Processing Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Wacholder, Yael Ravin, and Misook Choi. 1997. Disambiguation of proper names in text. 
In Proceedings of the Fifth Applied Nat- ural Language Processing Conference, Wash- ington, D.C. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Figure 1: System Architecture.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Sample QA-Tokens.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Input sent from GuruQA to AnSel/Werlect.", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Sample questions from TR38.", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "Sample questions from T200. Some examples of problematic questions are shown in", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "Training algorithm used by AnSel. Training discovered the following weights: Wnurnbe r -~ --0.3; Wrspann o -~ --0.5; Wcount : 3.0; Wnotinq = 2.0; Wtypes = 15.0; Wavgdst -----1.0; W~score = 1.5 At runtime, the weights are used to rank potential answers. Each span is assigned a TO-TAL score and the top 5 distinct extracts of 50 (or 250) bytes centered around the span are output. The 50-byte extracts for question 10001 are shown in", "type_str": "figure" }, "FIGREF7": { "num": null, "uris": null, "text": "Figure 8: Fifty-byte extracts.", "type_str": "figure" }, "FIGREF8": { "num": null, "uris": null, "text": "Algorithm used by Werlect.", "type_str": "figure" }, "TABREF0": { "num": null, "html": null, "type_str": "table", "text": "", "content": "
QA-Token | Question type | Example
PLACE$ | Where | In the Rocky Mountains
COUNTRY$ | Where/What country | United Kingdom
STATE$ | Where/What state | Massachusetts
PERSON$ | Who | Albert Einstein
ROLE$ | Who | Doctor
NAME$ | Who/What/Which | The Shakespeare Festival
ORG$ | Who/What | The US Post Office
DURATION$ | How long | For 5 centuries
AGE$ | How old | 30 years old
YEAR$ | When/What year | 1999
TIME$ | When | In the afternoon
DATE$ | When/What date | July 4th, 1776
VOLUME$ | How big | 3 gallons
AREA$ | How big | 4 square inches
LENGTH$ | How big/long/high | 3 miles
WEIGHT$ | How big/heavy | 25 tons
NUMBER$ | How many | 1,234.5
METHOD$ | How | By rubbing
RATE$ | How much | 50 per cent
MONEY$ | How much | 4 million dollars
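As an illustration of how phrases like those in the example column can be labeled by pattern matching, the toy annotator below (invented patterns, not the actual Resporator rules from Section 2.1) tags durations and monetary amounts with their QA-Tokens:

import re

# Toy stand-in for Resporator; the DURATION$ pattern is in the spirit of
# 'for :CARDINAL _timeperiod', which matches text such as 'for 5 centuries'.
QA_PATTERNS = [
    (re.compile(r'for [0-9]+ (seconds|minutes|hours|days|weeks|years|centuries)'), 'DURATION$'),
    (re.compile(r'[0-9,.]+( million| billion)? dollars'), 'MONEY$'),
]

def annotate(text):
    # Return (phrase, QA-Token) pairs found in the text.
    return [(m.group(0), token)
            for pattern, token in QA_PATTERNS
            for m in pattern.finditer(text)]

print(annotate('The fort stood for 5 centuries and cost 4 million dollars.'))
# [('for 5 centuries', 'DURATION$'), ('4 million dollars', 'MONEY$')]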
" }, "TABREF2": { "num": null, "html": null, "type_str": "table", "text": "Ranked potential answers to Quest. 1.", "content": "" }, "TABREF3": { "num": null, "html": null, "type_str": "table", "text": "", "content": "
Type  Number  Rspanno  Count  Notinq  Type  Avgdst  Sscore
PERSON3362I120.02507
PERSON11621160.02507
PERSON17142I80.02257
PERSON236441II0.02257
PERSON2254II90.02257
PERSON13I251160.02505
PERSON2524II150.02256
PERSON33442l140.02256
PERSON3O1421170.02256
ORG18241260.02257
PERSON376411140.02256
PERSON387421170.02256
O.J. SimpsonNAME22623120.02507
South Lake TahoeNAME75633140.02507
Washington HighNAME106613180.02507
MorganNAME263413120.02256
Tennessee footballNAME312413150.02256
EllingtonNAME241413200.02256
assistantROLE21441480.02257
the VolunteersROLE345424140.02256
Johnny MathisPERSON446-I00III0.02507
MathisNAME1422-1003I00.02505
coachROLE1934-100440.02257
" }, "TABREF4": { "num": null, "html": null, "type_str": "table", "text": "", "content": "" }, "TABREF6": { "num": null, "html": null, "type_str": "table", "text": "Influence of frequency on span rank.", "content": "
1. Let candidate_set = all potential answers, ranked and sorted.
2. For each hit-list passage, extract all spans of 50 (or 250) bytes, on word boundaries.
3. Rank and sort all segments based on phrase ranks, matching terms, and sentence ranks.
4. For each candidate in sorted candidate_set:
   - Let highest_ranked_span = highest-ranked span containing candidate
   - Let answer_set[i++] = highest_ranked_span
   - Remove every candidate from candidate_set that is found in highest_ranked_span
   - Exit if i > 5
5. Output answer_set
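A minimal Python rendering of the selection loop in steps 4-5 (function and parameter names are illustrative; candidate and segment ranking are assumed to have been done in steps 1-3):

def select_answers(ranked_candidates, ranked_segments, max_answers=5):
    # ranked_candidates: candidate answer strings, best first (step 1)
    # ranked_segments: 50- or 250-byte text segments, best first (steps 2-3)
    answer_set = []
    remaining = list(ranked_candidates)
    while remaining and len(answer_set) < max_answers:
        candidate = remaining[0]
        # highest-ranked segment containing the current best candidate
        best = next((seg for seg in ranked_segments if candidate in seg), None)
        if best is None:
            remaining.pop(0)
            continue
        answer_set.append(best)
        # remove every candidate covered by the chosen segment
        remaining = [c for c in remaining if c not in best]
    return answer_set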
" } } } }