{
"paper_id": "A00-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:12:05.912030Z"
},
"title": "Answer Extraction",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Shannon Laboratory",
"institution": "",
"location": {
"addrLine": "180 Park Ave. Florharn Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "abney@research.att.corn"
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Shannon Laboratory",
"institution": "",
"location": {
"addrLine": "180 Park Ave. Florharn Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "mcollins@research.att.corn"
},
{
"first": "Amit",
"middle": [],
"last": "Singhal",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Shannon Laboratory",
"institution": "",
"location": {
"addrLine": "180 Park Ave. Florharn Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "singhal@research.att.corn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Information retrieval systems have typically concentrated on retrieving a set of documents which are relevant to a user's query. This paper describes a system that attempts to retrieve a much smaller section of text, namely, a direct answer to a user's question. The SMART IR system is used to extract a ranked set of passages that are relevant to the query. Entities are extracted from these passages as potential answers to the question, and ranked for plausibility according to how well their type matches the query, and according to their frequency and position in the passages. The system was evaluated at the TREC-8 question answering track: we give results and error analysis on these queries.",
"pdf_parse": {
"paper_id": "A00-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "Information retrieval systems have typically concentrated on retrieving a set of documents which are relevant to a user's query. This paper describes a system that attempts to retrieve a much smaller section of text, namely, a direct answer to a user's question. The SMART IR system is used to extract a ranked set of passages that are relevant to the query. Entities are extracted from these passages as potential answers to the question, and ranked for plausibility according to how well their type matches the query, and according to their frequency and position in the passages. The system was evaluated at the TREC-8 question answering track: we give results and error analysis on these queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we describe and evaluate a questionanswering system based on passage retrieval and entity-extraction technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has long been a concensus in the Information Retrieval (IR) community that natural language processing has little to offer for retrieval systems. Plausibly, this is creditable to the preeminence of ad hoc document retrieval as the task of interest in IR. However, there is a growing recognition of the limitations of ad hoc retrieval, both in the sense that current systems have reached the limit of achievable performance, and in the sense that users' information needs are often not well characterized by document retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In many cases, a user has a question with a specific answer, such as What city is it where the European Parliament meets? or Who discovered Pluto? In such cases, ranked answers with links to supporting documentation are much more useful than the ranked list of documents that standard retrieval engines produce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The ability to answer specific questions also provides a foundation for addressing quantitative inquiries such as How many times has the Fed raised interest rates this year? which can be interpreted as the cardinality of the set of answers to a specific question that happens to have multiple correct an-swers, like On what date did the Fed raise interest rates this year?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe a system that extracts specific answers from a document collection. The system's performance was evaluated in the question-answering track that has been introduced this year at the TREC information-retrieval conference. The major points of interest are the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Comparison of the system's performance to a system that uses the same passage retrieval component, but no natural language processing, shows that NLP provides significant performance improvements on the question-answering task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The system is designed to build on the strengths of both IR and NLP technologies. This makes for much more robustness than a pure NLP system would have, while affording much greater precision than a pure IR system would have.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The task is broken into subtasks that admit of independent development and evaluation. Passage retrieval and entity extraction are both recognized independent tasks. Other subtasks are entity classification and query classification-both being classification tasks that use features obtained by parsing--and entity ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following section, we describe the questionanswering system, and in section 3, we quantify its performance and give an error analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Question-Answering System",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The system takes a natural-language query as input and produces a list of answers ranked in order of confidence. The top five answers were submitted to the TREC evaluation. Queries are processed in two stages. In the information retrieval stage, the most promising passages of the most promising documents are retrieved. In the linguistic processing stage, potential answers are extracted from these passages and ranked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The system can be divided into five main components. The information retrieval stage consists of a single component, passage retrieval, and the linguistic processing stage circumscribes four components: entity extraction, entity classification, query classification, and entity ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "Passage Retrieval Identify relevant documents, and within relevant documents, identify the passages most likely to contain the answer to the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "Entity Extraction Extract a candidate set of possible answers from the passages. the answer should be an entity of type Person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "Entity Ranking Assign scores to entities, representing roughly belief that the entity is the correct answer. There are two components of the score. The most-significant bit is whether or not the category of the entity (as determined by entity classification) matches the category that the question is seeking (as determined by query classification). A finer-grained ranking is imposed on entities with the correct category, through the use of frequency and other information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification",
"sec_num": null
},
{
"text": "The following sections describe these five components in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification",
"sec_num": null
},
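The five components above form a single pipeline. The following is a minimal sketch of that flow, assuming hypothetical function names (retrieve_passages, extract_entities, classify_entity, classify_query, rank_entities); the paper does not publish code, so the bodies are placeholders and only the data flow is illustrated.

```python
# Hypothetical skeleton of the five-component pipeline described above.
# The component bodies are placeholders; only the data flow is illustrated.

def retrieve_passages(query):
    """IR stage: SMART-based passage retrieval (section 2.1)."""
    raise NotImplementedError

def extract_entities(passages):
    """Cass parser plus proper-name extractor (section 2.2)."""
    raise NotImplementedError

def classify_entity(entity):
    """Person / Location / Organization / Date / Quantity / ... (section 2.3)."""
    raise NotImplementedError

def classify_query(query):
    """Category of answer the question is seeking (section 2.4)."""
    raise NotImplementedError

def rank_entities(typed_entities, wanted_category, passages):
    """Category match first, then frequency/position score (section 2.5)."""
    raise NotImplementedError

def answer_question(query, top_k=5):
    """Chain the five components and return the top_k candidate answers."""
    passages = retrieve_passages(query)
    entities = extract_entities(passages)
    typed = [(e, classify_entity(e)) for e in entities]
    wanted = classify_query(query)
    return rank_entities(typed, wanted, passages)[:top_k]
```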
{
"text": "The first step is to find passages likely to contain the answer to the query. We use a modified version of the SMART information retrieval system (Buckley and Lewit, 1985; Salton, 1971) to recover a set of documents which are relevant to the question. We define passages as overlapping sets consisting of a sentence and its two immediate neighbors. (Passages are in one-one correspondence with with sentences, and adjacent passages have two sentences in common.) The score for passage i was calculated as",
"cite_spans": [
{
"start": 146,
"end": 171,
"text": "(Buckley and Lewit, 1985;",
"ref_id": "BIBREF1"
},
{
"start": 172,
"end": 185,
"text": "Salton, 1971)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 \u00bcSi-z + \u00bdSi + ~'S,+1",
"eq_num": "(1)"
}
],
"section": "Passage Retrieval",
"sec_num": "2.1"
},
{
"text": "where Sj, the score for sentence j, is the sum of IDF weights of non-stop terms that it shares with the query, plus an additional bonus for pairs of words (bigrams) that the sentence and query have in common.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "2.1"
},
{
"text": "The top 50 passages are passed on as input to linguistic processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "2.1"
},
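As a concrete reading of equation (1), the sketch below recomputes sentence and passage scores; the IDF table, stop list, and the size of the bigram bonus are illustrative assumptions rather than the authors' actual settings.

```python
import math

# Illustrative re-implementation of the passage scoring described above.
# STOPWORDS and BIGRAM_BONUS are assumed values, not taken from the paper.

STOPWORDS = {"the", "a", "an", "of", "in", "is", "it", "what", "who", "where"}
BIGRAM_BONUS = 1.0

def idf(term, doc_freq, n_docs):
    return math.log(n_docs / (1 + doc_freq.get(term, 0)))

def sentence_score(sentence_terms, query_terms, doc_freq, n_docs):
    """S_j: IDF weights of non-stop terms shared with the query, plus a bigram bonus."""
    shared = (set(sentence_terms) & set(query_terms)) - STOPWORDS
    score = sum(idf(t, doc_freq, n_docs) for t in shared)
    shared_bigrams = set(zip(sentence_terms, sentence_terms[1:])) & \
                     set(zip(query_terms, query_terms[1:]))
    return score + BIGRAM_BONUS * len(shared_bigrams)

def passage_scores(sentence_scores):
    """Equation (1): passage i scores 1/4 S_{i-1} + 1/2 S_i + 1/4 S_{i+1}."""
    padded = [0.0] + list(sentence_scores) + [0.0]
    return [0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
            for i in range(1, len(padded) - 1)]
```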
{
"text": "Entity extraction is done using the Cass partial parser (Abney, 1996) . From the Cass output, we take dates, durations, linear measures, and quantities.",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "(Abney, 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Extraction",
"sec_num": "2.2"
},
{
"text": "In addition, we constructed specialized code for extracting proper names. The proper-name extractor essentially classifies capitalized words as intrinsically capitalized or not, where the alternatives to intrinsic capitalization are sentence-initial capitalization or capitalization in titles and headings. The extractor uses various heuristics, including whether the words under consideration appear unambiguously capitalized elsewhere in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Extraction",
"sec_num": "2.2"
},
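A toy version of one of the heuristics mentioned above: a sentence-initial capitalized word is kept only if the same form also occurs capitalized in a non-initial, unambiguous position elsewhere in the document. This is an illustrative simplification, not the authors' extractor.

```python
def occurs_capitalized_mid_sentence(word, sentences):
    """True if `word` appears capitalized somewhere other than sentence-initially."""
    return any(tok == word
               for tokens in sentences
               for pos, tok in enumerate(tokens) if pos > 0)

def proper_name_candidates(sentences):
    """sentences: list of token lists for one document."""
    names = set()
    for tokens in sentences:
        for pos, tok in enumerate(tokens):
            if not tok[:1].isupper():
                continue
            # Non-initial capitalization is taken as intrinsic; sentence-initial
            # capitalization counts only if confirmed elsewhere in the document.
            if pos > 0 or occurs_capitalized_mid_sentence(tok, sentences):
                names.add(tok)
    return names
```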
{
"text": "The following types of entities were extracted as potential answers to queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Classification",
"sec_num": "2.3"
},
{
"text": "Proper names were classified into these categories using a classifier built using the method described in (Collins and Singer, 1999) . 1 This is the only place where entity classification was actually done as a separate step from entity extraction.",
"cite_spans": [
{
"start": 106,
"end": 132,
"text": "(Collins and Singer, 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Person, Location, Organization, Other",
"sec_num": null
},
{
"text": "Dates Four-digit numbers starting with 1... or 20.. were taken to be years. Cass was used to extract more complex date expressions (such as Saturday, January 1st, 2000) . We should note that this list does not exhaust the space of useful categories. Monetary amounts (e.g., ~The classifier makes a three way distinction between Person, Location and Organization; names where the classifier makes no decision were classified as Other Named E~tity. $25 million) were added to the system shortly after the Trec run, but other gaps in coverage remain. We discuss this further in section 3.",
"cite_spans": [
{
"start": 140,
"end": 168,
"text": "Saturday, January 1st, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Person, Location, Organization, Other",
"sec_num": null
},
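A minimal sketch of the year heuristic described above (four-digit numbers beginning with 1 or 20 treated as years); more complex date expressions are handled by Cass in the actual system and are out of scope here.

```python
import re

# Four-digit numbers starting with 1 or 20 are treated as years, as described above.
YEAR_RE = re.compile(r"\b(1\d{3}|20\d{2})\b")

def extract_years(text):
    return YEAR_RE.findall(text)

# extract_years("The treaty was signed in 1998 and renewed in 2003.")
# -> ['1998', '2003']
```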
{
"text": "This step involves processing the query to identify the category of answer the user is seeking. We parse the query, then use the following rules to determine the category of the desired answer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Classification",
"sec_num": "2.4"
},
{
"text": "\u2022 Who, Whom -+ Person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Classification",
"sec_num": "2.4"
},
{
"text": "\u2022 Where, Whence, Whither--+ Location.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Classification",
"sec_num": "2.4"
},
{
"text": "\u2022 When -+ Date.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Classification",
"sec_num": "2.4"
},
{
"text": "\u2022 How few, great, little, many, much -+ Quemtity. We also extract the head word of the How expression (e.g., stooges in how many stooges) for later comparison to the head word of candidate answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Classification",
"sec_num": "2.4"
},
{
"text": "\u2022 How long --+ Duration or Linear Measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Classification",
"sec_num": "2.4"
},
{
"text": "Measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
{
"text": "\u2022 The wh-words Which or What typically appear with a head noun that describes the category of entity involved. These questions fall into two formats: What X where X is the noun involved, and What is the ... X. Here are a couple of examples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
{
"text": "What company is the largest Japanese ship builder?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
{
"text": "What is the largest city in Germany?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
{
"text": "For these queries the head noun (e.g., company or city) is extracted, and a lexicon mapping nouns to categories is used to identify the category of the query. The lexicon was partly hand-built (including some common cases such as number --+ Quantity or year --~ Date). A large list of nouns indicating Person, Location or Organization categories was automatically taken from the contextual (appositive) cues learned in the named entity classifier described in (Collins and Singer, 1999 ).",
"cite_spans": [
{
"start": 460,
"end": 485,
"text": "(Collins and Singer, 1999",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
{
"text": "\u2022 In queries containing no wh-word (e.g., Name the largest city in Germany), the first noun phrase that is an immediate constituent of the matrix sentence is extracted, and its head is used to determine query category, as for What X questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
{
"text": "\u2022 Otherwise, the category is the wildcard Any.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How tall, wide, high, big, far --+ Linear",
"sec_num": null
},
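A schematic rendering of the wh-word rules listed above. The real system parses the query and consults a noun-to-category lexicon built partly by hand and partly from the Collins and Singer (1999) cues; the tiny lexicon and the flat string matching below are illustrative stand-ins, not the authors' implementation.

```python
# Schematic query classifier; NOUN_LEXICON entries are assumed examples only.
NOUN_LEXICON = {"company": "Organization", "city": "Location",
                "year": "Date", "number": "Quantity"}

def classify_query(query):
    q = query.lower().rstrip("?").split()
    words = set(q)
    if words & {"who", "whom"}:
        return "Person"
    if words & {"where", "whence", "whither"}:
        return "Location"
    if "when" in words:
        return "Date"
    if "how" in words:
        i = q.index("how")
        nxt = q[i + 1] if i + 1 < len(q) else ""
        if nxt in {"few", "great", "little", "many", "much"}:
            return "Quantity"
        if nxt == "long":
            return "Duration or Linear Measure"
        if nxt in {"tall", "wide", "high", "big", "far"}:
            return "Linear Measure"
    for w in q:                       # crude stand-in for head-noun extraction
        if w in NOUN_LEXICON:
            return NOUN_LEXICON[w]
    return "Any"

# classify_query("What is the largest city in Germany?")  -> "Location"
# classify_query("Who discovered Pluto?")                 -> "Person"
```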
{
"text": "Entity scores have two components. The first, mostsignificant, component is whether or not the entity's category matches the query's category. (If the query category is Any, all entities match it.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Ranking",
"sec_num": "2.5"
},
{
"text": "In most cases, the matching is boolean: either an entity has the correct category or not. However, there are a couple of special cases where finer distinctions are made. If a question is of the Date type, and the query contains one of the words day or month, then \"full\" dates are ranked above years. Conversely, if the query contains the word year, then years are ranked above full dates. In How many X questions (where X is a noun), quantified phrases whose head noun is also X are ranked above bare numbers or other quantified phrases: for example, in the query How many lives were lost in the Lockerbie air crash, entities such as 270 lives or almost 300 lives would be ranked above entities such as 200 pumpkins or 150. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Ranking",
"sec_num": "2.5"
},
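The category match with its two special cases (day/month versus year preferences for Date questions, and head-noun agreement for How many X questions) can be pictured as a small integer score; the data-structure fields and tie-break values below are assumptions for illustration, not taken from the paper.

```python
def category_match_score(entity, query):
    """entity: dict with 'type' plus optional 'month' (for dates) or 'head_noun'
    (for quantities); query: dict with 'type', 'words', optional 'head_noun'.
    Larger return values indicate a better category match."""
    if query["type"] == "Any":
        return 1                                  # wildcard: every entity matches
    if entity["type"] != query["type"]:
        return 0
    score = 1
    if query["type"] == "Date":
        full_date = entity.get("month") is not None
        if {"day", "month"} & set(query["words"]):
            score += int(full_date)               # full dates outrank bare years
        elif "year" in query["words"]:
            score += int(not full_date)           # bare years outrank full dates
    if query["type"] == "Quantity" and query.get("head_noun"):
        # "270 lives" outranks "150" or "200 pumpkins" for "How many lives ..."
        score += int(entity.get("head_noun") == query["head_noun"])
    return score
```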
{
"text": "The second component of the entity score is based on the frequency and position of occurrences of a given entity within the retrieved passages. Each occurrence of an entity in a top-ranked passage counts 10 points, and each occurrence of an entity in any other passage counts 1 point. (\"Top-ranked passage\" means the passage or passages that received the maximal score from the passage retrieval component.) This score component is used as a secondary sort key, to impose a ranking on entities that are not distinguished by the first score component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Ranking",
"sec_num": "2.5"
},
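The frequency/position component translates directly into a point count; a rough sketch, assuming entities arrive already normalized (see the next paragraph) and passages arrive as (retrieval score, entity list) pairs:

```python
from collections import defaultdict

def frequency_scores(passages):
    """passages: list of (retrieval_score, [entity, ...]) pairs.
    10 points per occurrence in a top-ranked passage, 1 point elsewhere."""
    top = max(score for score, _ in passages)
    points = defaultdict(int)
    for score, entities in passages:
        weight = 10 if score == top else 1
        for ent in entities:
            points[ent] += weight
    return points

def rank_candidates(candidates, category_score, freq_score):
    """Category match is the primary key; the point count breaks ties."""
    return sorted(candidates,
                  key=lambda e: (category_score[e], freq_score[e]),
                  reverse=True)
```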
{
"text": "In counting occurrences of entities, it is necessary to decide whether or not two occurrences are tokens of the same entity or different entities. To this end, we do some normalization of entities. Dates are mapped to the format year-month-day: that is, last Tuesday, November 9, 1999 and 11/9/99 are both mapped to the normal form 1999 Nov 9 before frequencies are counted. Person names axe aliased based on the final word they contain. For example, Jackson and Michael Jackson are both mapped to the normal form Jackson. a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Ranking",
"sec_num": "2.5"
},
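A toy normalizer in the spirit of the description above: two date formats are mapped to a year-month-day normal form, and person names are aliased by their final word. The formats, month table, and century cutoff are assumptions; the authors' normalizer is not published.

```python
import re

MONTHS = {"january": "Jan", "february": "Feb", "march": "Mar", "april": "Apr",
          "may": "May", "june": "Jun", "july": "Jul", "august": "Aug",
          "september": "Sep", "october": "Oct", "november": "Nov", "december": "Dec"}

def normalize_date(text):
    m = re.search(r"(\w+)\s+(\d{1,2}),\s*(\d{4})", text)      # "November 9, 1999"
    if m and m.group(1).lower() in MONTHS:
        return f"{m.group(3)} {MONTHS[m.group(1).lower()]} {int(m.group(2))}"
    m = re.search(r"(\d{1,2})/(\d{1,2})/(\d{2})\b", text)      # "11/9/99"
    if m:
        year = (1900 if int(m.group(3)) > 20 else 2000) + int(m.group(3))  # assumed cutoff
        month = list(MONTHS.values())[int(m.group(1)) - 1]
        return f"{year} {month} {int(m.group(2))}"
    return text

def alias_person(name):
    return name.split()[-1]          # "Michael Jackson" -> "Jackson"

# normalize_date("last Tuesday, November 9, 1999") == normalize_date("11/9/99") == "1999 Nov 9"
```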
{
"text": "The system was evaluated in the TREC-8 questionanswering track. TREC provided 198 questions as a blind test set: systems were required to provide five potential answers for each question, ranked in order of plausibility. The output from each system was then scored by hand by evaluators at NIST, each answer being marked as either correct or incorrect. The system's score on a particular question is a function of whether it got a correct answer in the five ranked answers, with higher scores for the answer appearing higher in the ranking. The system receives a score of 1, 1/2, 1/3, 1/4, 1/5, or 0, re-2perhaps less desirably, people would not be recognized as a synonym of lives in this example: 200 people would be indistinguishable from 200 pumpkins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on the TREC-8 Evaluation",
"sec_num": "3.1"
},
{
"text": "3This does introduce occasional errors, when two people with the same last name appear in retrieved passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on the TREC-8 Evaluation",
"sec_num": "3.1"
},
{
"text": "Answer Figure 1 : Results on the TREC-8 Evaluation spectively, according as the correct answer is ranked 1st, 2nd, 3rd, 4th, 5th, or lower in the system output. The final score for a system is calculated as its mean score on the 198 questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 15,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mean",
"sec_num": null
},
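The scoring rule described above is what is now commonly called mean reciprocal rank over the top five answers; a direct implementation for reference:

```python
def question_score(ranked_answers, is_correct, cutoff=5):
    """1/r for the highest-ranked correct answer within the cutoff, else 0."""
    for rank, answer in enumerate(ranked_answers[:cutoff], start=1):
        if is_correct(answer):
            return 1.0 / rank
    return 0.0

def system_score(results):
    """results: list of (ranked_answers, is_correct) pairs, one per question."""
    return sum(question_score(a, ok) for a, ok in results) / len(results)
```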
{
"text": "The TREC evaluation considered two questionanswering scenarios: one where answers were limited to be less than 250 bytes in length, the other where the limit was 50 bytes. The output from the passage retrieval component (section 2.1), with some trimming of passages to ensure they were less than 250 bytes, was submitted to the 250 byte scenario. The output of the full entity-based system was submitted to the 50 byte track. For comparison, we also submitted the output of a 50-byte system based on IR techniques alone. In this system single-sentence passages were retrieved as potential answers, their score being calculated using conventional IR methods. Some trimming of sentences so that they were less than 50 bytes in length was performed. Figure 1 shows results on the TREC-8 evaluation. The 250-byte passage-based system found a correct answer somewhere in the top five answers on 68% of the questions, with a final score of 0.545. The 50byte passage-based system found a correct answer on 38.9% of all questions, with an average score of 0.261. The reduction in accuracy when moving from the 250-byte limit to the 50-byte limit is expected, because much higher precision is required; the 50byte limit allows much less extraneous material to be included with the answer. The benefit of the including less extraneous material is that the user can interpret the output with much less effort.",
"cite_spans": [],
"ref_spans": [
{
"start": 747,
"end": 755,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mean",
"sec_num": null
},
{
"text": "Our entity-based system found a correct answer in the top five answers on 46% of the questions, with a final score of 0.356. The performance is not as good as that of the 250-byte passage-based system. But when less extraneous material is permitted, the entity-based system outperforms the passage-based approach. The accuracy of the entity-based system is significantly better than that of the 50-byte passage-based system, and it returns virtually no extraneous material, as reflected in the average answer length of only 10.5 bytes. The implication is that NLP techniques become increasingly useful when short answers are required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mean",
"sec_num": null
},
{
"text": "System 3.2.1 Ranking of Answers As a first point, we looked at the performance of the entity-based system, considering the queries where the correct answer was found somewhere in the top 5 answers (46% of the 198 questions). We found that on these questions, the percentage of answers ranked 1, 2, 3, 4, and 5 was 66%, 14%, 11%, 4%, and 4% respectively. This distribution is by no means uniform; it is clear that when the answer is somewhere in the top five, it is very likely to be ranked 1st or 2nd. The system's performance is quite bimodah it either completely fails to get the answer, or else recovers it with a high ranking. Figure 2 shows the distribution of question types in the TREC-8 test set (\"Percentage of Q's\"), and the performance of the entity-based system by question type (\"System Accuracy\"). We categorized the questions by hand, using the eight categories described in section 2.3, plus two categories that essentially represent types that were not handled by the system at the time of the TREC competition:",
"cite_spans": [],
"ref_spans": [
{
"start": 631,
"end": 639,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis of the Entity-Based",
"sec_num": "3.2"
},
{
"text": "Monetary Amount and Miscellaneous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy on Different Categories",
"sec_num": "3.2.2"
},
{
"text": "\"System Accuracy\" means the percentage of questions for which the correct answer was in the top five returned by the system. There is a sharp division in the performance on different question types. The categories Person, Location, Date and Quantity are handled fairly well, with the correct answer appearing in the top five 60% of the time. These four categories make up 67% of all questions. In contrast, the other question types, accounting for 33% of the questions, are handled with only 15% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy on Different Categories",
"sec_num": "3.2.2"
},
{
"text": "Unsurprisingly, the Miscellaneous and Other Named Entity categories are problematic; unfortunately, they are also rather frequent. Figure 3 shows some examples of these queries. They include a large tail of questions seeking other entity types (mountain ranges, growth rates, films, etc.) and questions whose answer is not even an entity (e.g., \"Why did David Koresh ask the FBI for a word processor?\") For reference, figure 4 gives an impression of the sorts of questions that the system does well on (correct answer in top five).",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 418,
"end": 426,
"text": "figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy on Different Categories",
"sec_num": "3.2.2"
},
{
"text": "Finally, we performed an analysis to gauge which components represent performance bottlenecks in the current system. We examined system logs for a 50-question sample, and made a judgment of what caused the error, when there was an error. Figure 5 gives the breakdown. Each question was assigned to exactly one line of the table.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 5",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
},
{
"text": "The largest body of errors, accounting for 18% of the questions, are those that are due to unhandled Figure 2 : Performance of the entity-based system on different question types. \"System Accuracy\" means percent of questions for which the correct answer was in the top five returned by the system. \"Good\" types are in the upper block, \"Bad\" types are in the lower block. five, but not at rank one, are almost all due to failures of entity ranking) Various factors contributing to misrankings are the heavy weighting assigned to answers in the top-ranked passage, the failure to adjust frequencies by \"complexity\" (e.g., it is significant if 22.5 million occurs several times, but not if 3 occurs several times), and the failure of the system to consider the linguistic context in which entities appear. types, of which half are monetary amounts. (Questions with non-entity answers account for another 4%.) Another large block (16%) is due to the passage retrieval component: the correct answer was not present in the retrieved passages. The linguistic components together account for the remaining 14% of error, spread evenly among them. The cases in which the correct answer is in the top 4 Conclusions and Future Work",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
},
{
"text": "We have described a system that handles arbitrary questions, producing a candidate list of answers ranked by their plausibility. Evaluation on the TREC question-answering track showed that the correct answer to queries appeared in the top five answers 46% of the time, with a mean score of 0.356. The average length of answers produced by the system was 10.5 bytes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
},
{
"text": "4The sole exception was a query misclassification caused by a parse failure---miraculously, the correct answer made it to rank five despite being of the \"wrong\" type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
},
{
"text": "There are several possible areas for future work. There may be potential for improved performance through more sophisticated use of NLP techniques. In particular, the syntactic context in which a particular entity appears may provide important information, but it is not currently used by the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
},
{
"text": "Another area of future work is to extend the entity-extraction component of the system to handle arbitrary types (mountain ranges, films etc.). The error analysis in section 3.2.2 showed that these question types cause particular difficulties for the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
},
{
"text": "The system is largely hand-built. It is likely that as more features are added a trainable statistical or machine learning approach to the problem will become increasingly desirable. This entails developing a training set of question-answer pairs, raising the question of how a relatively large corpus of questions can be gathered and annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Errors by Component",
"sec_num": "3.2.3"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Partial parsing via finitestate cascades",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1996,
"venue": "J. Natural Language Engineering",
"volume": "2",
"issue": "4",
"pages": "337--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 1996. Partial parsing via finite- state cascades. J. Natural Language Engineering, 2(4):337-344, December.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Optimization of inverted vector searches",
"authors": [
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "A",
"middle": [
"F"
],
"last": "Lewit",
"suffix": ""
}
],
"year": 1985,
"venue": "Proe. Eighth International ACM SIGIR Conference",
"volume": "",
"issue": "",
"pages": "97--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Buckley and A.F. Lewit. 1985. Optimization of inverted vector searches. In Proe. Eighth Interna- tional ACM SIGIR Conference, pages 97-110.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised models for named entity classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Yoram Singer. 1999. Unsuper- vised models for named entity classification. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Smart Retrieval System -Experiments in Automatic Document Processing",
"authors": [],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton, editor. 1971. The Smart Retrieval Sys- tem -Experiments in Automatic Document Pro- cessing. Prentice-Hall, Inc., Englewood Cliffs, NJ.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Breakdown of questions by error type, in particular, by component responsible. Numbers are percent of questions in a 50-question sample.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Examples of \"Other Named Entity\" and Miscellaneous questions.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF3": {
"text": "Question I Rank I Output from SystemWho is the author of the book, The Iron Lady: A Biography of 2 Margaret Thatcher? What is the name of the managing director of Apricot Computer? i What country is the biggest producer of tungsten? Who was the first Taiwanese President?",
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td>Hugo Young</td></tr><tr><td/><td/><td/><td/><td>Dr Peter Horne</td></tr><tr><td/><td/><td/><td/><td>China</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Taiwanese President Li</td></tr><tr><td/><td/><td/><td/><td>Teng hui</td></tr><tr><td colspan=\"2\">When did Nixon visit China?</td><td/><td/><td>1972</td></tr><tr><td colspan=\"3\">How many calories are there in a Big Mac?</td><td>4</td><td>562 calories</td></tr><tr><td colspan=\"4\">What is the acronym for the rating system for air conditioner effi-1</td><td>EER</td></tr><tr><td>ciency?</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">Figure 4: A few TREC questions answered correctly by the system.</td></tr><tr><td>Type</td><td>Percent</td><td>System</td><td colspan=\"2\">Errors</td></tr><tr><td/><td>of Q's</td><td>Accuracy</td><td colspan=\"2\">Passage retrieval failed</td><td>16%</td></tr><tr><td>Person</td><td>28</td><td>62.5</td><td colspan=\"2\">Answer is not an entity</td><td>4%</td></tr><tr><td>Location Date Quantity</td><td>18.5 11 9.5</td><td>67.6 45.5 52.7</td><td colspan=\"2\">Answer of unhandled type: money Answer of unhandled type: misc Entity extraction failed</td><td>10% 8% 2%</td></tr><tr><td>TOTAL Other Named Ent Miscellaneous Linear Measure</td><td>67 14.5 8.5 3.5</td><td>60 31 5.9 0</td><td colspan=\"2\">Entity classification failed Query classification failed Entity ranking failed Successes</td><td>4% 4% 4%</td></tr><tr><td>Monetary Amt</td><td>3</td><td>0</td><td colspan=\"2\">Answer at Rank 2-5</td><td>I 16%</td></tr><tr><td>Organization</td><td>2</td><td>0</td><td>Answer at Rank 1</td><td/><td>I 32%</td></tr><tr><td>Duration</td><td>1.5</td><td>0</td><td>TOTAL</td><td/></tr><tr><td>TOTAL</td><td>33</td><td>15</td><td/><td/></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}