{ "paper_id": "A00-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:12:19.233891Z" }, "title": "A Question Answering System Supported by Information Extraction*", "authors": [ { "first": "Rohini", "middle": [], "last": "Srihari", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cymfony Inc", "location": { "addrLine": "5500 Main Street Williamsville", "postCode": "14221", "region": "NY" } }, "email": "rohini@cymfony.com" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cymfony Inc", "location": { "addrLine": "5500 Main Street Williamsville", "postCode": "NY14221" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper discusses an information extraction (IE) system, Textract, in natural language (NL) question answering (QA) and examines the role of IE in QA application. It shows: (i) Named Entity tagging is an important component for QA, (ii) an NL shallow parser provides a structural basis for questions, and (iii) high-level domain independent IE can result in a QA breakthrough.", "pdf_parse": { "paper_id": "A00-1023", "_pdf_hash": "", "abstract": [ { "text": "This paper discusses an information extraction (IE) system, Textract, in natural language (NL) question answering (QA) and examines the role of IE in QA application. It shows: (i) Named Entity tagging is an important component for QA, (ii) an NL shallow parser provides a structural basis for questions, and (iii) high-level domain independent IE can result in a QA breakthrough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the explosion of information in Internet, Natural language QA is recognized as a capability with great potential. Traditionally, QA has attracted many AI researchers, but most QA systems developed are toy systems or games confined to lab and a very restricted domain. More recently, Text Retrieval Conference (TREC-8) designed a QA track to stimulate the research for real world application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Due to little linguistic support from text analysis, conventional IR systems or search engines do not really perform the task of information retrieval; they in fact aim at only document retrieval. The following quote from the QA Track Specifications (www.research.att.com/ -singhal/qa-track-spec.txt) in the TREC community illustrates this point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Current information retrieval systems allow us to locate documents that might contain the pertinent information, but most of them leave it to the user to extract the useful information from a ranked list. This leaves the (often unwilling) user with a relatively large amount of text to consume. There is an urgent need for tools that would reduce the amount of text one might have to read in order to obtain the desired information. This track aims at doing exactly that for a special (and popular) class of information seeking behavior: QUESTION ANSWERING. People have questions and they need answers, not documents. Automatic question answering will definitely be a significant advance in the state-of-art information retrieval technology. Kupiec (1993) presented a QA system MURAX using an on-line encyclopedia. 
This system used the technology of robust shallow parsing but suffered from the lack of basic information extraction support. In fact, the most siginifcant IE advance, namely the NE (Named Entity) technology, occured after Kupiec (1993) , thanks to the MUC program (MUC-7 1998). High-level IE technology beyond NE has not been in the stage of possible application until recently.", "cite_spans": [ { "start": 485, "end": 498, "text": "(and popular)", "ref_id": null }, { "start": 742, "end": 755, "text": "Kupiec (1993)", "ref_id": "BIBREF4" }, { "start": 1038, "end": 1051, "text": "Kupiec (1993)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "AskJeeves launched a QA portal (www.askjeeves.com). It is equipped with a fairly sophisticated natural language question parser, but it does not provide direct answers to the asked questions. Instead, it directs the user to the relevant web pages, just as the traditional search engine does. In this sense, AskJeeves has only done half of the job for QA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We believe that QA is an ideal test bed for demonstrating the power of IE. There is a natural co-operation between IE and IR; we regard QA as one major intelligence which IE can offer IR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "An important question then is, what type of IE can support IR in QA and how well does it support it? This forms the major topic of this paper. We structure the remaining part of the paper as follows. In Section 1, we first give an overview of the underlying IE technology which our organization has been developing. Section 2 discusses the QA system. Section 3 describes the limitation of the current system. Finally, in Section 4, we propose a more sophisticated QA system supported by three levels of IE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The last decade has seen great advance and interest in the area of IE. In the US, the DARPA sponsored Tipster Text Program [Grishman 1997 ] and the Message Understanding Conferences (MUC) [MUC-7 1998 ] have been the driving force for developing this technology. In fact, the MUC specifications for various IE tasks have become de facto standards in the IE research community. It is therefore necessary to present our IE effort in the context of the MUC program. MUC divides IE into distinct tasks, namely, NE (Named Entity), TE (Template Element), TR (Template Relation), CO (Co-reference), and ST (Scenario Templates) [Chinchor & Marsh 1998 ]. Our proposal for three levels of IE is modelled after the MUC standards using MUC-style representation. However, we have modified the MUC IE task definitions in order to make them more useful and more practical. More precisely, we propose a hierarchical, 3-level architecture for developing a kernel IE system which is domain-independent throughout.", "cite_spans": [ { "start": 123, "end": 137, "text": "[Grishman 1997", "ref_id": "BIBREF2" }, { "start": 188, "end": 199, "text": "[MUC-7 1998", "ref_id": null }, { "start": 619, "end": 641, "text": "[Chinchor & Marsh 1998", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "The core of this system is a state-of-the-art NE tagger ], named Textract 1.0. 
The Textract NE tagger has achieved speed and accuracy comparable to that of the few deployed NE systems, such as NetOwl [Krupka & Hausman 1998 ] and Nymble [Bikel et al 1997] .", "cite_spans": [ { "start": 200, "end": 222, "text": "[Krupka & Hausman 1998", "ref_id": "BIBREF3" }, { "start": 236, "end": 254, "text": "[Bikel et al 1997]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "It is to be noted that in our definition of NE, we significantly expanded the type of information to be extracted. In addition to all the MUC defined NE types (person, organization, location, time, date, money and percent), the following types/sub-types of information are also identified by the TextractNE module: These new sub-types provide a better foundation for defining multiple relationships between the identified entities and for supporting question answering functionality. For example, the key to a question processor is to identify the asking point (who, what, when, where, etc.) . In many cases, the asking point corresponds to an NE beyond the MUC definition, e.g. the how+adjective questions: how long (duration or length), how far (length), how often (frequency), how old (age), etc.", "cite_spans": [ { "start": 561, "end": 591, "text": "(who, what, when, where, etc.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "\u2022 duration,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "Level-2 IE, or CE (Correlated Entity), is concerned with extracting pre-defined multiple relationships between the entities. Consider the person entity as an example; the TextractCE prototype is capable of extracting the key relationships such as age, gender, affiliation, position, birthtime, birth__place, spouse, parents, children, where.from, address, phone, fax, email, descriptors. As seen, the information in the CE represents a mini-CV or profile of the entity. In general, the CE template integrates and greatly enriches the information contained in MUC TE and TR.", "cite_spans": [ { "start": 252, "end": 387, "text": "gender, affiliation, position, birthtime, birth__place, spouse, parents, children, where.from, address, phone, fax, email, descriptors.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "The final goal of our IE effort is to further extract open-ended general events (GE, or level 3 IE) for information like who did what (to whom) when (or how often) and where. By general events, we refer to argument structures centering around verb notions plus the associated information of time/frequency and location. 
We show an example of our defined GE extracted from the text below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "Julian Hill, a research chemist whose accidental discovery of a tough, taffylike compound revolutionized everyday life after it proved its worth in warfare and courtship, died on Sunday in Hockessin, Del.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "[1] := PREDICATE: die ARGUMENTI:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "Julian Hill TIME:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "Sunday LOCATION:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "Hockessin, Del Figure 1 is the overall system architecture for the IE system Textract that our organization has been developing. The core of the system consists of three kernel IE modules and six linguistic modules. The multi-level linguistic modules serve as an underlying support system for different levels of IE. The IE results are stored in a database which is the basis for IE-related applications like QA, BR (Browsing, threading and visualization) and AS (Automatic Summarization). The approach to IE taken here, consists of a unique blend of machine learning and FST (finite state transducer) rule-based system [Roche & Schabes 1997] . By combining machine learning with an FST rule-based system, we are able to exploit the best of both paradigms while overcoming their respective weaknesses , Li & Srihari 2000 , where (LOCATION), how far (LENGTH). Therefore, the NE tagger has been proven to be very helpful.", "cite_spans": [ { "start": 620, "end": 642, "text": "[Roche & Schabes 1997]", "ref_id": "BIBREF6" }, { "start": 801, "end": 820, "text": ", Li & Srihari 2000", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 15, "end": 23, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "I I I I I I F L-- ----~ .... . L ------. ------| ....", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Textract IE", "sec_num": "1" }, { "text": "Of course, the NE of the targeted type is only necessary but not complete in answering such questions because NE by nature only extracts isolated individual entities from the text. Nevertheless, using even crude methods like \"the nearest NE to the queried key words\" or \"the NE and its related key words within the same line (or same paragraph, etc.)\", in most cases, the QA system was able to extract text portions which contained answers in the top five list. Figure 2 illustrates the system design of TextractQA Prototype.", "cite_spans": [], "ref_spans": [ { "start": 462, "end": 470, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "There are two components for the QA prototype: Question Processor and Text Processor. The Text Matcher module links the two processing results and tries to find answers to the processed question. Matching is based on keywords, plus the NE type and their common location within a same sentence. 
The following is an example where the asking point does not correspond to any type of NE in our definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "[3] Why did David Koresh ask the FBI for a word processor ?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "The system then maps it to the following question template :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "[4] asking_point:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "key_word:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "REASON { ask, David, Koresh, FBI, word, processor }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "The question processor scans the question to search for question words (wh-words) and maps them into corresponding NE types/sub-types or pre-defined notions like REASON.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "We adopt two sets of pattern matching rules for this purpose: (i) structure based pattern matching rules; (ii) simple key word based pattern matching rules (regarded as default rules).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "It is fairly easy to exhaust the second set of rules as interrogative question words/phrases form a closed set. In comparison, the development of the first set of rules are continuously being fine-tuned and expanded. This strategy of using two set of rules leads to the robustness of the question processor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "The first set of rules are based on shallow parsing results of the questions, using Cymfony FST based Shallow Parser. This parser identifies basic syntactic constructions like BaseNP (Basic Noun Phrase), BasePP (Basic Prepositional Phrase) and VG (Verb Group).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "The following is a sample of the first set of rules: As seen, shallow parsing helps us to capture a variety of natural language question expressions. However, there are cases where some simple key word based pattern matching would be enough to capture the asking point. That is our second set of rules. These rules are used when the first set of rules has failed to produce results. The following is a sample of such rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "In the stage of question expansion, the template in [4] [asking, David,Koresh,FBI, word, processor} The last item in the asking._point list attempts to find an infinitive by checking the word to followed by a verb (with the part-of-speech tag VB). 
As we know, infinitive verb phrases are often used in English to explain a reason for some action.", "cite_spans": [ { "start": 52, "end": 55, "text": "[4]", "ref_id": null }, { "start": 56, "end": 99, "text": "[asking, David,Koresh,FBI, word, processor}", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Apptication Modutes", "sec_num": null }, { "text": "On the text processing side, we first send the question directly to a search engine in order to narrow down the document pool to the first n, say 200, documents for IE processing. Currently, this includes tokenization, POS tagging and NE tagging. Future plans include several levels of parsing as well; these are required to support CE and GE extraction. It should be noted that all these operations are extremely robust and fast, features necessary for large volume text indexing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Processing", "sec_num": "2.2" }, { "text": "Parsing is accomplished through cascaded finite state transducer grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Processing", "sec_num": "2.2" }, { "text": "The Text Matcher attempts to match the question template with the processed documents for both the asking point and the key words. There is a preliminary ranking standard built-in the matcher in order to find the most probable answers. The primary rank is a count of how many unique keywords are contained within a sentence. The secondary ranking is based on the order that the keywords appear in the sentence compared to their order in the question. The third ranking is based on whether there is an exact match or a variant match for the key verb. In the TREC-8 QA track competition, Cymfony QA accuracy was 66.0%. Considering we have only used NE technology to support QA in this run, 66.0% is a very encouraging result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Matching", "sec_num": "2.3" }, { "text": "The first limitation comes from the types of questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitation", "sec_num": "3" }, { "text": "Currently only wh-questions are handled although it is planned that yes-no questions will be handled once we introduce CE and GE templates to support QA. Among the wh-questions, the why-question and how-question t are more challenging because the asking point cannot be simply mapped to the NE types/sub-types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitation", "sec_num": "3" }, { "text": "The second limitation is from the nature of the questions. Questions like Where can l find the homepage for Oscar winners or Where can I find info on Shakespeare's works might be answerable easily by a system based on a well-maintained data base of home pages. Since our system is based on the processing of the underlying documents, no correct answer can be provided if there is no such an answer (explicitly expressed in English) in the processed documents. In TREC-8 QA, this is not a problem since every question is guaranteed to have at least one answer in the given document pool. 
However, in the real world scenario such as a QA portal, it is conceived that the IE results based on the processing of the documents should be complemented by other knowledge sources such as e-copy of yellow pages or other manually maintained and updated data bases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitation", "sec_num": "3" }, { "text": "The third limitation is the lack of linguistic processing such as sentence-level parsing and cross-sentential co-reference (CO). This problem will be gradually solved when high-level IE technology is introduced into the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitation", "sec_num": "3" }, { "text": "A new QA architecture is under development; it will exploit all levels of the IE system, including CE and GE. The first issue is how much CE can contribute to a better support of QA. It is found that there are some frequently seen questions which can be better answered once the CE information is provided. These questions are of two types: (i) what/who questions about an NE; (ii) relationship questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "Questions The next issue is the relationships between GE and QA. It is our belief that the GE technology will result in a breakthrough for QA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "In order to extract GE templates, the text goes through a series of linguistic processing as shown in Figure 1 . It should be noted that the question processing is designed to go through parallel processes and share the same NLP resources until the point of matching and ranking.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "The merging of question templates and GE templates in Template Matcher are fairly straightforward. As they both undergo the same NLP processing, the resulting semantic templates are of the same form. Both question templates and GE templates correspond to fairly standard/predictable patterns (the PREDICATE value is open-ended, but the structure remains stable). More precisely, a user can ask questions on general events themselves (did what) and/or on the participants of the event (who, whom, what) and/or the time, frequency and place of events (when, how often, where). This addresses 2 An alpha version of TextractQA supported by both NE and CE has been implemented and is being tested. by far the most types of general questions of a potential user.", "cite_spans": [ { "start": 484, "end": 501, "text": "(who, whom, what)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "For example, if a user is interested in company acquisition events, he can ask questions like: Which companies ware acquired by Microsoft in 1999? Which companies did Microsoft acquire in 1999? 
Our system will then parse these questions into the templates as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "[31] := PREDICATE: acquire ARGUMENT1: Microsoft ARGUMENT2: WHAT(COMPANY) TIME: 1999", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "If the user wants to know when some acquisition happened, he can ask: When was Netscape acquired?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "Our system will then translate it into the pattern below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "[32] := PREDICATE: acquire ARGUMENT1: WHO ARGUMENT2: Netscape TIME: WHEN Note that WHO, WHAT, WHEN above are variable to be instantiated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" }, { "text": "Such question templates serve as search constraints to filter the events in our extracted GE template database. Because the question templates and the extracted GE template share the same structure, a simple merging operation would suffice. Nevertheless, there are two important questions to be answered: (i) what if a different verb with the same meaning is used in the question from the one used in the processed text? (ii) what if the question asks about something beyond the GE (or CE) information? These are issues that we are currently researching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work: Multi-level IE Supported QA", "sec_num": "4" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Nymble: a High-Performance Learning Name-finder", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Bikel", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "194--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikel D.M. et al. (1997) Nymble: a High-Performance Learning Name-finder. \"Proceedings of the Fifth Conference on Applied Natural Language Processing\", Morgan Kaufmann Publishers, pp. 194-201", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "MUC-7 Information Extraction Task Definition (version 5.1)", "authors": [ { "first": "N", "middle": [], "last": "Chinchor", "suffix": "" }, { "first": "E", "middle": [], "last": "Marsh", "suffix": "" } ], "year": 1998, "venue": "Proceedings of MUC-7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinchor N. and Marsh E. (1998) MUC-7 Information Extraction Task Definition (version 5.1), \"Proceedings of MUC-7\".", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "TIPSTER Architecture Design Document Version 2.3", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman R. (1997) TIPSTER Architecture Design Document Version 2.3. 
Technical report, DARPA", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "IsoQuest Inc.: Description of the NetOwl (TM) Extractor System as Used for MUC-7", "authors": [ { "first": "G", "middle": [ "R" ], "last": "Krupka", "suffix": "" }, { "first": "K", "middle": [], "last": "Hausman", "suffix": "" } ], "year": 1998, "venue": "Proceedings of MUC-7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krupka G.R. and Hausman K. (1998) IsoQuest Inc.: Description of the NetOwl (TM) Extractor System as Used for MUC-7, \"Proceedings of MUC-7\".", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MURAX: A Robust Linguistic Approach For Question Answering Using An On-Line Encyclopaedia", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1993, "venue": "Proceedings of SIGIR-93 93", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec J. (1993) MURAX: A Robust Linguistic Approach For Question Answering Using An On-Line Encyclopaedia, \"Proceedings of SIGIR-93 93\" Pittsburgh, Penna.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Flexible Information Extraction Learning Algorithm, Final Technical Report", "authors": [ { "first": "W &", "middle": [], "last": "Li", "suffix": "" }, { "first": "R", "middle": [], "last": "Srihari", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Seventh Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, W & Srihari, R. 2000. Flexible Information Extraction Learning Algorithm, Final Technical Report, Air Force Research Laboratory, Rome Research Site, New York MUC-7 (1998) Proceedings of the Seventh Message Understanding Conference (MUC-7), published on the website _http://www.muc.saic.com/", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Domain Independent Event Extraction Toolkit", "authors": [ { "first": "E", "middle": [], "last": "Roche", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "R", "middle": [], "last": "Srihari", "suffix": "" } ], "year": 1997, "venue": "Finite-State Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roche E. and Schabes Y. (1997) Finite-State Language Processing, MIT Press, Cambridge, MA Srihari R. (1998) A Domain Independent Event Extraction Toolkit, AFRL-IF-RS-TR-1998-152 Final Technical Report, Air Force Research Laboratory, Rome Research Site, New York", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Textract IE System Architecture", "type_str": "figure" }, "TABREF4": { "html": null, "num": null, "content": "
Process Question
    Shallow parse question
    Determine Asking Point
    Question expansion (using word lists)
Process Documents
    Tokenization, POS tagging, NE Indexing
    Shallow Parsing (not yet utilized)
Text Matcher
    Intersect search engine results with NE
    Rank answers
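A minimal Python sketch of the Text Matcher step in this algorithm is given below. It assumes sentences have already been NE-tagged; the class and function names are hypothetical stand-ins rather than the Textract API, and only the primary ranking criterion (count of unique question keywords in a sentence) is modelled.

# Hypothetical sketch of the Text Matcher step: intersect NE-tagged sentences
# with the question template and rank candidates by unique-keyword overlap.
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class TaggedSentence:
    text: str
    ne_types: Set[str]   # NE types found in the sentence, e.g. {"PERSON", "DATE"}
    tokens: Set[str]     # lower-cased tokens of the sentence

def match_and_rank(template: Dict, sentences: List[TaggedSentence],
                   top_n: int = 5) -> List[str]:
    """Keep sentences that contain an NE of the asked type plus question keywords,
    then rank them by how many unique keywords each sentence contains."""
    keywords = set(template["key_word"])
    candidates = [s for s in sentences
                  if template["asking_point"] in s.ne_types and s.tokens & keywords]
    candidates.sort(key=lambda s: len(s.tokens & keywords), reverse=True)
    return [s.text for s in candidates[:top_n]]

In the deployed system the ranking is finer grained (keyword order and exact versus variant verb matches are also considered), but the control flow is the same.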
2.1 Question Processing
The result of Question Processing is a list of keywords plus the asking point information. For example, for the question Who won the 1998 Nobel Peace Prize?, the output before question expansion is a simple 2-feature template, as shown below:
[3] asking_point: PERSON
key_word: { won, 1998, Nobel, Peace, Prize }
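A rough illustration of how such a 2-feature template could be produced from the question string is sketched below in Python. This is not the Cymfony rule formalism (which applies FST patterns over shallow-parse output first, with keyword rules as defaults); the wh-word table is partial and the tokenizer deliberately naive.

import re

# Partial, illustrative mapping from question words to asking points
# (NE types/sub-types or pre-defined notions such as REASON).
ASKING_POINT_RULES = [
    (r"\bwho\b", "PERSON"),
    (r"\bwhere\b", "LOCATION"),
    (r"\bwhen\b", "TIME/DATE"),
    (r"\bhow long\b", "DURATION/LENGTH"),
    (r"\bhow far\b", "LENGTH"),
    (r"\bhow old\b", "AGE"),
    (r"\bwhy\b", "REASON"),
]

STOP_WORDS = {"who", "what", "when", "where", "why", "how", "did", "do", "does",
              "is", "was", "were", "the", "a", "an", "for", "of", "in", "to"}

def build_question_template(question: str) -> dict:
    """Map a question to the 2-feature template: asking point plus key words."""
    q = question.lower()
    asking_point = next((tag for pattern, tag in ASKING_POINT_RULES
                         if re.search(pattern, q)), "UNKNOWN")
    key_words = [w for w in re.findall(r"[a-z0-9]+", q) if w not in STOP_WORDS]
    return {"asking_point": asking_point, "key_word": key_words}

# build_question_template("Who won the 1998 Nobel Peace Prize?")
# -> {'asking_point': 'PERSON', 'key_word': ['won', '1998', 'nobel', 'peace', 'prize']}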
Figure 2: Textract/QA 1.0 Prototype Architecture
The general algorithm for question
answering is as follows:
", "type_str": "table", "text": "P r~_~ ............ ?~ i i ~ ..............................." }, "TABREF7": { "html": null, "num": null, "content": "
Q: Who is Julian Hill?
A: name: Julian Werner Hill
type: PERSON
age: 91
gender: MALE
position: research chemist
affiliation: Du Pont Co.
education: Washington University; MIT
Q: What is Du Pont?
A: name: Du Pont Co.
type: COMPANY
staff: Julian Hill; Wallace Carothers.
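A toy Python illustration of how such who/what-is-NE questions could be answered directly from stored CE profiles follows; the dictionary simply mirrors the two example templates above, and the function name is hypothetical rather than part of Textract.

# Toy CE-profile store; in the real system these mini-CVs would come from the
# level-2 (CE) extraction database rather than a hand-written dictionary.
CE_PROFILES = {
    "julian hill": {
        "name": "Julian Werner Hill", "type": "PERSON", "age": "91",
        "gender": "MALE", "position": "research chemist",
        "affiliation": "Du Pont Co.",
        "education": "Washington University; MIT",
    },
    "du pont": {
        "name": "Du Pont Co.", "type": "COMPANY",
        "staff": "Julian Hill; Wallace Carothers",
    },
}

def answer_who_what_is(question: str) -> str:
    """Answer 'Who is X?' / 'What is X?' by returning the assembled CE profile."""
    q = question.lower()
    for key, profile in CE_PROFILES.items():
        if key in q:
            return "\n".join(f"{field}: {value}" for field, value in profile.items())
    return "no CE template found"

# answer_who_what_is("Who is Julian Hill?") returns the assembled profile shown above.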
Questions specifically about a CE relationship include: For which company did Julian Hill work? (affiliation relationship) Who are employees of Du Pont Co.? (staff relationship) What does Julian Hill do? (position/profession relationship) Which university did Julian Hill graduate from? (education relationship), etc. 2
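Under the same assumptions, a relationship question would map to a single CE field rather than a whole profile; the cue-to-field table below is purely illustrative and not part of the actual question grammar.

# Hypothetical mapping from relationship-question cues to CE fields.
RELATION_CUES = [
    ("which company did", "affiliation"),
    ("employees of", "staff"),
    ("what does", "position"),
    ("graduate from", "education"),
]

def ce_field_for(question: str) -> str:
    """Return the CE field that would answer a relationship question, if any."""
    q = question.lower()
    return next((field for cue, field in RELATION_CUES if cue in q), "UNKNOWN")

# ce_field_for("For which company did Julian Hill work?") -> "affiliation"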
", "type_str": "table", "text": "of the following format require CE templates as best answers: who/what is NE? For example, Who is Julian Hill? Who is Bill Clinton? What is Du Pont? What is Cymfony? To answer these questions, the system can simply 1 For example, How did one make a chocolate cake? How+Adjective questions (e.g. how long, how big, how old, etc.) are handled fairly well.retrieve the corresponding CE template to provide an \"assembled\" answer, as shown below." } } } }