{ "paper_id": "A00-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:12:12.645894Z" }, "title": "Unsupervised Discovery of Scenario-Level Patterns for Information Extraction", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Information Extraction (IE) systems are commonly based on pattern matching. Adapting an IE system to a new scenario entails the construction of a new pattern base-a timeconsuming and expensive process. We have implemented a system for finding patterns automatically from un-annotated text. Starting with a small initial set of seed patterns proposed by the user, the system applies an incremental discovery procedure to identify new patterns. We present experiments with evaluations which show that the resulting patterns exhibit high precision and recall.", "pdf_parse": { "paper_id": "A00-1039", "_pdf_hash": "", "abstract": [ { "text": "Information Extraction (IE) systems are commonly based on pattern matching. Adapting an IE system to a new scenario entails the construction of a new pattern base-a timeconsuming and expensive process. We have implemented a system for finding patterns automatically from un-annotated text. Starting with a small initial set of seed patterns proposed by the user, the system applies an incremental discovery procedure to identify new patterns. We present experiments with evaluations which show that the resulting patterns exhibit high precision and recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The task of Information Extraction (I-E) is the selective extraction of meaning from free natural language text. I \"Meaning\" is understood here in terms of a fixed set of semantic objects--entities, relationships among entities, and events in which entities participate. The semantic objects belong to a small number of types, all having fixed regular structure, within a fixed and closely circumscribed subject domain. The extracted objects are then stored in a relational database. In this paper, we use the nomenclature accepted in current IE literature; the term subject domain denotes a class of textual documents to be processed, e.g., \"business news,\" and scenario denotes the specific topic of interest within the domain, i.e., the set of facts to be extracted. One example of a scenario is \"management succession,\" the topic of MUC-6 (the Sixth Message Understanding Conference); in this scenario the system seeks to identify events in which corporate managers left 1For general references on IE, cf., e.g., (Pazienza, 1997; muc, 1995; muc, 1993) . their posts or assumed new ones. We will consider this scenario in detail in a later section describing experiments. IE systems today are commonly based on pattern matching. 
The patterns are regular expressions, stored in a \"pattern base\" containing a general-purpose component and a substantial domain- and scenario-specific component.", "cite_spans": [ { "start": 1017, "end": 1033, "text": "(Pazienza, 1997;", "ref_id": null }, { "start": 1034, "end": 1044, "text": "muc, 1995;", "ref_id": null }, { "start": 1045, "end": 1055, "text": "muc, 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Portability and performance are two major problem areas which are recognized as impeding widespread use of IE. This paper presents a novel approach, which addresses both of these problems by automatically discovering good patterns for a new scenario. The viability of our approach is tested and evaluated with an actual IE system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In the next section we describe the problem in more detail in the context of our IE system; sections 2 and 3 describe our algorithm for pattern discovery; section 4 describes our experimental results, followed by comparison with prior work and discussion, in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "1 The IE System", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Our IE system, among others, contains a back-end core engine, at the heart of which is a regular-expression pattern matcher. The engine draws on attendant knowledge bases (KBs) of varying degrees of domain-specificity. The KB components are commonly factored out to make the systems portable to new scenarios. There are four customizable knowledge bases in our IE system: the Lexicon contains general dictionaries and scenario-specific terms; the concept base groups terms into classes; the predicate base describes the logical structure of events to be extracted, and the pattern base contains patterns that catch the events in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Each KB has a substantial domain-specific component, which must be modified when moving to new domains and scenarios. The system allows the user (i.e., the scenario developer) to start with example sentences in text which contain events of interest, the candidates, and generalize them into patterns. However, the user is ultimately responsible for finding all the candidates, which amounts to manually processing example sentences in a very large training corpus. Should s/he fail to provide an example of a particular class of syntactic/semantic construction, the system has no hope of recovering the corresponding events. Our experience has shown that (1) the process of discovering candidates is highly expensive, and (2) gaps in patterns directly translate into gaps in coverage. How can the system help automate the process of discovering new good candidates? The system should find examples of all common linguistic constructs relevant to a scenario.
While there has been prior research on identifying the primary lexical patterns of a sub-language or corpus (Grishman et al., 1986; Riloff, 1996), the task here is more complex, since we are typically not provided in advance with a sub-corpus of relevant passages; these passages must themselves be found as part of the discovery process.", "cite_spans": [ { "start": 1063, "end": 1086, "text": "(Grishman et al., 1986;", "ref_id": "BIBREF2" }, { "start": 1087, "end": 1100, "text": "Riloff, 1996)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The difficulty is that one of the best indications of the relevance of the passages is precisely the presence of these constructs. Because of this circularity, we propose to acquire the constructs and passages in tandem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We outline our procedure for automatic acquisition of patterns; details are elaborated in later sections. The procedure is unsupervised in that it does not require the training corpus to be manually annotated with events of interest, nor a pre-classified corpus with relevance judgements, nor any feedback or intervention from the user. 2 The idea is to combine IR-style document selection with an iterative relaxation process; this is similar to techniques used elsewhere in NLP, and is inspired in large part, if remotely, by the work of (Kay and Röscheisen, 1993) on automatic alignment of sentences and words in a bilingual corpus. There, the reasoning was: sentences that are translations of each other are good indicators that words they contain are translation pairs; conversely, words that are translation pairs indicate that the sentences which contain them correspond to one another. 2 However, it may be supervised after each iteration, where the user can answer yes/no questions to improve the quality of the results.", "cite_spans": [ { "start": 540, "end": 566, "text": "(Kay and Röscheisen, 1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "In our context, we observe that documents that are relevant to the scenario will necessarily contain good patterns; conversely, good patterns are strong indicators of relevant documents. The outline of our approach is as follows. 1. Given: (1) a large corpus of un-annotated and un-classified documents in the domain; (2) an initial set of trusted scenario patterns, as chosen ad hoc by the user (the seed); as will be seen, the seed can be quite small: two or three patterns seem to suffice. (3) an initial (possibly empty) set of concept classes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "The pattern set induces a binary partition (a split) on the corpus: on any document, either zero or more than zero patterns will match. Thus the universe of documents, U, is partitioned into the relevant sub-corpus, R, vs. the non-relevant sub-corpus, R̄ = U - R, with respect to the given pattern set. Actually, the documents are assigned weights which are 1 for documents matched by the trusted seed, and 0 otherwise. 3 2.
Search for new candidate patterns: (a) Automatically convert each sentence in the corpus into a set of candidate patterns. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "(b) Generalize each pattern by replacing each lexical item which is a member of a concept class by the class name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "(c) Working from the relevant documents, select those patterns whose distribution is strongly correlated with other relevant documents (i.e., much more densely distributed among the relevant documents than among the non-relevant ones). 3 R represents the trusted truth through the discovery iterations, since it was induced by the manually-selected seed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "4 Here, for each clause in the sentence we extract a tuple of its major roles: the head of the subject, the verb group, the object, and the object complement, as described below. This tuple is considered to be a pattern for the present purposes of discovery; it is a skeleton for the rich, syntactically transformed patterns our system uses in the extraction phase. The idea is to consider those candidate patterns, p, which meet the density criterion:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "|H ∩ R| / |R|  >>  |H ∩ U| / |U|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "where H = H(p) is the set of documents where p hits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "(d) Based on co-occurrence with the chosen patterns, extend the concept classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution", "sec_num": "2" }, { "text": "Optional: Present the new candidates and classes to the user for review, retaining those relevant to the scenario.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optional: Present the new candidates and", "sec_num": "3." }, { "text": "The new pattern set induces a new partition on the corpus. With this pattern set, return to step 1. Repeat the procedure until no more patterns can be added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "3 Methodology 3.1 Pre-processing: Normalization Before applying the discovery procedure, we subject the corpus to several stages of pre-processing. First, we apply a name recognition module, and replace each name with a token describing its class, e.g., C-Person, C-Company, etc. We collapse together all numeric expressions, currency values, dates, etc., using a single token to designate each of these classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "We then apply a parser to perform syntactic normalization to transform each clause into a common predicate-argument structure. We use the general-purpose dependency parser of English, based on the FDG formalism (Tapanainen and Järvinen, 1997) and developed by the Research Unit for Multilingual Language Technology at the University of Helsinki, and Conexor Oy.
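To make the selection step (2c) above concrete, here is a minimal Python sketch of the density criterion over a corpus in which each document has already been reduced to the set of generalized clause tuples produced by the pre-processing and parsing stages described in this section. The document representation, the ratio threshold, the minimum-support cutoff, and the name select_candidates are illustrative assumptions, not part of the original system.

```python
from collections import defaultdict

def select_candidates(docs, seed_patterns, ratio=2.0, min_support=2):
    """Pick patterns much denser in the relevant sub-corpus R than in U,
    i.e. |H n R| / |R|  >>  |H n U| / |U|.  Each document in `docs` is the
    set of generalized clause tuples it contains (an assumed toy format)."""
    n_docs = len(docs)                                    # |U|
    rel_ids = {i for i, patterns in enumerate(docs)       # R: docs hit by the seed
               if seed_patterns & patterns}
    if not rel_ids:
        return []

    hits = defaultdict(set)                               # p -> H(p): ids of docs where p occurs
    for i, patterns in enumerate(docs):
        for p in patterns:
            hits[p].add(i)

    selected = []
    for p, h in hits.items():
        if p in seed_patterns or len(h) < min_support:    # skip seeds and very rare patterns
            continue
        density_r = len(h & rel_ids) / len(rel_ids)       # |H n R| / |R|
        density_u = len(h) / n_docs                       # |H n U| / |U|
        if density_r > ratio * density_u:
            selected.append((p, density_r / density_u))
    return sorted(selected, key=lambda x: -x[1])

# Tiny illustrative usage with hypothetical subject-verb-object tuples:
docs = [
    {("C-Company", "appoint", "C-Person"), ("C-Person", "succeed", "C-Person")},
    {("C-Person", "resign", None), ("C-Person", "succeed", "C-Person")},
    {("C-Company", "report", "earnings")},
]
seeds = {("C-Company", "appoint", "C-Person"), ("C-Person", "resign", None)}
print(select_candidates(docs, seeds, ratio=1.2, min_support=1))
# selects ('C-Person', 'succeed', 'C-Person'), which co-occurs with the seed
```

In a realistic run the ratio and support thresholds would be tuned to the corpus size; the point of the sketch is only the shape of the density test.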
The parser (modified to understand the name labels attached in the previous step) is used for reducing such variants as passive and relative clauses to a tuple, consisting of several elements.", "cite_spans": [ { "start": 211, "end": 242, "text": "(Tapanainen and Järvinen, 1997)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "3.2" }, { "text": "1. For each clause, the first element is the subject, a \"semantic\" subject of a non-finite sentence or agent of the passive. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "3.2" }, { "text": "2. The second element is the verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "3.2" }, { "text": "3. The third element is the object, certain object-like adverbs, subject of the passive, or subject complement. 6 4. The fourth element is a phrase which refers to the object or the subject. A typical example of such an argument is an object complement, such as Company named John Smith president. Another instance is the so-called copredicative (Nichols, 1978), in the parsing system (Järvinen and Tapanainen, 1997). A copredicative refers to a subject or an object, though this distinction is typically difficult to resolve automatically. 7 Clausal tuples also contain a locative modifier and a temporal modifier. We used a corpus of 5,963 articles from the Wall Street Journal, randomly chosen. The parsed articles yielded a total of 250,000 clausal tuples, of which 135,000 were distinct.", "cite_spans": [ { "start": 344, "end": 359, "text": "(Nichols, 1978)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "3.2" }, { "text": "Because tuples may not repeat with sufficient frequency to obtain reliable statistics, each tuple is reduced to a set of pairs: e.g., a verb-object pair, a subject-object pair, etc. Each pair is used as a generalized pattern during the candidate selection stage. Once we have identified pairs which are relevant to the scenario, we use them to construct or augment concept classes, by grouping together the missing roles (for example, a class of verbs which occur with a relevant subject-object pair: \"company {hire/fire/expel...} person\"). This is similar to work by several other groups which aims to induce semantic classes through syntactic co-occurrence analysis (Riloff and Jones, 1999; Pereira et al., 1993; Dagan et al., 1993; Hirschman et al., 1975), although in our case the contexts are limited to selected patterns, relevant to the scenario. 5 E.g., \"John sleeps\", \"John is appointed by Company\", \"I saw a dog which sleeps\", \"She asked John to buy a car\". 6 E.g., \"John is appointed by Company\", \"John is the president of Company\", \"I saw a dog which sleeps\", \"The dog which I saw sleeps\".
7For example, \"She gave us our coffee black\", \"Company appointed John Smith as president\".", "cite_spans": [ { "start": 668, "end": 692, "text": "(Riloff and Jones, 1999;", "ref_id": "BIBREF13" }, { "start": 693, "end": 714, "text": "Pereira et al., 1993;", "ref_id": "BIBREF12" }, { "start": 715, "end": 734, "text": "Dagan et al., 1993;", "ref_id": "BIBREF0" }, { "start": 735, "end": 758, "text": "Hirschman et al., 1975)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Generalization and Concept Classes", "sec_num": "3.3" }, { "text": "Here we present the results from experiments we conducted on the MUC-6 scenario, \"management succession\". The discovery procedure was seeded with a small pattern set, namely:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Discovery", "sec_num": "3.4" }, { "text": "Subject Verb Direct Object C-Company C-Appoint C-Person C-Person C-Resign", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Discovery", "sec_num": "3.4" }, { "text": "Documents are assigned relevance scores on a scale between 0 and 1. The seed patterns are accepted as ground truth; thus the documents they match have relevance 1. On subsequent iterations, the newly accepted patterns are not trusted as absolutely. On iteration number i q-1, each pattern p is assigned a precision measure, based on the relevance of the documents it matches: Pc(P) --Igl is the conditional probability of relevance. We further impose two support criteria: we distrust such frequent patterns where [HA U{ > a[U[ as uninformative, and rare patterns for which [H A R[ II :I :I .I ,I .I :I \u2022Ii.i~iBi0.90.90.8iii::iiiii~00.7(I)ii i0. 5..iz0.4iI00.10.20.30.40.50.60.70.80.91RecallFigure 2: Precision vs. Recall", "num": null, "type_str": "table" } } } }