{ "paper_id": "A00-1037", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:12:36.515000Z" }, "title": "Domain-Specific Knowledge Acquisition from Text", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern Methodist University Dallas", "location": { "postCode": "75275-0122", "settlement": "Texas" } }, "email": "moldovan@seas.smu.edu" }, { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern Methodist University Dallas", "location": { "postCode": "75275-0122", "settlement": "Texas" } }, "email": "roxana@seas.smu.edu" }, { "first": "Vasile", "middle": [], "last": "Rus", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern Methodist University Dallas", "location": { "postCode": "75275-0122", "settlement": "Texas" } }, "email": "rus@seas.smu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In many knowledge intensive applications, it is necessary to have extensive domain-specific knowledge in addition to general-purpose knowledge bases. This paper presents a methodology for discovering domain-specific concepts and relationships in an attempt to extend WordNet. The method was tested on five seed concepts selected from the financial domain: interest rate, stock market, inflation, economic growth, and employment.", "pdf_parse": { "paper_id": "A00-1037", "_pdf_hash": "", "abstract": [ { "text": "In many knowledge intensive applications, it is necessary to have extensive domain-specific knowledge in addition to general-purpose knowledge bases. This paper presents a methodology for discovering domain-specific concepts and relationships in an attempt to extend WordNet. The method was tested on five seed concepts selected from the financial domain: interest rate, stock market, inflation, economic growth, and employment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "1 Desiderata for Automated Knowledge Acquisition The need for knowledge The knowledge is infinite and no matter how large a knowledge base is, it is not possible to store all the concepts and procedures for all domains. Even if that were possible, the knowledge is generative and there are no guarantees that a system will have the latest information all the time. And yet, if we are to build common-sense knowledge processing systems in the future, it is necessary to have general-purpose and domain-specific knowledge that is up to date. Our inability to build large knowledge bases without much effort has impeded many ANLP developments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The most successful current Information Extraction systems rely on hand coded linguistic rules representing lexico-syntactic patterns capable of matching natural language expressions of events. Since the rules are hand-coded it is difficult to port systems across domains. 
Question answering, inference, summarization, and other applications can benefit from large linguistic knowledge bases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The basic idea A possible solution to the problem of rapid development of flexible knowledge bases is to design an automatic knowledge acquisition system that extracts knowledge from texts for the purpose of merging it with a core ontological knowledge base. The attempt to create a knowledge base manually is time consuming and error prone, even for small application domains, and we believe that automatic knowledge acquisition and classification is the only viable solution to large-scale, knowledge intensive applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This paper presents an interactive method that acquires new concepts and connections associated with user-selected seed concepts, and adds them to the WordNet linguistic knowledge structure (Fellbaum 1998) . The sources of the new knowledge are texts acquired from the Internet or other corpora. At the present time, our system works in a semi-automatic mode, in the sense that it acquires concepts and relations automatically, but their validation is done by the user.", "cite_spans": [ { "start": 190, "end": 205, "text": "(Fellbaum 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We believe that domain knowledge should not be acquired in a vacuum; it should expand an existent ontology with a skeletal structure built on consistent and acceptable principles. The method presented in this paper is applicable to any Machine Readable Dictionary. However, we chose WordNet because it is freely available and widely used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This work was inspired in part by Marti Hearst's paper (Hearst 1998) where she discovers manually lexico-syntactic patterns for the HYPERNYMY relation in WordNet.", "cite_spans": [ { "start": 55, "end": 68, "text": "(Hearst 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": null }, { "text": "Much of the work in pattern extraction from texts was done for improving the performance of Information Extraction systems. 
Research in this area was done by (Kim and Moldovan 1995) (Riloff 1996) , (Soderland 1997) and others.", "cite_spans": [ { "start": 158, "end": 181, "text": "(Kim and Moldovan 1995)", "ref_id": null }, { "start": 182, "end": 195, "text": "(Riloff 1996)", "ref_id": null }, { "start": 198, "end": 214, "text": "(Soderland 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": null }, { "text": "The MindNet (Richardson 1998) project at Microsoft is an attempt to transform the Longman Dictionary of Contemporary English (LDOCE) into a form of knowledge base for text processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": null }, { "text": "Woods studied knowledge representation and classification for long time (Woods 1991) , and more recently is trying to automate the construction of taxonomies by extracting concepts directly from texts (Woods 1997) .", "cite_spans": [ { "start": 72, "end": 84, "text": "(Woods 1991)", "ref_id": null }, { "start": 201, "end": 213, "text": "(Woods 1997)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": null }, { "text": "The Knowledge Acquisition from Text (KAT) system is presented next. It consists of four parts: (1) discovery of new concepts, (2) discovery of new lexical patterns, (3) discovery of new relationships reflected by the lexical patterns, and (4) the classification and integration of the knowledge discovered with a WordNet -like knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": null }, { "text": "2.1 Discover new concepts Select seed concepts. New domain knowledge can be acquired around some seed concepts that a user considers important. In this paper we focus on the financial domain, and use: interest rate, stock market, inflation, economic growth, and employment as seed concepts. The knowledge we seek to acquire relates to one or more of these concepts, and consists of new concepts not defined in WordNet and new relations that link these concepts with other concepts, some of which are in WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "For example, from the sentence: When the US economy enters a boom, mortgage interest rates rise, the system discovers: (1) the new concept mortgage interest rate not defined in WordNet but related to the seed concept interest rate, and (2) the state of the US economy and the value of mortgage interest rate are in a DIRECT RELATIONSHIP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "In WordNet, a concept is represented as a synset that contains words sharing the same meaning. In our experiments, we extend the seed words to their corresponding synset. For example, stock market is synonym with stock exchange and securities market, and we aim to learn concepts related to all these terms, not only to stock market.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "Extract sentences. Queries are formed with each seed concept to extract documents from the Internet and other possible sources. The documents retrieved are further processed such that only the sentences that contain the seed concepts are retained. This way, an arbitrarily large corpus .4 is formed of sentences containing the seed concepts. 
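A minimal sketch of this sentence-filtering step is given below, assuming the retrieved documents have already been split into sentences; the function name and the simple substring test are illustrative rather than the authors' implementation, and the seed synonyms follow the WordNet synsets mentioned above.

```python
# Sketch of the "Extract sentences" step (illustrative, not the KAT code):
# keep only sentences that mention a seed concept or one of its WordNet synonyms.
SEED_SYNSETS = {
    "interest rate": ["interest rate"],
    "stock market": ["stock market", "stock exchange", "securities market"],
    "inflation": ["inflation"],
    "economic growth": ["economic growth"],
    "employment": ["employment"],
}

def build_corpus(sentences, limit_per_seed=1000):
    """Group sentences by the seed they mention, up to a fixed number per seed."""
    corpus = {seed: [] for seed in SEED_SYNSETS}
    for sentence in sentences:
        lowered = sentence.lower()
        for seed, terms in SEED_SYNSETS.items():
            if len(corpus[seed]) < limit_per_seed and any(t in lowered for t in terms):
                corpus[seed].append(sentence)
    return corpus
```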
We limit the size of this corpus to 1000 sentences per seed concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "Parse sentences. Each sentence in this corpus is first part-of-speech (POS) tagged then parsed. We use Brill's POS tagger and our own parser. The output of the POS tagger for the example above is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "When/WRB the/DW U.~./NNP economy/NN enters/VBZ a/DT boom/NN ,/, mortgage/NN inter-est_rates/NNS rise/vBP ./.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "The syntactic parser output is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "TOP (S (SBAR (WHADVP (WRB When)) (S (NP (DT the) (NNP U.S.) (NN economy)) (VP (VBZ enters) (NP (DT a) (NN boom) (, ,))))) (NP (NN mortgage) (NNS interest_rates)) (VP (VI3P rise)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "Extract new concepts. In this paper only noun concepts are considered. Since, most likely, oneword nouns are already defined in WordNet, the focus here is on compound nouns and nouns with modifiers that have meaning but are not in WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "The new concepts directly related to the seeds are extracted from the noun phrases (NPs) that contain the seeds. In the example above, we see that the seed belongs to the NP: mortgage interest rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "This way, a list of NPs containing the seeds is assembled automatically from the parsed texts. Every such NP is considered a potential new concept. This is only the \"raw material\" from which actual concepts are discovered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "In some noun phrases the seed is the head noun,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "i.e. [word, word,..see~ [mortgage_interest_rate] , since it is defined in the on-line dictionary OneLook Dictionaries (http://www.onelook.com). Procedure 1.3. User validation. Since currently we lack a formal definition of a concept, it is not possible to completely automate the discovery of concepts. The human inspects the list of noun phrases and decides whether to accept or decline each concept.", "cite_spans": [ { "start": 5, "end": 23, "text": "[word, word,..see~", "ref_id": null }, { "start": 24, "end": 48, "text": "[mortgage_interest_rate]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "KAT System", "sec_num": "2" }, { "text": "Texts represent a rich source of information from which in addition to concepts we can also discover relations between concepts. We are interested in discovering semantic relationships that link the concepts extracted above with other concepts, some of which may be in WordNet. The approach is to search for lexico-syntactic patterns comprising the concepts of interest. The semantic relations from WordNet are the first we search for, as it is only natural to add more of these relations to enhance the WordNet knowledge base. 
However, since the focus is on the acquisition of domain-specific knowledge, there are semantic relations between concepts other than the WordNet relations that are important. These new relations can be discovered automatically from the clauses and sentences in which the seeds occur.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Pick a semantic relation R. These can be Word-Net semantic relations or any other relations defined by the user. So far, we have experimented with the WordNet HYPERNYMY (or so-called IS-A) relation, and three other relations. By inspecting a few sentences containing interest rate one can notice that INFLUENCE is a frequently used relation. The two other relations are CAUSE and EQUIVALENT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Pick a pair of concepts Ci, C# among which R holds. These may be any noun concepts. In the context of finance domain, some examples of concepts linked by the INFLUENCE relation are: interest rate INFLUENCES earnings, or credit worthiness INFLUENCES interest rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Extract lexico-syntactic patterns Ci :P Cj. Search any corpus B, different from ,4 for all instances where Ci and Cj occur in the same sentence. Extract the lexico-syntactic patterns that link the two concepts. For example~ from the sentence : The graph indicates the impact on earnings from several different interest rate scenarios, the generally applicable pattern extracted is: impact on NP2 from NP1 This pattern corresponds unambiguously to the relation R we started with, namely INFLUENCE. Thus we conclude: INFLUENCE(NPI, NP2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Another example is: As the credit worthiness decreases, the interest rate increases. From this sentence we extract another lexical pattern that expresses the INFLUENCE relation: [as NP1 vbl, NP2 vb$] & [vbl and vb2 are antonyms] This pattern is rather complex since it contains not only the lexical part but also the verb condition that needs to be satisfied.", "cite_spans": [ { "start": 178, "end": 190, "text": "[as NP1 vbl,", "ref_id": null }, { "start": 191, "end": 228, "text": "NP2 vb$] & [vbl and vb2 are antonyms]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "This procedure repeats for all relations R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "2.3 Discover new relationships between concepts Let us denote with Cs the seed-related concepts found with Procedures 1.1 through 1.3. We search now corpus ,4 for the occurrence of patterns ~ discovered above such that one of their two concepts is a concept Cs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Search corpus ,4 for a pattern ~. Using a lexicosyntactic pattern P, one at a time, search corpus ,4 for its occurrence. 
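To make this search step concrete, here is a rough sketch for a single INFLUENCE pattern ("impact on NP2 from NP1"), including the seed-relatedness test described next; it assumes a crude regular-expression approximation of noun phrases, whereas KAT matches NPs from the parser output, so the regex, the seed test, and the function name are only illustrative.

```python
import re

# Illustrative sketch only: KAT matches patterns over parsed NPs, not raw regexes.
# Crude NP approximation: a word optionally followed by up to three more words.
NP = r"[A-Za-z][\w-]*(?:\s+[\w-]+){0,3}"

# One lexico-syntactic pattern for INFLUENCE: "impact on NP2 from NP1".
IMPACT_ON_FROM = re.compile(
    r"impact\s+on\s+(?P<np2>" + NP + r")\s+from\s+(?P<np1>" + NP + r")", re.IGNORECASE
)

SEEDS = {"interest rate", "stock market", "inflation", "economic growth", "employment"}

def match_influence(sentence):
    """Return INFLUENCE(NP1, NP2) candidates whose NPs include a seed-related concept."""
    found = []
    for m in IMPACT_ON_FROM.finditer(sentence):
        np1, np2 = m.group("np1").lower(), m.group("np2").lower()
        if any(seed in np1 or seed in np2 for seed in SEEDS):
            found.append(("INFLUENCE", np1, np2))
    return found

print(match_influence("The graph indicates the impact on earnings "
                      "from several different interest rate scenarios."))
# [('INFLUENCE', 'several different interest rate', 'earnings')]  -- approximate NP spans
```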
If found, search further whether or not one of the NPs is a seed-related concept Cs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Identify new concepts Cn. Part of the pattern 7 ~ are two noun phrases, one of which is Cs. The head noun from the other noun phrase is a concept Cn we are looking for. This may be a WordNet concept, and if it is not it will be added to the list of concepts discovered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "Form relation R(Cs, Cn). Since each pattern 7 ~ is a linguistic expression of its corresponding semantic relation R, we conclude R(Cs,Cn) (this is interpreted \"C8 is relation R Cn)'). These steps are repeated for all patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "User intervention to accept or reject relationships is necessary mainly due to our system inability of handling coreference resolution and other complex linguistic phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discover lexlco-syntactic patterns", "sec_num": "2.2" }, { "text": "integration Next, a taxonomy needs to be created that is consistent with WordNet. In addition to creating a taxonomy, this step is also useful for validating the concepts acquired above. The classification is based on the subsumption principle (Schmolze and Lipkis 1983) , (Woods 1991) .", "cite_spans": [ { "start": 244, "end": 270, "text": "(Schmolze and Lipkis 1983)", "ref_id": "BIBREF8" }, { "start": 273, "end": 285, "text": "(Woods 1991)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "This algorithm provides the overall steps for the classification of concepts within the context of Word-Net. Figure 1 shows the inputs of the Classification Algorithm and suggests that the classification is an iterative process. In addition to WordNet, the inputs consist of the corpus ,4, the sets of concepts Cs and Cn, and the relationships 7~. Let's denote with C = Cs U Cn the union of the seed related concepts with the new concepts. All these concepts need to be classified. Step 1. From the set of relationships 7\"~ discovered in Part 3, pick all the HYPERNYMY relations. From the way these relations were developed, there are two possibilities:", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 117, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "(1) A HYPERNYMY relation links a WordNet concept Cw with another concept from the set C denoted with CAw , or (2) A HYPERNYMY relation links a concept Cs with a concept Cn.", "cite_spans": [ { "start": 101, "end": 109, "text": "CAw , or", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "Concepts C~w are immediately linked to Word-Net and added to the knowledge base. The concepts from case (2) are also added to the knowledge base but they form at this point only some isolated islands since are not yet linked to the rest of the knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "Step 2. 
Search corpus `4 for all the patterns associated with the HYPERNYMY relation that may link Step 3. Classify all concepts in set Ce using Procedures 4.1 through 4.5 below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "Step 4. Repeat Step 3 for all the concepts in set Cc several times till no more changes occur. This reclassification is necessary since the insertion of a concept into the knowledge base may perturb the ordering of other surrounding concepts in the hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "Step 5. Add the rest of relationships 7~ other than the HYPERNYMY to the new knowledge base. The HYPERNYMY relations have already been used in the Classification Algorithm, but the other relations, i.e. INFLUENCE, CAUSE and EQUIVALENT need to be added to the knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge classification and", "sec_num": "2.4" }, { "text": "Procedure 4.1. Classify a concept of the form [word, head] with respect to concept [head] .", "cite_spans": [ { "start": 46, "end": 58, "text": "[word, head]", "ref_id": null }, { "start": 83, "end": 89, "text": "[head]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "It is assumed here that the [head] concept exists in WordNet simply because in many instances the \"head\" is the \"seed\" concept, and because frequently the head is a single word common noun usually defined in WordNet. In this procedure we consider only those head nouns that do not have any hyponyms since the other case when the head has other concepts under it is more complex and is treated by Procedure 4.4. Here \"word\" is a noun or an adjective. For a relative classification of two such concepts, the ontological relations between headz and head2 and between word1 and words, if exist, are extended to the two concepts. We distinguish here three possibilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "1. heady subsumes heads and word1 subsumes word2. In this case [wordz, headl] In the previous work on knowledge classification it was assumed that the concepts were accompanied by rolesets and values (Schmolze and Lipkis 1983) , (Woods 1991) , and others. Knowledge classifiers are part of almost any knowledge representation system.", "cite_spans": [ { "start": 63, "end": 77, "text": "[wordz, headl]", "ref_id": null }, { "start": 200, "end": 226, "text": "(Schmolze and Lipkis 1983)", "ref_id": "BIBREF8" }, { "start": 229, "end": 241, "text": "(Woods 1991)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "However, the problem we face here is more difficult. While in build-by-hand knowledge representation systems, the relations and values defining concepts are readily available, here we have to extract them from text. Fortunately, one can take advantage of the glossary definitions that are associated with concepts in WordNet and other dictionaries. One approach is to identify a set of semantic relations into which the verbs used in the gloss definitions are mapped into for the purpose of working with a manageable set of relations that may describe the concepts restrictions. 
In WordNet these basic relations are already identified and it is easy to map every verb into such a semantic relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "As far as the newly discovered concepts are concerned, their defining relations need to be retrieved from texts. Human assistance is required, at least for now, to pinpoint the most characteristic relations that define a concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "Below is a two step algorithm that we envision for the relative classification of two concepts A and B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "Let's us denote with ARaCa and BRbCb the relationships that define concepts A and B respectively. These are similar to rolesets and values. Figure 4 it is shown the classification of concept monetary policy that has been discovered. By default this concept is placed under policy. However in WordNet there is a hierarchy fiscal policy -IS-Aeconomic policy -IS-A -policy. The question is where exactly to place monetary policy in this hierarchy.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 148, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "The gloss of economic policy indicates that it is MADE BY Government, and that it CONTROLS economic growth-(here we simplified the explanation and used economy instead of economic growth). The gloss of fiscal policy leads to relations MADE BY Government, CONTROLS budget, and CONTROLS taxation. The concept money supply was found by Procedure 1.2 in several dictionaries, and its dictionary definition leads to relations MADE BY Federal Government, and CONTROLS money supply. In Word-Net Government subsumes Federal Government, and economy HAS PART money. All necessary conditions are satisfied for economic policy to subsume monetary policy. However, fiscal policy does not subsume monetary policy since monetary policy does not control budget or taxation, or any of their hyponyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "Procedure 4.5 Merge a structure of concepts with the rest of the knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "It is possible that structures consisting of several inter-connected concepts are formed in isolation of the main knowledge base as a result of some procedures. The task here is to merge such structures with the main knowledge base such that the new knowledge base will be consistent with both the structure and the main knowledge base. This is done by bridging whenever possible the structure concepts and the main knowledge base concepts. It is possible that as a result of this merging procedure, some HYPERNYMY relations either from the structure or the main knowledge base will be destroyed to keep the consistency. 
An example is shown in Figure 5 .", "cite_spans": [], "ref_spans": [ { "start": 644, "end": 652, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "Example : The following HYPERNYMY relationships were discovered in Part 3: HYPERNYMY(financial market,capital market) HYPERNYMY(fInancial market,money market) HYPERNYMY(capital market,stock market) The structure obtained from these relationships along with a part of WordNet hierarchy is shown in Figure 5 . An attempt is made to merge the new structure with WordNet. To these relations it corresponds a structure as shown in Figure 5 . An attempt is made to merge this structure with Word-Net. Searching WordNet for all concepts in the structure we find money market and stock market in WordNet where as capital market and financial market are not. Figure 5 shows how the structure merges with WordNet and moreover how concepts that were unrelated in WordNet (i.e. stock market and money market) become connected through the new structure. It is also interesting to notice that the IS-A link in WordNet from money market to market is interrupted by the insertion of financial market in-between them.", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 305, "text": "Figure 5", "ref_id": "FIGREF1" }, { "start": 426, "end": 434, "text": "Figure 5", "ref_id": "FIGREF1" }, { "start": 650, "end": 658, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Concept classification procedures", "sec_num": null }, { "text": "The KAT Algorithm has been implemented, and when given some seed concepts, it produces new concepts, patterns and relationships between concepts in an interactive mode. Table 1 shows the number of concepts extracted from a 5000 sentence corpus, in which each sentence contains at least one of the five seed concepts. The NPs were automatically searched in Word-Net and other on-line dictionaries. There were 3745 distinct noun phrases of interest extracted; the rest contained only the seeds or repetitions. Most of the ", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 176, "text": "Table 1", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": ":~ INFLUENCE(NPI,NP2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "Phillips, a British economist, stated in 1958 that high inflation causes low unemployment rates. The Bank of Israel governor said that the ti;ht economic policy would have an immediate impact on inflation this year. As the economy picks up steam, so does inflation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "Higher interest rates are normally associated with weaker bond markets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "On the other hand, if interest rates go down, bonds go up, and your bond becomes more valuable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "The effects of inflation on debtors and creditors varies as the actual inflation is compared to the expected one. 
There exists an inverse relationship between unemployment rates and inflation, best illustrated by the Phillips Curve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "Irish employment is also largely a function of the past high birth rate. We believe that the Treasury bonds (and thus interest rates) are in a downward cycle. processing in Part 1 is taken by the parser. The human intervention to accept or decline concepts takes about 4 min./seed. The next step was to search for lexico-syntactic patterns. We considered one WordNet semantic relation, HYPERNYMY and three other relations that we found relevant for the domain, namely INFLU-ENCE, CAUSE and EQUIVALENT. For each relation, a pair of related words was selected and searched for on the Internet. The first 500 sentences/relation were retained. A human selected and validated semiautomatically the patterns for each sentence. A sample of the results is shown in Table 2 . A total of 22 patterns were obtained and their selection and validation took approximately 35 minutes/relation. Next, the patterns are searched for on the 5000 sentence corpus (Part 3). The procedure provided a total of 43 new concepts and 166 relationships in which at least one of the seeds occurred. From these relationships, by inspection, we have accepted 63 and rejected 102, procedure which took about 7 minutes. Table 3 lists some of the 63 relationships discovered. Applications An application in need of domain-specific knowledge is Question Answering. The concepts and the relationships acquired can be useful in answering difficult questions that normally cannot be easily answered just by using the information from WordNet. Consider the processing of the following questions after the new domain knowledge has been acquired: QI: What factors have an impact on the interest rate? Q2: What happens with the employment when the economic growth rises? Q3: How does deflation influence prices? Figure 6 shows a portion of the new domain knowledge that is relevant to these questions. The first question can be easily answered by extracting the relationships that point to the concept interest rate. The factors that influence the interest rate are Fed, inflation, economic growth, and employment.", "cite_spans": [], "ref_spans": [ { "start": 757, "end": 764, "text": "Table 2", "ref_id": "TABREF6" }, { "start": 1187, "end": 1194, "text": "Table 3", "ref_id": "TABREF8" }, { "start": 1770, "end": 1778, "text": "Figure 6", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "The last two questions ask for more detailed information about the complex relationship among these concepts. Following the path from the deflation concept up to prices, the system learns that deflation influences direct proportionally real interest rate, and real interest rate has an inverse proportional impact on prices. Both these relationships came from the sentence: Thus, the deflation and the real interest rate are positively correlated, and so a higher real interest rate leads to falling prices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "This method may be adapted to acquire information when the question concepts are not in the knowledge base. 
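As a rough illustration of how the acquired relations could back such a question-answering module, the triples discussed for Figure 6 can be stored and queried directly; the storage layout and the function names below are hypothetical and are not part of KAT.

```python
# Hypothetical store of a few acquired relations (those discussed for Figure 6).
RELATIONS = [
    ("INFLUENCE", "Fed", "interest rate"),
    ("INFLUENCE", "inflation", "interest rate"),
    ("INFLUENCE", "economic growth", "interest rate"),
    ("INFLUENCE", "employment", "interest rate"),
    ("INFLUENCE_DIRECT_PROPORTIONALLY", "deflation", "real interest rate"),
    ("INFLUENCE_INVERSE_PROPORTIONALLY", "real interest rate", "prices"),
]

def factors_influencing(concept):
    """Q1: return every concept that points to 'concept' through an INFLUENCE-type relation."""
    return [src for rel, src, dst in RELATIONS if dst == concept and rel.startswith("INFLUENCE")]

def influence_path(src, dst, seen=None):
    """Q3: follow INFLUENCE-type edges from src toward dst and return the chain of relations."""
    seen = seen or {src}
    for rel, s, d in RELATIONS:
        if s == src and d not in seen:
            if d == dst:
                return [(rel, s, d)]
            tail = influence_path(d, dst, seen | {d})
            if tail:
                return [(rel, s, d)] + tail
    return []

print(factors_influencing("interest rate"))   # ['Fed', 'inflation', 'economic growth', 'employment']
print(influence_path("deflation", "prices"))  # deflation -> real interest rate -> prices
```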
Procedures may be invoked to discover these concepts and the relations in which they may be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Results", "sec_num": "3" }, { "text": "The knowledge acquisition technology described above is applicable to any domain, by simply selecting appropriate seed concepts. We started with five concepts interest rate, stock market, inflation, economic growth, and employment and from a corpus of 5000 sentences we acquired a total of 362 concepts of which 319 contain the seeds and 43 relate to these via selected relations. There were 22 distinct le:dco-syntactic patterns discovered used in 63 instances. Most importantly, the new concepts can be integrated with an existing ontology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The method works in an interactive mode where the user accepts or declines concepts, patterns and relationships. The manual operation took on average 40 minutes per seed for the 5000 sentence corpus. KAT is useful considering that most of the knowledge base construction today is done manually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Complex linguistic phenomena such as coreference resolution, word sense disambiguation, and others have to be dealt with in order to increase the automation of the knowledge acquisition system. Without a good handling of these problems the results are not always accurate and human intervention is necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "LIBOR) HYPERNYMY(leading stock market", "authors": [ { "first": "", "middle": [], "last": "Hypeanymy", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "HYPEaNYMY(interest rate, LIBOR) HYPERNYMY(leading stock market, New York Stock Exchange)", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "HYPERNYMY(market risks, interest rate risk) HYPERNYMY(Capital markets, stock markets) CAUSE(inflation, unemployment) CAUSE(labour shortage, wage inflation) CAUSE(excessive demand, inflation INFLUENCE_DIRECT_PROPORTIONALYI economy, inflation) INFLUENCE_DIRECT_PROPORT1ONALY settlements", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "HYPERNYMY(market risks, interest rate risk) HYPERNYMY(Capital markets, stock markets) CAUSE(inflation, unemployment) CAUSE(labour shortage, wage inflation) CAUSE(excessive demand, inflation INFLUENCE_DIRECT_PROPORTIONALYI economy, inflation) INFLUENCE_DIRECT_PROPORT1ONALY settlements, interest rate) INFLUENCE..DIRECT..", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "interest rates, dollars) INFLUENCE_DIRECT_PROPORTIONALY~ oil prices, inflation) INFLUENCE_DIRECT_PROPORTIONALY' inflation, nominal interest rates) INFLUENCE..DIRECT_PROPORTIONALY~ deflation, real interest rates) INFLUENCE-DIRECT-PROPORTIONALY currencles,lnflation) INFLUENCE_INVERSE_PROPORTIONALY unemployment rates, inflation) INFLUENCE_INVERSE-PKOPOKTIONALY monetary policies, inflation) INFLUENCE_INVERSE_PROPORTIONALY economy, interest rates) INFLUENCE_INVERSE..PROPORTIONALY inflation, unemployment rates) INFLUENCE.JNVERSE-PROPORTIONALY credit worthiness, interest rate) 
INFLUENCE_INVERSE-PROPORTIONALYlinterest rates, bonds) INFLUENCE(Internal Revenue Service", "authors": [ { "first": "U", "middle": [ "S" ], "last": "Proportionaly~", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "PROPORTIONALY~ U.S. interest rates, dollars) INFLUENCE_DIRECT_PROPORTIONALY~ oil prices, inflation) INFLUENCE_DIRECT_PROPORTIONALY' inflation, nominal interest rates) INFLUENCE..DIRECT_PROPORTIONALY~ deflation, real interest rates) INFLUENCE-DIRECT-PROPORTIONALY currencles,lnflation) INFLUENCE_INVERSE_PROPORTIONALY unemployment rates, inflation) INFLUENCE_INVERSE-PKOPOKTIONALY monetary policies, inflation) INFLUENCE_INVERSE_PROPORTIONALY economy, interest rates) INFLUENCE_INVERSE..PROPORTIONALY inflation, unemployment rates) INFLUENCE.JNVERSE-PROPORTIONALY credit worthiness, interest rate) INFLUENCE_INVERSE-PROPORTIONALYlinterest rates, bonds) INFLUENCE(Internal Revenue Service, interest rates) INFLUENCE(economic growth, share prices)", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "EQUIVALENT(big mistakes, high inflation rates of 1970s) EQUIVALENT(fixed interest rate, coupon) References Christiane Fellbaum. WordNet -An Electronic Lezical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "EQUIVALENT(big mistakes, high inflation rates of 1970s) EQUIVALENT(fixed interest rate, coupon) References Christiane Fellbaum. WordNet -An Electronic Lezical Database, MIT Press, Cambridge, MA, 1998.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated Discovery of WordNet Relations", "authors": [ { "first": "Marti", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1998, "venue": "WordNet: An Electronic Lezical Database and Some of its Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti Hearst. Automated Discovery of WordNet Rela- tions. In WordNet: An Electronic Lezical Database and Some of its Applications, editor Fellbaum, C., MIT Press, Cambridge, MA, 1998.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Acquisition of Linguistic Patterns for knowledge-based information extraction", "authors": [ { "first": "J", "middle": [], "last": "Kim", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": null, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "7", "issue": "5", "pages": "713--724", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Kim and D. Moldovan. Acquisition of Linguistic Patterns for knowledge-based information extraction. IEEE Transactions on Knowledge and Data Engineer- ing 7(5): pages 713-724.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Description Classifier for the Predicate Calculus", "authors": [ { "first": "R", "middle": [], "last": "Macgregor ; Stephen", "suffix": "" }, { "first": "D", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "William", "middle": [ "B" ], "last": "Dolan", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 1994, "venue": "MindNet: acquiring and structuring semantic information from text. Proceedings of ACL-Coling", "volume": "", "issue": "", "pages": "1098--1102", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. MacGregor. A Description Classifier for the Predicate Calculus. 
Proceedings of the 12th National Conference on Artificial Intelligence (AAAI94), pp. 213-220, 1994. Stephen D. Richardson, William B. Dolan, Lucy Vander- wende. MindNet: acquiring and structuring seman- tic information from text. Proceedings of ACL-Coling 1998, pages 1098-1102.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatically'Generating Extraction Patterns from Untagged Text", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": null, "venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1044--1049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff. Automatically'Generating Extraction Pat- terns from Untagged Text. In Proceedings of the Thir- teenth National Conference on Artificial Intelligence, 1044-1049. The AAAI Press/MIT Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Classification in the KL-ONE knowledge representation system", "authors": [ { "first": "J", "middle": [ "G" ], "last": "Schmolze", "suffix": "" }, { "first": "T", "middle": [], "last": "Lipkis", "suffix": "" } ], "year": 1983, "venue": "Proceedings of 8th Int'l Joint Conference on Artificial Intelligence (IJCAI83)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.G. Schmolze and T. Lipkis. Classification in the KL- ONE knowledge representation system. Proceedings of 8th Int'l Joint Conference on Artificial Intelligence (IJCAI83), 1983.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning to extract text-based information from the world wide web", "authors": [ { "first": "S", "middle": [], "last": "Soderland", "suffix": "" } ], "year": null, "venue": "the Proceedings of the Third International Conference on Knowledge Discover# and Data Mining (KDD-97)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Soderland. Learning to extract text-based informa- tion from the world wide web. In the Proceedings of the Third International Conference on Knowledge Dis- cover# and Data Mining (KDD-97).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Understanding Subsumption and Taxonomy: A Framework for Progress", "authors": [], "year": 1991, "venue": "the Principles of Semantic Networks: Explorations in the Representation of Knowledge", "volume": "", "issue": "", "pages": "45--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Text REtrieval Conference. http://trec.nist.gov 1999 W.A. Woods. Understanding Subsumption and Taxon- omy: A Framework for Progress. In the Principles of Semantic Networks: Explorations in the Represen- tation of Knowledge, Morgan Kaufmann, San Mateo, Calif. 1991, pages 45-94.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Better way to Organize Knowledge", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Woods", "suffix": "" } ], "year": 1997, "venue": "Technical Report of Sun Microsystems Inc", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.A. Woods. A Better way to Organize Knowledge. Technical Report of Sun Microsystems Inc., 1997.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Wo,aN=l C\u00b0~Tr~ A Co.=i= ~. 
The knowledge classification diagram" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Merging a structure of concepts with WordNet" }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "A sample of concepts and relations acquired from the 5000 sentence corpus. Legend: continuous lines represent inverse proportional influence, dashed lines represent direct proportional influence, and dotted lines represent influence (the direction of the relationship was not specified in the text)." }, "TABREF2": { "html": null, "num": null, "content": "
", "type_str": "table", "text": "The classification is based on the simple idea that a compound concept [word, head] is ontologically subsumed by concept [head]. For example, mortgage_interest_rate is a kind of interest_rate, thus linked by a relation HYPERNYMY(interest_rate, mortgage_interest_rate)." }, "TABREF3": { "html": null, "num": null, "content": "
subsumes [word2, head2]. The subsumption may not always be a direct connection; sometimes it may consist of a chain of subsumption relations, since subsumption is (usually) a transitive relation (Woods 1991). An example is shown in Figure 2a. A particular case of this is when head1 is identical with head2.
2. Another case is when there is no direct subsumption relation in WordNet between word1 and word2, and/or head1 and head2, but there are common subsuming concepts for each pair. When such concepts are found, pick the most specific common subsumer (MSCS) concepts of word1 and word2, and of head1 and head2, respectively. Then form a concept [MSCS(word1, word2), MSCS(head1, head2)] and place [word1 head1] and [word2 head2] under it. This is exemplified in Figure 2b.
3. In all other cases, no subsumption relation is established between the two concepts. For example, we cannot say whether Asian_country discount_rate is more or less abstract than Japan interest_rate.
Procedure 4.3. Classify concept [word1 word2 head]. Several possibilities exist:
1. When there is already a concept [word2 head] in the knowledge base under the [head], then place [word1 word2 head] under concept [word2 head].
2. When there is already a concept [word1 head] in the knowledge base under the [head], then place [word1 word2 head] under concept [word1 head].
3. When both cases 1 and 2 are true, then place [word1 word2 head] under both concepts.
4. When neither [word1 head] nor [word2 head] is in the knowledge base, then place [word1 word2 head] under the [head].
The example in Figure 3 corresponds to case 3.
Figure 3: Classification of a compound concept with respect to its ~ concepts: components; radio components, automobile components; automobile radio components.
Since we do not deal here with the sentence semantics, it is not possible to completely determine the meaning of [word1 word2 head], as it may be either [((word1 word2) head)] or [(word1 (word2 head))], often depending on the sentence context. In the example of Figure 3 there is only one meaning, i.e. [(automobile radio) components]. However, in the case of [performance skiing equipment] there are two valid interpretations, namely [(performance skiing) equipment] and [performance (skiing equipment)].
Procedure 4.4. Classify a concept [word1, head] with respect to the concepts under [head]. The task here is to identify the most specific subsumer (MSS) from all the concepts under the head that subsumes [word1, head]. By default, [word1 head] is placed under [head]; however, since it may be more specific than other hyponyms of [head], a more complex classification analysis needs to be implemented.
", "type_str": "table", "text": "Figure 2: (a) In WordNet, Asian_country subsumes Japan and interest_rate subsumes discount_rate. (b) Country subsumes Japan and Germany, and interest_rate subsumes discount_rate and prime_interest_rate." }, "TABREF5": { "html": null, "num": null, "content": "
Relations | Lexico-syntactic Patterns | Examples
WordNet Relations
HYPERNYMY | NP1 [<be>] a kind of NP2 => HYPERNYMY(NP1,NP2)
New Relations
CAUSE | NP1 [<be>] cause NP2 => CAUSE(NP1,NP2)
INFLUENCE | NP1 impact on NP2 => INFLUENCE(NP1,NP2)
INFLUENCE | As NP1 vb, so <do> NP2 => INFLUENCE(NP1,NP2)
INFLUENCE | NP1 <be> associated with NP2 => INFLUENCE(NP1,NP2), INFLUENCE(NP2,NP1)
INFLUENCE | As/if/when NP1 vb1, NP2 vb2 + vb1, vb2 = antonyms / go in opposite directions => INFLUENCE(NP1,NP2)
INFLUENCE | the effect(s) of NP1 on/upon NP2 => INFLUENCE(NP1,NP2)
INFLUENCE | inverse relationship between NP1 and NP2 => INFLUENCE(NP1,NP2), INFLUENCE(NP2,NP1)
INFLUENCE | NP2 <be> function of NP1 => INFLUENCE(NP1,NP2)
INFLUENCE | NP1 (and thus NP2) => INFLUENCE(NP1,NP2)
", "type_str": "table", "text": "Thus, LIBOR is a kind of interest rate, as it is charged on deposits between banks in the Eurodollar market." }, "TABREF6": { "html": null, "num": null, "content": "
| a | b | c | d | e
Total concepts extracted with Procedure 1, concepts (NPs) | 773382833921.
Concepts found in WordNet | 2 | 0 | 1 | 0 | 2
Concepts found in on-line dictionaries but not in WordNet, with seed as head | 6 | 0 | 3 | 0 | 0
Concepts found in on-line dictionaries but not in WordNet, with seed not head | 7 | 0 | 1 | 1 | 1
Concepts accepted by human | 78 | 62 | 58 | 60 | 37
", "type_str": "table", "text": "Examples of lexico-syntactic patterns and semantic relations derived from the 5000 sentence corpus" }, "TABREF7": { "html": null, "num": null, "content": "
", "type_str": "table", "text": "Results showing the number of new concepts learned from the corpus related to (a) interest rate, (b) stock market, (c) inflation, (d) economic growth, and (e) employment." }, "TABREF8": { "html": null, "num": null, "content": "
", "type_str": "table", "text": "A part of the relationships derived from the 5000 sentence corpus" } } } }