{ "paper_id": "M98-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:16:02.124596Z" }, "title": "NYU: Description of the MENE Named Entity System as Used in MUC-7", "authors": [ { "first": "Andrew", "middle": [], "last": "Borthwick", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": { "addrLine": "715 Broadway", "postCode": "7th oor, 10003", "settlement": "New York", "region": "NY", "country": "USA" } }, "email": "fborthwic@cs.nyu.edu" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": { "addrLine": "715 Broadway", "postCode": "7th oor, 10003", "settlement": "New York", "region": "NY", "country": "USA" } }, "email": "sterling@cs.nyu.edu" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": { "addrLine": "715 Broadway", "postCode": "7th oor, 10003", "settlement": "New York", "region": "NY", "country": "USA" } }, "email": "agichtn@cs.nyu.edu" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": { "laboratory": "", "institution": "New York University", "location": { "addrLine": "715 Broadway", "postCode": "7th oor, 10003", "settlement": "New York", "region": "NY", "country": "USA" } }, "email": "grishmang@cs.nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "M98-1018", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "This paper describes a new system called Maximum Entropy Named Entity\" or MENE\" pronounced meanie\" which w as NYU's entrant in the MUC-7 named entity e v aluation. By working within the framework of maximum entropy theory and utilizing a exible object-based architecture, the system is able to make use of an extraordinarily diverse range of knowledge sources in making its tagging decisions. These knowledge sources include capitalization features, lexical features and features indicating the current t ype of text i.e. headline or main body. It makes use of a broad array of dictionaries of useful single or multi-word terms such as rst names, company names, and corporate su xes. These dictionaries required no manual editing and were either downloaded from the web or were simply obvious\" lists entered by hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "This system, built from o -the-shelf knowledge sources, contained no hand-generated patterns and achieved a result on dry run data which is comparable with that of the best statistical systems. Further experiments showed that when combined with handcoded systems from NYU, the University of Manitoba, and IsoQuest, Inc., MENE was able to generate scores which exceeded the highest scores thus-far reported by a n y system on a MUC evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Given appropriate training data, we believe that this system is highly portable to other domains and languages and have already achieved state-of-the-art results on upper-case English. 
We also feel that there are plenty of avenues to explore in enhancing the system's performance on English-language newspaper text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Although the system was ranked fourth out of the 14 entries in the N.E. evaluation, we were disappointed with our performance on the formal evaluation, in which we got an F-measure of 88.80. We believe that the deterioration in performance was mostly due to the shift in domains caused by training the system on airline disaster articles and testing it on rocket and missile launch articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Given a tokenization of a test corpus and a set of n (for MUC-7, n = 7) tags which define the name categories of the task at hand, the problem of named entity recognition can be reduced to the problem of assigning one of 4n + 1 tags to each token. For any particular tag x from the set of n tags, we could be in one of 4 states: x_start, x_continue, x_end, and x_unique. In addition, a token could be tagged as \"other\" to indicate that it is not part of a named entity. For instance, we would tag the phrase \"Jerry Lee Lewis flew to Paris\" as \"person_start, person_continue, person_end, other, other, location_unique\". This approach is essentially the same as that of [7].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null }, { "text": "The 29 tags of MUC-7 form the space of \"futures\" for a maximum entropy formulation of our N.E. problem. A maximum entropy solution to this, or any other similar problem, allows the computation of P(f|h) for any f from the space of possible futures, F, and for every h from the space of possible histories, H. A \"history\" in maximum entropy is all of the conditioning data which enables you to make a decision among the space of futures. In the named entity problem, this could be broadly viewed as all information derivable from the test corpus relative to the current token (i.e., the token whose tag you are trying to determine).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null }, { "text": "The computation of P(f|h) in M.E. is dependent on a set of binary-valued \"features\" which, hopefully, are helpful in making a prediction about the future. Given a set of features and some training data, the maximum entropy estimation process produces a model in which every feature g_i has associated with it a parameter α_i. This allows us to compute the conditional probability by combining the parameters multiplicatively as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null }, { "text": "P(f|h) = \\frac{\\prod_i \\alpha_i^{g_i(h,f)}}{Z(h)} \\quad (2) \\qquad\\qquad Z(h) = \\sum_f \\prod_i \\alpha_i^{g_i(h,f)} \\quad (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null }, { "text": "The maximum entropy estimation technique guarantees that for every feature g_i, the expected value of g_i according to the M.E. model will equal the empirical expectation of g_i in the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null }, { "text": "More complete discussions of M.E., including a description of the M.E. estimation procedure and references to some of the many new computational linguistics systems which are successfully using M.E., can be found in the following useful introduction: [5].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null },
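{ "text": "To make equations (2) and (3) concrete, the following is a minimal sketch, in C++ with hypothetical type names, of how the alpha values of the firing features combine multiplicatively and are then normalized over the space of futures. MENE itself delegates this computation to the M.E. toolkit described below, so this is an illustration of the formula rather than the system's actual code.

#include <functional>
#include <vector>

// Simplified history: the conditioning data for the current token.
struct History {
    std::vector<int> lexicalIndices; // vocabulary indices of w_-2 .. w_2
    int section;                     // e.g. 0 = preamble, 1 = main body of text
};

// A feature pairs a binary predicate g_i(h, f) with its learned weight alpha_i.
struct Feature {
    std::function<bool(const History&, int)> g;
    double alpha;
};

// Unnormalized score: the product of alpha_i over all features with g_i(h, f) = 1.
double score(const std::vector<Feature>& feats, const History& h, int f) {
    double s = 1.0;
    for (const Feature& feat : feats)
        if (feat.g(h, f)) s *= feat.alpha;
    return s;
}

// Equation (2): P(f|h) = score(h, f) / Z(h), with Z(h) as in equation (3).
double conditionalProb(const std::vector<Feature>& feats, const History& h,
                       int f, int numFutures) {
    double z = 0.0;
    for (int fut = 0; fut < numFutures; ++fut)
        z += score(feats, h, fut); // Z(h) sums the unnormalized scores of all futures
    return score(feats, h, f) / z;
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null },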
{ "text": "As many authors have remarked, though, the key thing about M.E. is that it allows the modeler to concentrate on finding the features that characterize the problem while letting the M.E. estimation routine worry about assigning relative weights to the features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAXIMUM ENTROPY", "sec_num": null }, { "text": "MENE consists of a set of C++ and Perl modules which forms a wrapper around an M.E. toolkit [6] which computes the values of the alpha parameters of equation (2) from a pair of training files created by MENE. MENE's flexibility is due to the fact that it can incorporate just about any binary-valued feature which is a function of the history and future of the current token. In the following sections, we will discuss each of MENE's feature classes in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYSTEM ARCHITECTURE", "sec_num": null }, { "text": "While all of MENE's features have binary-valued output, the \"binary\" features are features whose \"history\" can be considered to be either on or off for a given token. Examples are \"the token begins with a capitalized letter\" or \"the token is a four-digit number\". The binary features which MENE uses are very similar to those used in BBN's Nymble system [1]. Figure 1 gives an example of a binary feature.", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 360, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Binary Features", "sec_num": null }, { "text": "To create a lexical history, the tokens at w_-2 ... w_2 (where the current token is denoted as w_0) are compared with the vocabulary and their vocabulary indices are recorded. A more subtle feature picked up by MENE: preceding word is \"to\" and future is \"location_unique\". Given the domain of the MUC-7 training data, \"to\" is a weak indicator, but a real one. This is an example of a feature which MENE can make use of but which the constructor of a hand-coded system would probably regard as too risky to incorporate. This feature, in conjunction with other weak features, can allow MENE to pick up names that other systems might miss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Features", "sec_num": null }, { "text": "The bulk of MENE's power comes from these lexical features. A version of the system which stripped out all features other than section and lexical features achieved a dry run F-score of 88.13. This is very encouraging because these features are completely portable to new domains, since they are acquired with absolutely no human intervention or reference to external knowledge sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Features", "sec_num": null }, { "text": "MENE has features which make predictions based on the current section of the article, like \"Date\", \"Preamble\", and \"Text\". Since section features fire on every token in a given section, they have very low precision, but they play a key role by establishing the background probability of the occurrence of the different futures. For instance, in NYU's evaluation system, the alpha value assigned to the feature which predicts \"other\" given a current section of \"main body of text\" is 7.9 times stronger than that of the feature which predicts \"person_unique\" in the same section. Thus the system predicts \"other\" by default.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section Features", "sec_num": null },
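{ "text": "As a concrete illustration of the binary, lexical, and section feature classes described above, the following minimal C++ sketch instantiates one feature of each class as a predicate on a (history, future) pair. The history and future encodings are hypothetical; the real system generates such features automatically rather than coding them by hand.

#include <cctype>
#include <string>
#include <vector>

// Simplified history: the token window w_-2 .. w_2 plus the current section.
struct History {
    std::vector<std::string> window; // window[2] is the current token w_0
    std::string section;             // \"Date\", \"Preamble\", \"Text\", ...
};

// Hypothetical encoding of a few of the 29 futures.
enum Future { PERSON_START, PERSON_UNIQUE, LOCATION_UNIQUE, OTHER };

// Binary feature: the current token begins with a capitalized letter
// and the future is person_start.
bool binaryCapFeature(const History& h, Future f) {
    const std::string& w0 = h.window[2];
    return !w0.empty() && std::isupper((unsigned char)w0[0]) && f == PERSON_START;
}

// Lexical feature: the preceding word w_-1 is \"to\" and the future is
// location_unique (the weak but real indicator discussed above).
bool lexicalToFeature(const History& h, Future f) {
    return h.window[1] == \"to\" && f == LOCATION_UNIQUE;
}

// Section feature: fires on every token of the main body, predicting \"other\";
// features like this establish the background probabilities of the futures.
bool sectionTextFeature(const History& h, Future f) {
    return h.section == \"Text\" && f == OTHER;
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section Features", "sec_num": null },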
{ "text": "Multi-word dictionaries are an important element of MENE. A pre-processing step summarizes the information in the dictionary on a token-by-token basis by assigning to every token one of the following five tags for each dictionary: start, continue, end, unique, other. For instance, if \"British Airways\" was in our dictionary, a dictionary feature would see the phrase \"on British Airways Flight 962\" as \"other, start, end, other, other\". Note that we don't have to worry about words appearing in the dictionary which are commonly used in another sense. For example, we can leave dangerous-looking names like \"Storm\" in the first-name dictionary, because whenever the first-name feature fires on \"Storm\", the lexical feature for \"Storm\" will also fire and, assuming that the use of \"Storm\" as \"other\" exceeded its use as \"person_start\", we can expect that the lexical feature will have a high enough alpha value to outweigh the dictionary feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Features", "sec_num": null }, { "text": "For NYU's official entry in the MUC-7 evaluation, MENE took in the output of a significantly enhanced version of the traditional, hand-coded \"Proteus\" named-entity tagger which we entered in MUC-6 [2]. In addition, subsequent to the evaluation, the University of Manitoba [4] and IsoQuest, Inc. [3] shared with us the outputs of their systems on our training corpora as well as on various test corpora. The output sent to us was the standard MUC-7 output, so our collaborators didn't have to do any special processing for us. These systems were incorporated into MENE by a fairly simple process of token alignment, which resulted in the \"futures\" produced by the three external systems becoming three different \"histories\" for MENE. It is important to note that MENE has features which predict a different future than the future predicted by the external system. This can be seen as the process by which MENE learns the errors which the external system is likely to make. An example of this is that on the evaluation system the feature which predicted person_unique given a tag of person_unique by Proteus had only a 76% higher weight than the feature which predicted person_start given person_unique. In other words, Proteus had a tendency to chop off multi-word names at the first word. MENE learned this and made it easy to override Proteus in this way. Given proper training data, MENE can pinpoint and selectively correct the weaknesses of a hand-coded system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "External Systems Features", "sec_num": null }, { "text": "Features are chosen by a very simple method. All possible features from the classes we want included in our model are put into a feature \"pool\". For instance, if we want lexical features in our model which activate on a range of token_-2 ... token_2, our vocabulary has a size of V, and we have 29 futures, we will add 5 * (V + 1) * 29 lexical features to the pool. The (V + 1) term comes from the fact that we include all words in the vocabulary plus the unknown word. From this pool, we then select all features which fire at least three times on the training corpus, as sketched below. Note that this algorithm is entirely free of human intervention. Once the modeler has selected the classes of features, MENE will both select all the relevant features and train the features to have the proper weightings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FEATURE SELECTION", "sec_num": null },
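{ "text": "A minimal sketch of this selection step, in C++ with a hypothetical interface in which candidate features are keyed by an integer id and fires(i, t) answers whether candidate i fires on training token t:

#include <cstddef>
#include <functional>
#include <vector>

// fires(i, t): does candidate feature i fire on training token t?
// (hypothetical interface; the pool itself is generated mechanically
// from the feature classes chosen by the modeler)
using FiresFn = std::function<bool(std::size_t, std::size_t)>;

// Keep every candidate that fires at least three times on the training
// corpus; no human intervention is involved.
std::vector<std::size_t> selectFeatures(std::size_t poolSize,
                                        std::size_t corpusSize,
                                        const FiresFn& fires) {
    std::vector<std::size_t> selected;
    for (std::size_t i = 0; i < poolSize; ++i) {
        int count = 0;
        for (std::size_t t = 0; t < corpusSize && count < 3; ++t)
            if (fires(i, t)) ++count;
        if (count == 3) selected.push_back(i); // met the frequency cutoff
    }
    return selected;
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FEATURE SELECTION", "sec_num": null },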
{ "text": "After having trained the features of an M.E. model and assigned the proper weight (alpha value) to each of the features, decoding (i.e., \"marking up\") a new piece of text is a fairly simple process of tokenizing the text and doing various preprocessing steps like looking up words in the dictionaries. Then, for each token, we check each feature to see whether it fires, and combine the alpha values of the firing features according to equation (2). Finally, we run a Viterbi search to find the highest-probability path through the lattice of conditional probabilities which doesn't produce any invalid tag sequences (for instance, we can't produce the sequence person_start, location_end). Further details on the Viterbi search can be found in [7].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DECODING", "sec_num": null },
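{ "text": "A minimal sketch of this search, in C++ with hypothetical interfaces: prob[t][f] holds P(f|h_t) computed via equation (2) for token t, and valid(prev, next) encodes the legal tag transitions (so that, e.g., person_start followed by location_end is rejected):

#include <cstddef>
#include <functional>
#include <vector>

// Viterbi search over the lattice of conditional probabilities.
// Returns the highest-probability tag path that uses only valid transitions.
std::vector<int> viterbi(const std::vector<std::vector<double>>& prob,
                         const std::function<bool(int, int)>& valid) {
    const std::size_t T = prob.size();    // number of tokens
    const std::size_t F = prob[0].size(); // number of futures (29 for MUC-7)
    std::vector<std::vector<double>> best(T, std::vector<double>(F, 0.0));
    std::vector<std::vector<int>> back(T, std::vector<int>(F, -1));
    best[0] = prob[0];
    for (std::size_t t = 1; t < T; ++t)
        for (std::size_t f = 0; f < F; ++f)
            for (std::size_t p = 0; p < F; ++p)
                if (valid((int)p, (int)f) && best[t - 1][p] * prob[t][f] > best[t][f]) {
                    best[t][f] = best[t - 1][p] * prob[t][f];
                    back[t][f] = (int)p; // remember the best valid predecessor
                }
    // Trace back from the best final tag.
    std::size_t argmax = 0;
    for (std::size_t f = 1; f < F; ++f)
        if (best[T - 1][f] > best[T - 1][argmax]) argmax = f;
    std::vector<int> path(T);
    path[T - 1] = (int)argmax;
    for (std::size_t t = T - 1; t > 0; --t)
        path[t - 1] = back[t][path[t]];
    return path;
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DECODING", "sec_num": null },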
{ "text": "MENE's maximum entropy training algorithm gives it reasonable performance with moderate-sized training corpora or few information sources, while allowing it to really shine when more training data and information sources are added. Table 2 shows MENE's performance on the within-domain corpus from MUC-7's dry run as well as the out-of-domain data from MUC-7's formal run. All systems shown were trained on 350 aviation disaster articles (this training corpus consisted of about 270,000 words, which our system turned into 321,000 tokens). Table 2: System combinations on unseen data from the MUC-7 dry-run and formal test sets", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 2", "ref_id": null }, { "start": 538, "end": 545, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Note the smooth progression of the dry run scores as more information is added to the system. Also note that, when combined under MENE, the three weakest systems (MENE, Proteus, and Manitoba) outperform the strongest single system of the group, IsoQuest's. Finally, the top dry-run score of 97.12 from combining all three systems seems to be competitive with human performance: according to results published elsewhere in this volume, human performance on the MUC-7 formal run data was in a range of 96.95 to 97.60. Even better is the score of 97.38 shown in Table 3 below, which we achieved by adding an additional 75 articles from the formal-run test corpus into our training data. In addition to being an outstanding result, this figure shows MENE's responsiveness to good training material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": null }, { "text": "The formal evaluation involved a shift in topic which was not communicated to the participants beforehand: the training data focused on airline disasters, while the test data was on missile and rocket launches. MENE fared much more poorly on this data than it did on the dry run data. While our performance was still reasonably good, we feel that it is necessary to view this number as a cross-domain portability result rather than an indicator of how the system can do on unseen data within its training domain. In addition, the progression of scores of the combined systems was less smooth. Although MENE improved the Manitoba and Proteus scores dramatically, it left the IsoQuest score essentially unchanged. This may have been due to the tremendous gap between the MENE-only and IsoQuest-only scores. Also, there was no improvement between the MENE + Proteus + IsoQuest score and the score for all four systems. We suspect that this was due to the relatively low precision of the Manitoba system on formal-run data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": null }, { "text": "We also did a series of runs to examine how the systems performed on the dry run corpus with different amounts of training data. These experiments are summarized in Table 3. A few conclusions can be drawn from this data. First of all, MENE needs at least 20 articles of tagged training data to get acceptable performance on its own. Secondly, there is a minimum amount of training data which is needed for MENE to improve an external system. For Proteus and the Manitoba system, this number seems to be around 80 articles. Since the IsoQuest system was stronger to start with, MENE required 150 articles to show an improvement. MENE has also been run against all-uppercase data. On this we achieved formal run F-measures of 77.98 and 82.76, and dry run F-measures of 88.19 for the MENE-only system and 91.38 for the MENE + Proteus system. The formal run numbers suffered from the same problems as the mixed-case system, but the combined dry run number matches the best currently published result [1] on all-caps data. We have put very little effort into optimizing MENE on this type of corpus and believe that there is room for improvement here.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 171, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Systems", "sec_num": null }, { "text": "MENE is a very new and, we feel, still immature system. Work started on the system in October 1997, and the system described above was not fully in place until mid-February 1998, about three weeks before the evaluation. We believe that we can push the score of the MENE-only system higher by adding long-range reference-resolution features to allow MENE to profit from terms and their acronyms which it has correctly tagged elsewhere in the corpus. We would also like to explore compound features (i.e., feature A fires if features B and C both fire) and more sophisticated methods of feature selection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSIONS AND FUTURE WORK", "sec_num": null }, { "text": "Nevertheless, we believe that we have already demonstrated some very useful results. Within-domain scores for MENE-only were good, and this system is highly portable, as we have already demonstrated with our result on upper-case English text. Porting MENE can be done with very little effort: our result on running MENE with only lexical and section features shows that it isn't even necessary to provide it with dictionaries to generate an acceptable result. We intend to port the system to Japanese N.E. to further demonstrate MENE's flexibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSIONS AND FUTURE WORK", "sec_num": null }, { "text": "However, we believe that the within-domain results on combining MENE with other systems are some of the most intriguing. We would hypothesize that, given sufficient training data, any hand-coded system would benefit from having its output passed to MENE as a final step. MENE also opens up new avenues for collaboration whereby different organizations could focus on different aspects of the problem of N.E. recognition, with the maximum entropy system acting as an arbitrator. MENE also offers the prospect of achieving very high performance with very little effort.
Since MENE starts out with a fairly high base score just on its own, we speculate that a MENE user could construct a hand-coded system which focused only on MENE's weaknesses, while skipping the areas in which MENE is already strong.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSIONS AND FUTURE WORK", "sec_num": null }, { "text": "Finally, one can imagine a large corporation or government agency acquiring licenses to several different N.E. systems, generating some training data, and then combining it all under a MENE-like system. We have shown that this approach can yield performance which is competitive with that of a human tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSIONS AND FUTURE WORK", "sec_num": null }, { "text": "Sekine, S. NYU system for Japanese NE - MET2. In Proceedings of the Seventh Message Understanding Conference (MUC-7), 1998.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Nymble: a high-performance learning name-finder", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Bikel", "suffix": "" }, { "first": "S", "middle": [], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1997, "venue": "Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikel, D. M., Miller, S., Schwartz, R., and Weischedel, R. Nymble: a high-performance learning name-finder. In Fifth Conference on Applied Natural Language Processing, 1997.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The NYU system for MUC-6 or where's the syntax?", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R. The NYU system for MUC-6 or where's the syntax? In Proceedings of the Sixth Message Understanding Conference, November 1995, Morgan Kaufmann.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "IsoQuest: Description of the NetOwl(TM) extractor system as used in MUC-7", "authors": [ { "first": "G", "middle": [ "R" ], "last": "Krupka", "suffix": "" }, { "first": "K", "middle": [], "last": "Hausman", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Seventh Message Understanding Conference MUC-7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krupka, G. R., and Hausman, K. IsoQuest: Description of the NetOwl(TM) extractor system as used in MUC-7. In Proceedings of the Seventh Message Understanding Conference (MUC-7), 1998.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using collocation statistics in information extraction", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Seventh Message Understanding Conference MUC-7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. Using collocation statistics in information extraction.
In Proceedings of the Seventh Message Understanding Conference (MUC-7), 1998.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A simple introduction to maximum entropy models for natural language processing", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, A. A simple introduction to maximum entropy models for natural language processing. Tech. Rep. 97-08, Institute for Research in Cognitive Science, University of Pennsylvania, May 1997.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Maximum entropy modeling toolkit, release 1.6 beta", "authors": [ { "first": "E", "middle": [ "S" ], "last": "Ristad", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ristad, E. S. Maximum entropy modeling toolkit, release 1.6 beta, February 1998. Includes documentation which has an overview of MaxEnt modeling.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "For instance, one of our features is g(h, f)" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "g(h, f) = 1 if Lexical_View(token_-1(h)) = \"Mr\" and f = person_unique; 0 otherwise. Correctly predicts: \"Mr Jones\"." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "The external system features can query this data in a window of w_-1 ... w_1 around the current token. ... \"Richard M. Nixon\", in a case where Proteus has correctly tagged \"Richard\"." }, "TABREF0": { "type_str": "table", "html": null, "content": "
Dictionary | Number of Entries | Data Source | Examples
first names | 1245 | www.babyname.com | John, Julie, April
corporate names | 10300 | www.marketguide.com | Exxon Corporation
corporate names without suffixes | 10300 | \"corporate names\" processed through a perl script | Exxon
colleges and universities | 1225 | http://www.utexas.edu/world/univ/alpha/ | New York University; Oberlin College
corporate suffixes | 244 | Tipster resource | Inc.; Incorporated; AG
dates and times | 51 | hand entered | Wednesday, April, EST, a.m.
2-letter state abbreviations | 50 | www.usps.gov | NY, CA
world regions | 14 | www.yahoo.com | Africa, Pacific Rim
", "text": "The following table lists the dictionaries used by MENE in the MUC-7 evaluation:", "num": null }, "TABREF1": { "type_str": "table", "html": null, "content": "", "text": "Dictionaries used in MENE", "num": null }, "TABREF3": { "type_str": "table", "html": null, "content": "
Systems | 425 | 350 | 250 | 150 | 100 | 80 | 40 | 20 | 10 | 5
MENE | 92.94 | 92.20 | 91.32 | 90.64 | 89.17 | 87.85 | 84.14 | 80.97 | 76.43 | 63.13
MENE + Proteus | 95.73 | 95.61 | 95.56 | 94.46 | 94.30 | 93.44 | 91.69 | - | - | -
MENE + Manitoba | 95.60 | 95.49 | 95.26 | 94.86 | 94.50 | 94.15 | 93.06 | - | - | -
MENE + IsoQuest | 96.73 | 96.55 | 96.70 | 96.55 | 96.11 | - | - | - | - | -
ME + Pr + Ma + IQ | 97.38 | 97.12 | - | - | - | - | - | - | - | -
", "text": "", "num": null } } } }