{ "paper_id": "M92-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:01.505495Z" }, "title": "HUGHES RESEARCH LABORATORIE S TRAINABLE TEXT SKIMMER : MUC-4 TEST RESULTS AND ANALYSI S", "authors": [ { "first": "Stephanie", "middle": [ "E" ], "last": "Augus", "suffix": "", "affiliation": {}, "email": "august@sedl70.hac.com" }, { "first": "Charles", "middle": [ "P" ], "last": "Dolan", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "M92-1010", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "The performance, on a slot by slot basis, is, therefore, what one might expect : the pure set fills such as INCIDENT: TYPE and INCIDENT : STAGE OF EXECUTION show much better performance than the string fill s such as HUM TGT: NAME . Table 2 shows the summary rows of the official template-by-template results on TST4 . The complete official score report for TTS-MUC4 on TST4 can be found in Appendix G : Final Test Score Summaries . Performance was comparable on both sets of texts . ", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Table 1 shows the official template-by-template score results for the Hughes Trainable Text Skimmer use d for MUC-4 (TTS-MUC4) on TST3 . TI'S is a largely statistical system, using a set of Bayesian classifiers with the output of a shallow parser as features. (See the System Summary section of this volume for a detailed description o f TTS-MUC4", "sec_num": null }, { "text": "TTS-MUC4 uses Bayesian classifiers for each of the template slots . The general form for Bayesia n classifiers is to compute,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MUC-4 TEST SETTING S", "sec_num": null }, { "text": "Pr(ci If l A f2 . . . fn )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MUC-4 TEST SETTING S", "sec_num": null }, { "text": "where fi are textual features . For set fill slots, the Ci are the possible values (e .g . DEATH, SOME DAMAGE , etc.) . For the string fill slots, the Ci are yes or no answers to whether a particular item fills a slot, (e .g . HUMAN-TGT-NAME versus HUMAN-TGT-NAME-NOT). For typical Bayesian classifiers, the tunable parameter is th e prior probabilities for the Ci . In TTS-MUC4 we have two different settings, EQUI-PROS and REL-FREQ , respectively for probabilities that are equal for all classes and probabilities that reflect the relative frequency of classe s in the training data . EQUI-PROB favors recall, and REL-FREQ favors precision .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MUC-4 TEST SETTING S", "sec_num": null }, { "text": "In addition, for text applications, there is an issue as to whether one includes only those features present i n the text, or, also, those that are absent. In TTS-MUC4 we used two different settings, PRESENT an d PRESENT&FREQUENT, where PRESENT&FREQUENT considers all those features which are present and als o those that are absent, but which occur very frequently in the texts . The threshold for whether a feature wa s considered frequent was set so that, for each slot, approximately 30 features were considered frequent . 
In the TTS-MUC4 conceptual hierarchy there are over 400 potential features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MUC-4 TEST SETTINGS", "sec_num": null }, { "text": "For each slot, the parameter settings were optimized to balance recall and precision. The optimization was done using TST1 and TST2. Table 3 gives the parameter settings for each slot. Balancing precision and recall for string fill slots is difficult in TTS-MUC4. For example, in the training corpus, TTS-MUC4 detects over 4,000 potential HUMAN-TARGET-NAMES, but less than 10% of these are actual string fills.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "MUC-4 TEST SETTINGS", "sec_num": null }, { "text": "To compute the conditional probabilities, the MUC-3 development (DEV) corpus and the associated templates were used. Each sentence in the DEV corpus that contained a string fill for some template was used as a training sample. TTS detects features for important domain words (e.g. explosion, report, etc.), and also for phrases that may map into string fills. For each training sample, the presence or absence of each feature was examined to compute, for example, Pr_rf(:EXPLOSION-W | :PHYS-TGT-TYPE = :COMMERCIAL)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRAINING METHODOLOGY", "sec_num": null }, { "text": "The probability estimates using relative frequency, Pr_rf, are then combined using Bayes' rule on a new sentence to compute Pr(c_i | f_1 ∧ f_2 ∧ ... ∧ f_n). In addition to training of the Bayesian classifiers, the DEV corpus was used, exactly as in TTS-MUC3, to derive phrase patterns for potential string fills. For example, \"SIX JESUITS\" would drive the creation of the phrase (:NUMBER-W :RELIGIOUS-ORDER-W). The type of the string fill served as the semantic feature for the phrase, which is :CIVILIAN-DESCR in this example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRAINING METHODOLOGY", "sec_num": null }, { "text": "Improvement that occurred over time in TTS-MUC4 is attributable to two factors: the introduction of the Bayesian classifiers to replace the K-Nearest Neighbors technique from TTS-MUC3, and the tuning of the parameters of the Bayesian classifiers for each slot.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRAINING METHODOLOGY", "sec_num": null }, { "text": "All of the training for TTS-MUC4 is automated. As with TTS-MUC3, the only manual portion of the process is choosing the conceptual classes for the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRAINING METHODOLOGY", "sec_num": null }, { "text": "Two calendar months and approximately 2.5 person months were spent on enhancing the TTS-MUC3 system to create TTS-MUC4. TTS-MUC4 effort falls roughly into three categories: classifier evaluation, system training, and filter development.
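To make the classifier and its training concrete, here is a minimal sketch of the procedure described above: relative-frequency estimates Pr_rf(feature | class) are collected from training sentences and then combined with Bayes' rule to score Pr(c_i | f_1 ∧ f_2 ∧ ... ∧ f_n) under either EQUI-PROB or REL-FREQ priors and either the PRESENT or PRESENT&FREQUENT feature regime. This is not the TTS-MUC4 implementation (which worked from shallow-parser output and a conceptual-class lexicon); the function names, toy features, and add-one smoothing below are illustrative assumptions.

```python
from collections import Counter, defaultdict
from math import log

def train(samples):
    """Estimate relative-frequency conditionals Pr_rf(f | c) from
    (feature_set, class_label) training sentences."""
    class_counts = Counter(c for _, c in samples)
    feat_counts = defaultdict(Counter)            # feat_counts[c][f] = count
    for feats, c in samples:
        feat_counts[c].update(feats)

    def cond(f, c):
        # Add-one smoothing is an assumption; the paper does not say how
        # zero counts were handled.
        return (feat_counts[c][f] + 1) / (class_counts[c] + 2)

    return class_counts, cond

def classify(feats, class_counts, cond,
             priors="EQUI-PROB", tests="PRESENT", frequent=frozenset()):
    """Combine the conditionals with Bayes' rule to score each class
    and return the best one."""
    total = sum(class_counts.values())
    scores = {}
    for c in class_counts:
        # EQUI-PROB: equal priors (drop the prior term); REL-FREQ: class frequency
        score = 0.0 if priors == "EQUI-PROB" else log(class_counts[c] / total)
        for f in feats:                           # features PRESENT in the sentence
            score += log(cond(f, c))
        if tests == "PRESENT&FREQUENT":
            # also use the absence of the ~30 per-slot frequent features
            for f in frequent - set(feats):
                score += log(1.0 - cond(f, c))
        scores[c] = score
    return max(scores, key=scores.get)

# Toy usage for a set-fill slot such as PHYS-TGT-TYPE (data invented):
samples = [({":EXPLOSION-W", ":BANK-W"}, ":COMMERCIAL"),
           ({":BOMB-W", ":BANK-W"}, ":COMMERCIAL"),
           ({":EXPLOSION-W", ":EMBASSY-W"}, ":DIPLOMAT-OFFICE")]
class_counts, cond = train(samples)
print(classify({":BOMB-W", ":EMBASSY-W"}, class_counts, cond, priors="REL-FREQ"))
```

With EQUI-PROB the prior term drops out, so rare classes are proposed more readily (favoring recall); with REL-FREQ the frequent classes dominate, trading recall for precision, which matches the behavior noted above.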
Approximately 20% of our time was spent on developing and evaluating the performance of the Bayesian classifier, and tuning the parameters used in this classifier. This classifier replaced the K-Nearest Neighbor classifier previously employed in TTS-MUC3. 10% of the development effort focused on tuning other system parameters, such as the *fill-strength-threshold*, which provides a means for filtering out unlikely slot fillers. About 40% of our time was devoted to developing filters to improve the precision of the values of the template fillers, and evaluating their effects. Retraining of the system to take advantage of a modified lexicon and to accommodate the revised templates took up about 10% of the time. The remaining 20% of the effort was spent on developing code to extract information to fill the new and revised slots of the MUC-4 templates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ALLOCATION OF EFFORT", "sec_num": null }, { "text": "One limiting factor for the Hughes TTS-MUC4 system was time. The Bayesian classifier is effective for filling most slots, but the K-Nearest Neighbor classifier might provide better fills for others. However, time did not permit us to experiment enough to identify the best classifier to use for each slot. Another aspect of TTS to which we would like to have devoted more attention is dynamically weighting features retrieved from the knowledge base depending upon their relevance to the slot being processed. Our algorithm for grouping sentences into topics was responsible for many of our errors. Improving the slot-dependent weighting portion of the system would take a considerable amount of additional time, and would require that domain knowledge be added into the processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LIMITING FACTORS", "sec_num": null }, { "text": "The following enhancements are most relevant to the current MUC-oriented software: (1) filters for string fills based on linguistic knowledge, (2) reference resolution, and (3) better learning/pattern classification algorithms. TTS-MUC4 currently has a very limited amount of processing that is specialized for language. One of the features that we would have liked to detect in the MUC-4 corpus was the source of information in a story. Individuals who are the source of a report occurred frequently, and erroneously, as human targets. Another \"language specific\" portion we would like to add is reference resolution for string fills. TTS-MUC4 currently suffers in its precision score because it lists each referent for a filler several times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FUTURE WORK", "sec_num": null }, { "text": "Additional changes would make a more usable \"real system\", although they are not essential for the MUC task as it now stands. These include (1) the development of a user interface for corpus marking, and (2) integration with on-line data sources, such as map databases, to eliminate the burden of creating special data files for natural language processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FUTURE WORK", "sec_num": null }, { "text": "Currently, TTS only requires a lexicon and a training corpus with templates. Therefore, extension to terrorism in another locale or to a completely different domain would be easy.
However, once features are added to improve performance, as noted in Section 6 above, handling a new domain will be more difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRANSFERABILITY TO OTHER TASKS", "sec_num": null }, { "text": "TTS-MUC4 represents a small increase in performance beyond TTS-MUC3. TTS currently has very little processing specific to language; most of the processing is simple feature detection followed by pattern recognition algorithms. We believe that TTS-MUC4 represents a plateau in performance that will require more linguistic knowledge to increase performance. The goal for TTS, then, is to significantly increase performance without increasing development time for new applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LESSONS LEARNED", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Hughes Trainable Text Skimmer: description of the TTS system as used for MUC-3", "authors": [ { "first": "Charles", "middle": [ "P" ], "last": "Dolan", "suffix": "" }, { "first": "Seth", "middle": [ "R" ], "last": "Goldman", "suffix": "" }, { "first": "Thomas", "middle": [ "V" ], "last": "Cuda", "suffix": "" }, { "first": "Alan", "middle": [ "M" ], "last": "Nakamura", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Third Message Understanding Conference (MUC-3)", "volume": "", "issue": "", "pages": "21--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dolan, Charles P., Goldman, Seth R., Cuda, Thomas V., Nakamura, Alan M. Hughes Trainable Text Skimmer: description of the TTS system as used for MUC-3. Proceedings of the Third Message Understanding Conference (MUC-3). San Diego, California, 21-23 May 1991.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Hughes Trainable Text Skimmer: MUC-3 test results and analysis", "authors": [ { "first": "Charles", "middle": [ "P" ], "last": "Dolan", "suffix": "" }, { "first": "Seth", "middle": [ "R" ], "last": "Goldman", "suffix": "" }, { "first": "Thomas", "middle": [ "V" ], "last": "Cuda", "suffix": "" }, { "first": "Alan", "middle": [ "M" ], "last": "Nakamura", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Third Message Understanding Conference (MUC-3)", "volume": "", "issue": "", "pages": "21--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dolan, Charles P., Goldman, Seth R., Cuda, Thomas V., Nakamura, Alan M. Hughes Trainable Text Skimmer: MUC-3 test results and analysis. Proceedings of the Third Message Understanding Conference (MUC-3). San Diego, California, 21-23 May 1991.", "links": null } }, "ref_entries": { "TABREF0": { "text": ") .", "content": "
SLOT | POS ACT | COR PAR INC | ICR IPA | SPU MIS NON | REC PRE OVG FAL
template-id112 1061 6300100143 4901 56 59 4 0
inc-date109 1011 2215 241 22 15140 4861 27 29 4 0
inc-loc112871 11 39410 17133 58101 2735 3 8
inc-type112 1061 5580100143 4901 53 56 404
inc-stage112 1061 5904100143 4901 53 56 40 1 3
inc-instr-id33 14151011118 271271 1739 5 7
inc-instr-type52 14140210Cl8 46 10918 28 570
perp-inc-cat69 1011 280 10100163 31231 40 28 62 3 0
perp-ind-id85 871 125 19125151 49351 17 17 5 9
perp-org-id52 521 1207110133 33721 23 23 6 3
perp-org-conf52 5214213102133 33721 10 10 635
phys-tgt-id66 1121 132 1010218741741 21 127 8
phys-tgt-type66 1121 10
", "html": null, "type_str": "table", "num": null }, "TABREF2": { "text": "", "content": "", "html": null, "type_str": "table", "num": null }, "TABREF4": { "text": "", "content": "
", "html": null, "type_str": "table", "num": null }, "TABREF5": { "text": "( c,lft A f2 . . . f,, )", "content": "
SLOT | Priors | Tests
INCIDENT-TYPE | REL-FREQ | PRESENT
STAGE-OF-EXEC | REL-FREQ | PRESENT
INSTRUMENT-ID | EQUI-PROB | PRESENT&FREQUENT
INSTRUMENT-TYPE | REL-FREQ | PRESENT&FREQUENT
PERP-INDIV | EQUI-PROB | PRESENT
PERP-ORG | EQUI-PROB | PRESENT
PERP-CAT | EQUI-PROB | PRESENT
PERP-CONF | EQUI-PROB | PRESENT&FREQUENT
HUM-TGT-NAME | EQUI-PROB | PRESENT
HUM-TGT-DESCR | EQUI-PROB | PRESENT
HUM-TGT-TYPE | REL-FREQ | PRESENT
HUM-TGT-EFFECT | REL-FREQ | PRESENT
PHYS-TGT-ID | EQUI-PROB | PRESENT&FREQUENT
PHYS-TGT-TYPE | REL-FREQ | PRESENT
PHYS-TGT-EFFECT | REL-FREQ | PRESENT
", "html": null, "type_str": "table", "num": null }, "TABREF6": { "text": "Test run setting for the Bayesian classifiers .", "content": "", "html": null, "type_str": "table", "num": null } } } }