{
"paper_id": "M92-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:14.055975Z"
},
"title": "GE NLTOOLSET : MUC-4 TEST RESULTS AND ANALYSI S",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Rau",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laborator y GE Research and Developmen t Schenectady",
"institution": "",
"location": {
"postCode": "12301",
"region": "NY",
"country": "US A"
}
},
"email": ""
},
{
"first": "George",
"middle": [],
"last": "Krupka",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laborator y GE Research and Developmen t Schenectady",
"institution": "",
"location": {
"postCode": "12301",
"region": "NY",
"country": "US A"
}
},
"email": ""
},
{
"first": "Paul",
"middle": [],
"last": "Jacob",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laborator y GE Research and Developmen t Schenectady",
"institution": "",
"location": {
"postCode": "12301",
"region": "NY",
"country": "US A"
}
},
"email": ""
},
{
"first": "Ira",
"middle": [],
"last": "Sider",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laborator y GE Research and Developmen t Schenectady",
"institution": "",
"location": {
"postCode": "12301",
"region": "NY",
"country": "US A"
}
},
"email": ""
},
{
"first": "Lois",
"middle": [],
"last": "Childs",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laborator y GE Research and Developmen t Schenectady",
"institution": "",
"location": {
"postCode": "12301",
"region": "NY",
"country": "US A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports on the GE NLTooLSET customization effort for MUC-4, and analyzes th e results of the TST3 and TST4 runs .",
"pdf_parse": {
"paper_id": "M92-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports on the GE NLTooLSET customization effort for MUC-4, and analyzes th e results of the TST3 and TST4 runs .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We report on the GE results from the MUC-4 conference and provide an analysis of system performance . In general, MUC-4 was a very successful effort for GE . The NLTooLSET, a suite of natural language tex t processing tools designed for easy application in new domains, proved its mettle, as we were quickly able t o integrate the changes from the MUC-3 to the MUC-4 task .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "On the positive side, MUC-4 provided a thorough, fair test of system capabilities, and allowed us t o implement and test new strategies within the context of a task-driven system . Once again, the methodolog y of testing on a real task, along with the benefit of a common corpus, has produced advances in the field a s well as highlighting certain new aspects of text interpretation . One surprise was that we continued to make improvements in sentence-level parsing and interpretation, while at the end of MUC-3 we had suspected tha t improvements in parsing would not yield substantial improvements to our overall performance .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "On the negative side, major and significant improvements were not easy to make . Although improving the accuracy and coverage of the core language parsing mechanism accounted for some percentage of ou r improvements, the remainder of the gain in score is attributable to increases in the accuracy of the templat e post-filtering and to many small, incremental enhancements and modifications to the existing system . Thes e \"diminishing returns\" continue to stand in the way of vastly improved system performance . Although ther e are some major problems (such as world knowledge, event-based reasoning, and reference resolution) tha t can be said to account for much of the remaining error in MUC, it is not clear that MUC is really measurin g progress toward solving these major problems so much as progress on the many minor problems that ar e more easily solved .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "Our overall results on both TST3 and TST4 were very good in relation to other systems . Figure 1 summarize s our results on these tests .",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "In addition to these core results, Figure 2 summarizes our performance on the adjunct test . Finally, to put these runs in the context of our other results, Figure 3 illustrates how our system improve d over time, and puts the TST3 and TST4 scores in perspective . ",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "We were very pleased that our performance on the new time slice (TST4) was virtually indistinguishabl e from (even higher than) our performance on the test sample from the same time as the training set . We attribute this to the fact that our system was developed and tested for general portability across subjec t areas, application areas, and different types of language and text . We think this is clear evidence that ou r approach to text interpretation is not in any way geared or slanted toward the particulars of the trainin g set . Most of the work in the system is still done from core knowledge and basic linguistic principles . These comparison numbers also indicate our general reluctance to encode any knowledge or write an y code that was domain-specific and would interfere with general processing .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST3 -TST4 SCORE COMPARISO N",
"sec_num": null
},
{
"text": "In addition to the required testing, we performed one optional test, indicated in the HIGH-PREC rows of Figur e 1 . Our system could not produce significantly higher recall without effectively guessing, so we decided onl y to reconfigure the system to produce a high-precision result . First, we noticed that most of our errors were being introduced through the incorrect application of template merging decisions . There are fewer of these decisions when all the information about an event appears in a sentence . Also, our single sentence leve l Figure 3 : Improvemen t processing was more accurate than our multi-sentence processing, so this strategy was likely to produce hig h precision .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPTIONAL HIGH-PRECISION RU N",
"sec_num": null
},
{
"text": "For the high-precision configuration, we set the system to use only one sentence to fill the content o f each template, using the single sentence for each template that contained the most fills . This strategy is very crude, and more clever methods are likely to improve upon this . For example . we could perfor m selective merging of information from multiple sentences when there is a high degree of overlap between th e two . However, even this crude method produced significantly higher precision (15% higher on TST3 and 22 % higher on TST4) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPTIONAL HIGH-PRECISION RU N",
"sec_num": null
},
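{
"text": "To make this concrete, here is a minimal Python sketch of the single-sentence heuristic just described. It is an illustration, not the NLToolset's actual code; the representation of a template as a mapping from each source sentence to the slot fills that sentence contributed is invented for the example.\n\ndef best_single_sentence(fills_by_sentence):\n    # fills_by_sentence: {sentence_id: {slot: filler, ...}, ...}\n    # Keep only the fills contributed by the single sentence\n    # that filled the most slots; discard everything else.\n    best = max(fills_by_sentence, key=lambda s: len(fills_by_sentence[s]))\n    return fills_by_sentence[best]\n\nBecause every retained fill comes from one sentence, no cross-sentence merging decisions are made, which is where we observed most of our errors being introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPTIONAL HIGH-PRECISION RUN",
"sec_num": null
},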
{
"text": "We spent overall about 10 1/2 person-months on MUC-4, as compared with about 15 person-months o n MUC-3 . This time was divided as follows :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
{
"text": "2 mo: Knowledge Additions : New or altered patterns, grammar rules, activators, domain expectations , names, and places . Complete addition of all possible target fills . Addition of primary and support templates, lexicon, phrases, patterns and hierarchy .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
{
"text": "1 mo: New Place and Time Mechanism : A new and clean location and time handler was integrate d into the Toolset .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
{
"text": "1 mo: Answer Key Mechanism : Design and implementation of mechanism to use information extracted from a canonical, conceptual version of the entire answer key .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
{
"text": "2 mo: Parser Improvements : Parser recovery and improving attachment . We performed an in-depth study to attribute all errors in the TST3 run to components of the system . For TST3, we had a total of 24 missing templates and 33 spurious templates . For TST4, we had 13 missin g templates and 33 spurious templates . Moreover, 76% of the spurious points came from whole spuriou s templates, whereas 41% of the missing points came from missing templates . This indicates the the larges t single source of immediate improvement in our score should come from increasing the accuracy of ou r template filtering stage . Template filtering is the process when we determine after a template has been fille d out if it is spurious due to relevancy conditions .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
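{
"text": "As an illustration of this filtering stage, the Python sketch below discards filled templates that fail simple relevancy conditions. The particular conditions shown (a recognized incident type and at least one target or victim fill) are invented for the example and are not the system's actual relevancy tests.\n\nRELEVANT_TYPES = {\"BOMBING\", \"MURDER\", \"KIDNAPPING\", \"ATTACK\"}  # example set\n\ndef is_spurious(template):\n    # A filled template is judged spurious if its incident type is\n    # out of domain or it names neither a target nor a victim.\n    if template.get(\"incident_type\") not in RELEVANT_TYPES:\n        return True\n    return not (template.get(\"targets\") or template.get(\"victims\"))\n\ndef filter_templates(templates):\n    return [t for t in templates if not is_spurious(t)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},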
{
"text": "The Figure 4 summarizes the source of error in terms of the percentage of points between our score an d a perfect score .",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
{
"text": "Most of the code errors were due to inaccuracies in, the determination of event boundaries ; part of th e \"discourse module\" . In fact, 25% of our missing points come from inaccurate reference resolution, an d event splitting and merging problems . We hope to address these problems in the next improvements to th e NLTOOLSET .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFFORT",
"sec_num": null
},
{
"text": "Our method of training was to run our system over the messages in the development and TST1 corpus . We kept the TST2 set of messages and answers separate as a safeguard to over-training . We used the results of these runs to detect problems and determine where we needed additional effort .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "We experimented with using the answer key as an aid in the automatic acquisition of domain-specifi c knowledge . In particular, the entire development answer key was canonicalized by transforming all natura l language strings present as fillers of slots in the key to their conceptual heads . Second, generalizations were extracted to reflect reliable information on the habitual roles certain concepts play in the database domai n of the texts . This process was found to be useful in four places :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "Detecting Gaps in Knowledge : By sending all the strings present in the answer key through our natural language system, we can detect errors, gaps in knowledge and other problems within the domain o f the answer key . This process is a prerequisite to using the answer key, as it produces a canonical , conceptual version .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "Determining Valid Generalizations : Certain combinations of fills always occur together . These generalizations are automatically detected and used to prevent incorrect slot filling . For example, the terroris t organization SHINING PATH always carries out its terrorist activities in PERU .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "Determining Hard Constraints : Certain fillers make particularly good fillers for certain slots . For example, someone described in a text as a VICTIM is a much better filler for the TARGET slot than th e PERPETRATOR . These constraints serve to prevent inappropriate fillers from appearing .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "Encoding Specific, Recurring Events : In certain domains, and with certain types of texts, frequentl y recurring events can be encoded more specifically to aid in the accuracy of their interpretation .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "Anomaly Detection : Finally, a canonical answer key, when compiled into lists of unique fillers for each slot, allows for the easy detection of incorrect answers .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
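{
"text": "A compact Python sketch of two of these uses follows: mining always-co-occurring fills (the SHINING PATH / PERU example above) and compiling per-slot filler lists for anomaly detection. The answer key is modeled here as a list of slot-to-filler dicts, and the slot names are illustrative only, not the NLToolset's actual representation.\n\nfrom collections import defaultdict\n\ndef mine_key(answer_key):\n    # answer_key: list of canonicalized templates, each {slot: filler}\n    fillers = defaultdict(set)   # slot -> every filler seen in the key\n    org_locs = defaultdict(set)  # organization -> locations it occurs with\n    for template in answer_key:\n        for slot, fill in template.items():\n            fillers[slot].add(fill)\n        org, loc = template.get(\"perp_org\"), template.get(\"location\")\n        if org and loc:\n            org_locs[org].add(loc)\n    # An organization that only ever occurs with one location yields\n    # a generalization usable to prevent incorrect slot filling.\n    generalizations = {o: next(iter(ls)) for o, ls in org_locs.items() if len(ls) == 1}\n    return fillers, generalizations\n\ndef is_anomalous(slot, fill, fillers):\n    # Flag a proposed answer never observed for this slot in the key.\n    return fill not in fillers[slot]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING",
"sec_num": null
},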
{
"text": "Our initial test runs on the MUC-4 TST3 and TST4 showed a small increase (1 point each) in bot h recall and precision on TST3 and a negligible effect on TST4 from the answer key training . In particular , the combined (recall and precision) measure was 53 .93 without these data, and 54 .90 with the data for th e TST3 (same time period as the answer key) test . Recall went from 56 to 57 and precision went from 5 2 to 53 . Although these increments may seem small, our experience has been that any noticeable increase i n performance is significant at these levels of accuracy . That is, as systems make fewer and fewer mistakes i n interpreting texts, it becomes more and more difficult to find areas for any improvements .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
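{
"text": "For reference, the combined measure cited above is the F-measure used in MUC-4, $F_{\\beta} = \\frac{(\\beta^{2}+1)PR}{\\beta^{2}P+R}$, which with recall R and precision P weighted equally ($\\beta = 1$) reduces to $F = 2PR/(P+R)$. As an arithmetic check of the figures above (an editorial verification under that standard definition), R = 56 and P = 52 give $F = 2(56)(52)/(56+52) = 5824/108 \\approx 53.93$, matching the reported combined score; R = 57 and P = 53 give $6042/110 \\approx 54.93$, close to the reported 54.90, with the small difference presumably due to rounding of the underlying scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAINING",
"sec_num": null
},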
{
"text": "For the TST4 time slice, as we anticipated, there was no noticeable effect of answer key training . Asid e from the use of the knowledge to fill gaps in the system's knowledge base, the information in a novel set of messages does not intersect with the information extracted from an old set of messages . Thus, the answer key training neither helps nor hurts novel messages .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "In addition to this experiment, which used training based on an answer key to resolve template-leve l decisions, we tried several methods for corpus-based training to help with sentence-level interpretation . These experiments included corpus-based part-of-speech tagging, statistically-based information to help wit h attachment, and a \"last ditch\" method for guessing a parse where the parser produced a suspicious result . In some cases, these methods showed a marginal improvement in early tests . However, as we neared th e final MUC test, none of them showed any positive effect . We treat this as evidence that it is difficult to us e automated training to guide sentence-level interpretation when sentence-level accuracy is already very high .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAININ G",
"sec_num": null
},
{
"text": "In retrospect, we probably could have made additional improvements to performance if we had made significant changes to the mechanism that splits stories into events, and the mechanism that resolves reference s (including definite anaphora and multiple descriptions of the same object or event) . Aside from these areas , all the other portions of the NLToolset are working quite well .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RETROSPECTIVE ON THE TASK AND RESULT S",
"sec_num": null
},
{
"text": "The speed of our system, around 500/words per minute on this task, understates its real speed due t o a non-optimized configuration . Nonetheless, this speed is achieved on conventional hardware and is already way ahead of human performance . This suggests that this technology will be able to process large volume s of text . We were able to process TST3 in 1 hour and 28 minutes, and TST4 in 1 hour and 5 minutes .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RETROSPECTIVE ON THE TASK AND RESULT S",
"sec_num": null
},
{
"text": "We were similarly pleased that the sentence-level performance of the NLTooLSET was as good as it was . While we fixed minor problems with the lexicon, grammar, parser, and semantic interpreter, robustness o f linguistic processing did not seem to be a major problem . For MUC-3, we believed this was because because the domain was still quite narrow . It was much broader than MUCK-II, and the linguistic complexity is a challenge, but knowledge base and control issues are relatively minor because there are simply not that man y different ways that bombings, murders, and kidnappings occur . However, given the improvements we mad e to the sentence-level interpretation in MUC-4, we feel that the correct interpretation of individual sentences can have a significant effect on the overall accuracy of interpretation . This is partly due to our observatio n that the effects of parsing \"fan out\" to affect the accuracy of other components of the system . Also, w e changed other components to take advantage of the increased accuracy of the interpretations .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RETROSPECTIVE ON THE TASK AND RESULT S",
"sec_num": null
},
{
"text": "We estimate that about 70% of the knowledge encoded for this effort is reusable, whereas over 80% of th e code is reusable . This includes most of the improvements to the parser recovery (or 20% of the total effort) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REUSABILIT Y",
"sec_num": null
},
{
"text": "This task has proven our system's transportability, robustness and accuracy . The major sources of improve d performance were increasing the accuracy of the template filtering, the coverage and robustness of th e parser, and error recovery handler . The other improvements have come from the additive effect of man y little enhancements and fixes throughout the system (See the GE system summary paper in this volume) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LESSONS LEARNE D The GE Syste m",
"sec_num": null
},
{
"text": "There is a bit of a puzzle in the observation that our improvement seems to have come from doin g better at things we already could do well, while the error that remains seems to come predominately fro m problems we don ' t really have solutions for at all, like general reference resolution and reasoning abou t background information . One conclusion that we have drawn from this analysis is that our progress on thes e major problems will come not from new modules, but from adding new sources of knowledge to our existin g modules . This was not the approach we had taken in MUC-3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LESSONS LEARNE D The GE Syste m",
"sec_num": null
},
{
"text": "Our experience with MUC-4 has pointed out the need for much closer integration of the discourse (even t and reference resolution) components of the system with the control and core language understanding components. It is clear to us that increased performance will not come easily from separable modules .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LESSONS LEARNE D The GE Syste m",
"sec_num": null
},
{
"text": "These will continue to be the areas where we hope our system will improve over time .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LESSONS LEARNE D The GE Syste m",
"sec_num": null
},
{
"text": "We continue to believe that MUC is an interesting, realistic task and that it advances the state of the art i n natural language processing as well as providing a good test of system capabilities . The move from MUC-3 to MUC-4 produced some clear improvements in the test as well as the systems . For example, the emphasis on ALL TEMPLATES was clearly a better way of evaluating systems, whic h none of the sites seemed to recognize prior to MUC-3 . The minimal matching constraints prevented some of the rewards for overgeneration that were a problem in MUC-3 . The combined F-measure, while hidin g many of the interesting aspects of performance, at least gives an explicit basis for comparison .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and the MUC Task",
"sec_num": null
},
{
"text": "It would be nice if the price tag for these tests could be reduced, by reducing the time and effor t required to run through the mechanics of executing and evaluating each test, and perhaps making forma l evaluations less frequent . In addition, a new domain with a smaller amount of time for development migh t be more rewarding than repeatedly testing on the same task and domain . New tasks or domains with shor t preparation times : (1) minimize the work that each site can do that is task-specific, (2) allow new systems t o participate on equal footing with others, and (3) test transportability and general techniques by preventin g too much specific development or knowledge coding . The strategy of coming up with new domains coul d make it harder to show the overall progress of the field, but it is actually at least as hard to attribute progres s in the MUC-3 -MUC-4 sequence as it was in the MUCK-II -MUC-3 sequence, which represented a muc h more drastic shift in task and domain .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and the MUC Task",
"sec_num": null
},
{
"text": "One of the most promising new areas that has emerged from these evaluations is the ability, within a given system, to test new modules, strategies and configurations, as a controlled way of testing the impact o f a particular algorithm or strategy . We have learned not only from our own experiments of this sort, such as the comparison between the GE and CMU parsers (see the GE-CMU site report and system summary in thi s volume), the high-precision run, and the answer key experiment, but also from the controlled experiment s that other teams have done (such as, in previous MUC's, NYU's analysis of recovery strategies and SRI' s decomposition of errors) . This whole style of experimentation should be embraced as one of the mos t rewarding, non-competitive aspects of the evaluations .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and the MUC Task",
"sec_num": null
},
{
"text": "The GE system performed well, as we had hoped, on both TST3 and TST4 . The tests proved some of th e positive advances we had achieved, as well as surprising us somewhat by showing progress in areas wher e our system was already strong . Some of the experiments and analyses we did as part of the test were a s rewarding as the comparative results . The whole MUC experiment has thus opened up a new methodology that we are beginning to explore in using comparative and controlled testing to guide algorithmic research .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMMARY",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Performance on Adjunct Testin g",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "mo: TRUMPET upgrade : Revision of domain expectation mechanism to use structure sharing . 1/2 mo : MUC-4 Upgrade : Upgrade of MUC-specific mechanisms to be compatible with new MUC-4 format . Attribution of Error in TST 3 1 mo : Misc . Bug Fixing: Hundreds of small bugs were found and fixed . 2 mo : Scoring, Reporting : Meetings, reporting, incremental and final scoring, analysis and other overhead .",
"num": null,
"type_str": "figure"
}
}
}
}