{
"paper_id": "M92-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:05.886358Z"
},
"title": "OVERVIEW OF THE FOURTH MESSAGE UNDERSTANDIN G EVALUATION AND CONFERENC E",
"authors": [
{
"first": "Beth",
"middle": [
"M"
],
"last": "Sundheim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Naval Command, Control, and Ocean Surveillance Cente r RDT&E Division (NRaD",
"location": {}
},
"email": "sundheim@nosc.mil"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "M92-1001",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "The Fourth Message Understanding Conference (MUC-4) is the latest in a serie s of conferences that concern the evaluation of natural language processing (NLP ) systems . These conferences have reported on progress being made both in th e development of systems capable of analyzing relatively short English texts and i n the definition of a rigorous performance evaluation methodology . MUC-4 wa s preceded by a period of intensive system development by each of the participatin g organizations and blind testing using materials prepared by NRaD and SAIC tha t are described in this paper, other papers in this volume, and the MUC-3 proceedings [1] .",
"cite_spans": [
{
"start": 645,
"end": 648,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "The overall objective of the evaluations is to advance our understanding of th e merits of current text analysis techniques, as applied to the performance of a realistic information extraction task . As a task, information extraction require s \"understanding\" of the texts, but it presents a more limited challenge than would a task requiring production of an in-depth representation of the contents o f complete texts . The inputs to the analysis/extraction process consist of naturallyoccurring newswire texts that were obtained in electronic form . The outputs of th e process are a set of templates or semantic frames resembling the contents of a partially formatted database .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "MUC-3 and MUC-4 offer benchmarks for the field of NLP in general and fo r information extraction technology in particular . One of the fundamental ways i n which MUC-3 and MUC-4 are distinct from earlier efforts is in their choice of texts : MUC-3 and MUC-4 made use of news articles on the subject of Latin America n terrorism, whereas the previous conferences had made use of naval tactica l message narratives [2] . The MUC-4 evaluation and conference featured a n enhanced evaluation methodology, greater participation, and significantly mor e conclusive results than those recorded in the MUC-3 proceedings .",
"cite_spans": [
{
"start": 413,
"end": 416,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "Evaluating end-to-end systems in the context of a common task helps in severa l ways to bridge the gap between research and technology . First, it makes it easier for both technology producers and technology consumers to understand an d appreciate the value of the methods that are being explored and applied .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "It also formerly the Naval Ocean Systems Center (NOSC) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "serves as an example of achievable results and inspires ideas for real-life applications such as statistical analysis or trends analysis of world events , routing/retrieval of texts of personal interest, and feeding of data to expert system s and database management systems . Finally, it encourages the development o f large, experimental testbed systems in 'which to conduct research, and th e evaluation results can provide insight into research bottlenecks that are impedin g the development of full-scale, usable systems .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "This paper presents an overall view of the MUC-4 evaluation and, to a larg e extent, reflects the content of introductory presentations made at the conference . This paper is also an overview of the conference proceedings, which include s papers contributed by the organizations that participated in the evaluation (Part s II and III) and by individuals who were involved in designing aspects of th e evaluation (Part I) . The ordering of papers does not necessarily correspond to th e order in which the presentations were made during the conference . Th e proceedings also includes a number of appendices (Part IV) containing material s pertinent to the evaluation .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTIO N",
"sec_num": null
},
{
"text": "Seventeen systems were evaluated for MUC-4, versus 15 for MUC-3 . Participation in MUC demands a great commitment of resources over a n extended period of time . The actual effort expended for MUC-4 ranged from les s than two person-months to over twelve . For the veteran groups, this effort was i n addition to the effort spent preparing for MUC-3 . Given the commitment require d and the limited amount of funding that was available to help support the efforts, i t is not surprising that several MUC-3 groups were unable to continue participatio n in MUC-44 . What is surprising is that there were seven new MUC-4 participants . These include three organizations currently working under separate DARP A contracts in the area of information extraction, namely Brandeis Universit y (Waltham, MA), Carnegie Mellon University (Pittsburgh, PA), and New Mexico Stat e 2 formerly Unisys Center for Advanced Information Technolog y 3 formerly Synchronetics, Inc . 4 Those MUC-3 participants that were unable to participate in the MUC-4 evaluation ar e Advanced Decision Systems (Mountain View, CA), General Telephone and Electronics (Mountai n View, CA), Intelligent Text Processing, Inc . (Santa Monica, CA), the University of Nebraska (Lincoln, NE), and the University of Southwest Louisiana (Lafayette, LA) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION PARTICIPANTS",
"sec_num": null
},
{
"text": "University (Las Cruces, N M ) .~ New participants also included the MITRE Corp. (Bedford, MA), Systems Research and Applications (Arlington, VA), the University of Michigan (Ann Arbor, MI), and the University of Southern California (Los Angeles, CA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EVALUATION PARTICIPANTS",
"sec_num": null
},
{
"text": "Preparations for MUC-4 were made starting in October, 1991, the call for participation was issued in December, and the system development phase was well underway by February, 1992. A dry run of the evaluation was conducted in late March, final testing was done in late May, and the conference was held in mid-~u n e . 6 The program committee7 approved an ambitious plan for updating various aspects of the MUC-3 evaluation design for use for MUC-4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIFFERENCES BETWEEN THE MUC-3 AND MUC-4 EVALUATIONS",
"sec_num": null
},
{
"text": "Changes to the task definition, corpus, measures of performance, and test protocols were made in order to provide * greater focus on the issue of spurious data generation; * isolation of text filtering performance; * better isolation of language analysis performance; * assessment of system independence from the training data; * assessment of system development progress since MUC-3; * more consistent scoring; * means to make valid score comparisons among systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DIFFERENCES BETWEEN THE MUC-3 AND MUC-4 EVALUATIONS",
"sec_num": null
},
{
"text": "The MUC-3 measures of performance implicitly encouraged participants to strive to develop their systems t o achieve high recall at the expense of high ~v e r~e n e r a t i o n .~ A few changes were made to the template scoring software to make the generation of spurious data more apparent. One of these changes focuses attention on overgeneration at the slot level (generating more slot values than were expected), while the others focus attention on overgeneration at the template level (generating more templates than were expected).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greater Focus on the Issue of Spurious Data Generation",
"sec_num": null
},
{
"text": "To address the spurious slot-value issue, an additional method of assessing penalties for missing and spurious data (called the \"Matched/Spurious\" method) was incorporated, completing the picture provided by the three that had been developed for MUC-3. To address the spurious template issue, a preliminary step in the alignment of response templates with key templates was implemented that requires that minimal \"content-based mapping conditions\" be met in order for alignment to occur. Response templates that fail to meet these minimal conditions New Mexico State University teamed with Brandeis University for MUC-4, and Carnegie Mellon University teamed with General Electric. 6 The conference was hosted by PRC, Inc. at their conference center in McLean, VA. 7 The MUC-4 program committee included B. Sundheim (NRaD), chair; N. Chinchor (SAIC); R. Grishman (NYU); J. Hobbs (SRI); D. Lewis (U Chicago); L. Rau (GE); C. Weir (Paramax).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greater Focus on the Issue of Spurious Data Generation",
"sec_num": null
},
{
"text": "Readers unfamiliar with the usage of the terms \"recall,\" \"precision,\" and \"overgeneration\" as information extraction evaluation metrics should refer to [3] .",
"cite_spans": [
{
"start": 152,
"end": 155,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Greater Focus on the Issue of Spurious Data Generation",
"sec_num": null
},
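For readers who want a concrete sense of how these metrics behave, the following is a minimal illustrative sketch (not the official MUC-4 scorer; see [3] for the authoritative definitions) of how recall, precision, and overgeneration can be derived from per-slot tallies of correct, partially correct, missing, and spurious fills. The function name, variable names, and the half-credit weighting shown here are assumptions made for this example.

```python
# Illustrative sketch only (not the official MUC-4 scorer): deriving recall,
# precision, and overgeneration from per-slot tallies. Names and the 0.5
# partial-credit weight are assumptions made for this example.

def muc_style_scores(correct, partial, missing, spurious, partial_credit=0.5):
    """Return (recall, precision, overgeneration) as percentages."""
    credited = correct + partial_credit * partial
    possible = correct + partial + missing      # fills expected by the key
    actual = correct + partial + spurious       # fills produced by the system
    recall = 100.0 * credited / possible if possible else 0.0
    precision = 100.0 * credited / actual if actual else 0.0
    overgeneration = 100.0 * spurious / actual if actual else 0.0
    return recall, precision, overgeneration

# Example: 40 correct, 10 partial, 50 missing, and 20 spurious slot fills
print(muc_style_scores(40, 10, 50, 20))  # (45.0, ~64.3, ~28.6)
```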
{
"text": "are scored as spurious ; if any unaligned key templates remain, the system get s penalized for missing them . These changes are discussed further in [3] .",
"cite_spans": [
{
"start": 149,
"end": 152,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Greater Focus on the Issue of Spurious Data Generation",
"sec_num": null
},
{
"text": "One way in which the spurious template issue was addressed was to change th e way the scoring software does the mapping (or \"alignment\") of the systemgenerated \"response\" templates with the answer \"key\" templates . A minimum degree of match in the content of the key and response is required before mappin g is allowed ; if disallowed, the system is penalized for having produced one spuriou s template (and, under some circumstances, as many spurious slot values as there are values in the response) and for having missed one template (and, under som e circumstances, as many slot values as there are values in the key) . When multipl e template mappings are possible, the scoring program chooses the mapping that i s likely to produce the best score . The MUC-3 scoring program used only the latte r strategy, the scoring optimization strategy . Thus, no matter how bad the fit in th e content of the template, a mapping would be permitted . The MUC-3 metho d therefore hid the fact that a response template and a key template wer e representing completely different incidents .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greater Focus on the Issue of Spurious Data Generation",
"sec_num": null
},
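The paragraph above describes two interacting mechanisms: a minimal content-based precondition for mapping a response template to a key template, and a score-optimizing choice among the mappings that remain. The sketch below is a loose, brute-force reconstruction of that behavior; the condition and scoring functions are simplified placeholders (the real MUC-4 scorer and its mapping conditions are described in [3]), and the slot name used is taken from the template examples elsewhere in this paper.

```python
# Loose reconstruction of the mapping behavior described above. The real
# MUC-4 scorer is more elaborate; meets_mapping_conditions() and
# match_score() below are simplified placeholders.
from itertools import permutations

def meets_mapping_conditions(key, response):
    # Placeholder "content-based mapping condition": require at least a
    # partial match on INCIDENT: TYPE (ATTACK treated as a partial match).
    return (response.get("INCIDENT: TYPE") == key.get("INCIDENT: TYPE")
            or response.get("INCIDENT: TYPE") == "ATTACK")

def match_score(key, response):
    # Placeholder goodness-of-fit: count of slot values shared with the key.
    return sum(1 for slot, value in response.items() if key.get(slot) == value)

def align(keys, responses):
    """Exhaustively try assignments of responses to keys and keep the legal
    assignment with the best total score; anything left unmapped is counted
    as a missing key template or a spurious response template."""
    padded = list(responses) + [None] * max(0, len(keys) - len(responses))
    best_pairs, best_score = [], -1
    for perm in permutations(padded, len(keys)):
        pairs = [(k, r) for k, r in zip(keys, perm)
                 if r is not None and meets_mapping_conditions(k, r)]
        score = sum(match_score(k, r) for k, r in pairs)
        if score > best_score:
            best_pairs, best_score = pairs, score
    mapped_keys = [id(k) for k, _ in best_pairs]
    mapped_responses = [id(r) for _, r in best_pairs]
    missing = [k for k in keys if id(k) not in mapped_keys]
    spurious = [r for r in responses if id(r) not in mapped_responses]
    return best_pairs, missing, spurious
```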
{
"text": "In addition to these changes to the scoring software, the test protocol wa s modified so that the focus of most of the attention was shifted from th e \"Matched/Missing\" method of scoring, which penalizes at both the template an d slot-value level for missing information but only at the template level for spuriou s information, to the \"All Templates\" method, which penalizes at both levels for bot h types of error. Greater emphasis was also placed on viewing a system's recall an d precision as a rectangular \"region of performance\", whose boundaries are define d by the four methods of assessing penalties . This view of performance reflects the assumption that real-world aplications would vary according to their degree of tolerance of missing data versus spurious data . See test procedure (appendix B) an d scatter plots (appendix H) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greater Focus on the Issue of Spurious Data Generation",
"sec_num": null
},
{
"text": "Overall, approximately 50% of the texts in the MUC-3 and MUC-4 corpora are irrelevant to the information extraction evaluation task .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isolation of Text Filtering Performanc e",
"sec_num": null
},
{
"text": "Thus, a significan t subtask is to discriminate between relevant and irrelevant texts . MUC-3 score s were computed based on performance at the template level, rather than on th e message level, making it difficult to derive a text filtering score . To measure the text filtering capabilities of the MUC-4 systems directly, scores were assigned at the message level and combined using a contingency table .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Isolation of Text Filtering Performanc e",
"sec_num": null
},
{
"text": "This is discussed further in [3] and [4] .",
"cite_spans": [
{
"start": 29,
"end": 32,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 37,
"end": 40,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Isolation of Text Filtering Performanc e",
"sec_num": null
},
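As a rough illustration of message-level text-filtering scoring, the sketch below tallies a 2x2 contingency table from per-message relevance decisions and derives filtering recall and precision from it. The exact tallies and combined measures used for MUC-4 are those defined in [3] and [4]; the function here is only an assumed approximation, and its names are mine.

```python
# Assumed approximation of message-level text-filtering scoring via a 2x2
# contingency table (see [3] and [4] for the measures actually used).

def text_filtering_scores(key_relevant, system_generated):
    """Both arguments are parallel lists of booleans, one entry per message:
    whether the answer key considers the message relevant, and whether the
    system generated any template for it."""
    tp = sum(k and s for k, s in zip(key_relevant, system_generated))
    fp = sum((not k) and s for k, s in zip(key_relevant, system_generated))
    fn = sum(k and not s for k, s in zip(key_relevant, system_generated))
    tn = sum((not k) and (not s) for k, s in zip(key_relevant, system_generated))
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return {"table": [[tp, fn], [fp, tn]], "recall": recall, "precision": precision}

# Example: four messages, two relevant; the system catches both relevant ones
# but also over-triggers on one irrelevant text.
print(text_filtering_scores([True, True, False, False], [True, True, True, False]))
```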
{
"text": "Several changes to the template design were made in order to better isolate th e systems' capabilities with respect to the kinds of text processing required to mee t the differing information extraction requirements (appendix A) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Isolation of Language Analysis Performanc e",
"sec_num": null
},
{
"text": "1 . Slots in the MUC-3 template that contained composite values were split int o two slots . Thus, a MUC-3 slot (TYPE OF INCIDENT) filled with the valu e ATTEMPTED BOMBING became two MUC-4 slots (INCIDENT : TYPE and INCIDENT : STAGE OF EXECUTION) filled with BOMBING and ATTEMPTED, respectively ; similarly, a MUC-3 slot (HUMAN TARGET : ID) filled with the value \"MARI O FLORES\" (\"STUDENT\") became two MUC-4 slots (HUM TGT : NAME and HUM TGT : DESCRIPTION) filled with \"MARIO FLORES\" and \"STUDENT\" : \"MARIO FLORES\" , respectively .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Better Isolation of Language Analysis Performanc e",
"sec_num": null
},
{
"text": "was added for identifying the instrument of an attack (e .g ., \"CAR BOMB\") ; this slot is paired wit h the set-fill slot (INCIDENT : INSTRUMENT TYPE) that was used for MUC-3 an d that now contains a cross-reference to the string-fill slot (e .g., VEHICLE BOMB : \"CAR BOMB\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A new string-fill slot (INCIDENT : INSTRUMENT ID)",
"sec_num": "2."
},
{
"text": "3. New slots (PHYS TGT : NUMBER and HUM TGT : NUMBER) were added fo r the number associated with each physical and human target (e .g., 3 : \"POWER PYLONS\" and 4 : \"ENERGY TOWERS\"), supplementing the information in the tota l number slots (PHYS TGT : TOTAL NUMBER and HUM TGT : TOTAL NUMBER) . The usage of the slots containing total numbers was restricted to cases where th e information was not redundant (i .e., to cases where there is more than one suc h target) and was explicitly mentioned in the text (i .e., cases where no computatio n by the system is required) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A new string-fill slot (INCIDENT : INSTRUMENT ID)",
"sec_num": "2."
},
{
"text": "The usage of the ATTACK incident type was extended to cover all murde r incidents ; cases were eliminated in which MURDER templates existed in th e training set, either by deletion or by conversion to ATTACK, depending on th e circumstances . 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "5. The slot ordering was changed so that groups of dependent slots appea r together, and the scoring software was updated to compute subtotal scores for eac h group . These groups were termed \"pseudo-objects\" since they were incorporated a s a compromise between retaining the flat template format and replacing it with a n object-oriented format . The experimental test designed and conducted by Genera l Electric [5] was an attempt to find out what would have happened if the templat e format had been overhauled ; the pseudo-object computations were essential fo r that test .",
"cite_spans": [
{
"start": 415,
"end": 418,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "6. The scoring software was updated to include a \"STRING FILLS ONLY\" row to show how system performance on string-fill slots compares with performance o n set-fill slots, for which a \"SET FILLS ONLY\" row already existed .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
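As a small before/after illustration of the slot split described in item 1 above, the fragment below recasts the MUC-3 composite values from that example as the corresponding MUC-4 slots. The dictionary representation is purely illustrative and is not the actual template file format.

```python
# Hypothetical, illustration-only rendering of the slot split from item 1:
# one MUC-3 composite value becomes two MUC-4 slots. This is not the actual
# template file format, just a convenient dictionary view of the same example.

muc3_slots = {
    "TYPE OF INCIDENT": "ATTEMPTED BOMBING",
    "HUMAN TARGET: ID": '"MARIO FLORES" ("STUDENT")',
}

muc4_slots = {
    "INCIDENT: TYPE": "BOMBING",
    "INCIDENT: STAGE OF EXECUTION": "ATTEMPTED",
    "HUM TGT: NAME": '"MARIO FLORES"',
    "HUM TGT: DESCRIPTION": '"STUDENT": "MARIO FLORES"',
}
```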
{
"text": "The reuse for MUC-4 of the same domain and fundamentally the same task a s used for MUC-3 raised the concern that the \"generality\" of the systems would com e into question . To address these concerns, a controlled generality test was added t o the test protocol . The MUC-3 corpus originated from an Foreign Broadcas t Information Service (FBIS) archival database containing news articles (from \"FBI S 9 For MUC-3, any incident type other than ATTACK that resulted in the death of one of the human targets was represented in two templates, one of which was a MURDER template . An ATTACK incident that resulted in death to only a subset of the targets was similarl y represented in two templates . For MUC-4, these \"dependent\" MURDER templates were deleted . \"Stand-alone\" MURDER templates, which were created when the result of an attack was death t o all targets, were converted to ATTACK templates .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Independence from the Training Dat a",
"sec_num": null
},
{
"text": "Daily Reports\") that had been disseminated as messages [1] .",
"cite_spans": [
{
"start": 55,
"end": 58,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Independence from the Training Dat a",
"sec_num": null
},
{
"text": "Nearly all thos e articles carried datelines from 1989 through early 1990 ; just a few were from 1988 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Independence from the Training Dat a",
"sec_num": null
},
{
"text": "For the generality test, over 900 different articles of the same varieties as thos e comprising the MUC-4 corpus were retrieved from a CD-ROM covering August -December 1988, and a sample of 100 was selected as test data and labeled TST4 . Sampling factors included the total number of texts for each month in the corpu s and, as for the MUC-3 corpus, the total number of texts for each country of interes t in the corpus . Thus, the two corpora, including the test sets, report somewha t different incidents and show where the hotbeds of terrorist activity differ fro m the one time span to the other. l 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Independence from the Training Dat a",
"sec_num": null
},
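The following is a rough reconstruction, under stated assumptions, of the kind of proportional (stratified) sampling the paragraph above describes: it allocates the 100-text sample across countries in proportion to their share of the retrieved corpus. The actual TST4 selection also balanced by month and involved manual screening of long texts, so this is only a sketch, and all names in it are mine.

```python
# Rough sketch (my reconstruction, not the actual MUC-4 procedure) of drawing
# a 100-text sample whose per-country proportions mirror the retrieved corpus.
import random
from collections import defaultdict

def proportional_sample(texts, country_of, sample_size=100, seed=0):
    rng = random.Random(seed)
    by_country = defaultdict(list)
    for text in texts:
        by_country[country_of(text)].append(text)
    total = len(texts)
    sample = []
    for country, group in by_country.items():
        quota = round(sample_size * len(group) / total)  # proportional allocation
        sample.extend(rng.sample(group, min(quota, len(group))))
    rng.shuffle(sample)
    return sample[:sample_size]
```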
{
"text": "For MUC-3, a study was carried out to measure the complexity of the MUC-3 evaluation task vis-a-vis the previous evaluation, and the scores obtained in th e previous evaluation were recomputed using the MUC-3 method of scoring [6] . The evidence was that the MUC-3 task was considerably more complex in most regard s and that the MUC-3 scores were about half as good (had twice the shortfall from the upper bound) . The conclusion was that the increase in difficulty in the task more than offset the decrease in scores, showing that significant progress had been made .",
"cite_spans": [
{
"start": 227,
"end": 230,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Development Progress since MUC-3",
"sec_num": null
},
{
"text": "In the absence of an established, comprehensive methodology, this compariso n was necessarily crude since the two evaluations were so different with respect t o complexity of the data, corpus dimensions, nature of the task, and scoring o f results . In contrast, the differences between MUC-3 and MUC-4 are much les s radical, and it was possible to design a controlled comparison between the two . In fact, an attempt was made to neutralize the differences entirely by forwardconverting the materials from the MUC-3 final test to the MUC-4 format . Converted materials include the TST2 key and response templates and the cumulative TST 2 history file . 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Development Progress since MUC-3",
"sec_num": null
},
{
"text": "In addition, the scoring program was configured to disregard thos e slots in the template that were new for MUC-4 (INCIDENT : INSTRUMENT ID , PHYS TGT : NUMBER, HUM TGT : NUMBER) and those that had been incompatibl y redefined for MUC-4 (PHYS TGT : TOTAL NUMBER, HUM TGT : TOTAL NUMBER) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Development Progress since MUC-3",
"sec_num": null
},
{
"text": "NRaD restored TST2 for the MUC-3 veteran sites ; the scoring was done noninteractively, using the converted cumulative history file . The MUC-4 tes t protocol required that all MUC-4 participants do a comparable scoring of TST3, i .e . , 10 Other differences that existed between the corpora were eliminated . For example, the ne w corpus was obtained in mixed upper and lower ease . TST4 was converted to all upper case i n order to be consistent with the original corpus . Also, the new corpus was not stored in th e form of messages and, as a consequence, long articles appeared in their entirety rather tha n being broken up . Any long texts that were selected for inclusion in TST4 were scanned fo r terrorism key words, and all but a one-to one-and-one-half-page section of text containing on e or more of those key words was thrown out. 11 The history file contains a record of all interactive scoring decisions ; the cumulativ e history file is built up as NRaD scores each system . The scoring program does not query th e user if the history covers the case in question . This feature ensures consistency of scoring across systems . one in which all slots except those mentioned above are scored . NRaD and the MUC -4 participants used the same version of the scoring program (version 3 .3) .",
"cite_spans": [
{
"start": 238,
"end": 240,
"text": "10",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of System Development Progress since MUC-3",
"sec_num": null
},
{
"text": "The scoring program was updated to further automate the scoring of set-fil l slots--the user is now queried only when a set-fill value is cross-referenced to a string-fill value that the scoring program cannot automatically score .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More Consistent Scorin g",
"sec_num": null
},
{
"text": "It was als o updated to score some string fills automatically . The coverage of the interactiv e scoring guidelines (appendix C) was extended . These updates were meant to ensur e greater consistency in template scoring among people and across scoring runs .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More Consistent Scorin g",
"sec_num": null
},
{
"text": "The test protocol required that all participants score their own templates . NRaD subsequently rescored the basic test runs for the two new test sets, TST3 an d TST4 ; however, runs such as the one using TST3 to measure progress (describe d above) were not rescored . In terms of the overall scores for TST3, there was very little difference (0-2% in recall or precision) noted between those that the site s reported and those that were produced when NRaD rescored the outputs . For TST4 , the differences ranged from 0-4% . The actual differences due to subjective scoring are much smaller, however . This is because the rescoring done at NRaD used a slightly updated version of th e scoring program (version 3 .4a) and a slightly updated version of the answer keys . With respect to the latter, there were more updates made to TST4 than to TST3 ; hence, the greater range in scoring differences for TST4 . As another side note, i t is the case that the NRaD overall recall and precision scores are almost alway s slightly higher than those the sites reported ; this is probably because NRaD was i n a position to interpret the interactive scoring guidelines more liberally than th e sites were, while maintaining consistency in subjective decisions across systems .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More Consistent Scorin g",
"sec_num": null
},
{
"text": "A well-defined set of evaluation metrics was used for MUC-3, and for the firs t time, the metrics were implemented as software . This enabled the production of measures of performance at the slot and template levels and measures for subset s of the data (e .g ., for only the set-fill slots, for only certain slots in certai n templates) . With this wealth of data, together with new confidence in the validit y of the scores and the maturing state of development of many of the systems unde r evaluation, there was a growing need for a valid means of making direct crosssystem performance comparisons .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Means to Make Valid Score Comparisons Among System s",
"sec_num": null
},
{
"text": "For MUC-3, there were no scientific grounds for saying that a syste m performing at 50% recall and 50% precision was doing \"better\" than on e performing at 30% recall and 70% precision . The only justification for such a claim came from the test protocol, which specified that the run submitted by each site as the system's \"required\" run be one in which the recall and precision score s were optimized to be as similar as possible . Furthermore, there were no ground s for claiming that a system that got 50% recall and 50% precision was significantly better than one that got 48% recall and 48% precision .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Means to Make Valid Score Comparisons Among System s",
"sec_num": null
},
{
"text": "Two innovations in the area of scoring were made to address these issues . First , a scientifically sound, single-score measure was incorporated that enabled system s to be ranked . This measure, known as the F-measure, allows different weighting s of recall and precision . When they are weighted equally, it does what was onl y implied by the MUC-3 test protocol, i .e ., it would rank a system with 50% recall an d 50% precision higher than one 30% recall and 70% precision . Second, a method o f doing statistical significance testing was incorporated into the test protocol . This i s a computer-intensive method that uses an approximate randomization approach ; for MUC-4, it was used for TST3 and TST4 to determine the significance of th e overall F-measure scores and All Templates scores . These innovations are discusse d further in [3] and [7] , respectively .",
"cite_spans": [
{
"start": 843,
"end": 846,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 851,
"end": 854,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Means to Make Valid Score Comparisons Among System s",
"sec_num": null
},
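The F-measure referred to above is the standard weighted combination of recall and precision, with a parameter that controls their relative importance. The sketch below shows that formula and a generic approximate-randomization significance test of the kind described; the exact stratification and test statistic used for MUC-4 are those specified in [7], and the function and parameter names here are mine.

```python
# Sketch of the two scoring innovations described above. The F-measure is the
# standard weighted harmonic combination of recall and precision; beta > 1
# favors recall, beta < 1 favors precision, beta = 1 weights them equally.
# The significance test is a generic approximate-randomization sketch, not
# the exact procedure used for MUC-4 (see [7]).
import random

def f_measure(recall, precision, beta=1.0):
    if recall == 0 and precision == 0:
        return 0.0
    b2 = beta * beta
    return (b2 + 1.0) * precision * recall / (b2 * precision + recall)

def approximate_randomization(scores_a, scores_b, trials=9999, seed=0):
    """Approximate two-sided p-value for the difference in summed per-text
    scores between systems A and B, shuffling each paired score with p=0.5."""
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    at_least_as_extreme = 0
    for _ in range(trials):
        total_a = total_b = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                a, b = b, a
            total_a += a
            total_b += b
        if abs(total_a - total_b) >= observed:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (trials + 1)

print(f_measure(recall=50.0, precision=50.0))  # 50.0
print(f_measure(recall=30.0, precision=70.0))  # 42.0 -> ranked below 50/50
```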
{
"text": "A number of shortcomings in the evaluation remain .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortcomings in the Evaluation",
"sec_num": null
},
{
"text": "In fact, one of the interesting outcomes of MUC-4 was the extent to which the improved syste m performance brought out the task deficiencies . It is not difficult to define a n information extraction task but perhaps even more difficult to make neede d improvements without jeopardizing the schedule, placing an undue burden on th e evaluation participants, or incurring large costs in terms of updating existin g answer key templates and documentation . The compromise reached for MUC-4 wa s to minimize the changes to the task definition and to focus instead on makin g improvements to the evaluation metrics and scoring software .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortcomings in the Evaluation",
"sec_num": null
},
{
"text": "Among th e remaining shortcomings of the evaluation are the following :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortcomings in the Evaluation",
"sec_num": null
},
{
"text": "The flat template structure created problems as far as meaningfully an d consistently expressing inherently recursive kinds of data such as levels o f description for perpetrators and human targets . The perpetrator slots allowed for a two-level distinction, with very poor conventions for deciding what to do if the tex t made more levels of distinction than that, e .g ., three levels in \"Miguel Vasquez, a member of the Jacobo Carcomo Command of the FMLN\" . The human target slots had more explicit but still inadequate conventions for entering whatever levels o f description were needed to correspond to fillers of other slots, e .g ., \" five peopl e were injured, including two security guards\" . Another consequence of the fla t template structure was the requirement to encode explicit cross-references , greatly complicating the scoring algorithms .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "The definition of a \"relevant terrorist incident\" was inadequate in severa l respects . The distinction between a terrorist incident and other kinds of violen t events --guerrilla actions, shootings of drug traffickers, etc . --is a difficult one t o express in comprehensive, hard and fast rules . It was also difficult to express the relevance criteria of \"specificity\" and \"recency\" in a way that could b e consistently applied . The intent was to not do extraction unless some specifi c information was present that a database user would find useful ; for example , extraction would not be done if no particular incident was being referred t o (\"terrorist bombings have been taking place with increasing frequency\", \"ove r 100 bombings have taken place in the last two weeks\") . If an incident was reporte d as having taken place more than two months prior to the date of the article, n o extraction was to be done unless the article gave \"new\" template-fillin g information, e .g ., when a new suspect was being brought forth . However, withou t prior knowledge of the actual incident, it was sometimes difficult to tell whethe r the information that was being reported was new or not . These problems o f determining relevance were partly due to the task definition and partly due to th e inherent vagueness of the texts .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "There were small gaps in the template fill rules . For example, the rule s concerning stories that give contradictory evidence about some of the facts wer e inadequate . A more frequent problem was that the set-fill lists for physical an d human target types were sparse and sometimes vaguely defined, and some of thes e problems had consequences for determining relevance at the template level . Fo r example, if a text describes the target of an incident only as a \"naval attache\", th e incident is relevant if the target is classified as DIPLOMAT but irrelevant if th e target is classified as ACTIVE MILITARY .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "In terms of the scoring, there were several relatively minor bu t troublesome problems . A bug in the scoring program was discovered just prior t o final testing, and a change was made to the scoring program and the interactiv e scoring guidelines just prior to final testing that had to be retracted when NRa D rescored TST3 and TST4 . The largest number of problems were those that involve d making subjective judgments during interactive scoring . For example, string fill s that closely resembled the ones in the key but originated from remote places in th e texts had to be examined in context to determine whether they were \"fortuitousl y correct\" (as, perhaps, in the case of \"urban guerrillas\" as a substitute for \"urba n terrorists\") or \"infortuitously incorrect\" (as in the case of \"11 peasants\" as a substitute for \"3 peasants\") . Making principled decisions about awarding partia l credit was also difficult when the cases weren't specifically covered by th e interactive scoring guidelines .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The change in template mapping strategy described earlier as a n improvement made for MUC-4 had one consequence that was at least potentiall y problematic . The problem is due to the inflexibility of one of the mappin g conditions, namely the requirement that there be at least a partial match on th e filler for INCIDENT : TYPE . A partial match existed when the response wa s ATTACK, and the key was any other value; this scoring is based on ATTACK being a supercategory of the other set-fill options . In the reverse case, however, th e response is scored incorrect, thereby disallowing the mapping and, as describe d earlier, resulting in penalties for having generated a spurious template and fo r having missed a template . The disallowance of a mapping simply on the basis of a n incorrect incident type is probably too extreme . (In practice, however, it appear s to have rarely had significant adverse consequences ; see UMass paper in Part II a s one example of it having apparently significantly affected their TST 4 performance . )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
{
"text": "At a higher level, there are shortcomings that are due to the choice of task . Information extraction has served as an excellent vehicle for elucidating th e application potential of current technology ; however, its utility as a vehicle fo r focusing attention on solving the \"hard\" problems of NLP is not as great . Man y insights have been gained into the nature of NLP by experience in developing th e large-scale systems required to participate in the evaluation . Nevertheless, s o much effort is involved simply to make it through the evaluation that it takes a disciplined effort to resist implementing quick solutions to all the major issue s involved, whether they are well understood problems or not .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
{
"text": "The attempts that have been made to use the information extraction task t o reveal language analysis capabilities specifically have so far met with limite d success . One of these examined the results of information extraction at the loca l level of processing (apposition handling), and the other looked at the global leve l of processing (discourse handling) . The former was carried out for MUC-3 [8] an d the latter for MUC-4 [9] . The major conclusions of the apposition test were that th e test was isolating the phenomenon to some extent and that the systems as a grou p were doing better on the cases that had been hypothesized as easier than on those that had been hypothesized as more difficult . However, it also appears tha t performance on the apposition test may have reflected the systems' slot-fillin g capabilities at least as much as their apposition analysis capabilities . Appositio n was chosen as the subject of the test partly because of the relatively hig h frequency of occurrence of the phenomen ; however, a substantial portion of th e cases introduced confounding factors and had to be thrown out . The majo r conclusion of the discourse processing test was that the texts that were expected t o be \"easy\" were not and that there was something about the composition of the smal l test samples that were used that was confounding the results . Although there seems to be no theoretical impediment to conducting successful fine-grained taskoriented tests, these two efforts seem to show that such tests cannot be designed a s adjuncts but rather require independent specification in order to ensure adequat e test samples and an appropriately designed information extraction task .",
"cite_spans": [
{
"start": 400,
"end": 403,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 430,
"end": 433,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
{
"text": "Appendix B describes the performance objectives of the evaluation, th e components of the test, how the sites were to conduct the tests and score th e outputs, and what files the sites were to submit to NRaD after finishing the tes t procedure . Appendix G contains summary score reports for the component tests , and appendix H displays some of those results in the form of scatter plots . The discussion below concerns the results for the basic test components, namely TST3 , TST4, and the TST2/TST3 \"progress\" test . The \"adjunct\" tests that are mentioned i n appendix B are reported on in [5] and [9] .",
"cite_spans": [
{
"start": 593,
"end": 596,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 601,
"end": 604,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION OF TEST SETS AND TEST RESULT S",
"sec_num": null
},
{
"text": "The progress test made a controlled comparison between MUC-3 and MUC-4 performance . The data points for MUC-3 were obtained using the templates tha t the veteran participants' systems generated on the MUC-3 final test on TST2 . Th e data points for MUC-4 were obtained for all MUC-4 sites ; they were obtained using the templates generated on TST3 . As described earlier, the TST2 test materials were forward-converted to the MUC-4 format, and scoring included only those templat e slots whose MUC-3 and MUC-4 definitions were consistent . 1 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST2/TST3 \"Progress\" Tes t",
"sec_num": null
},
{
"text": "12 The TST3 progress scores are generally slightly better (up to 2%) than TST3 \"base\" scores ; this difference is the result of having excluded the number slots and the instrument ID slot from the scoring on the progress test .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST2/TST3 \"Progress\" Tes t",
"sec_num": null
},
{
"text": "The TST2 progress scores are generally substantially worse (at least 5% lower recall o r precision) than the MUC-3 TST2 \"base\" scores reported on in [3] . The changes (primaril y decreases) are due to such factors as the following:",
"cite_spans": [
{
"start": 149,
"end": 152,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TST2/TST3 \"Progress\" Tes t",
"sec_num": null
},
{
"text": "1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST2/TST3 \"Progress\" Tes t",
"sec_num": null
},
{
"text": "The manual clean-up of the automatic forward conversion of the templates is subject to a small degree of error . The elimination of some MURDER templates via conversion to ATTACK templates could result in an underestimation of performance ; the splitting of th e human target ID information into two slots could result in an overestimation of performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST2/TST3 \"Progress\" Tes t",
"sec_num": null
},
{
"text": "2. Since scoring of the TST2 templates for the progress test was done in batch, without any manual template remappings, performance may be slightly underestimated for the few site s Following are some of the hypotheses that were to be tested concerning th e performance of the MUC-3 veteran systems :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST2/TST3 \"Progress\" Tes t",
"sec_num": null
},
{
"text": "Most MUC-3 veteran systems would improve on at least one measure .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "Systems that were at the leading edge of performance for MUC-3 might no t be able to attain higher scores on one measure without sacrificing performance o n another .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "The limitations of some approaches might emerge .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "The need for progress in certain research areas might become salient .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The fairest (and most generous) view of progress would come from th e Matched/Missing row, which was the focus of the MUC-3 test, rather than from th e more stringent, All Templates row, on which the MUC-4 TST3 and TST4 tests focused .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
{
"text": "A comparison of the tables in section 4 of appendix G shows improvement o n one of the three primary measures (recall, precision, overgeneration) by all 1 1 systems, given the Matched/Missing method, and by 10 of the 11 systems, given th e All Templates method . Improvements on all three measures were achieved b y three systems on Matched/Missing (GE, LSI, NYU), including two of the leadin g MUC-3 performers (GE, NYU), and by seven systems on All Templates (GE, LSI, NYU , PRC, SRI, UMBC-ConQuest, UMass), including several of the leading performers (GE , NYU, SRI, UMass) . Tradeoffs resulting in improved recall at the expense of lowe r precision are evident in the results for three systems on Matched/Missin g (Hughes, Paramax, UMass) and for one system on All Templates (Paramax) . Tradeoffs leading to improved precision at the expense of lower recall can be see n in the results for two systems on Matched/Missing (BBN, MDC) and one system on All Templates (BBN) . The differences in scores between the two test sets are summarized in Table 1 . The differences are calculated as the TST3 progress score minus the TST2 progres s score . The first row shows the worst degradation among the 11 systems, the second row shows the most improvement, and the third row shows the average change .",
"cite_spans": [],
"ref_spans": [
{
"start": 1046,
"end": 1053,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
{
"text": "that made substantial use of this facility ; however, the need for this facility has declined a s the template alignment capabilities of the scoring program have improved .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M/M&AT REC M/M PRE",
"sec_num": null
},
{
"text": "3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M/M&AT REC M/M PRE",
"sec_num": null
},
{
"text": "The elimination of some MURDER templates via deletion eliminated one source o f inflation of scores .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M/M&AT REC M/M PRE",
"sec_num": null
},
{
"text": "The scoring program now uses more stringent criteria when aligning templates ; the impact is generally a higher missing template count, which lowers recall, and a higher spuriou s template and slot-filler count, which lowers precision .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "Note that there is only one column for recall, which is unaffected by the choice o f Matched/Missing (M/M) versus All Templates (AT) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "Recall improved by an average of eight percentage points . On average, ther e was very little change in precision and overgeneration on Matched/Missing, bu t All Templates shows dramatic improvement on both measures . It is interesting tha t the progress is more evident on All Templates than on Matched/Missing : for nin e of the eleven systems, the All Templates precision and overgeneration scores sho w a larger improvement from MUC-3 to MUC-4 than do the Matched/Missing scores . It appears that the new focus on the All Templates row caused developers to devot e a great deal of attention to reducing overgeneration (thereby increasin g precision), and that they succeeded . Furthermore, of these nine systems, eigh t showed improved recall as well .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The F-measures provide assistance in interpreting the results of this test , especially for those systems that exhibited a recall-precision tradeoff. The Fmeasure scores show whether or not the tradeoff paid off in terms of overal l performance . Figure 1 shows the MUC-3 veteran systems' All Templates F-measur e scores (with recall and precision equally weighted) from the tables in appendix G , section 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "n TST2 Figure 1 shows that a \"typical\" increase in F-measure performance is aroun d 10 points (BBN, GE, LSI, NYU, UMass), and two systems (PRC and SRI) show a muc h greater performance improvement than that . The SRI results are especiall y remarkable because of the radical differences between their MUC-3 and MUC-4 systems . The BBN results show that the tradeoff in performance they made fo r MUC-4 clearly paid off in terms of overall progress .",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 15,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "Two of the systems exhibiting only a slight performance increase (Hughes , Paramax) do little or no linguistically-based processing ; by their developers' own admission, the systems are incapable of much higher performance unless they ar e augmented by other types of processing . The remaining two systems, MDESC and UMBC-ConQuest, were overhauled for MUC-4 . In the case MDESC, this overhau l resulted in lower overall performance than what was achieved for MUC-3 ; in th e case of UMBC-ConQuest, it resulted in a modest increase but still very low overal l performance . It should be noted that the level of effort that . could be afforded b y each of these four sites was minimal and that this undoubtedly was a significan t limiting factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1 . All Templates F-Measure (P&R) for Progress Test on TST2 and TST 3",
"sec_num": null
},
{
"text": "Systems representing organizations that are not veterans of the MUC-3 evaluation are not included in the above discussion . They were tested on the TST 3 portion of the progress test . Their scores are included in appendix G, section 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1 . All Templates F-Measure (P&R) for Progress Test on TST2 and TST 3",
"sec_num": null
},
{
"text": "In summary, the progress test showed that higher levels of performance b y nearly all systems were achieved despite the relative difficulty of TST3 . Progres s was more evident when the All Templates scores are considered ; this is due to the success of most systems in controlling overgeneration . Most systems did not giv e evidence of a recall-precision tradeoff, which means that there is still a variety o f techniques that exhibit potential for attaining even higher levels of performance in the future . The few systems that exhibited a tradeoff clearly benefited from it i n terms of overall performance . However, minimal improvement was shown b y systems that do not use linguistically-based processing, and minimal progress o r even a degradation in performance were the result in a couple cases wher e systems were radically changed for MUC-4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1 . All Templates F-Measure (P&R) for Progress Test on TST2 and TST 3",
"sec_num": null
},
{
"text": "This section describes the \"base\" MUC-4 tests, which used the TST3 and TST4 tes t sets . As distinct from the progress test discussed in the previous section, the bas e tests scored the entire template rather than selected slots . Thus, the TST3 scores fo r the two tests can be different, but in reality differences turned out not to b e universal . Where differences do exist, they are fairly small --the overall recall , precision, and overgeneration base scores are at most three points lower than th e progress scores .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST3 and TST4 Test s",
"sec_num": null
},
{
"text": "As described earlier in this paper, TST3 consists of a sample of 100 previousl y unseen texts from the corpus of FBIS texts that had been obtained prior to MUC-3 . The sampling method ensures that the test set contains the same percentage o f texts by country as the corpus as a whole ; aside from enforcing that constraint , sampling is done blindly . The TST4 test set consists of a sample of 100 texts from th e new corpus of FBIS texts that was obtained via CD-ROM specifically for MUC-4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST3 and TST4 Test s",
"sec_num": null
},
{
"text": "The density of relevant information in TST3 is relatively high, making it i n some ways a more difficult test set than others . The density of relevan t information in TST4 is much more similar to TST1 and the training set than it is t o TST2 and TST3, making it in some respects a relatively simple test . Some of the differences between TST3 and TST4 are summarized below .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST3 and TST4 Test s",
"sec_num": null
},
{
"text": "1 . Approximately two-thirds of the texts in TST3 (65 out of 100) fall in th e \"definitely relevant\" category, versus approximately one-half in TST4 (48 out 100) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST3 and TST4 Test s",
"sec_num": null
},
{
"text": "2. Almost one-half the texts in TST3 (30 out of 65) require the generation o f more than one template, versus almost one-third in TST4 (15 out of 48) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TST3 and TST4 Test s",
"sec_num": null
},
{
"text": "In reality, TST3 is just a bit more difficult by each of these criteria than TST2 , which was used for final MUC-3 testing . 13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Many templates include a greater density of information than usual , especially in such slots as HUM TGT : DESCRIPTION .",
"sec_num": "3."
},
{
"text": "However, TST2 was itself more difficul t than TST1 and the training set . 1 4 As mentioned earlier, the purpose of introducing the TST4 test was to learn t o what extent system performance is independent of the training data . The variabl e introduced by TST4 was the time span covered by the texts . The change in tim e span meant that a somewhat different set of incidents would be reported --n o incidents occuring later than 31 December, 1988 would be reported in TST4 , whereas incidents up through early 1990 would be reported in TST3 . It also meant that the incidents would reflect a different world situation, resulting in a different distribution of articles among the countries of interest .",
"cite_spans": [
{
"start": 74,
"end": 77,
"text": "1 4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Many templates include a greater density of information than usual , especially in such slots as HUM TGT : DESCRIPTION .",
"sec_num": "3."
},
{
"text": "The major differences i n this respect were in the number of articles about El Salvador (down from 40% i n TST3 to 25% in TST4), Chile (up from 5% to 18%), and Peru (up from 6% to 19%) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Many templates include a greater density of information than usual , especially in such slots as HUM TGT : DESCRIPTION .",
"sec_num": "3."
},
{
"text": "The hypotheses to be tested were that systems would not perform as well o n TST4 as on TST3 and that systems that rely more heavily on corpus-based statistic s would suffer a greater hit in performance than other systems . The results , however, are mixed with respect to the first hypothesis and apparently negativ e with respect to the second . Table 2 presents a summary of the All Templates score s for the base runs on TST3 and TST4 (appendix G, sections 1 and 2), including the floating point F-measure with recall and precision equally weighted (appendix G , section 5) . The TST3 scores are quite similar to the TST4 scores, despite the difference s noted between the test sets .",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Many templates include a greater density of information than usual , especially in such slots as HUM TGT : DESCRIPTION .",
"sec_num": "3."
},
{
"text": "Naturally, however, the degree of similarity varie s 13 Several participants did not use TST2 to train on for MUC-4 ; instead, they reserved it fo r use as blind test data for internal tests . When reported in the papers in Part II (e .g ., by SR A and SRI), the results seem to confirm the degree of similarity between TST2 and TST3, in th e sense that the systems did just slightly worse on TST3 than on the last internal test run o n TST2 . 14 A table of some summary statistics concerning all four test sets and the training set i s included in the BBN paper in Part II .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "from one system to the next . Inspection of the individual systems' scores show s that only two systems (LSI, UMich) had both lower recall and lower precision o n TST4 than on TST3, and the degradation in recall for LSI is only one percentage point . For two other systems (PRC, SRI) recall was the same on both test sets, whil e precision was lower on TST4 . Eleven systems showed higher recall and lower precision on TST4 . Two systems (USC, MITRE) scored higher recall and higher precision on TST4 . Where there was a difference in recall or precision, the degre e of difference is as great as 11 recall points and 13 precision points (cf appendix H , figure H7 ) .",
"cite_spans": [],
"ref_spans": [
{
"start": 655,
"end": 664,
"text": "figure H7",
"ref_id": null
}
],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "The F-measure value (with recall equally weighted with precision) is highe r on TST4 than on TST3 for 10 of the 17 systems (BBN, GE, GE-CMU, Hughes, MDESC , MITRE, NYU, SRA, UMBC-ConQuest, USC), is less than two points lower on TST4 fo r four others (NMSU-Brandeis, Paramax, PRC, SRI), and is more than two points lower on TST4 for only three systems (LSI, UMass, UMich) . The absolute ranking s (without considering whether the differences are statistically significant) sho w six systems ranked the same on both test sets, ten changing rank by just on e position, and one changing rank by two positions . Thus, in a very real sense, th e differences in performance from a cross-system perspective are minimal, and i t can be concluded that the two test sets are giving consistent results .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "The overall performance of more than half the systems was better on TST 4 than on TST3, as determined by the F-measures . The relative straightforwardness of the TST4 test set may have washed out or even reversed the predicted behavio r with respect to recall . 15 The expected negative effect of using a corpus spanning a different period of time was not seen ; it would be necessary to place more control s on the information density characteristics of the test sets in order to isolate such a factor . BBN, GE, NYU, SRI, and UMass submitted the results of optional tests conducte d using TST3 or both TST3 and TST4 . The optional tests explored ways of controllin g system behavior to produce recall-precision tradeoffs that were predicted to b e suboptimal overall (compared to the base run) but distinctly better on one measur e or the other. These tests varied greatly in their design and in the performanc e impact; further information is available in appendices G (section 3) and H (figure s H3, H4, 119) and in the papers in Part II .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
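{
"text": "One common way to produce the kind of recall-precision tradeoff explored in the optional tests is to vary a single threshold on how aggressively uncertain candidate slot fills are emitted. The sketch below is purely hypothetical and is not any MUC-4 system's actual mechanism; the data structures (candidate lists with confidence values, an answer-key set) are assumptions for illustration only:\n\ndef score(candidates, answer_key, threshold):\n    # candidates: list of (fill, confidence) pairs; answer_key: set of correct fills.\n    emitted = [fill for fill, confidence in candidates if confidence >= threshold]\n    correct = sum(1 for fill in emitted if fill in answer_key)\n    recall = correct / len(answer_key) if answer_key else 0.0\n    precision = correct / len(emitted) if emitted else 0.0\n    return recall, precision\n\n# Lowering the threshold emits more fills, which typically raises recall and lowers precision;\n# raising it does the opposite, tracing out a tradeoff relative to a base run.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},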
{
"text": "A few general comments can be made on the basis of the scatter plots i n appendix H concerning the overall performance of the systems . Figures H1 and H 2 show that higher recall is usually correlated with higher precision, just as th e MUC-3 results showed . Therefore, once again there is no reason not to b e optimistic about seeing continued improvement on both measures in the future . Figures H5 and H6 plot overall recall versus overgeneration ; they show that, to a large extent, the overall precision scores seen in H1 and H2 are accounted for b y the overgeneration factor .",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 154,
"text": "Figures H1 and H 2",
"ref_id": null
},
{
"start": 391,
"end": 408,
"text": "Figures H5 and H6",
"ref_id": null
}
],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "This shows that overgeneration is still a seriou s problem, although MUC-4 clearly demonstrated that a great deal of progress ha d been made in this area .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "Clearly also, the problem of missing information is stil l serious, as witnessed by the fact that recall is still only moderate .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
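{
"text": "The way overgeneration depresses precision and missing information depresses recall can be made concrete with an approximate, illustrative restatement of the scores (see Chinchor, \"MUC-4 Evaluation Metrics\", in this volume, for the exact official definitions; the formulas below are a simplification offered here for exposition only):\n\ndef muc_scores(correct, partial, incorrect, spurious, missing):\n    possible = correct + partial + incorrect + missing    # fills expected by the answer key\n    actual = correct + partial + incorrect + spurious     # fills generated by the system\n    recall = (correct + 0.5 * partial) / possible if possible else 0.0\n    precision = (correct + 0.5 * partial) / actual if actual else 0.0\n    overgeneration = spurious / actual if actual else 0.0\n    return recall, precision, overgeneration\n\n# Spurious fills inflate the 'actual' denominator, so high overgeneration directly lowers\n# precision; missing fills inflate 'possible' and lower recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},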
{
"text": "15 With respect to precision, it should be noted that the two systems that showed better recal l and precision on TST4 than on TST3 (USC and MITRE) are less mature than most, which ma y make their performance less predictable .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "The question of how to assess the state of the art has to be addressed in part b y comparison to human capabilities, since the real-life challenge is still for system s to try to match the performance of well-trained people . Although the human performance limits have not been scientifically determined, they are no w estimated to be in the neighborhood of 75% recall and 85% precision, assuming th e All Templates scoring method and a representative test set . These figures may seem low ; however, the experience of generating the key templates for thes e evaluations suggests that they are not .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "Human factors play a role in estimatin g this limit ; however, the major factors are the task deficiencies and the inheren t ambiguity and vagueness of the texts . These performance goals mean, therefore , that the leading systems are falling perhaps 15% short of the recall target and 30 % short of the precision target .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "Figures 1110-H12 plot the \"regions of performance\" of the systems as defined b y the overall Matched/Missing, Matched/Spurious, Matched Only, and All Template s recall and precision scores . There are some interesting differences in the shape a s well as the size of those regions . For the systems displaying the smallest regions o f performance (H10), the shape is rather square, or it is elongated more horizontall y than vertically .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "In contrast, the regions in H11 and to an even greater extent th e regions in H12 are distinctly rectangular and elongated vertically . These shape s are evidence that the systems in H10 are least affected by overgeneration ; those i n 1112 are most affected . There is some comparative proof that the MUC-3 veteran s were bringing overgeneration under control in the fact that H12 includes onl y one veteran syste m Figures H15-H18 show that, as anticipated, system performance on slot s requiring string fills would be worse than on those requiring set fills . The differences would probably be more striking if it were not for the fact that th e scoring of eight of the eleven set-fill slots is confounded by the cross-reference s attached to them .",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 432,
"text": "Figures H15-H18",
"ref_id": null
}
],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
{
"text": "(In contrast, just one of the six string-fill slots has a crossreference requirement .) Whether for this reason or not, it does not appear tha t the distinction in slot type serves as a discriminator among systems, since ther e are no dramatic differences in the relative position of the systems in th e contrasting graphs across both test sets .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},
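{
"text": "As an illustrative aside, the shape of a \"region of performance\" discussed above can be summarized by its horizontal and vertical extent over the four scoring variants. The sketch below uses hypothetical (recall, precision) points, one per variant, and is not taken from the actual MUC-4 results:\n\ndef region_shape(points):\n    # points: list of (recall, precision) pairs, one per scoring variant\n    # (Matched/Missing, Matched/Spurious, Matched Only, All Templates).\n    recalls = [r for r, p in points]\n    precisions = [p for r, p in points]\n    width = max(recalls) - min(recalls)         # horizontal extent (recall spread)\n    height = max(precisions) - min(precisions)  # vertical extent (precision spread)\n    return width, height\n\n# A region that is much taller than it is wide indicates that the scoring variants disagree\n# mainly on precision, which, as noted above, is the signature of heavy overgeneration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REC",
"sec_num": null
},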
{
"text": "There are many ways in which MUC-4 has surpassed MUC-3 in bringin g various aspects of the evaluation into focus, including the deficiencies remaining in the task that were described earlier. The challenge posed by the task appear s less imposing now --it is now the rule rather than the exception to find system s capable of exploiting the large training corpus of texts and templates for th e purposes of knowledge acquisition, automatic training, and internal testing . Th e interaction between systems engineering concerns and theoretical concerns i s receiving increasing attention . In particular, scalability and robustness issue s must be addressed in order to take full advantage of the corpus for training purposes and to perform as well as possible on new test data .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATION S",
"sec_num": null
},
{
"text": "Whereas the challenge posed by the task has come to be accepted more or les s as a matter of course, the burden of preparing for the evaluation is increasingl y felt .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATION S",
"sec_num": null
},
{
"text": "Beneficial effects of the task challenge and evaluation burden are, amon g other things, that the algorithms for dealing with large amounts of unrestricte d text have become more robust, the development cycle has gotten shorter, and th e amount of automated knowledge acquisition has increased . On the down side, th e evaluation burden is still such that quantifiable progress is slow ; there is still a strong sentiment that time is the primary limiting factor, not technology, and that therefore level of effort is one of the most significant factors in predictin g performance .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATION S",
"sec_num": null
},
{
"text": "However, even though one impediment to improved performance is th e amount of time that can be invested in just doing a lot of hard work, including a great deal of knowledge engineering and system engineering, it is even more apparent from MUC-4 than from MUC-3 that there are certain prevalent \"har d problems\" posed by the task that require serious study . One thing that has been noted is how small problems in early stages of processing can have large negativ e effects on the ability of later stages to do their job . MUC-3 (and earlier evaluations ) pressed the point of reducing the fragility of sentence-level processing, and th e sentence analyzers were developed to produce output even when they didn't hav e full coverage . MUC-4 has refocused attention on the sentence and the importanc e of doing more complete linguistic analysis at that level .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATION S",
"sec_num": null
},
{
"text": "Another thing that has become nearly universal experience is the inadequac y of current approaches to determine when and how to combine information fro m multiple sentences into a single, coherent representation . Although th e approaches are limited in effectiveness by the quality of the sentence-leve l interpretation, they are also inherently limited in their ability to incorporat e information from sentences that lack domain-specific \"key words\", to incorporat e information from anaphors (especially from definite noun phrases), and to dea l with interruptions in the discourse . Currently these discourse phenomena ar e generally dealt with in terms of template \"splitting\" and \"merging\" based on th e compatibility of data in the output representation rather than by trackin g discourse as part of the analysis process . Some of these issues are apparent in th e participants' discussion in Part III of the \"system walkthrough\" example (appendi x F) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATION S",
"sec_num": null
},
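{
"text": "A greatly simplified, hypothetical sketch of the compatibility-based template \"merging\" described above (real MUC-4 systems applied their own, more elaborate criteria); representing templates as dictionaries mapping slot names to fill values is an assumption made here purely for illustration:\n\ndef compatible(t1, t2):\n    # Two templates are compatible if no shared slot has conflicting non-empty fills.\n    return all(t1[slot] == t2[slot] for slot in t1\n               if slot in t2 and t1[slot] is not None and t2[slot] is not None)\n\ndef merge(t1, t2):\n    # Union of slot fills, preferring whichever template has a non-empty value.\n    merged = dict(t2)\n    merged.update({slot: fill for slot, fill in t1.items() if fill is not None})\n    return merged\n\n# A discourse module in this style merges event descriptions drawn from different sentences\n# only when compatible(t1, t2) holds, and otherwise keeps (or \"splits\") separate templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATIONS",
"sec_num": null
},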
{
"text": "The techniques that were used to improve performance above MUC-3 level s still vary greatly, but the emphasis on hybrid systems combining linguistic an d nonlinguistic processing has increased, and the limitations of the purel y nonlinguistic approaches are very evident . As the viability of informatio n extraction as a useful application of NLP has increased, the idea of building system s specifically for that purpose has emerged, and there is beginning to be a division between those who would insist that the most successful systems will be the most generic ones with respect to application task and domain and those that believ e that the most successful systems will take advantage of whatever reductions i n level of sophistication are permitted by the task of information extraction . At th e bottom is the question of what it will take to get from the current limit of about 60 % recall and 55% precision to the estimated upper limits of human performance . Als o at issue is the issue of portability in terms of system architecture and portability i n terms of cost. Will it cost less to port a large, complicated system that has separat e domain-specific modules to a new domain and/or task, or will it cost less to port a smaller, simpler system to a new domain and to build a new system for a new task ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERAL OBSERVATION S",
"sec_num": null
},
{
"text": "New performance standards were set on the MUC-3 and MUC-4 informatio n extraction task . Despite increased task difficulty and scoring stringency for MUC -4, the results of a MUC-4 test to measure progress since MUC-3 show substantiall y higher overall performance for most systems (at least 10 points higher on the Fmeasure) . It has now proven possible to achieve overall scores above 60% recal l and 55% precision and an F-measure exceeding 55 . The new challenge to contro l overgeneration was successfully met, although overgeneration is still hig h enough that it exerts a major negative impact on precision .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "The results of a test to measure the generality of the MUC-4 algorithms sho w that they were not overly tuned to the training set. The usage of a test set from a corpus spanning a different period of time than that of the original corpus wa s expected to have a negative effect on performance, but this effect was not seen . I t would be necessary to place controls on the information density characteristics o f the test sets in order to isolate the time factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "Upper limits on human performance of the task are estimated to be 75% recal l and 85% precision, primarily due to deficiencies in the task definition an d expressiveness of the formalism and to the inherent ambiguity and vagueness o f the texts . System performance falls short of these levels by at least 15 recall point s and 30 precision points . However, some MUC-4 systems attained high enoug h performance that task deficiencies account for a significant portion of th e penalties incurred by the scoring .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "Clearly, the performance envelope could have been pushed out even farther i f the participants had had the opportunity to work on the systems steadily for th e entire year . The level of effort is reflected to some extent in the scores, and tim e was again a limiting factor . The differences in sophistication among the system s may be great, but these differences may not be so great in terms of the scores . However, it could well be that there is a great qualitative difference between an Fmeasure score of 45 and one of 55 . Since the task deficiencies are being raised as a limiting factor and certain theoretical issues such as those involving sentenceand discourse-level analysis are becoming limiting factors as well, it may b e possible to conclude that the ceiling on performance is much more perceptibl e than it was after MUC-3 and that major steps forward in the state of the art may no t be easy to obtain .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "Error analyses point toward the critical need for research in areas such a s discourse reference resolution and inferencing . For example, the inability t o reliably determine whether a description found in one part of the text refers o r does not refer to something previously described inhibits both recall and precisio n because it could result in either missed information or spurious information ; the inability to pick up subtle cues to relevant information places a limitation on recal l because it results in missed information . The ability to take advantage of sophisticated approaches to discourse that have already received computationa l treatment is limited by their dependence on error-free outputs from earlier stage s of processing .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
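{
"text": "To illustrate why unreliable reference resolution hurts both measures, consider a deliberately naive heuristic that links a definite description to an earlier entity whenever their head nouns match. This is a hypothetical sketch, not any MUC-4 system's algorithm, and the string-based representation of entities is an assumption for illustration:\n\ndef resolves_to(description, prior_entities):\n    # Compare head nouns only (crudely taken to be the last token of each phrase).\n    head = description.split()[-1].lower()\n    for entity in prior_entities:\n        if entity.split()[-1].lower() == head:\n            return entity\n    return None\n\n# If such a heuristic wrongly links a new incident to an earlier, unrelated one, the new event\n# is never templated (missed information, lower recall); if it wrongly treats a repeated mention\n# as a new incident, a spurious template is generated (lower precision).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": null
},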
{
"text": "There is a need for renewed attention to robust processing at th e sentence level .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
},
{
"text": "It is time to move on to a different information extraction task and domain i n order to make further progress in the evaluation methodology and to ensure tha t the challenge to handle unrestricted text remains high . MUC-4 has clarified man y of the issues pertaining to the definition of a performance evaluation using a n information extraction task ; at some point, it will be worthwhile to try to design a more comprehensive performance test of NLP capabilities than what th e information extraction task covers .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION S",
"sec_num": null
}
],
"back_matter": [
{
"text": "All the evaluation participants are to be congratulated for their efforts i n support of MUC-4 . I would like to thank Nancy Chinchor, Ralph Grishman, Jerr y Hobbs, David Lewis, Lisa Rau, and Carl Weir for their commitment as progra m committee members to improving the evaluation methodology and for thei r attention to the deluge of email communications. Thanks also to PRC, Inc ., fo r hosting the conference and to Richard Tong and Lynette Hirschman in their role s as \"outside evaluation experts\" . The NRaD work was supported by DARPA/SIST O under ARPA order 6359 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGEMEN T",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Proceedings of the Third Message Understanding Conference (MUC-3)",
"authors": [],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Third Message Understanding Conference (MUC-3), May , 1991, Morgan Kaufmann .",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Plans for a Task-Oriented Evaluation of Natural Languag e Understanding Systems",
"authors": [
{
"first": "B",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Speech and Natural Languag e Workshop",
"volume": "",
"issue": "",
"pages": "197--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sundheim, B ., Plans for a Task-Oriented Evaluation of Natural Languag e Understanding Systems, in Proceedings of the Speech and Natural Languag e Workshop, February, 1989, Morgan Kaufmann, pp . 197-202 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "MUC-4 Evaluation Metrics",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N ., MUC-4 Evaluation Metrics (in this volume) .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Text Filtering in MUC-3 and MUC-4",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, D . and Tong, R., Text Filtering in MUC-3 and MUC-4 (in this volume) .",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "GE Adjunct Test Report : Object-Oriented Design an d Scoring for MUC-4",
"authors": [
{
"first": "G",
"middle": [],
"last": "Krupka",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Rau",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krupka, G . and Rau, L ., GE Adjunct Test Report : Object-Oriented Design an d Scoring for MUC-4 (in this volume) .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comparing MUCK-II and MUC-3 : Assessing the Difficulty of Different Tasks",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Third Message Understanding Conferenc e (MUC-3)",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschman, L ., Comparing MUCK-II and MUC-3 : Assessing the Difficulty of Different Tasks, in Proceedings of the Third Message Understanding Conferenc e (MUC-3), May, 1991, Morgan Kaufmann, pp . 25-30 .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical Significance of MUC-4 Results",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N ., Statistical Significance of MUC-4 Results (in this volume) .",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "MUC-3 Linguistic Phenomena Test Experiment",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Third Message Understanding Conference (MUC-3)",
"volume": "",
"issue": "",
"pages": "31--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, N ., MUC-3 Linguistic Phenomena Test Experiment, in Proceedings of the Third Message Understanding Conference (MUC-3), May, 1991, Morga n Kaufmann, pp . 31-45 .",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Adjunct Test for Discourse Processing in MUC-4 (in thi s volume)",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschman, L ., An Adjunct Test for Discourse Processing in MUC-4 (in thi s volume) .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the Third Message Understanding Evaluation an d Conference",
"authors": [
{
"first": "B",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Third Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sundheim, B ., Overview of the Third Message Understanding Evaluation an d Conference, in Proceedings of the Third Message Understanding Conference (MUC -",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Nineteen organizations participated in the development of the MUC-4 systems, including 1 2 of the 17 MUC-3 participants . These veteran groups are BBN Systems and Technologies (Cambridge, MA), General Electric (Schenectady, NY), Hughe s Research Laboratories (Malibu, CA), Language Systems, Inc . (Woodland Hills, CA) , McDonnell Douglas Electronic Systems (Santa Ana, CA), New York University (Ne w York City, NY), Paramax Systems 2 (Paoli, PA), PRC, Inc . (McLean, VA), SRI International (Menlo Park, CA), the University of Maryland together with ConQuest, Inc . 3 (Baltimore, MD), and the University of Massachusetts (Amherst , MA) .",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>AT PRE</td><td>M/M OVG</td><td>AT OV G</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td/><td>PRE</td><td>OVG</td><td/><td>F-MEA S</td></tr><tr><td/><td/><td/><td/><td>(R&amp;P)</td></tr><tr><td>TST3 BEST</td><td>5 8</td><td>5 5</td><td>26_</td><td>5 6 .01_</td></tr><tr><td>TST4 BEST</td><td>6 2</td><td>5 3</td><td>3 4</td><td>-5 7 .0 5</td></tr><tr><td>TST3 WORST</td><td>2</td><td>8</td><td>9 0</td><td>4 .4 7</td></tr><tr><td>TST4 WORST</td><td>3</td><td>10</td><td>8 7</td><td>5 .7 9</td></tr><tr><td>TST3 AVG _</td><td>31</td><td>34</td><td>55</td><td>31 .3 5</td></tr><tr><td>TST4 AVG</td><td>35</td><td>33</td><td>57</td><td>32 .26 ,</td></tr></table>"
}
}
}
}