{
"paper_id": "M92-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:12.852990Z"
},
"title": "MUC-4 EVALUATION METRIC S",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Science Applications International Corporation",
"location": {
"addrLine": "10260 Campus Point Drive, M/S A2-F",
"postCode": "9212",
"settlement": "San Diego",
"region": "CA"
}
},
"email": "chinchor@esosun.css.gov"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "INTRODUCTION The MUC-4 evaluation metrics measure the performance of the message understanding systems. This paper describes the scoring algorithms used to arrive at the metrics as well as the improvements that were made to th e MUC-3 methods. MUC-4 evaluation metrics were stricter than those used in MUC-3. Given the differences in scoring between MUC-3 and MUC-4, the MUC-4 systems' scores represent a larger improvement over MUC-3 performance than the numbers themselves suggest. The major improvements in the scoring of MUC-4 were the automation of the scoring of set fill slots, partia l automation of the scoring of string fill slots, content-based mapping enforced across the board, the focus on the AL L TEMPLATES score as opposed to the MATCHED/MISSING score in MUC-3, the exclusion of the template id scores from the score tallies, and the addition of the object level scores, string fills only scores, text filtering scores , and F-measures. These improvements and their effects on the scores are discussed in detail in this paper. SCORE REPORT The MUC-4 Scoring System produces score reports in various formats. These reports show the scores for the templates and messages in the test set. Varying amounts of detail can be reported. The scores that are of the most interest are those that appear in the comprehensive summary report. Figure 1 shows a sample summary score report. The rows and columns of this report are explained below.",
"pdf_parse": {
"paper_id": "M92-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "INTRODUCTION The MUC-4 evaluation metrics measure the performance of the message understanding systems. This paper describes the scoring algorithms used to arrive at the metrics as well as the improvements that were made to th e MUC-3 methods. MUC-4 evaluation metrics were stricter than those used in MUC-3. Given the differences in scoring between MUC-3 and MUC-4, the MUC-4 systems' scores represent a larger improvement over MUC-3 performance than the numbers themselves suggest. The major improvements in the scoring of MUC-4 were the automation of the scoring of set fill slots, partia l automation of the scoring of string fill slots, content-based mapping enforced across the board, the focus on the AL L TEMPLATES score as opposed to the MATCHED/MISSING score in MUC-3, the exclusion of the template id scores from the score tallies, and the addition of the object level scores, string fills only scores, text filtering scores , and F-measures. These improvements and their effects on the scores are discussed in detail in this paper. SCORE REPORT The MUC-4 Scoring System produces score reports in various formats. These reports show the scores for the templates and messages in the test set. Varying amounts of detail can be reported. The scores that are of the most interest are those that appear in the comprehensive summary report. Figure 1 shows a sample summary score report. The rows and columns of this report are explained below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The basic scoring categories are located at the top of the score report These categories are defined in Tabl e 1 . The scoring program determines the scoring category for each system response . Depending on the type of slot being scored, the program can either determine the category automatically or prompt the user to determine th e amount of credit the response should be assigned .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
{
"text": "\u2022 If the response and the key are deemed to be equivalent, then the fill is assigned the category of correct (COR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
{
"text": "\u2022 If partial credit can be given, the category is partial (PAR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
{
"text": "\u2022 If the key and response simply do not match, the response is assigned an incorrect (INC) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
{
"text": "\u2022 If the key has a fill and the response has no corresponding fill, the response is missing (MIS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
{
"text": "\u2022 If the response has a fill which has no corresponding fill in the key, the response is spurious (SPU) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
{
"text": "\u2022 If the key and response are both left intentionally blank, then the response is scored as noncommittal (NON) . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Categories",
"sec_num": null
},
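The six categories above amount to a small decision procedure over a key fill and a response fill. The following is a minimal, purely illustrative Python sketch; the function names and the equivalence/partial-credit tests are placeholders standing in for the MUC-4 scorer's actual (partly interactive) logic, not a transcription of it.

```python
def score_fill(key, response, equivalent, partial_credit):
    """Return one of COR, PAR, INC, MIS, SPU, NON for a single slot fill.

    `equivalent` and `partial_credit` are caller-supplied tests; in the real
    scorer some of these decisions are automated and some are asked of the user.
    """
    if key is None and response is None:
        return "NON"          # both intentionally left blank
    if response is None:
        return "MIS"          # key has a fill, response does not
    if key is None:
        return "SPU"          # response has a fill with no counterpart in the key
    if equivalent(key, response):
        return "COR"
    if partial_credit(key, response):
        return "PAR"
    return "INC"
```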
{
"text": "The evaluation metrics were adapted from the field of Information Retrieval (IR) and extended for MUC . They measure four different aspects of performance and an overall combined view of performance . The four evaluation metrics of recall, precision, overgeneration, and fallout are calculated for the slots and summary score rows (se e Table 2 ). These are listed in the four rightmost columns of the score report in Figure 1 . The fifth metric, the F-measure, is a combined score for the entire system and is listed at the bottom of the score report . In IR, a common way of representing the characteristic performance of systems is in a precision-recal l graph . Normally as recall goes up, precision tends to go down and vice versa [1] . One approach to improving recall is to increase the system's generation of slot fills . To avoid overpopulation of the template database by the message understanding systems, we introduced the measure of overgeneration . Overgeneration (OVG) measures the percentage of the actual attempted fills that were spurious . Fallout (FAL) is a measure of the false positive rate for slots with fills that come from a finite set. Fallout is the tendency for a system to choose incorrect responses as the number of possible responses increases . Fallout is calculated for all of the set fill slots listed in the score report in Figure 1 and is shown in the last column on the right . Fallout can be calculated for the SET FILLS ONLY row because that row contains the summary tallies for the set fill slots . The TEXT FILTERING row discussed later contains a score for fallout because the text filtering problem als o has a finite set of responses possible . ",
"cite_spans": [
{
"start": 736,
"end": 739,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 2",
"ref_id": null
},
{
"start": 418,
"end": 426,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1360,
"end": 1368,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
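Using the POS and ACT definitions given with Figure 1 (possible = correct + partial + incorrect + missing; actual = correct + partial + incorrect + spurious), the per-row metrics can be sketched as below. This sketch assumes the usual MUC convention that a partial fill earns half credit, which is an assumption here rather than a quote from this paper; the real scorer's arithmetic and rounding may differ. Fallout is omitted because it also depends on the size of the finite fill set for each slot, which is not shown in this passage.

```python
def row_metrics(cor, par, inc, mis, spu):
    """Recall, precision, and overgeneration (as percentages) for one score-report row."""
    possible = cor + par + inc + mis        # POS: fills present in the key
    actual = cor + par + inc + spu          # ACT: fills attempted by the system
    credit = cor + 0.5 * par                # assumed half credit for partial fills
    recall = 100.0 * credit / possible if possible else 0.0        # REC
    precision = 100.0 * credit / actual if actual else 0.0         # PRE
    overgeneration = 100.0 * spu / actual if actual else 0.0       # OVG
    return recall, precision, overgeneration
```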
{
"text": "The four rows labeled \"inc-total,\" \"perp-total,\" \"phys-tgt-total,\" and \"hum-tgt-total\" in the summary scor e report in Figure 1 show the subtotals for associated groups of slots referred to as \"objects . \" These are object level scores for the incident, perpetrator, physical target, and human target . They are the sums of the scores shown for th e F= individual slots associated with the object as designated by the first part of the individual slot labels . The template for MUC-4 was designed as a transition from a flat template to an object-oriented template . Although referred to as object-oriented, the template is not strictly object-oriented, but rather serves as a data representation upon which a n object-oriented system could be built [3] . However, no object-oriented database system was developed using this template as a basis.",
"cite_spans": [
{
"start": 750,
"end": 753,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Summary Rows",
"sec_num": null
},
{
"text": "The four summary rows in the score report labelled \"MATCHED/MISSING,\" \"MATCHED/SPURIOUS, \" \"MATCHED ONLY,\" and \"ALL TEMPLATES\" show the accumulated tallies obtained by scoring spurious and missing templates in different manners . Each message can cause multiple templates to be generated depending on the number of terrorist incidents it reports. The keys and responses may not agree in the number of templates generate d or the content-based mapping restrictions may not allow generated key and response templates to be mapped to eac h other. These cases lead to spurious and/or missing templates. There are differing views as to how much systems should be penalized for spurious or missing templates depending upon the requirements of the application . These differing views have lead us to provide the four ways of scoring spurious and missing information as outlined in Table 3 . These four manners of scoring provide four points defining a rectangle on a precision-recall graph which we refer to as the \"region of performance\" for a system (see Figure 2) . At one time, we thought that it would be useful to compare the position of the center of this rectangle across systems, but later realized that two systems could have th e same centers but very different size rectangles . Plotting the entire region of performance for each system does provid e a useful comparison of systems .",
"cite_spans": [],
"ref_spans": [
{
"start": 874,
"end": 881,
"text": "Table 3",
"ref_id": null
},
{
"start": 1050,
"end": 1059,
"text": "Figure 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Summary Rows",
"sec_num": null
},
{
"text": "In Figure 1 , the score report contains two summary rows (SET FILLS ONLY and STRING FILLS ONLY ) which give tallies for a subset of the slots based on the type of fill the slot can take. These rows give tallies that show the system's performance on these two types of slots: set fill slots and string fill lots . Set fill slots take a fill from a finite set specified in a configuration file . String fill slots take a fill that is a string from a potentially infinite set .",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Figure 2: Region of Performance",
"sec_num": null
},
{
"text": "The purpose of the text filtering row is to report how well systems distinguish relevant and irrelevant messages. The scoring program keeps track of how many times each of the situations in the contingency table arises for a system (see Table 4 ). It then uses those values to calculate the entries in the TEXT FILTERING row. The evaluation metrics are calculated for the row as indicated by the formulas at the bottom of Table 4 . An analysis of the text filtering results appears elsewhere in these proceedings .",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 4",
"ref_id": null
},
{
"start": 422,
"end": 429,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text Filtering",
"sec_num": null
},
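Since Table 4 itself is not reproduced in this parse, the sketch below uses standard contingency-table definitions for the text filtering row; the exact formulas at the bottom of Table 4 may differ in detail, so treat this as an assumption-laden illustration rather than the official calculation.

```python
def text_filtering_scores(decisions):
    """decisions: iterable of (system_said_relevant, key_is_relevant) booleans, one per message."""
    decisions = list(decisions)
    tp = sum(1 for s, k in decisions if s and k)          # relevant and detected
    fp = sum(1 for s, k in decisions if s and not k)      # irrelevant but detected
    fn = sum(1 for s, k in decisions if not s and k)      # relevant but missed
    tn = sum(1 for s, k in decisions if not s and not k)  # irrelevant and ignored
    recall = 100.0 * tp / (tp + fn) if (tp + fn) else 0.0
    precision = 100.0 * tp / (tp + fp) if (tp + fp) else 0.0
    fallout = 100.0 * fp / (fp + tn) if (fp + tn) else 0.0   # false positive rate
    return recall, precision, fallout
```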
{
"text": "The major improvements in the scoring of MUC-4 included: These changes are interdependent ; they interact in ways that affect the overall scores of systems and serve to make MUC-4 a more demanding evaluation than MUC-3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IMPROVEMENTS OVER MUC-3",
"sec_num": null
},
{
"text": "The complete automation of the scoring of set fill slots was possible due to the information in a slot configuration file which told the program the hierarchical structure of the set fills . If a response exactly matches the key, it is scored as correct. If a response is a more general set fill element than the key according to the pre-specified hierarchy , it is scored as partially correct. If the response cannot be scored as correct or partially correct by these criteria then th e set fill is scored as incorrect . All set fills can thus be automatically scored . Often, however, the set fill is cross-referenced to another slot which is a string fill . The scoring of string fills cannot be totally automated . Instead the scoring program refers to the history of the interactive scoring of the cross-referenced slot, and with that information, it the n determines the score for the set fill slot which cross-references the string fill slot .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 4 : Text Filtering",
"sec_num": null
},
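The automatic set-fill rule described above (exact match is correct, a more general element in the pre-specified hierarchy is partial, anything else is incorrect) can be sketched as follows. The child-to-parent map is a hypothetical representation of the slot configuration file, not its actual format.

```python
def score_set_fill(key, response, parent):
    """Score a set fill; `parent` maps each element to its more general element (or is absent at the root)."""
    if response == key:
        return "COR"
    node = parent.get(key)
    while node is not None:          # walk up the hierarchy from the key
        if node == response:
            return "PAR"             # response is a more general element than the key
        node = parent.get(node)
    return "INC"
```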
{
"text": "The scoring of the string fill slots was partially automated by using two methods . In the first method, used for mapping purposes, strings were considered correct if there was a one-word overlap and the word was not from a short list of premodifiers. In the second method, used for scoring purposes, some mismatching string fills could b e matched automatically by stripping these premodifiers from the key and response and seeing if the remaining materia l matched. Other mismatching string fills caused the user to be queried for the score . The automation of the set fill an d string fill scoring was critical to the functioning of the content-based mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 4 : Text Filtering",
"sec_num": null
},
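A rough sketch of the two partial automations just described; the premodifier list and the whitespace tokenization are placeholders, not the list or matching logic the scorer actually used.

```python
PREMODIFIERS = {"the", "a", "an"}   # hypothetical short list of premodifiers

def overlaps_for_mapping(key, response):
    """Mapping test: at least one shared word that is not a premodifier."""
    shared = set(key.lower().split()) & set(response.lower().split())
    return bool(shared - PREMODIFIERS)

def matches_for_scoring(key, response):
    """Scoring test: the strings match once premodifiers are stripped from both."""
    strip = lambda s: [w for w in s.lower().split() if w not in PREMODIFIERS]
    return strip(key) == strip(response)
```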
{
"text": "The content-based mapping restrictions were added to MUC-4 to prevent fortuitous mappings whic h occurred in MUC-3 . In MUC-3, templates were mapped to each other based on a simple optimization of scores . Sometimes the optimal score was the result of a lucky mapping which was not really the most appropriate mapping . Certain slots such as incident type were considered essential for the mapping to occur in MUC-4 . The mapping restrictions can be specified in the scorer's configuration file using a primitive logic . For the MUC-4 testing, the templates must have at least a partial match on the incident type and at least one of the following slots : The content-based mapping restrictions could result in systems with sparse templates having few or no templates mapped. When a template does not map, the result is one missing and one spurious template. This kind of penalty is severe when the ALL TEMPLATES row is the official score, because the slots in the unmapped templates all count against the system as either missing or spurious. This aspect of the scoring was one of the main reasons that MUC-4 was more demanding than MUC-3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 4 : Text Filtering",
"sec_num": null
},
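The mapping restriction can be read as a predicate over a key/response template pair: at least a partial match on incident type, plus a match on at least one slot from a designated list (the list itself is not reproduced in this parse). A hedged sketch, with the slot list, the slot name, and the partial-match test all left as parameters since none of them are specified here:

```python
def may_map(key_tpl, resp_tpl, required_any_slots, matches_at_least_partially):
    """True if a key template and a response template are allowed to be mapped."""
    # The incident type slot must match at least partially.
    if not matches_at_least_partially(key_tpl.get("incident-type"),
                                      resp_tpl.get("incident-type")):
        return False
    # At least one of the designated slots must also match at least partially.
    return any(matches_at_least_partially(key_tpl.get(slot), resp_tpl.get(slot))
               for slot in required_any_slots)
```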
{
"text": "The focus on the ALL TEMPLATES score as opposed to the MATCHED/MISSING score in MUC-3 mean t that the strictest scores for a system were its official scores . So even if a system's official scores were the same for MUC-3 and MUC-4, the system had improved in MUC-4 . Additionally, the scores for the template id row were not included in the summary row tallies in MUC-4 as they had been in MUC-3 . Previously, systems were getting extra credit for the optimal mapping. This bonus was taken away by the exclusion of the template id scores from the score tallies in MUC-4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant Is",
"sec_num": null
},
{
"text": "In addition to the more demanding scoring, MUC-4 also measured more information about system performance . Object level scores were added to see how well the systems did on different groupings of slots concerning th e incident, perpetrator, physical target, and human target. Also, the score for the string fill slots was tallied as a comparison with the score for set fill slots that was already there in MUC-3. The text filtering scores gave additional information on the capabilities of systems to determine relevancy . The F-measures combined recall and precision to give a single measure of performance for the systems .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant Is",
"sec_num": null
}
],
"back_matter": [
{
"text": "The evaluation metrics used in MUC-4 gave a stricter and more complete view of the performance of th e systems than the metrics used in MUC-3 . The improved overall numerical scores of the systems under these more difficult scoring conditions indicate that the state of the art has moved forward with MUC-4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMMARY",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Information Retrieval : Data Structures & Algorithms",
"authors": [],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frakes, W.B .and R . Baeza-Yates (eds .) (1992) Information Retrieval : Data Structures & Algorithms . Englewood Cliffs: Prentice Hall .",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Information Retrieval",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Rijsbergen, CJ. (1979) Information Retrieval. London: Butterworths.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Obiect-Oriented Concepts. Databases. and Applications",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Nierstrasz",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nierstrasz, Oscar (1989) \"A Survey of Object-Oriented Concepts\" in W . Kim and F. H. Lochovsky (Eds . ) Obiect-Oriented Concepts. Databases. and Applications. New York: Addison-Wesley.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": ". . . . .. . . . . . .. . . . . . .. . . . . . . . .. . . . . . . . .. . . . . .. . . . . . . . .. . . . . . . .. . . . . . .. . . . . . . . . . .. . . . . . .. . . . . . . . Inc-be . . . . . .. . . . . . . .. . . . . . . .. . . . . . .. . . . . . . . .. . . . . . .. . . . . . . . .. . . . . . . . .. . . . . . .. . inc-stage . . . . . .. . . . . . . . .. . . . . . .. . . . . . . .. . . . . . . . . . .. . . . . . . . .. . . . . .. . . . . . . .. . . . . . . . . .. . . . . . . . .. . . . . . .. . . . . . . .. . . . . . . .. . . . . . . . . .. . . . . .. . . . . . . .. . . . inc-instr-type . . . . . .. . . . . . . .. . . . . . . .. . . . . . . .. . . . . . . . .. . . . . . . .. . . . . . . .. . . . . . .. perp-Ind-id . . . . . .. . . . . . . .. . . . . . . .. . . . . . .. . perp-org-conf . . .. . . . . . . . .. . . . . . .. . . . . . . .. . . . . . .. . . . . . . .. . . . . . .. . . . phys-tgt-type . . .. . . . . . . .. . . . . . . .. . . . . . . .. . . . . . . ... .phys-tgt-natlon phys-tgt-total-",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": ". . hum-tgt-desc hum-tgt-num . . .. . . . . . . .. . . . . . . .. . . . . .. . . . . . . . . .. . . . . . the two columns titled ICR (interactive correct) and IPA (interactive partial) indicate the result s of interactive scoring . Interactive scoring occurs when the scoring system finds a mismatch that it cannot automatically resolve . It queries the user for the amount of credit to be assigned . The number of fills that the user assigns to the category of correct appears in the ICR column; the number of fills assigned partial credit by the user appears in the IPA column . In Figure 1, the two columns labelled possible (POS) and actual (ACT) contain the tallies of the numbers o f slots filled in the key and response, respectively . Possible and actual are used in the computation of the evaluation metrics . Possible is the sum of the correct, partial, incorrect, and missing . Actual is the sum of the correct, partial , incorrect, and spurious.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Evaluation Metric sRecall (REC) is the percentage of possible answers which were correct . Precision (PRE) is the percentage of actual answers given which were correct A system has a high recall score if it does well relative to the number of slo t fills in the key. A system has a high precision score if it does well relative to the number of slot fills it attempted .",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "These four measures of recall, precision, overgeneration, and fallout characterize different aspects of syste m performance. The measures of recall and precision have been the central focus for analysis of the results . Overgeneration is a measure which should be kept under a certain value . Fallout was rarely used in the analyses done of the results. It is difficult to rank the systems since the measures of recall and precision are often equally important ye t negatively correlated. In IR, a method was developed for combining the measures of recall and precision to get a single measure. In MUC-4, we use van Rijsbergen's F-measure [1, 2] for this purpose . The F-measure provides a way of combining recall and precision to get a single measure which fall s between recall and precision . Recall and precision can have relative weights in the calculation of the F-measure giving it the flexibility to be used for different applications . The formula for calculating the F-measure is (13 2 +1 .0)XPX R 132 xP+ R where P is precision, R is recall, and 13 is the relative importance given to recall over precision . If recall and precision are of equal weight, 13 = 1 .0. For recall half as important as precision, 13 = 0.5 . For recall twice as important as precision,13 = 2 .0.The F-measure is higher if the values of recall and precision are more towards the center of the precisionrecall graph than at the extremes and their sums are the same. So, for 3 = 1 .0, a system which has recall of 50% an d precision of 50% has a higher F-measure than a system which has recall of 20% and precision of 80%. This behavior is exactly what we want from a single measure .The F-measures are reported in the bottom row of the summary score report in Figure 1 . The F-measure with recall and precision weighted equally is listed as \"P&R .\" The F-measure with precision twice as important as recall is listed as \"2P&R .\" The F-measure with precision half as important as recall is listed as \"P&2R .\" The F-measure is calculated from the recall and precision values in the ALL TEMPLATES row. Note that the recall and precision value s in the ALL TEMPLATES row are rounded integers and that this causes a slight inaccuracy in the F-measures . The values used for calculating statistical significance of results are floating point values all the way through the calculations. Those more accurate values appear in the paper \"The Statistical Significance of the MUC-4 Results \" and in Appendix G of these proceedings .",
"uris": null
},
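The F-measure formula above transcribes directly into code; as noted, the official values were computed from the rounded ALL TEMPLATES recall and precision, whereas the sketch below simply uses whatever values it is given. The zero-denominator guard is an added convenience, not part of the paper's formula.

```python
def f_measure(precision, recall, beta=1.0):
    """van Rijsbergen's F-measure; beta is the weight of recall relative to precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return ((beta ** 2 + 1.0) * precision * recall) / (beta ** 2 * precision + recall)

# "P&R"  : beta = 1.0 (recall and precision equally weighted)
# "2P&R" : beta = 0.5 (recall half as important as precision)
# "P&2R" : beta = 2.0 (recall twice as important as precision)
```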
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Manners of ScoringThe MATCHED ONLY manner of scoring penalizes the least for missing and spurious templates by scorin g them only in the template id slot. This template id score does not impact the overall score because the template id slo t is not included in the summary tallies; the tallies only include the other individual slots . The MATCHED/MISSING method scores the individual slot fills that should have been in the missing template as missing and scores the template as missing in the template id slot ; it does not penalize for slot fills in spurious templates except to score the spurious template in the template id slot . MATCHED/SPURIOUS, on the other hand, penalizes for the individual slo t fills in the spurious templates, but does not penalize for the missing slot fills in the missing templates . ALL TEM-PLATES is the strictest manner of scoring because it penalizes for both the slot fills missing in the missing template s and the slots filled in the spurious templates . The metrics calculated based on the scores in the ALL TEMPLATES row are the official MUC-4 scores .",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "automating the scoring as effectively as possible \u2022 restricting the mapping of templates to cases where particular slots matched in content a s opposed to mapping only according to an optimized scor e \u2022 a focus on the ALL TEMPLATES score as opposed to the MATCHED/MISSING score i exclusion of template id scores from the summary score tallie s \u2022 the inclusion of more summary information including object level scores, string fills onl y scores, text filtering scores, and F-measures.",
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table"
}
}
}
}