{
"paper_id": "M92-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:23.514126Z"
},
"title": "USC: MUC-4 Test Results and Analysis",
"authors": [
{
"first": "D",
"middle": [],
"last": "Moldovan",
"suffix": "",
"affiliation": {},
"email": "moldovan@gringo.usc.edu"
},
{
"first": "S",
"middle": [],
"last": "Cha",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "M",
"middle": [],
"last": "Chung",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "K",
"middle": [],
"last": "Hendrickson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "J",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "S",
"middle": [],
"last": "Kowalsk",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The University of Southern California is participating, for the first time, in the message understanding conferences. A team consisting of one faculty member and five doctoral students started the work for MUC-4 in January 1992. This work is an extension of a project to build a massively parallel computer for natural language processing called the Semantic Network Array Processor (SNAP). RESULTS Scoring Results During the final week of testing, our system was run on test sets TST3 and TST4. Test set TST3 contains 100 articles from the same time period as the training corpus (DEV) and test sets TST1 and TST2. The summary of score results for TST3 is shown in Table 1. Test set TST4 contains 100 articles from a different time period than those of TST3. The summary of score results for TST4 is shown in Table 2. The complete score results for TST3 and TST4 can be found in Appendix G.",
"pdf_parse": {
"paper_id": "M92-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "The University of Southern California is participating, for the first time, in the message understanding conferences. A team consisting of one faculty member and five doctoral students started the work for MUC-4 in January 1992. This work is an extension of a project to build a massively parallel computer for natural language processing called the Semantic Network Array Processor (SNAP). RESULTS Scoring Results During the final week of testing, our system was run on test sets TST3 and TST4. Test set TST3 contains 100 articles from the same time period as the training corpus (DEV) and test sets TST1 and TST2. The summary of score results for TST3 is shown in Table 1. Test set TST4 contains 100 articles from a different time period than those of TST3. The summary of score results for TST4 is shown in Table 2. The complete score results for TST3 and TST4 can be found in Appendix G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The recall metric (REC column in Tables 1 and 2) is a measure of the system's ability to extract relevant information from the text. For the TST3 test set, our recall score was 7% as shown in the ALL TEMPLATES and MATCHED/MISSING rows of Table 1. If missing templates are disregarded, our recall score for TST3 improves to 30% as shown in the MATCHED/SPURIOUS and MATCHED ONLY rows of Table 1. For the TST4 test set, our recall score was 12% as shown in the ALL TEMPLATES and MATCHED/MISSING rows of Table 2. If missing templates are disregarded, our recall score for TST4 improves to 31% as shown in the MATCHED/SPURIOUS and MATCHED ONLY rows of Table 2. The precision metric (PRE column in Tables 1 and 2) is a measure of the correctness of the system's output. For the TST3 test set, our precision score was 16% as shown in the ALL TEMPLATES and MATCHED/SPURIOUS rows of Table 1. If spurious templates are disregarded, our precision score for TST3 improves to 58% as shown in the MATCHED/MISSING and MATCHED ONLY rows of Table 1. For the TST4 test set, our precision score was 26% as shown in the ALL TEMPLATES and MATCHED/SPURIOUS rows of Table 2. If spurious templates are disregarded, our precision score for TST4 improves to 69% as shown in the MATCHED/MISSING and MATCHED ONLY rows of Table 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 48,
"text": "Tables 1 and 2)",
"ref_id": "TABREF0"
},
{
"start": 242,
"end": 249,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 394,
"end": 401,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 511,
"end": 518,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 663,
"end": 677,
"text": "Tables 1 and 2",
"ref_id": "TABREF0"
},
{
"start": 848,
"end": 855,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1003,
"end": 1010,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1124,
"end": 1131,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 1278,
"end": 1285,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Recall",
"sec_num": null
},
{
"text": "The large disparity of scores between TST3 and TST4 can be partially attributed to the ability of our system to generate the required templates with enough correct slots to exceed the minimum matching criteria of the scoring software. For TST3, we generated only 16 templates out of the 103 possible, and 61 of our templates were spurious. We did much better with TST4, in that we generated 24 of the 71 possible templates and had only 41 spurious templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Results",
"sec_num": null
},
{
"text": "The total effort for MUC-4 is estimated at approximately 1,450 hours. This breaks down as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEVEL OF EFFORT",
"sec_num": null
},
{
"text": "Knowledge base construction 25%, Preprocessor 15%, Memory-based parser 25%, Template generation 20%, System integration 10%, Scoring procedure 5%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEVEL OF EFFORT",
"sec_num": null
},
{
"text": "The main limiting factor for us was that we started almost from scratch. We did not have a lexicon, parser, knowledge base, or inference engine; we had only ideas and a small parser which turned out to be useless for this large application. As our knowledge base grew we started to run out of memory in the parallel computer's controller board, so we had to redesign this board. Since it was not ready in time to be useful for MUC-4 testing, we ended up using the software simulator of the parallel computer, which was very slow. It takes more than one hour to process a message using the simulator, but only seconds when using the actual parallel computer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMITING FACTORS",
"sec_num": null
},
{
"text": "Regarding the limiting factors in the performance of the system, we have noticed that: (1) our discourse processing capability was insufficient, (2) the lexicon was too small, (3) the parser does not address enough linguistic problems, (4) more basic concept sequences are needed, and (5) more inferencing rules are needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMITING FACTORS",
"sec_num": null
},
{
"text": "Although the MUC-4 experiment presented many challenging problems, we have not yet reached the limit of our technology. We built the system using only one test message, and had a working system only starting in April. The last month was used to fine-tune the system using all 100 messages in the previous corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMITING FACTORS",
"sec_num": null
},
{
"text": "Memory-based parsing seems powerful and offers many advantages. The use of integrated semantic and syntactic parsing was successful. The structure of the knowledge base and the dynamic combination of various concept sequences to handle arbitrary input sentences worked well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Strengths",
"sec_num": null
},
{
"text": "Because of insufficient concept sequences in the knowledge base, the parser's output is mostly a syntactic description of the sentences, as opposed to a semantic description. The template generator does not yet do any discourse processing. High-level inferencing is needed. The knowledge base was built to work with the parser, without much regard for the inferencing process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weaknesses",
"sec_num": null
},
{
"text": "Assuming that the domain and the required output are changed, approximately 75% of the knowledge base and the lexicon is reusable. None of the inferencing rules for filling templates are reusable, although some of the structure might be reusable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REUSABILIT Y",
"sec_num": null
},
{
"text": "We have come to a greater appreciation of how complex the problem really is. Further improvements of the system need to focus on discourse processing and high-level inferencing. Also, common-sense knowledge must be added to the knowledge base, and parallel inferencing methods must be developed to apply this knowledge. We also see a great need for automating the construction and enhancement of the knowledge base.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WHAT WAS LEARNE D",
"sec_num": null
},
{
"text": "Overall, our experience with MUC-4 has been useful and rewarding. More than anything, it has focused our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WHAT WAS LEARNE D",
"sec_num": null
},
{
"text": "We are grateful to Richard Tong from ADS for making available to us part of the dictionary and taxonomy, and to Beth Sundheim for facilitating this. This work was partially funded by the National Science Foundation under grant #MIP-9009109.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGEMEN T",
"sec_num": null
},
{
"text": "The papers in this section, which were prepared by each of the sites that completed the MUC-4 evaluation, describe the systems that were tested. The papers are intended not only to outline each system's architecture but also to provide the reader with an understanding of the effectiveness of the techniques that were used to handle the particular phenomena found in the MUC-4 corpus. To make the discussion of these techniques concrete, most of the sites make specific reference to some of the phenomena found in message TST2-MUC4-0048 from the dry-run test set and discuss their system's handling of those phenomena. The full text and answer key templates for that message are found in Appendix F of the proceedings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PART III : SYSTEM DESCRIPTION S",
"sec_num": null
},
{
"text": "The sites were asked to include the following pieces of information in this paper: Reference resolution - Template fill * Sample filled-in template, with an explanation of interesting things:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PART III : SYSTEM DESCRIPTION S",
"sec_num": null
},
{
"text": "things system got right - things system got wrong",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PART III : SYSTEM DESCRIPTION S",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Background: how/for what the system was developed, and how much time was spent on the system before MUC-4 * Explanation of the modules of the system * Explanation of flow of control (interleaved/sequential/...) * Explanation (without system-specific jargon) of processing stages: Identification of relevant texts and paragraphs - Lexical look-up (example of output and lexicon) - Syntactic analysis (example of output and grammar) - Semantic analysis (example of output and semantic rules) -",
"uris": null,
"num": null
},
"TABREF0": {
"text": "",
"html": null,
"content": "<table><tr><td>Precision</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>SLOT</td><td>POS</td><td>ACT</td><td>COR</td><td>PAR</td><td>INC</td><td>ICR</td><td>IPA</td><td>SPU</td><td>MIS</td><td>NON</td><td>REC</td><td>PRE</td><td>OVG</td><td>FA L</td></tr><tr><td>MATCHED/MISSING</td><td>1508</td><td>174</td><td>85</td><td>31</td><td>26</td><td>4</td><td>11</td><td>32</td><td>1366</td><td>1142</td><td>7</td><td>58</td><td>1 8</td><td/></tr><tr><td>MATCHED/SPURIOUS</td><td>332</td><td>637</td><td>85</td><td>31</td><td>26</td><td>4</td><td>11</td><td>495</td><td>190</td><td>1110</td><td>30</td><td>16</td><td>7 8</td><td/></tr><tr><td>MATCHED ONLY</td><td>332</td><td>174</td><td>85</td><td>31</td><td>26</td><td>4</td><td>11</td><td>32</td><td>190</td><td>148</td><td>30</td><td>58</td><td>1 8</td><td/></tr><tr><td>ALL TEMPLATES</td><td>1508</td><td>637</td><td>85</td><td>31</td><td>26</td><td>4</td><td>11</td><td>495</td><td>1366</td><td>2104</td><td>7</td><td>16</td><td>7 8</td><td/></tr><tr><td>SET FILLS ONLY</td><td>719</td><td>89</td><td>46</td><td>16</td><td>14</td><td>0</td><td>1</td><td>13</td><td>643</td><td>537</td><td>8</td><td>61</td><td>15</td><td>0</td></tr><tr><td>STRING FILLS ONLY</td><td>390</td><td>48</td><td>20</td><td>5</td><td>7</td><td>1</td><td>5</td><td>16</td><td>358</td><td>320</td><td>6</td><td>47</td><td>33</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>P&amp;R</td><td/><td>2P&amp;R</td><td/><td/><td>P&amp;2R</td></tr><tr><td>F-MEASURES</td><td/><td/><td/><td/><td/><td/><td/><td/><td>9 .74</td><td/><td>12 .73</td><td/><td/><td>7.89</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "",
"html": null,
"content": "<table><tr><td>SLOT MATCHED/MISSING MATCHED/SPURIOUS MATCHED ONLY ALL TEMPLATES SET FILLS ONLY STRING FILLS ONLY F-MEASURES</td><td>POS 1105 456 456 1105 538 288</td><td>ACT 208 508 208 508 115 50</td><td>COR 124 124 124 124 78 25</td><td>PAR 40 40 40 40 20 6</td><td>INC 30 30 30 30 10 13</td><td>ICR 8 8 8 8 0 0</td><td>IPA 23 23 23 23 6 6</td><td>SPU 14 314 14 314 7 6</td><td>MIS 911 262 262 911 430 244 P&amp;R 17 .76</td><td>NON 745 844 236 1353 339 209</td><td>REC 13 32 32 13 16 10 2P&amp;R 22 .75</td><td>PRE 69 28 69 28 76 56</td><td>OVG 7 6 2 7 62 6 1 2</td><td>FAL 0 P&amp;2R 14 .56</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF2": {
"text": "Summary of Score Results for TST4.",
"html": null,
"content": "<table><tr><td>The precision metric (PRE column in</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}