{
"paper_id": "M91-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:15:37.952729Z"
},
"title": "ADVANCED DECISION SYSTEMS' CODEX: MUC-3 TEST RESULTS AND ANALYSI S",
"authors": [
{
"first": "Laura",
"middle": [
"Blumer"
],
"last": "Balcom",
"suffix": "",
"affiliation": {
"laboratory": "Advanced Decision System s",
"institution": "",
"location": {
"addrLine": "1500 Plymouth Stree t Mountain View",
"postCode": "9404 3",
"region": "California"
}
},
"email": ""
},
{
"first": "Richard",
"middle": [
"M"
],
"last": "Tong",
"suffix": "",
"affiliation": {
"laboratory": "Advanced Decision System s",
"institution": "",
"location": {
"addrLine": "1500 Plymouth Stree t Mountain View",
"postCode": "9404 3",
"region": "California"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "INTRODUCTIO N ADS has developed a general purpose message data extraction system concept, called CODEX (for COntex t directed Data EXtraction), that we instantiated for MUC-3 as shown in Figure 1 .",
"pdf_parse": {
"paper_id": "M91-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "INTRODUCTIO N ADS has developed a general purpose message data extraction system concept, called CODEX (for COntex t directed Data EXtraction), that we instantiated for MUC-3 as shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "profile",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Profiler (RUBRIC)",
"sec_num": null
},
{
"text": "The underlying principle of the CODEX concept is to do the analysis in two phases . First, we perform a \"surface level\" analysis of the message text to determine if there is any evidence for data items of interest . If there is not, then no further processing of the message is attempted. If, however, there is evidence for the items of interest, we proceed to the second phase in which detailed analysis is performed only on those sections of the message identified b y the first phase as having potentially relevant material in them .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Profiler (RUBRIC)",
"sec_num": null
},
{
"text": "For MUC-3, the profiler was set-up to overgenerate partially completed templates based on the detection o f the basic event types (i .e ., murder, bombing, kidnapping, etc .) at the sentence level, with the parser/analyzer used to either reject these hypothesized templates or to complete them based on more detailed analysis of the text. The parser/ analyzer would also do the appropriate reasoning to determine whether the incident was attempted, threatened, or accomplished. The generation of templates based solely on profiler output was designed to be a fail-safe feature in th e event of parser failure . Since CAUCUS was not enabled for TST2, this fail-safe mechanism provided our only output . It produced one template per incident type per sentence in which a concept of the incident type was found . Table 1 shows the official Total Slot Scores for TST2-MUC3 . SLOT POS ACT COR PAR INC ICR IPA SPU MIS NON REC PRE OVG FA L template-id 118 221 97 0 0 0 0 124 21 17 82 44 5 Because of unanticipated system engineering issues associated with the scale-up to a lexicon of the size required for MUC-3, we were unable to run the parser/analyzer on the TST2 message set . The results therefore reflec t just the output from the profiler. Since this was set up to provide input to the parser (rather than to perform the best template fill possible), our templates have only the incident-type slot filled . We could have used profiler output to fil l out some of the other slots to some degree of confidence less than could be obtained through a finer-grained analysis , but we did not do this because our strategy was to have the parser fill out these slots . This strategy might change in th e future, depending on the robustness of the parser in analyzing MUC-3 texts .",
"cite_spans": [],
"ref_spans": [
{
"start": 810,
"end": 1001,
"text": "Table 1 shows the official Total Slot Scores for TST2-MUC3 . SLOT POS ACT COR PAR INC ICR IPA SPU MIS NON REC PRE OVG FA L template-id 118 221 97 0 0 0 0 124 21 17 82 44 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Profiler (RUBRIC)",
"sec_num": null
},
{
"text": "The high overgeneration of templates based on profiler output was expected, given the processing strateg y explained above . Most of this overgeneration will be eliminated with the addition of the parser, assuming it can analyz e the sentences given to it as input by the controller .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUC-3 RESULTS AND ANALYSI S",
"sec_num": null
},
{
"text": "Our analysis of failures to correctly identify templates is summarized in Table 2 . The numbers in this table do not always sum to the total for a row because a single failure may have multiple causes .",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "MUC-3 RESULTS AND ANALYSI S",
"sec_num": null
},
{
"text": "Failures appear to reduce to three types. First, we could have gained a few more points in both precision an d recall of the incident-type slot by correcting some template mappings . Although we did not realize this when we were scoring our results, the overgeneration resulted in some faulty mappings of optional templates to incorrect overgenerated templates . Second, the addition of the parser should, in theory, fix most of the incorrect and partially correct responses . The parser should, for example, fix some of the partially correct responses by resolving attacks into bombings or arso n by recognizing the incident-instrument relation, resolving attacks into murders by recognizing that death was cause d by the event, and resolving murders into death threats by recognizing the speech act-event relation . Most of the overgenerated templates that resulted in incorrect mappings would also have been eliminated by recognizing that the description is too vague or the event does not meet one of the relevance criteria . We note, however, that the parser will not eliminate all of the overgeneration until we have developed appropriate modules for reference resolution and discourse structure analysis .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUC-3 RESULTS AND ANALYSI S",
"sec_num": null
},
{
"text": "A third type of failure, inadequacies of the profiler knowledge base, contributed primarily to missing templates . The types of profiler knowledge base faults are summarized in Table 3 . Most of the suggested solutions to these failures will improve recall while reducing the precision of this portion of the system . By increasing the profiler recall at the expense of precision, we add to the number of sentences tha t must be analyzed by the parser . RUBRIC is designed so that we can easily experiment with this trade-off, but we di d not do this because we expected that improvements to the parser at this stage in its development would achieve higher payoffs in our MUC-3 score than experiments of this sort .",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Type of Failure",
"sec_num": null
},
{
"text": "Two failure types, the misspelled keyword and the unusual term, point out what might be considered a fla w in the filter-before-parsing approach . One could, of course, apply the reverse of spelling correction to all words not found in the parser's lexicon to see if we can make them match keywords, or one could apply a spelling-mess-up program to the profiler keywords to catch potentially misspelled keywords in the text, but these are likely to be high-cost , low-payoff solutions . Another solution might be to parse all sentences with unrecognized words. This approach might have a higher payoff because it is likely to uncover sentences that have lists of perpetrators or victims without mentio n of any event keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of Failure",
"sec_num": null
},
{
"text": "Detecting events described with unusual terms is a problem for all keyword-based concept detection techniques, especially if the knowledge bases are developed automatically through statistical techniques . However, w e have an idea for developing concept knowledge bases based on the structure of the parser's semantic network and lexicon that may result in a more complete profiler knowledge base than would be feasible using only statistical techniques .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of Failure",
"sec_num": null
},
{
"text": "Effort expended for MUC-3 involved : (1) Development of profiler rules, (2) Development of grammar, lexicon and semantics for the parser/analyzer, (3) System integration and testing, and (4) General administration . Approximate staff-hours of effort expended on these tasks over the period of the MUC-3 evaluation cycle (i .e., December 1990 through May 1991) is shown in Table 4 separated by domain independent activities and MUC-3 specific activities . Other than the administrative cost of hosting the MUC-3 interim conference, the largest MUC-3 specific task s were developing the back-end procedures for extracting template fillers from the parser and profiler results . We hav e ideas for reducing or eliminating these application-specific tasks, which we hope to implement in the next year . Other domain-specific knowledge engineering tasks involved relatively minor additions to the lexicon and semantic network .",
"cite_spans": [],
"ref_spans": [
{
"start": 372,
"end": 379,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "MUC-3 PREPARATION",
"sec_num": null
},
{
"text": "The bulk of effort for MUC-3 went into expanding the capacity of CAUCUS for lexical processing . Unde r system engineering, we added a lexical analyzer and a facility for storing and accessing compiled lexical entries in dis k files, as well as a few tools for partially automating lexical acquisition. As can be seen in Table 5 , our core lexicon grew from a few hundred entries to about 10,000 . Our grammar grew by about 20%, adding relative clauses and conjunction of clauses and adjectival phrases . We also added a few semantic net nodes to handle some new types of violence and some new speech acts. Before Domain Independent MUC-3 Specific MUC-3 Profiler rules 0 0 62 Controller rules 0 0 1 Grammar rules 108 20 0 Words -500 -9000 -100 0 Semantic net nodes -1100 -50 -50 ",
"cite_spans": [],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 607,
"end": 744,
"text": "Before Domain Independent MUC-3 Specific MUC-3 Profiler rules 0 0 62 Controller rules 0 0 1 Grammar rules 108 20 0 Words",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "CODEX was designed to scale up to large realistic problems such as MUC-3 . The C/Lisp implementatio n used in MUC-3 was designed to facilitate knowledge engineering and experimentation with parameters and algorithm s that will at least partially automate adaptation to new domains and applications. The Profiler component of the syste m uses the relatively mature RUBRIC technology, but CAUCUS, the Parser/Analyzer component of the system is stil l in its infancy . Prior to MUC-3, we had implemented CAUCUS' Generalized Composition Grammar, grammar compiler, chart parser with prioritized agenda, and a semantic network and lexicon that covered several small domains tha t we have worked in the past. During MUC-3, most of our effort was expended in upgrading CAUCUS' facilities fo r processing lexemes and adding roughly 10,000 lexical entries . Although we were not, in the end, able to get this facilit y up in time to test the parser on MUC-3 TST2, all of this work will be generally useful to future applications . For ADS then, MUC-3 served as a catalyst in the development of generally applicable language processing facilities for messag e data extraction applications .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LESSONS LEARNED",
"sec_num": null
},
{
"text": "Though the timing was not quite right for us, our analysis of MUC-3 results shows that this type of testing i s useful for assessing the capabilities and weaknesses of a message understanding system as a whole and for showin g where future efforts will achieve the highest payoff in improving the system's performance . ADS' results validate our CODEX approach to the degree that it was implemented for MUC-3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LESSONS LEARNED",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "CODEX -Status for MUC3 Phase 2",
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>KBs for</td><td>KBs for.</td></tr><tr><td>\u2022 Murder \u2022 Bombing</td><td>\u2022 Extensive grammar (but still has some missing elements that would have affected TST2 results )</td></tr><tr><td colspan=\"2\">\u2022 Kidnapping \u2022 Hijacking good TST2 results ) \u2022 Core semantic network (probably adequate for \u2022 Arson ~ -10,000 words, most with limited semantic s</td></tr><tr><td>\u2022 Robbery \u2022</td><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "Attack Disabled for TST2 because of unanticipated problems in engineering the scale-up to a large lexicon .",
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Failure Analysis on Incorrectly Identified Template s",
"num": null
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "MUC-3 Effort by Category",
"num": null
},
"TABREF7": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Number of Knowledge Base Structures for MUC-3",
"num": null
}
}
}
}