{ "paper_id": "M98-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:16:06.830228Z" }, "title": "MUC-7 EVALUATION OF IE TECHNOLOGY: Overview of Results Elaine Marsh (NRL) Dennis Perzanowski (NRL)", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "M98-1002", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Approximately 158,000 articles -Training and test sets retrieved from corpus using Managing Gigabytes text retrieval system using domain relevant terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "-2 sets of 100 articles (aircraft accident domain)preliminary training, including dryrun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "-2 sets of 100 articles selected balanced for relevancy, type and source for formal run (launch event domain).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Training and Data Sets (con't)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Training Set Training keys for NE, TE, TR available from preliminary set of 100 articles; CO from preliminary training set of 30 articles. Formal training set of 100 articles and answer keys for ST task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Test Set 100 Articles (and answer keys) for NE (Formal Training set) 100 articles (and answer keys) for TE, TR, ST Subset of 30 articles (and answer keys) for CO task. Named Entity (NE) (con't)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u2022 Non-markables -Artifacts (Wall Street Journal, MTV) -Common nouns used in anaphoric reference (the plane, the company,) -Names of groups of people and laws named after people (Republicans, Gramm-Rudman amendment, the Nobel prize) -Adjectival forms of location names (American, Japanese) -Miscellaneous uses of numbers which are not specifically currency or percentages (1 1/2 points, 1.5 times) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Template Relations Task (TR)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall", "sec_num": null }, { "text": "\u2022 New task for MUC-7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall", "sec_num": null }, { "text": "\u2022 TRs express domain-independent relationships between entities, as compared with TEs which identify entities themselves. \u2022 TR uses LOCATION_OF, EMPLOYEE_OF, and PRODUCT_OF relations. \u2022 Answer key contains entities for all organizations, persons, and artifacts that enter into these relations, whether relevant to scenario or not. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recall", "sec_num": null } ], "back_matter": [], "bib_entries": {}, "ref_entries": { "TABREF0": { "html": null, "type_str": "table", "text": "Named Entity Task [NE]: Insert SGML tags into the text to mark each string that represents a person, organization, or location name, or a date or time stamp, or a currency or percentage figure \u2022 Multi-lingual Entity Task [MET]: NE task for Chinese and Japanese \u2022 Template Element Task [TE]: Extract basic information related to organization, person, and artifact entities, drawing evidence from anywhere in the text IE Evaluation Tasks \u2022 Template Relation Task [TR]: Extract relational information on employee_of, manufacture_of, and location_of relations \u2022 Scenario Template Task [ST]: Extract prespecified event information and relate the event information to particular organization, person, or artifact entities involved in the event. \u2022 Coreference Task [CO]: Capture information on coreferring expressions: all mentions of a given entity, including those tagged in NE, TE tasks", "content": "
IE Evaluation Tasks
\u2022 Corpus / Training and Data Sets: New York Times News Service (supplied by the Linguistic Data Consortium)
\u2022 Evaluation Epoch: January 1 - September 11
Evaluation Participation by Task
\u2022 Named Entity: FACILE; IsoQuest; MITRE; National Taiwan University; National University of Singapore; New York University; Oki Electric; University of Durham; University of Edinburgh and Thomson; University of Manitoba; University of Sheffield
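For orientation, the NE task defined above asks systems to insert ENAMEX, TIMEX, and NUMEX SGML tags whose TYPE subtypes (organization, person, location; date, time; money, percent) are the ones scored in the subcategory tables later in this record. The sketch below, written as a Python string, is an invented example of such markup; the sentence is not drawn from the evaluation corpus (only the launch site Xichang comes from the walkthrough discussion).

# Illustrative only: the kind of SGML markup the NE task asks systems to insert.
# The sentence, organization name, date, and amount are invented examples.
tagged = ('<ENAMEX TYPE="ORGANIZATION">Example Aerospace Co.</ENAMEX> launched the '
          'payload from <ENAMEX TYPE="LOCATION">Xichang</ENAMEX> on '
          '<TIMEX TYPE="DATE">June 4, 1997</TIMEX> at a cost of '
          '<NUMEX TYPE="MONEY">$30 million</NUMEX>.')
print(tagged)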
", "num": null }, "TABREF2": { "html": null, "type_str": "table", "text": "", "content": "
\u2022 Caveats: \"newspaper\" style, domain bias toward ST topic NE Overall F-Measures MUC F-Measure Error Recall Precision 93.39 11 92 95 91.60 14 90 93 90.44 15 89 92 88.80 18 85 93 86.37 22 85 87 85.83 22 83 89 85.31 23 85 86 84.05 26 77 92 83.70 26 79 89 82.61 29 74 93 81.91 28 78 87 77.74 33 76 80 76.43 34 75 78 69.67 44 66 73 Annotators: 97.60 4 98 98 96.95 5 96 98 NE Overall F-Measures MUC 6 F-measure Error Recall Precision 96.42 5 96 97 95.66 7 95 96 94.92 8 93 96 94.00 10 92 96 93.65 10 94 93 93.33 11 92 95 92.88 10 94 92 92.74 12 92 93 92.61 12 89 96 91.20 13 91 91 90.84 14 91 91 89.06 18 84 94 88.19 19 86 90 85.82 20 85 87 85.73 23 80 92 84.95 22 82 89 Annotators: 96.68 6 95 98 93.18 11 92 95 NE Scores by Document Section (ERR) sorted by F-Measure MUC 7 F-Measure Slug Date Preamble Text 93.39 14 0 7 13 91.60 28 0 9 15 90.44 24 0 11 16 88.80 54 0 16 19 86.37 34 0 19 23 85.83 28 0 18 24 85.31 45 0 25 24 84.05 33 0 31 27 83.70 39 0 23 28 82.61 32 0 27 27 81.91 49 0 24 30 77.74 100 0 44 32 76.43 51 0 34 36 69.67 93 0 50 44 Annotators: 97.60 3 0 2 4 96.95 2 9 2 6 NE Scores by Document Section (ERR) sorted by F-Measure MUC F-Measure Doc Date Dateline Headline Text 96.42 0 0 8 5 95.66 0 0 7 7 94.92 0 0 8 8 94 0 0 20 9 93.65 0 2 16 10 93.33 0 4 38 9 92.88 0 0 18 10 92.74 0 0 22 11 92.61 100 0 18 9 91.2 0 0 30 13 90.84 3 11 19 14 89.06 3 4 28 18 88.19 0 0 22 20 85.82 0 6 18 21 85.73 0 44 53 21 84.95 0 0 50 21 Annotator: 96.68 0 0 7 6 NE Subcategory Scores (ERR) sorted by F-measure MUC enamex timex numex F-measure org per loc date time money percent 93.39 13 5 10 12 21 8 91.60 21 7 10 12 19 11 90.44 22 8 11 14 21 19 88.80 25 12 16 15 22 23 86.37 21 22 26 18 18 15 85.83 27 19 24 16 20 20 85.31 29 16 26 14 23 21 84.05 44 22 17 14 19 10 83.70 33 22 27 18 19 15 82.61 25 10 12 58 100 17 81.91 38 19 31 19 17 21 77.74 40 24 32 27 27 26 76.43 47 32 35 21 22 17 69.67 60 47 44 26 22 25 Annotators: 97.60 3 1 1 5 5 96.95 5 1 3 8 21 96.68 6 1 4 8 * 0 0 8 Annotator: 1 NE Subcategory Scores (ERR) sorted by F-measure MUC enamex timex F-measure org per loc date time money percent 96.42 10 2 6 3 * 0 0 95.66 11 3 9 7 * 1 0 94.92 16 3 7 3 * 0 0 94.00 16 3 15 9 * 3 0 93.65 13 4 8 8 * 8 32 93.33 16 6 12 9 * 4 6 92.88 15 4 13 8 * 8 32 92.74 16 4 9 16 * 2 0 92.61 14 4 5 43 * 1 0 91.20 18 9 19 8 * 6 36 90.84 16 10 29 12 * 6 89.06 22 17 18 10 * 3 0 88.19 29 7 20 17 * 11 36 85.82 29 9 16 13 * 6 32 85.73 26 14 29 18 * 9 40 84.95 45 4 31 10 * 4 32 MUC enamex 69% MUC 6 enamex 82% MUC 7 person 22% MUC MUC 7 69% MUC 6 82% MUC 7 date 85% MUC 7 0 10 20 30 40 50 60 70 80 90 100 Recall MUC 6 0 10 20 30 40 50 60 70 80 90 100 Recall Xichang; Long March as TIMEX, ENAMEX -One site missed only one entity in whole document within six months MUC 0 10 20 30 40 50 60 70 80 90 100 Recall MUC 7 MUC 7 MUC 7 MUC 7 Recall MUC 6 Recall 36% 57% 0 10 20 30 40 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 country org 69% money money -Common mistakes on ENAMEX: missed Globo, MURDOCH, entity water 0 timex 25% numex 6% enamex timex numex timex 10% numex 8% enamex timex numex org 46% loc 32% org person loc location 12% person 39% organization 49% organization person location percent 31% money percent percent 18% money percent timex 15% date timex \u2022 Number of tags in answer key: -Enamex -14 Timex \u2022 System scoring: -Common mistakes on TIMEX: missed early Thursday unk morning, within six months 31% location person region 10% entity 30% org province province artifact 13% country 31% city city airport -1 Numex location artifact person 
region 13% airport 3% 5% 2% unk water numex Distribution of NE tag elements Distribution of NE tag elements Distribution of NE ENAMEX tag elements Distribution of NE tag elements ENAMEX tag elements Distribution of NE NUMEX tag elements Distribution of NE NUMEX tag elements Distribution of NE TIMEX tag elements NE Results Overall NE Overall Results NE Results on Walkthrough NE Results on Walkthrough INFORMATION EXTRACTION: TEMPLATE ELEMENT (TE) TE Objects TE ENT_TYPE Distribution TE Results Overall TE Overall Results TE Results for TE Objects
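A note on how the score columns above relate: the F-Measure is the harmonic mean of the Recall and Precision columns. A minimal sketch, assuming the uniformly weighted version (recall and precision weighted equally):

# Minimal sketch of the uniformly weighted F-measure used to rank systems here.
def f_measure(recall, precision, beta=1.0):
    # beta = 1 weights recall and precision equally.
    if recall == 0 and precision == 0:
        return 0.0
    return (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)

# Example: recall 92 and precision 95 give roughly F = 93.5; the tables above
# compute F from unrounded scores, so the printed F can differ slightly.
print(round(f_measure(92, 95), 2))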
", "num": null }, "TABREF3": { "html": null, "type_str": "table", "text": "", "content": "
TR Results Overall / TR Results by Relation (location_of, product_of, employee_of) / TR Error Scores (location_of, product_of, employee_of; BEST vs. AVG) / TR Results on Walkthrough: recall-precision plots.

Scenario Template (ST)
\u2022 STs express domain- and task-specific entities and relations; similar to the MUC-6 template.
\u2022 ST tests portability to a new extraction problem, with a short time frame for system preparation (1 month).
\u2022 Scenario concerns vehicle launch events.
 - Template consists of one high-level event object (LAUNCH_EVENT) with 7 slots, including 2 relational objects (VEHICLE_INFO, PAYLOAD_INFO), 3 set fills (MISSION_TYPE, MISSION_FUNCTION, MISSION_STATUS), and 2 pointers to low-level objects (LAUNCH_SITE, LAUNCH_DATE).
Scenario Template (con't)
 - Relational objects have pointers to Template Elements and set fills.
 - Set fills require inferences from the text.
\u2022 Test set statistics: 63/100 documents relevant to the scenario.
\u2022 Systems scored points lower (F-measure) on ST than on TE.
\u2022 Interannotator variability (measured on all articles) was between 85.15 and 96.64 on the F-measures.
Distribution of LOCATION subtypes (pie charts): country 44%, city 21%, airport 10%, unk 10%, province 9%, region 4%, water 2%; MUC-7: city 31%, region 13%, province 10%, unk 5%, airport 3%, water 2%, country (remaining share).
ST Results Overall / ST Overall Results: recall-precision plots for annotators and systems.
ST Results for Text Filtering (MUC-7)
\u2022 F-measures for annotators: 98.13, 91.40; ERR for annotators: 4%, 14%
\u2022 F-measures for systems (all-1): 35.60-41.18; ERR for systems (all-1): 56-75%
ST Results on Walkthrough: recall-precision plot.

Coreference Task (CO)
\u2022 Capture information on coreferring expressions: all mentions of a given entity, including those tagged in the NE and TE tasks.
\u2022 Focused on the IDENTITY (IDENT) relation: a symmetrical and transitive relation; equivalence classes are used for scoring.
\u2022 Markables: nouns, noun phrases, pronouns.
CO Results Overall / CO Overall Results: recall-precision plots.
\u2022 ERR for annotators: 8%
\u2022 ERR for systems: 30-89%
CO Results for Walkthrough / CO Results on Walkthrough
\u2022 Walkthrough article is not relevant for ST.
\u2022 F-measures range from 23.2-62.3%.
\u2022 Missing:
 - Dates: Thursday, Sept. 10
 - Money: $30 Million
 - Unusual conjunctions: GM, GE PROJECTS
 - Miscellaneous: NEEDED AIRWAVES; satellite downlinks; transmissions from satellites to earth stations; US satellite industry, federal regulators; FCC's allocation\u2026
 - MUC-6 walkthrough examples: Thursday's meeting, agency's meeting
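As a reading aid for the CO scores above: because IDENT is symmetrical and transitive, the key and the response each partition mentions into equivalence classes (coreference chains), and recall and precision can be computed over the links those classes imply. The sketch below illustrates link-based equivalence-class scoring in that spirit; it is an illustration, not the official MUC scorer, and the mention IDs in the example are invented.

# Illustrative link-based scoring over coreference equivalence classes.
def recovered_links(gold_chains, test_chains):
    # For each gold chain of size n (n-1 links), subtract one link per extra
    # partition cell induced by the test chains; mentions absent from the test
    # side count as singleton cells.
    found, total = 0, 0
    for chain in gold_chains:
        cells = set()
        for mention in chain:
            owner = next((i for i, t in enumerate(test_chains) if mention in t), None)
            cells.add(("chain", owner) if owner is not None else ("singleton", mention))
        found += len(chain) - len(cells)
        total += len(chain) - 1
    return found, total

def co_scores(key_chains, response_chains):
    rf, rt = recovered_links(key_chains, response_chains)   # recall direction
    pf, pt = recovered_links(response_chains, key_chains)   # precision: roles swapped
    recall = rf / rt if rt else 0.0
    precision = pf / pt if pt else 0.0
    f = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f

# Invented mention IDs: the key has one 3-mention chain; the response splits it
# into two chains, so 1 of the 2 key links is recovered (recall 0.5, precision 1.0).
print(co_scores([{"m1", "m2", "m3"}], [{"m1", "m2"}, {"m3"}]))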
", "num": null } } } }