{
"paper_id": "M98-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:16:08.636337Z"
},
"title": "DESCRIPTION OF LOCKHEED MARTIN'S NLTOOLSET AS APPLIED TO MUC-7 (AATM7)",
"authors": [
{
"first": "Deborah",
"middle": [],
"last": "Brady",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lois Childs David Cassel Bob Magee Norris Heintzelman Dr. Carl Weir",
"location": {}
},
"email": ""
},
{
"first": "Lockheed",
"middle": [],
"last": "Martin",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "BACKGROUND The NLToolset has been used to build a variety of information extraction applications, ranging from military message traffic to newswire accounts of corporate activity. AATM7 is an acronym for As Applied To MUC-7. AATM7 was not tailored specifically for MUC-7, but rather represents the NLToolset in a state of flux, as TIPSTER experimentation and the delivery of a real-world application were taking place, simultaneously. This contrast in domains proved beneficial for our real-world applications, perhaps to the detriment of the MUC-7 system, which had to compete for developers. NLToolset applications are delivered under the Windows NT, as well as the UNIX Solaris operating system. TEMPLATE ELEMENT TASK AATM7 was applied to the MUC-7 Template Element task in order to test some theories of coreference that were being investigated under the TIPSTER III research activity. The Template Element task requires an automatic system to build templates for every person, organization, and artifact entity, as well as every location. Entities The Entities are defined as follows: An organization object consists of: organization's name and aliases found in the text,",
"pdf_parse": {
"paper_id": "M98-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "BACKGROUND The NLToolset has been used to build a variety of information extraction applications, ranging from military message traffic to newswire accounts of corporate activity. AATM7 is an acronym for As Applied To MUC-7. AATM7 was not tailored specifically for MUC-7, but rather represents the NLToolset in a state of flux, as TIPSTER experimentation and the delivery of a real-world application were taking place, simultaneously. This contrast in domains proved beneficial for our real-world applications, perhaps to the detriment of the MUC-7 system, which had to compete for developers. NLToolset applications are delivered under the Windows NT, as well as the UNIX Solaris operating system. TEMPLATE ELEMENT TASK AATM7 was applied to the MUC-7 Template Element task in order to test some theories of coreference that were being investigated under the TIPSTER III research activity. The Template Element task requires an automatic system to build templates for every person, organization, and artifact entity, as well as every location. Entities The Entities are defined as follows: An organization object consists of: organization's name and aliases found in the text,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "To perform this task perfectly, an automatic system must link all references to the same entity within a text, and collect those references, whether they be names or descriptive noun phrases. The entire list of unique names for an entity is placed in the \"NAME\" slot. Of the descriptors, the system must pick one of those found, and put it in the \"DESCRIPTOR\" slot, as long as it is not \"insubstantial\" according to the fill rules, e.g. \"the company\" or \"Dr.\" Pronouns are also excluded from the entity object. Additionally, the system must decide to what category the entity belongs, either through its knowledge base or the surrounding context, e.g. \"Gen. Smith\" vs. \"Ms. Smith\" as PER_MIL vs. PER_CIV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
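As an illustration of the Template Element object described above, the following is a minimal sketch of an entity record with NAME, DESCRIPTOR, and category slots. It is not the NLToolset's internal representation; the insubstantial-fill list and the longest-phrase rule for choosing the single DESCRIPTOR fill are assumptions made only for this example.

```python
# Minimal sketch of a Template Element entity record (not the NLToolset's
# internal representation). The single-descriptor rule and the exclusion of
# insubstantial fills follow the MUC-7 task description above; the
# longest-phrase heuristic and the INSUBSTANTIAL list are illustrative only.

INSUBSTANTIAL = {"the company", "dr.", "mr.", "ms."}   # assumed sample of "insubstantial" fills

class TemplateElement:
    def __init__(self, ent_type, category):
        self.ent_type = ent_type        # "PERSON", "ORGANIZATION", or "ARTIFACT"
        self.category = category        # e.g. "PER_MIL", "PER_CIV", "ORG_CO"
        self.names = []                 # every unique name or alias found in the text
        self.descriptors = []           # descriptive noun phrases linked to this entity

    def add_name(self, name):
        if name not in self.names:
            self.names.append(name)

    def add_descriptor(self, phrase):
        # pronouns and insubstantial fills are excluded by the fill rules
        if phrase.lower() not in INSUBSTANTIAL:
            self.descriptors.append(phrase)

    def best_descriptor(self):
        # only one DESCRIPTOR fill is allowed; prefer the longest phrase found
        return max(self.descriptors, key=len, default=None)
```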
{
"text": "The limitation to one descriptor can have the effect of hiding how well the coreference resolution has performed, since a system may have found all descriptive phrases, plus one incorrect descriptor, and chosen the incorrect descriptor, thus getting a score of incorrect for the entire slot. Lockheed Martin is planning to test a multiple-descriptor version of MUC-7, in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Of the three entity types, those of \"PERSON\" and \"ORGANIZATION\" are the most similar, since language is used in similar ways to describe them. They both can be named, where the \"name\" is an identity which, within the context of a story, is usually unique. The artifact, which in MUC terms can be a land, air, sea, or space vehicle, is sometimes named, but often the tag which is considered the name is merely a type. For example, a story that tells about three different F-14 crashes may, according to MUC rules, produce three different entities named \"F-14\", whose only difference would be found in information not captured by the TE object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Locations are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
{
"text": "A location object consists of: locale found in the text, the country where the locale exists, and the locale type: CITY, PROVINCE, COUNTRY, REGION, AIRPORT, or UNK.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
{
"text": "The location object's locale slot is filled with the most specific reference to a location. For example, if the location were \"Philadelphia, PA,\" the locale slot would be filled with \"Philadelphia.\" The country would be \"United States\" and the locale type would be \"CITY.\" The deficiency of this design is obvious; it fails to differentiate between the actual location and any other city named \"Philadelphia\" in the nation. An alternative design, which has been used for other NLToolset applications, contains a locale slot which holds the entire phrase describing the locale. Some examples are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
{
"text": "\"at the checkpoint on Route 30\" \"southwest of Miami\" \"Wilmington, Delaware\" Additionally, the location object contains slots for whatever other information can be gleaned from the text or from on-line resources, such as a gazetteer. This includes slots for city, country, province, latitute/longitude, region, or water.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
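As a sketch of the alternative location object just described, the record might look like the following. The slot names mirror the prose; this is not the NLToolset's actual schema.

```python
# Sketch of the alternative location object described above; slot names follow
# the prose and are not the NLToolset's actual schema.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LocationObject:
    locale: str                              # full locale phrase, e.g. "southwest of Miami"
    locale_type: str = "UNK"                 # CITY, PROVINCE, COUNTRY, REGION, AIRPORT, or UNK
    city: Optional[str] = None
    province: Optional[str] = None
    country: Optional[str] = None
    region: Optional[str] = None
    water: Optional[str] = None
    lat_long: Optional[Tuple[float, float]] = None   # filled from a gazetteer when available

loc = LocationObject(locale="Wilmington, Delaware", locale_type="CITY",
                     city="Wilmington", province="Delaware", country="United States")
```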
{
"text": "AATM7 was developed with a focus on the investigation of a number of techniques involved in coreference resolution. Coreference Resolution can be thought of as the identification and linking of all references to a particular entity. References may be in the form of names, pronouns, or noun phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TIPSTER Research",
"sec_num": null
},
{
"text": "Syntax is frequently used by an author to associate a descriptive phrase with an entity. This can be seen in the following examples: APPOSITIVE: \"Lockheed Martin, an aerospace firm,\" PRENOMIAL: \"the aerospace firm, Lockheed Martin\" NAME-MODIFIED HEAD NOUN: \"the Lockheed Martin aerospace firm\" PREDICATIVE NOMINATIVE: \"Lockheed Martin is an aerospace firm\" When an entity is referred to only by a descriptive phrase, finding its true identity is very challenging. The following sentence \"The president has announced that he will resign.\" has varying degrees of import, depending on its preceding sentence\u2026 \"Coca Cola Company today revealed the future plans of its president, James Murphy.\" \"Impeachment hearings were scheduled to begin today against President Clinton.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TIPSTER Research",
"sec_num": null
},
{
"text": "An automatic system can use the information closely related by syntax to the entity, in this case the title \"President\" or the prenominal \"its president\", to identify the entity referred to by \"the president.\" This is the heart of our current research. Our aim is to find all descriptive information closely related by syntax and to build a story-specific ontology for each entity so that far-flung references that depend on this semantic information can be identified.As part of this research, the Template Element development keys were analyzed to determine how often the descriptors of an organization and person are directly associated by syntax. A surprisingly large number of descriptive phrases within the keys can be directly associated to an entity by way of syntax. Of a total of approximately 900 descriptors, 125 were organization descriptors, and 775, person descriptors ---a disproportionate number, since there are actually more organization entities (985) than person entities (802) in the keys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TIPSTER Research",
"sec_num": null
},
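The resolution step described above can be pictured with a small, hypothetical sketch: descriptors gathered through syntax are attached to each entity, and a bare description such as "the president" is matched against those accumulated descriptors. The head-noun match and most-recent-entity preference are assumptions for the illustration, not the paper's algorithm.

```python
# Hypothetical sketch of resolving a bare definite description against the
# descriptors collected for each entity through syntactic association.
# Matching rule (head-noun overlap, most recent entity wins) is assumed.

def resolve_description(description_head, entities):
    """entities: ordered list of dicts like {"name": ..., "descriptors": [...]}."""
    for entity in reversed(entities):                    # prefer the most recently mentioned entity
        heads = {d.split()[-1].lower() for d in entity["descriptors"]}
        if description_head.lower() in heads:
            return entity
    return None

entities = [
    {"name": "Coca Cola Company", "descriptors": []},
    {"name": "James Murphy", "descriptors": ["its president"]},
]
match = resolve_description("president", entities)
print(match["name"] if match else "unresolved")          # -> James Murphy
```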
{
"text": "The following table shows the breakdown by category and entity type. \"Association by Context\" refers to descriptors that have been found in titles, prenominal phrases, appositives, and predicate nominatives. \"Association by Reference\" refers to a remote reference which refers to a named entity. \"Un-named\" refers to entities described by noun phrases alone, e.g. \"a local bank.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TIPSTER Research",
"sec_num": null
},
{
"text": "Association by Context 548 (71%) 33 26%Association by Reference 103 (13%) 53 42%Un-named 119 (15%) 38 (30%) This data supports the hypothesis that much reliable descriptive information can be obtained through syntactic association. This descriptive information can be associated with the entity object and then be used to help resolve associations by reference, in a manner similar to that used for organizations in the Lockheed Martin MUC-6 system, LOUELLA. This is the idea of a semantic filter, which was used to compare descriptive phrases with the semantic content of organization names, as in the following example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Person Organization",
"sec_num": null
},
{
"text": "\"Buster Brown Shoes\" => (buster brown shoes shoe footwear)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Person Organization",
"sec_num": null
},
{
"text": "\"the footwear maker\" => (footwear maker make manufacturer)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Person Organization",
"sec_num": null
},
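A minimal sketch of the semantic filter, using the two expansions above, might simply test for overlap between the expanded term sets. The expansions are hand-supplied as in the paper's example, and the overlap test is an assumption rather than LOUELLA's actual mechanism.

```python
# Illustrative sketch of the semantic filter idea (not LOUELLA's actual code).
# Two references are considered compatible when their term expansions overlap.

def semantic_filter(expansion_a, expansion_b):
    return bool(set(expansion_a) & set(expansion_b))

name_expansion = ["buster", "brown", "shoes", "shoe", "footwear"]      # "Buster Brown Shoes"
descriptor_expansion = ["footwear", "maker", "make", "manufacturer"]   # "the footwear maker"

print(semantic_filter(name_expansion, descriptor_expansion))           # True -> likely coreferent
```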
{
"text": "Since person names rarely include semantic content, we must rely on other descriptive information to build the semantics, either through world knowledge stored in the system's knowledge base or through associations found in the text itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Person Organization",
"sec_num": null
},
{
"text": "As part of Lockheed Martin's TIPSTER research, the freeware Brill part-of-speech tagger was connected to the NLToolset to see if it could help streamline the process of building patterns to find descriptors. Since standard NLToolset processing provides all possible parts of speech for each token, a part-of-speech tagger was introduced to see if it could simplify the process of pattern writing. It was found that a package for finding and correctly linking the majority of person descriptors could be written in about a week by incorporating the information that Brill provides with that provided by the NLToolset, i.e. symbol name, semantic category, and possible parts of speech as found in the NLToolset's lexicon. The contrast between the descriptor scores for persons and organizations in the test set is striking. Finding artifacts and linking up all references to the same entity has proved especially challenging because of the unusual way that artifacts are described in text, and the way that the descriptions are categorized for MUC-7. For instance, \"Boeing 747\" and \"F-14\" are considered names, whereas \"TWA Flight 800\" is considered a descriptor. Under the TIPSTER research, a new algorithm was developed to find vehicles and resolve coreferences. The algorithm differs from that for organizations and people in that a match is assumed to belong to the most recently seen entity, unless there is some information to contradict this assumption. The possible types of contradictory information are: model information, manufacturer, military branch, airline, and flight number. Further, if the comparison reveals that one entity has military information and the other has airline information, there is a contradiction. Further, the variable-binding feature of the NLToolset's pattern matching allows the developer to extract type information while finding the entities in the text. This type information helps the system to distinguish between entities during coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category Person Organization",
"sec_num": null
},
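A sketch of the artifact coreference rule just described follows. The slot names come from the list of contradictory information above; the merging behavior and dictionary representation are assumptions for illustration, not the NLToolset's implementation.

```python
# Sketch of the artifact coreference rule: merge a new mention with the most
# recently seen artifact unless the two carry contradictory information.
# Slot names follow the prose; the comparison and merge details are assumed.

CONTRADICTION_SLOTS = ["model", "manufacturer", "military_branch", "airline", "flight_number"]

def contradicts(candidate, mention):
    # explicit slot mismatches are contradictions
    for slot in CONTRADICTION_SLOTS:
        a, b = candidate.get(slot), mention.get(slot)
        if a and b and a != b:
            return True
    # military information on one side and airline information on the other also contradict
    if (candidate.get("military_branch") and mention.get("airline")) or \
       (mention.get("military_branch") and candidate.get("airline")):
        return True
    return False

def resolve_artifact(mention, artifacts):
    # assume the mention belongs to the most recently seen artifact unless contradicted
    for candidate in reversed(artifacts):
        if not contradicts(candidate, mention):
            candidate.update({k: v for k, v in mention.items() if v})
            return candidate
    artifacts.append(mention)            # no compatible entity found: start a new artifact
    return mention

artifacts = []
resolve_artifact({"model": "F-14", "military_branch": "Navy"}, artifacts)
merged = resolve_artifact({"model": "F-14"}, artifacts)    # merges with the Navy F-14
```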
{
"text": "Overall, AATM7's scores for MUC-7 are good. There are a few errors, as well as some quirks of the MUC-7 domain, that will be discussed which significantly effected the scores for entity names and locations. The artifact scores are significantly below the NLToolset's usual performance; this is due to the newness of this entity, particularly of the space vehicle artifacts. This capability is still a work in progress, as the need arises for our real-world applications . Since the TE task spans four separate subtasks with very different characteristics, an analysis was done on each. The formal run keys were split into four sets: organization, person, artifact, and location keys. The formal run was then also split into organization, person, artifact, and location responses. Each set was then respectively scored with SAIC's version 3.3 of the MUC scoring program. The results are described below. This scoring method removes the mapping ambiguity between entities of different types and allows an accurate analysis of the performance of each individual entity type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS ANALYSIS",
"sec_num": null
},
{
"text": "AATM7 found 97% of the people objects, with 86% of the names correctly. The slot scores are high, even the descriptor slot, which has traditionally been at less than 50%. To improve on this performance, one problem that could very easily be resolved is an incorrect interpretation of expressions like \"(NI FRX)\" in the formal text. \"NI\" is a common first name in some languages and therefore, AATM7 interpreted all thirteen of these as person names. This error accounted for 13 of the overgenerated or incorrect person names, or the equivalent of 2 points of precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "People",
"sec_num": null
},
{
"text": "Another area for improvement is in the descriptor slot. Twenty-six of AATM7's person descriptors were marked incorrect because they contained only the head of the noun phrase and not the entire phrase, e.g. \"commander\" instead of \"Columbia's commander\" and \"manager\" instead of \"project manager.\" The descriptor rule package will be improved to better encompass the entire phrase. If these descriptors had been extracted correctly for the MUC-7 test, the descriptor recall and precision would have improved to 70 and 63, while the overall person scores would have improved to 89 recall, 79 precision, and 83.7 F-measure. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "People",
"sec_num": null
},
{
"text": "Organizations are complex entities to determine in text because organization names have a more complex structure than person names. A variation algorithm for one name may not work for another. For example, \"Hughes\" is a valid variation for \"Hughes Aerospace, Inc.\" but \"Space\" is not a valid variation for \"Space Technology Industries\". An automatic system must, therefore, look at the surrounding context of variations and filter out those that are spurious. AATM7 found 780 of the 877 organizations in the formal test corpus. Of the 780 it found, points were lost here and there for mistakes in two areas. First, current performance on organization descriptors is woefully inadequate and in sharp contrast to that on person descriptors. An effort is currently underway to improve this with the help of a part-of-speech tagger. Additionally, it was discovered that the mechanism for creating and linking variations of organization names was broken during the training period. The result of this was that 64 name variations were missed. When this problem was fixed, recall and precision for ent_name improved to 76 and 77, with the overall organization recall and precision improving to 80 and 77. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Organizations",
"sec_num": null
},
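The variation problem can be illustrated with a hypothetical sketch: taking the first token of a name works for "Hughes Aerospace, Inc." but not for "Space Technology Industries." The generic-token stoplist used here to reject spurious candidates is an assumption, not the NLToolset's actual variation mechanism.

```python
# Hypothetical illustration of the name-variation problem described above.
# A one-word short form is only proposed when its token is not a generic word;
# the stoplist is assumed for illustration.

GENERIC_TOKENS = {"space", "technology", "industries", "international", "general", "national"}

def candidate_variation(full_name):
    first = full_name.replace(",", "").split()[0]
    return None if first.lower() in GENERIC_TOKENS else first

print(candidate_variation("Hughes Aerospace, Inc."))        # "Hughes"
print(candidate_variation("Space Technology Industries"))   # None -> needs contextual evidence
```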
{
"text": "AATM7's artifact performance really suffers in the area of entity names. It missed almost half of the artifact entities purely from lack of patterns with which to recognize them. This is a sign of the immaturity of the artifact packages and can be overcome by more development. Another problem, which caused the low precision, was the result of incorrectly identifying the owner of the artifact as its name. This accounted for 38 of the spurious entity names and 2% of the precision. Since this is a new package, the coreference resolution is also not up to the NLToolset's usual performance. This is an on-going research effort. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artifacts",
"sec_num": null
},
{
"text": "The NLToolset performs well at finding and disambiguating locations. Determining the country for a given location can be complicated since many named locations exist in multiple countries. A small number of minor changes have been identified to significantly boost the score to its normal level. One of the obvious problems AATM7 had was with the airports. Eleven occurrences of Kennedy Space Center were identified as locale type \"CITY\" instead of the correct type of \"AIRPORT\". This was caused by a simple inconsistency in our location processing. Fixing this one problem, improved the airport-specific recall and precision to 57 and 67 respectively, and improved the precision overall by 1 percentage point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
{
"text": "The location recall for MUC-7 is slightly depressed because of some challenges which this particular domain presented. AATM7 was not configured to process planet names or other extra-terrestrial bodies as locations. This accounted for sixty-three missing items, at three slots per item; thirty-one of the missing were occurrences of \"earth\" alone. This is reflected in the subtask scores for region and unk. By just adding these locations to the NLToolset's knowledge base, recall and precision was improved to 82 and 83 for the location object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
{
"text": "Another quirk of the MUC-7 domain was that adjectival forms of nation names were to be extracted as location objects, if they were the only references to the nation in the text. In other words, if the text contains the phrase \"the Italian satellite\" but no other mention of Italy, a location object with the locale \"Italian\" would be extracted. This was not addressed in AATM7 and resulted in a loss of thirty-two location objects, at three slots per object. This feature could be added just for the MUC-7 test. It is unlikely that a real-world application would want this information extracted. If it is added, recall and precision for the location object rise to 86 and 84 with an overall F-measure of 85. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
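If the feature were added, a tiny sketch of the idea might map nationality adjectives to nations and emit a location object only when the nation itself is never named in the text. The adjective table here is a small hand-made sample for illustration, not part of AATM7.

```python
# Tiny sketch of the feature discussed above (not implemented in AATM7): map a
# nationality adjective to its nation and emit a location object only when the
# nation itself is never mentioned in the text.

ADJECTIVE_TO_NATION = {"italian": "Italy", "french": "France", "chinese": "China"}

def adjectival_location(adjective, text):
    nation = ADJECTIVE_TO_NATION.get(adjective.lower())
    if nation and nation.lower() not in text.lower():
        return {"locale": adjective, "country": nation, "locale_type": "COUNTRY"}
    return None

print(adjectival_location("Italian", "The Italian satellite was lost after launch."))
```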
{
"text": "Our overall score for the walkthrough message is slightly below our overall performance. location locale 19 17 16 0 1 2 0 1 84 94 11 0 6 locale_type 17 17 0 0 0 2 100 11 0 0 country 19 16 15 0 1 3 0 0 79 94 16 0 ",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 258,
"text": "location locale 19 17 16 0 1 2 0 1 84 94 11 0 6 locale_type 17 17 0 0 0 2 100 11 0 0 country 19 16 15 0 1 3 0 0 79 94 16 0",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "WALKTHROUGH MESSAGE",
"sec_num": null
},
{
"text": "AATM7 found all of the persons in the walkthrough document. Of the five person descriptors, it missed only two; it made a separate entity for one of the descriptors and found only part of the other. The other spurious person entity is really an organization (\"ING Barings\") that was mistaken for a person, due to the fact that Ing is in the firstnames list. AATM7 did confuse another organization (\"Bloomberg Business\") as a person because of the context (\"the parent of\"), but this was marked incorrect, instead of spurious, because it was mapped to the organization object in the keys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Persons",
"sec_num": null
},
{
"text": "Of the twenty-three organization entities, AATM7 found twenty-one. It missed \"International Technology Underwriters\" and \"Axa SA.\" Two other organizations were typed incorrectly as people, as has been mentioned. Five of the nine organization descriptors were found correctly. The remaining error in the organization area is the result of the breaking of the variation linking mechanism that has been mentioned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Organizations",
"sec_num": null
}
],
"back_matter": [
{
"text": "AATM7 correctly identified all three of the artifacts in the walkthrough article; however, because it overgenerated, precision for this object is a low 33%. This was due to the previously discussed mistake in which an organization that owned the satellite was incorrectly identified as the name. In fact, the organizations \"Intelsat\" and \"United States\" account for five of the six spurious artifacts. Two of the three descriptors were identified correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artifacts",
"sec_num": null
},
{
"text": "AATM7 correctly identified sixteen of the nineteen locations, but missed \"Arlington,\" \"China,\" and the \"Central\" part of \"Central America.\" This was due to overzealous context-based filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": null
},
{
"text": "A cursory analysis of AATM's MUC-7 scores revealed seven specific improvements to improve MUC-7 performance. Of these seven, five will be made in order to improve NLToolset performance. The sixth, adding extra-terrestrial bodies to the knowledge base, will be done to expand the NLToolset's reach. The seventh, making nation adjectives into locations, will not be done until a real-world application requires it.If one were to make all of the changes specified, AATM7's overall scores would be improved to:The NLToolset continues to improve, as it is applied to new problems, whether real-world application or standardized test. Its accuracy remains high and its speed is constantly improving, currently standing, in its compiled state, at under twenty seconds for an average document.For more information contact: Donna Harman Last updated: Friday, ",
"cite_spans": [
{
"start": 815,
"end": 849,
"text": "Donna Harman Last updated: Friday,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": null
}
],
"bib_entries": {},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "type slot of ORGANIZATION, one descriptor phrase, and the category of the organization: ORG_CO, ORG_GOVT, or ORG_OTHER. A person object consists of: person's name and aliases found in the text, a type slot of PERSON, one descriptor phrase, and the category of the person: PER_CIV or PER_MIL An artifact object consists of: artifact's name and aliases found in the text, a type slot of ARTIFACT, one descriptor phrase, and the category of the artifact: ART_AIR, ART_LAND, or ART_WATER.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF8": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF10": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF12": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF14": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
}
}
}
}